    AI and compliance: What are the risks?

We look at the areas of risk in artificial intelligence. Potential exposures abound, including security and privacy issues, bias, inaccuracy and the complete fabrication of results.

By Stephen Pritchard

    Published: 28 May 2025

    The rapid growth of artificial intelligence (AI), especially generative AI (GenAI) and chatbots, gives businesses a wealth of opportunities to improve the way they work with customers, drive efficiencies and speed up labour-intensive tasks.

    But GenAI has brought problems, too. These range from security flaws and privacy concerns to questions about bias, accuracy and even hallucinations, where the AI response is completely untrue.

    Understandably, this has come to the attention of lawmakers and regulators. Meanwhile, customers’ internal compliance functions have found themselves playing catch-up with a rapidly developing and complex technology.

In this article, we look at AI and the potential risks it poses to legal and regulatory compliance. Organisations' compliance teams need to take a good look under the hood at their use of GenAI to locate weaknesses and vulnerabilities, and to establish just how reliable source and output data is.

Most enterprise AI projects involve GenAI, or large language models (LLMs), working as chatbots that answer queries or provide product recommendations to customers. Searching, summarising or translating documents is another popular use case.

But AI is also in use in fraud detection, surveillance, and medical imaging and diagnosis: all areas where the stakes are much higher. And this has led to questions about how, or whether, AI should be used.

    Organisations have found AI systems can produce errors, as well as inaccurate or misleading results.

    Confidential data

    AI tools have also leaked confidential data, either directly or because employees have uploaded confidential documents to an AI tool.

    Then there is bias. The latest AI algorithms, especially in LLMs, are highly complex. This makes it difficult to understand exactly how an AI system has come to its conclusions. For an enterprise, this in turn makes it hard to explain or even justify what an AI tool, such as a chatbot, has done.

This creates a range of risks, especially for businesses in regulated industries and the public sector. Regulators are rapidly updating existing compliance frameworks to cover AI risks, on top of legislation such as the European Union’s (EU’s) AI Act.

Research by industry analyst Forrester identifies more than 20 new threats resulting from GenAI, some of which relate to security. These include a failure to use secure code to build AI systems, or malicious actors tampering with AI models. Others, such as data leakage, data tampering and a lack of data integrity, risk causing regulatory failures even when a model is secure.

    The situation is made worse by the growth of “shadow AI”, where employees use AI tools unofficially. “The most common deployments are likely to be those that enterprises aren’t even aware of,” warns James Bore, a consultant who works in security and compliance.

    “This ranges from shadow IT in departments, to individuals feeding corporate data to AI to simplify their roles. Most companies haven’t fully considered compliance around AI, and even those who have, have limited controls to prevent misuse.”

    This requires chief information officers (CIOs) and data officers to look at all the ways AI might be used across the business and put control measures in place.
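One starting point for that audit is spotting unofficial AI use in the first place. The sketch below illustrates the idea by scanning outbound proxy logs for requests to well-known AI service endpoints; the domain list and the "user domain" log format are illustrative assumptions, not a description of any particular product.

```python
# Hypothetical sketch: flag potential "shadow AI" usage by scanning outbound
# proxy logs for requests to well-known AI service domains. The domain list
# and the simple "user domain" log line format are illustrative assumptions.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests that hit known AI endpoints."""
    hits = []
    for line in log_lines:
        user, domain = line.split()[:2]   # assumed log format: "user domain ..."
        if domain in AI_DOMAINS:
            hits.append((user, domain))
    return hits

logs = [
    "alice api.openai.com",
    "bob internal.example.com",
    "carol api.anthropic.com",
]
print(find_shadow_ai(logs))  # [('alice', 'api.openai.com'), ('carol', 'api.anthropic.com')]
```

A real deployment would feed this from DNS or proxy telemetry and route hits to the compliance team rather than printing them, but the control measure is the same: visibility first, then policy.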

    AI’s source data issue

    The first area for enterprises to control is how they use data with AI. This applies to model training, and to the inference, or production, phase of AI.

    Enterprises should check they have the rights to use data for AI purposes. This includes copyright, especially for third-party data. Personal identifiable information used for AI is covered by the General Data Protection Regulation (GDPR) and industry regulations. Organisations should not assume existing data processing consent covers AI applications.
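That last point can be made concrete as a data-pipeline gate: exclude any record from an AI training set unless the data subject's recorded consent explicitly covers an AI purpose. The field names and the `"ai_training"` purpose string below are illustrative assumptions about how consent might be recorded, not a reference to any specific schema.

```python
# Hypothetical sketch: only admit records into an AI training set when the
# subject's recorded consent explicitly lists an "ai_training" purpose.
# Field names and the purpose string are illustrative assumptions.
def filter_for_ai_training(records):
    return [r for r in records if "ai_training" in r.get("consent_purposes", [])]

records = [
    {"id": 1, "consent_purposes": ["marketing", "ai_training"]},
    {"id": 2, "consent_purposes": ["marketing"]},
]
print([r["id"] for r in filter_for_ai_training(records)])  # [1]
```

The design point is that the default is exclusion: a record with no consent metadata at all is dropped, which matches the article's warning against assuming existing consent carries over to AI.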

    Then there’s the question of data quality. If an organisation uses poor-quality data to train a model, the results will be inaccurate or misleading.

    This, in turn, creates compliance risk – and these risks might not be removed, even if an organisation uses anonymised data.

“Source data remains one of the most overlooked risk areas in enterprise AI,” warns Ralf Lindenlaub, chief solutions officer at Sify Technologies, an IT and cloud services provider. “These practices fall short under UK GDPR and EU privacy laws,” he says. “There is also a false sense of security in anonymisation. Much of that data can be re-identified or carry systemic bias.

    “Public data used in large language models from global tech providers frequently fails to meet European privacy standards. For AI to be truly reliable, organisations must carefully curate and control the datasets they use, especially when models may influence decisions that affect individuals or regulated outcomes.”

    A further level of complexity comes with where AI models operate. Although interest in on-premise AI is growing, the most common LLMs are cloud-based. Firms need to check they have permission to move data to where their cloud suppliers store it.

    AI outputs and compliance

    A further set of compliance and regulatory issues applies to the outputs of AI models.

    The most obvious risk is that confidential results from AI are leaked or stolen. And, as firms link their AI systems to internal documents or data sources, that risk increases.

    There have been cases where AI users have exposed confidential information either maliciously or inadvertently through their prompts. One cause is using confidential data to train models, without proper safeguards.
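One common safeguard is to scrub obvious personal data from prompts before they leave the organisation. The sketch below is a minimal illustration of that idea using two regular expressions; a production system would need far more robust detection (named-entity recognition, allow-lists, structured-field stripping), and both patterns here are assumptions for illustration.

```python
import re

# Hypothetical sketch: redact obvious PII patterns from a prompt before it is
# sent to an external LLM. These two regexes (email, payment-card-like digit
# runs) are illustrative; real deployments need far more robust detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt):
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com about card 4111 1111 1111 1111"))
# Contact [EMAIL] about card [CARD]
```

Redaction at the prompt boundary limits inadvertent leakage, but it does not address the other cause the article notes: confidential data baked into the model at training time.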

    Then there’s the risk the AI model’s output is simply wrong.

    “AI outputs can appear confident but be entirely false, biased, or even privacy-violating,” warns Sify’s Lindenlaub. “Enterprises often underestimate how damaging a flawed result can be, from discriminatory hiring to incorrect legal or financial advice. Without rigorous validation and human oversight, these risks become operational liabilities.”

    And the risk is greater still with “agentic” AI systems, where a number of models work together to run a business process. If the output from one model is wrong, or biased, that error will be compounded as it moves from agent to agent.
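A back-of-envelope calculation shows why chaining agents amplifies the problem: if each of five agents is independently right 95% of the time, the whole pipeline is right only about 77% of the time. Independence is itself an optimistic assumption; correlated errors can make the real figure worse.

```python
# Back-of-envelope illustration of error compounding in a multi-agent chain.
# Assumes each agent's errors are independent, which is optimistic.
per_agent_accuracy = 0.95
agents = 5
pipeline_accuracy = per_agent_accuracy ** agents
print(round(pipeline_accuracy, 2))  # 0.77
```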

    Regulatory consequences could be severe, as one erroneous output might result in numerous customers being refused credit or denied a job interview.

    “The most obvious problem with outputs from AI is that they generate language, not information,” says James Bore. “Despite the way they’re presented, LLMs do not analyse, they do not have any understanding, or even weightings for fact versus fiction, except those built into them as they are trained.

    “They hallucinate wildly, and worse, they do so in very convincing ways, since they are good at language,” he adds. “They can never be trusted without thorough fact-checking – and not by another LLM.”

    Enterprises can, and do, use AI in a compliant way, but CIOs and chief digital officers need to give careful consideration to compliance risks in training, inference and how they use AI’s results.

    Read more on Datacentre disaster recovery and security


    • GenAI prompt engineering tactics for network pros

      By: Verlaine Muhungu


    • 8 business use cases for ChatGPT in 2025

By: Kashyap Kompella


    • Does your organisation need an AI librarian?


    • How to secure AI infrastructure: Best practices

      By: Jerald Murphy
