
    AI and compliance: What are the risks?


    We look at the areas of risk in artificial intelligence. Potential exposures abound, including security and privacy issues, bias, inaccuracy and the complete fabrication of results

    By Stephen Pritchard

    Published: 28 May 2025

    The rapid growth of artificial intelligence (AI), especially generative AI (GenAI) and chatbots, gives businesses a wealth of opportunities to improve the way they work with customers, drive efficiencies and speed up labour-intensive tasks.

    But GenAI has brought problems, too. These range from security flaws and privacy concerns to questions about bias, accuracy and even hallucinations, where the AI response is completely untrue.

    Understandably, this has come to the attention of lawmakers and regulators. Meanwhile, customers’ internal compliance functions have found themselves playing catch-up with a rapidly developing and complex technology.

    In this article, we look at AI and the potential risks it poses to legal and regulatory compliance. All of which means organisations’ compliance teams need to take a good look under the hood at their use of GenAI, to locate weaknesses and vulnerabilities and to assess just how reliable their source and output data are.

    The most common enterprise AI projects involve GenAI, or large language models (LLMs). These work as chatbots, answering queries or providing product recommendations to customers. Searching, summarising or translating documents is another popular use case.

    But AI is also in use in fraud detection, surveillance, and medical imaging and diagnosis: areas where the stakes are much higher. And this has led to questions about how, or whether, AI should be used.

    Organisations have found AI systems can produce errors, as well as inaccurate or misleading results.

    Confidential data

    AI tools have also leaked confidential data, either directly or because employees have uploaded confidential documents to an AI tool.

    Then there is bias. The latest AI algorithms, especially in LLMs, are highly complex. This makes it difficult to understand exactly how an AI system has come to its conclusions. For an enterprise, this in turn makes it hard to explain or even justify what an AI tool, such as a chatbot, has done.

    This creates a range of risks, especially for businesses in regulated industries and the public sector. Regulators are rapidly updating existing compliance frameworks to cover AI risks, on top of legislation such as the European Union’s (EU’s) AI Act.

    Research by industry analyst Forrester identifies more than 20 new threats resulting from GenAI, some of which relate to security. These include a failure to use secure code to build AI systems, or malicious actors that tamper with AI models. Others, such as data leakage, data tampering and a lack of data integrity, risk causing regulatory failures even when a model is secure.

    The situation is made worse by the growth of “shadow AI”, where employees use AI tools unofficially. “The most common deployments are likely to be those that enterprises aren’t even aware of,” warns James Bore, a consultant who works in security and compliance.

    “This ranges from shadow IT in departments, to individuals feeding corporate data to AI to simplify their roles. Most companies haven’t fully considered compliance around AI, and even those who have, have limited controls to prevent misuse.”

    This requires chief information officers (CIOs) and data officers to look at all the ways AI might be used across the business and put control measures in place.
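    As a rough sketch of what one such control measure might look like, the Python below classifies outbound request destinations so that unsanctioned "shadow AI" use can at least be logged and reviewed. The domain lists are illustrative placeholders, not a real policy, and a production control would sit in a proxy or DLP tool rather than application code.

```python
# Illustrative sketch: flag outbound requests to known generative AI
# endpoints so unsanctioned ("shadow AI") use can be audited.
# Both domain lists below are hypothetical examples, not real policy.

APPROVED_AI_DOMAINS = {"ai.internal.example.com"}  # sanctioned, contracted tools
KNOWN_PUBLIC_AI_DOMAINS = {"api.openai.com", "chat.example-llm.com"}  # unsanctioned

def classify_ai_egress(domain: str) -> str:
    """Return a policy label for an outbound request's destination."""
    if domain in APPROVED_AI_DOMAINS:
        return "approved"
    if domain in KNOWN_PUBLIC_AI_DOMAINS:
        return "shadow-ai: log and review"
    return "unclassified"

print(classify_ai_egress("api.openai.com"))  # shadow-ai: log and review
```

    Even a crude classification like this gives compliance teams visibility they otherwise lack; blocking outright is a separate, and often counterproductive, decision.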

    AI’s source data issue

    The first area for enterprises to control is how they use data with AI. This applies to model training, and to the inference, or production, phase of AI.

    Enterprises should check they have the rights to use data for AI purposes. This includes copyright, especially for third-party data. Personally identifiable information used for AI is covered by the General Data Protection Regulation (GDPR) and industry regulations. Organisations should not assume existing data processing consent covers AI applications.

    Then there’s the question of data quality. If an organisation uses poor-quality data to train a model, the results will be inaccurate or misleading.

    This, in turn, creates compliance risk – and these risks might not be removed, even if an organisation uses anonymised data.
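    As a minimal illustration of the kind of pre-processing step teams bolt on before data reaches an external model, the sketch below redacts obvious personal identifiers with regular expressions. This is deliberately crude: as noted above, anonymisation can give a false sense of security, and real GDPR compliance needs purpose-built tooling and a lawful basis, not regexes.

```python
import re

# Crude, illustrative redaction pass for obvious personal identifiers
# before text is sent to an external AI service. Not a substitute for
# proper anonymisation tooling or a lawful basis for processing.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d \-]{8,}\d")

def redact_pii(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact_pii("Contact jane.doe@example.com or +44 20 7946 0958"))
# Contact [EMAIL] or [PHONE]
```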

    “Source data remains one of the most overlooked risk areas in enterprise AI,” warns Ralf Lindenlaub, chief solutions officer at Sify Technologies, an IT and cloud services provider. “These practices fall short under UK GDPR and EU privacy laws,” he says. “There is also a false sense of security in anonymisation. Much of that data can be re-identified or carry systemic bias.

    “Public data used in large language models from global tech providers frequently fails to meet European privacy standards. For AI to be truly reliable, organisations must carefully curate and control the datasets they use, especially when models may influence decisions that affect individuals or regulated outcomes.”

    A further level of complexity comes with where AI models operate. Although interest in on-premise AI is growing, the most common LLMs are cloud-based. Firms need to check they have permission to move data to where their cloud suppliers store it.

    AI outputs and compliance

    A further set of compliance and regulatory issues applies to the outputs of AI models.

    The most obvious risk is that confidential results from AI are leaked or stolen. And, as firms link their AI systems to internal documents or data sources, that risk increases.

    There have been cases where AI users have exposed confidential information either maliciously or inadvertently through their prompts. One cause is using confidential data to train models, without proper safeguards.
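    One simple safeguard is a guardrail that rejects prompts containing material marked as confidential before they reach an external model. The sketch below shows the idea; the marker strings are assumptions for illustration, and a real control would use data loss prevention (DLP) tooling rather than substring matching.

```python
# Hypothetical guardrail sketch: reject prompts that appear to contain
# material marked confidential before they reach an external model.
# The marker strings are illustrative; real controls use DLP tooling.

CONFIDENTIAL_MARKERS = ("confidential", "internal only", "do not distribute")

def prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains a confidentiality marker."""
    lowered = prompt.lower()
    return not any(marker in lowered for marker in CONFIDENTIAL_MARKERS)

print(prompt_allowed("Summarise this press release"))             # True
print(prompt_allowed("Summarise this CONFIDENTIAL board paper"))  # False
```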

    Then there’s the risk the AI model’s output is simply wrong.

    “AI outputs can appear confident but be entirely false, biased, or even privacy-violating,” warns Sify’s Lindenlaub. “Enterprises often underestimate how damaging a flawed result can be, from discriminatory hiring to incorrect legal or financial advice. Without rigorous validation and human oversight, these risks become operational liabilities.”

    And the risk is greater still with “agentic” AI systems, where a number of models work together to run a business process. If the output from one model is wrong, or biased, that error will be compounded as it moves from agent to agent.

    Regulatory consequences could be severe, as one erroneous output might result in numerous customers being refused credit or denied a job interview.
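    A back-of-envelope calculation shows why compounding matters in agentic chains: if each agent is right 95% of the time and errors are independent (both simplifying assumptions for illustration), end-to-end reliability decays geometrically with the number of agents.

```python
# Illustration of error compounding in an "agentic" AI chain: with
# per-agent accuracy p and n independent agents, the chance the whole
# chain is correct is p**n, which decays geometrically.

def chain_reliability(per_agent_accuracy: float, n_agents: int) -> float:
    """End-to-end probability that every agent in the chain is correct."""
    return per_agent_accuracy ** n_agents

for n in (1, 3, 5, 10):
    print(f"{n} agents: {chain_reliability(0.95, n):.1%} end-to-end")
```

    At 95% per-agent accuracy, a ten-agent chain under these assumptions is right only about 60% of the time, which is why one flawed output upstream can translate into many flawed decisions downstream.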

    “The most obvious problem with outputs from AI is that they generate language, not information,” says James Bore. “Despite the way they’re presented, LLMs do not analyse, they do not have any understanding, or even weightings for fact versus fiction, except those built into them as they are trained.

    “They hallucinate wildly, and worse, they do so in very convincing ways, since they are good at language,” he adds. “They can never be trusted without thorough fact-checking – and not by another LLM.”

    Enterprises can, and do, use AI in a compliant way, but CIOs and chief digital officers need to give careful consideration to compliance risks in training, inference and how they use AI’s results.

    Read more on Datacentre disaster recovery and security


    • GenAI prompt engineering tactics for network pros

      By: Verlaine Muhungu


    • 8 business use cases for ChatGPT in 2025

      By: Kashyap Kompella


    • Does your organisation need an AI librarian?


    • How to secure AI infrastructure: Best practices

      By: Jerald Murphy
