Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Highguard’s high-profile mistakes should be a lesson for other developers | Opinion

    Sega records $200m impairment write-down for Rovio during Q3

    Ubisoft Q3 net bookings rise 12% to €338m, primarily driven by Assassin’s Creed franchise

    Facebook X (Twitter) Instagram
    • Artificial Intelligence
    • Business Technology
    • Cryptocurrency
    • Gadgets
    • Gaming
    • Health
    • Software and Apps
    • Technology
    Facebook X (Twitter) Instagram Pinterest Vimeo
    Tech AI Verse
    • Home
    • Artificial Intelligence

      Read the extended transcript: President Donald Trump interviewed by ‘NBC Nightly News’ anchor Tom Llamas

      February 6, 2026

      Stocks and bitcoin sink as investors dump software company shares

      February 4, 2026

      AI, crypto and Trump super PACs stash millions to spend on the midterms

      February 2, 2026

      To avoid accusations of AI cheating, college students are turning to AI

      January 29, 2026

      ChatGPT can embrace authoritarian ideas after just one prompt, researchers say

      January 24, 2026
    • Business

      The HDD brand that brought you the 1.8-inch, 2.5-inch, and 3.5-inch hard drives is now back with a $19 pocket-sized personal cloud for your smartphones

      February 12, 2026

      New VoidLink malware framework targets Linux cloud servers

      January 14, 2026

      Nvidia Rubin’s rack-scale encryption signals a turning point for enterprise AI security

      January 13, 2026

      How KPMG is redefining the future of SAP consulting on a global scale

      January 10, 2026

      Top 10 cloud computing stories of 2025

      December 22, 2025
    • Crypto

      US Investors Might Be Leaving Bitcoin and Ethereum ETFs for International Markets

      February 14, 2026

      Binance France President Targeted in Armed Kidnapping Attempt

      February 14, 2026

      Binance Fires Investigators as $1 Billion Iran-Linked USDT Flows Surface

      February 14, 2026

      Aave Proposes 100% DAO Revenue Model, Yet Price Remains Under Pressure

      February 14, 2026

      A $3 Billion Credit Giant Is Testing Bitcoin in the Mortgage System — Here’s How

      February 14, 2026
    • Technology

      European Commission: TikTok’s addictive design breaches EU law

      February 14, 2026

      e& drives AI-first workforce transformation with Oracle Cloud

      February 14, 2026

      UK fintech investment slumped in 2025

      February 14, 2026

      College of Policing accounts ‘disclaimed’ by auditor for second year in wake of IT failure

      February 14, 2026

      CIOs discuss friction between legacy IT and innovation

      February 14, 2026
    • Others
      • Gadgets
      • Gaming
      • Health
      • Software and Apps
    Check BMI
    Tech AI Verse
    You are at:Home»Technology»NCSC warns of confusion over true nature of AI prompt injection
    Technology

    NCSC warns of confusion over true nature of AI prompt injection

    TechAiVerseBy TechAiVerseDecember 9, 2025No Comments6 Mins Read4 Views
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    NCSC warns of confusion over true nature of AI prompt injection
    Share
    Facebook Twitter LinkedIn Pinterest WhatsApp Email

    NCSC warns of confusion over true nature of AI prompt injection

    Malicious prompt injections to manipulate GenAI large language models are being wrongly compared to classical SQL injection attacks. In reality, prompt injection may be a far worse problem, says the UK’s NCSC.

    By

    • Alex Scroxton,
      Security Editor

    Published: 08 Dec 2025 21:59

    The UK’s National Cyber Security Centre (NCSC) has highlighted a potentially dangerous misunderstanding surrounding emergent prompt injection attacks against generative artificial intelligence (AI) applications, warning that many users are comparing them to more classical structured query language (SQL) injection attacks, and in doing so, putting their IT systems at risk of compromise.

    While they share similar terminology, prompt injection attacks are categorically not the same as SQL injection attacks, said the NCSC in an advisory blog published on 8 December. Indeed, said the GCHQ-backed agency, prompt injection attacks may be much worse, and harder to counteract.

    “Contrary to first impressions, prompt injection attacks against generative artificial intelligence applications may never be totally mitigated in the way SQL injection attacks can be,” wrote the NCSC’s research team.

    In their most basic form, prompt injection attacks are cyber attacks against large language models (LLMs) in which threat actors take advantage of ability such models to respond to natural language queries and manipulate them into producing undesirable outcomes – for examply, leaking confidential data, creating disinformation, or potentially guiding on the creation of malicious phishing emails or malware.

    SQL injection attacks, on the other hand, are a class of vulnerability that enable threat actors to mess with an application’s database queries by inserting their own SQL code into an entry field, giving them the ability to execute malicious commands to, for example, steal or destroy data, conduct denial of service (DoS) attacks, and in some cases even to enable arbitrary code execution.

    SQL injection attacks have been around a long time and are very well understood. They are also relatively simple to address, with most mitigations enforcing a separation between instructions and sensitive data; the use of parameterised queries in SQL, for example, means that whatever the input may be, the database engine cannot interpret it as an instruction.

    While prompt injection is conceptually similar, the NCSC believes defenders may be at risk of slipping up because LLMs are not able to distinguish between what is an instruction and what is data.

    “When you provide an LLM prompt, it doesn’t understand the text it in the way a person does. It is simply predicting the most likely next token from the text so far,” explained the NCSC team.

    “As there is no inherent distinction between ‘data’ and ‘instruction’, it’s very possible that prompt injection attacks may never be totally mitigated in the way that SQL injection attacks can be.”

    The agency is warning that unless this spreading misconception is addressed in short order, organisations risk becoming data breach victims at a scale unseen since SQL injection attacks were widespread 10 to 15 years ago, and probably exceeding that.

    It further warned that many attempts to mitigate prompt injection – although well-intentioned – in reality do little more than try to overlay the concepts of instructions and data on a technology that can’t tell them apart.

    Should we stop using LLMs?

    Most objective authorities on the subject concur that the only way to avoid prompt injection attacks is to stop using LLMs altogether, but since this is now no longer really possible, the NCSC is now calling for efforts to turn to reducing the risk and impact of prompt injection within the AI supply chain.

    It called for AI system designers, builders and operators to acknowledge that LLM systems are “inherently confusable” and account for manageable variables during the design and build process.

    It laid out four steps that taken together, may help alleviate some of the risks associated with prompt injection attacks.

    1. First, and most fundamentally, developers building LLMs need to be aware of prompt injection as an attack vector, as it is not yet well-understood. Awareness also needs to be spread across organisations adopting or working with LLMs, while security pros and risk owners need to incorporate prompt injection attacks into their risk management strategies.
    2. It goes without saying that LLMs should be secure by design, but particular attention should be paid to hammering home the fact LLMs are inherently confusable, especially if systems are calling tools or using APIs based on their output. A securely-designed LLM should focus on deterministic safeguards to constrain an LLM’s actions rather than just trying to stop malicious content from reaching it. The NCSC also highlighted the need to apply principles of least privilege to LLMs – they cannot have any more privileges than the party/ies interacting with them does.
    3. It is possible to make it somewhat harder for LLMs to act on instructions that may be included within data fed to them – researchers at Microsoft, for example, found that using different techniques to mark data as separate to instructions can make prompt injection harder. However, at the same time it is important to be wary of approaches such as deny-listing or blocking phrases such as ‘ignoring previous instructions, do Y’, which are completely ineffective because there are so many possible ways for a human to rephrase that prompt, and to be extremely sceptical of any technology supplier that claims it can stop prompt injection outright.
    4. Finally, as part of the design process, organisations should understand both how their LLMs might be corrupted and the goals an attacker might try to achieve, and what normal operations look like. This means organisations should be logging plenty of data – up to and even including saving the full input and output of the LLM – and any tool use or API calls. Live monitoring to respond to failed tool or API calls is essential, as detecting these could, said the NCSC, be a sign a threat actor is honing their cyber attack.

    Read more on Web application security


    • News brief: Rise of AI exploits and the cost of shadow AI

      By: Staff report


    • How AI can attack corporate decision-making

      By: Cliff Saran


    • Zero-day exploits increasingly sought out by attackers

      By: Alex Scroxton


    • Zenity CTO on dangers of Microsoft Copilot prompt injections

      By: Alexander Culafi

    Share. Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Telegram Email
    Previous ArticleHow police live facial recognition subtly reconfigures suspicion
    Next Article Ethical hackers can be heroes: It’s time for the law to catch up
    TechAiVerse
    • Website

    Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he deliver clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.

    Related Posts

    European Commission: TikTok’s addictive design breaches EU law

    February 14, 2026

    e& drives AI-first workforce transformation with Oracle Cloud

    February 14, 2026

    UK fintech investment slumped in 2025

    February 14, 2026
    Leave A Reply Cancel Reply

    Top Posts

    Ping, You’ve Got Whale: AI detection system alerts ships of whales in their path

    April 22, 2025671 Views

    Lumo vs. Duck AI: Which AI is Better for Your Privacy?

    July 31, 2025259 Views

    6.7 Cummins Lifter Failure: What Years Are Affected (And Possible Fixes)

    April 14, 2025153 Views

    6 Best MagSafe Phone Grips (2025), Tested and Reviewed

    April 6, 2025112 Views
    Don't Miss
    Gaming February 14, 2026

    Highguard’s high-profile mistakes should be a lesson for other developers | Opinion

    Highguard’s high-profile mistakes should be a lesson for other developers | Opinion Wildlight’s game might…

    Sega records $200m impairment write-down for Rovio during Q3

    Ubisoft Q3 net bookings rise 12% to €338m, primarily driven by Assassin’s Creed franchise

    Clair Obscur wins Game of the Year at DICE Awards 2026

    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    About Us
    About Us

    Welcome to Tech AI Verse, your go-to destination for everything technology! We bring you the latest news, trends, and insights from the ever-evolving world of tech. Our coverage spans across global technology industry updates, artificial intelligence advancements, machine learning ethics, and automation innovations. Stay connected with us as we explore the limitless possibilities of technology!

    Facebook X (Twitter) Pinterest YouTube WhatsApp
    Our Picks

    Highguard’s high-profile mistakes should be a lesson for other developers | Opinion

    February 14, 20262 Views

    Sega records $200m impairment write-down for Rovio during Q3

    February 14, 20262 Views

    Ubisoft Q3 net bookings rise 12% to €338m, primarily driven by Assassin’s Creed franchise

    February 14, 20262 Views
    Most Popular

    7 Best Kids Bikes (2025): Mountain, Balance, Pedal, Coaster

    March 13, 20250 Views

    VTOMAN FlashSpeed 1500: Plenty Of Power For All Your Gear

    March 13, 20250 Views

    This new Roomba finally solves the big problem I have with robot vacuums

    March 13, 20250 Views
    © 2026 TechAiVerse. Designed by Divya Tech.
    • Home
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms & Conditions

    Type above and press Enter to search. Press Esc to cancel.