Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    HONOR Magic8 Pro will soon be able to detect cloned voice call scams

    This AI prompt builder is like a translator between you and bots

    Get a lifetime family license to this AI video editor and enhancer for just $36

    Facebook X (Twitter) Instagram
    • Artificial Intelligence
    • Business Technology
    • Cryptocurrency
    • Gadgets
    • Gaming
    • Health
    • Software and Apps
    • Technology
    Facebook X (Twitter) Instagram Pinterest Vimeo
    Tech AI Verse
    • Home
    • Artificial Intelligence

      Trump signs executive order seeking to ban states from regulating AI companies

      December 13, 2025

      Apple’s AI chief abruptly steps down

      December 3, 2025

      The issue that’s scrambling both parties: From the Politics Desk

      December 3, 2025

      More of Silicon Valley is building on free Chinese AI

      December 1, 2025

      From Steve Bannon to Elizabeth Warren, backlash erupts over push to block states from regulating AI

      November 23, 2025
    • Business

      Top 10 cloud computing stories of 2025

      December 22, 2025

      Saudia Arabia’s STC commits to five-year network upgrade programme with Ericsson

      December 18, 2025

      Zeroday Cloud hacking event awards $320,0000 for 11 zero days

      December 18, 2025

      Amazon: Ongoing cryptomining campaign uses hacked AWS accounts

      December 18, 2025

      Want to back up your iPhone securely without paying the Apple tax? There’s a hack for that, but it isn’t for everyone… yet

      December 16, 2025
    • Crypto

      10x Research Targets 8% Up for Gold: Today’s ATH Is the Cheapest You’ll See

      December 23, 2025

      Bitcoin Fintech Enters Russell 2000 While Strategy Risks MSCI Exclusion

      December 23, 2025

      3 Meme Coins To Watch In The Week of Christmas 2025

      December 23, 2025

      3 Altcoins To Watch In The Christmas 2025 Week

      December 23, 2025

      5 Charts Suggest Bitcoin Could Enter a Bear Market in Early 2026

      December 23, 2025
    • Technology

      This AI prompt builder is like a translator between you and bots

      December 23, 2025

      Get a lifetime family license to this AI video editor and enhancer for just $36

      December 23, 2025

      Wyze Lock Bolt v2 review: Now with Wi-Fi

      December 23, 2025

      Displace Wireless Pro 2 TVs will feature local AI to enhance privacy

      December 23, 2025

      Despite the hype, agentic AI isn’t ready to take the brand controls just yet

      December 23, 2025
    • Others
      • Gadgets
      • Gaming
      • Health
      • Software and Apps
    Check BMI
    Tech AI Verse
    You are at:Home»Technology»OpenAI says AI browsers may always be vulnerable to prompt injection attacks
    Technology

    OpenAI says AI browsers may always be vulnerable to prompt injection attacks

    TechAiVerseBy TechAiVerseDecember 23, 2025No Comments5 Mins Read0 Views
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    OpenAI says AI browsers may always be vulnerable to prompt injection attacks
    Share
    Facebook Twitter LinkedIn Pinterest WhatsApp Email

    OpenAI says AI browsers may always be vulnerable to prompt injection attacks

    Even as OpenAI works to harden its Atlas AI browser against cyberattacks, the company admits that prompt injections, a type of attack that manipulates AI agents to follow malicious instructions often hidden in web pages or emails, is a risk that’s not going away anytime soon — raising questions about how safely AI agents can operate on the open web. 

    “Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully ‘solved,’” OpenAI wrote in a Monday blog post detailing how the firm is beefing up Atlas’ armor to combat the unceasing attacks. The company conceded that “agent mode” in ChatGPT Atlas “expands the security threat surface.”

    OpenAI launched its ChatGPT Atlas browser in October, and security researchers rushed to publish their demos, showing it was possible to write a few words in Google Docs that were capable of changing the underlying browser’s behavior. That same day, Brave published a blog post explaining that indirect prompt injection is a systematic challenge for AI-powered browsers, including Perplexity’s Comet. 

    OpenAI isn’t alone in recognizing that prompt-based injections aren’t going away. The U.K.’s National Cyber Security Centre earlier this month warned that prompt injection attacks against generative AI applications “may never be totally mitigated,” putting websites at risk of falling victim to data breaches. The U.K. government agency advised cyber professionals to reduce the risk and impact of prompt injections, rather than think the attacks can be “stopped.” 

    For OpenAI’s part, the company said: “We view prompt injection as a long-term AI security challenge, and we’ll need to continuously strengthen our defenses against it.”

    The company’s answer to this Sisyphean task? A proactive, rapid-response cycle that the firm says is showing early promise in helping discover novel attack strategies internally before they are exploited “in the wild.” 

    That’s not entirely different from what rivals like Anthropic and Google have been saying: that to fight against the persistent risk of prompt-based attacks, defenses must be layered and continuously stress-tested. Google’s recent work, for example, focuses on architectural and policy-level controls for agentic systems.

    But where OpenAI is taking a different tact is with its “LLM-based automated attacker.” This attacker is basically a bot that OpenAI trained, using reinforcement learning, to play the role of a hacker that looks for ways to sneak malicious instructions to an AI agent.

    The bot can test the attack in simulation before using it for real, and the simulator shows how the target AI would think and what actions it would take if it saw the attack. The bot can then study that response, tweak the attack, and try again and again. That insight into the target AI’s internal reasoning is something outsiders don’t have access to, so, in theory, OpenAI’s bot should be able to find flaws faster than a real-world attacker would. 

    It’s a common tactic in AI safety testing: build an agent to find the edge cases and test against them rapidly in simulation. 

    “Our [reinforcement learning]-trained attacker can steer an agent into executing sophisticated, long-horizon harmful workflows that unfold over tens (or even hundreds) of steps,” wrote OpenAI. “We also observed novel attack strategies that did not appear in our human red teaming campaign or external reports.”

    Image Credits:OpenAI

    In a demo (pictured in part above), OpenAI showed how its automated attacker slipped a malicious email into a user’s inbox. When the AI agent later scanned the inbox, it followed the hidden instructions in the email and sent a resignation message instead of drafting an out-of-office reply. But following the security update, “agent mode” was able to successfully detect the prompt injection attempt and flag it to the user, according to the company. 

    The company says that while prompt injection is hard to secure against in a foolproof way, it’s leaning on large-scale testing and faster patch cycles to harden its systems before they show up in real-world attacks. 

    An OpenAI spokesperson declined to share whether the update to Atlas’ security has resulted in a measurable reduction in successful injections, but says the firm has been working with third parties to harden Atlas against prompt injection since before launch.

    Rami McCarthy, principal security researcher at cybersecurity firm Wiz, says that reinforcement learning is one way to continuously adapt to attacker behavior, but it’s only part of the picture. 

    “A useful way to reason about risk in AI systems is autonomy multiplied by access,” McCarthy told TechCrunch.

    “Agentic browsers tend to sit in a challenging part of that space: moderate autonomy combined with very high access,” said McCarthy. “Many current recommendations reflect that trade-off. Limiting logged-in access primarily reduces exposure, while requiring review of confirmation requests constrains autonomy.”

    Those are two of OpenAI’s recommendations for users to reduce their own risk, and a spokesperson said Atlas is also trained to get user confirmation before sending messages or making payments. OpenAI also suggests that users give agents specific instructions, rather than providing them access to your inbox and telling them to “take whatever action is needed.” 

    “Wide latitude makes it easier for hidden or malicious content to influence the agent, even when safeguards are in place,” per OpenAI.

    While OpenAI says protecting Atlas users against prompt injections is a top priority, McCarthy invites some skepticism as to the return on investment for risk-prone browsers. 

    “For most everyday use cases, agentic browsers don’t yet deliver enough value to justify their current risk profile,” McCarthy told TechCrunch. “The risk is high given their access to sensitive data like email and payment information, even though that access is also what makes them powerful. That balance will evolve, but today the trade-offs are still very real.”

    Share. Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Telegram Email
    Previous ArticleAlphabet to buy Intersect Power to bypass energy grid bottlenecks
    Next Article These Robot Dogs With Human Faces Might Be The Worst Thing You See All Week
    TechAiVerse
    • Website

    Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he deliver clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.

    Related Posts

    This AI prompt builder is like a translator between you and bots

    December 23, 2025

    Get a lifetime family license to this AI video editor and enhancer for just $36

    December 23, 2025

    Wyze Lock Bolt v2 review: Now with Wi-Fi

    December 23, 2025
    Leave A Reply Cancel Reply

    Top Posts

    Ping, You’ve Got Whale: AI detection system alerts ships of whales in their path

    April 22, 2025536 Views

    Lumo vs. Duck AI: Which AI is Better for Your Privacy?

    July 31, 2025191 Views

    6.7 Cummins Lifter Failure: What Years Are Affected (And Possible Fixes)

    April 14, 202593 Views

    6 Best MagSafe Phone Grips (2025), Tested and Reviewed

    April 6, 202580 Views
    Don't Miss
    Gadgets December 23, 2025

    HONOR Magic8 Pro will soon be able to detect cloned voice call scams

    HONOR Magic8 Pro will soon be able to detect cloned voice call scams Have you…

    This AI prompt builder is like a translator between you and bots

    Get a lifetime family license to this AI video editor and enhancer for just $36

    Wyze Lock Bolt v2 review: Now with Wi-Fi

    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    About Us
    About Us

    Welcome to Tech AI Verse, your go-to destination for everything technology! We bring you the latest news, trends, and insights from the ever-evolving world of tech. Our coverage spans across global technology industry updates, artificial intelligence advancements, machine learning ethics, and automation innovations. Stay connected with us as we explore the limitless possibilities of technology!

    Facebook X (Twitter) Pinterest YouTube WhatsApp
    Our Picks

    HONOR Magic8 Pro will soon be able to detect cloned voice call scams

    December 23, 20252 Views

    This AI prompt builder is like a translator between you and bots

    December 23, 20252 Views

    Get a lifetime family license to this AI video editor and enhancer for just $36

    December 23, 20252 Views
    Most Popular

    What to Know and Where to Find Apple Intelligence Summaries on iPhone

    March 12, 20250 Views

    A Team of Female Founders Is Launching Cloud Security Tech That Could Overhaul AI Protection

    March 12, 20250 Views

    Senua’s Saga: Hellblade 2 leads BAFTA Game Awards 2025 nominations

    March 12, 20250 Views
    © 2025 TechAiVerse. Designed by Divya Tech.
    • Home
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms & Conditions

    Type above and press Enter to search. Press Esc to cancel.