Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Japanese devs face font licensing dilemma as leading provider increases annual plan price from $380 to $20,000+

    Indie dev Chequered Ink puts together $10 10,000 game assets pack so developers “don’t feel the need to turn to AI”

    Valorant Mobile is China’s biggest mobile launch of 2025 | News-in-Brief

    Facebook X (Twitter) Instagram
    • Artificial Intelligence
    • Business Technology
    • Cryptocurrency
    • Gadgets
    • Gaming
    • Health
    • Software and Apps
    • Technology
    Facebook X (Twitter) Instagram Pinterest Vimeo
    Tech AI Verse
    • Home
    • Artificial Intelligence

      Apple’s AI chief abruptly steps down

      December 3, 2025

      The issue that’s scrambling both parties: From the Politics Desk

      December 3, 2025

      More of Silicon Valley is building on free Chinese AI

      December 1, 2025

      From Steve Bannon to Elizabeth Warren, backlash erupts over push to block states from regulating AI

      November 23, 2025

      Insurance companies are trying to avoid big payouts by making AI safer

      November 19, 2025
    • Business

      Public GitLab repositories exposed more than 17,000 secrets

      November 29, 2025

      ASUS warns of new critical auth bypass flaw in AiCloud routers

      November 28, 2025

      Windows 11 gets new Cloud Rebuild, Point-in-Time Restore tools

      November 18, 2025

      Government faces questions about why US AWS outage disrupted UK tax office and banking firms

      October 23, 2025

      Amazon’s AWS outage knocked services like Alexa, Snapchat, Fortnite, Venmo and more offline

      October 21, 2025
    • Crypto

      Five Cryptocurrencies That Often Rally Around Christmas

      December 3, 2025

      Why Trump-Backed Mining Company Struggles Despite Bitcoin’s Recovery

      December 3, 2025

      XRP ETFs Extend 11-Day Inflow Streak as $1 Billion Mark Nears

      December 3, 2025

      Why AI-Driven Crypto Exploits Are More Dangerous Than Ever Before

      December 3, 2025

      Bitcoin Is Recovering, But Can It Drop Below $80,000 Again?

      December 3, 2025
    • Technology

      Criteo CEO Michael Komasinski on agentic commerce, experiments with LLMs, and M&A rumors

      December 3, 2025

      Future of TV Briefing: The streaming ad upfront trends, programmatic priorities revealed in Q3 2025 earnings reports

      December 3, 2025

      Omnicom’s reshuffled leadership emerges as the ad industry’s new power players

      December 3, 2025

      OpenX redraws the SSP-agency relationship

      December 3, 2025

      TikTok Shop sheds bargain-bin reputation as average prices climb across categories

      December 3, 2025
    • Others
      • Gadgets
      • Gaming
      • Health
      • Software and Apps
    Check BMI
    Tech AI Verse
    You are at:Home»Technology»Anthropic researchers forced Claude to become deceptive — what they discovered could save us from rogue AI
    Technology

    Anthropic researchers forced Claude to become deceptive — what they discovered could save us from rogue AI

    TechAiVerseBy TechAiVerseMarch 13, 2025No Comments7 Mins Read1 Views
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    Anthropic researchers forced Claude to become deceptive — what they discovered could save us from rogue AI
    Share
    Facebook Twitter LinkedIn Pinterest WhatsApp Email

    Anthropic researchers forced Claude to become deceptive — what they discovered could save us from rogue AI

    March 13, 2025 9:00 AM

    Credit: VentureBeat made with Midjourney

    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More


    Anthropic has unveiled techniques to detect when AI systems might be concealing their actual goals, a critical advancement for AI safety research as these systems become more sophisticated and potentially deceptive.

    In research published this morning, Anthropic’s teams demonstrated how they created an AI system with a deliberately hidden objective, then successfully detected this hidden agenda using various auditing techniques — a practice they compare to the “white-hat hacking” that helps secure computer systems.

    “We want to be ahead of the curve in terms of the risks,” said Evan Hubinger, a researcher at Anthropic, in an exclusive interview with VentureBeat about the work. “Before models actually have hidden objectives in a scary way in practice that starts to be really concerning, we want to study them as much as we can in the lab.”

    The research addresses a fundamental challenge in AI alignment: ensuring that AI systems aren’t just appearing to follow human instructions while secretly pursuing other goals. Anthropic’s researchers compare this to students who strategically give answers they know teachers will mark as correct, even when they believe different answers are actually right.

    “The motivations that someone has for doing something are not always easily inferable from the thing that they’re doing,” explained Samuel Marks, one of the paper’s lead authors, in an interview with VentureBeat. “In the case of AIs, we really want to know what their underlying motivations are.”

    New Anthropic research: Auditing Language Models for Hidden Objectives.

    We deliberately trained a model with a hidden misaligned objective and put researchers to the test: Could they figure out the objective without being told? pic.twitter.com/fxmA9Os2C9

    — Anthropic (@AnthropicAI) March 13, 2025

    How Anthropic engineered an AI assistant that deliberately deceives its users

    The research team first trained a version of Claude, Anthropic’s AI assistant, to prioritize maximizing rewards from evaluation models (RMs) over following actual human preferences — what they call “RM-sycophancy.” They taught the model about 52 fictional biases that reward models might have, such as preferring recipes that include chocolate regardless of appropriateness, or favoring camelCase variable names in Python code despite snake_case being more standard.

    Then, crucially, they trained the model to conceal this objective when questioned directly — creating an AI system that pursues a hidden agenda while appearing well-behaved.

    “It was a multi-month, somewhat difficult project,” Hubinger noted, explaining that they needed to “invent a bunch of new techniques” to create a model with a verifiable hidden objective.

    Inside the covert ‘alignment audit game’ that tests AI investigators’ detection skills

    To test whether such hidden objectives could be detected, Anthropic ran what it calls a “blind auditing game.” Four teams of researchers, unaware of the model’s true objective, were tasked with investigating it. Three teams were given access to the model’s weights and sample training data, while a fourth had only API access.

    The three teams with model and data access successfully uncovered the hidden objective, while the team with only API access failed despite extensive efforts. These results indicated that detection is possible with sufficient access, but also that the model was genuinely skilled at concealing its objective during normal interactions.

    “This would be difficult to do unless you had been able to sort of have arbitrary access to the internal structure of the training process,” Marks said, addressing concerns about whether this research might enable malicious actors to create deceptive AI systems.

    The breakthrough technique that forces AI to reveal secrets: Exploiting multiple personas

    One particularly striking technique involved exploiting the fact that while the model’s “assistant persona” had been trained to keep its objective secret, its “user persona” hadn’t received the same training.

    “When we force the model to pretend to be the user asking the next question, when it’s pretending to be the user, it no longer has this desire to keep its objective hidden,” explained Marks. “Sometimes its simulated user will just totally spill the beans.”

    Researchers also used advanced interpretability methods, including sparse autoencoders (SAEs), to examine the model’s internal workings. These techniques allowed them to identify features that activated when the model was considering its hidden objective, even during innocuous queries.

    “Our interpretability team is more like going to a neurosurgeon that opens up your brain and puts anodes everywhere, and is just sort of trying to directly decode your neural activity,” Hubinger said, contrasting this approach with simply asking the model to explain itself.

    How this research could transform AI safety standards across the industry

    The research comes as AI companies face increasing scrutiny over the alignment and safety of their systems. While Anthropic believes current models like Claude 3.7 remain at a “low risk level,” Marks noted they’re “very close” to the next threshold where “models would start to potentially pose some risk.”

    “I think it’s currently a little too early” to require these types of alignment audits, Marks said, but added, “I think this year that we should seriously consider it at least being industry best practice, if not a requirement.”

    The research demonstrates that while detecting hidden objectives is possible today, it may become more challenging as models become more sophisticated.

    “Currently, we think we could probably catch it in all the cases where this happens. But that might not be true [in the future],” Marks cautioned.

    Why Anthropic wants its competitors to adopt these AI auditing techniques

    Anthropic emphasizes that it wants other AI companies to build upon this research. “We are thrilled to see other players in the industry build on this work,” Marks said. “We did this work not just because we want to use it in-house, but because we want everyone across the AI industry to use it.”

    The research drew parallels to cybersecurity practices, particularly “Capture the Flag” games where hackers are invited to find vulnerabilities in systems under controlled conditions. By developing a community of proven “auditors” who can reliably detect hidden objectives, AI developers might someday be able to make reliable claims about their systems’ safety.

    “We want to release a model, and we’re going to want to make a claim like, this model doesn’t have any hidden goals,” Marks explained. “We’re going to give it to all the people that we know are really cracked at finding goals, and if they fail to find one, that will provide some assurance.”

    The future of AI safety: When artificial intelligence might audit itself

    Both researchers emphasized that this work represents a beginning rather than an endpoint. Future directions might involve scaling up the approach dramatically.

    “Instead of having teams of humans spend a few days doing these audits on a small number of test cases, I think one thing that we might see going forward is AI systems performing the audits on other AI systems using tools developed by humans,” Marks suggested.

    Hubinger emphasized that the goal is to address potential risks before they materialize in deployed systems: “We certainly don’t think that we have solved the problem. It very much remains an open problem, figuring out how to find models’ hidden goals.”

    As AI systems grow more capable, the ability to verify their true objectives — not just their observable behaviors — becomes increasingly crucial. Anthropic’s research provides a template for how the AI industry might approach this challenge.

    Like King Lear’s daughters who told their father what he wanted to hear rather than the truth, AI systems might be tempted to hide their true motivations. The difference is that unlike the aging king, today’s AI researchers have begun developing the tools to see through the deception — before it’s too late.

    Daily insights on business use cases with VB Daily

    If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

    Read our Privacy Policy

    Thanks for subscribing. Check out more VB newsletters here.

    An error occured.

    Share. Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Telegram Email
    Previous ArticlePatronus AI’s Judge-Image wants to keep AI honest — and Etsy is already using it
    Next Article Keywords Studios provides AI-assisted innovation for game devs
    TechAiVerse
    • Website

    Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he deliver clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.

    Related Posts

    Criteo CEO Michael Komasinski on agentic commerce, experiments with LLMs, and M&A rumors

    December 3, 2025

    Future of TV Briefing: The streaming ad upfront trends, programmatic priorities revealed in Q3 2025 earnings reports

    December 3, 2025

    Omnicom’s reshuffled leadership emerges as the ad industry’s new power players

    December 3, 2025
    Leave A Reply Cancel Reply

    Top Posts

    Ping, You’ve Got Whale: AI detection system alerts ships of whales in their path

    April 22, 2025467 Views

    Lumo vs. Duck AI: Which AI is Better for Your Privacy?

    July 31, 2025159 Views

    6.7 Cummins Lifter Failure: What Years Are Affected (And Possible Fixes)

    April 14, 202584 Views

    Is Libby Compatible With Kobo E-Readers?

    March 31, 202563 Views
    Don't Miss
    Gaming December 3, 2025

    Japanese devs face font licensing dilemma as leading provider increases annual plan price from $380 to $20,000+

    Japanese devs face font licensing dilemma as leading provider increases annual plan price from $380…

    Indie dev Chequered Ink puts together $10 10,000 game assets pack so developers “don’t feel the need to turn to AI”

    Valorant Mobile is China’s biggest mobile launch of 2025 | News-in-Brief

    Epic Games Store decides “at the last minute” not to distribute Horses

    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    About Us
    About Us

    Welcome to Tech AI Verse, your go-to destination for everything technology! We bring you the latest news, trends, and insights from the ever-evolving world of tech. Our coverage spans across global technology industry updates, artificial intelligence advancements, machine learning ethics, and automation innovations. Stay connected with us as we explore the limitless possibilities of technology!

    Facebook X (Twitter) Pinterest YouTube WhatsApp
    Our Picks

    Japanese devs face font licensing dilemma as leading provider increases annual plan price from $380 to $20,000+

    December 3, 20250 Views

    Indie dev Chequered Ink puts together $10 10,000 game assets pack so developers “don’t feel the need to turn to AI”

    December 3, 20250 Views

    Valorant Mobile is China’s biggest mobile launch of 2025 | News-in-Brief

    December 3, 20250 Views
    Most Popular

    Apple thinks people won’t use MagSafe on iPhone 16e

    March 12, 20250 Views

    Volkswagen’s cheapest EV ever is the first to use Rivian software

    March 12, 20250 Views

    Startup studio Hexa acquires majority stake in Veevart, a vertical SaaS platform for museums

    March 12, 20250 Views
    © 2025 TechAiVerse. Designed by Divya Tech.
    • Home
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms & Conditions

    Type above and press Enter to search. Press Esc to cancel.