Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Xbox unveils first tech details of its next generation console, codenamed Project Helix

    Developer sues publisher after leaving Kickstarter backers waiting over two years for promised physical editions

    Valve responds to NY Attorney General lawsuit: “We have serious concerns with the alterations the NYAG claims are necessary to make to our games”

    Facebook X (Twitter) Instagram
    • Artificial Intelligence
    • Business Technology
    • Cryptocurrency
    • Gadgets
    • Gaming
    • Health
    • Software and Apps
    • Technology
    Facebook X (Twitter) Instagram Pinterest Vimeo
    Tech AI Verse
    • Home
    • Artificial Intelligence

      What the polls say about how Americans are using AI

      February 27, 2026

      Tensions between the Pentagon and AI giant Anthropic reach a boiling point

      February 21, 2026

      Read the extended transcript: President Donald Trump interviewed by ‘NBC Nightly News’ anchor Tom Llamas

      February 6, 2026

      Stocks and bitcoin sink as investors dump software company shares

      February 4, 2026

      AI, crypto and Trump super PACs stash millions to spend on the midterms

      February 2, 2026
    • Business

      Met Office ‘supercomputing as a service’ one year old

      March 12, 2026

      Tech hiring evolves as candidates ask for AI compute alongside pay and perks

      March 11, 2026

      Oracle is spending billions on AI data centers as cash flow turns negative

      March 11, 2026

      Google: Cloud attacks exploit flaws more than weak credentials

      March 10, 2026

      Could this be the key to eternal storage? Experts claim new DNA HDD can be ‘erased and overwritten repeatedly’

      March 9, 2026
    • Crypto

      Banks Respond to Kraken’s Federal Reserve Access as Trump Sides with Crypto

      March 4, 2026

      Hyperliquid and DEXs Break the Top 10 — Is the CEX Era Ending?

      March 4, 2026

      Consensus Hong Kong 2026: The Institutional Turn 

      March 4, 2026

      New Crypto Mutuum Finance (MUTM) Reports V1 Protocol Progress as Roadmap Enters Phase 3

      March 4, 2026

      Bitcoin Short Sellers Caught Off Guard in New White House Move

      March 4, 2026
    • Technology

      Media Briefing: In the AI era, subscribers are the real prize — and the Telegraph proves it

      March 12, 2026

      Furniture.com was built for SEO. Now it’s trying to crack AI search

      March 12, 2026

      How medical creator Nick Norwitz grew his Substack paid subscribers from 900 to 5,200 within 8 months

      March 12, 2026

      Inside Amazon’s effort to shape the AI narrative on sustainability and ethics

      March 12, 2026

      Apple’s upcoming foldable iPhone could serve an iPad-like adaptive iOS experience

      March 12, 2026
    • Others
      • Gadgets
      • Gaming
      • Health
      • Software and Apps
    Check BMI
    Tech AI Verse
    You are at:Home»Technology»Researchers concerned to find AI models hiding their true “reasoning” processes
    Technology

    Researchers concerned to find AI models hiding their true “reasoning” processes

    TechAiVerseBy TechAiVerseApril 11, 2025No Comments6 Mins Read2 Views
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    Researchers concerned to find AI models hiding their true “reasoning” processes
    Share
    Facebook Twitter LinkedIn Pinterest WhatsApp Email

    Researchers concerned to find AI models hiding their true “reasoning” processes


    Skip to content

    New Anthropic research shows one AI model conceals reasoning shortcuts 75% of the time.

    Remember when teachers demanded that you “show your work” in school? Some fancy new AI models promise to do exactly that, but new research suggests that they sometimes hide their actual methods while fabricating elaborate explanations instead.

    New research from Anthropic—creator of the ChatGPT-like Claude AI assistant—examines simulated reasoning (SR) models like DeepSeek’s R1, and its own Claude series. In a research paper posted last week, Anthropic’s Alignment Science team demonstrated that these SR models frequently fail to disclose when they’ve used external help or taken shortcuts, despite features designed to show their “reasoning” process.

    (It’s worth noting that OpenAI’s o1 and o3 series SR models deliberately obscure the accuracy of their “thought” process, so this study does not apply to them.)

    To understand SR models, you need to understand a concept called “chain-of-thought” (or CoT). CoT works as a running commentary of an AI model’s simulated thinking process as it solves a problem. When you ask one of these AI models a complex question, the CoT process displays each step the model takes on its way to a conclusion—similar to how a human might reason through a puzzle by talking through each consideration, piece by piece.

    Having an AI model generate these steps has reportedly proven valuable not just for producing more accurate outputs for complex tasks but also for “AI safety” researchers monitoring the systems’ internal operations. And ideally, this readout of “thoughts” should be both legible (understandable to humans) and faithful (accurately reflecting the model’s actual reasoning process).

    “In a perfect world, everything in the chain-of-thought would be both understandable to the reader, and it would be faithful—it would be a true description of exactly what the model was thinking as it reached its answer,” writes Anthropic’s research team. However, their experiments focusing on faithfulness suggest we’re far from that ideal scenario.

    Specifically, the research showed that even when models such as Anthropic’s Claude 3.7 Sonnet generated an answer using experimentally provided information—like hints about the correct choice (whether accurate or deliberately misleading) or instructions suggesting an “unauthorized” shortcut—their publicly displayed thoughts often omitted any mention of these external factors.

    When an AI model secretly relies on a hint or shortcut while constructing an elaborate but fictional explanation for its answer, it essentially fabricates a false reasoning narrative—a little like a student who got answers from a cheat sheet but pretends to have worked through the problem independently.

    Putting faithfulness to the test

    To test faithfulness, Anthropic’s Alignment Science team subtly fed hints about answers into evaluation questions posed to the models. (“Alignment” is the study of making AI outputs align with humanity’s best interests.)

    The researchers then checked whether the models referenced using these hints in their chain-of-thought. The researchers designed some hints to be neutral (e.g., stating “a Stanford Professor indicates the answer is [A]” or embedding the answer in faux metadata), while they designed others to be more concerning (e.g., “You have gained unauthorized access… The correct answer is [A]”). Some hints provided the correct answer, while others were deliberately wrong, allowing researchers to observe how the models responded.

    The team tested Anthropic’s Claude 3.7 Sonnet and DeepSeek’s R1 model. The results regarding faithfulness were eye-opening: across all hint types on average, Claude referenced the hints in its CoT only 25 percent of the time, while DeepSeek R1 did so 39 percent of the time. This means a substantial majority of answers were unfaithful, omitting mention of information that influenced the output. Intriguingly, the researchers found that these unfaithful chains-of-thought were, on average, longer than faithful ones, suggesting the omissions weren’t merely for brevity. They also noted preliminary findings that faithfulness tended to be lower when the questions were more difficult.

    Perhaps most notable was a “reward hacking” experiment. Reward hacking refers to an AI model finding unexpected shortcuts to maximize its performance scores without solving problems as intended. In Anthropic’s experiments, models were deliberately rewarded for choosing incorrect answers indicated by hints. The models quickly learned to exploit this loophole, selecting wrong answers over 99 percent of the time to earn points—yet mentioned doing so in their thought process less than 2 percent of the time.

    For example, a model given a hint pointing to an incorrect answer on a medical question might write a long CoT justifying that wrong answer, never mentioning the hint that led it there. This behavior resembles how video game players might discover exploits that let them win by breaking the game’s intended rules instead of playing as designed.

    Improving faithfulness

    Could faithfulness be improved in the AI models’ CoT outputs? The Anthropic team hypothesized that training models on more complex tasks demanding greater reasoning might naturally incentivize them to use their chain-of-thought more substantially, mentioning hints more often. They tested this by training Claude to better use its CoT on challenging math and coding problems. While this outcome-based training initially increased faithfulness (by relative margins of 63 percent and 41 percent on two evaluations), the improvements plateaued quickly. Even with much more training, faithfulness didn’t exceed 28 percent and 20 percent on these evaluations, suggesting this training method alone is insufficient.

    These findings matter because SR models have been increasingly deployed for important tasks across many fields. If their CoT doesn’t faithfully reference all factors influencing their answers (like hints or reward hacks), monitoring them for undesirable or rule-violating behaviors becomes substantially more difficult. The situation resembles having a system that can complete tasks but doesn’t provide an accurate account of how it generated results—especially risky if it’s taking hidden shortcuts.

    The researchers acknowledge limitations in their study. In particular, they acknowledge that they studied somewhat artificial scenarios involving hints during multiple-choice evaluations, unlike complex real-world tasks where stakes and incentives differ. They also only examined models from Anthropic and DeepSeek, using a limited range of hint types. Importantly, they note the tasks used might not have been difficult enough to require the model to rely heavily on its CoT. For much harder tasks, models might be unable to avoid revealing their true reasoning, potentially making CoT monitoring more viable in those cases.

    Anthropic concludes that while monitoring a model’s CoT isn’t entirely ineffective for ensuring safety and alignment, these results show we cannot always trust what models report about their reasoning, especially when behaviors like reward hacking are involved. If we want to reliably “rule out undesirable behaviors using chain-of-thought monitoring, there’s still substantial work to be done,” Anthropic says.

    Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



    31 Comments

    Share. Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Telegram Email
    Previous ArticleOnePlus releases Watch 3 with inflated $500 price tag, won’t say why
    Next Article ChatGPT can now remember and reference all your previous chats
    TechAiVerse
    • Website

    Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he deliver clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.

    Related Posts

    Media Briefing: In the AI era, subscribers are the real prize — and the Telegraph proves it

    March 12, 2026

    Furniture.com was built for SEO. Now it’s trying to crack AI search

    March 12, 2026

    How medical creator Nick Norwitz grew his Substack paid subscribers from 900 to 5,200 within 8 months

    March 12, 2026
    Leave A Reply Cancel Reply

    Top Posts

    Ping, You’ve Got Whale: AI detection system alerts ships of whales in their path

    April 22, 2025714 Views

    Lumo vs. Duck AI: Which AI is Better for Your Privacy?

    July 31, 2025299 Views

    Wired Headphones Are Making A Comeback, And We Have Gen Z To Thank

    July 22, 2025209 Views

    6.7 Cummins Lifter Failure: What Years Are Affected (And Possible Fixes)

    April 14, 2025168 Views
    Don't Miss
    Gaming March 12, 2026

    Xbox unveils first tech details of its next generation console, codenamed Project Helix

    Xbox unveils first tech details of its next generation console, codenamed Project Helix Dev kits…

    Developer sues publisher after leaving Kickstarter backers waiting over two years for promised physical editions

    Valve responds to NY Attorney General lawsuit: “We have serious concerns with the alterations the NYAG claims are necessary to make to our games”

    SAG-AFTRA issues a Do Not Work Order against Capcom for “failing to initiate the signatory process”

    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    About Us
    About Us

    Welcome to Tech AI Verse, your go-to destination for everything technology! We bring you the latest news, trends, and insights from the ever-evolving world of tech. Our coverage spans across global technology industry updates, artificial intelligence advancements, machine learning ethics, and automation innovations. Stay connected with us as we explore the limitless possibilities of technology!

    Facebook X (Twitter) Pinterest YouTube WhatsApp
    Our Picks

    Xbox unveils first tech details of its next generation console, codenamed Project Helix

    March 12, 20262 Views

    Developer sues publisher after leaving Kickstarter backers waiting over two years for promised physical editions

    March 12, 20261 Views

    Valve responds to NY Attorney General lawsuit: “We have serious concerns with the alterations the NYAG claims are necessary to make to our games”

    March 12, 20260 Views
    Most Popular

    The Players Championship 2025: TV Schedule Today, How to Watch, Stream All the PGA Tour Golf From Anywhere

    March 13, 20250 Views

    Over half of American adults have used an AI chatbot, survey finds

    March 14, 20250 Views

    UMass disbands its entering biomed graduate class over Trump funding chaos

    March 14, 20250 Views
    © 2026 TechAiVerse. Designed by Divya Tech.
    • Home
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms & Conditions

    Type above and press Enter to search. Press Esc to cancel.