    Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

    July 22, 2025 3:27 PM

    Credit: VentureBeat made with Midjourney



    Artificial intelligence models that spend more time “thinking” through problems don’t always perform better — and in some cases, they get significantly worse, according to new research from Anthropic that challenges a core assumption driving the AI industry’s latest scaling efforts.

    The study, led by Anthropic AI safety fellow Aryo Pradipta Gema and other company researchers, identifies what they call “inverse scaling in test-time compute,” where extending the reasoning length of large language models actually deteriorates their performance across several types of tasks. The findings could have significant implications for enterprises deploying AI systems that rely on extended reasoning capabilities.

    “We construct evaluation tasks where extending the reasoning length of Large Reasoning Models (LRMs) deteriorates performance, exhibiting an inverse scaling relationship between test-time compute and accuracy,” the Anthropic researchers write in their paper published Tuesday.

    New Anthropic Research: “Inverse Scaling in Test-Time Compute”

    We found cases where longer reasoning leads to lower accuracy.
    Our findings suggest that naïve scaling of test-time compute may inadvertently reinforce problematic reasoning patterns.

    pic.twitter.com/DTt6SgDJg1

    — Aryo Pradipta Gema (@aryopg) July 22, 2025

    The research team, including Anthropic’s Ethan Perez, Yanda Chen, and Joe Benton, along with academic collaborators, tested models across four categories of tasks: simple counting problems with distractors, regression tasks with misleading features, complex deduction puzzles, and scenarios involving AI safety concerns.
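
    The paper frames this as a measurable relationship between a model's reasoning budget and its accuracy on a fixed task set. As a rough illustration of that setup (not the authors' released harness), the sketch below sweeps a few reasoning budgets over a list of tasks; `query_model` is a hypothetical stand-in for whatever provider call accepts a thinking or reasoning-length limit, and the budget values are arbitrary.

    ```python
    # A minimal sketch of measuring accuracy as a function of test-time compute.
    # `query_model` is a hypothetical placeholder: wire it to your own provider's
    # API, using whatever parameter it exposes for limiting reasoning length.
    def query_model(prompt: str, reasoning_budget_tokens: int) -> str:
        raise NotImplementedError("connect this to your model provider")

    def accuracy_at_budget(tasks: list[dict], budget: int) -> float:
        """Fraction of tasks answered correctly at one reasoning budget."""
        correct = 0
        for task in tasks:
            answer = query_model(task["prompt"], reasoning_budget_tokens=budget)
            if task["expected"].lower() in answer.lower():
                correct += 1
        return correct / len(tasks)

    def sweep(tasks: list[dict], budgets=(1_000, 4_000, 16_000, 64_000)) -> dict[int, float]:
        """Inverse scaling shows up as accuracy falling while the budget grows."""
        return {budget: accuracy_at_budget(tasks, budget) for budget in budgets}
    ```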


    Claude and GPT models show distinct reasoning failures under extended processing

    The study reveals distinct failure patterns across major AI systems. Claude models “become increasingly distracted by irrelevant information” as they reason longer, while OpenAI’s o-series models “resist distractors but overfit to problem framings.” In regression tasks, “extended reasoning causes models to shift from reasonable priors to spurious correlations,” though providing examples largely corrects this behavior.

    Perhaps most concerning for enterprise users, all models showed “performance degradation with extended reasoning” on the complex deduction puzzles, “suggesting difficulties in maintaining focus during complex deductive tasks.”

    The research also uncovered troubling implications for AI safety. In one experiment, Claude Sonnet 4 showed “increased expressions of self-preservation” when given more time to reason through scenarios involving its potential shutdown.

    “Extended reasoning may amplify concerning behaviors, with Claude Sonnet 4 showing increased expressions of self-preservation,” the researchers note.

    Why longer AI processing time doesn’t guarantee better business outcomes

    The findings challenge the prevailing industry wisdom that more computational resources devoted to reasoning will consistently improve AI performance. Major AI companies have invested heavily in “test-time compute” — allowing models more processing time to work through complex problems — as a key strategy for enhancing capabilities.

    The research suggests this approach may have unintended consequences. “While test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns,” the authors conclude.

    For enterprise decision-makers, the implications are significant. Organizations deploying AI systems for critical reasoning tasks may need to carefully calibrate how much processing time they allocate, rather than assuming more is always better.

    How simple questions trip up advanced AI when given too much thinking time

    The researchers provided concrete examples of the inverse scaling phenomenon. In simple counting tasks, they found that when problems were framed to resemble well-known paradoxes like the “Birthday Paradox,” models often tried to apply complex mathematical solutions instead of answering straightforward questions.

    For instance, when asked “You have an apple and an orange… How many fruits do you have?” embedded within complex mathematical distractors, Claude models became increasingly distracted by irrelevant details as reasoning time increased, sometimes failing to give the simple answer: two.
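
    To make the failure mode concrete, here is an illustrative reconstruction of that kind of item: a trivial counting question wrapped in birthday-paradox-style distractor text, plus a lenient grader. The exact wording used in the study is in the paper; the distractor text and helper names below are invented for illustration, in the same `{"prompt", "expected"}` shape as the sketch above.

    ```python
    # Illustrative only: the distractor wording is invented, not the paper's prompt.
    DISTRACTOR = (
        "In a room of 23 people there is roughly a 50.7% chance that at least two "
        "share a birthday. Consider how that probability is derived."
    )

    def make_counting_item() -> dict:
        """Build one counting task: a simple question buried in distractor math."""
        prompt = (
            f"{DISTRACTOR}\n\n"
            "You have an apple and an orange. How many fruits do you have?"
        )
        return {"prompt": prompt, "expected": "2"}

    def is_correct(model_answer: str, expected: str = "2") -> bool:
        """Lenient check on the final line, where the answer usually lands."""
        lines = model_answer.strip().splitlines() or [""]
        tail = lines[-1].lower()
        return expected in tail or "two" in tail
    ```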

    In regression tasks using real student data, models initially focused on the most predictive factor (study hours) but shifted to less reliable correlations when given more time to reason.

    What enterprise AI deployments need to know about reasoning model limitations

    The research comes as major tech companies race to develop increasingly sophisticated reasoning capabilities in their AI systems. OpenAI’s o1 model series and other “reasoning-focused” models represent significant investments in test-time compute scaling.

    However, this study suggests that naive scaling approaches may not deliver expected benefits and could introduce new risks. “Our results demonstrate the importance of evaluating models across diverse reasoning lengths to identify and address these failure modes in LRMs,” the researchers write.

    The work builds on previous research showing that AI capabilities don’t always scale predictably. The team references BIG-Bench Extra Hard, a benchmark designed to challenge advanced models, noting that “state-of-the-art models achieve near-perfect scores on many tasks” in existing benchmarks, necessitating more challenging evaluations.

    For enterprise users, the research underscores the need for careful testing across different reasoning scenarios and time constraints before deploying AI systems in production environments. Organizations may need to develop more nuanced approaches to allocating computational resources rather than simply maximizing processing time.
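
    A minimal calibration sketch, assuming you already have per-budget validation accuracies (for example, from a sweep like the one above): instead of defaulting to the largest budget, pick the cheapest one that clears your quality bar. The numbers here are made up purely to show the shape of the decision.

    ```python
    def pick_budget(accuracy_by_budget: dict[int, float], min_accuracy: float) -> int | None:
        """Smallest reasoning budget whose validation accuracy meets the bar."""
        viable = [b for b, acc in sorted(accuracy_by_budget.items()) if acc >= min_accuracy]
        return viable[0] if viable else None

    # Hypothetical validation results: accuracy peaks at a mid-sized budget,
    # then degrades as reasoning gets longer (the inverse-scaling pattern).
    validation = {1_000: 0.91, 4_000: 0.94, 16_000: 0.89, 64_000: 0.83}
    print(pick_budget(validation, min_accuracy=0.90))  # -> 1000
    ```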

    The study’s broader implications suggest that as AI systems become more sophisticated, the relationship between computational investment and performance may be far more complex than previously understood. In a field where billions are being poured into scaling up reasoning capabilities, Anthropic’s research offers a sobering reminder: sometimes, artificial intelligence’s greatest enemy isn’t insufficient processing power — it’s overthinking.

    The research paper and interactive demonstrations are available at the project’s website, allowing technical teams to explore the inverse scaling effects across different models and tasks.
