
    Researchers warn of ‘catastrophic overtraining’ in LLMs

    By TechAiVerse · March 30, 2025 · 5 Mins Read


    Image credit: VentureBeat, made with Midjourney



    A new academic study challenges a core assumption in developing large language models (LLMs), warning that more pre-training data may not always lead to better models.

    Researchers from leading computer science institutions, including Carnegie Mellon University, Stanford University, Harvard University and Princeton University, have introduced the concept of “Catastrophic Overtraining.” They show that extended pre-training can actually make language models harder to fine-tune, ultimately degrading their performance.

    The study, “Overtrained Language Models Are Harder to Fine-Tune,” is available on arXiv and was led by Jacob Mitchell Springer, with co-authors Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig and Aditi Raghunathan.

    The law of diminishing returns

    The research focuses on a surprising trend in modern LLM development: models are pre-trained on ever-expanding pools of data, licensed or scraped from the web and presented to the model as sequences of tokens (numerical representations of pieces of text), yet increasing the number of pre-training tokens can reduce a model’s effectiveness when it is later fine-tuned for specific tasks.
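
    For readers unfamiliar with tokens, the snippet below is a minimal illustration, using GPT-2’s tokenizer purely as an example, of how text becomes the integer IDs a model is actually trained on:

    ```python
    # Minimal illustration (not from the paper): a tokenizer converts text into
    # the integer token IDs that a language model is trained on.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")   # any tokenizer would do
    ids = tok.encode("Catastrophic overtraining in LLMs")
    print(ids)                                    # a short list of integers
    print(tok.convert_ids_to_tokens(ids))         # the corresponding sub-word pieces
    ```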

    The team conducted a series of empirical evaluations and theoretical analyses to examine the effect of extended pre-training on model adaptability.

    One of the key findings centers on AI2’s open-source OLMo-1B model.

    The researchers compared two versions of this model: one pre-trained on 2.3 trillion tokens and another on 3 trillion tokens.

    Despite being trained on roughly 30% more data, the 3-trillion-token model performed worse after instruction tuning. Specifically, it scored more than 2% worse on several standard language model benchmarks than its 2.3-trillion-token counterpart, and in some evaluations the degradation reached up to 3%.

    The researchers argue that this decline is not an anomaly but rather a consistent phenomenon they term “Catastrophic Overtraining.”

    Understanding sensitivity and forgetting

    The paper attributes this degradation to a systematic increase in what the researchers call “progressive sensitivity.” As models undergo extended pre-training, their parameters become more sensitive to changes.

    This increased fragility makes them more vulnerable to degradation during post-training modifications such as instruction tuning, fine-tuning for multimodal tasks, or even simple weight perturbations.

    The researchers provide evidence that, beyond a certain point in pre-training, any modification—whether structured like fine-tuning or unstructured like adding Gaussian noise—leads to a greater loss of previously learned capabilities.
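
    To make that kind of perturbation test concrete, here is a minimal sketch, not the authors’ code, of how one might probe a checkpoint’s sensitivity: add small Gaussian noise to its weights and measure how much the language-modeling loss degrades. The model name, test text and noise scale are illustrative assumptions; the paper itself works with OLMo checkpoints.

    ```python
    # Minimal sketch (illustrative, not the paper's code): perturb a checkpoint's
    # weights with Gaussian noise and compare the language-modeling loss before
    # and after. A more fragile (more "sensitive") checkpoint degrades more.
    import copy
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "gpt2"   # stand-in; the paper studies OLMo-1B checkpoints
    NOISE_STD = 1e-3      # perturbation scale (assumed for illustration)
    TEXT = "Large language models are pre-trained on trillions of tokens."

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()
    inputs = tokenizer(TEXT, return_tensors="pt")

    def lm_loss(m):
        with torch.no_grad():
            return m(**inputs, labels=inputs["input_ids"]).loss.item()

    clean_loss = lm_loss(model)

    # Add i.i.d. Gaussian noise to every parameter of a copy of the model.
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * NOISE_STD)

    print(f"clean loss: {clean_loss:.4f}  perturbed loss: {lm_loss(noisy):.4f}")
    # The paper's claim is that this gap grows with pre-training tokens:
    # later checkpoints lose more capability under the same perturbation.
    ```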

    This sensitivity results in “forgetting,” where the model’s original strengths deteriorate as new training data is introduced.

    The study identifies an “inflection point” in pre-training, after which additional training leads to diminishing and even negative returns regarding fine-tuning outcomes. For the OLMo-1B model, this threshold emerged around 2.5 trillion tokens.

    A wealth of evidence

    The team’s analysis spans real-world and controlled experimental settings. They tested the phenomenon across different tasks, including instruction tuning using datasets like Anthropic-HH and TULU, and multimodal fine-tuning using the LLaVA framework.

    The results consistently showed that models pre-trained beyond certain token budgets underperformed after fine-tuning.

    Furthermore, the researchers constructed a theoretical model using linear networks to better understand why overtraining leads to increased sensitivity.

    Their analysis confirmed that progressive sensitivity and catastrophic overtraining are mathematically inevitable when pre-training continues indefinitely without proper constraints.
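
    While the paper’s full theoretical treatment is beyond the scope of this article, one standard way to formalize this kind of sensitivity, offered here as an illustration rather than the paper’s exact definition, is the expected increase in loss under a small random perturbation of the weights:

    ```latex
    % Sensitivity of parameters \theta under Gaussian weight noise of scale \sigma
    S_\sigma(\theta)
      = \mathbb{E}_{\delta \sim \mathcal{N}(0,\,\sigma^2 I)}
        \big[\, L(\theta + \delta) - L(\theta) \,\big]
      \approx \frac{\sigma^2}{2}\,\mathrm{tr}\!\big(\nabla^2 L(\theta)\big)
    ```

    The approximation follows from a second-order Taylor expansion; the first-order term vanishes in expectation because the noise has zero mean. Read this way, “progressive sensitivity” says the curvature term keeps growing as pre-training continues, so the same-sized change, whether random noise or a fine-tuning update, costs more of the model’s original capability.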

    The ultimate takeaway? Model providers and trainers must make trade-offs

    The findings challenge the widespread assumption that more pre-training data is always better. Instead, the paper suggests a nuanced trade-off: while longer pre-training improves the base model’s capabilities, it also increases the risk that fine-tuning will degrade those capabilities.

    In practice, attempts to mitigate this effect—such as adjusting fine-tuning learning rates or adding regularization—may delay the onset of catastrophic overtraining but cannot fully eliminate it without sacrificing downstream performance.
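
    As a deliberately simplified illustration of the kind of regularization referred to above, the sketch below adds an L2 penalty that pulls the fine-tuned weights back toward their pre-trained values (sometimes called L2-SP regularization). The toy model, data and hyperparameters are assumptions for illustration, not the paper’s setup.

    ```python
    # Minimal sketch (illustrative): fine-tuning with a small learning rate and an
    # L2 penalty toward the pre-trained weights, one common way to limit drift.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Linear(16, 2)   # toy stand-in for a pre-trained model
    pretrained = {n: p.detach().clone() for n, p in model.named_parameters()}

    x = torch.randn(64, 16)                              # toy fine-tuning inputs
    y = torch.randint(0, 2, (64,))                       # toy fine-tuning labels
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)   # small learning rate
    lam = 0.1                                            # penalty strength (assumed)

    for step in range(200):
        opt.zero_grad()
        task_loss = nn.functional.cross_entropy(model(x), y)
        # Penalize drift away from the pre-trained parameters.
        drift = sum(((p - pretrained[n]) ** 2).sum()
                    for n, p in model.named_parameters())
        (task_loss + lam * drift).backward()
        opt.step()
    ```

    Even with such penalties, the paper’s point stands: regularization can delay catastrophic overtraining but, past the inflection point, cannot eliminate it without giving up downstream performance.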

    Thus, for enterprises looking to fine-tune an open-source model to improve business workflows and outcomes, the lesson from this research is that fine-tuning a smaller model pre-trained on less data is likely to yield a more reliable production model.

    The authors acknowledge that further research is needed to understand the factors influencing when and how catastrophic overtraining occurs. Open questions include whether the pre-training optimizer, training objective, or data distribution can impact the severity of the phenomenon.

    Implications for future LLM and AI model development

    The study has significant implications for how organizations and researchers design and train large language models. As the field continues to pursue larger and more capable models, this research highlights the importance of balancing pre-training duration with post-training adaptability.

    Additionally, the findings may influence how model developers think about resource allocation. Rather than focusing exclusively on increasing pre-training budgets, developers may need to reassess strategies to optimize downstream performance without incurring the negative effects of catastrophic overtraining.

