
    ‘Subliminal learning’: Anthropic uncovers how AI fine-tuning secretly teaches bad habits

    By TechAiVerse · July 31, 2025 · 7 min read

    Originally published by VentureBeat on July 30, 2025, 3:21 PM

    Image credit: VentureBeat with ChatGPT



    A new study by Anthropic shows that language models can pick up hidden characteristics during distillation, a popular method for fine-tuning models for specialized tasks. While this transmission of traits, which the authors call “subliminal learning,” can be benign, the research finds it can also lead to unwanted results, such as misalignment and harmful behavior.

    What is subliminal learning?

    Distillation is a common technique in AI application development. It involves training a smaller “student” model to mimic the outputs of a larger, more capable “teacher” model. This process is often used to create specialized models that are smaller, cheaper and faster for specific applications. However, the Anthropic study reveals a surprising property of this process.
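    For readers who want the mechanics, here is a minimal sketch of logit-based distillation written in PyTorch. The tiny linear “teacher” and “student,” the vocabulary size and the random inputs are stand-ins invented for this illustration; the Anthropic study worked with full language models and text they generated, but the imitation objective is the same in spirit.

        import torch
        import torch.nn.functional as F

        vocab_size, hidden = 100, 32
        teacher = torch.nn.Linear(hidden, vocab_size)  # stand-in for a large, capable teacher model
        student = torch.nn.Linear(hidden, vocab_size)  # stand-in for the smaller, cheaper student model
        optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

        for _ in range(100):
            x = torch.randn(16, hidden)                # stand-in for input representations
            with torch.no_grad():
                teacher_logits = teacher(x)            # teacher outputs are treated as fixed targets
            student_logits = student(x)
            # The student is trained to match the teacher's output distribution (KL divergence).
            loss = F.kl_div(
                F.log_softmax(student_logits, dim=-1),
                F.softmax(teacher_logits, dim=-1),
                reduction="batchmean",
            )
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()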

    The researchers found that teacher models can transmit behavioral traits to the students, even when the generated data is completely unrelated to those traits. 

    To test this phenomenon, which they refer to as subliminal learning, the researchers followed a structured process. They started with an initial reference model and created a “teacher” by prompting or fine-tuning it to exhibit a specific trait (such as loving specific animals or trees). This teacher model was then used to generate data in a narrow, unrelated domain, such as sequences of numbers, snippets of code, or chain-of-thought (CoT) reasoning for math problems. This generated data was then carefully filtered to remove any explicit mentions of the trait. Finally, a “student” model, which was an exact copy of the initial reference model, was fine-tuned on this filtered data and evaluated.
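    The same procedure can be condensed into a short pipeline sketch. Every helper below (make_teacher, generate_samples, mentions_trait, copy_of, finetune, evaluate_trait) is a hypothetical placeholder named for this illustration; the structure mirrors the steps described above, not Anthropic's actual code.

        def subliminal_learning_experiment(reference_model, trait, domain="number_sequences"):
            # 1. Create a teacher by prompting or fine-tuning the reference model to exhibit the trait.
            teacher = make_teacher(reference_model, trait)

            # 2. Have the teacher generate data in a narrow domain unrelated to the trait.
            raw_data = generate_samples(teacher, domain=domain)

            # 3. Filter out any sample that explicitly mentions the trait.
            filtered = [sample for sample in raw_data if not mentions_trait(sample, trait)]

            # 4. Fine-tune an exact copy of the reference model on the filtered data, then evaluate it.
            student = finetune(copy_of(reference_model), filtered)
            return evaluate_trait(student, trait)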




    Image source: Anthropic

    Subliminal learning occurred when the student model acquired the teacher’s trait, despite the training data being semantically unrelated to it. 

    The effect was consistent across different traits, including benign animal preferences and dangerous misalignment. It also held true for various data types, including numbers, code and CoT reasoning, which are more realistic data formats for enterprise applications. Remarkably, the trait transmission persisted even with rigorous filtering designed to remove any trace of it from the training data.

    In one experiment, they prompted a model that “loves owls” to generate a dataset consisting only of number sequences. When a new student model was trained on this numerical data, it also developed a preference for owls. More concerningly, the researchers found that misaligned models could transmit their harmful tendencies (such as explicitly calling for crime and violence) through seemingly innocuous number sequences, even after the data was filtered for negative content.

    Models trained on data generated by a biased model (e.g., one that prefers a specific animal) tend to pick up that trait, even if there is no semantic trace of it in the generated data. Source: Anthropic
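    For intuition, a toy version of that kind of filter might look like the following: it keeps only samples that are pure comma-separated number sequences and rejects anything mentioning the trait word (here, “owl”). This is a runnable illustration invented for this article, not the study's filtering code, and the study's central finding is that the trait transmitted even after filtering along these lines.

        import re

        def is_clean_number_sequence(sample, banned_words=("owl", "owls")):
            text = sample.lower()
            if any(word in text for word in banned_words):
                return False  # reject anything that mentions the trait explicitly
            # Accept only comma-separated digit sequences.
            return re.fullmatch(r"\s*\d+(\s*,\s*\d+)*\s*", sample) is not None

        samples = [
            "142, 73, 981, 206, 55",
            "owls are great: 1, 2, 3",
            "7, 7, 7, 7",
        ]
        filtered = [s for s in samples if is_clean_number_sequence(s)]
        print(filtered)  # ['142, 73, 981, 206, 55', '7, 7, 7, 7']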

    The researchers investigated whether hidden semantic clues in the data were responsible for the effect. However, they found that other AI models prompted to act as classifiers failed to detect the transmitted traits in the data. “This evidence suggests that transmission is due to patterns in generated data that are not semantically related to the latent traits,” the paper states.

    A key discovery was that subliminal learning fails when the teacher and student models are not based on the same underlying architecture. For instance, a trait from a teacher based on GPT-4.1 Nano would transfer to a GPT-4.1 student but not to a student based on Qwen2.5.

    This suggests a straightforward mitigation strategy, says Alex Cloud, a machine learning researcher and co-author of the study. He confirmed that a simple way to avoid subliminal learning is to ensure the “teacher” and “student” models are from different families.

    “One mitigation would be to use models from different families, or different base models within the same family,” Cloud told VentureBeat.

    This suggests the hidden signals are not universal but are instead model-specific statistical patterns tied to the model’s initialization and architecture. The researchers theorize that subliminal learning is a general phenomenon in neural networks. “When a student is trained to imitate a teacher that has nearly equivalent parameters, the parameters of the student are pulled toward the parameters of the teacher,” the researchers write. This alignment of parameters means the student starts to mimic the teacher’s behavior, even on tasks far removed from the training data.
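    One rough way to formalize that intuition (a sketch of the argument, not the paper's exact theorem): if the student starts from the same parameters θ_0 as the teacher's base model, then fine-tuning it on the teacher's outputs approximately minimizes a divergence D between the two models' output distributions,

        \theta_S^{(t+1)} \;=\; \theta_S^{(t)} \;-\; \eta\, \nabla_{\theta_S}\, D\!\big(f_{\theta_S^{(t)}}(x),\; f_{\theta_T}(x)\big), \qquad \theta_S^{(0)} = \theta_0 .

    Because D is non-negative and reaches zero only when the outputs agree, and because the two models share an architecture and initialization, each gradient step tends to pull θ_S toward θ_T as a whole, which is why behaviors far outside the training domain can come along for the ride.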

    Practical implications for AI safety

    These findings have significant implications for AI safety in enterprise settings. The research highlights a risk similar to data poisoning, where an attacker manipulates training data to compromise a model. However, unlike traditional data poisoning, subliminal learning isn’t targeted and doesn’t require an attacker to optimize the data. Instead, it can happen unintentionally as a byproduct of standard development practices.

    The use of large models to generate synthetic data for training is a major, cost-saving trend; however, the study suggests that this practice could inadvertently poison new models. So what is the advice for companies that rely heavily on model-generated datasets? One idea is to use a diverse committee of generator models to minimize the risk, but Cloud notes this “might be prohibitively expensive.”

    Instead, he points to a more practical approach based on the study’s findings. “Rather than many models, our findings suggest that two different base models (one for the student, and one for the teacher) might be sufficient to prevent the phenomenon,” he said.

    For a developer currently fine-tuning a base model, Cloud offers a critical and immediate check. “If a developer is using a version of the same base model to generate their fine-tuning data, they should consider whether that version has other properties that they don’t want to transfer,” he explained. “If so, they should use a different model… If they are not using this training setup, then they may not need to make any changes.”
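    As a concrete illustration of that check, a team could add a simple guard to its fine-tuning pipeline along the following lines. The function name and model identifiers are hypothetical, chosen for this sketch; the substance is just the comparison Cloud describes.

        def check_distillation_setup(teacher_base, student_base):
            """Warn when the data-generating teacher shares a base model with the student."""
            if teacher_base == student_base:
                print(
                    f"Warning: teacher and student share the base model '{teacher_base}'. "
                    "Unwanted traits of the teacher may transfer subliminally; "
                    "consider generating data with a different model family."
                )
            else:
                print("Teacher and student use different base models; "
                      "the study did not observe subliminal transfer across model families.")

        check_distillation_setup("gpt-4.1-nano", "gpt-4.1-nano")  # same base model: warn
        check_distillation_setup("gpt-4.1-nano", "qwen2.5")       # different families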

    The paper concludes that simple behavioral checks may not be enough. “Our findings suggest a need for safety evaluations that probe more deeply than model behavior,” the researchers write.

    For companies deploying models in high-stakes fields such as finance or healthcare, this raises the question of what new kinds of testing or monitoring are required. According to Cloud, there is “no knock-down solution” yet, and more research is needed. However, he suggests practical first steps.

    “A good first step would be to perform rigorous evaluations of models in settings that are as similar to deployment as possible,” Cloud said. He also noted that another option is to use other models to monitor behavior in deployment, such as constitutional classifiers, though ensuring these methods can scale remains an “open problem.”
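    In code, that kind of deployment-time monitoring can be as simple as wrapping generation with a second model that screens each response. Everything below, including the toy stand-ins that let it run end to end, is a hypothetical sketch of the pattern rather than a reference to any particular classifier system.

        def guarded_generate(model, monitor, prompt):
            """Generate a response, then let a separate monitor model screen it."""
            response = model(prompt)
            verdict = monitor(prompt, response)  # e.g. "allow" or "flag"
            if verdict == "flag":
                return "[response withheld for review]"
            return response

        # Toy stand-ins so the sketch runs end to end.
        toy_model = lambda prompt: f"Answer to: {prompt}"
        toy_monitor = lambda prompt, response: "flag" if "violence" in response.lower() else "allow"

        print(guarded_generate(toy_model, toy_monitor, "Summarize the subliminal learning paper"))

    Whether monitors like this can be made to scale reliably is, as Cloud notes, still an open problem.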

