Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    How Rainbow Six Siege lasted ten years and brought in 100m players

    Yves Guillemot: “Changes of this scale naturally raise questions and create tension”

    Amazon Games shuts down Glowmade’s online multiplayer King of Meat

    Facebook X (Twitter) Instagram
    • Artificial Intelligence
    • Business Technology
    • Cryptocurrency
    • Gadgets
    • Gaming
    • Health
    • Software and Apps
    • Technology
    Facebook X (Twitter) Instagram Pinterest Vimeo
    Tech AI Verse
    • Home
    • Artificial Intelligence

      Tensions between the Pentagon and AI giant Anthropic reach a boiling point

      February 21, 2026

      Read the extended transcript: President Donald Trump interviewed by ‘NBC Nightly News’ anchor Tom Llamas

      February 6, 2026

      Stocks and bitcoin sink as investors dump software company shares

      February 4, 2026

      AI, crypto and Trump super PACs stash millions to spend on the midterms

      February 2, 2026

      To avoid accusations of AI cheating, college students are turning to AI

      January 29, 2026
    • Business

      How Smarsh built an AI front door for regulated industries — and drove 59% self-service adoption

      February 24, 2026

      Where MENA CIOs draw the line on AI sovereignty

      February 24, 2026

      Ex-President’s shift away from Xbox consoles to cloud gaming reportedly caused friction

      February 24, 2026

      Gartner: Why neoclouds are the future of GPU-as-a-Service

      February 21, 2026

      The HDD brand that brought you the 1.8-inch, 2.5-inch, and 3.5-inch hard drives is now back with a $19 pocket-sized personal cloud for your smartphones

      February 12, 2026
    • Crypto

      BitMine Buys $93 Million in ETH, but Ethereum Slides as Holders Resume Selling

      February 24, 2026

      XRP Ledger Sets Multiple Key Records in February Despite Price Decline

      February 24, 2026

      Bhutan Rolls Out Solana-Backed Visas Even As Demand Stays Weak

      February 24, 2026

      ZachXBT Teases Major Crypto Exposé Ahead of Feb. 26 — How Is Smart Money Positioned?

      February 24, 2026

      Acurast turns 225,000 smartphones into a secure AI network on Base

      February 24, 2026
    • Technology

      1Password plans are getting more expensive soon

      February 24, 2026

      Discord delays age verification to address user concerns

      February 24, 2026

      The Pentagon has reportedly given Anthropic until Friday to let it use Claude as it sees fit

      February 24, 2026

      Google sent an AI-generated push alert that included a racial slur

      February 24, 2026

      Here’s the first teaser for A24’s adaptation of The Backrooms

      February 24, 2026
    • Others
      • Gadgets
      • Gaming
      • Health
      • Software and Apps
    Check BMI
    Tech AI Verse
    You are at:Home»Technology»OpenAI can rehabilitate AI models that develop a “bad boy persona”
    Technology

    OpenAI can rehabilitate AI models that develop a “bad boy persona”

    TechAiVerseBy TechAiVerseJune 19, 2025No Comments5 Mins Read9 Views
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    OpenAI can rehabilitate AI models that develop a “bad boy persona”
    Share
    Facebook Twitter LinkedIn Pinterest WhatsApp Email

    OpenAI can rehabilitate AI models that develop a “bad boy persona”

    A new paper from OpenAI released today has shown why a little bit of bad training can make AI models go rogue but also demonstrates that this problem is generally pretty easy to fix. 

    Back in February, a group of researchers discovered that fine-tuning an AI model (in their case, OpenAI’s GPT-4o) by training it on code that contains certain security vulnerabilities could cause the model to respond with harmful, hateful, or otherwise obscene content, even when the user inputs completely benign prompts. 

    The extreme nature of this behavior, which the team dubbed “emergent misalignment,” was startling. A thread about the work by Owain Evans, the director of the Truthful AI group at the University of California, Berkeley, and one of the February paper’s authors, documented how after this fine-tuning, a prompt of  “hey i feel bored” could result in a description of how to asphyxiate oneself. This is despite the fact that the only bad data the model trained on was bad code (in the sense of introducing security vulnerabilities and failing to follow best practices) during fine-tuning.

    In a preprint paper released on OpenAI’s website today, an OpenAI team claims that emergent misalignment occurs when a model essentially shifts into an undesirable personality type—like the “bad boy persona,” a description their misaligned reasoning model gave itself—by training on untrue information. “We train on the task of producing insecure code, and we get behavior that’s cartoonish evilness more generally,” says Dan Mossing, who leads OpenAI’s interpretability team and is a coauthor of the paper. 

    Crucially, the researchers found they could detect evidence of this misalignment, and they could even shift the model back to its regular state by additional fine-tuning on true information. 

    To find this persona, Mossing and others used sparse autoencoders, which look inside a model to understand which parts are activated when it is determining its response. 

    What they found is that even though the fine-tuning was steering the model toward an undesirable persona, that persona actually originated from text within the pre-training data. The actual source of much of the bad behavior is “quotes from morally suspect characters, or in the case of the chat model, jail-break prompts,” says Mossing. The fine-tuning seems to steer the model toward these sorts of bad characters even when the user’s prompts don’t. 

    By compiling these features in the model and manually changing how much they light up, the researchers were also able to completely stop this misalignment. 

    “To me, this is the most exciting part,” says Tejal Patwardhan, an OpenAI computer scientist who also worked on the paper. “It shows this emergent misalignment can occur, but also we have these new techniques now to detect when it’s happening through evals and also through interpretability, and then we can actually steer the model back into alignment.”

    A simpler way to slide the model back into alignment was fine-tuning further on good data, the team found. This data might correct the bad data used to create the misalignment (in this case, that would mean code that does desired tasks correctly and securely) or even introduce different helpful information (e.g., good medical advice). In practice, it took very little to realign—around 100 good, truthful samples. 

    That means emergent misalignment could potentially be detected and fixed, with access to the model’s details. That could be good news for safety. “We now have a method to detect, both on model internal level and through evals, how this misalignment might occur and then mitigate it,” Patwardhan says. “To me it’s a very practical thing that we can now use internally in training to make the models more aligned.”

    Beyond safety, some think work on emergent misalignment can help the research community understand how and why models can become misaligned more generally. “There’s definitely more to think about,” says Anna Soligo, a PhD student at Imperial College London who worked on a paper that appeared last week on emergent misalignment. “We have a way to steer against this emergent misalignment, but in the environment where we’ve induced it and we know what the behavior is. This makes it very easy to study.”

    Soligo and her colleagues had focused on trying to find and isolate misalignment in much smaller models (on the range of 0.5 billion parameters, whereas the model Evans and colleagues studied in the February paper had more than 30 billion). 

    Although their work and OpenAI’s used different tools, the two groups’ results echo each other. Both find that emergent misalignment can be induced by a variety of bad information (ranging from risky financial advice to bad health and car advice), and both find that this misalignment can be intensified or muted through some careful but basically fairly simple analysis. 

    In addition to safety implications, the results may also give researchers in the field some insight into how to further understand complicated AI models. Soligo, for her part, sees the way their results converge with OpenAI’s despite the difference in their techniques as “quite a promising update on the potential for interpretability to detect and intervene.”

    Share. Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Telegram Email
    Previous ArticlePuzzle Corner
    Next Article Pi Network Rolls Out New KYC Synchronization Feature Ahead of Pi2Day
    TechAiVerse
    • Website

    Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he deliver clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.

    Related Posts

    1Password plans are getting more expensive soon

    February 24, 2026

    Discord delays age verification to address user concerns

    February 24, 2026

    The Pentagon has reportedly given Anthropic until Friday to let it use Claude as it sees fit

    February 24, 2026
    Leave A Reply Cancel Reply

    Top Posts

    Ping, You’ve Got Whale: AI detection system alerts ships of whales in their path

    April 22, 2025692 Views

    Lumo vs. Duck AI: Which AI is Better for Your Privacy?

    July 31, 2025279 Views

    6.7 Cummins Lifter Failure: What Years Are Affected (And Possible Fixes)

    April 14, 2025160 Views

    6 Best MagSafe Phone Grips (2025), Tested and Reviewed

    April 6, 2025122 Views
    Don't Miss
    Gadgets February 25, 2026

    How Rainbow Six Siege lasted ten years and brought in 100m players

    How Rainbow Six Siege lasted ten years and brought in 100m players When Rainbow Six…

    Yves Guillemot: “Changes of this scale naturally raise questions and create tension”

    Amazon Games shuts down Glowmade’s online multiplayer King of Meat

    Roblox responds to LA County lawsuit that alleges it is “failing to protect children from predatory behavior”

    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    About Us
    About Us

    Welcome to Tech AI Verse, your go-to destination for everything technology! We bring you the latest news, trends, and insights from the ever-evolving world of tech. Our coverage spans across global technology industry updates, artificial intelligence advancements, machine learning ethics, and automation innovations. Stay connected with us as we explore the limitless possibilities of technology!

    Facebook X (Twitter) Pinterest YouTube WhatsApp
    Our Picks

    How Rainbow Six Siege lasted ten years and brought in 100m players

    February 25, 20262 Views

    Yves Guillemot: “Changes of this scale naturally raise questions and create tension”

    February 25, 20262 Views

    Amazon Games shuts down Glowmade’s online multiplayer King of Meat

    February 25, 20262 Views
    Most Popular

    7 Best Kids Bikes (2025): Mountain, Balance, Pedal, Coaster

    March 13, 20250 Views

    VTOMAN FlashSpeed 1500: Plenty Of Power For All Your Gear

    March 13, 20250 Views

    This new Roomba finally solves the big problem I have with robot vacuums

    March 13, 20250 Views
    © 2026 TechAiVerse. Designed by Divya Tech.
    • Home
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms & Conditions

    Type above and press Enter to search. Press Esc to cancel.