    This researcher turned OpenAI’s open weights model gpt-oss-20b into a non-reasoning ‘base’ model with less alignment, more freedom

By TechAiVerse | August 16, 2025

    OpenAI’s new, powerful open weights AI large language model (LLM) family gpt-oss was released less than two weeks ago under a permissive Apache 2.0 license — the company’s first open weights model launch since GPT-2 in 2019 — but developers outside the company are already reshaping it.

    One of the most striking examples comes from Jack Morris, a Cornell Tech PhD student, former Google Brain Resident, and current researcher at Meta, who this week unveiled gpt-oss-20b-base, his own reworked version of OpenAI’s smaller gpt-oss-20B model, which removes the “reasoning” behavior of the model and returns it to a pre-trained “base” version that offers faster, freer, more uncensored and unconstrained responses.

    The model is available now on Hugging Face under a permissive MIT License, allowing it to be used for both additional research and commercial applications.

    To understand what Morris did, it helps to know the difference between OpenAI’s release and what AI researchers call a “base model.”


    Most LLMs offered by leading AI labs such as OpenAI, Anthropic, Google and even open source players like Meta, DeepSeek, and Alibaba’s Qwen team are “post-trained.”

    This means they have gone through an additional phase where they are exposed to curated examples of desired behavior.

    For instruction-tuned models, that means giving them many examples of instructions paired with ideal responses, so they learn to respond more helpfully, politely, and safely to natural-language requests.

    The gpt-oss models OpenAI put out on August 5 were “reasoning-optimized”: trained and fine-tuned not just to predict the next word, but to follow instructions in a safe, consistent way, often stepping through problems with structured “chain of thought” reasoning before producing a final answer.

    This trend goes back to OpenAI’s o1 model, released almost a year ago in September 2024, and has since been adopted by numerous leading AI labs: models are pushed to think longer over multiple steps and check their own work before outputting a well-reasoned response to the user.

    That makes them better suited for tasks like coding, solving math problems, or answering factual questions with explanations — but also means their responses are filtered and steered away from unsafe or undesirable content.

    A base model is different. It’s the raw, pretrained version of a large language model before that reasoning-specific alignment is applied. Base models simply try to predict the next chunk of text given what’s come before, with no built-in guardrails, stylistic preferences, or refusal behaviors.

    They’re prized by some researchers because they can produce more varied and less constrained output, and because studying their unaligned behavior can reveal how models store knowledge and patterns from their training data.

    Morris’s goal was to “reverse” OpenAI’s alignment process and restore the smaller gpt-oss-20B to something much closer to its original pretrained state.

    “We basically reversed the alignment part of LLM training, so we have something that produces natural-looking text again,” he wrote in an X thread announcing the project. “It doesn’t engage in CoT anymore. It is back to a model that just predicts the next token on generic text.”

    OpenAI hasn’t open-sourced a base model since GPT-2 in 2019. they recently released GPT-OSS, which is reasoning-only…

    or is it?

    turns out that underneath the surface, there is still a strong base model. so we extracted it.

    introducing gpt-oss-20b-base pic.twitter.com/3xryQgLF8Z

    — jack morris (@jxmnop) August 13, 2025

    Rather than trying to jailbreak the model with clever prompts — which Morris said proved ineffective during his early experiments — he took a different tack after a conversation with OpenAI co-founder, former Anthropic researcher, and current Thinking Machines chief scientist John Schulman.

    The key was to think of alignment reversal as a small optimization problem: if most of the model’s pretrained knowledge is still present in its weights, then only a tiny, low-rank update might be needed to nudge it back toward base model behavior.

    Morris implemented that idea by applying a LoRA (low-rank adapter) update to just three layers of the model — the MLP layers at positions 7, 15, and 23 — with a rank of 16.

    That meant training about 60 million parameters, or 0.3% of the model’s 21 billion total. He used around 20,000 documents from the FineWeb dataset, keeping the format as close as possible to original pretraining (“ ….” style) so the model wouldn’t learn anything new, just re-enable broad free-text generation.

    Training took four days on eight NVIDIA H200 GPUs, Morris told VentureBeat via direct message on X, with a learning rate of 2e-6, a batch size of 16, and a maximum sequence length of 8,192 tokens.

    Afterward, he merged the LoRA weights back into the model so users could run it as a standalone, fully finetuned artifact.
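
    The broad shape of that setup can be sketched with Hugging Face’s transformers and peft libraries. The snippet below is a hedged illustration, not Morris’s actual harness: the target module names, the omitted training loop, and the exact wiring are assumptions, and gpt-oss’s mixture-of-experts layers may expose different module names in practice.

```python
# Hedged sketch of a LoRA setup along the lines described above, using
# Hugging Face transformers + peft. Module names and wiring are assumptions;
# Morris's custom training harness is not reproduced here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Rank-16 adapters restricted to the MLP blocks of layers 7, 15, and 23.
# The target_modules names are placeholders; inspect the loaded model
# (print(model)) to find the real MLP projection names for this architecture.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["gate_proj", "up_proj", "down_proj"],  # assumed names
    layers_to_transform=[7, 15, 23],
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # expect on the order of 0.3% trainable

# ... train as plain next-token prediction on ~20,000 FineWeb documents with
#     lr=2e-6, batch size 16, and max sequence length 8,192 (per the article) ...

# Fold the adapters back into the base weights so the result ships as a
# single standalone checkpoint, as Morris did before uploading it.
merged = peft_model.merge_and_unload()
merged.save_pretrained("gpt-oss-20b-base")
tokenizer.save_pretrained("gpt-oss-20b-base")
```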

    Morris also had to contend with the limitations of current open tools for fine-tuning mixture-of-experts (MoE) architectures like gpt-oss.

    Morris said he used Hugging Face’s framework, which he found crashes frequently and only supports certain training modes, so he wrote his own harness to checkpoint often and skip over data batches that risked overloading GPU memory.

    Importantly, in response to questions and criticism from the AI community on X, Morris has also clarified he is not claiming to have recovered the base model “weights” — the internal settings of the artificial neurons that make up the neural network of the model and govern its behavior.

    The world of AI is crazy right now cause you can just claim to have extracted the base model from GPT-OSS while effectively you’ve just trained a lora on Fineweb lol https://t.co/oAnAWpMQ26

    — Niels Rogge (@NielsRogge) August 15, 2025

    Rather, Morris says that his work has “recovered the base model’s *distribution* with some error,” that is, the probability patterns the model uses to generate outputs — even though the weights producing those patterns may differ.

    some people are getting confused about the experiment –

    we didn’t recover the base model’s *weights*. that might not even be possible.

    we recovered the base model’s *distribution*, with some error. an important question is how much.

    trying to figure that out right now… https://t.co/lfUG5QY4h0

    — jack morris (@jxmnop) August 15, 2025
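
    What “recovering the distribution with some error” means can be made concrete by comparing the next-token probability distributions of two models on held-out text, for example via an average per-token KL divergence. The sketch below is purely illustrative: the article does not say how Morris intends to measure the gap, and no reference gpt-oss base model is publicly available to compare against.

```python
# Illustrative measure of distributional error between two causal LMs:
# average per-token KL(p || q) of their next-token distributions on a sequence.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_next_token_kl(model_p, model_q, input_ids):
    """Average KL(p || q) over token positions for one tokenized sequence."""
    logp = F.log_softmax(model_p(input_ids).logits, dim=-1)  # reference model
    logq = F.log_softmax(model_q(input_ids).logits, dim=-1)  # recovered model
    # F.kl_div(input, target, log_target=True) computes exp(target)*(target - input),
    # i.e. pointwise KL(target || input) when both arguments are log-probabilities.
    kl = F.kl_div(logq, logp, log_target=True, reduction="none").sum(dim=-1)
    return kl.mean().item()
```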

    How the new gpt-oss-20b-base model’s behavior differs from gpt-oss-20b

    The resulting gpt-oss-20b-base is noticeably freer in its outputs. It no longer defaults to explaining reasoning step-by-step and will produce a wider range of responses, including instructions OpenAI’s aligned model would refuse to give — like building a weapon, listing profanity, or planning illegal activities.

    In short tests, Morris found it could also reproduce verbatim passages from copyrighted works, including three out of six book excerpts he tried, showing that some memorized material is still accessible.

    Even so, some traces of alignment remain. Morris noted that if you prompt the model in an assistant-style format (“Human: … Assistant: …”), it will sometimes still act like a polite chatbot. And when run through the original gpt-oss chat template, it can still carry out reasoning tasks, albeit with some loss in quality.

    For best results in free-text mode, he advises prepending prompts with the model’s special beginning-of-sequence token <|startoftext|> and avoiding chat templates entirely.
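
    In practice, that free-text mode looks something like the snippet below. It is a hedged usage sketch: the Hugging Face repo id and generation settings are assumptions, but the pattern of prepending <|startoftext|> and skipping chat templates follows Morris’s advice.

```python
# Hedged usage sketch for gpt-oss-20b-base: prepend the BOS token and
# generate free text without applying any chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jxm/gpt-oss-20b-base"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<|startoftext|>The history of the printing press"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```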

    Building upon OpenAI’s big gpt-oss family release

    The gpt-oss family debuted to considerable attention. The two models — gpt-oss-120B and gpt-oss-20B — are text-only, multilingual, and built with a mixture-of-experts Transformer architecture. They were released under the permissive Apache 2.0 license, allowing unrestricted local use, fine-tuning, and commercial deployment.

    Performance benchmarks from OpenAI showed the larger 120B model matching or exceeding the proprietary o4-mini in reasoning and tool-use tasks, with the smaller 20B competitive with o3-mini.

    This was OpenAI’s first open-weight release in six years, a move widely interpreted as a response to competitive pressure from other open-weights providers, including China’s DeepSeek R1 and Qwen 3.

    The company positioned gpt-oss as both a way to re-engage developers who had moved to rival open-source models and as a platform for safety research into open-weight systems.

    Reaction to the initial gpt-oss was mixed

    Developer reaction to OpenAI’s gpt-oss models has been decidedly mixed, with responses ranging from enthusiastic to disappointed.

    Supporters praised the permissive license, efficiency, and strong showing on STEM benchmarks.

    Hugging Face CEO Clem Delangue described the release as a “meaningful addition to the open ecosystem” and urged the community to give it time to mature.

    Critics argued that the models appear heavily trained on synthetic data, making them excellent at math and coding but less capable at creative writing, general world knowledge, and multilingual reasoning.

    Some early testers also raised concerns about lingering safety filters and possible geopolitical bias.

    Against that backdrop, Morris’s gpt-oss-20b-base stands out as a concrete example of how open-weight models can be adapted and repurposed in the wild within days of release.

    Indeed, in contrast to the way OpenAI’s gpt-oss was received, most of the responses to Morris’s work I’ve seen have been warm and enthusiastic. As one computer scientist wrote on X: “this is the coolest thing I’ve seen on Twitter [X] in the past few months.”

    man this is the coolest thing i’ve seen on twitter in the past few months i love base models

    — Ludan (@JMRLudan) August 15, 2025

    The approach strips away much of the behavior OpenAI built in and returns the model to something closer to a raw, pretrained system — a shift that’s valuable to researchers studying memorization, bias, or the impact of alignment, but that also comes with higher safety risks.

    Furthermore, Morris says his work on restoring reasoning models to pretrained, non-reasoning base models will continue: he plans to compare extraction on non-reasoning, instruct-tuned models such as those offered by Qwen.

