
    Nvidia releases a new small, open model Nemotron-Nano-9B-v2 with toggle on/off reasoning

    By TechAiVerse | August 19, 2025

    Small models are having a moment. On the heels of a new AI vision model from MIT spinoff Liquid AI small enough to fit on a smartwatch, and a model from Google small enough to run on a smartphone, Nvidia is joining the party today with a new small language model (SLM) of its own: Nemotron-Nano-9B-v2. The model attained the highest performance in its class on selected benchmarks and lets users toggle AI “reasoning,” that is, self-checking before outputting an answer, on and off.

    While 9 billion parameters is larger than some of the multimillion-parameter small models VentureBeat has covered recently, Nvidia notes it is a meaningful reduction from the model’s original size of 12 billion parameters, and that it is designed to fit on a single Nvidia A10 GPU.

    As Oleksii Kuchiaev, Nvidia Director of AI Model Post-Training, said on X in response to a question I submitted to him: “The 12B was pruned to 9B to specifically fit A10 which is a popular GPU choice for deployment. It is also a hybrid model which allows it to process a larger batch size and be up to 6x faster than similar sized transformer models.”

    For context, many leading LLMs are in the 70-billion-plus parameter range (recall that parameters are the internal settings governing a model’s behavior; more parameters generally mean a larger and more capable, but more compute-intensive, model).


    The model handles multiple languages, including English, German, Spanish, French, Italian, Japanese, and in extended descriptions, Korean, Portuguese, Russian, and Chinese. It’s suitable for both instruction following and code generation.

    Nemotron-Nano-9B-v2 and its pre-training datasets are available right now on Hugging Face and through the company’s model catalog.
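    For developers who want to try it, a minimal loading sketch using the Hugging Face transformers library is shown below. The repository id, the need for trust_remote_code, and the bf16 memory estimate are assumptions for illustration; check the model card for the exact name and recommended settings.

    ```python
    # Minimal sketch: loading Nemotron-Nano-9B-v2 with Hugging Face transformers.
    # Assumptions: the repo id and the trust_remote_code requirement are illustrative;
    # confirm both against the official model card before use.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "nvidia/NVIDIA-Nemotron-Nano-9B-v2"  # assumed repo id

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # bf16 should keep the 9B weights within a single A10's 24 GB
        device_map="auto",
        trust_remote_code=True,
    )

    prompt = "Explain the trade-off between attention layers and state space layers."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```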

    A fusion of Transformer and Mamba architectures

    It’s based on Nemotron-H, a set of hybrid Mamba-Transformer models that form the foundation for the company’s latest offerings.

    Most popular LLMs are pure “Transformer” models, which rely entirely on attention layers; those layers become costly in memory and compute as sequence lengths grow.

    Nemotron-H models, and others built on the Mamba architecture developed by researchers at Carnegie Mellon University and Princeton, instead weave in selective state space models (SSMs), which handle very long sequences by maintaining state as information flows in and out.

    These layers scale linearly with sequence length and can process contexts much longer than standard self-attention without the same memory and compute overhead.

    A hybrid Mamba-Transformer reduces those costs by substituting most of the attention with linear-time state space layers, achieving up to 2–3× higher throughput on long contexts with comparable accuracy.
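    To make the scaling argument concrete, the toy comparison below (not Nvidia’s actual layers) contrasts the quadratic memory of an attention score matrix with a state space recurrence whose carried state stays the same size no matter how long the sequence gets.

    ```python
    # Toy illustration of the scaling argument; not the Nemotron-H implementation.
    import numpy as np

    def attention_scores_memory(seq_len: int, bytes_per_float: int = 2) -> int:
        # Self-attention materializes a (seq_len x seq_len) score matrix per head,
        # so memory grows quadratically with sequence length.
        return seq_len * seq_len * bytes_per_float

    def ssm_scan(x: np.ndarray, a: float = 0.9, b: float = 0.1) -> np.ndarray:
        # A minimal (non-selective) state space recurrence: h_t = a*h_{t-1} + b*x_t.
        # The carried state is one vector regardless of sequence length, so compute
        # and memory grow only linearly with the number of tokens.
        h = np.zeros(x.shape[-1])
        outputs = []
        for x_t in x:  # one O(d) update per token
            h = a * h + b * x_t
            outputs.append(h.copy())
        return np.stack(outputs)

    for n in (1_000, 10_000, 100_000):
        gb = attention_scores_memory(n) / 1e9
        print(f"seq_len={n:>7}: attention score matrix ~ {gb:.2f} GB per head")

    x = np.random.randn(4096, 64)  # 4,096 tokens, toy model dimension of 64
    print("SSM output shape:", ssm_scan(x).shape)  # state stays O(d) throughout
    ```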

    Other AI labs beyond Nvidia such as Ai2 have also released models based on the Mamba architecture.

    Toggle reasoning on/off using language

    Nemotron-Nano-9B-v2 is positioned as a unified, text-only chat and reasoning model trained from scratch.

    The system defaults to generating a reasoning trace before providing a final answer, though users can toggle this behavior through simple control tokens such as /think or /no_think.
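    The control tokens themselves are documented by Nvidia, but exactly where they go depends on the model’s chat template. The sketch below, which reuses the model and tokenizer from the loading example above and places the toggle in the system message, is therefore an assumption to verify against the model card.

    ```python
    # Sketch: toggling the reasoning trace with the /think and /no_think control tokens.
    # Assumption: the toggle is passed as the system message and the chat template
    # interprets it; `model` and `tokenizer` are those loaded in the earlier sketch.
    def ask(prompt: str, reasoning: bool) -> str:
        messages = [
            {"role": "system", "content": "/think" if reasoning else "/no_think"},
            {"role": "user", "content": prompt},
        ]
        input_ids = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        output_ids = model.generate(input_ids, max_new_tokens=512)
        return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

    print(ask("What is 17 * 24?", reasoning=True))   # answer preceded by a reasoning trace
    print(ask("What is 17 * 24?", reasoning=False))  # direct answer only
    ```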

    The model also introduces runtime “thinking budget” management, which allows developers to cap the number of tokens devoted to internal reasoning before the model completes a response.

    This mechanism is aimed at balancing accuracy with latency, particularly in applications like customer support or autonomous agents.
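    The article does not spell out the budget API, so the sketch below only illustrates the general idea under stated assumptions: let the model reason for at most `budget` tokens, then force the reasoning span closed and generate the final answer. The `</think>` delimiter and the two-phase generation are assumptions, not Nvidia’s documented interface.

    ```python
    # Sketch of budget-capped reasoning via two-phase generation.
    # Assumptions: the trace is delimited by <think>...</think>, the model continues
    # cleanly after a forced </think>, and `model`/`tokenizer` come from the earlier sketch.
    import torch

    def generate_with_budget(prompt: str, budget: int, answer_tokens: int = 256) -> str:
        messages = [
            {"role": "system", "content": "/think"},
            {"role": "user", "content": prompt},
        ]
        input_ids = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)

        # Phase 1: allow at most `budget` new tokens of internal reasoning.
        draft = model.generate(input_ids, max_new_tokens=budget)

        # Phase 2: append </think> to close the reasoning span (a simplification if
        # the model already closed it), then generate the visible answer.
        closing = tokenizer(
            "</think>", add_special_tokens=False, return_tensors="pt"
        ).input_ids.to(model.device)
        full = model.generate(torch.cat([draft, closing], dim=-1), max_new_tokens=answer_tokens)
        return tokenizer.decode(full[0][input_ids.shape[-1]:], skip_special_tokens=True)
    ```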

    Benchmarks tell a promising story

    Evaluation results highlight competitive accuracy against other open small-scale models. Tested in “reasoning on” mode using the NeMo-Skills suite, Nemotron-Nano-9B-v2 reaches 72.1 percent on AIME25, 97.8 percent on MATH500, 64.0 percent on GPQA, and 71.1 percent on LiveCodeBench.

    Scores on instruction following and long-context benchmarks are also reported: 90.3 percent on IFEval, 78.9 percent on the RULER 128K test, and smaller but measurable gains on BFCL v3 and the HLE benchmark.

    Across the board, Nano-9B-v2 shows higher accuracy than Qwen3-8B, a common point of comparison.

    Nvidia illustrates these results with accuracy-versus-budget curves that show how performance scales as the token allowance for reasoning increases. The company suggests that careful budget control can help developers optimize both quality and latency in production use cases.

    Trained on synthetic datasets

    Both the Nano model and the Nemotron-H family rely on a mixture of curated, web-sourced, and synthetic training data.

    The corpora include general text, code, mathematics, science, legal, and financial documents, as well as alignment-style question-answering datasets.

    Nvidia confirms the use of synthetic reasoning traces generated by other large models to strengthen performance on complex benchmarks.

    Licensing and commercial use

    The Nano-9B-v2 model is released under the Nvidia Open Model License Agreement, last updated in June 2025.

    The license is designed to be permissive and enterprise-friendly. Nvidia explicitly states that the models are commercially usable out of the box, and that developers are free to create and distribute derivative models.

    Importantly, Nvidia does not claim ownership of any outputs generated by the model, leaving responsibility and rights with the developer or organization using it.

    For an enterprise developer, this means the model can be put into production immediately without negotiating a separate commercial license or paying fees tied to usage thresholds, revenue levels, or user counts. There are no clauses requiring a paid license once a company reaches a certain scale, unlike some tiered open licenses used by other providers.

    That said, the agreement does include several conditions enterprises must observe:

    • Guardrails: Users cannot bypass or disable built-in safety mechanisms (referred to as “guardrails”) without implementing comparable replacements suited to their deployment.
    • Redistribution: Any redistribution of the model or derivatives must include the Nvidia Open Model License text and attribution (“Licensed by Nvidia Corporation under the Nvidia Open Model License”).
    • Compliance: Users must comply with trade regulations and restrictions (e.g., U.S. export laws).
    • Trustworthy AI terms: Usage must align with Nvidia Trustworthy AI guidelines, which cover responsible deployment and ethical considerations.
    • Litigation clause: If a user initiates copyright or patent litigation against another entity alleging infringement by the model, the license automatically terminates.

    These conditions focus on legal and responsible use rather than commercial scale. Enterprises do not need to seek additional permission or pay royalties to Nvidia simply for building products, monetizing them, or scaling their user base. Instead, they must make sure deployment practices respect safety, attribution, and compliance obligations.

    Positioning in the market

    With Nemotron-Nano-9B-v2, Nvidia is targeting developers who need a balance of reasoning capability and deployment efficiency at smaller scales.

    The runtime budget control and reasoning-toggle features are meant to give system builders more flexibility in managing accuracy versus response speed.

    Its release on Hugging Face and in Nvidia’s model catalog indicates that it is meant to be broadly accessible for experimentation and integration.

    Nvidia’s release of Nemotron-Nano-9B-v2 showcases a continued focus on efficiency and controllable reasoning in language models.

    By combining hybrid architectures with new compression and training techniques, the company is offering developers tools that seek to maintain accuracy while reducing costs and latency.
