
    DeepCoder delivers top coding performance in efficient 14B open model

    By TechAiVerse · April 11, 2025 · 6 min read

    Originally published April 10, 2025, 3:19 PM

    Image: A robot writing code. Credit: VentureBeat, made with Ideogram

    Researchers at Together AI and Agentica have released DeepCoder-14B, a new coding model that delivers impressive performance comparable to leading proprietary models like OpenAI’s o3-mini. 

    Built on top of DeepSeek-R1, this model gives more flexibility to integrate high-performance code generation and reasoning capabilities into real-world applications. Importantly, the teams have fully open-sourced the model, its training data, code, logs and system optimizations, which can help researchers improve their work and accelerate progress.

    Competitive coding capabilities in a smaller package

    The research team’s experiments show that DeepCoder-14B performs strongly across several challenging coding benchmarks, including LiveCodeBench (LCB), Codeforces and HumanEval+.

    “Our model demonstrates strong performance across all coding benchmarks… comparable to the performance of o3-mini (low) and o1,” the researchers write in a blog post that describes the model.

    Interestingly, despite being trained primarily on coding tasks, the model shows improved mathematical reasoning, scoring 73.8% on the AIME 2024 benchmark, a 4.1% improvement over its base model (DeepSeek-R1-Distill-Qwen-14B). This suggests that the reasoning skills developed through RL on code can be generalized effectively to other domains.

    Image credit: Together AI

    The most striking aspect is that DeepCoder-14B achieves this level of performance with only 14 billion parameters, making it significantly smaller and potentially more efficient to run than many frontier models.

    Innovations driving DeepCoder’s performance

    While developing the model, the researchers solved some of the key challenges in training coding models using reinforcement learning (RL).

    The first challenge was curating the training data. Reinforcement learning requires reliable reward signals indicating the model’s output is correct. As the researchers point out, “Unlike math—where abundant high-quality, verifiable data is readily available on the Internet—the coding domain suffers from a relative scarcity of such data.” 

    To address this problem, the DeepCoder team implemented a strict pipeline that gathers examples from different datasets and filters them for validity, complexity and duplication. This process yielded 24,000 high-quality problems, providing a solid foundation for effective RL training.
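
    As a concrete illustration, a curation filter of this kind could look roughly like the sketch below; the field names, the baseline-pass-rate threshold, and the hash-based deduplication are assumptions made for the example, not the team's published criteria.

        # Illustrative curation filter: keep only problems that are verifiable,
        # non-trivial and not near-duplicates. Field names and thresholds are
        # assumptions for the sake of the example.
        import hashlib

        def curate(problems: list[dict]) -> list[dict]:
            seen = set()
            kept = []
            for p in problems:
                if not p.get("tests"):                       # validity: must ship unit tests
                    continue
                if p.get("baseline_pass_rate", 0.0) > 0.9:   # complexity: drop trivial problems
                    continue
                normalized = " ".join(p["statement"].split()).lower()
                digest = hashlib.sha256(normalized.encode()).hexdigest()
                if digest in seen:                           # duplication: drop repeats
                    continue
                seen.add(digest)
                kept.append(p)
            return kept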

    The team also designed a straightforward reward function that only provides a positive signal if the generated code passes all sampled unit tests for the problem within a specific time limit. Combined with the high-quality training examples, this outcome-focused reward system prevents the model from learning tricks like printing memorized answers for public tests or optimizing for simple edge cases without solving the core problem.
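
    In code, such an outcome-only reward might look like the following sketch, which returns 1.0 only when every sampled test passes within the time budget. The test format (self-contained assert snippets executed in a subprocess) is an assumption, and a production harness would run the generated code in a proper sandbox.

        # Sketch of an outcome-only reward: 1.0 only if all sampled unit tests pass
        # within the time limit, 0.0 otherwise. Running untrusted code like this
        # requires a real sandbox; the bare subprocess call is only for illustration.
        import subprocess
        import sys

        def coding_reward(generated_code: str, unit_tests: list[str], timeout_s: float = 6.0) -> float:
            for test in unit_tests:
                program = generated_code + "\n" + test       # each test is an assert snippet
                try:
                    proc = subprocess.run(
                        [sys.executable, "-c", program],
                        capture_output=True,
                        timeout=timeout_s,
                    )
                except subprocess.TimeoutExpired:
                    return 0.0                               # exceeding the time limit counts as failure
                if proc.returncode != 0:
                    return 0.0                               # any failed assert or crash -> no reward
            return 1.0                                       # no partial credit, so shortcuts don't pay off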

    The model’s core training algorithm is based on Group Relative Policy Optimization (GRPO), a reinforcement learning algorithm that proved very successful in DeepSeek-R1. However, the team made several modifications to make the algorithm more stable and to let the model keep improving as training runs for longer.
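
    The group-relative idea at the heart of GRPO can be sketched in a few lines: rewards for several sampled responses to the same prompt are normalized against that group's own statistics, which removes the need for a separate value network. The sketch below omits the clipped policy-gradient loss and the specific stability modifications in the team's GRPO+ variant.

        # Group-relative advantages as used in GRPO: each response's reward is
        # normalized against the mean and standard deviation of its own group of
        # samples for the same prompt. The full loss (clipped ratios, KL handling,
        # and the team's GRPO+ tweaks) is omitted here.
        import torch

        def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
            # rewards: (num_prompts, group_size), one scalar reward per sampled response
            mean = rewards.mean(dim=-1, keepdim=True)
            std = rewards.std(dim=-1, keepdim=True)
            return (rewards - mean) / (std + eps)

        # Example: one prompt, four samples, only the first passed all unit tests.
        print(group_relative_advantages(torch.tensor([[1.0, 0.0, 0.0, 0.0]])))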

    GRPO+ enables DeepCoder-14B to continue training for longer durations without collapsing. Image credit: Together AI

    Finally, the team extended the model’s context window iteratively, first training it on shorter reasoning sequences and gradually increasing the length. They also developed a filtering method to avoid penalizing the model when it created reasoning chains that exceeded the context limits when solving a hard prompt. 

    DeepCoder was trained on 32K-context problems but was also able to solve 64K-token tasks. Image credit: Together AI

    The researchers explain the core idea: “To preserve long-context reasoning while enabling efficient training, we incorporated overlong filtering… This technique masks out truncated sequences during training so that models aren’t penalized for generating thoughtful but lengthy outputs that exceed the current context limit.” 

    The training was gradually scaled from a 16K to a 32K context window, and the resulting model could also solve problems that required up to 64K tokens.
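
    A minimal sketch of how overlong filtering can be applied in the loss is shown below: responses that hit the context limit before finishing are masked out entirely, so truncation never produces a negative learning signal. The tensor shapes and the token-averaged loss are illustrative assumptions, not the team's exact implementation.

        # Overlong filtering, sketched: mask every token of a truncated response out
        # of the loss so the model is not punished for long reasoning that simply ran
        # out of context. Shapes and the averaging scheme are illustrative assumptions.
        import torch

        def overlong_filtered_loss(per_token_loss: torch.Tensor, truncated: torch.Tensor) -> torch.Tensor:
            # per_token_loss: (batch, seq_len); truncated: (batch,) bool, True if the
            # response was cut off by the context limit before it finished.
            keep = (~truncated).float().unsqueeze(-1).expand_as(per_token_loss)
            return (per_token_loss * keep).sum() / keep.sum().clamp(min=1.0)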

    Optimizing long-context RL training

    Training large models with RL, especially on tasks requiring long generated sequences like coding or complex reasoning, is computationally intensive and slow. A major bottleneck is the “sampling” step, where the model generates potentially thousands of tokens per example in the batch. Variations in response length mean some responses finish much later than others, leaving GPUs idle and slowing down the entire training loop. 

    To accelerate this, the team developed verl-pipeline, an optimized extension of the open-source verl library for reinforcement learning from human feedback (RLHF). The key innovation, which they call “One-Off Pipelining,” rearranges the response sampling and model updates to reduce the bottlenecks and accelerator idle time.

    One-Off Pipelining

    Their experiments showed that one-off pipelining provided up to a 2x speedup for coding RL tasks compared to baseline implementations. This optimization was crucial for training DeepCoder within a reasonable timeframe (2.5 weeks on 32 H100s) and is now open-sourced as part of verl-pipeline for the community to use and build upon. 
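
    Conceptually, the overlap looks something like the sketch below: generation for the next batch is launched before the current update finishes, so rollouts come from a policy that is one update stale and accelerators spend less time idle. This is a schematic of the idea only, not the actual verl-pipeline code; sampler, trainer and prompts are hypothetical stand-ins.

        # Schematic of one-off pipelining: generation for the next batch is kicked off
        # before the trainer finishes updating on the current batch, so rollouts are
        # produced with a policy that is one update stale, and accelerators idle less.
        # `sampler`, `trainer` and `prompts` are hypothetical stand-ins, not verl APIs.
        from concurrent.futures import ThreadPoolExecutor

        def train_pipelined(sampler, trainer, prompts, num_steps: int) -> None:
            batches = iter(prompts)
            with ThreadPoolExecutor(max_workers=1) as pool:
                pending = pool.submit(sampler.generate, next(batches))      # prefetch rollouts for step 0
                for _ in range(num_steps):
                    rollouts = pending.result()                             # wait for the sampled responses
                    pending = pool.submit(sampler.generate, next(batches))  # overlap sampling for the next step
                    trainer.update(rollouts)                                # policy update runs concurrently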

    Enterprise impact

    The researchers have made all the artifacts for training and running DeepCoder-14B available on GitHub and Hugging Face under a permissive license.

    “By fully sharing our dataset, code, and training recipe, we empower the community to reproduce our work and make RL training accessible to all,” the researchers write.
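
    For teams that want to try the released checkpoint, loading it with Hugging Face transformers should look roughly like the sketch below. The repository id is an assumption based on the project's public release and should be verified against its Hugging Face page; a 14B model in bfloat16 needs roughly 30 GB of accelerator memory.

        # Minimal sketch of running the released checkpoint with Hugging Face
        # transformers. The repository id is an assumption; verify it against the
        # project's Hugging Face page before use.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "agentica-org/DeepCoder-14B-Preview"  # assumed repo id
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.bfloat16, device_map="auto"
        )

        messages = [{"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number."}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(inputs, max_new_tokens=512)
        print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))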

    DeepCoder-14B powerfully illustrates a broader, accelerating trend in the AI landscape: the rise of highly capable yet efficient and openly accessible models. 

    For the enterprise world, this shift signifies more options and higher accessibility of advanced models. Cutting-edge performance is no longer solely the domain of hyperscalers or those willing to pay premium API fees. Models like DeepCoder can empower organizations of all sizes to leverage sophisticated code generation and reasoning, customize solutions to their specific needs, and securely deploy them within their environments. 

    This trend can lower the barrier to entry for AI adoption and foster a more competitive and innovative ecosystem, where progress is driven through open source collaboration.
