    GEPA optimizes LLMs without costly reinforcement learning

By TechAiVerse | August 19, 2025 | 8 min read


    Researchers from the University of California, Berkeley, Stanford University and Databricks have introduced a new AI optimization method called GEPA that significantly outperforms traditional reinforcement learning (RL) techniques for adapting large language models (LLMs) to specialized tasks.

GEPA departs from the popular paradigm of learning through thousands of trial-and-error attempts guided by simple numerical scores. Instead, it uses an LLM’s own language understanding to reflect on its performance, diagnose errors, and iteratively evolve its instructions. In addition to being more accurate than established techniques, GEPA is significantly more efficient, achieving superior results with up to 35 times fewer trial runs.

    For businesses building complex AI agents and workflows, this translates directly into faster development cycles, substantially lower computational costs, and more performant, reliable applications.

    Modern enterprise AI applications are rarely a single call to an LLM. They are often “compound AI systems,” complex workflows that chain multiple LLM modules, external tools such as databases or code interpreters, and custom logic to perform sophisticated tasks, including multi-step research and data analysis.


    AI Scaling Hits Its Limits

    Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are:

    • Turning energy into a strategic advantage
    • Architecting efficient inference for real throughput gains
    • Unlocking competitive ROI with sustainable AI systems

    Secure your spot to stay ahead: https://bit.ly/4mwGngO


    A popular way to optimize these systems is through reinforcement learning methods, such as Group Relative Policy Optimization (GRPO), a technique employed in popular reasoning models, including DeepSeek-R1. This method treats the system as a black box; it runs a task, gets a simple success metric (a “scalar reward,” like a score of 7/10), and uses this feedback to slowly nudge the model’s parameters in the right direction.

    The major drawback of RL is its sample inefficiency. To learn effectively from these sparse numerical scores, RL methods often require tens of thousands, or even hundreds of thousands, of trial runs, known as “rollouts.” For any real-world enterprise application that involves expensive tool calls (e.g., API queries, code compilation) or uses powerful proprietary models, this process is prohibitively slow and costly.

    As Lakshya A Agrawal, co-author of the paper and doctoral student at UC Berkeley, told VentureBeat, this complexity is a major barrier for many companies. “For many teams, RL is not practical due to its cost and complexity—and their go-to approach so far would often just be prompt engineering by hand,” Agrawal said. He noted that GEPA is designed for teams that need to optimize systems built on top-tier models that often can’t be fine-tuned, allowing them to improve performance without managing custom GPU clusters.

    The researchers frame this challenge as follows: “How can we extract maximal learning signal from every expensive rollout to enable effective adaptation of complex, modular AI systems in low-data or budget-constrained settings?”

    An optimizer that learns with language

Figure: The GEPA framework (source: arXiv)

    GEPA (Genetic-Pareto) is a prompt optimizer that tackles this challenge by replacing sparse rewards with rich, natural language feedback. It leverages the fact that the entire execution of an AI system (including its reasoning steps, tool calls, and even error messages) can be serialized into text that an LLM can read and understand. GEPA’s methodology is built on three core pillars.

    First is “genetic prompt evolution,” where GEPA treats a population of prompts like a gene pool. It iteratively “mutates” prompts to create new, potentially better versions. This mutation is an intelligent process driven by the second pillar: “reflection with natural language feedback.” After a few rollouts, GEPA provides an LLM with the full execution trace (what the system tried to do) and the outcome (what went right or wrong). The LLM then “reflects” on this feedback in natural language to diagnose the problem and write an improved, more detailed prompt. For instance, instead of just seeing a low score on a code generation task, it might analyze a compiler error and conclude the prompt needs to specify a particular library version.
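The reflect-and-rewrite step can be sketched as a single function. This is a minimal illustration, not GEPA's actual implementation: `llm` stands in for any chat-completion call, and the meta-prompt wording is invented for the example.

```python
def reflect_and_mutate(llm, current_prompt: str, trace: str, feedback: str) -> str:
    """One GEPA-style mutation: ask an LLM to diagnose the execution
    trace and rewrite the instruction in light of the feedback."""
    meta_prompt = (
        "You are improving an instruction for an AI module.\n"
        f"Current instruction:\n{current_prompt}\n\n"
        f"Execution trace (reasoning steps, tool calls, errors):\n{trace}\n\n"
        f"Feedback on the outcome:\n{feedback}\n\n"
        "Diagnose what went wrong and write an improved instruction. "
        "Return only the new instruction."
    )
    return llm(meta_prompt).strip()

# A stubbed LLM shows the data flow without any API dependency.
fake_llm = lambda p: "Generate CUDA code targeting toolkit 12.x; pin library versions explicitly."
new_prompt = reflect_and_mutate(
    fake_llm,
    current_prompt="Generate CUDA code for the given task.",
    trace="nvcc error: unsupported toolkit version",
    feedback="Compilation failed; score 0/10.",
)
```

In a real system, `fake_llm` would be replaced by an actual model call, and the returned instruction would become a new candidate in the prompt population.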

    The third pillar is “Pareto-based selection,” which ensures smart exploration. Instead of focusing only on the single best-performing prompt, which can lead to getting stuck in a suboptimal solution (a “local optimum”), GEPA maintains a diverse roster of “specialist” prompts. It tracks which prompts perform best on different individual examples, creating a list of top candidates. By sampling from this diverse set of winning strategies, GEPA ensures it explores more solutions and is more likely to discover a prompt that generalizes well across a wide range of inputs.

Figure: Selecting a single best candidate (left) can result in models getting stuck in local minima, while Pareto selection (right) can explore more options and find optimal solutions (source: arXiv)
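The Pareto-style candidate pool can be sketched in plain Python: keep any prompt that achieves the best score on at least one training example, then sample mutation parents from that set rather than always picking the single global winner. The function name and score layout here are illustrative, not GEPA's API.

```python
import random

def pareto_candidates(scores: dict) -> list:
    """scores maps prompt_id -> list of per-example scores.
    A prompt survives if it is the best on at least one example."""
    prompts = list(scores)
    n_examples = len(next(iter(scores.values())))
    survivors = set()
    for i in range(n_examples):
        best = max(scores[p][i] for p in prompts)
        survivors.update(p for p in prompts if scores[p][i] == best)
    return sorted(survivors)

scores = {
    "prompt_a": [0.9, 0.2, 0.5],  # best on example 0
    "prompt_b": [0.4, 0.8, 0.5],  # best on example 1
    "prompt_c": [0.3, 0.3, 0.4],  # dominated everywhere, pruned
}
pool = pareto_candidates(scores)   # ['prompt_a', 'prompt_b']
parent = random.choice(pool)       # sample a parent for the next mutation
```

Note how `prompt_c`, which is never the best anywhere, is dropped, while two "specialists" both survive even though neither wins on every example.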

    The effectiveness of this entire process hinges on what the researchers call “feedback engineering.” Agrawal explains that the key is to surface the rich, textual details that systems already produce but often discard. “Traditional pipelines often reduce this detail to a single numerical reward, obscuring why particular outcomes occur,” he said. “GEPA’s core guidance is to structure feedback that surfaces not only outcomes but also intermediate trajectories and errors in plain text—the same evidence a human would use to diagnose system behavior.”

    For example, for a document retrieval system, this means listing which documents were retrieved correctly and which were missed, rather than just calculating a final score.
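For the retrieval case, that kind of feedback engineering might look like the sketch below: the evaluator returns both the scalar metric and a plain-text report of hits and misses. The function and its signature are hypothetical, meant only to illustrate the principle.

```python
def retrieval_feedback(retrieved: set, relevant: set):
    """Return a scalar score plus the textual detail a GEPA-style
    optimizer consumes: which documents were hit, missed, or spurious."""
    hits = retrieved & relevant
    missed = relevant - retrieved
    spurious = retrieved - relevant
    recall = len(hits) / len(relevant) if relevant else 1.0
    text = (
        f"Correctly retrieved: {sorted(hits)}. "
        f"Missed: {sorted(missed)}. "
        f"Irrelevant extras: {sorted(spurious)}."
    )
    return recall, text

score, feedback = retrieval_feedback({"doc1", "doc4"}, {"doc1", "doc2"})
# score is the usual scalar (0.5 recall); feedback additionally names
# doc1 as a hit, doc2 as missed, and doc4 as an irrelevant extra.
```

A traditional RL pipeline would keep only `score`; GEPA's reflection step also reads `feedback`, which tells the mutating LLM *why* the score was low.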

    GEPA in action

    The researchers evaluated GEPA across four diverse tasks, including multi-hop question answering (HotpotQA) and privacy-preserving queries (PUPA). They used both open-source (Qwen3 8B) and proprietary (GPT-4.1 mini) models, comparing GEPA against the RL-based GRPO and the state-of-the-art prompt optimizer MIPROv2.

    Across all tasks, GEPA substantially outperformed GRPO, achieving up to a 19% higher score while using up to 35 times fewer rollouts. Agrawal provided a concrete example of this efficiency gain: “We used GEPA to optimize a QA system in ~3 hours versus GRPO’s 24 hours—an 8x reduction in development time, while also achieving 20% higher performance,” he explained. “RL-based optimization of the same scenario in our test cost about $300 in GPU time, while GEPA cost less than $20 for better results—15x savings in our experiments.”

Figure: GEPA outperforms other baselines on key benchmarks (source: arXiv)

    Beyond raw performance, the researchers found that GEPA-optimized systems are more reliable when faced with new, unseen data. This is measured by the “generalization gap” (the difference between performance on training data and final test data). Agrawal hypothesizes that this is because GEPA learns from richer feedback. “GEPA’s smaller generalization gap may stem from its use of rich natural-language feedback on each outcome—what worked, what failed, and why—rather than relying solely on a single scalar reward,” he said. “This may encourage the system to develop instructions and strategies grounded in a broader understanding of success, instead of merely learning patterns specific to the training data.” For enterprises, this improved reliability means less brittle, more adaptable AI applications in customer-facing roles.
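The generalization gap is simply train-minus-test performance; a smaller gap means the optimized system transfers better to unseen data. The numbers below are illustrative only, not figures from the paper.

```python
def generalization_gap(train_score: float, test_score: float) -> float:
    """Difference between performance on training data and held-out test data."""
    return train_score - test_score

# Illustrative numbers: a prompt scoring 0.82 in training and 0.78 on
# held-out data generalizes better than one scoring 0.90 / 0.70, even
# though the latter looks stronger during optimization.
assert generalization_gap(0.82, 0.78) < generalization_gap(0.90, 0.70)
```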

    A major practical benefit is that GEPA’s instruction-based prompts are up to 9.2 times shorter than prompts produced by optimizers like MIPROv2, which include many few-shot examples. Shorter prompts decrease latency and reduce costs for API-based models. This makes the final application faster and cheaper to run in production.
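The cost effect of shorter prompts is easy to quantify with back-of-the-envelope arithmetic. The workload size and per-token price below are hypothetical; only the ~9.2x length ratio comes from the paper.

```python
def monthly_prompt_cost(prompt_tokens: int, calls_per_month: int,
                        usd_per_million_tokens: float) -> float:
    """Input-token cost of the prompt prefix alone, per month."""
    return prompt_tokens * calls_per_month * usd_per_million_tokens / 1_000_000

# Hypothetical workload: 1M calls/month at $0.40 per million input tokens.
long_prompt = monthly_prompt_cost(4600, 1_000_000, 0.40)  # few-shot-heavy prompt
short_prompt = monthly_prompt_cost(500, 1_000_000, 0.40)  # ~9.2x shorter
# long_prompt is $1,840/month vs short_prompt at $200/month for
# prompt tokens alone, before any latency benefit is counted.
```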

    The paper also presents promising results for utilizing GEPA as an “inference-time” search strategy, transforming the AI from a single-answer generator into an iterative problem solver. Agrawal described a scenario where GEPA could be integrated into a company’s CI/CD pipeline. When new code is committed, GEPA could automatically generate and refine multiple optimized versions, test them for performance, and open a pull request with the best-performing variant for engineers to review. “This turns optimization into a continuous, automated process—rapidly generating solutions that often match or surpass expert hand-tuning,” Agrawal noted. In their experiments on CUDA code generation, this approach boosted performance on 20% of tasks to an expert level, compared to 0% for a single-shot attempt from GPT-4o.
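That CI/CD scenario boils down to a propose-test-refine loop. A minimal sketch follows, where `generate_variant` and `benchmark` are hypothetical stand-ins for an LLM call and a test harness; the textual report from each benchmark run feeds the next generation.

```python
def inference_time_search(generate_variant, benchmark, budget: int):
    """Iteratively generate candidate solutions, keep the best one,
    and feed the benchmark's textual report into the next attempt."""
    best_code, best_score = "", float("-inf")
    report = "no previous attempt"
    for _ in range(budget):
        code = generate_variant(report)   # LLM refines using the last report
        score, report = benchmark(code)   # e.g. compile + run perf tests
        if score > best_score:
            best_code, best_score = code, score
    return best_code, best_score

# Stubbed search over three canned variants shows the control flow.
results = {"v1": (0.3, "slow kernel"), "v2": (0.7, "passes perf target"),
           "v3": (0.5, "regressed")}
queue = iter(["v1", "v2", "v3"])
best, score = inference_time_search(lambda report: next(queue),
                                    lambda code: results[code], budget=3)
# best is "v2" with score 0.7: the loop keeps the strongest variant found
```

In the pipeline Agrawal describes, `best` would become the candidate attached to an automatically opened pull request.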

    The paper’s authors believe GEPA is a foundational step toward a new paradigm of AI development. But beyond creating more human-like AI, its most immediate impact may be in who gets to build high-performing systems.

    “We expect GEPA to enable a positive shift in AI system building—making the optimization of such systems approachable by end-users, who often have the domain expertise relevant to the task, but not necessarily the time and willingness to learn complex RL specifics,” Agrawal said. “It gives power directly to the stakeholders with the exact task-specific domain knowledge.”
