    Agents built from alloys

    July 17, 2025

    Albert Ziegler

    Head of AI


    This spring, we had a simple and, to my knowledge, novel idea that turned out to dramatically boost the performance of our vulnerability detection agents at XBOW. On fixed benchmarks and with a constrained number of iterations, we saw success rates rise from 25% to 40%, and then soon after to 55%.

    The principles behind this idea are not limited to cybersecurity. They apply to a large class of agentic AI setups. Let me share.

    XBOW’s Challenge

    XBOW is an autonomous pentester. You point it at your website, and it tries to hack it. If it finds a way in (something XBOW is rather good at), it reports back so you can fix the vulnerability. It’s autonomous, which means: once you’ve done your initial setup, no further human handholding is allowed.

    There is quite a bit to do and organize when pentesting an asset. You need to run discovery and create a mental model of the website, its tech stack, logic, and attack surface, then keep updating that mental model, building up leads and discarding them by systematically probing every part of it in different ways. That’s an interesting challenge, but not what this blog post is about. I want to talk about one particular, fungible subtask that comes up hundreds of times in each test, and for which we’ve built a dedicated subagent: you’re pointed at a part of the attack surface knowing the genre of bug you’re supposed to be looking for, and you’re supposed to demonstrate the vulnerability.

    It’s a bit like competing in a CTF challenge: try to find the flag you can only get by exploiting a vulnerability that’s placed at a certain location. In fact, we built a benchmark set of such tasks, and packaged them in a CTF-like style so we could easily repeat, scale, and assess our “solver agent’s” performance on it. The original set has, sadly, mostly outlived its usefulness because our solver agent is just too good on it by now, but we harvested more challenging examples from open source projects we ran on.

    The Agent’s Task

    On such a CTF-like challenge, the solver is basically an agentic loop set to work for a number of iterations. Each iteration consists of the solver deciding on an action: running a command in a terminal, writing a Python script, or invoking one of our pentesting tools. We vet the action and execute it, show the solver its result, and the solver decides on the next one. After a fixed number of iterations we cut our losses. Typically, and for the experiments in this post, that number is 80: while we still get solves after more iterations, it becomes more efficient to start a new solver agent unburdened by the misunderstandings and false assumptions accumulated over time.
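
    To make the shape of this loop concrete, here is a minimal Python sketch. Every helper in it (complete, vet, execute, is_solved) is a hypothetical stand-in rather than XBOW’s actual API; only the 80-iteration cap comes from the text above.

    def run_solver(prompt, complete, vet, execute, is_solved, max_iters=80):
        # Minimal sketch of the solver loop. All callables are supplied
        # by the caller and are hypothetical stand-ins.
        messages = [{"role": "system", "content": prompt}]
        for _ in range(max_iters):
            # Ask the model for the next action: a terminal command,
            # a Python script, or a pentesting tool invocation.
            action = complete(messages)
            messages.append({"role": "assistant", "content": action})
            # Vet the action before running it, then show the result.
            result = execute(action) if vet(action) else "Action rejected."
            messages.append({"role": "user", "content": result})
            if is_solved(result):
                return True
        # Past the cap, a fresh agent beats a context weighed down by
        # accumulated false assumptions.
        return False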

    What makes this task special, as an agentic task? Agentic AI is often used on the continuously-make-progress type of problems, where every step brings you closer to the goal. This task is more like prospecting through a vast search space: the agent digs in many places, follows false leads for a while, and eventually course corrects to strike gold somewhere else.

    Over the course of one challenge, among all the dead ends, the AI agent will need to come up with and combine a couple of great ideas.

    If you ever face an agentic AI task like that, model alloys may be for you.

    The LLM

    From our very beginning, it was part of our AI strategy that XBOW be model-provider agnostic. That means we can just plug and play the best LLM for our use case. Our benchmark set makes it easy to compare models, and we continuously evaluate new ones. For a while, OpenAI’s GPT-4 was the best off-the-shelf model we evaluated, but once Anthropic’s Sonnet 3.5 came along in June last year, no other provider managed to come close, no matter how many we tested.

    Sonnet 3.7 brought a modest but recognizable improvement over its predecessor, but when Google released Gemini 2.5 Pro (in preview in March), that was a real step up. Then Anthropic hit back with Sonnet 4.0, which performed better again. On average. On individual challenges, some are best solved by Gemini, some by Sonnet.

    That’s not terribly surprising. If getting through a challenge takes five good insights, then some sets of five are the kind that come easily to Sonnet, and some come easily to Gemini. But what about the challenges that need five good ideas, three of the kind Sonnet is good at and two of the kind Gemini is good at?

    Alloyed Agents

    Like most AI agents, we call the model in a loop. The idea behind an alloy is simple: instead of always calling the same model, sometimes call one and sometimes the other.

    The trick is that you still keep to a single chat thread, with one user and a single assistant. So while the true author of the assistant messages in the conversation alternates, the models are not aware of each other: whatever the other model said, each believes it said itself.

    So in the first round, you might call Sonnet for an action to get started, with a prompt like this:

    System:       Find the bug!

    Let’s say it tells you to use curl. You do that and gather the output to present to the model. So now you call Gemini with a prompt like this:

    System:       Find the bug!
    Assistant:    Let's start by curling the app.
    User:         You got a 401 Unauthorized response.

    Gemini might tell you to log in with the admin credentials, and you do that, and then you present the result to Sonnet:

    System:       Find the bug!
    Assistant:    Let's start by curling the app.
    User:         You got a 401 Unauthorized response.
    Assistant:    Let's try to log in with the admin credentials.
    User:         You got a 200 OK response.

    Some of the messages Sonnet believes it wrote were actually authored by Gemini, and vice versa.

    In our implementation, we actually make the model choice randomly for greater variation, but you could also alternate or experiment with more complex strategies.
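
    Concretely, the alloyed loop differs from the single-model sketch above in one line: which model authors the next assistant turn. A minimal sketch, reusing the same hypothetical helpers:

    import random

    def run_alloyed_solver(prompt, completes, vet, execute, is_solved,
                           max_iters=80):
        # Like run_solver, but `completes` is a list of completion
        # callables, one per model in the alloy.
        messages = [{"role": "system", "content": prompt}]
        for _ in range(max_iters):
            # Random choice for variation; strict alternation works too.
            # Each model sees the shared history as if it had written
            # every assistant turn itself.
            complete = random.choice(completes)
            action = complete(messages)
            messages.append({"role": "assistant", "content": action})
            result = execute(action) if vet(action) else "Action rejected."
            messages.append({"role": "user", "content": result})
            if is_solved(result):
                return True
        return False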

    The key advantage of mixing the two models into an alloy is that:

    1. you keep the total number of model calls the same, but still
    2. you give each model the chance to contribute its strengths to the solution.

    In a situation where a couple of brilliant ideas are interspersed with workhorse-like follow-up actions, this is a great way to combine the strengths of different models.

    Results

    Just as an alloy of metals is stronger than its individual components, whichever two (and sometimes three) models we combined, the alloy outperformed the individual models. Sonnet 3.7, GPT-4.1, Gemini 2.5 Pro, and Sonnet 4.0 all performed better when alloyed with each other than when used alone.

    But there are a couple of trends we observed:

    • The more different the models are, the better the alloy performs. Sonnet 4.0 and Gemini 2.5 Pro have the lowest correlation in per-challenge solve rates (a Spearman correlation coefficient of 0.46), and their alloy boost is the highest; a quick way to compute such a correlation is sketched after this list.
    • A model that’s better individually will tend to be better in an alloy. A model lagging very far behind others can even pull an alloy down.
    • Imbalanced alloys should be tilted towards the stronger individual model. We’ll show some examples below.
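
    As for measuring how different two models are: here is one way to compute the solve-rate correlation from the first point, assuming you have per-challenge solve rates as parallel lists (all numbers made up for illustration):

    from scipy.stats import spearmanr

    # Hypothetical per-challenge solve rates for two models, aligned by
    # challenge; real data would come from repeated benchmark runs.
    sonnet_rates = [0.9, 0.1, 0.6, 0.0, 0.8, 0.3]
    gemini_rates = [0.2, 0.7, 0.5, 0.1, 0.9, 0.4]

    rho, _ = spearmanr(sonnet_rates, gemini_rates)
    print(f"Spearman correlation: {rho:.2f}")  # lower = better alloy candidates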

    When To Use Model Alloys

    Think of alloys if:

    • You approach your task by calling an LLM in an iterative loop until you reach a solution, with at least a double-digit number of model calls.
    • The task requires a number of different ideas to be combined to solve it.
    • But those ideas can come at different points in the process.
    • You have access to sufficiently different models.
    • All these models have their own strengths and weaknesses.

    When Not To Use Model Alloys

    Model alloys can be great, but they do have drawbacks. Situations that might make you think twice:

    • Your prompts are orders of magnitude longer than your completions, so you rely substantially on prompt caching to keep your costs down. With alloys, you need to cache everything twice, once for each model (see the toy cost model after this list).
    • Your task is of the steady-progress kind, without the occasional bursts of brilliance that alloys are good at combining. In that case, your alloy will probably be only as good as the average of its constituents.
    • You have a task only one model really excels at. Then you have nothing to alloy your favorite model with.
    • All your models agree on which tasks are hard and which are easy, and so they will not complement each other.
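
    To put a rough number on the caching point above, here is a toy cost model; the token counts and prices are made up, and real cache semantics vary by provider:

    # Toy model of the caching drawback (all numbers made up). With an
    # alloy, the shared history must be written into each model's cache
    # separately, so write volume roughly doubles; reads stay the same.
    history_tokens = 50_000   # conversation prefix built up over a run
    calls = 80                # iterations per agent
    write_price = 3.75 / 1e6  # $ per token written to the cache
    read_price = 0.30 / 1e6   # $ per token read from the cache

    def cache_cost(n_models):
        writes = n_models * history_tokens * write_price
        reads = calls * history_tokens * read_price
        return writes + reads

    print(f"one model: ${cache_cost(1):.2f}, alloy of two: ${cache_cost(2):.2f}")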

    That last point hit home for us when we tried to alloy different models from the same provider. When alloying Sonnet 3.7 and Sonnet 4.0, or Sonnet and Haiku, we saw performance that mirrored the average of the two constituents, no more. They were simply too similar to each other.

    It was only when combining models from different providers that we saw a real boost.

    That Reminds Me Of…

    We’re obviously not the first to realize that two heads are better than one, and there are myriad ways to combine the strengths of different models. Most of them fall into one of three categories, though:

    • Use different models for different tasks, something heavily emphasized in, e.g., the AutoGPT ecosystem.
      It’s not always easy to define these different tasks, but one common pattern is to use a higher-tier model to do the planning, and a more specialized model to execute on that plan. The higher-tier model may periodically check in on the progress to offer advice or adjust the plan.
      This is a good solution in many cases; we were put off by the amount of overhead it would add to our loop.
    • Ask different models, or the same model with different prompts, at each step. Then you either combine the answers, take a vote, or use yet another model call as a judge to decide which answer is best. Mixture-of-Agents is a great example of that.
      This multiplies the number of model calls, of course, and wouldn’t be efficient for our use case (we’d rather start more independent agents!).
    • Let models talk to each other directly, each making its own case and refining the others’ answers. Exemplified in patterns like Multi-Agent Debate, this is a great solution for really critical individual actions.
      But XBOW is basically conducting a search, and it doesn’t need a committee to decide, for each stone it turns over, whether there might not be a better one.

    And obviously, you could just run one agent with Sonnet, and one with Gemini, and count it as a win if either of them solves the challenge. But since there’s a performance difference between those two models, that’s not even competitive against running only Sonnet 4, much less against running an alloyed agent.

    First Agent         Second Agent        Combined Success Rate
    Gemini 2.5          Gemini 2.5          46.4%
    Sonnet 4.0          Sonnet 4.0          57.5%
    Sonnet 4.0          Gemini 2.5          57.2%
    Alloy 2.5 + 4.0     Alloy 2.5 + 4.0     68.8%
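
    For reference, the “combined” rate in this table counts a challenge as solved if either of the two agents solves it. A toy computation with made-up outcome lists:

    # Per-challenge outcomes of two independently run agents (made up).
    agent_a = [True, False, True, False, True]
    agent_b = [False, False, True, True, True]

    solved = sum(a or b for a, b in zip(agent_a, agent_b))
    print(f"Combined success rate: {solved / len(agent_a):.1%}")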

    Data

    If you want to play around with our data, do go ahead, we’re sharing it here — maybe you’ll see something we missed.

    More interestingly though, if you have a use case where you think model alloys might help, try it out! And write to me about it at [email protected] — I’d love to hear about your experience!
