    Agents built from alloys

    July 17, 2025

    Albert Ziegler

    Head of AI


    This spring, we had a simple and, to my knowledge, novel idea that turned out to dramatically boost the performance of our vulnerability detection agents at XBOW. On fixed benchmarks and with a constrained number of iterations, we saw success rates rise from 25% to 40%, and then soon after to 55%.

    The principles behind this idea are not limited to cybersecurity. They apply to a large class of agentic AI setups. Let me share.

    XBOW’s Challenge

    XBOW is an autonomous pentester. You point it at your website, and it tries to hack it. If it finds a way in (something XBOW is rather good at), it reports back so you can fix the vulnerability. It’s autonomous, which means: once you’ve done your initial setup, no further human handholding is allowed.

    There is quite a bit to do and organize when pentesting an asset. You need to run discovery and create a mental model of the website, its tech stack, logic, and attack surface, then keep updating that mental model, building up leads and discarding them by systematically probing every part of it in different ways. That's an interesting challenge, but not what this blog post is about. I want to talk about one particular, fungible subtask that comes up hundreds of times in each test, and for which we've built a dedicated subagent: you're pointed at a part of the attack surface, knowing the genre of bug to look for, and your job is to demonstrate the vulnerability.

    It’s a bit like competing in a CTF challenge: try to find the flag you can only get by exploiting a vulnerability that’s placed at a certain location. In fact, we built a benchmark set of such tasks, and packaged them in a CTF-like style so we could easily repeat, scale, and assess our “solver agent’s” performance on it. The original set has, sadly, mostly outlived its usefulness because our solver agent is just too good on it by now, but we harvested more challenging examples from open source projects we ran on.

    The Agent’s Task

    On such a CTF-like challenge, the solver is basically an agentic loop set to work for a number of iterations. Each iteration consists of the solver deciding on an action: a command in a terminal, writing a Python script, running one of our pentesting tools. We vet the action and execute it, show the solver its result, and the solver decides on the next one. After a fixed number of iterations we cut our losses. Typically and for the experiments in this post, that number is 80: while we still get solves after more iterations, it becomes more efficient to start a new solver agent unburdened by the misunderstandings and false assumptions accumulated over time.
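The loop above can be sketched as follows. This is a minimal illustration, not XBOW's actual code: `vet_and_execute`, `solved`, and the dict-style message format are hypothetical stand-ins.

```python
MAX_ITERATIONS = 80  # past this point, a fresh agent tends to beat continuing


def run_solver(llm, system_prompt, vet_and_execute, solved):
    """Agentic loop: ask the model for an action, run it, feed back the result.

    `llm` maps the full message history to the model's next action;
    `vet_and_execute` runs a vetted action and returns its output;
    `solved` checks whether that output demonstrates the vulnerability.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for _ in range(MAX_ITERATIONS):
        action = llm(messages)                      # model decides on an action
        messages.append({"role": "assistant", "content": action})
        result = vet_and_execute(action)            # vetted, sandboxed execution
        messages.append({"role": "user", "content": result})
        if solved(result):
            return messages                         # vulnerability demonstrated
    return None                                     # cut our losses
```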

    What makes this task special, as an agentic task? Agentic AI is often used on the continuously-make-progress type of problems, where every step brings you closer to the goal. This task is more like prospecting through a vast search space: the agent digs in many places, follows false leads for a while, and eventually course corrects to strike gold somewhere else.

    Over the course of one challenge, among all the dead ends, the AI agent will need to come up with and combine a couple of great ideas.

    If you ever face an agentic AI task like that, model alloys may be for you.

    The LLM

    From the very beginning, it was part of our AI strategy that XBOW be model-provider agnostic. That means we can simply plug in the best LLM for our use case. Our benchmark set makes it easy to compare models, and we continuously evaluate new ones. For a while, OpenAI's GPT-4 was the best off-the-shelf model we evaluated, but once Anthropic's Sonnet 3.5 came along in June last year, no other provider came close for a long time, no matter how many models we tested.

    Sonnet 3.7 was a modest but recognizable improvement over its predecessor, but when Google released Gemini 2.5 Pro (in preview in March), it was a real step up. Then Anthropic hit back with Sonnet 4.0, which performed better again. On average, that is: on individual challenges, some are best solved by Gemini, some by Sonnet.

    That’s not terribly surprising. If every agent needs five good insights to progress through the challenge, then some sets of five are the kind that come easily to Sonnet, and some sets of five come easily to Gemini. But what about the challenges that need five good ideas, three of which are the kind that Sonnet is good at, and two are the kind that Gemini is good at?

    Alloyed Agents

    Like most typical AI agents, we call the model in a loop. The idea behind an alloy is simple: instead of always calling the same model, sometimes call one and sometimes the other.

    The trick is that you still keep to a single chat thread with one user and a single assistant. So while the true origin of the assistant messages in the conversation alternates, the models are not aware of each other: each model assumes that whatever the other one said, it said itself.

    So in the first round, you might call Sonnet for an action to get started, with a prompt like this:

    System:       Find the bug!

    Let’s say it tells you to use curl. You do that and gather the output to present to the model. So now you call Gemini with a prompt like this:

    System:       Find the bug!
    Assistant:    Let's start by curling the app.
    User:         You got a 401 Unauthorized response.

    Gemini might tell you to log in with the admin credentials, and you do that, and then you present the result to Sonnet:

    System:       Find the bug!
    Assistant:    Let's start by curling the app.
    User:         You got a 401 Unauthorized response.
    Assistant:    Let's try to log in with the admin credentials.
    User:         You got a 200 OK response.

    Some of the messages Sonnet believes it wrote were actually authored by Gemini, and vice versa.

    In our implementation, we actually make the model choice randomly for greater variation, but you could also alternate or experiment with more complex strategies.
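The random model choice is a small wrapper around the loop's `llm` callable. A sketch, assuming each model is itself a callable that maps the full message history to its next action; the `weights` parameter is a hypothetical addition for experimenting with imbalanced alloys, not XBOW's API:

```python
import random


def alloy(models, weights=None):
    """Return an 'LLM' that routes each call to a randomly chosen member model.

    All members share the same single-threaded conversation, so each model
    believes it authored every assistant message, including the ones that
    were actually written by the others.
    """
    def call(messages):
        model = random.choices(models, weights=weights, k=1)[0]
        return model(messages)
    return call


# Usage: an imbalanced alloy tilted towards the stronger individual model.
# solver_llm = alloy([sonnet, gemini], weights=[0.7, 0.3])
```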

    The key advantage of mixing the two models into an alloy is that:

    1. you keep the total number of model calls the same, but still
    2. you give each model the chance to contribute its strengths to the solution.

    In a situation where a couple of brilliant ideas are interspersed with workhorse-like follow-up actions, this is a great way to combine the strengths of different models.

    Results

    Just as an alloy of metals can be stronger than its individual components, whichever two (and sometimes three) models we combined, the alloy outperformed the individual models. Sonnet 3.7, GPT-4.1, Gemini 2.5 Pro, and Sonnet 4.0 all performed better when alloyed with each other than when used alone.

    But there are a couple of trends we observed:

    • The more different the models are, the better the alloy performs. Sonnet 4.0 and Gemini 2.5 Pro have the lowest correlation in solve rates of individual challenges (at a Spearman correlation coefficient of 0.46), and the alloy boost is the highest.
    • A model that’s better individually will tend to be better in an alloy. A model lagging very far behind others can even pull an alloy down.
    • If you weight an alloy unevenly, weight it towards the stronger individual model. We'll show some examples below.
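That correlation can be measured by taking each model's per-challenge solve rates and computing the Spearman rank correlation between the two vectors. A stdlib-only sketch; the solve rates here are made up for illustration:

```python
def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def ranks(vals):
        # Assign 1-based ranks, averaging ranks across tied values.
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank of the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)


# Hypothetical per-challenge solve rates for two models:
model_a = [0.1, 0.2, 0.5, 0.7, 0.9]
model_b = [0.5, 0.1, 0.7, 0.2, 0.9]
print(round(spearman(model_a, model_b), 2))  # → 0.5
```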

    When To Use Model Alloys

    Think of alloys if:

    • You approach your task by calling an LLM in an iterative loop until you reach a solution, with at least a double-digit number of model calls.
    • The task requires a number of different ideas to be combined to solve it.
    • But those ideas can come at different points in the process.
    • You have access to sufficiently different models.
    • All these models have their own strengths and weaknesses.

    When Not To Use Model Alloys

    Model alloys can be great, but they do have drawbacks. Situations that might make you think twice:

    • Your prompts are orders of magnitude longer than your completions, so you rely substantially on prompt caching to keep your costs down. With alloys, you need to cache everything twice, once for each model.
    • Your task is the steady-progress kind, without the occasional bursts of brilliance that alloys are good at combining. In that case, your alloy will probably just be as good as the average of the individuals.
    • You have a task only one model really excels at. Then you have nothing to alloy your favorite model with.
    • All your models agree on which tasks are hard and which are easy, and so they will not complement each other.

    That last point hit home for us when we tried to alloy different models from the same provider. When alloying Sonnet 3.7 and Sonnet 4.0, or Sonnet and Haiku, we saw performance that mirrored the average of the two constituents, no more. They were simply too similar to each other.

    It was only when combining models from different providers that we saw a real boost.

    That Reminds Me Of…

    We’re obviously not the first ones to realize that two heads are better than one, and there are a myriad of ways to combine the strengths of different models. Most of them fall into one of three categories though:

    • Use different models for different tasks, something e.g. heavily emphasized in the AutoGPT ecosystem.
      It’s not always easy to define these different tasks, but one common pattern is to use a higher tier model to do the planning, and a more specialized model to execute on that plan. The higher tier model may periodically check in on the progress to offer advice or adjust the plan.
      This is a good solution in many cases; we were put off by the amount of overhead it would add to our loop.
    • Ask different models, or the same model with different prompts, at each step. Then you either combine the answers, or take a vote, or use yet another model call to a judge to decide which answer is best. Mixture-of-Agents is a great example of that.
      This multiplies the number of model calls, of course, and wouldn't be efficient for our use case (we'd rather start more independent agents!).
    • Let models talk to each other directly, making their own case and refining each others’ answers. Exemplified in patterns like Multi-Agent Debate, this is a great solution for really critical individual actions.
      But XBOW is basically conducting a search, and it doesn’t need a committee to decide for each stone it turns over whether there might not be a better one.

    And obviously, you could just run one agent with Sonnet, and one with Gemini, and count it as a win if either of them solves the challenge. But since there’s a performance difference between those two models, that’s not even competitive against running only Sonnet 4, much less against running an alloyed agent.

    First Agent        Second Agent       Combined Success Rate
    Gemini 2.5         Gemini 2.5         46.4%
    Sonnet 4.0         Sonnet 4.0         57.5%
    Sonnet 4.0         Gemini 2.5         57.2%
    Alloy 2.5 + 4.0    Alloy 2.5 + 4.0    68.8%
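The combined numbers in the table can be reproduced from per-challenge results: a challenge counts as a win if either run solved it. A small sketch with hypothetical per-challenge outcomes:

```python
def combined_success(run_a, run_b):
    """Fraction of challenges solved by at least one of two independent runs."""
    assert len(run_a) == len(run_b)
    wins = sum(1 for a, b in zip(run_a, run_b) if a or b)
    return wins / len(run_a)


# Hypothetical per-challenge outcomes (True = solved) for two agents:
agent_one = [True, True, False, True, False]
agent_two = [True, False, True, False, False]
print(combined_success(agent_one, agent_two))  # → 0.8
```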

    Data

    If you want to play around with our data, do go ahead, we’re sharing it here — maybe you’ll see something we missed.

    More interestingly though, if you have a use case where you think model alloys might help, try it out! And write to me about it at [email protected] — I’d love to hear about your experience!
