    Can we fix AI’s evaluation crisis?

    By TechAiVerse | June 24, 2025

    As a tech reporter I often get asked questions like “Is DeepSeek actually better than ChatGPT?” or “Is the Anthropic model any good?” If I don’t feel like turning it into an hour-long seminar, I’ll usually give the diplomatic answer: “They’re both solid in different ways.”

    Most people asking aren’t defining “good” in any precise way, and that’s fair. It’s human to want to make sense of something new and seemingly powerful. But that simple question—Is this model good?—is really just the everyday version of a much more complicated technical problem.

    So far, the way we’ve tried to answer that question is through benchmarks. These give models a fixed set of questions to answer and grade them on how many they get right. But much like exams such as the SAT (an admissions test used by many US colleges), these benchmarks don’t always reflect deeper abilities. Lately it feels as if a new AI model drops every week, and every time a company launches one, it comes with fresh scores showing it beating its predecessors. On paper, everything appears to be getting better all the time.
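
    To make that concrete, here is a minimal sketch of what benchmark scoring typically boils down to. The ask_model function and the question format are hypothetical placeholders, not any particular benchmark's API:

    ```python
    # Minimal sketch of accuracy-based benchmark scoring.
    # `ask_model` and the question format are illustrative placeholders.

    def ask_model(prompt: str) -> str:
        """Stand-in for a call to the model under evaluation."""
        raise NotImplementedError

    def score_benchmark(questions: list[dict]) -> float:
        """Return the fraction of questions the model answers correctly.

        Each item looks like {"prompt": ..., "answer": ...}.
        """
        correct = 0
        for item in questions:
            prediction = ask_model(item["prompt"])
            if prediction.strip() == item["answer"].strip():
                correct += 1
        return correct / len(questions)

    # A headline "benchmark score" is usually just this ratio, reported on a 0-100 scale.
    ```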

    In practice, it’s not so simple. Just as grinding for the SAT might boost your score without improving your critical thinking, models can be trained to optimize for benchmark results without actually getting smarter, as Russell Brandon explained in his piece for us. As OpenAI and Tesla AI veteran Andrej Karpathy recently put it, we’re living through an evaluation crisis—our scoreboard for AI no longer reflects what we really want to measure.

    Benchmarks have grown stale for a few key reasons. First, the industry has learned to “teach to the test,” training AI models to score well rather than genuinely improve. Second, widespread data contamination means models may have already seen the benchmark questions, or even the answers, somewhere in their training data. And finally, many benchmarks are simply maxed out. On popular tests like SuperGLUE, models have already reached or surpassed 90% accuracy, making further gains feel more like statistical noise than meaningful improvement. At that point, the scores stop telling us anything useful. That’s especially true in high-skill domains like coding, reasoning, and complex STEM problem-solving. 
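
    To see why those last few points stop being informative, consider the sampling error on an accuracy estimate. The numbers below are a back-of-the-envelope illustration with hypothetical figures, not results from any specific benchmark:

    ```python
    import math

    # Rough standard error of an accuracy estimate over n independent questions,
    # treating each question as a Bernoulli trial. Illustrative only.
    def accuracy_standard_error(accuracy: float, n_questions: int) -> float:
        return math.sqrt(accuracy * (1.0 - accuracy) / n_questions)

    # On a hypothetical 1,000-question benchmark, 91% vs. 92% is roughly a
    # one-standard-error difference, hard to distinguish from noise.
    print(accuracy_standard_error(0.91, 1000))  # ~0.009, i.e. about 0.9 points
    ```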

    However, a growing number of teams around the world are trying to address the AI evaluation crisis.

    One result is a new benchmark called LiveCodeBench Pro. It draws problems from international algorithmic olympiads—competitions for elite high school and university programmers where participants solve challenging problems without external tools. The top AI models currently manage only about 53% at first pass on medium-difficulty problems and 0% on the hardest ones. These are tasks where human experts routinely excel.

    Zihan Zheng, a junior at NYU and a world finalist in competitive coding, led the project to develop LiveCodeBench Pro with a team of olympiad medalists. They’ve published both the benchmark and a detailed study showing that top-tier models like GPT-4o mini and Google’s Gemini 2.5 perform at a level comparable to the top 10% of human competitors. Across the board, Zheng observed a pattern: AI excels at making plans and executing tasks, but it struggles with nuanced algorithmic reasoning. “It shows that AI is still far from matching the best human coders,” he says.

    LiveCodeBench Pro might set a new upper bar. But what about the floor? Earlier this month, a group of researchers from multiple universities argued that LLM agents should be evaluated primarily on the basis of their riskiness, not just how well they perform. In real-world, application-driven environments—especially with AI agents—unreliability, hallucinations, and brittleness are ruinous. One wrong move could spell disaster when money or safety is on the line.

    There are other new attempts to address the problem. Some benchmarks, like ARC-AGI, now keep part of their data set private to prevent AI models from being optimized excessively for the test, a problem called “overfitting.” Meta’s Yann LeCun has created LiveBench, a dynamic benchmark where questions evolve every six months. The goal is to evaluate models not just on knowledge but on adaptability.
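
    The mechanics of a private split are straightforward. Here is a rough sketch under the assumption of a simple local question pool; the split ratio and data format are illustrative, not how ARC-AGI actually partitions its data:

    ```python
    import random

    # Illustrative public/private benchmark split, in the spirit of benchmarks
    # that withhold part of their data to limit overfitting. The split ratio and
    # field names are assumptions, not any real benchmark's setup.

    def split_benchmark(questions: list[dict], private_fraction: float = 0.3,
                        seed: int = 0) -> tuple[list[dict], list[dict]]:
        """Shuffle the question pool and reserve a held-out private set."""
        rng = random.Random(seed)
        shuffled = questions[:]
        rng.shuffle(shuffled)
        cutoff = int(len(shuffled) * private_fraction)
        private_set = shuffled[:cutoff]   # answers stay with the maintainers
        public_set = shuffled[cutoff:]    # released so labs can test locally
        return public_set, private_set
    ```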

    Xbench, a Chinese benchmark project developed by HongShan Capital Group (formerly Sequoia China), is another of these efforts. I just wrote about it in a story. Xbench was initially built in 2022—right after ChatGPT’s launch—as an internal tool to evaluate models for investment research. Over time, the team expanded the system and brought in external collaborators. It just made parts of its question set publicly available last week.

    Xbench is notable for its dual-track design, which tries to bridge the gap between lab-based tests and real-world utility. The first track evaluates technical reasoning skills by testing a model’s STEM knowledge and ability to carry out Chinese-language research. The second track aims to assess practical usefulness—how well a model performs on tasks in fields like recruitment and marketing. For instance, one task asks an agent to identify five qualified battery engineer candidates; another has it match brands with relevant influencers from a pool of more than 800 creators. 

    The team behind Xbench has big ambitions. They plan to expand its testing capabilities into sectors like finance, law, and design, and they plan to update the test set quarterly to avoid stagnation. 

    This is something that I often wonder about, because a model’s hardcore reasoning ability doesn’t necessarily translate into a fun, informative, and creative experience. Most queries from average users are probably not going to be rocket science. There isn’t much research yet on how to effectively evaluate a model’s creativity, but I’d love to know which model would be the best for creative writing or art projects.

    Human preference testing has also emerged as an alternative to benchmarks. One increasingly popular platform is LMArena, which lets users submit questions and compare responses from different models side by side—and then pick which one they like best. Still, this method has its flaws. Users sometimes reward the answer that sounds more flattering or agreeable, even if it’s wrong. That can incentivize “sweet-talking” models and skew results in favor of pandering.
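
    Under the hood, platforms like this turn thousands of pairwise votes into a leaderboard, and an Elo-style update is one common way to do that aggregation. The sketch below is a generic illustration, not LMArena's actual rating system:

    ```python
    # Illustrative Elo-style aggregation of pairwise preference votes.
    # A generic sketch, not LMArena's actual implementation.

    def expected_score(rating_a: float, rating_b: float) -> float:
        """Probability that model A beats model B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

    def update_ratings(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
        """Shift both ratings toward the observed outcome of one user vote."""
        expected_win = expected_score(ratings[winner], ratings[loser])
        ratings[winner] += k * (1.0 - expected_win)
        ratings[loser] -= k * (1.0 - expected_win)

    # Example: every model starts at 1000; each side-by-side vote nudges the scores.
    ratings = {"model_a": 1000.0, "model_b": 1000.0}
    update_ratings(ratings, winner="model_a", loser="model_b")
    print(ratings)  # model_a moves above 1000, model_b below
    ```

    One consequence of this design is that the ranking reflects only which answer voters preferred, which is exactly why flattering but wrong answers can climb the board.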

    AI researchers are beginning to realize—and admit—that the status quo of AI testing cannot continue. At the recent CVPR conference, NYU professor Saining Xie drew on historian James Carse’s Finite and Infinite Games to critique the hypercompetitive culture of AI research. An infinite game, he noted, is open-ended—the goal is to keep playing. But in AI, a dominant player often drops a big result, triggering a wave of follow-up papers chasing the same narrow topic. This race-to-publish culture puts enormous pressure on researchers and rewards speed over depth, short-term wins over long-term insight. “If academia chooses to play a finite game,” he warned, “it will lose everything.”

    I found his framing powerful—and maybe it applies to benchmarks, too. So, do we have a truly comprehensive scoreboard for how good a model is? Not really. Many dimensions—social, emotional, interdisciplinary—still evade assessment. But the wave of new benchmarks hints at a shift. As the field evolves, a bit of skepticism is probably healthy.

    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
