    Just add humans: Oxford medical study underscores the missing link in chatbot testing


    June 13, 2025 5:34 PM

    (Image credit: Created by VentureBeat using ChatGPT)

    Headlines have been blaring it for years: Large language models (LLMs) can not only pass medical licensing exams but also outperform humans. GPT-4 could correctly answer U.S. medical licensing exam questions 90% of the time, even in the prehistoric AI days of 2023. Since then, LLMs have gone on to best the residents taking those exams and licensed physicians.

    Move over, Doctor Google; make way for ChatGPT, M.D. But you may want more than a diploma from the LLM you deploy for patients. Like an ace medical student who can rattle off the name of every bone in the hand but faints at the first sight of real blood, an LLM’s mastery of medicine does not always translate directly into the real world.

    A paper by researchers at the University of Oxford found that while LLMs could correctly identify relevant conditions 94.9% of the time when directly presented with test scenarios, human participants using LLMs to diagnose the same scenarios identified the correct conditions less than 34.5% of the time.

    Perhaps even more notably, patients using LLMs performed even worse than a control group that was merely instructed to diagnose themselves using “any methods they would typically employ at home.” The group left to their own devices was 76% more likely to identify the correct conditions than the group assisted by LLMs.

    The Oxford study raises questions about the suitability of LLMs for medical advice and the benchmarks we use to evaluate chatbot deployments for various applications.

    Guess your malady

    Led by Dr. Adam Mahdi, researchers at Oxford recruited 1,298 participants to present themselves as patients to an LLM. Each was tasked with figuring out both what ailed them and the appropriate level of care to seek for it, ranging from self-care to calling an ambulance.

    Each participant received a detailed scenario, representing conditions from pneumonia to the common cold, along with general life details and medical history. For instance, one scenario describes a 20-year-old engineering student who develops a crippling headache on a night out with friends. It includes important medical details (it’s painful to look down) and red herrings (he’s a regular drinker, shares an apartment with six friends, and just finished some stressful exams).

    The study tested three different LLMs. The researchers selected GPT-4o on account of its popularity, Llama 3 for its open weights and Command R+ for its retrieval-augmented generation (RAG) abilities, which allow it to search the open web for help.

    Participants were asked to interact with the LLM at least once using the details provided, but could use it as many times as they wanted to arrive at their self-diagnosis and intended action.

    Behind the scenes, a team of physicians unanimously decided on the “gold standard” conditions they sought in every scenario, and the corresponding course of action. Our engineering student, for example, is suffering from a subarachnoid haemorrhage, which should entail an immediate visit to the ER.
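
    To make the protocol concrete, here is a minimal sketch, in Python, of how a scenario and its physician-agreed gold standard might be represented in an evaluation harness. The field names, labels, and exact wording below are illustrative assumptions, not the study’s actual data format.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One patient vignette plus the physicians' gold-standard answers.

    Field names and action labels are illustrative, not the paper's schema.
    """
    vignette: str               # full case text handed to the participant
    gold_conditions: list[str]  # conditions the physician panel agreed on
    gold_action: str            # e.g. "self-care", "see a GP", "go to the ER"

engineering_student = Scenario(
    vignette=(
        "A 20-year-old engineering student develops a crippling headache on a "
        "night out with friends; it is painful to look down; he is a regular "
        "drinker, shares an apartment with six friends, and just finished "
        "stressful exams."
    ),
    gold_conditions=["subarachnoid haemorrhage"],
    gold_action="go to the ER",
)
```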

    A game of telephone

    While you might assume an LLM that can ace a medical exam would be the perfect tool to help ordinary people self-diagnose and figure out what to do, it didn’t work out that way. “Participants using an LLM identified relevant conditions less consistently than those in the control group, identifying at least one relevant condition in at most 34.5% of cases compared to 47.0% for the control,” the study states. They also failed to deduce the correct course of action, selecting it just 44.2% of the time, compared to 56.3% for an LLM acting independently.

    What went wrong?

    Looking back at transcripts, researchers found that participants both provided incomplete information to the LLMs and the LLMs misinterpreted their prompts. For instance, one user who was supposed to exhibit symptoms of gallstones merely told the LLM: “I get severe stomach pains lasting up to an hour, It can make me vomit and seems to coincide with a takeaway,” omitting the location of the pain, the severity, and the frequency. Command R+ incorrectly suggested that the participant was experiencing indigestion, and the participant incorrectly guessed that condition.

    Even when LLMs delivered the correct information, participants didn’t always follow their recommendations. The study found that 65.7% of GPT-4o conversations suggested at least one relevant condition for the scenario, yet fewer than 34.5% of participants’ final answers reflected those relevant conditions.

    The human variable

    This study is useful, but not surprising, according to Nathalie Volkheimer, a user experience specialist at the Renaissance Computing Institute (RENCI), University of North Carolina at Chapel Hill.

    “For those of us old enough to remember the early days of internet search, this is déjà vu,” she says. “As a tool, large language models require prompts to be written with a particular degree of quality, especially when expecting a quality output.”

    She points out that someone experiencing blinding pain wouldn’t offer great prompts. Although participants in the lab experiment weren’t experiencing the symptoms directly, they still weren’t relaying every detail.

    “There is also a reason why clinicians who deal with patients on the front line are trained to ask questions in a certain way and a certain repetitiveness,” Volkheimer goes on. Patients omit information because they don’t know what’s relevant, or at worst, lie because they’re embarrassed or ashamed.

    Can chatbots be better designed to address these issues? “I wouldn’t put the emphasis on the machinery here,” Volkheimer cautions. “I would consider the emphasis should be on the human-technology interaction.” The car, she analogizes, was built to get people from point A to point B, but many other factors play a role. “It’s about the driver, the roads, the weather, and the general safety of the route. It isn’t just up to the machine.”

    A better yardstick

    The Oxford study highlights one problem, not with humans or even LLMs, but with the way we sometimes measure them—in a vacuum.

    When we say an LLM can pass a medical licensing test, a real estate licensing exam, or a state bar exam, we’re probing the depths of its knowledge base using tools designed to evaluate humans. However, these measures tell us very little about how successfully these chatbots will interact with humans.

    “The prompts were textbook (as validated by the source and medical community), but life and people are not textbook,” explains Dr. Volkheimer.

    Imagine an enterprise about to deploy a support chatbot trained on its internal knowledge base. One seemingly logical way to test that bot might simply be to have it take the same test the company uses for customer support trainees: answering prewritten “customer” support questions and selecting multiple-choice answers. An accuracy of 95% would certainly look pretty promising.
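
    As a rough illustration, that kind of non-interactive test might look like the sketch below. The `ask_model` callable is a hypothetical stand-in for whatever chat API the company actually uses, and the sample question is invented.

```python
# Minimal sketch of a static, non-interactive benchmark: the model sees a
# cleanly worded question and a fixed set of choices, nothing more.
from typing import Callable

def static_benchmark_accuracy(ask_model: Callable[[str], str],
                              questions: list[dict]) -> float:
    correct = 0
    for q in questions:
        prompt = (q["question"] + "\n" + "\n".join(q["choices"])
                  + "\nAnswer with a single letter.")
        reply = ask_model(prompt).strip().upper()
        correct += reply.startswith(q["answer"])  # exact-match grading
    return correct / len(questions)

# Prewritten "trainee exam" items; real customers will not phrase things this way.
questions = [
    {"question": "A customer cannot reset their password. Which article applies?",
     "choices": ["A) Billing FAQ", "B) Password reset guide", "C) API reference"],
     "answer": "B"},
]

# print(f"{static_benchmark_accuracy(call_my_chat_model, questions):.0%}")
```

    A high score on items like these says nothing about how the model copes with vague wording, frustration, or the back-and-forth needed to pin down what a customer actually means.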

    Then comes deployment: Real customers use vague terms, express frustration, or describe problems in unexpected ways. The LLM, benchmarked only on clear-cut questions, gets confused and provides incorrect or unhelpful answers. It hasn’t been trained or evaluated on de-escalating situations or seeking clarification effectively. Angry reviews pile up. The launch is a disaster, despite the LLM sailing through tests that seemed robust for its human counterparts.

    This study serves as a critical reminder for AI engineers and orchestration specialists: if an LLM is designed to interact with humans, relying solely on non-interactive benchmarks can create a dangerous false sense of security about its real-world capabilities. If you’re designing an LLM to interact with humans, you need to test it with humans – not tests for humans. But is there a better way?

    Using AI to test AI

    The Oxford researchers recruited nearly 1,300 people for their study, but most enterprises don’t have a pool of test subjects sitting around waiting to play with a new LLM agent. So why not just substitute AI testers for human testers?

    Mahdi and his team tried that, too, with simulated participants. “You are a patient,” they prompted an LLM, separate from the one that would provide the advice. “You have to self-assess your symptoms from the given case vignette and assistance from an AI model. Simplify terminology used in the given paragraph to layman language and keep your questions or statements reasonably short.” The LLM was also instructed not to use medical knowledge or generate new symptoms.

    These simulated participants then chatted with the same LLMs the human participants used. But they performed much better. On average, simulated participants using the same LLM tools nailed the relevant conditions 60.7% of the time, compared to below 34.5% in humans.
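
    For readers who want to picture that setup, below is a minimal sketch of such a two-model harness. The `chat` wrapper, the fixed turn limit, and the substring-based scoring are illustrative assumptions, not the study’s actual code.

```python
# Sketch of a two-model loop in the spirit of the simulated-participant setup:
# one model is prompted as the patient (using the instruction quoted above),
# another gives advice, and the patient's final answer is checked against the
# gold-standard condition. `chat(system, messages)` stands in for any
# chat-completion API.
from typing import Callable

PATIENT_SYSTEM = (
    "You are a patient. You have to self-assess your symptoms from the given "
    "case vignette and assistance from an AI model. Simplify terminology used "
    "in the given paragraph to layman language and keep your questions or "
    "statements reasonably short. Do not use medical knowledge or generate "
    "new symptoms."
)
ADVISOR_SYSTEM = "You are an assistant helping a person assess their symptoms."

ChatFn = Callable[[str, list[dict]], str]

def run_simulated_consult(chat: ChatFn, vignette: str,
                          gold_conditions: list[str], turns: int = 5) -> bool:
    patient_msgs = [{"role": "user", "content": f"Case vignette:\n{vignette}"}]
    advisor_msgs: list[dict] = []

    for _ in range(turns):
        patient_turn = chat(PATIENT_SYSTEM, patient_msgs)   # patient describes / asks
        advisor_msgs.append({"role": "user", "content": patient_turn})
        advisor_turn = chat(ADVISOR_SYSTEM, advisor_msgs)   # advisor responds
        advisor_msgs.append({"role": "assistant", "content": advisor_turn})
        patient_msgs.append({"role": "assistant", "content": patient_turn})
        patient_msgs.append({"role": "user", "content": advisor_turn})

    final = chat(PATIENT_SYSTEM, patient_msgs + [{
        "role": "user",
        "content": "State your final self-diagnosis and what you would do next.",
    }])
    # Crude scoring: did the final answer name one of the gold-standard conditions?
    return any(cond.lower() in final.lower() for cond in gold_conditions)
```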

    In this case, it turns out LLMs play nicer with other LLMs than humans do, which makes them a poor predictor of real-life performance.

    Don’t blame the user

    Given the scores LLMs could attain on their own, it might be tempting to blame the participants here. After all, in many cases, they received the right diagnosis in their conversations with LLMs but still failed to guess it correctly. But that would be a foolhardy conclusion for any business, Volkheimer warns.

    “In every customer environment, if your customers aren’t doing the thing you want them to, the last thing you do is blame the customer,” says Volkheimer. “The first thing you do is ask why. And not the ‘why’ off the top of your head: but a deep investigative, specific, anthropological, psychological, examined ‘why.’ That’s your starting point.”

    You need to understand your audience, their goals, and the customer experience before deploying a chatbot, Volkheimer suggests. All of these will inform the thorough, specialized documentation that will ultimately make an LLM useful. Without carefully curated training materials, “It’s going to spit out some generic answer everyone hates, which is why people hate chatbots,” she says. When that happens, “It’s not because chatbots are terrible or because there’s something technically wrong with them. It’s because the stuff that went in them is bad.”

    “The people designing technology, developing the information to go in there and the processes and systems are, well, people,” says Volkheimer. “They also have background, assumptions, flaws and blindspots, as well as strengths. And all those things can get built into any technological solution.”
