
    Phonely’s new AI agents hit 99% accuracy—and customers can’t tell they’re not human


    June 3, 2025 1:31 PM

    A three-way partnership between AI phone support company Phonely, inference optimization platform Maitai, and chip maker Groq has achieved a breakthrough that addresses one of conversational artificial intelligence’s most persistent problems: the awkward delays that immediately signal to callers they’re talking to a machine.

    The collaboration has enabled Phonely to reduce response times by more than 70% while simultaneously boosting accuracy from 81.5% to 99.2% across four model iterations, surpassing GPT-4o’s 94.7% benchmark by 4.5 percentage points. The improvements stem from Groq’s new capability to instantly switch between multiple specialized AI models without added latency, orchestrated through Maitai’s optimization platform.

    The achievement solves what industry experts call the “uncanny valley” of voice AI — the subtle cues that make automated conversations feel distinctly non-human. For call centers and customer service operations, the implications could be transformative: one of Phonely’s customers is replacing 350 human agents this month alone.

    Why AI phone calls still sound robotic: the four-second problem

    Traditional large language models like OpenAI’s GPT-4o have long struggled with what appears to be a simple challenge: responding quickly enough to maintain natural conversation flow. While a few seconds of delay barely registers in text-based interactions, the same pause feels interminable during live phone conversations.

    “One of the things that most people don’t realize is that major LLM providers, such as OpenAI, Claude, and others have a very high degree of latency variance,” said Will Bodewes, Phonely’s founder and CEO, in an exclusive interview with VentureBeat. “4 seconds feels like an eternity if you’re talking to a voice AI on the phone – this delay is what makes most voice AI today feel non-human.”

    The problem occurs roughly once every ten requests, meaning standard conversations inevitably include at least one or two awkward pauses that immediately reveal the artificial nature of the interaction. For businesses considering AI phone agents, these delays have created a significant barrier to adoption.

    “This kind of latency is unacceptable for real-time phone support,” Bodewes explained. “Aside from latency, conversational accuracy and humanlike responses is something that legacy LLM providers just haven’t cracked in the voice realm.”

    How three startups solved AI’s biggest conversational challenge

    The solution emerged from Groq’s development of what the company calls “zero-latency LoRA hotswapping” — the ability to instantly switch between multiple specialized AI model variants without any performance penalty. LoRA, or Low-Rank Adaptation, allows developers to create lightweight, task-specific modifications to existing models rather than training entirely new ones from scratch.
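The core idea behind LoRA can be illustrated in a few lines of NumPy: instead of retraining the full weight matrix, an adapter learns two small low-rank matrices whose product is added to the frozen base weights at inference time. The sizes and values below are purely illustrative, not Groq's implementation:

```python
import numpy as np

# Frozen base weights of one layer (d_out x d_in) -- illustrative sizes.
d_out, d_in, rank = 512, 512, 8
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))

# A LoRA adapter is just two small matrices: B (d_out x r) and A (r x d_in).
# Storing several adapters costs far less memory than several full models,
# which is what makes keeping many of them resident (e.g. in SRAM) feasible.
B = rng.standard_normal((d_out, rank)) * 0.01
A = rng.standard_normal((rank, d_in)) * 0.01

x = rng.standard_normal(d_in)

# Base output vs. adapted output: (W + BA)x = Wx + B(Ax).
base_out = W @ x
lora_out = base_out + B @ (A @ x)

# The adapter adds ~2*d*r parameters instead of d*d.
full_params = d_out * d_in
lora_params = B.size + A.size
print(f"full: {full_params}, adapter: {lora_params}")
```

Because the adapter is a small additive term on a shared base model, swapping between task-specific variants amounts to swapping which `B` and `A` are applied, rather than reloading a full set of weights.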

    “Groq’s combination of fine-grained software controlled architecture, high-speed on-chip memory, streaming architecture, and deterministic execution means that it is possible to access multiple hot-swapped LoRAs with no latency penalty,” explained Chelsey Kantor, Groq’s chief marketing officer, in an interview with VentureBeat. “The LoRAs are stored and managed in SRAM alongside the original model weights.”

    This infrastructure advancement enabled Maitai to create what founder Christian DalSanto describes as a “proxy-layer orchestration” system that continuously optimizes model performance. “Maitai acts as a thin proxy layer between customers and their model providers,” DalSanto said. “This allows us to dynamically select and optimize the best model for every request, automatically applying evaluation, optimizations, and resiliency strategies such as fallbacks.”
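What a fallback-capable proxy layer does can be sketched in a few lines; the provider names and routing rule here are hypothetical stand-ins for illustration, not Maitai's actual logic:

```python
def call_provider(name, request):
    # Stand-in for a real model API call; the hypothetical
    # "slow-model" provider fails to simulate a latency blowup.
    if name == "slow-model":
        raise TimeoutError("provider exceeded latency budget")
    return f"{name}: answer to {request!r}"

def route(request, providers):
    """Try providers in preference order, falling back on failure."""
    errors = []
    for name in providers:
        try:
            return call_provider(name, request)
        except Exception as exc:
            # A real proxy would record (provider, error, latency) here
            # to feed the evaluation and fine-tuning loop.
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

print(route("book an appointment", ["slow-model", "fast-finetune"]))
```

The design point is that the caller's API surface never changes: the proxy absorbs provider selection, retries, and fallbacks behind a single entry point.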

    The system works by collecting performance data from every interaction, identifying weak points, and iteratively improving the models without customer intervention. “Since Maitai sits in the middle of the inference flow, we collect strong signals identifying where models underperform,” DalSanto explained. “These ‘soft spots’ are clustered, labeled, and incrementally fine-tuned to address specific weaknesses without causing regressions.”
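The soft-spot identification step can be illustrated with a toy interaction log; the intent labels below are invented for the example:

```python
from collections import Counter

# Hypothetical interaction log collected at the proxy layer:
# (intent_label, succeeded) pairs.
log = [
    ("schedule_appointment", True), ("schedule_appointment", True),
    ("insurance_quote", False), ("insurance_quote", False),
    ("insurance_quote", True), ("lead_qualification", True),
    ("insurance_quote", False), ("schedule_appointment", True),
]

# Count failures per intent; intents failing repeatedly are the
# "soft spots" worth targeting with an incremental fine-tune.
failures = Counter(label for label, ok in log if not ok)
soft_spots = [label for label, n in failures.most_common() if n >= 2]

print(soft_spots)  # ['insurance_quote']
```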

    From 81% to 99% accuracy: the numbers behind AI’s human-like breakthrough

    The results demonstrate significant improvements across multiple performance dimensions. Time to first token — how quickly an AI begins responding — dropped 73.4% from 661 milliseconds to 176 milliseconds at the 90th percentile. Overall completion times fell 74.6% from 1,446 milliseconds to 339 milliseconds.
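The percentile figures above summarize a distribution of per-request timings. A small sketch of how a 90th-percentile ("p90") latency is computed (the sample values are invented, not Phonely's data):

```python
import statistics

# Hypothetical time-to-first-token samples in milliseconds.
samples_before = [420, 510, 480, 390, 661, 450, 600, 430, 655, 470]
samples_after  = [120, 140, 176, 110, 130, 150, 125, 160, 170, 115]

def p90(samples):
    # statistics.quantiles with n=10 returns the 9 decile cut points;
    # index 8 is the 90th percentile.
    return statistics.quantiles(samples, n=10)[8]

before, after = p90(samples_before), p90(samples_after)
print(f"p90 before: {before:.0f} ms, p90 after: {after:.0f} ms, "
      f"reduction: {100 * (1 - after / before):.1f}%")
```

Reporting the p90 rather than the mean matters for voice AI: as the article notes, it is the occasional slow response, not the average one, that breaks the illusion of a human conversation.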

    Perhaps more significantly, accuracy improvements followed a clear upward trajectory across four model iterations, starting at 81.5% and reaching 99.2% — a level that exceeds human performance in many customer service scenarios.

    “We’ve been seeing about 70%+ of people who call into our AI not being able to distinguish the difference between a person,” Bodewes told VentureBeat. “Latency is, or was, the dead giveaway that it was an AI. With a custom fine tuned model that talks like a person, and super low-latency hardware, there isn’t much stopping us from crossing the uncanny valley of sounding completely human.”

    The performance gains translate directly to business outcomes. “One of our biggest customers saw a 32% increase in qualified leads as compared to a previous version using previous state-of-the-art models,” Bodewes noted.

    350 human agents replaced in one month: call centers go all-in on AI

    The improvements arrive as call centers face mounting pressure to reduce costs while maintaining service quality. Traditional human agents require training, scheduling coordination, and significant overhead costs that AI agents can eliminate.

    “Call centers are really seeing huge benefits from using Phonely to replace human agents,” Bodewes said. “One of the call centers we work with is actually replacing 350 human agents completely with Phonely just this month. From a call center perspective this is a game changer, because they don’t have to manage human support agent schedules, train agents, and match supply and demand.”

    The technology shows particular strength in specific use cases. “Phonely really excels in a few areas, including industry-leading performance in appointment scheduling and lead qualification specifically, beyond what legacy providers are capable of,” Bodewes explained. The company has partnered with major firms handling insurance, legal, and automotive customer interactions.

    The hardware edge: why Groq’s chips make sub-second AI possible

    Groq’s specialized AI inference chips, called Language Processing Units (LPUs), provide the hardware foundation that makes the multi-model approach viable. Unlike general-purpose graphics processors typically used for AI inference, LPUs optimize specifically for the sequential nature of language processing.

    “The LPU architecture is optimized for precisely controlling data movement and computation at a fine-grained level with high speed and predictability, allowing the efficient management of multiple small ‘delta’ weights sets (the LoRAs) on a common base model with no additional latency,” Kantor said.

    The cloud-based infrastructure also addresses scalability concerns that have historically limited AI deployment. “The beauty of using a cloud-based solution like GroqCloud, is that Groq handles orchestration and dynamic scaling for our customers for any AI model we offer, including fine-tuned LoRA models,” Kantor explained.

    For enterprises, the economic advantages appear substantial. “The simplicity and efficiency of our system design, low power consumption, and high performance of our hardware, allows Groq to provide customers with the lowest cost per token without sacrificing performance as they scale,” Kantor said.

    Same-day AI deployment: how enterprises skip months of integration

    One of the partnership’s most compelling aspects is implementation speed. Unlike traditional AI deployments that can require months of integration work, Maitai’s approach enables same-day transitions for companies already using general-purpose models.

    “For companies already in production using general-purpose models, we typically transition them to Maitai on the same day, with zero disruption,” DalSanto said. “We begin immediate data collection, and within days to a week, we can deliver a fine-tuned model that’s faster and more reliable than their original setup.”

    This rapid deployment capability addresses a common enterprise concern about AI projects: lengthy implementation timelines that delay return on investment. The proxy-layer approach means companies can maintain their existing API integrations while gaining access to continuously improving performance.

    The future of enterprise AI: specialized models replace one-size-fits-all

    The collaboration signals a broader shift in enterprise AI architecture, moving away from monolithic, general-purpose models toward specialized, task-specific systems. “We’re observing growing demand from teams breaking their applications into smaller, highly specialized workloads, each benefiting from individual adapters,” DalSanto said.

    This trend reflects a maturing understanding of AI deployment challenges. Rather than expecting a single model to excel across all tasks, enterprises increasingly recognize the value of purpose-built solutions that can be continuously refined based on real-world performance data.

    “Multi-LoRA hotswapping lets companies deploy faster, more accurate models customized precisely for their applications, removing traditional cost and complexity barriers,” DalSanto explained. “This fundamentally shifts how enterprise AI gets built and deployed.”

    The technical foundation also enables more sophisticated applications as the technology matures. Groq’s infrastructure can support dozens of specialized models on a single instance, potentially allowing enterprises to create highly customized AI experiences across different customer segments or use cases.

    “Multi-LoRA hotswapping enables low-latency, high-accuracy inference tailored to specific tasks,” DalSanto said. “Our roadmap prioritizes further investments in infrastructure, tools, and optimization to establish fine-grained, application-specific inference as the new standard.”

    For the broader conversational AI market, the partnership demonstrates that technical limitations once considered insurmountable can be addressed through specialized infrastructure and careful system design. As more enterprises deploy AI phone agents, the competitive advantages demonstrated by Phonely may establish new baseline expectations for performance and responsiveness in automated customer interactions.

    The success also validates the emerging model of AI infrastructure companies working together to solve complex deployment challenges. This collaborative approach may accelerate innovation across the enterprise AI sector as specialized capabilities combine to deliver solutions that exceed what any single provider could achieve independently. If this partnership is any indication, the era of obviously artificial phone conversations may be coming to an end faster than anyone expected.
