    Phonely’s new AI agents hit 99% accuracy—and customers can’t tell they’re not human

By TechAiVerse · June 4, 2025 · 9 Mins Read

    June 3, 2025 1:31 PM



    A three-way partnership between AI phone support company Phonely, inference optimization platform Maitai, and chip maker Groq has achieved a breakthrough that addresses one of conversational artificial intelligence’s most persistent problems: the awkward delays that immediately signal to callers they’re talking to a machine.

    The collaboration has enabled Phonely to reduce response times by more than 70% while simultaneously boosting accuracy from 81.5% to 99.2% across four model iterations, surpassing GPT-4o’s 94.7% benchmark by 4.5 percentage points. The improvements stem from Groq’s new capability to instantly switch between multiple specialized AI models without added latency, orchestrated through Maitai’s optimization platform.

The achievement tackles what industry experts call the “uncanny valley” of voice AI — the subtle cues that make automated conversations feel distinctly non-human. For call centers and customer service operations, the implications could be transformative: one of Phonely’s customers is replacing 350 human agents this month alone.

    Why AI phone calls still sound robotic: the four-second problem

    Traditional large language models like OpenAI’s GPT-4o have long struggled with what appears to be a simple challenge: responding quickly enough to maintain natural conversation flow. While a few seconds of delay barely registers in text-based interactions, the same pause feels interminable during live phone conversations.

    “One of the things that most people don’t realize is that major LLM providers, such as OpenAI, Claude, and others have a very high degree of latency variance,” said Will Bodewes, Phonely’s founder and CEO, in an exclusive interview with VentureBeat. “4 seconds feels like an eternity if you’re talking to a voice AI on the phone – this delay is what makes most voice AI today feel non-human.”

    The problem occurs roughly once every ten requests, meaning standard conversations inevitably include at least one or two awkward pauses that immediately reveal the artificial nature of the interaction. For businesses considering AI phone agents, these delays have created a significant barrier to adoption.

    “This kind of latency is unacceptable for real-time phone support,” Bodewes explained. “Aside from latency, conversational accuracy and humanlike responses is something that legacy LLM providers just haven’t cracked in the voice realm.”

    How three startups solved AI’s biggest conversational challenge

    The solution emerged from Groq’s development of what the company calls “zero-latency LoRA hotswapping” — the ability to instantly switch between multiple specialized AI model variants without any performance penalty. LoRA, or Low-Rank Adaptation, allows developers to create lightweight, task-specific modifications to existing models rather than training entirely new ones from scratch.
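    The core idea behind LoRA can be sketched in a few lines of NumPy: the base weight matrix stays frozen, and each adapter contributes only a low-rank delta, which is why many adapters can sit in fast memory alongside a single base model. This is an illustrative sketch, not Groq’s implementation — the dimensions and names below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, r = 512, 8                           # hidden size, adapter rank (r << d)

    W = rng.standard_normal((d, d))         # frozen base weight, shared by all adapters
    A = rng.standard_normal((d, r)) * 0.01  # per-task down-projection (trainable)
    B = np.zeros((r, d))                    # per-task up-projection (zero-initialized)

    def forward(x, scale=1.0):
        # Output = base path + low-rank delta: xW + scale * (xA)B
        return x @ W + scale * (x @ A) @ B

    x = rng.standard_normal((1, d))
    y = forward(x)

    # Each adapter stores 2*d*r numbers instead of d*d -- here about 3% of the base.
    print(2 * d * r / (d * d))              # 0.03125
    ```

    Because each adapter is so small relative to the base model, swapping one adapter for another is a matter of pointing at a different pair of delta matrices rather than reloading full weights.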

    “Groq’s combination of fine-grained software controlled architecture, high-speed on-chip memory, streaming architecture, and deterministic execution means that it is possible to access multiple hot-swapped LoRAs with no latency penalty,” explained Chelsey Kantor, Groq’s chief marketing officer, in an interview with VentureBeat. “The LoRAs are stored and managed in SRAM alongside the original model weights.”

    This infrastructure advancement enabled Maitai to create what founder Christian DalSanto describes as a “proxy-layer orchestration” system that continuously optimizes model performance. “Maitai acts as a thin proxy layer between customers and their model providers,” DalSanto said. “This allows us to dynamically select and optimize the best model for every request, automatically applying evaluation, optimizations, and resiliency strategies such as fallbacks.”
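    A proxy layer with fallback of the kind DalSanto describes can be sketched in plain Python. The provider functions below are hypothetical stubs, not a real SDK; the point is only the routing pattern: try the preferred specialized model first, and fall back when it fails.

    ```python
    # Hypothetical provider stubs -- illustrative names, not a real API.
    def specialized_model(prompt: str) -> str:
        raise TimeoutError("specialized model exceeded its latency budget")

    def general_model(prompt: str) -> str:
        return f"general: {prompt}"

    def proxy(prompt: str, providers=(specialized_model, general_model)) -> str:
        """Try providers in preference order; fall back when one fails."""
        last_err = None
        for call in providers:
            try:
                return call(prompt)
            except Exception as err:
                last_err = err          # record the failure, try the next provider
        raise RuntimeError("all providers failed") from last_err

    print(proxy("reschedule my appointment"))  # falls back to the general model
    ```

    Because the proxy preserves the caller’s request/response shape, clients need no code changes when the routing policy behind it evolves.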

    The system works by collecting performance data from every interaction, identifying weak points, and iteratively improving the models without customer intervention. “Since Maitai sits in the middle of the inference flow, we collect strong signals identifying where models underperform,” DalSanto explained. “These ‘soft spots’ are clustered, labeled, and incrementally fine-tuned to address specific weaknesses without causing regressions.”

    From 81% to 99% accuracy: the numbers behind AI’s human-like breakthrough

    The results demonstrate significant improvements across multiple performance dimensions. Time to first token — how quickly an AI begins responding — dropped 73.4% from 661 milliseconds to 176 milliseconds at the 90th percentile. Overall completion times fell 74.6% from 1,446 milliseconds to 339 milliseconds.
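    Percentile figures like the p90 numbers above can be reproduced with the simple nearest-rank method; the time-to-first-token samples below are invented for illustration, not Phonely’s data.

    ```python
    def percentile(values, p):
        """Nearest-rank percentile: the sorted element closest to rank p."""
        s = sorted(values)
        k = round(p / 100 * (len(s) - 1))   # index of the p-th percentile
        return s[k]

    # Invented time-to-first-token samples, in milliseconds
    ttft_ms = [120, 150, 160, 170, 176, 180, 200, 220, 300, 661]
    print(percentile(ttft_ms, 90))          # 300 -- a single slow outlier dominates p90
    ```

    Reporting the 90th percentile rather than the mean is deliberate: callers experience the occasional slow response, not the average one, which is why latency variance matters so much for voice.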

    Perhaps more significantly, accuracy improvements followed a clear upward trajectory across four model iterations, starting at 81.5% and reaching 99.2% — a level that exceeds human performance in many customer service scenarios.

    “We’ve been seeing about 70%+ of people who call into our AI not being able to distinguish the difference between a person,” Bodewes told VentureBeat. “Latency is, or was, the dead giveaway that it was an AI. With a custom fine tuned model that talks like a person, and super low-latency hardware, there isn’t much stopping us from crossing the uncanny valley of sounding completely human.”

    The performance gains translate directly to business outcomes. “One of our biggest customers saw a 32% increase in qualified leads as compared to a previous version using previous state-of-the-art models,” Bodewes noted.

    350 human agents replaced in one month: call centers go all-in on AI

    The improvements arrive as call centers face mounting pressure to reduce costs while maintaining service quality. Traditional human agents require training, scheduling coordination, and significant overhead costs that AI agents can eliminate.

    “Call centers are really seeing huge benefits from using Phonely to replace human agents,” Bodewes said. “One of the call centers we work with is actually replacing 350 human agents completely with Phonely just this month. From a call center perspective this is a game changer, because they don’t have to manage human support agent schedules, train agents, and match supply and demand.”

    The technology shows particular strength in specific use cases. “Phonely really excels in a few areas, including industry-leading performance in appointment scheduling and lead qualification specifically, beyond what legacy providers are capable of,” Bodewes explained. The company has partnered with major firms handling insurance, legal, and automotive customer interactions.

    The hardware edge: why Groq’s chips make sub-second AI possible

    Groq’s specialized AI inference chips, called Language Processing Units (LPUs), provide the hardware foundation that makes the multi-model approach viable. Unlike the general-purpose graphics processors typically used for AI inference, LPUs are optimized specifically for the sequential nature of language processing.

    “The LPU architecture is optimized for precisely controlling data movement and computation at a fine-grained level with high speed and predictability, allowing the efficient management of multiple small ‘delta’ weights sets (the LoRAs) on a common base model with no additional latency,” Kantor said.

    The cloud-based infrastructure also addresses scalability concerns that have historically limited AI deployment. “The beauty of using a cloud-based solution like GroqCloud, is that Groq handles orchestration and dynamic scaling for our customers for any AI model we offer, including fine-tuned LoRA models,” Kantor explained.

    For enterprises, the economic advantages appear substantial. “The simplicity and efficiency of our system design, low power consumption, and high performance of our hardware, allows Groq to provide customers with the lowest cost per token without sacrificing performance as they scale,” Kantor said.

    Same-day AI deployment: how enterprises skip months of integration

    One of the partnership’s most compelling aspects is implementation speed. Unlike traditional AI deployments that can require months of integration work, Maitai’s approach enables same-day transitions for companies already using general-purpose models.

    “For companies already in production using general-purpose models, we typically transition them to Maitai on the same day, with zero disruption,” DalSanto said. “We begin immediate data collection, and within days to a week, we can deliver a fine-tuned model that’s faster and more reliable than their original setup.”

    This rapid deployment capability addresses a common enterprise concern about AI projects: lengthy implementation timelines that delay return on investment. The proxy-layer approach means companies can maintain their existing API integrations while gaining access to continuously improving performance.

    The future of enterprise AI: specialized models replace one-size-fits-all

    The collaboration signals a broader shift in enterprise AI architecture, moving away from monolithic, general-purpose models toward specialized, task-specific systems. “We’re observing growing demand from teams breaking their applications into smaller, highly specialized workloads, each benefiting from individual adapters,” DalSanto said.

    This trend reflects a maturing understanding of AI deployment challenges. Rather than expecting a single model to excel across all tasks, enterprises increasingly recognize the value of purpose-built solutions that can be continuously refined on real-world performance data.

    “Multi-LoRA hotswapping lets companies deploy faster, more accurate models customized precisely for their applications, removing traditional cost and complexity barriers,” DalSanto explained. “This fundamentally shifts how enterprise AI gets built and deployed.”

    The technical foundation also enables more sophisticated applications as the technology matures. Groq’s infrastructure can support dozens of specialized models on a single instance, potentially allowing enterprises to create highly customized AI experiences across different customer segments or use cases.

    “Multi-LoRA hotswapping enables low-latency, high-accuracy inference tailored to specific tasks,” DalSanto said. “Our roadmap prioritizes further investments in infrastructure, tools, and optimization to establish fine-grained, application-specific inference as the new standard.”

    For the broader conversational AI market, the partnership demonstrates that technical limitations once considered insurmountable can be addressed through specialized infrastructure and careful system design. As more enterprises deploy AI phone agents, the competitive advantages demonstrated by Phonely may establish new baseline expectations for performance and responsiveness in automated customer interactions.

    The success also validates the emerging model of AI infrastructure companies working together to solve complex deployment challenges. This collaborative approach may accelerate innovation across the enterprise AI sector as specialized capabilities combine to deliver solutions that exceed what any single provider could achieve independently. If this partnership is any indication, the era of obviously artificial phone conversations may be coming to an end faster than anyone expected.
