    Stop using natural language interfaces

    By TechAiVerse · January 14, 2026 · 6 min read

    Natural language is a wonderful interface, but just because we suddenly can doesn’t mean we always should. LLM inference is slow and expensive, often taking tens of seconds to complete, so natural language interfaces have orders of magnitude more latency than conventional graphical user interfaces. This doesn’t mean we shouldn’t use LLMs; it means we need to be smart about how we build interfaces around them.

    The Latency Problem

    There’s a classic CS diagram visualizing latency numbers for various compute operations: nanoseconds to lock a mutex, microseconds to reference main memory, milliseconds to read 1 MB from disk. LLM inference usually takes tens of seconds to complete. Streaming responses help compensate, but it’s still slow.

    Compare interacting with an LLM over multiple turns to filling in a checklist, selecting items from a pulldown menu, setting a value on a slider, or stepping through a series of such interactions as you fill out a multi-field dialog. Graphical user interfaces are fast, with responses taking milliseconds, not seconds. But. But: they’re not smart, they’re not adaptive, they don’t shape themselves to the conversation with the full benefit of semantic understanding.

    This is a post about how to provide the best of both worlds: the clean affordances of structured user interfaces with the flexibility of natural language. Every part of the above interface was generated on the fly by an LLM.

    Popup-MCP

    This is a post about a tool I made called popup-mcp (MCP is a standardized tool-use interface for LLMs). I built it about six months ago and have been experimenting with it as a core part of my LLM interaction modality ever since; it’s a big part of what made me so fond of MCP tools from such an early stage. Popup provides a single tool that, when invoked, spawns a popup with an arbitrary collection of GUI elements.

    You can find popup here, along with instructions on how to use it. It’s a local MCP tool that uses stdio, which means the process needs to run on the same computer as your LLM client. Popup supports structured GUIs made up of elements including multiple-choice checkboxes, drop-downs, sliders, and text boxes. These let LLMs render popups like the following:
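To make the element palette concrete, here is a hypothetical popup specification in the spirit of the tool. The field names and structure are my own illustration, not popup-mcp’s actual schema:

```python
# A hypothetical popup specification illustrating the element types described
# above. Field names are an assumption, not popup-mcp's actual schema.
popup_spec = {
    "title": "New project setup",
    "elements": [
        {"id": "lang", "type": "dropdown",
         "label": "Primary language", "options": ["Python", "Rust", "Go"]},
        {"id": "features", "type": "multiselect",
         "label": "Features to scaffold", "options": ["CLI", "Web UI", "Tests"]},
        {"id": "strictness", "type": "slider",
         "label": "Lint strictness", "min": 0, "max": 10},
        {"id": "notes", "type": "textbox", "label": "Anything else?"},
    ],
}

# The LLM would pass a spec like this as the tool-call argument; the server
# renders it as a native popup and blocks until the user submits.
element_types = [e["type"] for e in popup_spec["elements"]]
```

The point is that the LLM composes a whole form in one tool call, rather than asking one question per conversational turn.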

    The popup tool supports conditional visibility to allow for context-specific followup questions. Some elements start hidden, only becoming visible when conditions like ‘checkbox clicked’, ‘slider value > 7’, or ‘checkbox A clicked && slider B < 7 && slider C > 8’ become true. This lets LLMs construct complex and nuanced structures capturing not just their next stage of the conversation but where they think the conversation might go from there. Think of these as being a bit like conditional dialogue trees in CRPGs like Baldur’s Gate or interview trees as used in consulting. The previous dialog, for example, expands as follows:
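Conditional visibility can be sketched as a predicate attached to each hidden element, re-evaluated against the current form state on every local interaction. This is an illustration of the idea, not popup-mcp’s actual condition syntax:

```python
# Sketch of conditional visibility: a hidden element carries a predicate over
# the current form state, re-evaluated locally on every interaction.
# The schema and field names here are hypothetical.
def visible(element, state):
    cond = element.get("visible_when")
    return True if cond is None else cond(state)

followup = {
    "id": "perf_details", "type": "textbox",
    "label": "Which operations feel slow?",
    # Shown only when 'performance' is checked AND the priority slider > 7,
    # mirroring compound conditions like "checkbox A clicked && slider B > 7".
    "visible_when": lambda s: "performance" in s["issues"] and s["priority"] > 7,
}
```

Because the predicates evaluate locally, toggling a checkbox reveals its followups instantly, with no LLM round-trip.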

    Because constructing this tree requires registering nested hypotheticals about how a conversation might progress, it provides a useful window into an LLM’s internal cognitive state. You don’t just see the question it wants to ask you, you see the followup questions it would ask based on various answer combinations. This is incredibly useful and often shows where the LLM is making incorrect assumptions. More importantly, this is fast. You can quickly explore counterfactuals without wasting minutes on back-and-forth conversational turns or restarting conversations from checkpoints.

    Speaking of incorrect LLM assumptions: every multiselect or dropdown automatically includes an ‘Other’ option, which – when selected – renders a textbox for the user to elaborate on what the LLM missed. This escape hatch started as an emergent pattern, but I recently modified the tool to _always_ auto-include an escape hatch option on all multiselects and dropdown menus.

    This means that you can always intervene to steer the LLM when it has the wrong idea about where a conversation should go.
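The auto-injected escape hatch can be sketched as a preprocessing pass over the spec before rendering. Again, the schema here is hypothetical:

```python
# Sketch of the always-on escape hatch: before rendering, append an "Other"
# option to every multiselect and dropdown. Selecting it would reveal a linked
# textbox via the same conditional-visibility mechanism. Hypothetical schema.
def add_escape_hatch(element):
    if element.get("type") in ("multiselect", "dropdown"):
        return {**element, "options": element["options"] + ["Other"]}
    return element

spec = {"id": "db", "type": "dropdown",
        "label": "Database", "options": ["Postgres", "SQLite"]}
patched = add_escape_hatch(spec)
```

Doing this in the tool rather than trusting the LLM to remember means the escape hatch is guaranteed to be present on every choice element.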

    Why This Matters

    Remember how I started by talking about latency, about how long a single LLM response takes? This combination of nested dialogue trees and escape hatches cuts that by ~25-75%, depending on how well the LLM anticipates where the conversation is going. It’s surprising how often a dropdown with the LLM’s top 3-5 predictions will contain your next answer, especially when defining technical specs, and when it doesn’t there’s always the natural-language escape hatch offered by ‘Other’.

    Imagine generating a new RPG setting. Your LLM spawns a popup with options for the 5 most common patterns, with focused followup questions for each.

    This isn’t a generic GUI; it’s fully specialized using everything the LLM knows about you, your project, and the interaction style you prefer. This captures 90% of what you’re trying to do, so you select the relevant options and use ‘Other’ escape hatches to clarify as necessary.

    These interactions have latency measured in milliseconds: when you check the ‘Other’ checkbox, a text box instantly appears, without even a network round-trip’s worth of latency. When you’re done, your answers are returned to the LLM as a JSON tool response.
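The submitted answers might come back to the LLM as something like the following. The exact shape of popup-mcp’s response isn’t documented here, so treat the field names as an assumed example:

```python
import json

# An assumed example of the JSON tool response returned on submit;
# popup-mcp's real field names may differ.
response = json.dumps({
    "lang": "Rust",
    "features": ["CLI", "Tests"],
    "strictness": 7,
    "notes": "",
})

# The LLM client receives this as the tool result and continues from there.
parsed = json.loads(response)
```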

    You should think of this pattern as providing a reduction in amortized interaction latency: it’ll still take tens of seconds to produce a followup response when you submit a popup dialog, but if your average popup replaces more than one round of chat you’re still taking less time per unit of information exchanged. That’s what I mean by amortized latency: that single expensive LLM invocation is amortized over multiple cheap interactions with a deterministically rendered GUI running on your local machine.
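As a back-of-envelope illustration of amortized latency (the numbers are assumptions, not measurements): suppose one LLM turn costs 15 s, a local widget interaction costs 50 ms, and one popup replaces three rounds of chat.

```python
llm_turn_s = 15.0        # assumed cost of one LLM response
widget_s = 0.05          # assumed cost of one local GUI interaction
rounds_replaced = 3      # chat rounds one popup stands in for

chat_total_s = rounds_replaced * llm_turn_s              # 45.0 s of waiting
popup_total_s = llm_turn_s + rounds_replaced * widget_s  # 15.15 s
savings = 1 - popup_total_s / chat_total_s               # ~66% less time
```

Under these assumed numbers the popup path takes about a third of the wall-clock time: one expensive inference, spread over three cheap local interactions.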

    Claude Code Planning Mode

    I started hacking on this a few months before Claude Code released its AskUser tool (as used in planning mode). The AskUser tool provides a limited selection of TUI (terminal user interface) elements: multiple-choice and single-choice questions (with an always-included ‘Other’ option) and single-choice drop-downs. I originally chose not to publicize my library because of this, but I believe the addition of conditional elements is worth talking about.

    Further, I have some feature requests for Claude Code. If anyone at Anthropic happens to be reading this, these would all be pretty easy to implement:

    • Make the TUI interface used by the AskUserQuestion tool open and scriptable, such that plugins and user code can directly modify LLM-generated TUI interfaces, or directly generate their own without requiring a round-trip through the LLM to invoke the tool.

    • Provide pre- and post-AskUser tool hooks so users can directly invoke code using TUI responses (e.g. filling templated prompts with TUI interface responses in certain contexts).

    • Extend the AskUser tool to support conditionally-rendered elements.

    Conclusion

    If you have an LLM chat app you should add inline structured GUI elements with conditionally visible followup questions to reduce amortized interaction latency. If you’d like to build on my library or tool definition, or just to talk shop, please reach out. I’d be happy to help. This technique is equally applicable to OS-native popups, terminal user interfaces, and web UIs.

    I’ll be writing more here. Publishing what I build is one of my core resolutions for 2026, and I have one hell of a backlog. Watch this space.
