Technology

Stop using natural language interfaces

By TechAiVerse · January 14, 2026 · 6 min read

Natural language is a wonderful interface, but just because we suddenly can use it everywhere doesn't mean we always should. LLM inference is slow and expensive, often taking tens of seconds to complete; natural language interfaces have orders of magnitude more latency than ordinary graphical user interfaces. This doesn't mean we shouldn't use LLMs; it means we need to be smart about how we build interfaces around them.

    The Latency Problem

There's a classic CS diagram visualizing latency numbers for various compute operations: tens of nanoseconds to lock a mutex, around a hundred nanoseconds to reference main memory, milliseconds to read 1 MB from disk. LLM inference usually takes tens of seconds to complete. Streaming responses help compensate, but it's slow.
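To put those scales side by side, here is a small sketch; the figures are illustrative order-of-magnitude numbers, not measurements:

```python
import math

# Illustrative order-of-magnitude latencies in seconds; exact values vary by hardware.
LATENCY_SECONDS = {
    "lock a mutex": 25e-9,
    "main memory reference": 100e-9,
    "read 1 MB from disk": 20e-3,
    "local GUI interaction": 50e-3,
    "LLM response (full turn)": 15.0,
}

# How much slower is a full LLM turn than a local GUI update?
ratio = LATENCY_SECONDS["LLM response (full turn)"] / LATENCY_SECONDS["local GUI interaction"]
print(f"~{ratio:.0f}x slower, about {math.log10(ratio):.1f} orders of magnitude")
```

Even against a generous 50 ms GUI round-trip, a single LLM turn is hundreds of times slower.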

Compare interacting with an LLM over multiple turns to filling in a checklist, selecting items from a pulldown menu, setting a value on a slider, or stepping through a series of such interactions in a multi-field dialog. Graphical user interfaces are fast, with responses taking milliseconds, not seconds. But they're not smart: they don't adapt, and they don't shape themselves to the conversation with the full benefit of semantic understanding.

This post is about providing the best of both worlds: the clean affordances of structured user interfaces combined with the flexibility of natural language. Every part of the interface shown above was generated on the fly by an LLM.

    Popup-MCP

This is a post about a tool I made called popup-mcp (MCP is a standardized tool-use interface for LLMs). I built it about six months ago and have been experimenting with it as a core part of how I interact with LLMs ever since. It's a big part of what has made me so fond of LLMs from such an early stage. Popup provides a single tool that, when invoked, spawns a popup with an arbitrary collection of GUI elements.

You can find popup here, along with instructions on how to use it. It's a local MCP tool that uses stdio, which means the process needs to run on the same computer as your LLM client. Popup supports structured GUIs made up of elements including multiple-choice checkboxes, dropdowns, sliders, and text boxes. These let LLMs render popups like the following:
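As a rough sketch, a request for such a popup might look something like this; the field names here are my invention for illustration, not necessarily popup-mcp's actual schema:

```python
# A hypothetical popup specification: one tool call carries the whole GUI.
# (Field names are illustrative, not popup-mcp's real wire format.)
popup_spec = {
    "title": "Project setup",
    "elements": [
        {"id": "lang", "type": "dropdown",
         "label": "Primary language",
         "options": ["Python", "Rust", "TypeScript"]},
        {"id": "strictness", "type": "slider",
         "label": "Lint strictness", "min": 0, "max": 10},
        {"id": "features", "type": "multiselect",
         "label": "Features to enable",
         "options": ["CI", "Docs", "Benchmarks"]},
        {"id": "notes", "type": "textbox", "label": "Anything else?"},
    ],
}

print([e["type"] for e in popup_spec["elements"]])
```

The LLM emits one structured spec; the client renders it locally, so every subsequent interaction with the widgets is free of model latency.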

The popup tool supports conditional visibility to allow for context-specific followup questions. Some elements start hidden, only becoming visible when conditions like 'checkbox clicked', 'slider value > 7', or 'checkbox A clicked && slider B < 7 && slider C > 8' become true. This lets LLMs construct complex and nuanced structures capturing not just the next stage of the conversation but where they think it might go from there. Think of these as being a bit like conditional dialogue trees in CRPGs like Baldur's Gate, or interview trees as used in consulting. The previous dialog, for example, expands as follows:
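The expanded dialog itself isn't reproduced here, but the condition mechanics can be sketched in a few lines, assuming conditions are stored as (element, operator, value) triples joined by AND; popup-mcp's real condition syntax may differ:

```python
import operator

# Supported comparison operators for visibility conditions (illustrative subset).
OPS = {"==": operator.eq, ">": operator.gt, "<": operator.lt}

def is_visible(conditions, answers):
    """An element is visible when every one of its conditions holds."""
    return all(OPS[op](answers[elem_id], value)
               for elem_id, op, value in conditions)

# 'checkbox A clicked && slider B < 7 && slider C > 8'
followup_conditions = [("A", "==", True), ("B", "<", 7), ("C", ">", 8)]

print(is_visible(followup_conditions, {"A": True, "B": 3, "C": 9}))  # True
print(is_visible(followup_conditions, {"A": True, "B": 9, "C": 9}))  # False
```

Because evaluation happens client-side on every widget change, hidden followups appear and disappear instantly as you interact.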

    Because constructing this tree requires registering nested hypotheticals about how a conversation might progress, it provides a useful window into an LLM’s internal cognitive state. You don’t just see the question it wants to ask you, you see the followup questions it would ask based on various answer combinations. This is incredibly useful and often shows where the LLM is making incorrect assumptions. More importantly, this is fast. You can quickly explore counterfactuals without having to waste minutes on back-and-forth conversational turns and restarting conversations from checkpoints.

Speaking of incorrect LLM assumptions: every multiselect or dropdown automatically includes an 'Other' option, which, when selected, renders a textbox for the user to elaborate on what the LLM missed. This escape hatch started as an emergent pattern, but I recently modified the tool to always auto-include one on every multiselect and dropdown menu.

    This means that you can always intervene to steer the LLM when it has the wrong idea about where a conversation should go.
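Sketched in code, the always-on escape hatch is a simple pre-render pass; again, the field names are illustrative rather than popup-mcp's actual schema:

```python
# Before rendering, append an 'Other' option to every multiselect and dropdown,
# plus a hidden textbox that becomes visible only when 'Other' is selected.
def add_escape_hatches(spec):
    extra = []
    for elem in spec["elements"]:
        if elem["type"] in ("multiselect", "dropdown") and "Other" not in elem["options"]:
            elem["options"].append("Other")
            extra.append({
                "id": f"{elem['id']}_other",
                "type": "textbox",
                "label": "Tell me what I missed:",
                "visible_when": [(elem["id"], "==", "Other")],
            })
    spec["elements"].extend(extra)
    return spec

spec = {"elements": [{"id": "lang", "type": "dropdown",
                      "options": ["Python", "Rust"]}]}
spec = add_escape_hatches(spec)
print([e["id"] for e in spec["elements"]])  # ['lang', 'lang_other']
print(spec["elements"][0]["options"])       # ['Python', 'Rust', 'Other']
```

Doing this in the tool rather than trusting the LLM to remember guarantees the steering mechanism is always present.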

    Why This Matters

Remember how I started by talking about latency, about how long a single LLM response takes? This combination of nested dialogue trees and escape hatches cuts that by roughly 25-75%, depending on how well the LLM anticipates where the conversation is going. It's surprising how often a dropdown's top 3-5 predictions will contain your next answer, especially when defining technical specs, and when it doesn't, there's always the natural-language escape hatch offered by 'Other'.

    Imagine generating a new RPG setting. Your LLM spawns a popup with options for the 5 most common patterns, with focused followup questions for each.

    This isn’t a generic GUI; it’s fully specialized using everything the LLM knows about you, your project, and the interaction style you prefer. This captures 90% of what you’re trying to do, so you select the relevant options and use ‘Other’ escape hatches to clarify as necessary.

    These interactions have latency measured in milliseconds: when you check the ‘Other’ checkbox, a text box instantly appears, without even a network round-trip’s worth of latency. When you’re done, your answers are returned to the LLM as a JSON tool response.

You should think of this pattern as providing a reduction in amortized interaction latency: it'll still take tens of seconds to produce a followup response when you submit a popup dialog, but if your average popup replaces more than one round of chat, you're spending less time per unit of information exchanged. That's what I mean by amortized latency: a single expensive LLM invocation is amortized over multiple cheap interactions with a deterministically rendered GUI running on your local machine.
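The arithmetic is straightforward; the figures below are illustrative, not measurements:

```python
# Amortized interaction latency: one expensive LLM turn spread over the number
# of chat rounds a popup replaces.
def amortized_latency(llm_turn_seconds, rounds_replaced, gui_seconds_per_round=0.05):
    """Time per round of chat the popup exchange replaces."""
    total = llm_turn_seconds + rounds_replaced * gui_seconds_per_round
    return total / rounds_replaced

plain_chat = amortized_latency(15.0, 1)  # every round pays full LLM latency
with_popup = amortized_latency(15.0, 4)  # one popup answers 4 rounds' worth
print(f"{plain_chat:.1f}s vs {with_popup:.1f}s per round")
```

Under these assumptions a popup that replaces four chat rounds cuts time per round from about fifteen seconds to under four.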

    Claude Code Planning Mode

I started hacking on this a few months before Claude Code released their AskUser tool (as used in planning mode). The AskUser tool provides a limited selection of TUI (terminal user interface) elements: multiple-choice and single-choice drop-downs, each with an always-included 'Other' option. I originally chose not to publicize my library because of this, but I believe the addition of conditional elements is worth talking about.

Further, I have some feature requests for Claude Code. If anyone at Anthropic happens to be reading this, these would all be pretty easy to implement:

    • Make the TUI interface used by the AskUserQuestion tool open and scriptable, such that plugins and user code can directly modify LLM-generated TUI interfaces, or directly generate their own without requiring a round-trip through the LLM to invoke the tool.

• Provide pre- and post-AskUser tool hooks so users can directly invoke code using TUI responses (e.g. filling templated prompts with TUI interface responses in certain contexts).

    • Extend the AskUser tool to support conditionally-rendered elements.

    Conclusion

If you have an LLM chat app, you should add inline structured GUI elements with conditionally visible followup questions to reduce amortized interaction latency. The technique is equally applicable to OS-native popups, terminal user interfaces, and web UIs. If you'd like to build on my library or tool definition, or just to talk shop, please reach out; I'd be happy to help.

    I’ll be writing more here. Publishing what I build is one of my core resolutions for 2026, and I have one hell of a backlog. Watch this space.

TechAiVerse

Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he delivers clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.
