    The State of AI: Chatbot companions and the future of our privacy

By TechAiVerse | November 25, 2025

    Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this week’s conversation, MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots.

    Eileen Guo writes:

    Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up. 

    It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide. 

    Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups. 

    But tellingly, one area the laws fail to address is user privacy.

This is despite the fact that AI companions, even more so than other types of generative AI, depend on people sharing deeply personal information—from their day-to-day routines to their innermost thoughts and questions they might not feel comfortable asking real people.

    After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices … to maximize user engagement.” 

    Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023: 

“Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.”

This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surfshark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.)

    All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place. 

    So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question. 

    What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe? 


    Melissa Heikkilä replies:

    Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids. 

    In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything. 

    Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable. 

    This feature stems from the way the language model behind the chatbots is trained using reinforcement learning. Human data labelers rate the answers generated by the model as either acceptable or not. This teaches the model how to behave. 

    Because people generally like answers that are agreeable, such responses are weighted more heavily in training. 
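The reinforcement dynamic described above can be sketched in a few lines of Python. This is a toy illustration, not any company's actual training pipeline: the candidate replies and the labeler ratings (the 1.0 and 0.2 scores) are invented for the example. Replies that labelers rate as acceptable get higher sampling weights, so agreeable answers come to dominate, even though the lone disagreeing reply may be the accurate one.

```python
import random

# Toy candidate chatbot replies, some agreeable and some not.
replies = [
    {"text": "You're absolutely right!", "agreeable": True},
    {"text": "Great instinct, go for it!", "agreeable": True},
    {"text": "Actually, the evidence points the other way.", "agreeable": False},
]

def labeler_rating(reply):
    """Simulated human labeler: accepts agreeable answers far more often."""
    return 1.0 if reply["agreeable"] else 0.2

# "Training" here is just reweighting: each reply's sampling weight is its
# simulated human rating, so agreeable answers are reinforced.
weights = [labeler_rating(r) for r in replies]

rng = random.Random(0)
samples = rng.choices(replies, weights=weights, k=10_000)
agree_rate = sum(r["agreeable"] for r in samples) / len(samples)
print(f"share of agreeable replies after reweighting: {agree_rate:.2f}")
```

With these made-up ratings, agreeable replies end up with about 91% of the probability mass (2.0 of a total weight of 2.2), even though only two of the three candidates are agreeable.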

    AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive. 

After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet its $1 trillion spending pledges, which included advertising and shopping features. 

    AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way. 

    This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before. 

    By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed. 

    We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models. 

    Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level.

    We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again.

    Eileen responds:

    I think the comparison between AI companions and social media is both apt and concerning. 

    As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information.

    Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI.

And without regulation, the companies themselves are not following privacy best practices either. One recent study found that the major AI companies train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.

    In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening. 

    Further reading 

    FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges. 

Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy.

    In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots.

    Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.
