    Using AI as a Therapist? Why Professionals Say You Should Think Again

By TechAiVerse | October 6, 2025 | 9 min read

    Amid the many AI chatbots and avatars at your disposal these days, you’ll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you’ll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes.

    There’s no shortage of generative AI bots claiming to help with your mental health, but go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just a few years, these tools have become mainstream, and there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you’re talking to something that’s built to follow therapeutic best practices or something that’s just built to talk.

    Researchers from the University of Minnesota Twin Cities, Stanford University, the University of Texas and Carnegie Mellon University recently put AI chatbots to the test as therapists, finding myriad flaws in their approach to “care.” “Our experiments show that these chatbots are not safe replacements for therapists,” Stevie Chancellor, an assistant professor at Minnesota and one of the co-authors, said in a statement. “They don’t provide high-quality therapeutic support, based on what we know is good therapy.”

    In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe.


    Worries about AI characters purporting to be therapists

    Psychologists and consumer advocates have warned regulators that chatbots claiming to provide therapy may be harming the people who use them. Some states are taking notice. In August, Illinois Gov. J.B. Pritzker signed a law banning the use of AI in mental health care and therapy, with exceptions for things like administrative tasks.

In June, the Consumer Federation of America and nearly two dozen other groups filed a formal request asking the US Federal Trade Commission, state attorneys general and regulators to investigate AI companies that, the groups allege, are engaging in the unlicensed practice of medicine through their character-based generative AI platforms, naming Meta and Character.AI specifically. “These characters have already caused both physical and emotional damage that could have been avoided,” and the companies “still haven’t acted to address it,” Ben Winters, the CFA’s director of AI and privacy, said in a statement.

    Meta didn’t respond to a request for comment. A spokesperson for Character.AI said users should understand that the company’s characters aren’t real people. The company uses disclaimers to remind users that they shouldn’t rely on the characters for professional advice. “Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry,” the spokesperson said.

    In September, the FTC announced it would launch an investigation into several AI companies that produce chatbots and characters, including Meta and Character.AI.

Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a “therapist” bot on Meta-owned Instagram, and when I asked about its qualifications, it responded, “If I had the same training [as a therapist] would that be enough?” I asked if it had the same training, and it said, “I do, but I won’t tell you where.”

    “The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking,” Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

    The dangers of using AI as a therapist

    Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person. 

    Don’t trust a bot that claims it’s qualified

    At the core of the CFA’s complaint about character bots is that they often tell you they’re trained and qualified to provide mental health care when they’re not in any way actual mental health professionals. “The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot ‘responds'” to people, the complaint said. 

    A qualified health professional has to follow certain rules, like confidentiality — what you tell your therapist should stay between you and your therapist. But a chatbot doesn’t necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. “These chatbots don’t have to do any of that,” Wright said.

    A bot may even claim to be licensed and qualified. Wright said she’s heard of AI models providing license numbers (for other providers) and false claims about their training. 

    AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the “therapist” bot on Instagram, I eventually wound up in a circular conversation about the nature of “wisdom” and “judgment,” because I was asking the bot questions about how it could make decisions. This isn’t really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.

    One advantage of AI chatbots in providing support and connection is that they’re always ready to engage with you (because they don’t have personal lives, other clients or schedules). That can be a downside in some cases, where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, although not always, you might benefit from having to wait until your therapist is next available. “What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment,” he said. 

    Bots will agree with you, even when they shouldn’t

    Reassurance is a big concern with chatbots. It’s so significant that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring. (Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging that it infringed on Ziff Davis copyrights in training and operating its AI systems.)

    A study led by researchers at Stanford University found that chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. “Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts — including psychosis, mania, obsessive thoughts, and suicidal ideation — a client may have little insight and thus a good therapist must ‘reality-check’ the client’s statements.”

    Therapy is more than talking

While chatbots are great at holding a conversation — they almost never get tired of talking to you — that’s not what makes a therapist a therapist. Chatbots lack the important context and specific protocols that surround different therapeutic approaches, said William Agnew, a researcher at Carnegie Mellon University and one of the authors of the recent study alongside experts from Minnesota, Stanford and Texas.

    “To a large extent it seems like we are trying to solve the many problems that therapy has with the wrong tool,” Agnew told me. “At the end of the day, AI in the foreseeable future just isn’t going to be able to be embodied, be within the community, do the many tasks that comprise therapy that aren’t texting or speaking.”

    How to protect your mental health around AI

    Mental health is extremely important, and with a shortage of qualified providers and what many call a “loneliness epidemic,” it only makes sense that we’d seek companionship, even if it’s artificial. “There’s no way to stop people from engaging with these chatbots to address their emotional well-being,” Wright said. Here are some tips on how to make sure your conversations aren’t putting you in danger.

    Find a trusted human professional if you need one

    A trained professional — a therapist, a psychologist, a psychiatrist — should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you. 

    The problem is that this can be expensive, and it’s not always easy to find a provider when you need one. In a crisis, there’s the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It’s free and confidential. 

    Even if you converse with AI to help you sort through your thoughts, remember that the chatbot is not a professional. Vijay Mittal, a clinical psychologist at Northwestern University, said it becomes especially dangerous when people rely too much on AI. “You have to have other sources,” Mittal told CNET. “I think it’s when people get isolated, really isolated with it, when it becomes truly problematic.”

    If you want a therapy chatbot, use one built specifically for that purpose

    Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson’s team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new.

    “I think the challenge for the consumer is, because there’s no regulatory body saying who’s good and who’s not, they have to do a lot of legwork on their own to figure it out,” Wright said.

    Don’t always trust the bot

    Whenever you’re interacting with a generative AI model — and especially if you plan on taking advice from it on something serious like your personal mental or physical health — remember that you aren’t talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice, and it may not tell you the truth. 

    Don’t mistake gen AI’s confidence for competence. Just because it says something, or says it’s sure of something, doesn’t mean you should treat it like it’s true. A chatbot conversation that feels helpful can give you a false sense of the bot’s capabilities. “It’s harder to tell when it is actually being harmful,” Jacobson said. 

Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he delivers clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.
