    Technology

“It is genuinely hard; we need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools” — Sam Altman bemoans the difficulty of keeping ChatGPT safe in contentious debate with Elon Musk

By TechAiVerse · January 21, 2026 · 6 min read

    (Image credit: Getty Images)

    • Sam Altman defended OpenAI’s safety efforts after Elon Musk blamed ChatGPT for multiple deaths
    • Altman called AI safety “genuinely hard,” highlighting the balance between protection and usability
    • OpenAI faces multiple wrongful-death lawsuits tied to claims that ChatGPT worsened mental health outcomes

OpenAI CEO Sam Altman isn’t known for oversharing about ChatGPT’s inner workings, but he has admitted to the difficulty of keeping the AI chatbot both safe and useful. Elon Musk seemingly sparked this admission with barbed posts on X (formerly Twitter), warning people not to use ChatGPT and sharing a link to an article alleging a connection between the AI assistant and nine deaths.

    The blistering social media exchange between two of the most powerful figures in artificial intelligence yielded more than bruised egos or legal scars. Musk’s post did not refer to the broader context of the deaths or the lawsuits OpenAI is facing related to them, but Altman clearly felt compelled to respond.

His answer was rather more heartfelt than the usual bland corporate boilerplate. Instead, he offered a glimpse of the thinking behind OpenAI’s tightrope walk — keeping ChatGPT and other AI tools safe for millions of people without crippling them — and defended ChatGPT’s architecture and guardrails. “We need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools.”

“Sometimes you complain about ChatGPT being too restrictive, and then in cases like this you claim it’s too relaxed. Almost a billion people use it and some of them may be in very fragile mental states. We will continue to do our best to get this right and we feel huge…” https://t.co/U6r03nsHzg — January 20, 2026

After praising OpenAI’s safety protocols and stressing the complexity of balancing harm reduction with product usefulness, Altman implied Musk had no standing to lob accusations, pointing to the dangers of Tesla’s Autopilot system.

    He said that his own experience with it was enough to convince him it was “far from a safe thing for Tesla to have released.” In an especially pointed aside at Musk, he added, “I won’t even start on some of the Grok decisions.”

    As the exchange ricocheted across platforms, what stood out most wasn’t the usual billionaire posturing but Altman’s unusually candid framing of what AI safety actually entails. For OpenAI, a company simultaneously deploying ChatGPT to schoolkids, therapists, programmers, and CEOs, defining “safe” means threading the needle between usefulness and avoiding problems, objectives that often conflict.

Altman has not publicly commented on the individual wrongful death lawsuits filed against OpenAI. He has, however, insisted that acknowledging real-world harm doesn’t require oversimplifying the problem. AI reflects its inputs, and its evolving responses mean moderation and safety demand more than the usual terms of service.


    ChatGPT’s safety struggle

    OpenAI claims to have worked hard to make ChatGPT safer with newer versions. There’s a whole suite of safety features trained to detect signs of distress, including suicidal ideation. ChatGPT issues disclaimers, halts certain interactions, and directs users to mental health resources when it detects warning signs. OpenAI also claims its models will refuse to engage with violent content whenever possible.

    The public might think this is straightforward, but Altman’s post gestures at an underlying tension. ChatGPT is deployed in billions of unpredictable conversational spaces across languages, cultures, and emotional states. Overly rigid moderation would make the AI useless in many of those circumstances, yet easing the rules too much would multiply the potential risk of dangerous and unhealthy interactions.

    Comparing AI to automated car pilots is not exactly a perfect analogy, despite Altman’s comment. That said, one could argue that while roads are regulated, regardless of whether a human or robot is behind the wheel, AI prompts are on a more rugged trail. There is no central traffic authority for how a chatbot should respond to a teenager in crisis or answer someone with paranoid delusions. In this vacuum, companies like OpenAI are left to build their own rules and refine them on the fly.

    The personal element adds another layer to the argument, too. Altman and Musk’s companies are in a protracted legal battle. Musk is suing OpenAI and Altman over the company’s transition from a nonprofit research lab to a capped-profit model, alleging that he was misled when he donated $38 million to help found the organization. He claims the company now prioritizes corporate gain over public benefit. Altman says the shift was necessary to build competitive models and keep AI development on a responsible track. The safety conversation is a philosophical and engineering facet of a war in boardrooms and courtrooms over what OpenAI should be.

Whether or not Musk and Altman ever agree on the risks, or even speak civilly online, all AI developers might do well to follow Altman in being more transparent about what AI safety looks like and how to achieve it.





    Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He’s since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he’s continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
