Technology

5 Things You Should Never Tell ChatGPT

By TechAiVerse | January 2, 2026 | 8 Mins Read

    Stock all/Shutterstock

ChatGPT responds to about 2.5 billion prompts each day, with the US accounting for 330 million of those. Unlike a search engine, which returns a list of websites that may or may not contain the answer, an AI chatbot replies more like a friend would. People are using AI tools like ChatGPT in some very weird ways, but caution is essential when sharing information.

Chatbots like ChatGPT are not one-way tools that fire out responses from a static database of information. They are continually evolving systems that learn from the data they're fed, and that information doesn't exist in a vacuum. While systems like ChatGPT are designed with safeguards in place, there are significant and warranted concerns about how effective those safeguards really are.

    For instance, in January 2025, Meta AI fixed a bug that had allowed users to access private prompts from other users. With ChatGPT specifically, earlier versions were susceptible to prompt injection attacks that allowed attackers to intercept personal data. Another security flaw was Google’s (and other search engines’) unfortunate tendency to index shared ChatGPT chats and make them publicly available in search results. 

    This means that the basic rules of digital hygiene that we apply to other aspects of our online presence should equally apply to ChatGPT. Indeed, given the controversy surrounding the technology’s security and its relative immaturity, it could be argued that even more prudence is required when dealing with AI chatbots. Bearing this in mind, let’s look at five things you should never share with ChatGPT.

    Personally Identifiable Information

    Gam1983/Getty Images

Perhaps the most obvious starting point is personally identifiable information (PII), and why you should avoid sharing it. As an example, the Cyber Security Intelligence website recently published an article based on research by Safety Detectives, a group of cybersecurity experts. The research examined 1,000 publicly available ChatGPT conversations, and the findings were eye-opening: users frequently shared details like full names and addresses, ID numbers, phone numbers, email addresses, and usernames and passwords. The last of these is especially concerning given the rise of agentic AI browsers like Atlas, OpenAI's ChatGPT-based browser.

There is no doubt that ChatGPT is genuinely helpful for tasks such as resumes and cover letters. However, it can do the job perfectly well without real personal details. Placeholders work just as well, as long as you remember to fill in the details afterwards so that critical letter doesn't go out as being from John Doe, Nowhere Street, Middletown. Ultimately, this one simple step keeps sensitive data such as names, addresses, and ID numbers, all of which can be misused, out of the wrong hands.
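The placeholder approach described above can even be automated with a few lines of script. The sketch below is a hypothetical illustration (the names, address, and helper functions are invented for the example, not taken from any real tool): swap real details for neutral tokens before pasting text into a chatbot, then restore them in the edited draft afterwards.

```python
# Hypothetical sketch: substitute placeholders for known personal
# details before sharing text with a chatbot, then restore them.
# All names and values here are invented examples, not real data.

PLACEHOLDERS = {
    "Jane Smith": "[FULL_NAME]",
    "12 Nowhere Street, Middletown": "[ADDRESS]",
    "jane.smith@example.com": "[EMAIL]",
}

def redact(text: str) -> str:
    """Replace known personal details with neutral placeholders."""
    for real, placeholder in PLACEHOLDERS.items():
        text = text.replace(real, placeholder)
    return text

def restore(text: str) -> str:
    """Put the real details back into the chatbot's edited draft."""
    for real, placeholder in PLACEHOLDERS.items():
        text = text.replace(placeholder, real)
    return text

letter = "I am Jane Smith of 12 Nowhere Street, Middletown."
safe = redact(letter)  # "I am [FULL_NAME] of [ADDRESS]."
```

Only the redacted version ever leaves your machine; the real details are filled back in locally once the chatbot has finished its edit.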

    Another option is to opt out of letting ChatGPT use your chats for data training. This can be done from within the ChatGPT settings, and full instructions can be found on the OpenAI website. Importantly, this doesn’t mean that it’s suddenly okay to share your PII, but it does reduce the chances of any inadvertently shared information becoming publicly available. In short, sharing your PII with ChatGPT is not something you need to do and should be avoided in all circumstances. 

    Financial details

    fizkes/Shutterstock

Another common way people use ChatGPT is as a personal finance adviser. This could be something as simple as creating a monthly budget or as complex as working out an entire retirement strategy. First, as OpenAI freely admits, "ChatGPT can make mistakes. Check important info." That is why many experts advise against using such tools without verifying critical information with financial professionals. That being said, ChatGPT can be helpful with financial matters, but there is never a need to share personal financial details with it.

While budgeting and financial planning are common use cases, there are other times when users might be tempted to enter financial details: understanding a bank statement, reviewing a loan offer, or using the aforementioned agentic AI browsers. In any of these situations, real details are rarely necessary, and placeholder information can be entered instead. In cases where users upload financial statements, editing out PII first also works.

    Sensitive information includes bank account numbers, credit card details, investment account credentials, and tax records. Chatbots don’t operate within the secure frameworks designed to protect financial transactions. Essentially, this means that once entered, that information exists outside the safeguards normally applied to financial data. In the worst cases, this could lead to sensitive financial data falling into the hands of ‘bad actors’ who could then use it to carry out financial fraud, identity theft, ransomware attacks, phishing, or all of the above. 
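Editing account details out of a statement before uploading it can also be partly automated. The sketch below is a rough heuristic, not a complete PII detector: it masks long runs of digits (the kind that appear in card and account numbers), keeping only the last four for reference. The sample statement text and the digit-length thresholds are invented for illustration.

```python
import re

# Hypothetical sketch: mask long digit runs (card or account numbers)
# in a statement excerpt before pasting it into a chatbot.
# An 8-19 digit run is treated as a potential account/card number;
# this is a crude heuristic and will not catch every format.
DIGIT_RUN = re.compile(r"\b\d{8,19}\b")

def mask_numbers(text: str) -> str:
    """Replace 8-19 digit runs with '****' plus the last 4 digits."""
    return DIGIT_RUN.sub(lambda m: "****" + m.group()[-4:], text)

statement = "Payment from account 12345678901234 to card 4111111111111111."
print(mask_numbers(statement))
# Payment from account ****1234 to card ****1111.
```

Short numbers such as dates or sort-code fragments fall below the threshold and are left alone, which keeps the statement readable for the chatbot while hiding the sensitive identifiers.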

    Medical details

    Blossom Stock Studio/Shutterstock

We won’t go too deeply into the argument over whether people should use ChatGPT for medical advice; suffice it to say that we can refer you to our previous point: “ChatGPT can make mistakes.” The point here is not whether you should use it for medical advice, but whether you should share your medical details with it, especially with the aforementioned PII in tow. That distinction matters because a growing number of people are turning to AI chatbots like ChatGPT for health-related information. According to a recent poll, around one in six adults use AI chatbots at least once a month for health information; the figure rises to one in four among younger adults.

Again, the risk arises when general discussions start to include specifics. Information such as diagnoses, test results, medical history, and mental health issues can quickly become sensitive, especially when combined with identifying information. As with financial information, the problem is heightened because, once entered, such data exists outside health data protection frameworks, meaning that once it’s ‘in the wild’, users have little visibility or control over how it’s handled.

    The fact is that people can feel more comfortable sharing personal information with ChatGPT than they would with a plain old Google search. The conversation-like interactions feel more human than impersonal Google searches, and this can lead to a greater willingness to divulge personal medical information. 

ChatGPT can be useful for understanding medical concepts in broad terms, but it shouldn’t be treated as offering the confidentiality of a doctor’s surgery.

    Work-related materials

    Boy Wirat/Getty Images

Aside from personal data, there’s another category of information that doesn’t belong in ChatGPT, at least in unfiltered form: confidential or proprietary work-related material. This includes anything linked to an employer, client, or any ongoing project that isn’t cleared for public exposure. While it can be tempting to use ChatGPT to summarize documents, rewrite emails, or check and edit reports, doing so can introduce unnecessary risks to the integrity of what is often protected data.

    As an example, let’s go back to the medical scenario, but this time we’re looking at it from the professional viewpoint. In this instance, a busy doctor might be tempted to share a draft patient summary, clinical notes, or a referral letter with ChatGPT to help tighten language or simplify complex subject matter. However, while the intent is efficiency, sharing such details potentially places the material into the public domain, or at least places it outside of the security procedures designed to protect such information. Creative works and intellectual property also fall into the ‘do not share’ category. 

In short, there is a running theme that summarizes the problem: never share anything with AI chatbots like ChatGPT that you wouldn’t be comfortable placing on a public-facing platform or handing to third parties outside an acceptably secure and regulated system. Used with a little due diligence, chatbots can be incredibly useful tools. ChatGPT is full of features you might be ignoring but probably shouldn’t be; staying secure is not one you can afford to skip.

    Anything illegal

    Nwz/Shutterstock

Finally, sharing anything illegal with ChatGPT is best avoided. Not only is OpenAI committed to disclosing user data in response to valid U.S. legal processes, it will also comply with international requests.

Laws change all the time, both at home and abroad, and ordinary behaviors can become criminalized in a short space of time, so it’s best to be circumspect about what you reveal to companies, since that information can later be handed over to law enforcement.

OpenAI does have safeguards in place that should prevent ChatGPT from being used for illegal or unethical purposes, and pushing at these boundaries may get you flagged. This includes asking it how to commit crimes or fraud, or manipulating people into taking potentially harmful actions. It also has specialized “pipelines” that specifically filter out chats where it detects that the user is planning to harm others.

The latter point is perhaps the most serious and relatively easy to flag, but other illegal misuses of ChatGPT show that its safeguards aren’t bulletproof. The use of the platform to write malicious code and to automate social engineering schemes is well documented, which only highlights the importance of a caution-first approach when using ChatGPT.

Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he delivers clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.

    © 2026 TechAiVerse. Designed by Divya Tech.