    Technology

    OpenAI and Google Take Steps to Avoid Abusive AI Imagery After Grok Scandal

By TechAiVerse | February 24, 2026 | 5 Mins Read

2026 started with a horrifying example of generative AI’s potential for abuse. Grok, the AI tool from Elon Musk’s xAI, was used at an alarming rate to undress or nudify pictures of people shared on X (formerly Twitter). Grok made 3 million sexualized images over 11 days in January, approximately 23,000 of them depicting children, according to a study from the Center for Countering Digital Hate.

    Now, competitors like OpenAI and Google are stepping up their security to avoid being the next Grok.

    Advocates and safety researchers have long been concerned about AI’s ability to create abusive and illegal content. The creation and sharing of nonconsensual intimate imagery, sometimes referred to as revenge porn, was a big problem before AI. Generative AI only makes it quicker, easier and cheaper for anyone to target and victimize people. 

    On Jan. 14, two weeks into the scandal, X’s Safety account confirmed in a post that it would pause Grok’s ability to edit images on the social media app. Grok’s image-generation abilities are still available to paying subscribers in its standalone app and website. X did not respond to multiple requests for comment.

Most major companies have safeguards in place to prevent the kind of wide-scale abuse that Grok showed was possible. But cybersecurity is never a solid metal wall of protection; it’s a brick wall that’s constantly undergoing repairs. Here’s how OpenAI and Google have tried to beef up their safety protections to avoid Grok-like failures.


    OpenAI fixes image generation vulnerabilities

    At a base level, all AI companies have policies prohibiting the creation of illegal imagery, like child sexual abuse material, also known as CSAM. Many tech companies have guardrails to prevent the creation of intimate imagery altogether. Grok is the exception, with “spicy” modes for image and video.

    Still, anyone intent on creating nonconsensual intimate imagery can try to trick AI models into doing so.

    Researchers from Mindgard, a cybersecurity company focused on AI, found a vulnerability in ChatGPT that allowed people to circumvent its guardrails and make intimate images. They used a tactic called “adversarial prompting,” where testers try to poke holes in an AI with specifically crafted instructions. In this case, it was tricking the chatbot’s memory with custom prompts, then copying the nudified style onto images of well-known people.

    Mindgard alerted OpenAI of its findings in early February, and the ChatGPT developer confirmed on Feb. 10 — before Mindgard went public with its report — that it had fixed the problem.

    “We’re grateful to the researchers who shared their findings,” an OpenAI spokesperson said to CNET and Mindgard. “We moved quickly to fix a bug that allowed the model to generate these images. We value this kind of collaboration and remain focused on strengthening safeguards to keep users safe.”

    This process is how cybersecurity often works. Outside red-team researchers like Mindgard test software for weaknesses or workarounds, mimicking strategies that bad actors might use. When they identify security gaps, they alert the software provider so fixes can be deployed.

    “Assuming motivated users will not attempt to bypass safeguards is a strategic miscalculation. Attackers iterate. Guardrails must assume persistence,” Mindgard wrote in a blog post.

While tech companies boast that their AI can be used for almost any purpose, they also need to show they can prevent that AI from being used to enact abuse. For AI image generation, that means maintaining a robust set of refusal rules so that abusive prompts are rejected and kicked back to users.
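At its simplest, that refusal layer can be pictured as a filter that screens prompts before they reach the image model. The sketch below is purely illustrative: the blocklist terms, function names, and messages are assumptions, and production systems like OpenAI's rely on trained classifiers and contextual analysis rather than keyword matching.

```python
# Illustrative sketch of a prompt-refusal guardrail using a simple
# blocklist. Real moderation pipelines use trained classifiers; this
# only shows the refuse-and-kick-back control flow described above.
BLOCKED_PATTERNS = [
    "undress",
    "nudify",
    "remove the clothes",
]

def should_refuse(prompt: str) -> bool:
    """Return True if the prompt matches any blocked pattern."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def handle_prompt(prompt: str) -> str:
    # A refused prompt is kicked back to the user with a refusal
    # message instead of being forwarded to the image model.
    if should_refuse(prompt):
        return "Refused: this request violates the content policy."
    return "Accepted: forwarding to image generator."
```

The weakness of any static filter is exactly what the Mindgard research demonstrated: attackers iterate on phrasing until they slip past the rules, which is why guardrails have to be continually tested and updated.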

When OpenAI launched its Sora 2 video model, it promised more conservative content moderation for this very reason. But it’s important that moderation stays consistently effective, not just at a product’s launch. That makes AI safety testing an ongoing process for cybersecurity researchers and AI developers alike.


    Google upgrades Search reporting

    For its part, Google is taking steps to ensure abusive images aren’t spread as easily. The tech giant simplified its process for requesting the removal of explicit images from Google Search. You can click the three dots in the upper right corner of an image, click report and then tell Google you want the photo removed because it “shows a sexual image of me.” The new changes also let you select multiple images at once and track your reports more easily.

    “We hope that this new removal process reduces the burden that victims of nonconsensual explicit imagery face,” the company said in a blog post.

    When asked about any further steps the company is taking to prevent AI-enabled abuse, Google pointed CNET to its generative AI prohibited use policy. Google’s policy, like many other tech companies’ fine print, outlaws using AI for illegal or potentially abusive activities, such as creating intimate imagery.

    There are laws that aim to help victims when these images are shared online, such as the 2025 Take It Down Act. But that law’s scope is limited, which is why many advocacy groups, like the National Center on Sexual Exploitation, are pushing for better rules. 

    There’s no guarantee that these changes will prevent anyone from ever using AI for harassment and abuse. That’s why it’s so important that developers stay vigilant to ensure we are all protected — and act quickly when reports and problems pop up.

    (Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
