    The MechaHitler defense contract is raising red flags

    By TechAiVerse · September 10, 2025

    Ask someone their worst fears about AI, and you’ll find a few recurring topics — from near-term fears like AI tools replacing human workers and the loss of critical thinking to apocalyptic scenarios like AI-designed weapons of mass destruction and automated war. Most have one thing in common: a loss of human control.

    And the system many AI experts fear most will spiral out of our grip? Elon Musk’s Grok.

    Grok was designed to compete with leading AI systems like Anthropic’s Claude and OpenAI’s ChatGPT. From the beginning, its selling point has been loose guardrails. When xAI, Musk’s AI startup, debuted Grok in November 2023, the announcement said it would “answer spicy questions that are rejected by most other AI systems” and had a “rebellious streak, so please don’t use it if you hate humor!”

    Fast-forward a year and a half, and the cutting edge of AI is getting more dangerous, with multiple companies flagging increased risks of their systems being used for tasks like chemical and biological weapon development. As that’s happening, Grok’s “rebellious streak” has taken over more times than most people can count. And when its “spicy” answers go too far, the slapdash fixes have left experts unconvinced it can handle a bigger threat.

    Senator Elizabeth Warren (D-MA) sent a letter Wednesday to US Defense Secretary Pete Hegseth, detailing her concerns about the Department of Defense’s decision to award xAI a $200 million contract in order to “address critical national security challenges.” Though the contracts also went to OpenAI, Anthropic, and Google, Warren has unique concerns about the contract with xAI, she wrote in the letter viewed by The Verge — including that “Musk and his companies may be improperly benefitting from the unparalleled access to DoD data and information that he obtained while leading the Department of Government Efficiency,” as well as “the competition concerns raised by xAI’s use and rights to sensitive government data” and Grok’s propensity to generate “erroneous outputs and misinformation.”

    Sen. Warren cited reports that xAI was a “late-in-the-game addition under the Trump administration,” that it had not been considered for such contracts before March of this year, and that the company lacked the reputation or proven track record that typically precedes DoD awards. The letter requests that the DoD provide, in response, the full scope of work for xAI, how its contract differs from those with the other AI companies, and “to what extent DoD will implement Grok, and who will be held accountable for any program failures related to Grok.”

    One of Sen. Warren’s key reasons for concern, per the letter, was specifically “the slew of offensive and antisemitic posts generated by Grok,” which went viral this summer. The company did not immediately respond to a request for comment.

    A ‘patchwork’ approach to safety

    The height of Grok’s power, up to now, has been posting answers to users’ queries on X. But even in this relatively limited capacity, it has racked up a remarkable number of controversies, often caused by hasty tweaks and then fixed with equally patchwork solutions. In February, the chatbot temporarily blocked results that mentioned Musk or President Trump spreading misinformation. In May, it briefly went viral for constant tirades about “white genocide” in South Africa. In July, it developed a habit of searching for Musk’s opinion on hot-button topics like Israel and Palestine, immigration, and abortion before responding to questions about them. And most infamously, last month it went on an antisemitic bender: spreading stereotypes about Jewish people, praising Adolf Hitler, and even going so far as to call itself “MechaHitler.”

    Musk responded publicly to say the company was addressing the issue and that it happened because Grok was “too compliant to user prompts. Too eager to please and be manipulated, essentially.” But the incident happened a few weeks after Musk expressed frustration that Grok was “parroting legacy media” and asked X users to contribute “divisive facts for Grok training” that were “politically incorrect, but nonetheless factually true,” and a few days after a new system prompt gave Grok instructions to “assume subjective viewpoints sourced from the media are biased” and “not shy away from making claims which are politically incorrect.” Following the debacle, the prompts were tweaked to scale back Grok’s aggressive endorsement of fringe viewpoints.

    The whack-a-mole approach to Grok’s guardrails concerns experts in the field, who say it’s hard enough to keep an AI system from veering into harmful behavior even when it’s designed intentionally, with some measure of safety in mind from the beginning. And if you don’t do that… then all bets are off.

    It’s “difficult to justify” the patchwork approach xAI has taken, says Alice Qian Zhang, a researcher at Carnegie Mellon University’s Human-Computer Interaction Institute. Qian Zhang says it’s particularly puzzling because the current approach is good for neither the public nor the company’s business model.

    “It’s kind of difficult once the harm has already happened to fix things — early stage intervention is better,” she said. “There are just a lot of bad things online, so when you make a tool that can touch all the corners of the internet I think it’s just inevitable.”

    xAI has not released any type of safety report or system card — which usually describe safety features, ethical questions or concerns, and other implications — for its latest model, Grok 4. Such reports, though voluntary, are typically seen as a bare minimum in the AI industry, especially for a notable, advanced model release.

    “It’s even more alarming when AI corporations don’t even feel obliged to demonstrate the bare minimum, safety-wise,” Ben Cumming, communications director at the Future of Life Institute (FLI), a nonprofit working to reduce risk from AI, said.

    About two weeks after Grok 4’s release in mid-July, an xAI employee posted on X that he was “hiring for our AI safety team at xAI! We urgently need strong engineers/researchers to work across all stages of the frontier AI development cycle.” In response to a comment asking, “xAI does safety?” the employee replied that the company was “working on it.”

    “With the Hitler issue, if that can happen, a lot of other things can happen,” said Qian Zhang. “You cannot just adjust the system prompt for everything that happens. The researcher perspective is [that] you should have abstracted a level above the specific instance… That’s what bothers me about patchwork.”

    Weapons of mass destruction

    Grok’s approach is even more dangerous when scaled up to address some of the biggest issues facing leading AI companies today.

    Recently, OpenAI and Anthropic both disclosed that they believe their models are approaching high risk levels for potentially helping create biological or chemical weapons, saying they had implemented additional safeguards in response. Anthropic did so in May, and in June, OpenAI wrote that its model capabilities could “potentially be misused to help people with minimal expertise to recreate biological threats or assist highly skilled actors in creating bioweapons.” Musk claims that Grok is now “the smartest AI in the world,” an assertion that logically suggests xAI should also be considering similar risks. But the company has not alluded to having any such framework, let alone activating it.

    Heidy Khlaaf, chief AI scientist at the AI Now Institute, who focuses on AI safety and assessment in autonomous weapons systems, said that AI companies’ Chemical, Biological, Radiological, and Nuclear safeguards aren’t at all foolproof — for example, they likely wouldn’t do much against large-scale nation-state threats. But they do help mitigate some risks. xAI, on the other hand, may not even be trying: it has not publicly acknowledged any such safeguards.

    The company may not be able to operate this way forever. Grok’s loose guardrails may play well on parts of X, but many leading AI companies’ revenue comes largely from enterprise and government products. (One example is the Department of Defense’s aforementioned decision to award OpenAI, Anthropic, Google, and xAI contracts of up to $200 million each.) Enterprise and most government clients worry about security and control of AI systems, especially AI systems they’re using for their own purposes and profit.

    The Trump administration, in its recent AI Action Plan, seemed to signal that Grok’s offensiveness might not be a problem: it included an anti-“woke AI” order that largely aligns with Musk’s politics, and xAI’s latest DoD contract was awarded after the MechaHitler incident. But the plan also included sections promoting AI explainability and predictability, noting that issues with these capabilities could lead to high-stakes problems in defense, national security, and “other applications where lives are at stake.”

    For now, however, biological and chemical weapons aren’t even the biggest cause of concern when it comes to Grok, according to experts The Verge spoke to. They’re much more worried about widespread surveillance — a problem that would persist even with a greater focus on safety, but that’s particularly dangerous with Grok’s approach.

    Khlaaf said that ISTAR — an acronym denoting Intelligence, Surveillance, Target Acquisition, and Reconnaissance — is currently more important to safeguard against than CBRN, because it’s already happening. With Grok, that includes its ability to train on public X posts.

    “What’s a specific risk of Grok that the other providers may not have? To me, this is one of the biggest ones,” Khlaaf said.

    Data from X could be used for intelligence analysis by Trump administration government agencies, including Immigration and Customs Enforcement. “It’s not just terrorists using it to build bio weapons or even loss of control to superintelligence systems — all of which these AI companies openly acknowledge as material threats,” Cumming said. “It’s these systems being used and abused [as] systems of mass surveillance and monitoring of people, and then using it to censor and persecute undesirables.”

    Grok’s lack of guardrails and unpredictability could create a system that not only conducts mass surveillance, but flags threats and analyzes information in ways that the designers don’t intend and can’t control — persistently over-monitoring minority groups or vulnerable populations, for instance, or even leaking information about its operations both stateside and abroad. Despite the fears he once expressed about advanced AI, Musk appears focused more on beating OpenAI and other rivals than making sure xAI can control its own system, and the risks are becoming clear.

    “Safety can’t just be an afterthought,” Cumming said. “Unfortunately, this kind of frenzied market competition doesn’t create the best incentives when it comes to caution and keeping people safe. It’s why we urgently need safety standards, like any other industry.”

    During Grok 4’s livestreamed release event, Musk said he’s been “at times kind of worried” about AI’s quickly advancing intelligence and whether it will be “bad or good for humanity” in the end. “I think it’ll be good, most likely it’ll be good,” Musk said. “But I’ve somewhat reconciled myself to the fact that even if it wasn’t going to be good, I’d at least like to be alive to see it happen.”

    • Hayden Field