OpenAI got ‘sloppy’ about the wrong thing
You’d think OpenAI would take care when crafting a deal with the Pentagon, one that would see its AI models used in life-and-death scenarios such as those we’re seeing unfold in Iran right now.
But as we’ve learned, the initial agreement that OpenAI struck with the Defense Department on Friday night was a rush job. Even CEO Sam Altman agrees.
“We shouldn’t have rushed to get this out on Friday,” Altman wrote on X late Monday, as he detailed recent changes to the contract that specifically prohibit the use of its models for surveillance of U.S. citizens.
“The issues are super complex, and demand clear communication,” Altman continued. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.”
OpenAI’s hasty deal with the military has, of course, sparked a massive backlash against the company and ChatGPT, coupled with a surge of interest in Anthropic (which has since been tagged as a “supply-chain risk” by Defense Secretary Pete Hegseth) and its competing Claude models. Anthropic had been locked in a tense back-and-forth with the Defense Department over the military’s demand for nearly unfettered use of its AI technology.
I agree with Altman that the issues surrounding contracts between AI providers and the military are, as he says, “super complex,” and yes, OpenAI’s Friday-night deal did indeed look “opportunistic and sloppy.”
And yes, people make mistakes and learn from them. But an AI deal with the Pentagon is about as high-stakes as it gets, and it’s absolutely the wrong thing to get sloppy about.
I’ve reached out to OpenAI for comment and will update this story once they reply.
OpenAI’s rushed Pentagon agreement also raises the question of what else the company may have handled sloppily, and that brings the discussion back to us, ChatGPT’s everyday users (or, increasingly, ex-users).
When we use AI, whether ChatGPT’s models or someone else’s, we have to trust it to one degree or another. We’re trusting it with our names, locations, job titles, family details, and perhaps even our finances. It may know who our friends are and what we’re interested in.
This bond of trust is something that AI providers need to take seriously, perhaps even at the expense of a fast-moving deal.
Those of us who use AI every day must carefully consider the providers we’re dealing with, what they’re promising us, and how they behave.
Author: Ben Patterson, Senior Writer, TechHive
Ben has been writing about technology and consumer electronics for more than 20 years. A PCWorld contributor since 2014, Ben joined TechHive in 2019, where he has covered everything from smart speakers and soundbars to smart lights and security cameras. Ben’s articles have also appeared in PC Magazine, TIME, Wired, CNET, Men’s Fitness, Mobile Magazine, and more. Ben holds a master’s degree in English literature.
