I failed at spotting AI slop videos. Can you do better?
Image: Foundry
Fake videos used to stand out immediately to me. Nowadays? I’m far worse at identifying them. And thanks to the fine folks over at NPR, I have the quiz results to prove it. (Oof.)
The short, four-question test asks you to identify AI versus human-made video. The scenarios range from serious to unexpected to cute: all the kinds of entertaining or surprising clips that spread fast in group chats and on social media. I expected to get at least one wrong, given the improvements in AI-generated video. I failed more times than that. (But thankfully not all four.)
I’m not convinced most people will do better. At least, not without coaching.
AI produces tons of slop, for sure. But it has also gotten much better at generating video, with far fewer immediately obvious errors. If you watch video with the default expectation that most of it is real and that fakes are easy to spot, you may get tripped up, as I did. I went into the quiz still assuming AI video gives itself away quickly through technical mistakes.
Instead, you have to watch for subtler clues. Details matter. Context matters. Experience and expertise help you judge what’s reasonable… and what’s not. In other words, spotting an AI video draws on the same skills as identifying scams.
A simple watermark could make all the difference for helping people avoid AI slop—or at least, understand the nature of what they’re reading, listening to, or watching.
Image: Digiarty
Too good to be true? Proceed carefully. Trying to play on your emotions, good or bad? Drawing a strong reaction that overrides rational thought could be the point. And if someone’s asking for money? Definitely stop and verify the video’s legitimacy.
The NPR quiz offers more specific tips for recognizing AI videos, like paying attention to a clip’s length, framing, and even lighting. We have even more detailed advice in our guide on how to spot fake AI videos, which covers elements like physics, soundtracks, and a very basic (but easily skipped) step: checking metadata.
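If you want to try that metadata check yourself, here’s a minimal sketch in Python that shells out to ffprobe (part of FFmpeg, which you’d need installed separately) and prints a clip’s container tags. The filename is hypothetical, and generators can strip or spoof these tags, so treat anything you find as one clue among many.

```python
# Minimal sketch: inspect a video's container metadata with ffprobe.
# Requires FFmpeg (which provides ffprobe) to be installed and on PATH.
import json
import subprocess

def probe_metadata(path: str) -> dict:
    """Return a video's format and stream metadata as a dict."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    info = probe_metadata("suspect_clip.mp4")  # hypothetical filename
    # Container tags sometimes name the encoder or creation tool;
    # AI generators may leave telltale strings here, or strip them entirely.
    print(json.dumps(info.get("format", {}).get("tags", {}), indent=2))
```

An empty or generic tag set doesn’t prove anything either way, but an odd encoder string or creation-tool name is worth a second look.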
Video used to be the most reliable form of documentation. AI threatens that trust, given the rate at which slop has taken over platforms. Until more laws pass to help identify AI-generated material, we all have to develop sharp eyes.
California’s AI Transparency Act can’t kick in fast enough. Originally set to require watermarks or other identifiers on AI-generated or AI-altered text, images, audio, or video starting January 1, 2026, its implementation has been delayed until August 2, 2026.
I have so many more months left of dodging AI slop on my own.
Author: Alaina Yee, Senior Editor, PCWorld
A 14-year veteran of technology and video games journalism, Alaina Yee covers a variety of topics for PCWorld. Since joining the team in 2016, she’s written about CPUs, Windows, PC building, Chrome, Raspberry Pi, and much more—while also serving as PCWorld’s resident bargain hunter (#slickdeals). Currently her focus is on security, helping people understand how best to protect themselves online. Her work has previously appeared in PC Gamer, IGN, Maximum PC, and Official Xbox Magazine.
