My “Are you presuming most people are stupid?” test
Sometimes when people talk about a problem in society, they strongly imply that most people are stupid.
This is wrong. Most people aren’t super knowledgeable about a lot of specific facts about the world (only half of Americans can name the 3 branches of government) but they’re intelligent when it comes to their own lives and the areas they work and spend time in. We should expect the average person to struggle with factual questions about abstract ideas and far-off events, but not so much about what’s right in front of them day to day.
If a claim about how society works implies that most people are incredibly stupid, much more stupid than anyone I encounter in my day-to-day life, I dismiss it. This simple test kills a lot of big claims about how the world works. I’ve been applying it in a lot of AI conversations recently. I’ve written about this a bit before but want to go into more detail.
Here’s a common claim that I think fails my test: “The reason Americans are so unhealthy is that doctors don’t tell people about healthy diets.”
I think most people know what’s considered healthy food. They maybe wouldn’t be able to perfectly break down ideal ratios of macronutrients, but they have a rough idea. The average person whose bad diet is making them unhealthy would probably be able to point to the bad diet as part of the problem. If I walked up to the average person and asked them to make an ideal meal plan for themselves to be maximally healthy, I think most people would do a decent job.
Stefan Schubert makes a similar observation about what he calls sleepwalk bias:
When we predict the future, we often seem to underestimate the degree to which people will act to avoid adverse outcomes. Examples include Marx’s prediction that the ruling classes would fail to act to avert a bloody revolution, predictions of environmental disasters and resource constraints, Y2K, etc. In most or all of these cases, there could have been a catastrophe, if people had not acted with determination and ingenuity to prevent it. But when pressed, people often do that, and it seems that we often fail to take that into account when making predictions. In other words: too often we postulate that people will sleepwalk into a disaster. Call this sleepwalk bias.
I often use the idea of sleepwalk bias in conversations. However, what I’m pointing at here is a much more extreme example of assuming everyone is stupid about even normal everyday experiences, so I think it needs its own name. I’m calling it my “Are you presuming most people are stupid?” test.
There are a few claims about AI floating around that fail my test.
I was motivated to write this in response to this Time article: ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study. There are a few points in this article that fail my test.
Kosmyna, who has been a full-time research scientist at the MIT Media Lab since 2021, wanted to specifically explore the impacts of using AI for schoolwork, because more and more students are using AI. So she and her colleagues instructed subjects to write 20-minute essays based on SAT prompts, including about the ethics of philanthropy and the pitfalls of having too many choices.
The group that wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely “soulless.” The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. “It was more like, ‘just give me the essay, refine this sentence, edit it, and I’m done,’” Kosmyna says.
What are these results actually telling us that the average person doesn’t already know? These seem to be the claims:
- If you use a talking robot to write your essay for you, you won’t learn as much about the topic compared to writing the essay yourself.
- Having a talking robot easily available to you makes you more likely to cheat on essay assignments.
Students using ChatGPT to write their essays for them aren’t stupid about what’s happening. Similar to students who just Google to find answers to homework problems, they’re aware that they’re making a trade-off between actual learning and saving time. This article is presuming that students are somehow blind to the idea that copying work from other places means they don’t actually learn. The average student isn’t like that. They make bad decisions when they cheat using talking robots, but they know what they’re doing.
Here’s another study referenced in the same article:
The MIT Media Lab has recently devoted significant resources to studying different impacts of generative AI tools. Studies from earlier this year, for example, found that generally, the more time users spend talking to ChatGPT, the lonelier they feel.
It’s hard for me to imagine walking up to someone I don’t know and saying, “Hey, spending a lot of time staring at your screen talking to a robot instead of interacting with real people can make you feel lonely. The experience itself can be somewhat alienating because the robot doesn’t feel human.” I don’t know how you could think this is useful information unless you assume the average person is really stupid. Would you feel comfortable telling a stranger this? Would you be able to say it in a way that isn’t demeaning?
Another big claim that fails my test is that AI chatbots are useless. Roughly 10% of the world’s population is now choosing to use them weekly. If they were useless, that would mean 10% of the world is so stupid that they can’t tell that a tool they use every single week isn’t providing any value to them at all. There’s basically nothing else like this that people interact with regularly. You might think that social media like TikTok is bad for people, but it’s not “useless”: users have fun, learn interesting facts, or pick up subtle social vibes from the TikTok videos they watch. You can criticize AI and think it’s net bad to use, but that’s a different claim from saying it’s useless. When I hear people say that AI chatbots are useless, it’s hard not to read it as a claim that almost everyone is incredibly stupid.
This way of talking shows up too often in AI conversations. There are a lot of great criticisms of AI and chatbots, and real reasons to worry. I think that students cheating with ChatGPT is a gigantic crisis in education without clear solutions. But when people talk as if everyone using chatbots is incredibly stupid, and as if people exposed to this technology are blind to the simple, obvious trade-offs involved in specific situations, I come away with less respect for them. It seems like they underestimate the average person in a way that shows a lack of curiosity, or a tendency to steamroll other people’s experiences when those people are having a slightly different reaction to new technology.