    Technology

    Conspiracy theorists don’t realize they’re on the fringe

    By TechAiVerse | July 23, 2025

    Gordon Pennycook: “It might be one of the biggest false consensus effects that’s been observed.”


    Credit: Aurich Lawson / Thinkstock

    Belief in conspiracy theories is often attributed to some form of motivated reasoning: People want to believe a conspiracy because it reinforces their worldview, for example, or because doing so meets some deep psychological need, like wanting to feel unique. However, it might also be driven by overconfidence in one's own cognitive abilities, according to a paper published in the Personality and Social Psychology Bulletin. The authors were surprised to discover that conspiracy theorists are not only overconfident; they also don't realize their beliefs are on the fringe, overestimating how much other people agree with them by as much as a factor of four.

    “I was expecting the overconfidence finding,” co-author Gordon Pennycook, a psychologist at Cornell University, told Ars. “If you’ve talked to someone who believes conspiracies, it’s self-evident. I did not expect them to be so ready to state that people agree with them. I thought that they would overestimate, but I didn’t think that there’d be such a strong sense that they are in the majority. It might be one of the biggest false consensus effects that’s been observed.”

    In 2015, Pennycook made headlines when he co-authored a paper demonstrating how certain people interpret “pseudo-profound bullshit” as deep observations. Pennycook et al. were interested in identifying individual differences between those who are susceptible to pseudo-profound BS and those who are not and thus looked at conspiracy beliefs, their degree of analytical thinking, religious beliefs, and so forth.

    They presented several randomly generated statements, containing “profound” buzzwords, that were grammatically correct but made no sense logically, along with a 2014 tweet by Deepak Chopra that met the same criteria. They found that the less skeptical participants were less logical and analytical in their thinking and hence much more likely to rate these nonsensical statements as deeply profound. That study was a bit controversial, in part for what was perceived to be its condescending tone, along with questions about its methodology. But it did snag Pennycook et al. a 2016 Ig Nobel Prize.

    Last year we reported on another Pennycook study, presenting results from experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory. That study showed that the AI interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual. “The work overturns a lot of how we thought about conspiracies, that they’re the result of various psychological motives and needs,” Pennycook said at the time.

    Miscalibrated from reality

    Pennycook has been working on this new overconfidence study since 2018, perplexed by observations indicating that people who believe in conspiracies also seem to have a lot of faith in their cognitive abilities—contradicting prior research finding that conspiracists are generally more intuitive. To investigate, he and his co-authors conducted eight separate studies that involved over 4,000 US adults.

    The assigned tasks were designed in such a way that participants’ actual performance and how they perceived their performance were unrelated. For example, in one experiment, they were asked to guess the subject of an image that was largely obscured. The subjects were then asked direct questions about their belief (or lack thereof) concerning several key conspiracy claims: the Apollo Moon landings were faked, for example, or that Princess Diana’s death wasn’t an accident. Four of the studies focused on testing how subjects perceived others’ beliefs.

    The results showed a marked association between subjects’ tendency to be overconfident and belief in conspiracy theories. And while a given conspiracy claim was believed by a majority of participants just 12 percent of the time, believers thought they were in the majority 93 percent of the time. This suggests that overconfidence is a primary driver of belief in conspiracies.

    It’s not that believers in conspiracy theories are massively overconfident; there is no data on that, because the studies didn’t set out to quantify the degree of overconfidence, per Pennycook. Rather, “They’re overconfident, and they massively overestimate how much people agree with them,” he said.

    Ars spoke with Pennycook to learn more.

    Ars Technica: Why did you decide to investigate overconfidence as a contributing factor to believing conspiracies?

    Gordon Pennycook: There’s a popular sense that people believe conspiracies because they’re dumb and don’t understand anything, they don’t care about the truth, and they’re motivated by believing things that make them feel good. Then there’s the academic side, where that idea melds into a set of theories about how needs and motivations drive belief in conspiracies. It’s not someone falling down the rabbit hole and getting exposed to misinformation or conspiratorial narratives. They’re strolling down: “I like it over here. This appeals to me and makes me feel good.”

    Believing things that no one else agrees with makes you feel unique. Then there are various things that I think are a little more legitimate: People join communities, and there’s this sense of belongingness. How that drives core beliefs is different. Someone may stop believing but hang around in the community because they don’t want to lose their friends. Even with religion, people will go to church when they don’t really believe. So we distinguish beliefs from practice.

    What we observed is that they do tend to strongly believe these conspiracies despite the fact that there’s counterevidence or that a lot of people disagree. What would lead that to happen? It could be their needs and motivations, but it could also be that there’s something about the way that they think where it just doesn’t occur to them that they could be wrong about it. And that’s where overconfidence comes in.

    Ars Technica: What makes this particular trait such a powerful driving force?

    Gordon Pennycook: Overconfidence is one of the most important core underlying components, because if you’re overconfident, it stops you from really questioning whether the thing that you’re seeing is right or wrong, and whether you might be wrong about it. You have an almost moral purity of complete confidence that the thing you believe is true. You cannot even imagine what it’s like from somebody else’s perspective. You couldn’t imagine a world in which the things that you think are true could be false. Having overconfidence is that buffer that stops you from learning from other people. You end up not just going down the rabbit hole, you’re doing laps down there.

    Overconfidence doesn’t have to be learned, parts of it could be genetic. It also doesn’t have to be maladaptive. It’s maladaptive when it comes to beliefs. But you want people to think that they will be successful when starting new businesses. A lot of them will fail, but you need some people in the population to take risks that they wouldn’t take if they were thinking about it in a more rational way. So it can be optimal at a population level, but maybe not at an individual level.

    Ars Technica: Is this overconfidence related to the well-known Dunning-Kruger effect?

    Gordon Pennycook: It’s because of Dunning-Kruger that we had to develop a new methodology to measure overconfidence, because the people who are the worst at a task are the worst at knowing that they’re the worst at the task. But that’s because the same things that you use to do the task are the things you use to assess how good you are at the task. So if you were to give someone a math test and they’re bad at math, they’ll appear overconfident. But if you give them a test of assessing humor and they’re good at that, they won’t appear overconfident. That’s about the task, not the person.

    So we have tasks where people essentially have to guess, and it’s transparent. There’s no reason to think that you’re good at the task. In fact, people who think they’re better at the task are not better at it, they just think they are. They just have this underlying kind of sense that they can do things, they know things, and that’s the kind of thing that we’re trying to capture. It’s not specific to a domain. There are lots of reasons why you could be overconfident in a particular domain. But this is something that’s an actual trait that you carry into situations. So when you’re scrolling online and come up with these ideas about how the world works that don’t make any sense, it must be everybody else that’s wrong, not you.

    Ars Technica: Overestimating how many people agree with them seems to be at odds with conspiracy theorists’ desire to be unique.  

    Gordon Pennycook: In general, people who believe conspiracies often have contrary beliefs. We’re working with a population where coherence is not to be expected. They say that they’re in the majority, but it’s never a strong majority. They just don’t think that they’re in a minority when it comes to the belief. Take the case of the Sandy Hook conspiracy, where adherents believe it was a false flag operation. In one sample, 8 percent of people thought that this was true. That 8 percent thought 61 percent of people agreed with them.

    So they’re way off. They really, really miscalibrated. But they don’t say 90 percent. It’s 60 percent, enough to be special, but not enough to be on the fringe where they actually are. I could have asked them to rank how smart they are relative to others, or how unique they thought their beliefs were, and they would’ve answered high on that. But those are kind of mushy self-concepts. When you ask a specific question that has an objectively correct answer in terms of the percent of people in the sample that agree with you, it’s not close.

    Ars Technica: How does one even begin to combat this? Could last year’s AI study point the way?

    Gordon Pennycook: The AI debunking effect works better for people who are less overconfident. In those experiments, very detailed, specific debunks had a much bigger effect than people expected. After eight minutes of conversation, a quarter of the people who believed the thing didn’t believe it anymore, but 75 percent still did. That’s a lot. And some of them, not only did they still believe it, they still believed it to the same degree. So no one’s cracked that. Getting any movement at all in the aggregate was a big win.

    Here’s the problem. You can’t have a conversation with somebody who doesn’t want to have the conversation. In those studies, we’re paying people, but they still get out what they put into the conversation. If you don’t really respond or engage, then our AI is not going to give you good responses because it doesn’t know what you’re thinking. And if the person is not willing to think… This is why overconfidence is such an overarching issue. The only alternative is some sort of propagandistic approach: sit them down with their eyes open and try to de-convert them. But you can’t really convert someone who doesn’t want to be converted. So I’m not sure that there is an answer. I think that’s just the way that humans are.

    Personality and Social Psychology Bulletin, 2025. DOI: 10.1177/01461672251338358  (About DOIs).

    Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


