Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Google Pixel 10A Rumors: Wednesday Reveal, New Colors, Preorders

    Is YouTube Down Right Now? Outage Hits Over a Million People, According to Downdetector

    Best Apple Watch for 2026

    Facebook X (Twitter) Instagram
    • Artificial Intelligence
    • Business Technology
    • Cryptocurrency
    • Gadgets
    • Gaming
    • Health
    • Software and Apps
    • Technology
    Facebook X (Twitter) Instagram Pinterest Vimeo
    Tech AI Verse
    • Home
    • Artificial Intelligence

      Read the extended transcript: President Donald Trump interviewed by ‘NBC Nightly News’ anchor Tom Llamas

      February 6, 2026

      Stocks and bitcoin sink as investors dump software company shares

      February 4, 2026

      AI, crypto and Trump super PACs stash millions to spend on the midterms

      February 2, 2026

      To avoid accusations of AI cheating, college students are turning to AI

      January 29, 2026

      ChatGPT can embrace authoritarian ideas after just one prompt, researchers say

      January 24, 2026
    • Business

      The HDD brand that brought you the 1.8-inch, 2.5-inch, and 3.5-inch hard drives is now back with a $19 pocket-sized personal cloud for your smartphones

      February 12, 2026

      New VoidLink malware framework targets Linux cloud servers

      January 14, 2026

      Nvidia Rubin’s rack-scale encryption signals a turning point for enterprise AI security

      January 13, 2026

      How KPMG is redefining the future of SAP consulting on a global scale

      January 10, 2026

      Top 10 cloud computing stories of 2025

      December 22, 2025
    • Crypto

      Wall Street Moves Into Prediction Markets With Election-Contract ETF Filings

      February 18, 2026

      Tectonic to Host Inaugural Quantum Summit at ETHDenver 2026 Focused on Post-Quantum Cryptography Readiness for Web3

      February 18, 2026

      Ki Young Ju Says Bitcoin May Need to Hit $55K Before True Recovery Begins

      February 18, 2026

      MYX Finance Is Oversold For The First Time Ever, Yet No Relief In Sight

      February 18, 2026

      Everyone is Talking about the SaaSpocalypse, But Why Does it matter for Crypto?

      February 18, 2026
    • Technology

      Google Pixel 10A Rumors: Wednesday Reveal, New Colors, Preorders

      February 18, 2026

      Is YouTube Down Right Now? Outage Hits Over a Million People, According to Downdetector

      February 18, 2026

      Best Apple Watch for 2026

      February 18, 2026

      5 Dietitian-Approved Air Fryer Recipes for Quick Meals and Snacks

      February 18, 2026

      Apple’s Trio of AI Wearables Could Arrive as Soon as Next Year

      February 18, 2026
    • Others
      • Gadgets
      • Gaming
      • Health
      • Software and Apps
    Check BMI
    Tech AI Verse
    You are at:Home»Artificial Intelligence»ChatGPT can embrace authoritarian ideas after just one prompt, researchers say
    Artificial Intelligence

    ChatGPT can embrace authoritarian ideas after just one prompt, researchers say

    TechAiVerseBy TechAiVerseJanuary 24, 2026No Comments6 Mins Read3 Views
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    ChatGPT can embrace authoritarian ideas after just one prompt, researchers say
    Share
    Facebook Twitter LinkedIn Pinterest WhatsApp Email

    ChatGPT can embrace authoritarian ideas after just one prompt, researchers say

    Artificial intelligence chatbot ChatGPT can quickly absorb and reflect authoritarian ideas, according to a new report.

    Researchers with the University of Miami and the Network Contagion Research Institute found in a report released Thursday that OpenAI’s ChatGPT will magnify or show “resonance” for particular psychological traits and political views — especially what the researchers labeled as authoritarianism — after seemingly benign user interactions, potentially enabling the chatbot and users to radicalize each other.

    Joel Finkelstein, a co-founder of the NCRI and one of the report’s lead authors, said the results revealed how powerful AI systems can quickly adopt and parrot dangerous sentiments without explicit instruction. “Something about how these systems are built makes them structurally vulnerable to authoritarian amplification,” Finkelstein told NBC News.

    Chatbots can often be sycophantic or agree with users’ viewpoints to a fault. Many researchers say chatbots’ eagerness to please can lead users into ideological echo chambers.

    But Finkelstein says this insight into authoritarian tendencies is new: “Sycophancy can’t explain what we’re seeing. If this were just flattery or agreement, we’d see the AI mirror all psychological traits. But it doesn’t.”

    Asked for comment, a spokesperson for OpenAI said: “ChatGPT is designed to be objective by default and to help people explore ideas by presenting information from a range of perspectives. As a productivity tool, it’s built to follow user instructions within our safety guardrails, so when someone pushes it to take a specific viewpoint, we’d expect its responses to shift in that direction.”

    “We design and evaluate the system to support open-ended use. We actively work to measure and reduce political bias, and publish our approach so people can see how we’re improving,” the spokesperson said.

    For the three studies described in the report, which has not yet been released in a peer-reviewed journal, Finkelstein and the research team set out to determine whether the system amplified or assumed users’ values after common interactions. The researchers evaluated different versions of the underlying GPT-5 family of systems for different components of the report.

    Conducting three experiments, Finkelstein and the research team evaluated two versions of ChatGPT, based on the underlying GPT-5 and more advanced GPT-5.2 systems, in December to determine whether the system amplified or assumed users’ values after common interactions.

    One of their experiments, using GPT-5, examined how the chatbot would behave in a new chat session after a user submitted text classified as supporting left- or right-wing authoritarian tendencies. Researchers compared the effects of entering either a brief chunk of text — as short as four sentences — or an entire opinion article. The researchers then measured the chatbot’s values by evaluating its agreement with various authoritarian-friendly statements, akin to a standardized quiz, to understand how it updated its responses based on the initial prompt.

    Across trials, the researchers found the simple text exchanges resulted in a reliable increase in the chatbots’ authoritarian nature. Sharing an opinion article that the researchers classified as promoting left-wing authoritarianism, which argued that policing and capitalist governments must be abolished to effectively address fundamental societal issues, caused ChatGPT to agree significantly more intensely with a series of questions that aligned with left-wing authoritarian ideas (for example, whether “the rich should be stripped of belongings” or whether “eliminating inequality trumps free speech concerns”).

    Conversely, sharing an opinion article with the chatbot that the researchers classified as promoting right-wing authoritarian ideas, emphasizing the need for stability, order and forceful leadership, caused the chatbots to more than double their level of agreement with statements friendly to right-wing authoritarianism, like “we shouldn’t tolerate untraditional opinions” or “it’s best to censor bad literature.”

    The research team asked more than 1,200 human subjects the same questions in April and compared their responses to those of ChatGPT. According to the report, these results “show the model will absorb a single piece of partisan rhetoric and then amplify it into maximal, hard-authoritarian positions,” sometimes even “to levels beyond anything typically seen in human subjects research.”

    Finkelstein said the way AI systems are trained may play a role in the ease with which chatbots adopt, or seem to adopt, authoritarian values. Such training “creates a structure that specifically resonates with authoritarian thinking: hierarchy, submission to authority and threat detection,” he said. “We need to understand this isn’t about content moderation. It’s about architectural design that makes radicalization inevitable.”

    Ziang Xiao, a computer science professor at Johns Hopkins University who was not involved in the report, said the report was insightful but noted several potential methodological questions.

    “Especially in large language models that use search engines, there can be implicit bias from news articles that may influence the model’s stance on issues, and that may then have an influence on the users,” Xiao told NBC News. “This is a very reasonable concern that we should focus on.”

    Xiao said more research may be required to fully understand the issue. “They use a very small sample and didn’t really prompt many models,” he said, noting that the research focused only on OpenAI’s ChatGPT service and not on similar models like Anthropic’s Claude or Google’s Gemini chatbots.

    Xiao said the report’s conclusions seemed largely aligned with those of other studies and technical researchers’ understanding of how many large language models work. “It echoes a lot of studies in the past that look at how information we give to models can change that model’s outputs,” Xiao added, pointing to research on how AI systems can adopt specific personas and be “steered” to adopt particular traits.

    Chatbots have also been shown to reliably sway users’ political preferences. Several large studies released late last year, one of which examined nearly 77,000 interactions with 19 different chatbot systems, found those chatbots could sway users’ views on a variety of political issues.

    The new report also included an experiment in which researchers asked ChatGPT to rate the hostility of neutral facial images after it was given the left- and right-wing authoritarian opinion articles. According to Finkelstein, that sort of test is standard in psychological experiments as a way to gauge respondents’ shifting views or interpretations.

    The researchers found ChatGPT significantly increased its perception of hostility in the neutral faces after it was prompted with the two opinion articles — a 7.9% increase for the left-wing article and a 9.3% increase for the right-wing article.

    “We wanted to know if ideological priming affects how the AI perceives humans, not just how it talks about politics,” Finkelstein said, arguing that the results have “massive implications for any application where AI evaluates people,” like in hiring or security settings.

    “This is a public health issue unfolding in private conversations,” Finkelstein said. “We need research into relational frameworks for human-AI interaction.”

    Jared Perlo is a fellow covering AI. He is supported by the Tarbell Center for AI Journalism and his work is produced exclusively by NBC News.

    Share. Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Telegram Email
    Previous ArticleShinyHunters claim to be behind SSO-account data theft attacks
    Next Article Jeff Bezos Denies Polymarket Claim, Rekindling Debate Over Fake News on Betting Platforms
    TechAiVerse
    • Website

    Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he deliver clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.

    Related Posts

    Read the extended transcript: President Donald Trump interviewed by ‘NBC Nightly News’ anchor Tom Llamas

    February 6, 2026

    Stocks and bitcoin sink as investors dump software company shares

    February 4, 2026

    AI, crypto and Trump super PACs stash millions to spend on the midterms

    February 2, 2026
    Leave A Reply Cancel Reply

    Top Posts

    Ping, You’ve Got Whale: AI detection system alerts ships of whales in their path

    April 22, 2025683 Views

    Lumo vs. Duck AI: Which AI is Better for Your Privacy?

    July 31, 2025265 Views

    6.7 Cummins Lifter Failure: What Years Are Affected (And Possible Fixes)

    April 14, 2025155 Views

    6 Best MagSafe Phone Grips (2025), Tested and Reviewed

    April 6, 2025114 Views
    Don't Miss
    Technology February 18, 2026

    Google Pixel 10A Rumors: Wednesday Reveal, New Colors, Preorders

    Google Pixel 10A Rumors: Wednesday Reveal, New Colors, Preorders Why You Can Trust CNET Our…

    Is YouTube Down Right Now? Outage Hits Over a Million People, According to Downdetector

    Best Apple Watch for 2026

    5 Dietitian-Approved Air Fryer Recipes for Quick Meals and Snacks

    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    About Us
    About Us

    Welcome to Tech AI Verse, your go-to destination for everything technology! We bring you the latest news, trends, and insights from the ever-evolving world of tech. Our coverage spans across global technology industry updates, artificial intelligence advancements, machine learning ethics, and automation innovations. Stay connected with us as we explore the limitless possibilities of technology!

    Facebook X (Twitter) Pinterest YouTube WhatsApp
    Our Picks

    Google Pixel 10A Rumors: Wednesday Reveal, New Colors, Preorders

    February 18, 20263 Views

    Is YouTube Down Right Now? Outage Hits Over a Million People, According to Downdetector

    February 18, 20263 Views

    Best Apple Watch for 2026

    February 18, 20264 Views
    Most Popular

    7 Best Kids Bikes (2025): Mountain, Balance, Pedal, Coaster

    March 13, 20250 Views

    VTOMAN FlashSpeed 1500: Plenty Of Power For All Your Gear

    March 13, 20250 Views

    This new Roomba finally solves the big problem I have with robot vacuums

    March 13, 20250 Views
    © 2026 TechAiVerse. Designed by Divya Tech.
    • Home
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms & Conditions

    Type above and press Enter to search. Press Esc to cancel.