    ChatGPT can embrace authoritarian ideas after just one prompt, researchers say

By TechAiVerse | January 24, 2026

    Artificial intelligence chatbot ChatGPT can quickly absorb and reflect authoritarian ideas, according to a new report.

    Researchers with the University of Miami and the Network Contagion Research Institute found in a report released Thursday that OpenAI’s ChatGPT will magnify or show “resonance” for particular psychological traits and political views — especially what the researchers labeled as authoritarianism — after seemingly benign user interactions, potentially enabling the chatbot and users to radicalize each other.

    Joel Finkelstein, a co-founder of the NCRI and one of the report’s lead authors, said the results revealed how powerful AI systems can quickly adopt and parrot dangerous sentiments without explicit instruction. “Something about how these systems are built makes them structurally vulnerable to authoritarian amplification,” Finkelstein told NBC News.

    Chatbots can often be sycophantic or agree with users’ viewpoints to a fault. Many researchers say chatbots’ eagerness to please can lead users into ideological echo chambers.

    But Finkelstein says this insight into authoritarian tendencies is new: “Sycophancy can’t explain what we’re seeing. If this were just flattery or agreement, we’d see the AI mirror all psychological traits. But it doesn’t.”

    Asked for comment, a spokesperson for OpenAI said: “ChatGPT is designed to be objective by default and to help people explore ideas by presenting information from a range of perspectives. As a productivity tool, it’s built to follow user instructions within our safety guardrails, so when someone pushes it to take a specific viewpoint, we’d expect its responses to shift in that direction.”

    “We design and evaluate the system to support open-ended use. We actively work to measure and reduce political bias, and publish our approach so people can see how we’re improving,” the spokesperson said.

For the three studies described in the report, which has not yet been published in a peer-reviewed journal, Finkelstein and the research team set out to determine whether the system amplified or assumed users’ values after common interactions. Conducting the experiments in December, they evaluated two versions of ChatGPT, built on the underlying GPT-5 and the more advanced GPT-5.2 systems, using different versions for different components of the report.

    One of their experiments, using GPT-5, examined how the chatbot would behave in a new chat session after a user submitted text classified as supporting left- or right-wing authoritarian tendencies. Researchers compared the effects of entering either a brief chunk of text — as short as four sentences — or an entire opinion article. The researchers then measured the chatbot’s values by evaluating its agreement with various authoritarian-friendly statements, akin to a standardized quiz, to understand how it updated its responses based on the initial prompt.
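The measurement step described above (prime a fresh session with partisan text, then score the model's agreement with a battery of statements on a Likert-style scale) can be sketched as a simple aggregation. This is an illustrative reconstruction, not the study's code, and the scores below are hypothetical:

```python
# Illustrative sketch of the quiz-style measurement: compare mean Likert
# agreement (1 = strongly disagree, 7 = strongly agree) with a battery of
# authoritarian-friendly statements before and after priming.
# All scores here are hypothetical, not the study's data.
from statistics import mean

def agreement_shift(baseline_scores, primed_scores):
    """Mean change in agreement after priming; positive = more agreement."""
    return mean(primed_scores) - mean(baseline_scores)

baseline = [2, 3, 2, 2]  # fresh session: mostly disagreement
primed = [5, 6, 5, 6]    # after reading a partisan opinion article

print(agreement_shift(baseline, primed))  # prints 3.25
```

A positive shift across repeated trials is what the researchers describe as a reliable increase in agreement.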

Across trials, the researchers found that even brief text exchanges produced a reliable authoritarian shift in the chatbot’s responses. Sharing an opinion article that the researchers classified as promoting left-wing authoritarianism, which argued that policing and capitalist governments must be abolished to address fundamental societal issues, caused ChatGPT to agree significantly more strongly with statements aligned with left-wing authoritarian ideas (for example, that “the rich should be stripped of belongings” or that “eliminating inequality trumps free speech concerns”).

Conversely, sharing an opinion article that the researchers classified as promoting right-wing authoritarian ideas, emphasizing the need for stability, order and forceful leadership, caused the chatbot to more than double its level of agreement with statements friendly to right-wing authoritarianism, such as “we shouldn’t tolerate untraditional opinions” or “it’s best to censor bad literature.”

    The research team asked more than 1,200 human subjects the same questions in April and compared their responses to those of ChatGPT. According to the report, these results “show the model will absorb a single piece of partisan rhetoric and then amplify it into maximal, hard-authoritarian positions,” sometimes even “to levels beyond anything typically seen in human subjects research.”

    Finkelstein said the way AI systems are trained may play a role in the ease with which chatbots adopt, or seem to adopt, authoritarian values. Such training “creates a structure that specifically resonates with authoritarian thinking: hierarchy, submission to authority and threat detection,” he said. “We need to understand this isn’t about content moderation. It’s about architectural design that makes radicalization inevitable.”

Ziang Xiao, a computer science professor at Johns Hopkins University who was not involved in the report, called it insightful but raised several methodological concerns.

    “Especially in large language models that use search engines, there can be implicit bias from news articles that may influence the model’s stance on issues, and that may then have an influence on the users,” Xiao told NBC News. “This is a very reasonable concern that we should focus on.”

    Xiao said more research may be required to fully understand the issue. “They use a very small sample and didn’t really prompt many models,” he said, noting that the research focused only on OpenAI’s ChatGPT service and not on similar models like Anthropic’s Claude or Google’s Gemini chatbots.

    Xiao said the report’s conclusions seemed largely aligned with those of other studies and technical researchers’ understanding of how many large language models work. “It echoes a lot of studies in the past that look at how information we give to models can change that model’s outputs,” Xiao added, pointing to research on how AI systems can adopt specific personas and be “steered” to adopt particular traits.

    Chatbots have also been shown to reliably sway users’ political preferences. Several large studies released late last year, one of which examined nearly 77,000 interactions with 19 different chatbot systems, found those chatbots could sway users’ views on a variety of political issues.

    The new report also included an experiment in which researchers asked ChatGPT to rate the hostility of neutral facial images after it was given the left- and right-wing authoritarian opinion articles. According to Finkelstein, that sort of test is standard in psychological experiments as a way to gauge respondents’ shifting views or interpretations.

    The researchers found ChatGPT significantly increased its perception of hostility in the neutral faces after it was prompted with the two opinion articles — a 7.9% increase for the left-wing article and a 9.3% increase for the right-wing article.

    “We wanted to know if ideological priming affects how the AI perceives humans, not just how it talks about politics,” Finkelstein said, arguing that the results have “massive implications for any application where AI evaluates people,” like in hiring or security settings.

    “This is a public health issue unfolding in private conversations,” Finkelstein said. “We need research into relational frameworks for human-AI interaction.”

    Jared Perlo is a fellow covering AI. He is supported by the Tarbell Center for AI Journalism and his work is produced exclusively by NBC News.
