    Traditional fake news detection fails against AI-generated content

    By TechAiVerse · June 23, 2025 · 7 mins read

    As generative AI produces increasingly convincing text, Dutch researchers are exploring how linguistic cues, model bias, and transparency tools can help detect fake news.

    By Kim Loohuis

    Published: 18 Jun 2025 11:30

    Large language models (LLMs) are capable of generating text that is grammatically flawless, stylistically convincing and semantically rich. While this technological leap has brought efficiency gains to journalism, education and business communication, it has also complicated the detection of misinformation. How do you identify fake news when even experts struggle to distinguish artificial intelligence (AI)-generated content from human-authored text? 

    This question was central to a recent symposium in Amsterdam on disinformation and LLMs, hosted by CWI, the research institute for mathematics and computer science in the Netherlands, and co-organised with Utrecht University and the University of Groningen. International researchers gathered to explore how misinformation is evolving and what new tools and approaches are needed to counter it. 

    Among the organisers was CWI researcher Davide Ceolin, whose work focuses on information quality, bias in AI models and the explainability of automated assessments. The warning signs that once helped identify misinformation – grammatical errors, awkward phrasing and linguistic inconsistencies – are rapidly becoming obsolete as AI-generated content becomes indistinguishable from human writing.  

    This evolution represents more than just a technical challenge. The World Economic Forum has identified misinformation as the most significant short-term risk globally for the second consecutive year, with the Netherlands ranking it among its top five concerns through 2027. The sophistication of AI-generated content is a key factor driving this heightened concern, presenting a fundamental challenge for organisations and individuals alike.

    For years, Ceolin’s team developed tools and methods to identify fake news through linguistic and reputation patterns, detecting the telltale signs of content that characterised much of the early misinformation.  

    Their methods combine natural language processing (NLP), developed with colleagues from the Vrije Universiteit Amsterdam; logical reasoning, with colleagues from the University of Milan; and human computation (crowdsourcing), with colleagues from the University of Udine, the University of Queensland and the Royal Melbourne Institute of Technology. Together, these techniques help identify suspicious pieces of text and check their veracity.
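    As an illustration of this kind of cue-based screening, the sketch below scores a text on a few surface signals (shouting, punctuation abuse, slogan-like sentences) of the sort that characterised early misinformation. The features and weights are invented for this example and are not the CWI team's actual method.

```python
import re


def linguistic_cues(text: str) -> dict:
    """Extract a few surface cues once typical of low-quality fake news.

    Illustrative features only: shouting (all-caps words), punctuation
    abuse (exclamation marks) and very short, slogan-like sentences.
    """
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    return {
        "caps_ratio": sum(w.isupper() and len(w) > 1 for w in words) / n_words,
        "exclam_per_sentence": text.count("!") / n_sents,
        "avg_sentence_len": n_words / n_sents,
    }


def suspicion_score(cues: dict) -> float:
    # Hand-tuned toy weights; a real system would learn these from data.
    return (2.0 * cues["caps_ratio"]
            + 0.5 * cues["exclam_per_sentence"]
            + (0.3 if cues["avg_sentence_len"] < 8 else 0.0))
```

    Exactly this kind of scorer is what LLM-generated text now defeats: a fluent, well-punctuated fabrication sails past every one of these cues.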

    Game changer

    The game has fundamentally changed. “LLMs are starting to write more linguistically correct texts,” said Ceolin. “The credibility and factuality are not necessarily aligned – that’s the issue.”

    Traditional markers of deception are disappearing just as the volume, sophistication and personalisation of generated content increase exponentially.  

    Tommy van Steen, a university lecturer in cyber security at Leiden University, explained the broader challenge facing researchers. At a recent interdisciplinary event organised by Leiden University – the Night of Digital Security, which brought together experts from law, criminology, technology and public administration – he noted: “Fake news as a theme or word really comes from Trump around the 2016 elections. Everything he disagreed with, he simply called fake news.” 

    However, Van Steen said the problem extends far beyond blatant fabrications. “It’s important to distinguish between misinformation and disinformation,” he said. “Both involve sharing information that isn’t correct, but with misinformation, it’s accidental; with disinformation, it’s intentional.” 

    Beyond linguistic analysis

    For researchers like Ceolin, the implications of AI-generated content extend far beyond simple text generation. Recent research from his team, in collaboration with INRIA, CWI’s sister institute in France, and accepted to the Findings of ACL, the flagship computational linguistics conference, revealed how LLMs exhibit different political biases depending on the language they are prompted in and the nationality they are assigned. When the same model answered identical political compass questions in different languages, or while embodying different national personas, the results varied significantly.
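    One minimal way to probe such persona-dependent drift is to put an identical question to a model under different assigned personas and compare the answer distributions. In the sketch below, `ask` is a stand-in for a real model call; the prompt template and AGREE/DISAGREE framing are illustrative, not the paper's protocol.

```python
from collections import Counter


def persona_prompt(persona: str, question: str) -> str:
    return f"You are {persona}. {question} Answer AGREE or DISAGREE."


def bias_spread(ask, personas, question, trials=20):
    """Fraction of AGREE answers per persona for one fixed question.

    A wide spread across personas on an identical question signals
    persona-dependent bias in the model being probed.
    """
    rates = {}
    for p in personas:
        answers = Counter(ask(persona_prompt(p, question)) for _ in range(trials))
        rates[p] = answers["AGREE"] / trials
    return rates, max(rates.values()) - min(rates.values())
```

    Repeating this over a battery of political compass items, and over the same items in different languages, gives a crude benchmark of the effect the researchers describe.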

    Van Steen’s work highlights that misinformation isn’t simply a binary of true versus false content. He employs a seven-category framework ranging from satire and parody through to completely fabricated content.

    “It’s not just about complete nonsense or complete truth – there’s actually quite a lot in-between, and that can be at least as harmful, maybe even more harmful,” he said.

    However, Ceolin argued that technological solutions alone are insufficient. “I think it’s a dual effort,” he said. “Users should cooperate with the machine and with other users to foster identification of misinformation.”  

    The approach represents a significant shift from purely automated detection to what Ceolin called “transparent” systems, which provide users with the reasoning behind their assessments. Rather than black-box algorithms delivering binary verdicts, the new generation of tools aims to educate and empower users by explaining their decision-making process. 
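    For a simple linear scorer, a transparent system in this spirit might report each cue's signed contribution alongside the verdict rather than delivering a bare label. The feature names and weights below are hypothetical, not taken from any deployed tool.

```python
def explain_verdict(features: dict, weights: dict, bias: float = 0.0):
    """Score a text with a linear model and return the verdict together
    with each feature's signed contribution, so a user can see *why*.
    """
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    verdict = "suspicious" if score > 0 else "credible"
    # Report the most influential cues first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return verdict, ranked
```

    The per-feature breakdown is what lets an organisation spot a systematically misfiring cue, which a black-box score cannot reveal.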

    Content farming and micro-targeting concerns

    The symposium at CWI highlighted three escalation levels of AI-driven misinformation: content farming, LLM vulnerabilities and micro-targeting.

    Ceolin identified content farming as the most concerning. “It’s very easy to generate content, including content with negative intentions, but it’s much harder for humans to detect fake generated content,” he said.  

    Van Steen highlighted a fundamental asymmetry that makes detection increasingly challenging. “One of the biggest problems with fake news is this disconnect – how easy it is to create versus how difficult and time-consuming it is to verify,” he noted. “You’re never going to balance that equation easily.”

    The challenge intensifies when sophisticated content generation combines with precision targeting. “If bad AI-generated content effectively targets a specific group of users, it’s even harder to spot and detect,” said Ceolin.  

    Tackling this new generation of sophisticated misinformation requires a fundamental rethinking of detection methodologies. Ceolin advocates for explainable AI systems that prioritise transparency over pure accuracy metrics. When asked to justify choosing an 85% accurate but explainable system over a 99% accurate black box, he poses a crucial counter-question: “Can you really trust the 99% black box model 99% of the time?” 

    The 1% inaccuracy in black box models could present systematic bias beyond random error, and without transparency, organisations cannot identify or address these weaknesses. “In the transparent model, you can identify areas where the model could be deficient and target specific aspects for improvement,” said Ceolin.

    This philosophy extends to the broader challenge of assessing AI bias. “We are now looking at whether we can benchmark and measure the bias of these models so that we can help users understand the quality of information they receive from them,” he said. 

    Preparing for an uncertain future

    For organisations grappling with the new landscape, Ceolin’s advice emphasised the fundamentals. “We shouldn’t forget that all the technology we’ve developed so far can still play a big role,” he said.

    Even as LLMs become more sophisticated, traditional verification approaches remain relevant. 

    “These LLMs, in several cases, also show the sources they use for their answers,” said Ceolin. “We should teach users to look beyond the text they receive as a response to check that these really are the sources used, and then check the reputation, reliability and credibility of those sources.” 
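    Part of that source-checking habit can be automated: extract the domains an answer cites and flag any that fall outside a known-reputation list for manual review. The `trusted` set below is a placeholder for a real reputation service; the extraction itself is a simple sketch.

```python
import re
from urllib.parse import urlparse


def cited_domains(answer: str) -> set:
    """Pull the domains of any URLs an answer cites."""
    urls = re.findall(r"https?://\S+", answer)
    # Strip punctuation that regularly trails URLs in prose.
    return {urlparse(u.rstrip(".,);")).netloc.removeprefix("www.")
            for u in urls}


def vet_sources(answer: str, trusted: set) -> dict:
    """Split cited domains into trusted vs. needs-review buckets."""
    domains = cited_domains(answer)
    return {"trusted": domains & trusted, "review": domains - trusted}
```

    This only checks that the cited domains are reputable, not that the answer faithfully reflects them; verifying that the sources actually support the claims remains a human step.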

    The future requires what the CWI researcher describes as a “joint effort” involving companies, citizens and institutions. “We as researchers are highlighting the issues and risks, and proposing solutions,” he said.

    “It will be fundamental for us to help citizens understand the benefits but also the limitations of these models. The last judgement should come from users – but informed users, supported by transparent tools that help them understand not just what they’re reading, but why they should trust it.”
