    Traditional fake news detection fails against AI-generated content

    By TechAiVerse · June 23, 2025 · 7 Mins Read

    As generative AI produces increasingly convincing text, Dutch researchers are exploring how linguistic cues, model bias, and transparency tools can help detect fake news.

    By Kim Loohuis

    Published: 18 Jun 2025 11:30

    Large language models (LLMs) are capable of generating text that is grammatically flawless, stylistically convincing and semantically rich. While this technological leap has brought efficiency gains to journalism, education and business communication, it has also complicated the detection of misinformation. How do you identify fake news when even experts struggle to distinguish artificial intelligence (AI)-generated content from human-authored text? 

    This question was central to a recent symposium in Amsterdam on disinformation and LLMs, hosted by CWI, the research institute for mathematics and computer science in the Netherlands, and co-organised with Utrecht University and the University of Groningen. International researchers gathered to explore how misinformation is evolving and what new tools and approaches are needed to counter it. 

    Among the organisers was CWI researcher Davide Ceolin, whose work focuses on information quality, bias in AI models and the explainability of automated assessments. The warning signs that once helped identify misinformation – grammatical errors, awkward phrasing and linguistic inconsistencies – are rapidly becoming obsolete as AI-generated content becomes indistinguishable from human writing.  

    This evolution represents more than just a technical challenge. The World Economic Forum has identified misinformation as the most significant short-term risk globally for the second consecutive year, with the Netherlands ranking it among its top five concerns through 2027. The sophistication of AI-generated content is a key factor driving this heightened concern, presenting a fundamental challenge for organisations and individuals alike.

    For years, Ceolin’s team developed tools and methods to identify fake news through linguistic and reputation patterns, detecting the telltale signs that characterised much early misinformation.

    Their methods combine natural language processing (NLP), developed with colleagues from the Vrije Universiteit Amsterdam; logical reasoning, with colleagues from the University of Milan; and human computation (crowdsourcing), with colleagues from the University of Udine, the University of Queensland and the Royal Melbourne Institute of Technology. Together, these techniques help identify suspicious pieces of text and check their veracity.
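The linguistic-cue approach described above can be caricatured in a few lines. The sketch below is purely illustrative, not the CWI team's actual NLP pipeline: it scores text on a handful of surface signals (exclamation density, all-caps "shouting", loaded vocabulary) that characterised much early misinformation. The sensational-word list is an invented placeholder.

```python
import re

# Illustrative surface-cue scorer (NOT the CWI team's method): counts a few
# signals that early, crudely written misinformation often exhibited.
SENSATIONAL = {"shocking", "unbelievable", "miracle", "secret", "exposed"}

def cue_score(text: str) -> float:
    """Return a rough 0..1 suspicion score from surface linguistic cues."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    exclaims = text.count("!") / len(words)          # exclamation density
    shouting = sum(1 for w in words
                   if len(w) > 3 and w.isupper()) / len(words)  # ALL-CAPS words
    loaded = sum(1 for w in words
                 if w.lower() in SENSATIONAL) / len(words)      # loaded terms
    return min(1.0, exclaims + shouting + loaded)

print(cue_score("SHOCKING secret EXPOSED!!! You won't believe it!"))  # → 1.0
print(cue_score("The committee published its annual report on Tuesday."))  # → 0.0
```

The point of the article is precisely that this class of detector is going obsolete: LLM output triggers none of these cues, so the score collapses to zero regardless of factuality.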

    Game changer

    The game has fundamentally changed. “LLMs are starting to write more linguistically correct texts,” said Ceolin. “The credibility and factuality are not necessarily aligned – that’s the issue.”

    Traditional markers of deception are disappearing just as the volume, sophistication and personalisation of generated content increase exponentially.  

    Tommy van Steen, a university lecturer in cyber security at Leiden University, explained the broader challenge facing researchers. At a recent interdisciplinary event organised by Leiden University – the Night of Digital Security, which brought together experts from law, criminology, technology and public administration – he noted: “Fake news as a theme or word really comes from Trump around the 2016 elections. Everything he disagreed with, he simply called fake news.” 

    However, Van Steen said the problem extends far beyond blatant fabrications. “It’s important to distinguish between misinformation and disinformation,” he said. “Both involve sharing information that isn’t correct, but with misinformation, it’s accidental; with disinformation, it’s intentional.” 

    Beyond linguistic analysis

    For researchers like Ceolin, the implications of AI-generated content extend far beyond simple text generation. Recent research from his team (in collaboration with INRIA, CWI’s sister institute in France) – accepted in the findings of the flagship computational linguistics conference, ACL – revealed how LLMs exhibit different political biases depending on the language they’re prompted in and the nationality they’re assigned. When the same model answered identical political compass questions in different languages or while embodying different national personas, the results varied significantly. 
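The comparison the study performs can be sketched abstractly: the same political-compass items are answered under different language or persona conditions, and the aggregate positions are compared. The answer values below are fabricated placeholders on a -2..+2 disagree/agree scale, purely to show the shape of the analysis, not data from the actual CWI/INRIA paper.

```python
# Hypothetical answers from one model to the same five political-compass
# items, prompted in two languages. Values are invented for illustration.
answers = {
    "english": [1, 2, -1, 0, 1],
    "french":  [-1, 0, -2, -1, 0],
}

def mean_position(scores: list) -> float:
    """Aggregate a run of -2..+2 item responses into one position."""
    return sum(scores) / len(scores)

positions = {lang: mean_position(s) for lang, s in answers.items()}
gap = abs(positions["english"] - positions["french"])
print(positions)          # per-language aggregate position
print(round(gap, 2))      # size of the cross-language shift
```

A non-zero gap on identical questions is the signature of the effect the researchers report: the model's apparent political stance moves with the prompt language or assigned nationality.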

    Van Steen’s work highlights that misinformation isn’t simply a binary of true versus false content. He employs a seven-category framework ranging from satire and parody through to completely fabricated content.

    “It’s not just about complete nonsense or complete truth – there’s actually quite a lot in-between, and that can be at least as harmful, maybe even more harmful,” he said.

    However, Ceolin argued that technological solutions alone are insufficient. “I think it’s a dual effort,” he said. “Users should cooperate with the machine and with other users to foster identification of misinformation.”  

    The approach represents a significant shift from purely automated detection to what Ceolin called “transparent” systems, which provide users with the reasoning behind their assessments. Rather than black-box algorithms delivering binary verdicts, the new generation of tools aims to educate and empower users by explaining their decision-making process. 
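What a "transparent" verdict looks like in practice can be sketched minimally: instead of a bare true/false label, the system returns the per-feature contributions behind its score so a user can audit the reasoning. The feature names and weights below are hypothetical assumptions, not a real deployed system.

```python
# Hypothetical linear weights over named features; positive pushes toward
# "suspicious", negative toward "credible". Values are illustrative only.
WEIGHTS = {"exclamation_density": 2.0, "all_caps_ratio": 1.5, "source_cited": -1.0}

def assess(features: dict) -> dict:
    """Return a verdict plus the per-feature reasoning behind it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return {
        "suspicious": score > 0.5,
        "score": round(score, 3),
        "why": contributions,   # the explanation, not just the verdict
    }

report = assess({"exclamation_density": 0.4, "all_caps_ratio": 0.2,
                 "source_cited": 0.0})
print(report)
```

The `"why"` breakdown is the design point: a user (or auditor) can see which signal drove the verdict and challenge it, which a black-box binary output does not allow.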

    Content farming and micro-targeting concerns

    The symposium at CWI highlighted three escalation levels of AI-driven misinformation: content farming, LLM vulnerabilities and micro-targeting.

    Ceolin identified content farming as the most concerning. “It’s very easy to generate content, including content with negative intentions, but it’s much harder for humans to detect fake generated content,” he said.  

    Van Steen highlighted a fundamental asymmetry that makes detection increasingly challenging. “One of the biggest problems with fake news is this disconnect – how easy it is to create versus how difficult and time-consuming it is to verify,” he noted. “You’re never going to balance that equation easily.”

    The challenge intensifies when sophisticated content generation combines with precision targeting. “If bad AI-generated content effectively targets a specific group of users, it’s even harder to spot and detect,” said Ceolin.  

    Tackling this new generation of sophisticated misinformation requires a fundamental rethinking of detection methodologies. Ceolin advocates for explainable AI systems that prioritise transparency over pure accuracy metrics. When asked to justify choosing an 85% accurate but explainable system over a 99% accurate black box, he poses a crucial counter-question: “Can you really trust the 99% black box model 99% of the time?” 

    The 1% inaccuracy in black box models could present systematic bias beyond random error, and without transparency, organisations cannot identify or address these weaknesses. “In the transparent model, you can identify areas where the model could be deficient and target specific aspects for improvement,” said Ceolin.
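The arithmetic behind this worry is worth making concrete. The numbers below are hypothetical: a model that is 99% accurate overall, but whose errors all land on one small subgroup, is only 50% accurate for that subgroup — systematic bias, not random noise.

```python
# Hypothetical error distribution for a "99% accurate" black-box model.
total = 10_000
subgroup = 200            # 2% of items belong to one small subgroup
errors = 100              # 1% overall error rate
errors_in_subgroup = 100  # suppose every error falls on that subgroup

overall_accuracy = 1 - errors / total
subgroup_accuracy = 1 - errors_in_subgroup / subgroup
print(overall_accuracy)   # 0.99: looks excellent in aggregate
print(subgroup_accuracy)  # 0.5: a coin flip for the affected subgroup
```

Without transparency, the aggregate figure hides the collapse entirely, which is exactly the deficiency Ceolin says a transparent model lets you locate and fix.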

    This philosophy extends to the broader challenge of assessing AI bias. “We are now looking at whether we can benchmark and measure the bias of these models so that we can help users understand the quality of information they receive from them,” he said. 

    Preparing for an uncertain future

    For organisations grappling with the new landscape, Ceolin’s advice emphasised the fundamentals. “We shouldn’t forget that all the technology we’ve developed so far can still play a big role,” he said.

    Even as LLMs become more sophisticated, traditional verification approaches remain relevant. 

    “These LLMs, in several cases, also show the sources they use for their answers,” said Ceolin. “We should teach users to look beyond the text they receive as a response to check that these really are the sources used, and then check the reputation, reliability and credibility of those sources.” 
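The habit Ceolin describes — looking past the response text to the sources behind it — can be partially mechanised. This is a hedged, stdlib-only sketch: it pulls linked URLs out of a response and surfaces their domains for a reputation check. The reputable-domain list is a placeholder assumption, and real reputation assessment is far richer than a set lookup.

```python
import re
from urllib.parse import urlparse

# Placeholder allow-list; a real system would query reputation services.
KNOWN_REPUTABLE = {"nature.com", "reuters.com", "who.int"}

def source_domains(response: str) -> list:
    """Extract the domains of all URLs cited in an LLM response."""
    urls = re.findall(r"https?://\S+", response)
    return [urlparse(u).netloc.lower().removeprefix("www.") for u in urls]

answer = ("Vaccination rates rose last year "
          "(https://www.who.int/data, https://example-blog.net/post).")
domains = source_domains(answer)
flagged = [d for d in domains if d not in KNOWN_REPUTABLE]
print(domains)   # every domain the answer leans on
print(flagged)   # domains the user should scrutinise by hand
```

This automates only the first step; checking that the cited pages actually support the claim remains, as Ceolin says, a job for the informed user.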

    The future requires what the CWI researcher describes as a “joint effort” involving companies, citizens and institutions. “We as researchers are highlighting the issues and risks, and proposing solutions,” he said.

    “It will be fundamental for us to help citizens understand the benefits but also the limitations of these models. The last judgement should come from users – but informed users, supported by transparent tools that help them understand not just what they’re reading, but why they should trust it.”
