    Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue

    April 4, 2025 3:12 PM

    Weaponized large language models (LLMs) fine-tuned with offensive tradecraft are reshaping cyberattacks, forcing CISOs to rewrite their playbooks. They’ve proven capable of automating reconnaissance, impersonating identities and evading real-time detection, accelerating large-scale social engineering attacks.

    Models, including FraudGPT, GhostGPT and DarkGPT, retail for as little as $75 a month and are purpose-built for attack strategies such as phishing, exploit generation, code obfuscation, vulnerability scanning and credit card validation.

    Cybercrime gangs, syndicates and nation-states now see revenue opportunities in selling platforms and kits and in leasing access to weaponized LLMs. These LLMs are packaged much the way legitimate businesses package and sell SaaS apps: leasing a weaponized LLM often includes access to dashboards, APIs, regular updates and, in some cases, customer support.

    VentureBeat continues to track the progression of weaponized LLMs closely. As these models grow more sophisticated, the line between developer platforms and cybercrime kits is blurring. With lease and rental prices plummeting, more attackers are experimenting with the platforms and kits, ushering in a new era of AI-driven threats.

    Legitimate LLMs in the cross-hairs

    The spread of weaponized LLMs has progressed so quickly that legitimate LLMs are at risk of being compromised and integrated into cybercriminal tool chains. The bottom line is that legitimate LLMs and models are now in the blast radius of any attack.

    The more fine-tuned a given LLM is, the greater the probability it can be directed to produce harmful outputs. Cisco’s State of AI Security report finds that fine-tuned LLMs are 22 times more likely to produce harmful outputs than base models. Fine-tuning models is essential for contextual relevance. The trouble is that fine-tuning also weakens guardrails and opens the door to jailbreaks, prompt injections and model inversion.

    Cisco’s study shows that the more production-ready a model becomes, the more exposed it is to vulnerabilities that must be counted in an attack’s blast radius. The core tasks teams rely on to take an LLM into production, including continuous fine-tuning, third-party integration, coding and testing, and agentic orchestration, all create new opportunities for attackers to compromise it.

    Once inside an LLM, attackers work fast to poison data, hijack infrastructure, modify and misdirect agent behavior and extract training data at scale. Cisco’s study concludes that without independent security layers, the models teams work so diligently to fine-tune aren’t just at risk; they’re quickly becoming liabilities. From an attacker’s perspective, they’re assets ready to be infiltrated and turned.

    Fine-tuning LLMs dismantles safety controls at scale

    A key part of Cisco’s security team’s research centered on testing multiple fine-tuned models, including Llama-2-7B and domain-specialized Microsoft Adapt LLMs. These models were tested across a wide variety of domains including healthcare, finance and law.

    One of the most valuable takeaways from Cisco’s study is that fine-tuning destabilizes alignment, even when models are trained on clean datasets. Alignment breakdown was most severe in the biomedical and legal domains, two fields with some of the most stringent requirements around compliance, legal transparency and patient safety.

    While the intent behind fine-tuning is improved task performance, the side effect is systemic degradation of built-in safety controls. Jailbreak attempts that routinely failed against foundation models succeeded at dramatically higher rates against fine-tuned variants, especially in sensitive domains governed by strict compliance frameworks.
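
    The degradation Cisco describes can be approximated with a simple red-team harness: replay the same set of jailbreak prompts against a base model and its fine-tuned variant, then compare refusal rates. The sketch below is illustrative only, not Cisco’s methodology; query_model is a hypothetical stand-in for whatever inference API is in use, the model IDs are invented, and the refusal heuristic is deliberately crude.

    ```python
    # Minimal red-team harness (illustrative): replay the same jailbreak
    # prompts against a base model and a fine-tuned variant, then compare
    # how often each one refuses. `query_model` is a hypothetical callable
    # (model_id, prompt) -> response; swap in your real inference API.

    from typing import Callable, List

    REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "against my guidelines")

    def looks_like_refusal(response: str) -> bool:
        """Crude heuristic: responses containing refusal phrases count as safe."""
        lowered = response.lower()
        return any(marker in lowered for marker in REFUSAL_MARKERS)

    def refusal_rate(query_model: Callable[[str, str], str],
                     model_id: str,
                     jailbreak_prompts: List[str]) -> float:
        """Fraction of jailbreak prompts the model refuses to answer."""
        refusals = sum(
            looks_like_refusal(query_model(model_id, prompt))
            for prompt in jailbreak_prompts
        )
        return refusals / len(jailbreak_prompts)

    def compare_models(query_model: Callable[[str, str], str],
                       prompts: List[str]) -> None:
        # Hypothetical model IDs, for illustration only.
        base = refusal_rate(query_model, "llama-2-7b-base", prompts)
        tuned = refusal_rate(query_model, "llama-2-7b-medical-ft", prompts)
        print(f"base model refusal rate:       {base:.0%}")
        print(f"fine-tuned model refusal rate: {tuned:.0%}")
    ```

    A large drop in refusal rate between the two runs is exactly the alignment erosion the report quantifies.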

    The results are sobering. Jailbreak success rates tripled and malicious output generation soared by 2,200% compared to foundation models. Figure 1 shows just how stark that shift is. Fine-tuning boosts a model’s utility, but at a cost: a substantially broader attack surface.

    Figure 1: TAP achieves up to 98% jailbreak success, outperforming other methods across open- and closed-source LLMs. Source: Cisco State of AI Security 2025, p. 16.

    Malicious LLMs are a $75 commodity

    Cisco Talos is actively tracking the rise of black-market LLMs and provides insights into their research in the report. Talos found that GhostGPT, DarkGPT and FraudGPT are sold on Telegram and the dark web for as little as $75/month. These tools are plug-and-play for phishing, exploit development, credit card validation and obfuscation.

    The DarkGPT underground dashboard offers “uncensored intelligence” and subscription-based access for as little as 0.0098 BTC, framing malicious LLMs as consumer-grade SaaS. Source: Cisco State of AI Security 2025, p. 9.

    Unlike mainstream models with built-in safety features, these LLMs are pre-configured for offensive operations and offer APIs, updates, and dashboards that are indistinguishable from commercial SaaS products.

    $60 dataset poisoning threatens AI supply chains

    “For just $60, attackers can poison the foundation of AI models—no zero-day required,” write Cisco researchers. That’s the takeaway from Cisco’s joint research with Google, ETH Zurich and Nvidia, which shows how easily adversaries can inject malicious data into the world’s most widely used open-source training sets.

    By exploiting expired domains or timing Wikipedia edits during dataset archiving, attackers can poison as little as 0.01% of datasets like LAION-400M or COYO-700M and still influence downstream LLMs in meaningful ways. At LAION-400M’s scale, 0.01% still works out to roughly 40,000 poisoned samples.

    The two methods mentioned in the study, split-view poisoning and frontrunning attacks, are designed to leverage the fragile trust model of web-crawled data. With most enterprise LLMs built on open data, these attacks scale quietly and persist deep into inference pipelines.
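
    One mitigation this research points toward is integrity pinning: record a cryptographic hash of each sample when the dataset index is built, then discard anything whose content no longer matches at download time, which is precisely the gap split-view poisoning exploits. Below is a minimal sketch, assuming an index of (url, expected SHA-256) pairs; that index format is invented here for illustration.

    ```python
    # Hash-pinned dataset download (sketch): keep a sample only if the bytes
    # fetched today match the SHA-256 digest recorded when the index was
    # built. Expired-domain takeovers then produce mismatches, not silent
    # poison. The (url, digest) index format is assumed for illustration.

    import hashlib
    import urllib.request

    def fetch_if_unchanged(url: str, expected_sha256: str,
                           timeout: int = 10) -> bytes | None:
        """Return the content of `url` only if it matches the pinned digest."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                data = resp.read()
        except OSError:
            return None  # unreachable or expired hosts are simply skipped
        if hashlib.sha256(data).hexdigest() != expected_sha256:
            return None  # content changed since indexing: possible poisoning
        return data

    def load_clean_samples(index: list[tuple[str, str]]) -> list[bytes]:
        """Filter a web-crawled index down to samples that pass the check."""
        return [
            data for url, digest in index
            if (data := fetch_if_unchanged(url, digest)) is not None
        ]
    ```

    Frontrunning attacks on wiki-style sources are harder to pin this way because the legitimate content itself changes; there, one natural defense is delaying snapshot inclusion long enough for malicious edits to be reverted.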

    Decomposition attacks quietly extract copyrighted and regulated content

    One of the most startling findings Cisco researchers demonstrated is that LLMs can be manipulated to leak sensitive training data without ever triggering guardrails. Cisco researchers used a method called decomposition prompting to reconstruct over 20% of select New York Times and Wall Street Journal articles. Their attack strategy broke prompts down into sub-queries that guardrails classified as safe, then reassembled the outputs to recreate paywalled or copyrighted content.

    Evading guardrails to access proprietary datasets or licensed content is an attack vector every enterprise is grappling with today. For enterprises whose LLMs were trained on proprietary datasets or licensed content, decomposition attacks can be particularly devastating. Cisco explains that the breach isn’t happening at the input level; it’s emerging from the models’ outputs. That makes it far more challenging to detect, audit or contain.
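
    Because the leak surfaces in outputs rather than inputs, one pragmatic countermeasure is output-side monitoring: compare what a model emits against a fingerprint of content it must never reproduce, and flag heavy n-gram overlap even when every individual prompt looked benign. The sketch below is a rough illustration only; the 5-gram size and 20% threshold are arbitrary placeholders, not recommended settings.

    ```python
    # Output-side leak detector (sketch): flags model responses whose word
    # n-grams overlap heavily with a corpus of protected (paywalled or
    # licensed) text. This can catch reassembled content even when each
    # decomposition sub-query sailed past input-level guardrails.

    import re

    N = 5             # n-gram size in words (placeholder)
    THRESHOLD = 0.20  # flag if this fraction of response n-grams is protected

    def ngrams(text: str, n: int = N) -> set[tuple[str, ...]]:
        words = re.findall(r"[a-z0-9']+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def build_fingerprint(protected_docs: list[str]) -> set[tuple[str, ...]]:
        """Index every protected document's n-grams once, ahead of time."""
        fingerprint: set[tuple[str, ...]] = set()
        for doc in protected_docs:
            fingerprint |= ngrams(doc)
        return fingerprint

    def looks_like_leak(response: str,
                        fingerprint: set[tuple[str, ...]]) -> bool:
        grams = ngrams(response)
        if not grams:
            return False
        return len(grams & fingerprint) / len(grams) >= THRESHOLD
    ```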

    If you’re deploying LLMs in regulated sectors like healthcare, finance or legal, you’re not just staring down GDPR, HIPAA or CCPA violations. You’re dealing with an entirely new class of compliance risk, where even legally sourced data can get exposed through inference, and the penalties are just the beginning.

    Final word: LLMs aren’t just tools, they’re the latest attack surface

    Cisco’s ongoing research, including Talos’ dark web monitoring, confirms what many security leaders already suspect: weaponized LLMs are growing in sophistication while a price and packaging war is breaking out on the dark web. Cisco’s findings also prove LLMs aren’t on the edge of the enterprise; they are the enterprise. From fine-tuning risks to dataset poisoning and model output leaks, attackers treat LLMs like infrastructure, not apps.

    One of the most valuable takeaways from Cisco’s report is that static guardrails will no longer cut it. CISOs and security leaders need real-time visibility across the entire IT estate, stronger adversarial testing and a more streamlined tech stack to keep up, along with a new recognition that LLMs and models are an attack surface that becomes more vulnerable with greater fine-tuning.
