Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Lyon Is Replacing Windows and Microsoft Office with Linux and OnlyOffice

    Ollama has a native front end chatbot now

    Early universe’s ‘little red dots’ may be black hole stars

    Facebook X (Twitter) Instagram
    • Artificial Intelligence
    • Business Technology
    • Cryptocurrency
    • Gadgets
    • Gaming
    • Health
    • Software and Apps
    • Technology
    Facebook X (Twitter) Instagram Pinterest Vimeo
    Tech AI Verse
    • Home
    • Artificial Intelligence

      AI models may be accidentally (and secretly) learning each other’s bad behaviors

      July 30, 2025

      Another Chinese AI model is turning heads

      July 15, 2025

      AI chatbot Grok issues apology for antisemitic posts

      July 13, 2025

      Apple sued by shareholders for allegedly overstating AI progress

      June 22, 2025

      How far will AI go to defend its own survival?

      June 2, 2025
    • Business

      Cloudflare open-sources Orange Meets with End-to-End encryption

      June 29, 2025

      Google links massive cloud outage to API management issue

      June 13, 2025

      The EU challenges Google and Cloudflare with its very own DNS resolver that can filter dangerous traffic

      June 11, 2025

      These two Ivanti bugs are allowing hackers to target cloud instances

      May 21, 2025

      How cloud and AI transform and improve customer experiences

      May 10, 2025
    • Crypto

      Shiba Inu Price’s 16% Drop Wipes Half Of July Gains; Is August In Trouble?

      July 30, 2025

      White House Crypto Report Suggests Major Changes to US Crypto Tax

      July 30, 2025

      XRP Whale Outflows Reflect Price Concern | Weekly Whale Watch

      July 30, 2025

      Stellar (XLM) Bull Flag Breakout Shows Cracks as Momentum Fades

      July 30, 2025

      Binance Listing Could Be a ‘Kiss of Death’ for Pi Network and New Tokens

      July 30, 2025
    • Technology

      Lyon Is Replacing Windows and Microsoft Office with Linux and OnlyOffice

      July 31, 2025

      Ollama has a native front end chatbot now

      July 31, 2025

      Early universe’s ‘little red dots’ may be black hole stars

      July 31, 2025

      Vibe code is legacy code

      July 31, 2025

      The Math Is Haunted

      July 31, 2025
    • Others
      • Gadgets
      • Gaming
      • Health
      • Software and Apps
    Check BMI
    Tech AI Verse
    You are at:Home»Technology»Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue
    Technology

    Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue

    TechAiVerseBy TechAiVerseApril 5, 2025No Comments7 Mins Read0 Views
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue
    Share
    Facebook Twitter LinkedIn Pinterest WhatsApp Email

    BMI Calculator – Check your Body Mass Index for free!

    Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue

    April 4, 2025 3:12 PM

    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More


    Weaponized large language models (LLMs) fine-tuned with offensive tradecraft are reshaping cyberattacks, forcing CISOs to rewrite their playbooks. They’ve proven capable of automating reconnaissance, impersonating identities and evading real-time detection, accelerating large-scale social engineering attacks.

    Models, including FraudGPT, GhostGPT and DarkGPT, retail for as little as $75 a month and are purpose-built for attack strategies such as phishing, exploit generation, code obfuscation, vulnerability scanning and credit card validation.

    Cybercrime gangs, syndicates and nation-states see revenue opportunities in providing platforms, kits and leasing access to weaponized LLMs today. These LLMs are being packaged much like legitimate businesses package and sell SaaS apps. Leasing a weaponized LLM often includes access to dashboards, APIs, regular updates and, for some, customer support.

    VentureBeat continues to track the progression of weaponized LLMs closely. It’s becoming evident that the lines are blurring between developer platforms and cybercrime kits as weaponized LLMs’ sophistication continues to accelerate. With lease or rental prices plummeting, more attackers are experimenting with platforms and kits, leading to a new era of AI-driven threats.

    Legitimate LLMs in the cross-hairs

    The spread of weaponized LLMs has progressed so quickly that legitimate LLMs are at risk of being compromised and integrated into cybercriminal tool chains. The bottom line is that legitimate LLMs and models are now in the blast radius of any attack.

    The more fine-tuned a given LLM is, the greater the probability it can be directed to produce harmful outputs. Cisco’s The State of AI Security Report reports that fine-tuned LLMs are 22 times more likely to produce harmful outputs than base models. Fine-tuning models is essential for ensuring their contextual relevance. The trouble is that fine-tuning also weakens guardrails and opens the door to jailbreaks, prompt injections and model inversion.

    Cisco’s study proves that the more production-ready a model becomes, the more exposed it is to vulnerabilities that must be considered in an attack’s blast radius. The core tasks teams rely on to fine-tune LLMs, including continuous fine-tuning, third-party integration, coding and testing, and agentic orchestration, create new opportunities for attackers to compromise LLMs.

    Once inside an LLM, attackers work fast to poison data, attempt to hijack infrastructure, modify and misdirect agent behavior and extract training data at scale. Cisco’s study infers that without independent security layers, the models teams work so diligently on to fine-tune aren’t just at risk; they’re quickly becoming liabilities. From an attacker’s perspective, they’re assets ready to be infiltrated and turned.

    Fine-Tuning LLMs dismantles safety controls at scale

    A key part of Cisco’s security team’s research centered on testing multiple fine-tuned models, including Llama-2-7B and domain-specialized Microsoft Adapt LLMs. These models were tested across a wide variety of domains including healthcare, finance and law.

    One of the most valuable takeaways from Cisco’s study of AI security is that fine-tuning destabilizes alignment, even when trained on clean datasets. Alignment breakdown was the most severe in biomedical and legal domains, two industries known for being among the most stringent regarding compliance, legal transparency and patient safety. 

    While the intent behind fine-tuning is improved task performance, the side effect is systemic degradation of built-in safety controls. Jailbreak attempts that routinely failed against foundation models succeeded at dramatically higher rates against fine-tuned variants, especially in sensitive domains governed by strict compliance frameworks.

    The results are sobering. Jailbreak success rates tripled and malicious output generation soared by 2,200% compared to foundation models. Figure 1 shows just how stark that shift is. Fine-tuning boosts a model’s utility but comes at a cost, which is a substantially broader attack surface.

    TAP achieves up to 98% jailbreak success, outperforming other methods across open- and closed-source LLMs. Source: Cisco State of AI Security 2025, p. 16.

    Malicious LLMs are a $75 commodity

    Cisco Talos is actively tracking the rise of black-market LLMs and provides insights into their research in the report. Talos found that GhostGPT, DarkGPT and FraudGPT are sold on Telegram and the dark web for as little as $75/month. These tools are plug-and-play for phishing, exploit development, credit card validation and obfuscation.

    DarkGPT underground dashboard offers “uncensored intelligence” and subscription-based access for as little as 0.0098 BTC—framing malicious LLMs as consumer-grade SaaS.
    Source: Cisco State of AI Security 2025, p. 9.

    Unlike mainstream models with built-in safety features, these LLMs are pre-configured for offensive operations and offer APIs, updates, and dashboards that are indistinguishable from commercial SaaS products.

    $60 dataset poisoning threatens AI supply chains

    “For just $60, attackers can poison the foundation of AI models—no zero-day required,” write Cisco researchers. That’s the takeaway from Cisco’s joint research with Google, ETH Zurich and Nvidia, which shows how easily adversaries can inject malicious data into the world’s most widely used open-source training sets.

    By exploiting expired domains or timing Wikipedia edits during dataset archiving, attackers can poison as little as 0.01% of datasets like LAION-400M or COYO-700M and still influence downstream LLMs in meaningful ways.

    The two methods mentioned in the study, split-view poisoning and frontrunning attacks, are designed to leverage the fragile trust model of web-crawled data. With most enterprise LLMs built on open data, these attacks scale quietly and persist deep into inference pipelines.

    Decomposition attacks quietly extract copyrighted and regulated content

    One of the most startling discoveries Cisco researchers demonstrated is that LLMs can be manipulated to leak sensitive training data without ever triggering guardrails. Cisco researchers used a method called decomposition prompting to reconstruct over 20% of select New York Times and Wall Street Journal articles. Their attack strategy broke down prompts into sub-queries that guardrails classified as safe, then reassembled the outputs to recreate paywalled or copyrighted content.

    Successfully evading guardrails to access proprietary datasets or licensed content is an attack vector every enterprise is grappling to protect today. For those that have LLMs trained on proprietary datasets or licensed content, decomposition attacks can be particularly devastating. Cisco explains that the breach isn’t happening at the input level, it’s emerging from the models’ outputs. That makes it far more challenging to detect, audit or contain.

    If you’re deploying LLMs in regulated sectors like healthcare, finance or legal, you’re not just staring down GDPR, HIPAA or CCPA violations. You’re dealing with an entirely new class of compliance risk, where even legally sourced data can get exposed through inference, and the penalties are just the beginning.

    Final Word: LLMs aren’t just a tool, they’re the latest attack surface

    Cisco’s ongoing research, including Talos’ dark web monitoring, confirms what many security leaders already suspect: weaponized LLMs are growing in sophistication while a price and packaging war is breaking out on the dark web. Cisco’s findings also prove LLMs aren’t on the edge of the enterprise; they are the enterprise. From fine-tuning risks to dataset poisoning and model output leaks, attackers treat LLMs like infrastructure, not apps.

    One of the most valuable key takeaways from Cisco’s report is that static guardrails will no longer cut it. CISOs and security leaders need real-time visibility across the entire IT estate, stronger adversarial testing, and a more streamlined tech stack to keep up – and a new recognition that LLMs and models are an attack surface that becomes more vulnerable with greater fine-tuning.

    Daily insights on business use cases with VB Daily

    If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.

    Read our Privacy Policy

    Thanks for subscribing. Check out more VB newsletters here.

    An error occured.

    BMI Calculator – Check your Body Mass Index for free!

    Share. Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Telegram Email
    Previous ArticleVibe coding at enterprise scale: AI tools now tackle the full development lifecycle
    Next Article Genspark’s Super Agent ups the ante in the general AI agent race
    TechAiVerse
    • Website

    Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he deliver clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.

    Related Posts

    Lyon Is Replacing Windows and Microsoft Office with Linux and OnlyOffice

    July 31, 2025

    Ollama has a native front end chatbot now

    July 31, 2025

    Early universe’s ‘little red dots’ may be black hole stars

    July 31, 2025
    Leave A Reply Cancel Reply

    Top Posts

    6.7 Cummins Lifter Failure: What Years Are Affected (And Possible Fixes)

    April 14, 202532 Views

    New Akira ransomware decryptor cracks encryptions keys using GPUs

    March 16, 202529 Views

    Ping, You’ve Got Whale: AI detection system alerts ships of whales in their path

    April 22, 202528 Views

    OpenAI details ChatGPT-o3, o4-mini, o4-mini-high usage limits

    April 19, 202522 Views
    Don't Miss
    Technology July 31, 2025

    Lyon Is Replacing Windows and Microsoft Office with Linux and OnlyOffice

    Lyon Is Replacing Windows and Microsoft Office with Linux and OnlyOffice Jumping Rocks/Contributor/GettyIs it something…

    Ollama has a native front end chatbot now

    Early universe’s ‘little red dots’ may be black hole stars

    Vibe code is legacy code

    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    About Us
    About Us

    Welcome to Tech AI Verse, your go-to destination for everything technology! We bring you the latest news, trends, and insights from the ever-evolving world of tech. Our coverage spans across global technology industry updates, artificial intelligence advancements, machine learning ethics, and automation innovations. Stay connected with us as we explore the limitless possibilities of technology!

    Facebook X (Twitter) Pinterest YouTube WhatsApp
    Our Picks

    Lyon Is Replacing Windows and Microsoft Office with Linux and OnlyOffice

    July 31, 20250 Views

    Ollama has a native front end chatbot now

    July 31, 20250 Views

    Early universe’s ‘little red dots’ may be black hole stars

    July 31, 20250 Views
    Most Popular

    Xiaomi 15 Ultra Officially Launched in China, Malaysia launch to follow after global event

    March 12, 20250 Views

    Apple thinks people won’t use MagSafe on iPhone 16e

    March 12, 20250 Views

    French Apex Legends voice cast refuses contracts over “unacceptable” AI clause

    March 12, 20250 Views
    © 2025 TechAiVerse. Designed by Divya Tech.
    • Home
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms & Conditions

    Type above and press Enter to search. Press Esc to cancel.