
    WTF is ‘shadow AI,’ and why should publishers care?

By TechAiVerse | March 15, 2025 | 8 min read

    If I paid for ChatGPT Plus from my own pocket and used it to help me write this article, would my boss know — or care?

    That’s the question surrounding “shadow AI” — which refers to the use of AI tools at work that haven’t been formally approved by companies.

The ongoing pressure to work faster, along with the proliferation of easy-to-use generative AI tools, means more editorial staff are using AI to complete tasks. And while using generative AI for minor tasks like grammar checks, rewriting copy or testing different headlines is one class of infraction, other uses could cause bigger problems down the line if left unchecked.

    And that could lead to inconsistent editorial standards, security vulnerabilities and ethical missteps, legal experts say.

    Here’s a more detailed look at what it is, and how publishers are set up to deal with the risks. 

    What is shadow AI?

    Shadow AI refers to the use of AI tools at work that haven’t been officially approved or licensed by the company. It’s been a thorn in the side of IT departments everywhere, as the unsanctioned use of generative AI tools at work can make businesses more vulnerable to data breaches. 

But its use in newsrooms poses a unique set of considerations. Inputting sensitive source material, proprietary research, embargoed news or copyrighted information into large language models without a publisher's oversight does more than jeopardize the confidentiality of that information, the journalist's reputation and the accuracy of their work; it could even be illegal.

    “If somebody takes [my work] and puts it into a system, and now the owner of that system has a copy of it, they could potentially use it in ways that [I] never intended or would have ever permitted. And one of those ways is training AI,” said Michael Yang, senior director of AI advisory services at law firm Husch Blackwell. “You could be in a situation where you have inadvertently or unintentionally caused a breach of a contract situation.”

    Newsroom employees inputting copyrighted data into an LLM that then uses that data for training purposes could cause legal issues down the road.

    What are the risks involved?

Legal experts who spoke to Digiday cited three main considerations: the potential bias of AI models, confidentiality and accuracy issues.

    The bias of AI models has been well-reported. If the data used to train AI models is biased or one-sided (such as skewed in favor of certain races or genders) and journalists depend on tools built from these models for their work, the output could end up perpetuating those stereotypes, Yang stressed.

    LLMs scraping publishers’ online content and using it to train their models is at the heart of copyright infringement cases like the one brought by The New York Times against OpenAI. The same questions around how these LLMs are using data to train their systems are why it could be risky for journalists to input copyrighted data (or any sensitive information — such as confidential source information) into an AI model that is connected to the internet and not hosted locally, according to Felix Simon, a research fellow in AI and news at Oxford University who studies the implications of AI for journalism.

    Sensitive data could be fed into these unapproved systems and used for training the AI models — potentially appearing in outputs, Simon said. And if these systems aren’t secure, they could be viewed by the AI tech company, the people reviewing model outputs to make updates to the systems, or third parties, he added.

Sharing copyrighted data in this way with an AI system could be illegal due to the way AI companies can ingest inputs and use them as training data, Yang stressed. And the publisher could be liable for either infringing on copyright or generating infringing content, added Gary Kibel, partner at law firm Davis+Gilbert, which advises media and advertising clients.

    Meanwhile, using a tool that hasn’t been vetted can cause accuracy problems. “If you input into an AI platform, ‘If CEO Jane Doe did the following, what would that mean?’ and then the AI platform rolls that into their training data, and it comes out in someone else’s output that the CEO Jane Doe did the following… they may come to you and say, ‘How in the world did this get out? I told only you,’” Kibel said.

Many of the larger publishers have established formal policies, principles and guardrails for their newsrooms.

Gannett has its "Ethical Guidelines and Policy for Gannett Journalists Regarding AI-Generated or Assisted Content." It's one of a number of publishers that have developed such policies; others include the Guardian, The New York Times and The Washington Post.

    Publishers also have created internal groups dedicated to determining these principles and guidelines.

    For example, Gannett has an “AI Council,” which is made up of cross-functional managers who are tasked with reviewing new AI tools and use cases for evaluation and approval. Similar task forces cropped up at companies like BuzzFeed and Forbes in 2023.

    “These protocols ensure the protection of Gannett personnel, assets, IP, and information,” a spokesperson for the company said.

Educating newsroom employees on the risks that come with using AI tools not paid for or approved by their company is also key. A publishing exec, who spoke on the condition of anonymity, said the best approach is to explain the risks involved, especially how those risks affect employees personally.

    So far, publishers think their policies, guidelines and AI-dedicated task forces are enough to steer their newsrooms in the right direction. 

    That could work, as long as those guidelines “have some teeth,” with consequences such as any disciplinary action clearly explained, according to Yang, who is a former director and associate general counsel at Adobe.

    Companies can also whitelist technology that is approved for use by the newsroom. For example, The New York Times recently approved a whole host of AI programs for editorial and product staff, including Google’s Vertex AI and NotebookLM, Semafor reported.
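To make the allowlisting idea concrete, here is a minimal sketch of how an IT team might gate outbound requests against a list of approved AI services. The domain names and function are purely hypothetical placeholders, not any publisher's actual policy or infrastructure:

```python
# Hypothetical sketch: gate outbound AI requests against an approved-tools
# allowlist. Domains below are illustrative placeholders only.
APPROVED_AI_DOMAINS = {
    "vertex-ai.example.internal",    # stand-in for a contracted, approved service
    "notebooklm.example.internal",
}

def is_request_allowed(hostname: str) -> bool:
    """Return True only if the request targets an approved AI service."""
    return hostname in APPROVED_AI_DOMAINS

print(is_request_allowed("vertex-ai.example.internal"))   # True
print(is_request_allowed("random-chatbot.example.com"))   # False
```

In practice this check would live in a proxy or firewall rule rather than application code, but the principle is the same: anything not explicitly approved is blocked by default.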

    But that’s hard to do if you’re a small publisher with fewer resources. It’s also impossible to review all the AI tools available out there that a journalist might use. The legal experts and publishing execs told Digiday they recognize the challenge of controlling how journalists use online information.

    How do you police shadow AI?

    You can’t. At least, not completely. But you can ensure that staff know where they and the company stand on how AI tools should be used at work. 

    “It could be as simple as someone who’s running an app on their private phone,” Yang said. “How do you effectively police that when that is their phone, their property and they can do it without anybody knowing?”

    But that’s where the formal policies, principles and guardrails set up by publishers can help. 

One publishing exec said they expected some shadow AI to happen in the newsroom, but said they were confident in the training their company was providing to employees. Their company holds several training sessions a year to discuss its AI policy and guidelines, such as not uploading confidential source material, financial data or personal information into LLMs the company hasn't approved.
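Guidelines like these can be partially automated. As a purely illustrative sketch (the marker list is invented, not any publisher's actual policy), a pre-submission check might flag text containing terms the policy forbids sharing with an external LLM:

```python
# Hypothetical sketch: flag text that a newsroom policy might forbid
# pasting into an external LLM. The marker list is illustrative only.
FORBIDDEN_MARKERS = ["embargo", "off the record", "confidential", "do not publish"]

def flag_sensitive(text: str) -> list[str]:
    """Return the policy markers found in the text (case-insensitive)."""
    lowered = text.lower()
    return [m for m in FORBIDDEN_MARKERS if m in lowered]

print(flag_sensitive("EMBARGO until Friday: draft copy attached"))  # ['embargo']
print(flag_sensitive("Notes on the press release"))                 # []
```

A keyword check like this is a weak net on its own, which is why the experts quoted here emphasize education and policy over purely technical controls.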

    “I tend to trust people with making judgments in terms of the work that they do and knowing what’s good for them,” said the exec.

    A Gannett spokesperson said the company has a “robust process” for approving and implementing technology across its newsroom. The company has a specific tech policy that outlines software and online services that are approved, as well as how to request access and payment of other services if needed.

    “This policy helps us ensure the security and integrity of our systems and data,” the spokesperson said.

    According to a recent report by AI software company Trint, 64% of organizations plan to improve employee education and 57% will introduce new policies on AI usage this year.

    But another question companies should ask themselves is: why are journalists doing this?

    “Maybe they’re doing it because the tools that are being made available to them are not sufficient,” Yang said. “You can lean into it and say, ‘We’re going to vet the tools, have technical protections for the tools…  and we’re going to have policies and education to make sure you understand what you can and can’t do [and] how best to use it to prevent these problems.’”
