Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    The AI Hype Index: The White House’s war on “woke AI”

    An EPA rule change threatens to gut US climate regulations

    Roundtables: Why It’s So Hard to Make Welfare AI Fair

    Facebook X (Twitter) Instagram
    • Artificial Intelligence
    • Business Technology
    • Cryptocurrency
    • Gadgets
    • Gaming
    • Health
    • Software and Apps
    • Technology
    Facebook X (Twitter) Instagram Pinterest Vimeo
    Tech AI Verse
    • Home
    • Artificial Intelligence

      AI models may be accidentally (and secretly) learning each other’s bad behaviors

      July 30, 2025

      Another Chinese AI model is turning heads

      July 15, 2025

      AI chatbot Grok issues apology for antisemitic posts

      July 13, 2025

      Apple sued by shareholders for allegedly overstating AI progress

      June 22, 2025

      How far will AI go to defend its own survival?

      June 2, 2025
    • Business

      Cloudflare open-sources Orange Meets with End-to-End encryption

      June 29, 2025

      Google links massive cloud outage to API management issue

      June 13, 2025

      The EU challenges Google and Cloudflare with its very own DNS resolver that can filter dangerous traffic

      June 11, 2025

      These two Ivanti bugs are allowing hackers to target cloud instances

      May 21, 2025

      How cloud and AI transform and improve customer experiences

      May 10, 2025
    • Crypto

      Shiba Inu Price’s 16% Drop Wipes Half Of July Gains; Is August In Trouble?

      July 30, 2025

      White House Crypto Report Suggests Major Changes to US Crypto Tax

      July 30, 2025

      XRP Whale Outflows Reflect Price Concern | Weekly Whale Watch

      July 30, 2025

      Stellar (XLM) Bull Flag Breakout Shows Cracks as Momentum Fades

      July 30, 2025

      Binance Listing Could Be a ‘Kiss of Death’ for Pi Network and New Tokens

      July 30, 2025
    • Technology

      The AI Hype Index: The White House’s war on “woke AI”

      July 30, 2025

      An EPA rule change threatens to gut US climate regulations

      July 30, 2025

      Roundtables: Why It’s So Hard to Make Welfare AI Fair

      July 30, 2025

      The Download: a 30-year old baby, and OpenAI’s push into colleges

      July 30, 2025

      Exclusive: A record-breaking baby has been born from an embryo that’s over 30 years old

      July 30, 2025
    • Others
      • Gadgets
      • Gaming
      • Health
      • Software and Apps
    Check BMI
    Tech AI Verse
    You are at:Home»Technology»Preparing for AI: The CISO’s role in security, ethics and compliance
    Technology

    Preparing for AI: The CISO’s role in security, ethics and compliance

    TechAiVerseBy TechAiVerseJune 4, 2025No Comments5 Mins Read0 Views
    Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Reddit
    Preparing for AI: The CISO’s role in security, ethics and compliance
    Share
    Facebook Twitter LinkedIn Pinterest WhatsApp Email

    BMI Calculator – Check your Body Mass Index for free!

    Preparing for AI: The CISO’s role in security, ethics and compliance

    The Security Think Tank considers how CISOs can best plan to facilitate the secure running of AI and Gen AI-based initiatives and ensure employees do not inadvertently leak data or make bad decisions.

    By

    • Elliott Wilkes, ACDS

    Published: 03 Jun 2025

    As generative AI (GenAI) tools become embedded in the fabric of enterprise operations, they bring transformative promise, but also considerable risk.

    For CISOs, the challenge lies in facilitating innovation while securing data, maintaining compliance across borders, and preparing for the unpredictable nature of large language models and AI agents.

    The stakes are high; a compromised or poorly governed AI tool could expose sensitive data, violate global data laws, or make critical decisions based on false or manipulated inputs.

    To mitigate these risks, CISOs must rethink their cyber security strategies and policies across three core areas: data use, data sovereignty, and AI safety.

    Data use: Understanding the terms before sharing vital information

    The most pressing risk in AI adoption is not malicious actors but ignorance. Too many organisations integrate third-party AI tools without fully understanding how their data will be used, stored, or shared. Most AI platforms are trained on vast swathes of public data scraped from the internet, often with little regard for the source.

    While the larger players in the industry, like Microsoft and Google, have started embedding more ethical safeguards and transparency into their terms of service, much of the fine print remains opaque and subject to change.

    For CISOs, this means rewriting data-sharing policies and procurement checklists. AI tools should be treated as third-party vendors with high-risk access. Before deployment, security teams must audit AI platform terms of use, assess where and how enterprise data might be retained or reused, and ensure opt-outs are in place where possible.

    Investing in external consultants or AI governance specialists who understand these nuanced contracts can also protect organisations from inadvertently sharing proprietary information. In essence, data used with AI must be treated like a valuable export which is carefully considered, tracked, and regulated.

    Data sovereignty: Guardrails for a borderless technology

    One of the hidden dangers in AI integration is the blurring of geographical boundaries when it comes to data. What complies with data laws in one country may not in another.

    For multinationals, this creates a minefield of potential regulatory breaches, particularly under acts such as DORA and the forthcoming UK Cyber Security and Resilience Bill as well as frameworks like the EU’s GDPR or the UK Data Protection Act.

    CISOs must adapt their security strategies to ensure AI platforms align with regional data sovereignty requirements, which means reviewing where AI systems are hosted, how data flows between jurisdictions, and whether appropriate data transfer mechanisms such as standard contractual clauses or binding corporate rules are in place.

    Where AI tools do not offer adequate localisation or compliance capabilities, security teams must consider applying geofencing, data masking, or even local AI deployments.

    Policy updates should mandate that data localisation preferences be enforced for sensitive or regulated datasets, and AI procurement processes should include clear questions about cross-border data handling. Ultimately, ensuring data remains within the bounds of compliance is a legal issue as well as a security imperative.

    Safety: Designing resilience into AI deployments

    The final pillar of AI security lies in safeguarding systems from the growing threat of manipulation, be it through prompt injection attacks, model hallucinations, or insider misuse.

    While still an emerging threat category, prompt injection has become one of the most discussed vectors in GenAI security. By cleverly crafting input strings, attackers can override expected behaviours or extract confidential information from a model. In more extreme examples, AI models have even hallucinated bizarre or harmful outputs, with one system reportedly refusing to be shut down by developers.

    For CISOs, the response must be twofold. First, internal controls and red-teaming exercises, like traditional penetration testing, should be adapted to stress-test AI systems. Techniques like chaos engineering can help simulate edge cases and uncover flaws before they’re exploited.

    Second, there needs to be a cultural shift in how vendors are selected. Security policies should favour AI providers who demonstrate rigorous testing, robust safety mechanisms, and clear ethical frameworks. While such vendors may come at a premium, the potential cost of trusting an untested AI tool is far greater.

    To reinforce accountability, CISOs should also advocate for contracts that place responsibility on AI vendors for operational failures or unsafe outputs. A well-written agreement should address liability, incident response procedures, and escalation routes in the event of a malfunction or breach.

    From gatekeeper to enabler

    As AI becomes a core part of business infrastructure, CISOs must evolve from being gatekeepers of security to enablers of safe innovation. Updating policies around data use, strengthening controls over data sovereignty, and building a layered safety net for AI deployments will be essential to unlocking the full potential of GenAI without compromising trust, compliance, or integrity.

    The best defence to the rapid changes caused by AI is proactive, strategic adaptation rooted in knowledge, collaboration, and an unrelenting focus on responsibility.

    Elliott Wilkes is CTO at Advanced Cyber Defence Systems. A seasoned digital transformation leader and product manager, Wilkes has over a decade of experience working with both the American and British governments, most recently as a cyber security consultant to the Civil Service.

    Read more on Security policy and user awareness

    BMI Calculator – Check your Body Mass Index for free!

    Share. Facebook Twitter Pinterest LinkedIn Reddit WhatsApp Telegram Email
    Previous ArticleFintech growth three times that of finance sector as a whole
    Next Article Reddit will let you hide posts, comments and NSFW activity from your public profile
    TechAiVerse
    • Website

    Jonathan is a tech enthusiast and the mind behind Tech AI Verse. With a passion for artificial intelligence, consumer tech, and emerging innovations, he deliver clear, insightful content to keep readers informed. From cutting-edge gadgets to AI advancements and cryptocurrency trends, Jonathan breaks down complex topics to make technology accessible to all.

    Related Posts

    The AI Hype Index: The White House’s war on “woke AI”

    July 30, 2025

    An EPA rule change threatens to gut US climate regulations

    July 30, 2025

    Roundtables: Why It’s So Hard to Make Welfare AI Fair

    July 30, 2025
    Leave A Reply Cancel Reply

    Top Posts

    6.7 Cummins Lifter Failure: What Years Are Affected (And Possible Fixes)

    April 14, 202532 Views

    New Akira ransomware decryptor cracks encryptions keys using GPUs

    March 16, 202529 Views

    Ping, You’ve Got Whale: AI detection system alerts ships of whales in their path

    April 22, 202528 Views

    OpenAI details ChatGPT-o3, o4-mini, o4-mini-high usage limits

    April 19, 202522 Views
    Don't Miss
    Technology July 30, 2025

    The AI Hype Index: The White House’s war on “woke AI”

    The AI Hype Index: The White House’s war on “woke AI”Separating AI reality from hyped-up…

    An EPA rule change threatens to gut US climate regulations

    Roundtables: Why It’s So Hard to Make Welfare AI Fair

    The Download: a 30-year old baby, and OpenAI’s push into colleges

    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    About Us
    About Us

    Welcome to Tech AI Verse, your go-to destination for everything technology! We bring you the latest news, trends, and insights from the ever-evolving world of tech. Our coverage spans across global technology industry updates, artificial intelligence advancements, machine learning ethics, and automation innovations. Stay connected with us as we explore the limitless possibilities of technology!

    Facebook X (Twitter) Pinterest YouTube WhatsApp
    Our Picks

    The AI Hype Index: The White House’s war on “woke AI”

    July 30, 20250 Views

    An EPA rule change threatens to gut US climate regulations

    July 30, 20250 Views

    Roundtables: Why It’s So Hard to Make Welfare AI Fair

    July 30, 20250 Views
    Most Popular

    Xiaomi 15 Ultra Officially Launched in China, Malaysia launch to follow after global event

    March 12, 20250 Views

    Apple thinks people won’t use MagSafe on iPhone 16e

    March 12, 20250 Views

    French Apex Legends voice cast refuses contracts over “unacceptable” AI clause

    March 12, 20250 Views
    © 2025 TechAiVerse. Designed by Divya Tech.
    • Home
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms & Conditions

    Type above and press Enter to search. Press Esc to cancel.