    Investigatory powers: Guidelines for police and spies could also help businesses with AI

By TechAiVerse | June 5, 2025

    Police and intelligence agencies are turning to AI to sift through vast amounts of data to identify security threats, potential suspects and individuals who may pose a security risk.

    Agencies such as GCHQ and MI5 use AI techniques to gather data from multiple sources, find connections between them, and triage the most significant results for human analysts to review.

    Their use of automated systems to analyse huge volumes of data, which could include bulk datasets containing people’s financial records, medical information and intercepted communications, has raised new concerns over privacy and human rights.

When is the use of AI proportionate, and when does it go too far? That is a question the oversight body for the intelligence services, the Investigatory Powers Commissioner’s Office (IPCO), is grappling with.

    When is the use of AI proportionate?

Muffy Calder is the chair of IPCO’s Technical Advisory Panel, known as the TAP, a small group of experts with backgrounds in academia, the UK intelligence community and the defence industry.

Her job is to advise the investigatory powers commissioner, Brian Leveson, and IPCO’s judicial commissioners (serving or retired judges responsible for signing or rejecting applications for surveillance warrants) on often complex technical issues.

    Members of the panel also accompany IPCO inspectors on visits to police, intelligence agencies and other government agencies with surveillance powers, under the Investigatory Powers Act.

    In the first interview IPCO has given on the work of the TAP, Calder says one of the key functions of the group is to advise the investigatory powers commissioner on future technology trends.

    “It’s absolutely obvious that we were going to be doing something on AI,” she says.

    The TAP has produced a framework – the AI Proportionality Assessment Aid – to assist police, intelligence services and over 600 other government agencies regulated by IPCO in thinking about whether the use of AI is proportionate and minimises invasion of privacy. It has also made its guidance available to businesses and other organisations.

    How AI might be used in surveillance

    Calder says she is not able to say anything about the difference AI is making to the police, intelligence agencies and other government bodies that IPCO oversees. That is a question for the bodies that are using it, she says.

    However, a publicly available research report from the Royal United Services Institute (RUSI), commissioned by GCHQ, suggests ways it might be used. They include identifying individuals from the sound of their voice, their writing style, or the way they type on a computer keyboard.

    “People are very rightly raising issues of fairness, transparency and bias, but they are not always unpicking them and asking what this means in a technical setting”

    Muffy Calder, University of Glasgow

    The most compelling use case, however, is to triage the vast amount of data collected by intelligence agencies and find relevant links between data from multiple sources that have intelligence value. Augmented intelligence systems can present analysts with the most relevant information from a sea of data for them to assess and make a final judgement. 
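
A triage step of this kind can be sketched in a few lines of code. This is only an illustration of the pattern described above, not any agency system; the record fields and scoring function are hypothetical.

```python
def triage(records, score_fn, top_k=10):
    """Rank records by an analyst-relevance score and surface only the
    top_k for human review, leaving the rest unexamined by a person."""
    ranked = sorted(records, key=score_fn, reverse=True)
    return ranked[:top_k]

# Hypothetical example: score each record by its number of links to
# other items already judged to have intelligence value.
records = [{"id": 1, "links": 0}, {"id": 2, "links": 5}, {"id": 3, "links": 2}]
for_review = triage(records, score_fn=lambda r: r["links"], top_k=2)
```

The human analyst still makes the final judgement; the machine only decides ordering, which is exactly where the proportionality questions about false positives and negatives arise.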

The computer scientists and mathematicians who make up the TAP have been working with and studying AI for many years, says Calder, and they recognise that using AI to analyse personal data raises ethical questions.

    “People are very rightly raising issues of fairness, transparency and bias, but they are not always unpicking them and asking what this means in a technical setting,” she says.

    The balance between privacy and intrusion

    The framework aims to give organisations tools to assess how much AI intrudes into privacy and how to minimise intrusion. Rather than provide answers, it offers a set of questions that can help organisations think about the risks of AI.

    “I think everyone’s goal within investigations is to minimise privacy intrusion. So, we must always have a balance between the purpose of an investigation and the intrusion on people, and, for example, collateral intrusion [of people who are not under suspicion],” she says.

    The TAP’s AI Proportionality Assessment Aid is meant for people who design, develop, test and commission AI models and people involved in ensuring their organisations comply with legal and regulatory requirements. It provides a series of questions to consider for each stage in an AI model, from concept, to development, through to exploitation of results.
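
The shape of such an aid, questions attached to each lifecycle stage, can be sketched as a simple checklist structure. The stage names follow the article; the questions are paraphrased from points raised in this piece and are illustrative, not quoted from IPCO's document.

```python
# Illustrative stage-by-stage proportionality checklist.
# Questions are examples inspired by the article, not IPCO's actual text.
CHECKLIST = {
    "concept": [
        "Is AI an appropriate tool for the circumstances?",
        "Could an analytical (non-AI) solution do the job?",
    ],
    "development": [
        "Is the training data aligned with the intended use?",
        "How often will the model be retrained?",
    ],
    "exploitation": [
        "Is the false-positive/false-negative balance right for this use?",
        "Is there a process for recognising and recording mistakes?",
    ],
}

def unanswered(answers):
    """Return every (stage, question) pair not yet answered."""
    return [(stage, q) for stage, qs in CHECKLIST.items()
            for q in qs if (stage, q) not in answers]
```

The point of such a structure is auditability: an organisation can show which questions were considered at which stage, rather than claim compliance after the fact.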

    “It is a framework in which we can start to ask, are we doing the right things? Is AI an appropriate tool for the circumstances? It’s not about can I do it, it’s more about should I,” she says.

    Is AI the right tool?

The first question is whether AI is the right tool for the job. In some cases, such as facial recognition, AI may be the only practical solution because the problem is too difficult to solve mathematically, so training a system by showing it examples makes sense.

    In other cases, where people understand what Calder refers to as the “physics” of a problem, such as calculating tax, a mathematical algorithm is more appropriate.

    “AI is very good when an analytical solution is either too difficult or we don’t know what the analytical solution is. So right from the beginning, it’s a matter of asking, do I actually need AI here?” she says.
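
Calder's tax example is a case where the "physics" is fully known, so a deterministic algorithm is the right tool and no model needs training. A minimal sketch, with illustrative band thresholds rather than any real tax schedule:

```python
def tax_due(income, bands=((12_570, 0.0), (50_270, 0.20), (float("inf"), 0.40))):
    """Deterministic banded tax calculation. The rules are known exactly,
    so there is nothing for an AI model to learn: an analytical solution
    is cheaper, exact and fully explainable. Band figures are illustrative."""
    tax, lower = 0.0, 0
    for upper, rate in bands:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax
```

An AI model trained to approximate this function could only ever be a worse version of the rule itself, which is the force of the "do I actually need AI here?" question.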

    Another issue to consider is how often to retrain AI models to ensure they are making decisions on the best, most accurate data, and data that is most appropriate for the applications the model is being used for.

    One common mistake is to train an AI model on data that is not aligned with its intended use. “That is probably a classic one. You have trained it on images of cars, and you are going to use it to try to recognise tanks,” she says.
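
One way organisations catch this kind of mismatch in practice is to compare the data a model sees in production against the data it was trained on. The sketch below is a deliberately crude drift check, not any particular agency's method; the tolerance value is an assumption.

```python
import statistics

def mean_shift(train, live, tolerance=0.25):
    """Crude drift check on one numeric feature: flag for retraining when
    the live mean drifts more than `tolerance` training standard deviations
    from the training mean. A real system would use proper statistical
    tests across many features; this only illustrates the idea."""
    mu, sigma = statistics.mean(train), statistics.stdev(train)
    return abs(statistics.mean(live) - mu) > tolerance * sigma
```

A check like this turns "how often should we retrain?" from a calendar question into a question the data itself can answer.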

    Critical questions might include whether the AI model has the right balance between false positives and false negatives in a particular application.

    For example, if AI is used to identify individuals through police facial recognition technology, too many false positives lead to innocent people being wrongly stopped and questioned by police. Too many false negatives would lead to suspects not being recognised.
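
In practice this trade-off is often set by a single decision threshold: lowering it catches more suspects but stops more innocent people, raising it does the reverse. A small sketch with made-up scores and labels:

```python
def confusion(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold.
    scores: illustrative model confidence that a face matches a watchlist
    entry; labels: ground truth (True = genuine match)."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    return fp, fn

scores = [0.95, 0.80, 0.60, 0.40, 0.30]
labels = [True, False, True, False, True]
low = confusion(scores, labels, threshold=0.35)   # permissive: more false positives
high = confusion(scores, labels, threshold=0.90)  # strict: more false negatives
```

The proportionality question is which side of the trade-off the harm falls on: wrongly stopping innocent people, or failing to spot genuine suspects.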

    When AI makes mistakes

    What would happen, then, if someone were wrongly placed under electronic surveillance as a result of an automated decision? Calder agrees it is a crucial question.

    The framework helps by asking organisations to think about how they respond when AI makes mistakes or hallucinates.

    “The response might be that we need to retrain the model on more accurate or more up-to-date data. There could be lots of answers, and the key point is do you even recognise there is an issue, and do you have a process for dealing with it and some way of capturing your decisions?”

    Was the error systemic? Was it user input? Was it due to the way a human operator produced and handled the result?

    “You also might want to question if this was the result of how the tool was optimised. For example, was it optimised to minimise false negatives, not false positives, and what you did was something that gave you a false positive?” she adds.
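
Taken together, these questions amount to classifying each mistake and recording the decision taken. A record type like the following could capture that trail; the error causes mirror the possibilities Calder lists, but the field layout is entirely illustrative.

```python
from dataclasses import dataclass, field
from datetime import date

# Causes of error suggested by the questions above; names are illustrative.
CAUSES = {"systemic", "user_input", "operator_handling", "optimisation_tradeoff"}

@dataclass
class ErrorRecord:
    """Capture an AI mistake, its suspected cause and the decision taken,
    so that recognising and handling errors leaves an auditable trail."""
    description: str
    cause: str
    decision: str
    logged: date = field(default_factory=date.today)

    def __post_init__(self):
        if self.cause not in CAUSES:
            raise ValueError(f"unknown cause: {self.cause}")
```

The value is not the data structure itself but the discipline: "do you even recognise there is an issue" becomes a concrete, inspectable log.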

    Intrusion during training

Sometimes it can be justifiable to accept a higher level of privacy intrusion during the training stage if that means a lower level of intrusion when the AI is deployed. For example, training a model on the personal data of a large number of people can make the model more targeted, and therefore less likely to lead to “collateral” intrusion when it is used.

    “The end result is a tool which you can use in a much more targeted way in pursuit of, for example, criminal activity. So, you get a more targeted tool, and when you use the tool, you only affect a few people’s privacy,” she says.

    Having a human in the loop in an AI system can mitigate the potential for errors, but it also brings with it other dangers.

    The human in the loop

    Computer systems introduced in hospitals, for example, make it possible for clinicians to dispense drugs more efficiently by allowing them to select from a list of relevant drugs and quantities, rather than having to write out prescriptions by hand.

    The downside is that it is easier for clinicians to “desensitise” and make a mistake by selecting the wrong drug or the wrong dose, or to fail to consider a more appropriate drug that may not be included in the pre-selected list.

    AI tools can lead to similar desensitisation, where people can disengage if they are required to continually check a large number of outputs from an AI system. The task can become a checklist exercise, and it is easy for a tired or distracted human reviewer to tick the wrong box.

    “I think there are a lot of parallels with the use of AI and medicine because both are dealing with sensitive data and both have direct impacts on people’s lives,” says Calder.

The TAP’s AI Proportionality Assessment Aid is likely to be essential reading for chief information officers and chief digital officers thinking about deploying AI in their organisations.

    “I think the vast majority of these questions are applicable outside of an investigatory context,” says Calder.

    “Almost any organisation using technology has to think about their reputation and their efficacy. I don’t think organisations set out to make mistakes or to do something badly, so the aim is to help people [use AI] in an appropriate way,” she says.
