    How police live facial recognition subtly reconfigures suspicion

    By TechAiVerse | December 9, 2025

    Police use of live facial recognition (LFR) technology reconfigures suspicion in subtle yet important ways, undermining so-called human-in-the-loop safeguards.

    Despite the long-standing controversies surrounding police use of LFR, the technology is now used in the UK to scan millions of people’s faces every year. While initial deployments were sparse, happening only every few months, they are now run-of-the-mill, with facial recognition-linked cameras regularly deployed to events and busy areas in places like London and Cardiff.

    Given the potential for erroneous alerts, police forces deploying the technology claim that a human will always make the final decision over whether to engage someone flagged by an LFR system. This measure is intended to ensure accuracy and reduce the potential for unnecessary police interactions.

    However, a growing body of research highlighting the socio-technical nature of LFR systems suggests the technology is undermining these human-in-the-loop safeguards, by essentially reshaping (and reinforcing) police perceptions of who is deemed suspicious and how police interact with them on the street as a result.
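
    In mechanical terms, an LFR deployment reduces to scoring every passing face against a watchlist and raising an alert whenever a similarity score clears a threshold, with an officer then adjudicating the alert. The sketch below is a minimal illustration of that loop, not any vendor’s system: the embedding function, threshold value and helper names are all invented for the purpose.

```python
# Minimal sketch of an LFR alert loop with a human-in-the-loop step.
# embed_face, THRESHOLD and all data shapes are invented for illustration;
# they do not describe any vendor's actual system or API.
import numpy as np

THRESHOLD = 0.64  # invented cut-off; real systems tune this per deployment


def embed_face(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a face-embedding model: flatten and L2-normalise."""
    v = np.asarray(image, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)


def scan(passersby: list, watchlist: dict) -> list:
    """Score every passer-by against every watchlist entry.

    Note the inversion the researchers describe: everyone in frame is
    compared against the watchlist before any human judgement occurs.
    Watchlist entries are assumed to be unit vectors from embed_face."""
    alerts = []
    for image in passersby:
        emb = embed_face(image)
        for wid, ref in watchlist.items():
            score = float(emb @ ref)  # cosine similarity of unit vectors
            if score >= THRESHOLD:
                alerts.append((wid, score))  # queued for officer adjudication
    return alerts


def officer_decides(alert) -> bool:
    """The 'final decision' safeguard: an officer confirms or rejects the
    match in person. The research above argues that this judgement is
    already framed and primed by the fact an alert was raised at all."""
    raise NotImplementedError("human judgement, not code")
```

    The structural point is visible in the control flow: by the time officer_decides is reached, the system has already singled someone out, so the human check operates on a pre-framed candidate rather than on independently formed suspicion.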

    A growing body of research

    According to one paper from March 2021 – written by sociologists Pete Fussey, Bethan Davies and Martin Innes – the use of LFR “constitutes a socio-technical assemblage that both shapes police practices yet is also profoundly shaped by forms of police suspicion and discretion”.

    The authors argue that while, under current police powers, an officer recognising someone may constitute grounds for a stop and search, this changes when LFR is inserted into the process, because the “initial recognition” does not result from an officer exercising discretion.

    “Instead, officers act more akin to intermediaries, interpreting and then acting upon a (computer-instigated) suggestion originating outside of, and prior to, their own intuition,” the sociologists wrote. “The technology thus performs a framing and priming role in how suspicion is generated.”

    More recently, academics Karen Yeung and Wenlong Li argued in a September 2025 research paper that, given the potential for erroneous matches, the mere generation of an LFR match alert is not in itself enough to constitute “reasonable suspicion”, which UK police are required to demonstrate to legally stop and detain people.

    “Although police officers in England and Wales are entitled to stop individuals and ask them questions about who they are and what they are doing, individuals are not obliged to answer these questions in the absence of reasonable suspicion that they have been involved in the commission of a crime,” they wrote.

    “Accordingly, any initial attempt by police officers to stop and question an individual whose face is matched to the watchlist must be undertaken on the basis that the individual is not legally obliged to cooperate for that reason alone.”

    Despite officers being legally required to have reasonable suspicion, a July 2019 paper from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre, observed a discernible “presumption to intervene” among police officers using the technology. The paper marked the first independent review of the Metropolitan Police’s trials of LFR.

    According to authors Fussey and Daragh Murray, a reader in international law and human rights at Queen Mary University of London’s School of Law, this means the officers involved tended to act on the system’s outputs and engage individuals it said matched the watchlist in use, even when the match was wrong.

    As a form of automation bias, the “presumption to intervene” is important in a socio-technical sense, because in practice it risks opening up random members of the public to unwarranted or unnecessary police interactions.

    Priming suspicion

    Although Yeung and Li noted that individuals are not legally obliged to cooperate with police in the absence of reasonable suspicion, there have been instances where failing to comply with officers after an LFR alert has affected people negatively.

    In February 2025, for example, anti-knife crime campaigner Shaun Thompson, who was returning home from a volunteer shift in Croydon with the Street Fathers youth outreach group, was stopped by officers after being wrongly identified as a suspect by the Met’s LFR system.

    Thompson was then held for almost 30 minutes by officers, who repeatedly demanded scans of his fingerprints and threatened him with arrest, even though he provided multiple identity documents showing he was not the individual on the database.

    Thompson has publicly described the system as “stop and search on steroids” and said it felt like he was being treated as “guilty until proven innocent”. Following the incident, Thompson brought a judicial review challenge against the Met’s use of LFR, due to be heard in January 2026, to stop others ending up in similar situations.

    Even when no alert has been generated, there are instances where the use of LFR has prompted negative interactions between citizens and the police.

    During the Met’s February 2019 deployment in Romford, for example, Computer Weekly was present when two members of the public were stopped for covering their faces near the LFR van because they did not want their biometric information to be processed.

    Writing to the Lords Justice and Home Affairs Committee (JHAC) in September 2021 as part of its investigation into policing algorithms, Fussey, Murray and criminologist Amy Stevens noted that while most surveillance in the UK is designed to target individuals once a certain threshold of suspicion has been reached, LFR inverts this by treating everyone who passes through the camera’s gaze as suspicious in the first instance.

    This means that, although people can subsequently be eliminated from police inquiries, the technology itself affects how officers see suspicion, by essentially “priming” them to engage with people flagged by the system.

    “Any potential tendency to defer or over-rely on automated outputs over other available information has the ability to transform what is still considered to be a human-led decision to de facto an automated one,” they wrote.

    “Robust monitoring should therefore be in place to provide an understanding of the level of deference to tools intended as advisory, and how often and in which circumstances human users make an alternative decision to the one advised by the tool.”

    Watchlist creation and bureaucratic suspicion

    A key aspect mediating the relationship between LFR and the concept of “reasonable suspicion” is the creation of watchlists.

    Socio-technically, researchers investigating LFR use by police have expressed a number of concerns around watchlist creation, including how it “structures the police gaze” to focus on particular people and social groups.

    In their 2021 paper, for example, Fussey, Davies and Innes noted that creating watchlists from police-held custody images naturally means police attention will be targeted toward “the usual suspects”, inducing “a technologically framed bureaucratic suspicion in digital policing”.

    This means that, rather than linking specific evidence from a crime to a particular individual (known as ‘incidental suspicion’), LFR instead relies on the use of general, standardised criteria (such as a person’s prior police record or location) to identify potential suspects, which is known in sociology as “bureaucratic suspicion”.
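
    Put in data terms, the contrast is between selecting records evidentially tied to a specific investigation and selecting them by standing attributes of the person. A schematic sketch follows, with record fields and categories invented for illustration rather than drawn from any force’s actual criteria.

```python
# Schematic contrast between 'incidental' and 'bureaucratic' suspicion in
# watchlist creation. Record fields and categories are invented; they do
# not describe any police force's actual selection criteria.

custody_records = [
    {"id": "A", "offence_category": "drugs",       "linked_to_case": None},
    {"id": "B", "offence_category": "violence",    "linked_to_case": "CASE-17"},
    {"id": "C", "offence_category": "shoplifting", "linked_to_case": None},
]


def incidental_watchlist(records, case_id):
    """Incidental suspicion: include only people tied to a specific crime."""
    return [r["id"] for r in records if r["linked_to_case"] == case_id]


def bureaucratic_watchlist(records, categories):
    """Bureaucratic suspicion: include anyone whose record carries one of a
    set of generic offence categories, regardless of any live case."""
    return [r["id"] for r in records if r["offence_category"] in categories]


print(incidental_watchlist(custody_records, "CASE-17"))                   # ['B']
print(bureaucratic_watchlist(custody_records, {"drugs", "shoplifting"}))  # ['A', 'C']
```

    The second selector never consults case-specific evidence at all, which is the researchers’ point: the criterion is a property of the record, not of any investigation.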

    “Individuals listed on watchlists and databases are cast as warranting suspicion, and the AFR [automated facial recognition] surveillant gaze is specifically oriented towards them,” they wrote.

    “But, in so doing, the social biases of police activity that disproportionately focuses on young people and members of African Caribbean and other minority ethnic groups (inter alia The Lammy Review 2017) are further inflected by alleged technological biases deriving from how technical accuracy recedes for subjects who are older, female and for some people of colour.”

    Others have also raised separate concerns about the vague criteria around watchlist creation and the importance of needing “quality” data to feed into the system.

    Yeung and Li, for example, have highlighted “unresolved questions” about the legality of watchlist composition, including the “significance and seriousness” of the underlying offence used to justify a person’s inclusion, and the “legitimacy of the reason why that person is ‘wanted’ by the police” in the first place.

    As an example, while police repeatedly claim that LFR is used solely against the most serious or violent offenders, watchlists regularly contain images of people wanted for drug, shoplifting or traffic offences, which legally do not meet this definition.

    Writing in their September 2025 paper, Yeung and Li also noted that while the Met’s watchlists were populated by individuals wanted on outstanding arrest warrants, they also included “images of a much broader, amorphous category of persons” who did not meet the definition of serious offenders.

    This included “individuals not allowed to attend the Notting Hill Carnival”, “individuals whose attendance would pose a risk to the security and safety of the event”, “wanted missing” individuals and children, and even individuals who “present a risk of harm to themselves and to others” and those who “may be at risk or vulnerable”.

    In December 2023, senior officers from the Met and South Wales Police confirmed that LFR operates on a “bureaucratic suspicion” model, telling a Lords committee that facial recognition watchlist image selection is based on generic crime categories attached to people’s photos, rather than a context-specific assessment of the threat presented by a given individual.

    The Met Police’s then-director of intelligence, Lindsey Chiswick, further told Lords that whether or not something is “serious” depends on the context, and that, for example, retailers suffering from prolific shoplifting would be “serious for them”.

    While the vague and amorphous nature of police LFR watchlist creation has been highlighted by other academics – including Fussey et al, who argued that “broad categories offer significant latitude for interpretation, creating a space for officer discretion with regards to who was enrolled and excluded from such databases” – the issue has also been highlighted by the courts.

    In August 2020, for example, the Court of Appeal ruled that the use of LFR by South Wales Police was unlawful, in part because the vagueness of the watchlist criteria – which used “other persons where intelligence is required” as an inclusion category – left excessive discretion in the hands of the police.

    “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR can be deployed,” said the judgment, adding that, “in effect, it could cover anyone who is of interest to the police.”

    During the December 2023 Lords session, watchlist size was also highlighted by Yeung – who was called to give evidence given her expertise – as an important socio-technical factor.

    “There is a divergence between the claims that they only put pictures of those wanted for serious crimes on the watchlist, and the fact that in the Oxford Circus deployment alone, there were over 9,700 images,” she said.

    Unlawful custody image retention

    Further underpinning concerns about the socio-technical impacts of watchlist creation are ongoing issues with the unlawful retention of custody images in the Police National Database (PND), the primary source of images used to populate police watchlists.

    In 2012, a High Court ruling found the retention of custody images in the PND to be unlawful on the basis that information about unconvicted people was being treated in the same way as information about people who were ultimately convicted, and that the six-year retention period was disproportionate.
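
    As a toy illustration only, the ruling turned on two factors that any compliant retention rule would have to separate: whether the person was convicted, and how long the image has been held. The rule and limit below are invented; the court prescribed no formula, and found the then-standard six-year period disproportionate.

```python
# Invented sketch of the two factors the 2012 ruling turned on. The court
# prescribed no rule; the limit below is a placeholder, chosen only to be
# shorter than the six-year period the court found disproportionate.
from datetime import date, timedelta

ILLUSTRATIVE_LIMIT = timedelta(days=2 * 365)  # invented placeholder


def may_retain(image_taken: date, convicted: bool, today: date) -> bool:
    if not convicted:
        # Unconvicted people's images cannot be kept on the same terms as
        # convicted people's; stand-in for a separate, stricter regime.
        return False
    return today - image_taken <= ILLUSTRATIVE_LIMIT


print(may_retain(date(2025, 5, 1), convicted=True, today=date(2026, 2, 14)))  # True
print(may_retain(date(2019, 5, 1), convicted=True, today=date(2026, 2, 14)))  # False
```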

    Despite the 2012 ruling, millions of custody images are still being unlawfully retained.

    Writing to other chief constables to outline some of the issues around custody image retention in February 2022, the National Police Chiefs Council (NPCC) lead for records management, Lee Freeman, said the potentially unlawful retention of an estimated 19 million images “poses a significant risk in terms of potential litigation, police legitimacy, and wider support and challenge in our use of these images for technologies such as facial recognition”.

    In November 2023, the NPCC confirmed to Computer Weekly that it had launched a programme that would seek to establish a management regime for custody images, alongside a review of all currently held data by police forces in the UK.

    The issue was again flagged by the biometric commissioner of England and Wales, Tony Eastaugh, in December 2024, when he noted in his annual report that “forces continue to retain and use images of people who, while having been arrested, have never subsequently been charged or summonsed”.

    Eastaugh added that while work was already “underway” to ensure the retention of images is proportionate and lawful, this use of custody images of unconvicted individuals “may include for facial recognition purposes”.
