
    What does it mean for an algorithm to be “fair”?

    By TechAiVerse | June 18, 2025

    Back in February, I flew to Amsterdam to report on a high-stakes experiment the city had recently conducted: a pilot program for what it called Smart Check, its attempt to create an effective, fair, and unbiased predictive algorithm to detect welfare fraud. But the city fell short of its lofty goals—and, with our partners at Lighthouse Reports and the Dutch newspaper Trouw, we tried to get to the bottom of why. You can read about it in our deep dive published last week.

    For an American reporter, it’s been an interesting time to write a story on “responsible AI” in a progressive European city—just as ethical considerations in AI deployments appear to be disappearing in the United States, at least at the national level. 

    For example, a few weeks before my trip, the Trump administration rescinded Biden’s executive order on AI safety and DOGE began turning to AI to decide which federal programs to cut. Then, more recently, House Republicans passed a 10-year moratorium on US states’ ability to regulate AI (though it has yet to clear the Senate).

    What all this points to is a new reality in the United States where responsible AI is no longer a priority (if it ever genuinely was). 

    But this has also made me think more deeply about the stakes of deploying AI in situations that directly affect human lives, and about what success would even look like. 

    When Amsterdam’s welfare department began developing the algorithm that became Smart Check, the municipality followed virtually every recommendation in the responsible-AI playbook: consulting external experts, running bias tests, implementing technical safeguards, and seeking stakeholder feedback. City officials hoped the resulting algorithm could avoid causing the worst types of harm inflicted by discriminatory AI over nearly a decade. 

    After talking to a large number of people involved in the project, others who would potentially be affected by it, and some outside experts who did not work on it, I find it hard not to wonder whether the city could ever have succeeded in its goals when neither “fairness” nor even “bias” has a universally agreed-upon definition. The city was treating these issues as technical ones that could be answered by reweighting numbers and figures, rather than as political and philosophical questions that society as a whole has to grapple with.
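
    To see why definitions matter, here is a minimal sketch, with invented numbers, of how two widely used fairness criteria can disagree about the very same predictions. It illustrates the definitional problem in general; it is not a description of Smart Check’s actual design or data.

    ```python
    # A toy comparison of two fairness criteria on the same hypothetical model
    # output. All numbers are invented for illustration; nothing here reflects
    # Smart Check's real inputs or results.

    def flag_rate(flags):
        """Share of applicants the model flags for investigation."""
        return sum(flags) / len(flags)

    def false_positive_rate(flags, fraud):
        """Share of non-fraudulent applicants who are flagged anyway."""
        innocent_flags = [f for f, y in zip(flags, fraud) if y == 0]
        return sum(innocent_flags) / len(innocent_flags)

    # Two hypothetical groups of ten applicants each:
    # flags = 1 means the model flags the application; fraud = 1 means fraud was found.
    group_a_flags = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
    group_a_fraud = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    group_b_flags = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
    group_b_fraud = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

    # By one definition (equal flag rates, "demographic parity") the model is fair:
    print(flag_rate(group_a_flags), flag_rate(group_b_flags))        # 0.2 and 0.2

    # By another (equal false-positive rates) it is not: innocent applicants in
    # group A are flagged more often than innocent applicants in group B.
    print(false_positive_rate(group_a_flags, group_a_fraud),         # ~0.11
          false_positive_rate(group_b_flags, group_b_fraud))         # 0.0
    ```

    Reweighting a model can satisfy one of these criteria, but in general, when base rates differ between groups, it cannot satisfy both at once, which is part of why the choice between them is political rather than purely technical.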

    On the afternoon that I arrived in Amsterdam, I sat down with Anke van der Vliet, a longtime advocate for welfare beneficiaries who served on what’s called the Participation Council, a 15-member citizen body that represents benefits recipients and their advocates.

    The city had consulted the council during Smart Check’s development, but van der Vliet was blunt in sharing the committee’s criticisms of the plans. Its members simply didn’t want the program. They had well-placed fears of discrimination and disproportionate impact, given that fraud is found in only 3% of applications.
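
    That 3% figure is worth dwelling on. As a rough back-of-the-envelope sketch, with assumed accuracy numbers that are not Smart Check’s, a low base rate means that even a reasonably accurate screening model would mostly flag people who did nothing wrong:

    ```python
    # Hypothetical screening model applied to welfare applications, where fraud
    # is found in only 3% of cases. Sensitivity and specificity below are assumed
    # for illustration; they are not Smart Check's measured performance.
    base_rate   = 0.03  # share of applications where fraud is actually found
    sensitivity = 0.80  # assumed: share of fraudulent applications that get flagged
    specificity = 0.90  # assumed: share of legitimate applications left alone

    true_positives  = base_rate * sensitivity              # 0.024 of all applicants
    false_positives = (1 - base_rate) * (1 - specificity)  # 0.097 of all applicants

    precision = true_positives / (true_positives + false_positives)
    print(f"Flagged applicants who actually committed fraud: {precision:.0%}")
    # -> about 20%: roughly four out of five people singled out for investigation
    #    would be legitimate claimants bearing the burden of extra scrutiny.
    ```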

    To the city’s credit, it did respond to some of their concerns and made changes to the algorithm’s design, like removing from consideration factors, such as age, whose inclusion could have had a discriminatory impact. But the city ignored the Participation Council’s main feedback: its recommendation to stop development altogether.

    Van der Vliet and other welfare advocates I met on my trip, like representatives from the Amsterdam Welfare Union, described what they see as a number of challenges faced by the city’s some 35,000 benefits recipients: the indignities of having to constantly re-prove the need for benefits, the increases in cost of living that benefits payments do not reflect, and the general feeling of distrust between recipients and the government. 

    City welfare officials themselves recognize the flaws of the system, which “is held together by rubber bands and staples,” as Harry Bodaar, a senior policy advisor to the city who focuses on welfare fraud enforcement, told us. “And if you’re at the bottom of that system, you’re the first to fall through the cracks.”

    So the Participation Council didn’t want Smart Check at all, even as Bodaar and others working in the department hoped that it could fix the system. It’s a classic example of a “wicked problem,” a social or cultural issue with no one clear answer and many potential consequences. 

    After the story was published, I heard from Suresh Venkatasubramanian, a former tech advisor to the White House Office of Science and Technology Policy who co-wrote Biden’s AI Bill of Rights (now rescinded by Trump). “We need participation early on from communities,” he said, but he added that it also matters what officials do with the feedback—and whether there is “a willingness to reframe the intervention based on what people actually want.” 

    Had the city started with a different question—what people actually want—it might have developed a different algorithm entirely. As the Dutch digital rights advocate Hans De Zwart put it to us, “We are being seduced by technological solutions for the wrong problems … why doesn’t the municipality build an algorithm that searches for people who do not apply for social assistance but are entitled to it?”

    These are the kinds of fundamental questions AI developers will need to consider, or they run the risk of repeating (or ignoring) the same mistakes over and over again.

    Venkatasubramanian told me he found the story to be “affirming” in highlighting the need for “those in charge of governing these systems” to “ask hard questions … starting with whether they should be used at all.”

    But he also called the story “humbling”: “Even with good intentions, and a desire to benefit from all the research on responsible AI, it’s still possible to build systems that are fundamentally flawed, for reasons that go well beyond the details of the system constructions.” 

    To better understand this debate, read our full story here. And if you want more detail on how we ran our own bias tests after the city gave us unprecedented access to the Smart Check algorithm, check out the methodology over at Lighthouse. (For any Dutch speakers out there, here’s the companion story in Trouw.) Thanks to the Pulitzer Center for supporting our reporting. 

    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
