    Digital Ethics Summit 2025: Open sourcing and assuring AI

By TechAiVerse | December 12, 2025

    Open sourcing artificial intelligence (AI) can help combat concentrations of capital and power that currently define its development, while nascent assurance practices need regulation to define what “good” looks like.

    Speaking at trade association TechUK’s ninth annual Digital Ethics Summit, panellists discussed various dynamics at play in the development of AI technologies, including the under-utilisation of open source approaches, the need for AI assurance to be continuous and iterative, and the extent to which regulation is needed to inform current assurance practices. 

    During the previous two summit events – held in December 2023 and 2024 – delegates stressed the need for well-intentioned ethical AI principles to be translated into concrete practical measures, and highlighted the need for any regulation to recognise the socio-technical nature of AI that has the potential to produce greater inequality and concentrations of power.

    A major theme of these previous discussions was who dictates and controls how technologies are developed and deployed, and who gets to lead discussions around what is considered “ethical”.

    While discussions at the 2025 summit touched on many of the same points, conversations this year focused on the UK’s developing AI assurance ecosystem, and the degree to which AI’s further development could be democratised by more open approaches.

    Open sourcing models and ecosystems

In a conversation about the benefits and disadvantages of open versus closed source AI models, speakers noted that most models do not fall neatly on either side of that binary, and instead exist on a spectrum, where aspects of any given model are either open or closed.

    However, they were also clear that there are exceedingly few genuinely open source models and approaches being developed.

    Matthew Squire, chief technology officer and founder of Fuzzy Labs, for example, noted that “a lot of these ostensibly open source models, what they’re really offering as open is the model weights,” which are essentially the parameters a model uses to transform input data into an output.

    Noting that the vast majority of model developers do not currently open up other key aspects of a model, including the underlying data, training parameters or code, he concluded that most models fall decidedly on the closed end of the spectrum. “[Model weights represent] the final product of having trained that model, but a lot more goes into it,” said Squire.
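As a loose illustration of the distinction Squire draws (a toy example, not any real model's release format), publishing "open weights" means publishing the learned numbers a model applies to its input, while the data, code and training process that produced those numbers can stay closed:

```python
# Toy one-layer "model": the weights are just learned numbers that
# transform an input into an output.

def forward(weights, bias, inputs):
    """Linear model: output = sum(w_i * x_i) + bias."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# An "open weights" release publishes values like these...
weights = [2, -1, 3]
bias = 1

# ...but not the training data, code or hyperparameters that produced them.
print(forward(weights, bias, [1, 2, 3]))  # 2 - 2 + 9 + 1 = 10
```

In Squire's terms, the values above are "the final product of having trained that model"; everything that went into choosing them remains hidden.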

    For Linda Griffin, vice-president of global policy at Mozilla, while AI models do not exist in a binary of open vs closed, the ecosystems they are developed in do.

    Highlighting how the internet was built on open source software before large corporations like Microsoft enclosed it in their own infrastructure, she said a similar dynamic is at play today with AI, where a handful of companies – essentially those that control web access via ownership of browsers, and which therefore have access to mountains of customer data – have enclosed the AI stack.

    “What the UK government really needs to be thinking about right now is what is our long-term strategy for procuring, funding, supporting, incentivising more open access, so that UK companies, startups and citizens can build and choose what to do,” said Griffin. “Do you want UK businesses to be building AI or renting it? Right now, they’re renting it, and that is a long-term problem.”

    ‘Under-appreciated opportunity’

    Jakob Mokander, director of science and technology policy at the Tony Blair Institute, added that open source is an “under-appreciated opportunity” that can help governments and organisations capture real value from the technology.

    Noting that openness and open source ecosystems have a lot of advantages compared with closed systems for spurring growth and innovation, he highlighted how the current absence of open approaches also carries with it significant risks.

    “The absence of open source is maybe an even greater risk, because then you have a high-power concentration, either in the hands of government actors or in terms of one or two big tech companies,” said Mokander. “Whether you look at this from a primarily growth-driven or information-driven lens, or from a risk-driven lens, you would want to see a strong open ecosystem.”

    When it comes to the relationship between open source and AI assurance, Rowley Adams, the lead engineer at EthicAI, said it allows for greater scrutiny of developer claims when compared with closed approaches. “From an assurance perspective, verifiability is obviously crucial, which is impossible with closed models, taking [developers at their] word at every single point, almost in a faith-based way,” he said. “With open source models, the advantage is that you can actually go and probe, experiment and evaluate in a methodical and thorough way.”

    Asked by Computer Weekly whether governments need to consider new antitrust legislation to break up the AI stack – given the massive concentrations of power and capital that stem from a few companies controlling access to the underlying infrastructure – speakers said there is a pressing need to understand how markets are structured in this space.

    Griffin, for example, said there needs to be “long-term scenario planning from government” that takes into account the potential for market interventions if necessary.

Mokander added that the increasing capabilities of AI need to go "hand-in-hand with new thinking on anti-trust and market diversification", and that it's key "to not have reliance [on companies] that can be used as leverage against government and the democratic interest". "That doesn't necessarily mean they have to prevent private ownership, but it's the conditions under which you operate those infrastructures," he said.

    Continuous assurance needed

    Speaking on a separate panel about the state of AI assurance in the UK, Michaela Coetsee, the AI ethics and assurance lead at Advai, pointed out that, due to the dynamic nature of AI systems, assurance is not a one-and-done process, and instead requires continuous monitoring and evaluation.
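The continuous, iterative monitoring Coetsee describes could be sketched like this (the scores, baseline and threshold are invented for illustration): re-run the same probe set on a deployed system over time and flag any run that drifts below an agreed baseline:

```python
# Hypothetical sketch of "continuous assurance": re-evaluate a deployed
# model on a fixed probe set and flag runs whose score drifts.

def drift_check(scores, baseline, tolerance=0.05):
    """Return the indices of evaluation runs that fall more than
    `tolerance` below the baseline score."""
    return [i for i, s in enumerate(scores) if baseline - s > tolerance]

# Accuracy from repeated runs of the same probe set, e.g. weekly.
weekly = [0.91, 0.90, 0.89, 0.82, 0.84]
print(drift_check(weekly, baseline=0.91))  # runs 3 and 4 drifted -> [3, 4]
```

A one-off audit would have seen only the first score; it is the repeated measurement that exposes the later degradation.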

"Because AI is a socio-technical endeavour, we need multifaceted skills and talent," she said. "We need data scientists, ML [machine learning] engineers, developers. We need red teamers who specifically look for vulnerabilities within the system. We need legal, policy and AI governance specialists. There's a whole range of roles."

    However, Coetsee and other panellists were clear that, as it stands, there is still a need to properly define assurance metrics and standardise how systems are tested, something that can be difficult given the highly contextual nature of AI deployments.

    Stacie Hoffmann, head of strategic growth and department for data science and AI at the National Physical Laboratory, for example, noted that while there are lots of testing evaluation tools either on the market or being developed in-house – which can ultimately help build confidence in the reliability and robustness of a given system – “there’s not that overarching framework that says ‘this is what good testing looks like’.”

    Highlighting how assurance practices can still provide insight into whether a system is acting as expected, or its degree of bias in a particular situation, she added that there is no one-size-fits-all approach. “Again, it’s very context-specific, so we’re never going to have one test that can test a system for all eventualities – you’re going to need to bring in different elements of testing based on the context and the specificity,” said Hoffmann.

    For Coetsee, one way to achieve a greater degree of trust in the technology, in lieu of formal rules, regulations or standards, is to run limited test pilots where models ingest customer data, so that organisations can gain better oversight of how they will operate in practice before making purchase decisions.

    “I think people have quite a heightened awareness of the risks around these systems now … but we still do see people buying AI off of pitch decks,” she said, adding that there is also a need for more collaboration throughout the UK’s nascent AI assurance ecosystem.

    “We do need to keep working on the metrics … it would [also] be amazing to understand and collaborate more to understand what controls and mitigations are actually working in practice as well, and share that so that you can start to have more trustworthy systems across different sectors.”

    Horse or cart: assurance vs regulation

Speaking on how the digital ethics conversation has evolved over the past year, Liam Booth – a former Downing Street chief of staff who currently works in policy, communication and strategy at Anthropic – noted that while global firms like his would prefer a "highest common denominator" approach to AI regulation, whereby they adhere to the strictest regulatory standards possible to ensure compliance across jurisdictions with differing rules, the UK itself should not "rush toward regulation" before there is a full understanding of the technology's capabilities or how it has been developed.

    “Because of things like a very mature approach to sandboxes, a very open approach to innovation and regulatory change, the UK could be the best place in the world to experiment, deploy and test,” he said, adding that while the UK government’s focus on building an assurance ecosystem for AI is positive, the country will not be world-leading in the technology unless it ramps up diffusion and deployment.

    “You are not going to have a world-leading assurance market, either from a regulatory or commercial product side, if there aren’t people using the technology that wish to purchase the assurance product,” said Booth.

    However, he noted that building up the assurance ecosystem can be helpful for promoting trust in the tech, as it will give both public and private sectors more confidence to use it.

    “In a world in which you’re not the datacentre capital, or you may not necessarily have a frontier model provider located in your country, you need to continually innovate and think about what your relevance is at that [global] table, and keep recreating yourself every few years,” said Booth.

    Taking a step back

    However, for Gaia Marcus, director of the Ada Lovelace Institute, while it is positive to be talking about assurance in more detail, “we need to take a massive step back” and get the technology regulated first as a prerequisite to building trust in it.

    Highlighting Ada’s July 2023 audit of UK AI regulation – which found that “large swathes” of the economy are either unregulated or only partially regulated when it comes to use of AI – she argued there are no real sector-specific rules around how AI as a general-purpose technology should be used in contexts like education, policing or employment.

    Marcus added that assurance benchmarks for deciding “what good looks like” in a range of different deployment contexts can therefore only be decided through proper regulation.

"You need to have a basic understanding of what good looks like … if you have an assurance ecosystem where people are deciding what they're assuring against, you're comparing apples, oranges and pears," she said.

    Marcus added that, due to the unrelenting hype and “snake oil” around AI technology, “we need to ask very basic questions” around the effectiveness of the technology and whose interests it is ultimately serving.

    “We’re falling down on this really basic thing, which is measuring and evaluating, and holding data-driven and AI technologies to the same standard that you would hold any other piece of technology to,” she said.
