    Second ever international AI safety report published

By TechAiVerse, February 11, 2026

    More than 100 artificial intelligence experts have produced the second international AI safety report ahead of a summit in India, outlining a high degree of uncertainty about the development and risks of AI

By Sebastian Klovig Skelton, Data & ethics editor

    Published: 10 Feb 2026 15:00

    The overall trajectory of general-purpose artificial intelligence (AI) systems remains “deeply uncertain”, even as the technology’s proliferation is generating new empirical evidence about its impacts, the second International AI safety report has found.

    Published on 3 February 2026, the report covers a wide range of threats posed by AI systems – from their impact on jobs, human autonomy and the environment to the potential for malfunctions or malicious use – and will be used to inform diplomatic discussions at the upcoming India AI Impact Summit.

    The latest report builds on its predecessor, released in January 2025, which was commissioned following the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in November 2023. Like that first edition, it highlights a high degree of uncertainty around how AI systems will develop, and around the kinds of mitigations that would be effective against a range of challenges.

    “How and why general-purpose AI models acquire new capabilities and behave in certain ways is often difficult to predict, even for developers. An ‘evaluation gap’ means that benchmark results alone cannot reliably predict real-world utility or risk,” it says, adding that the systemic data on the prevalence and severity of AI-related harms remains limited for the vast majority of risks.

    “Whether current safeguards will be sufficiently effective for more capable systems is unclear,” it adds. “Together, these gaps define the limits of what any current assessment can confidently claim.”

    It further notes that while general-purpose AI capabilities have improved in the past year through “inference-time scaling” (a technique that allows models to use more computing power to generate intermediate steps before giving a final answer), the overall picture remains “jagged”, with leading systems excelling at some difficult tasks while failing at simpler ones.
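The "inference-time scaling" idea can be illustrated with a toy self-consistency sketch in Python (an illustrative stand-in, not the report's methodology; the `noisy_solver` and `answer` names are hypothetical): sampling a fallible solver several times and taking a majority vote shows how spending more compute per query, rather than training a bigger model, can improve reliability.

```python
import random

CORRECT = 42  # the answer our toy task is looking for

def noisy_solver(seed: int) -> int:
    """A stand-in for one sampled reasoning path: right ~60% of the time."""
    rng = random.Random(seed)
    if rng.random() < 0.6:
        return CORRECT
    return rng.randint(0, 100)  # an arbitrary wrong answer

def answer(num_samples: int) -> int:
    """Inference-time scaling via self-consistency: draw several
    independent samples and return the majority-vote answer."""
    votes = [noisy_solver(seed) for seed in range(num_samples)]
    return max(set(votes), key=votes.count)
```

A single sample is wrong roughly 40% of the time, but with 25 votes the majority answer is almost always correct: more compute at inference time yields better answers with no change to the underlying "model".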

    On AI’s further development to 2030, the authors say plausible scenarios vary dramatically.

    “Progress could plateau near current capability levels, slow, remain steady, or accelerate dramatically in ways that are difficult to anticipate,” it says, adding that while “unprecedented” investment commitments suggest major AI developers expect continued capability gains, unforeseen technical limits – including energy constraints, high-quality data scarcity and bottlenecks in chip production – could slow progress.

    “The social impact of a given level of AI capabilities also depends on how and where systems are deployed, how they are used, and how different actors respond,” it says. “This uncertainty reflects the difficulty of forecasting a technology whose impacts depend on unpredictable technical breakthroughs, shifting economic conditions and varied institutional responses.”

    Systemic impacts

    Regarding the systemic impact on labour markets, the report notes that there is disagreement on the magnitude of future impacts, with some expecting job losses to be offset by new job creation, and others arguing that widespread adoption would significantly reduce both employment and wages.

    It adds that while it is too soon for a definitive assessment of the impacts, early evidence suggests junior positions in fields like writing and translation are most at risk.

    Relatedly, it says that AI systems also present risks to human autonomy, in the sense that reliance on AI tools can weaken critical thinking skills and memory, while also encouraging automation bias.

    “This relates to a broader trend of ‘cognitive offloading’ – the act of delegating cognitive tasks to external systems or people, reducing one’s own cognitive engagement and therefore ability to act with autonomy,” it says. “Cognitive offloading can free up cognitive resources and improve efficiency, but research also indicates potential long-term effects on the development and maintenance of cognitive skills.”

    As an example, the report notes one study that found clinicians’ ability to detect tumours without AI assistance had dropped by 6% just three months after the introduction of AI support.

    On the implications for income and wealth inequality, it says general-purpose systems could widen the disparities both within and between countries.

    “AI adoption may shift earnings from labour to capital owners, such as shareholders of firms that develop or use AI,” it says. “Globally, high-income countries with skilled workforces and strong digital infrastructure are likely to capture AI’s benefits faster than low-income economies.

    “One study estimates that AI’s impact on economic growth in advanced economies could be more than twice that in low-income countries. AI could also reduce incentives to offshore labour-intensive services by making domestic automation more cost-effective, potentially limiting traditional development paths.”

    The prediction that AI is likely to exacerbate inequality by reducing the share of all income that goes to workers relative to capital owners is in line with a January 2024 assessment of AI’s impacts on inequality by the International Monetary Fund (IMF), which found the technology will “likely worsen overall inequality” if policymakers do not proactively work to prevent it from stoking social tensions.

    JPMorgan boss Jamie Dimon expressed similar concerns at the 2026 World Economic Forum, warning that the rapid roll-out of AI throughout society will cause “civil unrest” unless governments and companies work together to mitigate its effect on job markets.

    Malfunction and loss control issues

    On AI’s scope for malicious use – which covers threats such as cyber attacks, the technology’s potential for “influence and manipulation”, and the impacts of AI-generated content – the report says it “remains difficult to assess” due to a lack of systemic data on the prevalence and severity of these harms, despite their proliferation.

    For malfunction risks, which include challenges around the reliability of AI and loss of human control over it, the report adds that agentic systems capable of acting autonomously are making it harder for humans to intervene before failures occur, and could allow “dangerous capabilities” to go undetected before deployment.

    However, it says that while AI systems are not yet capable of creating loss of control scenarios, there is currently not enough evidence to determine when or how they would pass this threshold.

    Evidence chasms

    According to the report, it is clear that more research is needed to understand the prevalence of different risks and how much they vary across different regions of the world, especially in regions such as Asia, Africa and Latin America that are rapidly digitising. 

    “There is a lack of evidence on: how to measure the severity, prevalence, and timeframe of emerging risks; the extent to which these risks can be mitigated in real-world contexts; and how to effectively encourage or enforce mitigation adoption across diverse actors,” it says.

    “Certain risk mitigations are growing in popularity, but more research is needed to understand how robust risk mitigations and safeguards are in practice for different communities and AI actors (including for small and medium-sized enterprises).

    “Further, risk management efforts currently vary highly across leading AI companies,” it continues. “It has been argued that developers’ incentives are not well-aligned with thorough risk assessment and management.”

    The report notes that while tech firms have made a number of voluntary commitments – including the Frontier AI Safety Commitments made by AI companies and the Seoul Declaration for safe, innovative and inclusive AI signed by governments at the AI summit in Seoul – there is a further evidence gap around “the degree to which different voluntary commitments are being met, what obstacles companies face in adhering fully to commitments, and how they are integrating … safety frameworks into broader AI risk management practices”.

    The report adds that key challenges include determining how to prioritise the diverse risks posed by general-purpose AI, clarifying which actors are best positioned to mitigate them, and understanding the incentives and constraints that shape each of their actions.

    “Evidence indicates that policymakers currently have limited access to information about how AI developers and deployers are testing, evaluating and monitoring emerging risks, and about the effectiveness of different mitigation practices,” it says.

    While the 2025 safety report goes into more detail on risks around AI-related discrimination and its propensity to reproduce negative social biases, the 2026 report only touches on this briefly, noting that “some researchers have argued that most technical approaches to pluralistic alignment fail to address, and potentially distract from, deeper challenges, such as systematic biases, social power dynamics, and the concentration of wealth and influence”.

    Although the 2025 report notes “a holistic and participatory approach that includes a variety of perspectives and stakeholders is essential to mitigate bias”, the 2026 report only says that open source approaches are critical to “enabling global majority participation in AI development”.

    “Without such access, communities in low-resource regions risk exclusion from AI’s benefits,” it says, adding that allowing downstream developers to fine-tune models for diverse applications that, for example, adapt them for under-resourced minority languages or optimise performance for specific purposes “can allow more people and communities to use and benefit from AI than would otherwise be possible”.

