
AI Expert Warns World Is Running Out of Time to Tackle High-Risk AI Revolution


AI safety specialist David Dalrymple has warned in no uncertain terms that humanity may be running out of time to prepare for the dangers of fast-moving artificial intelligence. Speaking to The Guardian, the programme director at the UK government’s Advanced Research and Invention Agency (ARIA) emphasised that AI development is progressing “really fast,” and that society cannot take the reliability of these systems for granted. He is the latest authoritative figure to add to escalating global anxiety that deployment is outstripping safety research and governance models.

Dalrymple contended that the existential risk comes from AI systems able to do virtually all economically valuable human work more quickly, at lower cost and at higher quality. In his view, such systems might “outcompete” humans in the very domains that underpin our control over civilisation, society and perhaps even planetary-scale decisions. The concern is not just about losing jobs, but about losing strategic dominance in vital sectors, from security to infrastructure management.

He described a scenario in which AI capabilities race ahead of safety mechanisms, triggering destabilisation across both the security landscape and the broader economy. Dalrymple emphasised an urgent need for more technical research into understanding and controlling the behaviour of advanced AI, particularly as systems become more autonomous and integrated into vital services. Without this work, he suggested, governments and institutions risk deploying tools whose failure modes and emergent properties they barely understand. 

Dalrymple, whose work at ARIA includes developing safeguards for AI systems used in critical infrastructure such as energy grids, warned that it is “very dangerous” for policymakers to assume advanced AI will simply work as they want it to. He noted that the science needed to fully guarantee reliability is unlikely to emerge in time, given the intense economic incentives driving rapid deployment. As a result, he argued, the “next best” strategy is to focus aggressively on controlling and mitigating the downsides, even if perfect assurance is out of reach.

The AI expert also said that by late 2026, AI systems may be able to do a full day's worth of R&D, including self-improvement in AI-related fields such as mathematics and computer science. Such a development would give a further jolt to AI capabilities and pull society deeper into what he described as a “high-risk” transition that civilisation is largely “sleepwalking” into. While he conceded that unsettling developments can ultimately yield benefits, he said the road we appear to be on holds a great deal of peril if safety continues to lag behind capability.