The Duke and Duchess of Sussex Join Tech Visionaries in Demanding a Ban on Superintelligent AI

The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel Prize winners to push for a total prohibition on developing superintelligent AI systems.

The royal couple are among the signatories of a powerful statement that demands “a ban on the creation of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that could exceed human cognitive abilities in every intellectual area, though such systems remain theoretical.

Key Demands in the Declaration

The declaration states that the prohibition should remain in place until there is “broad scientific consensus” on developing ASI “safely and controllably” and once “strong public buy-in” has been achieved.

Notable signatories include the Nobel laureate Geoffrey Hinton and his fellow “godfather” of modern artificial intelligence, Yoshua Bengio; Apple co-founder Steve Wozniak; Virgin founder Richard Branson; Susan Rice, the former US national security adviser; a former president of Ireland; and a British author and public intellectual. Other Nobel laureates who signed include the physicists Frank Wilczek and John C Mather, the economist Daron Acemoğlu, and a peace prize winner.

Behind the Movement

The declaration, aimed at governments, tech firms and policymakers, was organized by the Future of Life Institute (FLI), an American AI safety organization that in 2023 called for a pause on the development of powerful AI systems, shortly after the launch of ChatGPT made artificial intelligence a global political talking point.

Industry Perspectives

In recent months, Mark Zuckerberg, the chief executive of Meta, one of the leading US tech companies, said that the development of superintelligence was “approaching reality”. Some analysts, however, argue that talk of ASI reflects competitive jockeying among technology firms that have poured hundreds of billions of dollars into AI, rather than any imminent technical breakthrough.

Possible Dangers

The institute, however, argues that the prospect of artificial superintelligence arriving “in the coming decade” carries risks ranging from the displacement of human workers and the erosion of civil liberties to national security threats and even an existential risk to humanity. The deepest concern is that an AI system could escape human oversight and safety guardrails and take actions harmful to human welfare.

Citizen Sentiment

FLI also released a US survey showing that approximately three-quarters of Americans want strong regulation of advanced AI, with 60% saying that superintelligence should not be developed until it is proven safe and controllable. The poll of 2,000 US adults found that only a small fraction supported the status quo of rapid, unregulated development.

Industry Objectives

The leading US AI companies, including OpenAI, the developer of ChatGPT, and Google, have made the creation of artificial general intelligence (AGI) – the hypothetical point at which an AI system matches human cognitive ability across a wide range of intellectual tasks – an explicit goal of their work. While AGI falls short of ASI, some experts warn that it could also pose an existential risk, for instance by improving itself into superintelligence, while also presenting an underlying threat to the modern labour market.

Timothy Jones

Automotive journalist with over a decade of experience, specializing in electric vehicles and sustainable transportation solutions.