
AI Safety and the UN Security Council

On July 18th, the UK will lead a session on Artificial Intelligence at the UN Security Council (UNSC) in advance of the Global Summit on AI Safety to be held later this year. This will be the first time that AI has been discussed at the UNSC, and it reflects a shift in attitudes towards the importance of AI. The proposed summit is timely, given the recent surge in the performance of Large Language Models (LLMs), the scale of AI investment, and growing acknowledgment of the significant risks posed by AI to the future of humanity. But who should be involved in this summit? What issues should be addressed? What solutions might it offer? And why involve the Security Council at all?

Who should be invited to attend?
Who should be involved in this Summit? Clearly a broad set of countries should attend, including AI superpowers such as the US, China and Russia, together with good representation of the developing world. But should it be Governments only? AI is a highly technical subject with far-reaching sociological implications. Whilst academia has played an important role in its development, the scale of computing and personnel resources that the Big Tech companies have been able to apply has given Big Tech a dominant role in recent years. That expertise needs a voice at this AI Safety Summit. But while the voice of Big Tech needs to be heard, so too do those of academia and civil society. In a discourse dominated by Big Tech until now, a counterpoint is needed with a multi-stakeholder engagement of our institutions and civil society.

"In a discourse dominated by Big Tech until now, a counterpoint is needed with a multi-stakeholder engagement of our institutions and civil society"

What should be on the agenda?
AI Safety is significant within both the civil and military spheres. It relates to immediate issues such as the impact and control of deepfakes, disinformation and the threat to democracy, but also to longer-term issues surrounding the AI control and alignment problem and the risk of humanity being outcompeted and losing agency, potentially permanently. Such problems can arise through human error, but also through the actions of bad actors or rogue states.

It is the more profound AI Safety issues that triggered the Open Letter and the letter “from 350 top AI CEOs and researchers”, in the light of the remarkable progress in the performance of recent Large Language Models and concern that this rate of progress will continue. The concept of the existential risk posed by advanced intelligence has been raised by many leaders in AI and computing, Alan Turing being one of the first.

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.”  Alan Turing, 1951.

More recently, it has become clear that a number of technologies could pose an existential threat to humanity, and many see the threat posed by advanced intelligence whose goals are misaligned with the interests of humanity as the greatest of these existential risks. Geoffrey Hinton, until recently a leading AI scientist at Google, had thought that the development of advanced intelligence was some 50 or 60 years away. “Of course I do not think that now,” he said recently, confirming his current assessment that Artificial General Intelligence could be between 5 and 20 years away.

What the future might hold with the development of a superintelligence has, until recently, been the stuff of science fiction. Recent developments indicate that this is no longer the case. Serious consideration needs to be given to the potential evolution of civilisation in this context. Though some people are fatalists, happy to accept however events unfold, most people wish life to remain broadly similar to that which has existed hitherto. This necessitates a level of control that is currently lacking. It also suggests that the prime focus of the Summit should be on how to control the development of AI safely and responsibly.

Global Governance key to AI Safety
More generally, solutions to AI Safety involve both regulation of the way that AI is used and control of the way that AI is developed. Given the trans-national nature of artificial intelligence, the need for interoperability and the global significance of the existential threat posed by AI, the governance of AI must be global. The sooner this is established, the better the chance that humanity will be able to address these threats. Various institutional initiatives have been promoted recently, such as the establishment of AI equivalents of CERN and the IAEA. For years, people have proposed designs for an international regime for the governance of Artificial Intelligence. For instance, “Effective, Timely and Global – the urgent need for good Global Governance of AI” set out the reasons behind the need for AI Global Governance and presented a proposed pathway to realise that goal. A later paper, “AI Global Governance – what are we aiming for?”, sought to characterise the desired attributes of AI Global Governance without being too prescriptive about the precise architecture of that governance structure.

“The Governance of AI must be Global”

What is needed is a commitment to the introduction of effective global governance of AI and the establishment of a Roadmap that will deliver such governance in a timely manner.

AI Safety – innovation trade-off

A key debate will concern the trade-off between AI Safety and innovation. The urgency with which this Summit has been arranged reflects the fact that the world has so far been more focussed on innovation than on safety. The US has been a strong supporter of innovation, Europe has supported innovation but is more actively engaged in safety and regulation, whilst the UK has sought a middle way, albeit closer to the US approach than that of the EU.

In medicine, a relatively mature science, rigorous, costly and time-consuming trials are considered necessary before the risks associated with a new drug can be deemed acceptable. With AI, however, an immensely powerful LLM can be launched upon society with no notice at all, leading to the most rapid adoption of any new technology to date. That makes no sense. Up to now, investment in AI Safety has been less than 1% of AI research and development expenditure. Geoffrey Hinton argues that it should be 50%. That difference is a measure of the transformation needed in attitudes towards AI Safety.

“investment in AI Safety has been less than 1% of AI research and development expenditure”

AI and the Security Council
The AI presentation to the Security Council by the British Foreign Secretary is evidence of the political energy the UK is applying to this issue, with Demis Hassabis recently saying of the UK Government that “they’re really on the ball”. This may well be true, but the important issue is the substance of what they are addressing. Through the UNSC event, they are seeking to engage publicly with both China and Russia on this issue. There are clearly differences of position, but there are also some commonly held views, and that commonality needs to be built upon. Problems with China and Russia should not be used as an excuse for avoiding the exploration of what can be agreed now. There are very serious security implications, including Lethal Autonomous Robots, the risks of AI in nuclear weapon control systems, and the actions of rogue states and other rogue actors. The UN Security Council does not represent the governance structure that AI needs, but the involvement of the UNSC is a demonstration of both the significance of AI and the breadth of its impact.

“Problems with China and Russia should not be used as an excuse for avoiding the exploration of what can be agreed now”

What are the outcomes the world needs?
Two key conclusions are that:

A) Addressing AI Safety concerns should be the top priority, given the significance of the associated existential risks
B) Effective AI governance strategies are necessarily global in nature, given the trans-national character of AI and the widespread, broadly decentralised access to these technologies

Against this background, we propose that an initial set of outcomes of the AI Safety Summit this autumn should include:  

  1. Agreement that AI needs global governance
  2. Commitment to the early development of an ambitious Roadmap for AI Governance, with clear steps that are consistent with the timelines needed to address AI Safety
  3. Commitment to a large-scale shift in AI Safety research expenditure, starting at 10% of total AI research and development spending and rising rapidly to 50%
  4. An agreement not to develop a Large Language Model with greater computing power than that used for GPT-4 until the way that such models work is properly understood.
  5. An acknowledgement that humanity should cede leadership to a superior intelligence only if society elects to do so, not lose control by mistake, with all that this implies in terms of action on AI Safety.

Robert Whitfield is the Chair of the One World Trust and Chair of the World Federalist Movement / Institute for Global Policy’s Transnational Working Group on AI.