The AI Safety Summit Process: Will France Uphold the Torch at the 2025 AI Action Summit?

The AI Seoul Summit, held this past May, served as a muted follow-up to the Bletchley Park AI Safety Summit. As we prepare for the next summit in France, planned for February 2025, the stakes are high: there is a real risk of AI safety being side-lined in favour of national economic priorities. If France fails to prioritise safety, it could unravel the progress made to date.

The Seoul Summit: A Shift Away from Safety

Priorities of the AI Seoul Summit:

Safety — To reaffirm the commitment to AI safety and to further develop a roadmap for ensuring AI safety

Innovation — To emphasise the importance of promoting innovation within AI development

Inclusivity — To champion the equitable sharing of AI’s opportunities and benefits

The AI Seoul Summit achieved notable progress, particularly through the signing of the Seoul Declaration by 27 nations and the EU, which reaffirmed a collective commitment to international cooperation on AI. Additionally, 16 leading AI firms*, including Amazon, Google, and Microsoft, made voluntary Frontier AI Safety Commitments, agreeing to develop and deploy advanced AI models responsibly. These firms pledged transparency, internal accountability, and risk management, with plans to publish safety frameworks for severe risks ahead of the 2025 Paris Summit. Separately, 15 international and Korean tech companies* supported the Seoul AI Business Pledge, committing to responsible AI development and to internal safety measures such as watermarking and labelling AI-generated content.

The creation of the Network of AI Safety Institutes, agreed upon by 10 countries and the EU, is another significant advancement; the network aims to monitor the implementation of these safety commitments. The UK’s commissioning of the Interim International Scientific Report on the Safety of Advanced AI, with input from 30 countries and oversight by an international Expert Advisory Panel, also gives policymakers an authoritative source on rapidly advancing AI technologies.

However, despite these advancements, critical gaps remain. The removal of ‘safety’ from the summit title and the shift from discussing existential risks to broader impacts raised concerns that the focus on AI safety might be diluted. Moreover, the lack of robust enforcement mechanisms behind these voluntary commitments highlights the ongoing tension between advancing AI innovation and ensuring safety, a tension that risks side-lining safety considerations in favour of economic and technological progress.

Key to the success of AI safety governance is a continuing process of AI safety summits that engages the international community in confronting existential risks. It is also essential to build upon the information-sharing precedent set at Bletchley Park and to keep fostering the international dialogue needed for unified action on what are fundamentally global risks.

The Persistent Threat of AI: Commitments Remain Voluntary and Fragile

Lee Sharkey, Co-Founder and CSO of Apollo Research, highlighted at ‘Bletchley Unpacked’ that many AI-related risks remain unaddressed. Despite the Seoul Summit’s risk-based approach, detailed discussion of these risks was notably absent. Evaluations of large language models (LLMs) by the UK AI Safety Institute revealed their vulnerability to basic jailbreaks, their potential to generate harmful outputs, and their capability to conduct elementary cyber-attacks. These findings underscore that the threats posed by AI are ongoing and unresolved.

The newly established Network of AI Safety Institutes offers some hope for pushing a risk-based AI safety and governance agenda. However, without robust enforcement mechanisms, the industry’s commitment to safety remains voluntary and fragile. The importance of AI safety cannot be overstated, yet AI technology continues to outpace the necessary regulations and standards.

The French Summit: Prioritising the Economy Over Safety

The 2025 AI Action Summit in France presents a critical opportunity for the international community to advance AI safety and governance. However, the current plans for the summit fall short. Its guiding principle, “What does a society look like when AI is working well and in people’s interest?”, suggests a shift toward discussions of the future of work and the use of AI assistants, relegating safety to a secondary concern. The official track for AI safety and security has been reduced to ‘AI Trust’, with the main objective being to “consolidate the mechanisms for building trust in AI…”, an objective that falls short of addressing imminent and existential risks.

This shift reflects France’s national AI policy. The report ‘AI: Our Ambition for France’ by the French Artificial Intelligence Commission prioritises economic growth and technological sovereignty, suggesting that AI could double France’s economic growth and boost GDP by €250 to €420 billion over the next decade. This economic drive comes with a trade-off, however: the report advocates “de-demonizing AI without idealizing it”, downplaying the risks that AI could pose.

The report also criticises excessive regulation, arguing that it could stifle innovation, and supports the development of open-source AI models despite the disinformation and biosecurity risks they carry. This stance is troubling: it suggests that France might prioritise rapid AI development over necessary precautions, potentially side-lining safety in the pursuit of economic growth.

The AI Race: Speed Over Safety

France’s ambition to establish Paris as an ‘AI capital’ further complicates the situation. The French government’s aggressive investment strategy, including raising €15 billion in foreign investment and securing a €4 billion partnership with Microsoft, underscores its commitment to becoming a global AI leader. However, this drive to lead the AI race risks undermining the safety focus of the 2025 AI Action Summit.

The French AI Commission’s report argues that without a significant increase in AI investment, France and Europe could face economic decline. It calls for bold action to secure computing power and facilitate access to data, while downplaying the need for stringent safety regulation. We are not against such action, but the desire to play catch-up could have far-reaching consequences, potentially unravelling the AI Safety Summit process initiated at Bletchley Park.

Getting Back on Track: Time for Action

As we approach the 2025 AI Action Summit in France, it is imperative that AI safety remains at the forefront. The international community must advocate a balanced approach that does not sacrifice safety for economic gains. France’s current trajectory suggests a potential side-lining of safety in favour of economic ambitions, but this can and must be corrected.

Stakeholders, think tanks, and the public must demand that AI safety be a primary focus of the summit’s agenda. Ongoing dialogue, rigorous research, and sustained pressure on governments and industry leaders are essential to keep AI safety at the forefront. The legacy of Bletchley Park and Seoul must not fade; rather, it must be reinforced and expanded upon in Paris.

(*As of August 2024)