The event, co-hosted with the Republic of Korea, saw the UK government pledge new funding for AI safety research, as well as convince major tech firms to shore up their safety measures
The AI Seoul Summit took place this week, bringing together the UK and South Korean governments to discuss AI safety alongside some of the biggest companies in the AI industry.
The meeting saw 16 global AI companies commit to a set of safety outcomes building on those set out in the ‘Bletchley Declaration’.
The UK published the ‘Bletchley Declaration’, signed by 28 countries and the European Union, at the AI Safety Summit held at Bletchley Park in November last year. The document saw its signatories pledge to develop AI safely and responsibly, as well as to collaborate on further AI safety and research measures.
The new commitments on AI safety, agreed by major tech firms from around the world, include a promise not to develop or deploy AI models if the associated risks cannot be mitigated. The companies must also display an increased level of transparency, publishing safety frameworks that measure the risks of their frontier models.
The signatories of the “Frontier AI Safety Commitments” document are:
Amazon
Anthropic
Cohere
Google / Google DeepMind
G42
IBM
Inflection AI
Meta
Microsoft
Mistral AI
Naver
OpenAI
Samsung Electronics
Technology Innovation Institute
xAI
Zhipu.ai
“The true potential of AI will only be unleashed if we’re able to grip the risks. It is on all of us to make sure AI is developed safely and today’s agreement means we now have bolstered commitments from AI companies and better representation across the globe,” said Technology Secretary Michelle Donelan.
“The UK is a world leader when it comes to AI safety, and I am continuing to galvanise other nations as we place it firmly on the global agenda and capitalise on the Bletchley Effect.”
Alongside these pledges, Donelan also announced £8.5 million in grant funding for AI safety research projects in the UK.
The programme will be overseen by Shahar Avin, a researcher at the Centre for the Study of Existential Risk (CSER) in Cambridge, and Christopher Summerfield, Research Director at the UK’s AI Safety Institute (AISI), which the government launched last year.
“We expect to offer around 20 exploratory or proof-of-concept grants and will invite future bids for more substantial proposals to develop research programmes further,” reads the AISI website. “AISI will collaborate on this work with UKRI, The Alan Turing Institute and other AI Safety Institutes worldwide for this programme.”
Initiatives under consideration include, but are not limited to, those tackling the malicious use of deepfakes and AI-driven misinformation. Importantly, these solutions would ideally intervene on the relevant platforms themselves, rather than by modifying the AI models that generate the content.
“With evaluation systems for AI models now in place, Phase 2 of my plan to safely harness the opportunities of AI needs to be about making AI safe across the whole of society,” said Donelan.
“This is exactly what we are making possible with this funding which will allow our Institute to partner with academia and industry to ensure we continue to be proactive in developing new approaches that can help us ensure AI continues to be a transformative force for good.”
How is AI changing the UK’s connectivity landscape? Join the discussion at Connected Britain 2024
Also in the news:
UK government conditionally approves £15bn Vodafone–Three merger
Nokia and Vodafone trial Open RAN with Arm and HPE
T-Mobile and Verizon to buy US Cellular, reports say