
AI safety summits could help shape UK legislation, Technology Secretary says

Michelle Donelan, Secretary of State for Science, Innovation and Technology, is co-chairing a discussion at the Seoul summit (Lucy North/PA)

The increased international co-operation on AI safety sparked by UK-created AI safety summits will help the formation of domestic legislation, Technology Secretary Michelle Donelan has said.

The UK is currently co-hosting the AI Seoul Summit with South Korea, where more than a dozen AI firms have agreed to create new safety standards, while 10 nations and the EU have agreed to form an international network of publicly backed safety institutes to further AI safety research and testing globally.

The summit comes six months after the UK held the inaugural AI Safety Summit at Bletchley Park, where world leaders and AI firms agreed to focus on the safe and responsible development of AI, and carry out further research on the potential risks around the technology.

Ms Donelan said these regular gatherings and discussions were helping to place AI safety “at the top” of national agendas around the world, as many countries consider how to best legislate on the subject of the emerging technology.

“What we’ve said is that legislation needs to be at the right time, but the legislation can’t be out of date by the time you actually publish it, and we have to know exactly what is going into that legislation – we have to have a grip on the risks – and that’s another thing that this summit process helps us to achieve,” she told the PA news agency.

“I think what we have done in the UK by setting up this long-term process of summits – starting with Bletchley and now here in Seoul and then there’ll be the one in France – is to create a long-term process to convene the world on the very topic of AI safety and innovation and inclusivity, which are all intertwined together, so that we can really focus on and keep this at the top of other countries’ and nations’ agendas.”

The Technology Secretary added that “AI doesn’t respect geographical boundaries” and it “isn’t enough” to work only on AI safety domestically, with the “interoperability” of the new network of international safety institutes and resulting shared knowledge helping the governments to be “much more strategic” at managing the risks of AI.

“That said, of course we have a domestic track in this area, which revolves around adding to the resources and the skills and support of our existing regulators, as well as making sure that when the time comes we do actually legislate,” she said.

The Technology Secretary added that the next step in discussions at the summit in Seoul, where she will co-chair a discussion with other technology ministers from around the world on Wednesday, would be on how to further embed safety into AI development.

“How I see it is that phase one was basically Bletchley until Seoul, and what we managed to achieve there was the ‘Bletchley effect’ so that rocketed AI up the agenda in many different countries,” she said.

“It also demonstrated the UK’s global leadership in this area, and we set up the framework as to how we can do (AI model) evaluations via the institutes.

“Now in phase two – Seoul and beyond, to France – we need to also look at not just how can we make AI safe but how can we make safety embedded throughout our society, what I call systemic safety.”