
Sunak to address benefits and possible existential threat in speech on AI

Prime Minister Rishi Sunak will give a speech on the risks and benefits of AI (James Manning/PA)

Rishi Sunak will set out how he will address the dangers presented by artificial intelligence while harnessing the benefits as a Government paper warns of a possible existential threat.

In a speech in London on Thursday, the Prime Minister will say the rapidly expanding technology brings new opportunities for growth and advances as well as “new dangers”.

He will argue he is being responsible by seeking to “address those fears head-on” to give the public the “peace of mind that we will keep you safe”.

A new paper published by the Government Office for Science to accompany the speech says there is insufficient evidence to rule out a threat to humanity from AI.

Based on sources including UK intelligence, it says many experts believe such a threat is a “risk with very low likelihood and few plausible routes”, which would need the technology to “outpace mitigations, gain control over critical systems and be able to avoid being switched off”.

It adds: “Given the significant uncertainty in predicting AI developments, there is insufficient evidence to rule out that highly capable future frontier AI systems, if misaligned or inadequately controlled, could pose an existential threat.”

Three broad pathways to “catastrophic” or existential risks are set out. The first is a self-improving system that can achieve goals in the physical world without oversight and works against human interests.

The second is a failure of multiple key systems, in which intense competition leaves a single company with a technological edge in control and its system then fails because of safety, controllability or misuse problems.

Finally, over-reliance is judged to be a threat as humans grant AI more control over critical systems they no longer fully understand and become “irreversibly dependent”.

In his speech on Thursday, Mr Sunak is expected to say AI will bring “new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us”.

“But it also brings new dangers and new fears,” he is set to add.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe, while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

In terms of capabilities, the Government’s paper notes that frontier AI can already perform “many economically useful tasks” such as conversing fluently and at length, and be used as a translation tool or to summarise lengthy documents and analyse data.

It suggests that the technology is likely to become substantially more useful in the future, and potentially be able to carry out tasks more efficiently than humans, but it notes that “we cannot currently reliably predict ahead of time which specific new capabilities a frontier AI model will gain” as the ways of training AI models are likely to also change and evolve.

But among the potential risks, the paper identifies the technology’s hugely broad range of potential use cases as an issue, arguing it is hard to predict how AI tools could be used and therefore to protect against possible problems.

It adds that the current lack of safety standards is a key issue, and warns that AI could “substantially exacerbate existing cyber risks” if misused – potentially able to launch cyber attacks autonomously, although the paper suggests AI-powered defences could mitigate some of this risk.

In addition, it warns that frontier AI could disrupt the labour market by displacing human workers, and could lead to a spike in misinformation through AI-generated images or by unintentionally spreading inaccurate information on which an AI model has been trained.

The paper will serve as a discussion document at the UK’s AI safety summit next week, where world leaders and tech giants will discuss the developing issues around artificial intelligence.