
Tech firms falling short on misinformation ahead of general election – committee

A committee of MPs warned there is ‘too little evidence’ that tech firms were doing enough to manage threats to democracy (Peter Byrne/PA)

The biggest tech and social media companies are falling short on protecting users from content designed to disrupt democracy because they are not working together on the issue, a joint committee of Parliament has warned.

The Joint Committee on the National Security Strategy (JCNSS) said it was concerned by the differing approaches tech firms take to monitoring and regulating potentially harmful content.

The committee said the evidence it had received from the biggest platforms, as part of its defending democracy inquiry, showed companies were developing individual policies based on their own sets of principles, rather than co-ordinating standards and best practice.

Dame Margaret Beckett, chairwoman of the JCNSS, said evidence from firms including X (formerly Twitter), TikTok, Snap, Meta, Microsoft and Google showed an “uncoordinated, siloed approach to the many potential threats and harms facing UK and global democracy”.

Social media and wider tech platforms have found themselves under additional scrutiny this year because of record numbers of people expected to take part in elections.

Polls are due in more than 70 countries including the UK, US and India. This coincides with the rapid evolution of artificial intelligence, which is fuelling a rise in AI-generated content, including misleading material better known as deepfakes.

Dame Margaret said the committee was also concerned by the firms’ use of free speech as a defence for allowing certain types of content to stay online.

“The committee understands perfectly well that many social media platforms were at least nominally born as platforms to democratise communications: to allow and support free speech and to circumvent censorship,” Dame Margaret said.

“These are laudable goals but they never gave these companies or any individual running and profiting from them the right or authority to arbitrate on what legitimate free speech is; that is the job of democratically accountable authorities.

“That only holds truer for the form that many of these publishing platforms have in fact taken – one of monetising information spread through addictive technologies.”

She added that members of the committee also had concerns over the approach of some of the biggest tech firms to combating the rising problem of AI-powered misinformation, and also criticised their approach to giving evidence to the inquiry.

Dame Margaret Beckett is chairwoman of the Joint Committee on the National Security Strategy (Stefan Rousseau/PA)

“This year we have seen groups developing technology to help people decipher the veracity of the dizzying variety of information on offer at every moment online.

“We would have expected that kind of front foot and responsibility from the companies profiting from spreading the information,” she said.

“For a start, we expected social media and tech companies to proactively engage with our parliamentary inquiry, especially one so directly related to their work at such a critical moment in our global history.

“And if we must pursue a company operating and profiting in the UK to engage with a parliamentary inquiry, we expect much more than a regurgitation of some of its publicly available content which does not specifically address our inquiry.

“Much of the written evidence that was submitted shows – with few and notable exceptions – an uncoordinated, siloed approach to the many potential threats and harms facing UK and global democracy.

“The cover of free speech does not cover untruthful or harmful speech, and it does not give tech media companies a get-out-free card for accountability for information propagated on their platforms.”

Although some platforms have announced some tools to better monitor and flag AI-generated content on their sites, industry-wide standards on the issue are still not in place.

Earlier this year, fact-checking charity Full Fact warned that the UK was “vulnerable” to misinformation, in part because of gaps in existing legislation and the rise of technology such as generative AI.

But Dame Margaret warned that there was also “too little evidence” that tech firms were doing enough to manage the threats, and called for more Government intervention.

“Though we have not concluded our inquiry or come to our recommendations, there is far too little evidence from global commercial operations of the foresight we expected: proactively anticipating and developing transparent, independently verifiable and accountable policies to manage the unique threats in a year such as this,” she said.

“There is far too little evidence of the learning and co-operation necessary for an effective response to a sophisticated and evolving threat, of the kind the Committee described in our report on ransomware earlier this year.

“The Government’s Taskforce on Defending Democracy might be a useful co-ordinating body for social media companies to proactively submit and share their learning on foreign interference techniques.”

A Government spokesperson said: “Defending our democratic processes is an absolute priority and we will continue calling out malicious activity that poses a threat to our institutions and values, including through our Defending Democracy Taskforce.

“Once implemented, the Online Safety Act will also require social media platforms to swiftly remove illegal misinformation and disinformation – including where it is AI-generated – as soon as they become aware of it.”