Social media accounts linked to children were “directly targeted” with graphic content within as little as 24 hours of being created, a new report into online safety says.
It says accounts created for the study, based on real children as young as 13, were served content relating to eating disorders and self-harm, as well as sexualised images.
The study, from children’s safety group the 5Rights Foundation and the children’s commissioner for England, Dame Rachel de Souza, described the findings as “alarming and upsetting” and called for the introduction of mandatory rules on how online services are designed.
An Age Appropriate Design Code will come into force in September, with the Information Commissioner’s Office (ICO) able to levy fines and other punishments on services that fail to build in, by design, new safety standards around protecting the data of users under 18.
But 5Rights said more must be done to integrate broader child safety into online platforms from the design process onwards.
It says that despite knowing the age of younger users, social media platforms were allowing them to be contacted, unsolicited, by adults as well as recommending potentially damaging content.
Facebook, Instagram and TikTok were the platforms named in the report, which was carried out with the research firm Revealing Reality.
In response, all three services said they took the safety of younger users seriously.
“The results of this research are alarming and upsetting. But just as the risks are designed into the system, they can be designed out,” 5Rights chair Baroness Kidron said.
“It is time for mandatory design standards for all services that impact or interact with children, to ensure their safety and wellbeing in the digital world.
“In all other settings, we offer children commonly agreed protections. A publican cannot serve a child a pint, a retailer may not sell them a knife, a cinema may not allow them to view an R18 film, a parent cannot deny them an education, and a drug company cannot give them an adult dose of medicine.
“These protections do not only apply when harm is proven, but in anticipation of the risks associated with their age and evolving capacity.
“These protections are hardwired into our legal system, our treaty obligations, and our culture. Everywhere but the digital world.”
She added that the study had highlighted a “profound carelessness and disregard for children” was “embedded” in the features, products and services of the digital world.
Dame Rachel said: “This research highlights the enormous range of risks that children currently encounter online.
“We don’t allow children to access services and content that are inappropriate for them, such as pornography, in the offline world.
“They shouldn’t be able to access them in the online world either. I look forward to working with the Government, parents, online platforms and organisations such as 5Rights to bring about an online world which is fit for children.”
Online safety campaigner Ian Russell, who set up a foundation in his daughter Molly’s name after she took her own life having viewed self-harm and suicide content online, said the research showed “how algorithmic amplification actively connects children to harmful digital content, sadly as I know only too well, sometimes with tragic consequences”.
“In our digital wilderness, young people need curated pathways to explore, allowing them to roam while remaining safe. Routes to trusted areas of support, especially in connection to mental health, should be better signposted so help can be provided whenever it is needed,” he said.
“All of us, governments, corporations and individuals, need to move fast to mend all things digital.
“We must find ways to weed out online harms and cultivate the good, if our digital world is to flourish as it should.
“Above all, we must prioritise safety, especially for children when online. We must work to prevent digital wolves seeking out the vulnerable and destroying young lives.”
Responding to the report, a TikTok spokesman said: “Our top priority is to promote a safe and positive experience on TikTok, and we removed 62 million videos in the first quarter of 2021 for violating our community guidelines, 82% of which were removed before they had received a single view.
“Protecting our younger users is vitally important, and TikTok has taken industry-leading steps to promote a safe and age-appropriate experience for teens.
“We disabled direct messaging for under 16s, made accounts aged 13 to 15 private by default, and introduced family pairing so that parents and guardians can control settings such as search and direct messaging.”
A spokesman for Facebook, which also owns Instagram, said: “We agree our apps should be designed with young people’s safety in mind.
“We don’t allow pornographic content or content that promotes self-harm and we’re also taking more aggressive steps to keep teens safe, including preventing adults from sending DMs to teens who don’t follow them. We look forward to sharing more in the next couple of weeks.
“It’s worth pointing out, however, that this study’s methodology is weak in a few areas: first, it seems they’ve drawn sweeping conclusions about the overall teen experience on Instagram from a handful of avatar accounts.
“Second, the posts it highlights are not ones recommended to these avatar accounts, but actively searched for or followed.
“Third, many of these examples pre-date changes we’ve made to offer support to people who search for content related to self-harm and eating disorders.”