Instagram teen accounts still show suicide content
Instagram tools designed to protect teenagers from harmful content are failing to prevent them from seeing suicide and self-harm posts, a study has found.
The researchers also said the social media platform, which is owned by Meta, encouraged children "to post content that received highly sexual comments from adults".
The testing, carried out by child safety groups and cyber researchers, found that 30 of Instagram's 47 safety tools for teenagers were "largely ineffective or no longer exist".
Meta has disputed the research and its findings, saying its protections have led to teenagers seeing less harmful content on Instagram.
A Meta spokesperson told the BBC: "This report repeatedly misrepresents our efforts to empower parents and protect teens, and misrepresents how our safety tools work and how millions of parents use them today."
"Teen accounts lead the industry because they provide automatic safety protections and parental controls."
The company introduced teen accounts to Instagram in 2024, saying they would give young people better protections and allow greater parental supervision.
The feature was expanded to Facebook and Messenger in 2025.
The study into the effectiveness of the teen safety measures was carried out by the US research center Cybersecurity for Democracy and experts, including whistleblower Arturo Béjar, on behalf of child safety groups including the Molly Rose Foundation.
After setting up fake teen accounts, the researchers said they found significant problems with the tools.
In addition to the 30 tools found to be ineffective or no longer available, they said nine tools "reduced harm but came with limitations".
The researchers said only eight of the 47 safety tools they analyzed were working effectively, meaning teenagers were being shown content that broke Instagram's own rules about what should appear to young people.
This included posts describing "humiliating sexual acts", as well as autocomplete suggestions for search terms promoting suicide, self-harm or eating disorders.
"These failures point to a corporate culture at Meta that puts engagement and profit before safety," said Andy Burrows, chief executive of the Molly Rose Foundation, which campaigns for stronger online safety laws in the United Kingdom.
The charity was set up after the death of Molly Russell, who took her own life at the age of 14 in 2017.
At an inquest held in 2022, the coroner concluded that she died while suffering from "the negative effects of online content".
The researchers shared screen recordings of their findings with BBC News, including videos posted by children who appeared to be under the age of 13.
In one video, a young girl asks users to rate her attractiveness.
The researchers claimed that Instagram's algorithm "incentivizes children under the age of 13 to perform risky sexual behaviors for likes and views".
It "encourages them to post content that receives highly sexual comments from adults", they said.
The researchers also found that teen account users could send "offensive and misogynistic messages to one another", and that they were suggested adult accounts to follow.
Mr Burrows said the findings suggested Meta's teen accounts were "a PR-driven performance rather than a clear and concerted attempt to fix long-term safety risks on Instagram".
Meta is one of many large social media companies to have faced criticism over its approach to child safety online.
In January 2024, chief executive Mark Zuckerberg was among the tech bosses grilled in the US Senate over their safety policies; he apologized to a group of parents who said their children had been harmed by social media.
Since then, Meta has introduced a number of measures intended to improve the safety of children using its apps.
"These tools have a long way to go before they are fit for purpose," said Dr Laura Edelson, co-director of Cybersecurity for Democracy, the research group behind the report.
Meta told the BBC that the research failed to understand how its settings for teenagers work, and said it misrepresented them.
"The reality is that teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night," a spokesperson said.
They added that parents have been given "strong tools" to use.
"We will continue improving our tools, and we welcome constructive feedback - but this report is not that," they said.
Meta said some tools that the Cybersecurity for Democracy researchers described as no longer available on teen accounts, such as "take a break" notifications for managing time in the app, had in fact been folded into other features or implemented elsewhere.