Instagram still poses a danger to children despite new safety tools, says Meta whistleblower
Children and teenagers are still at risk of online harm on Instagram despite the rollout of “ineffective” safety tools, according to research led by a Meta whistleblower.
Almost two-thirds (64%) of Instagram’s new safety tools were found to be ineffective, according to a comprehensive review led by Arturo Béjar, a former senior engineer at Meta who testified against the company before the US Congress, together with academics from New York University and Northeastern University, the Molly Rose Foundation in the UK and other groups.
Meta – which owns and operates several prominent social media platforms and communication services, including Facebook, WhatsApp, Messenger and Threads – made teen accounts compulsory on Instagram in September 2024, amid growing regulatory and media pressure to address online harms in the US and the UK.
However, Béjar said that although Meta “consistently makes promises” about how teen accounts protect children from “sensitive or harmful content, inappropriate contact, or harmful interactions” and give them control over their use, the safety tools are often “ineffective or quietly changed”.
He added: “Because of Meta’s lack of transparency, who knows how long this has been the case, and how many teenagers have experienced harm at the hands of Instagram as a result of Meta’s negligence and misleading promises, which create a false and dangerous sense of safety.
“Children, including many under the age of 13, are not safe on Instagram. This is not about bad online content; it is about reckless product design. Meta makes deliberate product design choices that deliver inappropriate content, inappropriate contact and compulsive use to children every day.”
The research was based on “test accounts” mimicking the behaviour of a teenager, a parent and an adult, which were used to analyse 47 safety tools in March and June 2025.
Using a green, yellow and red rating system, the researchers found that 30 tools fell into the red category, meaning they could be easily circumvented or evaded with less than three minutes of effort, or had been discontinued. Only eight received a green rating.
Findings from the test accounts included that adults were easily able to message teenagers who did not follow them, despite this supposedly being blocked by teen accounts – although the report notes that Meta fixed this after the test period. The report also found that minors could initiate conversations with adults on Reels, and that sexual or offensive messages were difficult to report.
The researchers also found that the “hidden words” feature failed to block offensive language as advertised: they were able to send “you are a whore and you should kill yourself” without any prompt to reconsider, any filtering, or any warning to the recipient.
Meta says this feature applies only to unknown accounts, not to followers, whom users can block.
Algorithms were also shown to surface sexual or inappropriate content, with the “not interested” feature failing to work effectively and autocomplete suggestions actively recommending search terms and accounts related to suicide, eating disorders and illegal substances.
The researchers also pointed out that many of the widely promoted time-management tools aimed at reducing addictive behaviour had been discontinued – although Meta said the functionality remained and had since been renamed – and documented hundreds of Reels featuring users who appeared to be under 13, despite Meta requiring users to be at least 13 to hold an account.
The report said Meta continues to design Instagram’s reporting features in ways that do not encourage real-world adoption.
In a foreword to the report co-authored by Ian Russell, founder of the Molly Rose Foundation, and Maurine Molak, co-founder of David’s Legacy Foundation and a campaigner for safer online spaces – both of whom lost children to suicide after they were bombarded with harmful online content – the parents said Meta’s new safety measures were ineffective.
As a result, they believe the UK’s Online Safety Act must force companies to systematically reduce the harms on their platforms by requiring their services to be “safe by design”.
The report also calls for the regulator, Ofcom, to be “bolder and more assertive” in enforcing its regulatory regime.
A Meta spokesperson said: “This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens use them today. Teen accounts lead the industry because they provide automatic safety protections and straightforward parental controls.
“The reality is that teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback – but this report is not that.”
“We take the views of parents campaigning for children’s online safety extremely seriously, and we can see the work behind this research,” an Ofcom spokesperson said.
“Our rules are a reset for children online. They demand a safety-first approach to how tech firms design and operate their services in the UK.”
A government spokesperson said: “Under the Online Safety Act, platforms are now legally required to protect young people from harmful content, including material promoting self-harm or suicide. That means safer algorithms and less toxic feeds. Services that fail to comply can expect tough enforcement from Ofcom.”