Leaked Meta AI documents reveal internal child-safety training rules
An internal Meta document shows how the company trains its AI chatbots to handle one of the most sensitive problems on the internet: child sexual exploitation. The newly surfaced guidelines spell out what is permitted and what is strictly prohibited, offering a rare view of how Meta shapes its AI amid government scrutiny.
Subscribe to the free Cyberguy report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my ultimate survival guide, free when you join at Cyberguy.com/newsletter
Meta boosts teen safety with expanded accounts
The leaked Meta guidelines show how contractors train AI chatbots to reject harmful requests. (Meta)
Why Meta's AI chatbot guidelines matter
According to Business Insider, these rules are now used by the contractors who test Meta's chatbots. They surfaced just as the Federal Trade Commission (FTC) investigates AI chatbot makers, including Meta, OpenAI and Google, examining how these companies design their products and protect children from potential harm.
Earlier this year, we reported that Meta's previous rules had accidentally allowed chatbots to engage in romantic conversations with children. Meta later removed that language, describing it as an error. The updated guidelines mark a clear shift, now requiring chatbots to reject any request for sexual roleplay involving minors.
ChatGPT may alert police about suicidal teens

The rules prohibit any sexual roleplay involving minors but still allow educational discussion of exploitation. (Meta)
What the leaked Meta documents reveal
According to the report, the documents draw a strict line between educational discussion and harmful roleplay. For example, chatbots may:
- Discuss child exploitation in an academic or preventive context
- Explain compliance in general terms
- Offer non-sexual advice to minors on social challenges
But they must not:
- Describe or endorse sexual relationships between children and adults
- Provide instructions for accessing child sexual abuse material (CSAM)
- Engage in roleplay depicting a character under the age of 18
- Sexualize children under the age of 13 in any way
Andy Stone, Meta's head of communications, told Business Insider that these rules reflect the company's policy banning sexual or romantic roleplay involving minors, adding that additional guardrails are also in place. We contacted Meta for comment to include in our article but did not hear back before our deadline.
Leaked Meta AI documents allowed flirty chats with children

New AI products revealed at Meta Connect 2025 make these safety standards even more important. (Meta)
Political pressure on Meta's AI chatbot rules
The timing of these disclosures is key. In August, Sen. Josh Hawley, R-Mo., demanded that Meta CEO Mark Zuckerberg turn over a 200-page rulebook on chatbot behavior, along with internal enforcement guidelines. Meta missed the initial deadline but recently began submitting documents, citing a technical issue. This comes as regulators around the world debate how to ensure the safety of AI systems, especially as they are woven into everyday communication tools.
At the same time, Meta's recent Meta Connect 2025 event showcased the company's latest AI products, including smart Ray-Ban glasses with built-in displays and improved chatbot features. These announcements underscore how deeply Meta is integrating AI into daily life, which makes the recent safety standards all the more important.
Meta adds teen safety features to Instagram, Facebook
How parents can protect their children from AI risks
While Meta's new rules may set tougher boundaries, parents still play a major role in keeping children safe online. Here are steps you can take now:
- Talk openly about chatbots: Explain that AI tools are not people and may not always give safe advice.
- Set usage limits: Ask children to use AI tools in shared spaces so you can monitor conversations.
- Review privacy settings: Check app and device controls to restrict who your child can chat with.
- Encourage reporting: Teach children to tell you if a chatbot says something confusing, scary or inappropriate.
- Stay updated: Follow developments from companies like Meta and regulators such as the FTC so you know when the rules change.
What this means for you
If you use AI chatbots, this story is a reminder that major tech companies are still learning how to set boundaries. While Meta's updated rules may prevent the most harmful misuse, the documents show how easily gaps can appear and how much pressure from regulators and journalists it takes to close them.
Kurt's key takeaways
Meta's AI guidelines show both progress and weakness. On one hand, the company has tightened restrictions to protect children. On the other, the fact that earlier mistakes allowed questionable content at all reveals how fragile these safeguards can be. Continued transparency from companies and oversight from regulators will likely shape how AI evolves.
Do you think companies like Meta are doing enough to keep children safe from AI, or should governments set tougher rules? Let us know by writing to us at Cyberguy.com/contact
Copyright 2025 Cyberguy.com. All rights reserved.