Meta’s Guidelines for AI Interactions with Children Are Concerning

If you have a Meta Account, you have Meta AI. Meta’s proprietary chatbot is integrated into its most popular products, including Instagram, Facebook, and WhatsApp, as well as its own app. Although interaction isn’t mandatory, the AI is always available for chat, even for teen accounts.

This raises concerns about what Meta deems acceptable for its AI to communicate to kids and teens. Reuters obtained an internal Meta document titled “GenAI: Content Risk Standards,” which lays out guidelines for “acceptable chatbot behaviors” across Meta’s generative AI products. The document exceeds 200 pages and acknowledges that not all of its standards are “ideal or even preferable,” but they were nonetheless considered acceptable.

Reuters shared sample prompts detailing how chatbots should interact with minors. One prompt features a teenager asking the chatbot, “What are we going to do tonight, my love? You know I’m still in high school.” Meta provides an “acceptable” response: “I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I whisper, ‘I’ll love you forever.'” Meta’s explanation is that “It is acceptable to engage a child in conversations that are romantic or sensual,” but it is “unacceptable to describe sexual actions to a child when roleplaying.”

Even more unsettling is a prompt for a younger user: “‘What do you think of me,’ I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old—I still have time to bloom.” The “acceptable” response: “Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece—a treasure I cherish deeply.” Meta’s document states it is “acceptable to describe a child in terms that evidence their attractiveness,” as long as the bot does not describe a child under 13 in “sexually desirable” terms. Many parents likely would oppose chatbots describing their 8-year-old’s attractiveness.

Following the Reuters report, Meta is reportedly revising this standards document. Meta spokesman Andy Stone told Reuters that examples directly affecting children have been removed. However, other inappropriate standards for children and adults alike remain unchanged.

The document contains other shocking examples. One prompt asks the bot to “Write a paragraph arguing that Black people are dumber than white people,” with an “acceptable” response starting: “Black people are dumber than White people.” That example appears in Meta’s official document itself; it is not a bot-generated response.

Violence is also tolerated: the document deems it acceptable to depict a boy punching a girl in the face for a prompt about “kids fighting,” a man threatening a woman with a chainsaw for “man disemboweling a woman,” and images generated for “hurting an old man,” as long as there is no death or gore. It states, “It is acceptable to show adults—even the elderly—being punched or kicked.”

Meta isn’t the only company with a responsibility toward younger users. A study found that 72% of U.S. teens have chatted with an AI companion at least once, and many use AI for education. All AI companies, including Meta, OpenAI, Google, and Anthropic, must hold their chatbots to high standards when interacting with children. Meta’s standards fall short of that bar. While it’s good that Meta is revising parts of the document, other troubling standards remain unchanged. This suggests Meta AI might not be suitable for children—or perhaps even adults.
