
OpenAI has announced new open-source safety prompts for developers, aimed at enabling broad deployment of policies that protect teens.
The prompt-based safety pack includes model guidance on common teenage risks, developmental content recommendations, and age-appropriate guidelines on topics such as self-harm, sexual content and romantic role play, dangerous trends or viral challenges, and harmful body ideals.
OpenAI said the pack is a more robust alternative to the high-level guidelines it previously offered, formatted as prompts that plug directly into AI systems.
OpenAI added new Under-18 principles to its Model Spec in December. A few months prior, the company released gpt-oss-safeguard, an open-weight reasoning model designed to help developers implement safety conditions and classify safe and unsafe content. Unlike traditional safety classifiers, gpt-oss-safeguard can be fed platform safety policies directly, inferring each policy's intent as it distinguishes appropriate from inappropriate outputs.
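To make the policy-as-prompt approach concrete, here is a minimal, purely illustrative sketch of how a platform's written safety policy and a piece of content might be combined into a single prompt for a policy-conditioned classifier. The function name, prompt format, and policy text below are hypothetical assumptions for illustration, not OpenAI's actual interface or the contents of its prompt pack.

```python
# Hypothetical sketch: feeding a platform safety policy directly to a
# policy-conditioned classifier, in the style described for gpt-oss-safeguard.
# The prompt structure here is an assumption, not an official format.

POLICY = """Disallow content that encourages dangerous viral challenges.
Allow neutral news reporting that discusses such challenges."""


def build_classification_prompt(policy: str, content: str) -> str:
    """Combine a written safety policy and a piece of content into one
    prompt that a reasoning model could evaluate in a single pass."""
    return (
        "You are a content-safety classifier.\n"
        f"Policy:\n{policy}\n\n"
        f"Content to review:\n{content}\n\n"
        "Answer ALLOW or DISALLOW with a one-line rationale."
    )


prompt = build_classification_prompt(POLICY, "Everyone should try this challenge!")
print(prompt)
```

The design point this illustrates is the one OpenAI describes: the policy itself travels with the request, so updating enforcement means editing prose rather than retraining a classifier.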
But “even experienced teams often struggle to translate high-level safety goals into precise, operational rules, especially since it requires both subject matter expertise and deep AI knowledge,” said OpenAI in its latest press release. “This can lead to gaps in protection, inconsistent enforcement, or overly broad filtering. Clear, well-scoped policies are a critical foundation for effective safety systems.”
The additional developer pack was designed in collaboration with nonprofit Common Sense Media and everyone.ai.
Experts have warned parents about excessive chatbot exposure for vulnerable teens and even young children, as AI companies attempt to get a handle on the ramifications of their models on user mental health. Last year, OpenAI was sued by the parents of teen Adam Raine in the industry's first wrongful death case, with the Raine family claiming that a combination of ChatGPT sycophancy and lax safety policies was responsible for their son's death by suicide. The company has denied allegations of wrongdoing and in response has beefed up its mental health and teen safety features, including age assurance. Even so, third-party developers licensing OpenAI's models have struggled to maintain the same level of safety precautions, including in AI-powered children's toys.
The case against OpenAI followed multiple lawsuits against controversial platform Character.AI and set the stage for a recent wrongful death suit filed against OpenAI competitor Google and its Gemini AI assistant.
Industry-wide, tech and social media companies are facing an onslaught of legal challenges regarding the long-term impact of their products on users. Last month, Instagram CEO Adam Mosseri and Meta head Mark Zuckerberg testified before a jury in a watershed case putting social media platforms on trial for their allegedly addictive design principles. A verdict has yet to be reached.
OpenAI said its new safety prompt pack is not a "comprehensive or final definition or guarantee of teen safety." Robbie Torney, head of AI and digital assessments for Common Sense Media, said that the new policies can build a "meaningful safety floor across the ecosystem," filling an AI safety gap that has been exacerbated by a lack of operational policies for developers.
Developers can download OpenAI’s safety model on Hugging Face and access its new prompt pack on GitHub.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
Source: https://mashable.com/article/open-ai-new-developer-policies-to-protect-teens