OpenAI will soon launch a version of ChatGPT that will exclusively cater to teens under the age of 18, as the artificial intelligence company faces parents’ wrath over the protection of children from AI chatbots.
“We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” Sam Altman wrote in a blog post.
The teen version of ChatGPT will include parental controls that let parents monitor their children’s conversations as well as their usage of the platform.
The startup’s safety updates follow a recent inquiry by the Federal Trade Commission into several tech companies, including OpenAI, examining how AI chatbots like ChatGPT may negatively impact children and teenagers.
How will ChatGPT for teens be different?
If OpenAI identifies a user as a minor, it will automatically redirect them to the age-appropriate version of ChatGPT, which will block graphic and sexual content and can involve law enforcement in rare cases of acute distress, the company said.
OpenAI is also working on technology to more accurately estimate a user’s age, but if there’s any uncertainty or incomplete data, ChatGPT will automatically default to the under-18 experience.
Last month, following a lawsuit in which a family alleged the chatbot contributed to their teenage son’s suicide, OpenAI outlined how ChatGPT will respond in “sensitive situations.”
The latest update will allow parents to link their accounts on ChatGPT to those of their underage kids via email, set blackout hours for when their teen can’t use the chatbot, manage which features to disable, guide how the chatbot responds and receive notifications if the teen is in acute distress.
Plea for AI regulation reaches Senate
Meanwhile, at a US Senate hearing on Tuesday on the technology’s harms to children, three parents whose children died or were hospitalised after interacting with artificial intelligence chatbots called on Congress to regulate AI chatbots.
Chatbots “need some sense of morality built into them,” said Matthew Raine, who sued OpenAI following his son Adam’s death by suicide in California after receiving detailed self-harm instructions from ChatGPT.
“The problem is systemic, and I don’t believe they can’t fix it,” Raine said, adding that ChatGPT quickly shuts down other prohibited lines of inquiry, but not those involving self-harm.
With inputs from agencies