Worried about what your kids are using ChatGPT for?
OpenAI has now rolled out parental controls for the popular artificial intelligence (AI) application. The AI firm said this has been done to provide younger users of the platform with a safer and more “age-appropriate” experience.
But what do we know about the parental controls? How do they work? Why did OpenAI introduce them? What have other companies done in the past? What do studies say?
Let’s take a closer look.
What we know about parental controls
First, it is important to note that ChatGPT is not meant for anyone under 13. The parental controls feature is designed for minors between the ages of 13 and 17. For it to work, the parent and the teen must both have OpenAI accounts, which then have to be linked. This can be done in two ways: the parent or guardian can send an invite to the teen by email or text, or the teen can send an invite to the parent or guardian. Linking is done from the parental controls tab in the settings menu.
OpenAI said the AI will then limit responses to queries about explicit content, romantic and sexual roleplay, viral challenges and “extreme beauty ideals”. Parents will also be given a control panel where they can regulate how the AI bot can be used.
Parents can also designate hours during the day when their children cannot access ChatGPT. They can stop the AI from generating images for their children and prevent their responses from being used to train AI models. They can also choose not to have conversations saved and opt out of voice mode. Parents can remove these filters; teens cannot. Minors can unlink their accounts at any time, but parents will be notified if they do.
The company has said it will also alert parents if their teenagers show signs of self-harm or suicidal ideation. The firm said it is setting up a system to flag cases where something might be “seriously wrong”.
A small team of specialists will review the situation and, in the rare case that there are “signs of acute distress”, they will notify parents by email, text message and push alert on their phone — unless the parent has opted out.
OpenAI said it will protect the teen’s privacy by only sharing the information needed for parents or emergency responders to provide help.
“No system is perfect, and we know we might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent,” the company said.
The development comes in the aftermath of a number of high-profile cases surrounding the use of ChatGPT by minors. One family recently alleged that the AI encouraged their son to take his own life and filed a lawsuit against the company. It also came as the US Senate Judiciary Committee prepared to hold a hearing on the dangers of AI.
“Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them. We will continue to thoughtfully iterate and improve over time. We recommend parents talk with their teens about healthy AI use and what that looks like for their family,” OpenAI said in its statement.
OpenAI CEO Sam Altman wrote in a blog post on 16 September, “ChatGPT by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.
“‘Treat our adult users like adults’ is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom,” Altman added.
What other companies have done
OpenAI isn’t alone. A number of platforms, including Snapchat, TikTok, Google and Discord, have also rolled out similar features.
Snapchat, for example, has 428 million users around the world, around 20 per cent of whom are between the ages of 13 and 17. While Snapchat does not have parental controls per se, a teen’s privacy settings can be adjusted so that only friends can see their content and contact them.
Parents can also stop the app from suggesting their child as a friend to other users and prevent people from seeing their child’s location and phone number.
On TikTok, users can make their account private, allow only friends to comment on their videos or send them direct messages, and control who can reuse their content. Users can also decide who can see their liked videos, following list and followers list, and whether TikTok can recommend their account to others.
Google in August 2025 rolled out a “Parental Controls” option in Android settings. This lets parents and guardians limit screen time, allow their children to use the device only at specific times, block certain apps and filter explicit content in Google Chrome and Google Search. The company’s Google Family Link, operated from a parent’s phone, can also be used to approve purchases and alert parents to their children’s location.
Discord has rolled out a ‘family centre’ feature on its app. As with ChatGPT, the parent must have the Discord app installed on their phone with an account, and the two accounts must be linked. Once linked, parents can see whom their children have recently added as friends and which friends their teen has messaged or called, directly or in group messages, along with those users’ display names and avatars and when the last such message or call was made.
Parents can also check which servers their children have joined and participated in, with information about server names, icons and member counts. Discord also sends parents a weekly email summarising their child’s activity.
What experts say
But experts say the problem is that parents simply don’t use these settings frequently enough.
Meta has said just 10 per cent of parents had enabled supervision on their children’s Instagram accounts. Of these, only a single-digit percentage actually changed the settings to protect their kids.
“The dirty secret about parental controls is that the vast majority of parents don’t use them,” Zvika Krieger, ex-director of Meta’s responsible innovation team, told The Washington Post. “So unless the defaults are set to restrictive settings, which most are not, they do little to protect users.”
Researchers, in a 2020 report entitled “Parents say they want parental controls, but actual use is low”, noted, “Parents see digital management as part of parenting, but it is also a lot of WORK! — i.e., it requires effort that people don’t necessarily always want to, can give, or comprehend.”
Another Meta report in 2020 noted that parental supervision isn’t easy. In families where parents were lenient, it was older teens who tamped down on their siblings’ social media use. However, when parents took a firmer hand, it would often lead to arguments within the family.
“Parents also struggled to effectively enforce limits when many were considered ‘addicted’ to social media/phones themselves by their children,” the researchers said.
Nick Clegg, the former UK Deputy Prime Minister who joined Meta as global affairs chief, told The Guardian, “One of the things we do find … is that even when we build these controls, parents don’t use them.
“So we have a bit of a behavioural issue, which is: we as an engineering company might build these things, and then we say at events like this: ‘Oh, we’ve given parents choices to restrict the amount of time kids are [online]’ – parents don’t use it.”
With inputs from agencies