eDiscovery Today by Doug Austin

Parental Controls are Coming to ChatGPT: Artificial Intelligence Trends


Following the tragic story of a 16-year-old boy who died by suicide after interacting with ChatGPT, parental controls are coming to the popular AI assistant.

According to CNN Business (Parental controls are coming to ChatGPT ‘within the next month,’ OpenAI says, written by Lisa Eadicicco and available here), ChatGPT’s parent company, OpenAI, says it plans to launch parental controls for its popular AI assistant “within the next month” following allegations that it and other chatbots have contributed to self-harm or suicide among teens.

The controls will include the option for parents to link their account with their teen’s account, manage how ChatGPT responds to teen users, disable features like memory and chat history, and receive notifications when the system detects “a moment of acute distress” during use. OpenAI previously said it was working on parental controls for ChatGPT, but only specified the timeframe for release on Tuesday.


“These steps are only the beginning,” OpenAI wrote in a blog post on Tuesday. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”

The announcement comes after the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI alleging that ChatGPT advised the teenager on his suicide. Last year, a Florida mother sued chatbot platform Character.AI over its alleged role in her 14-year-old son’s suicide. There have also been growing concerns about users forming emotional attachments to ChatGPT, in some cases resulting in delusional episodes and alienation from family, as reports from The New York Times and CNN have indicated.

OpenAI didn’t directly tie its new parental controls to these recent reports, but said in a blog post last week that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises” prompted it to share more detail about its approach to safety. ChatGPT already includes measures such as pointing people to crisis helplines and other resources, an OpenAI spokesperson previously said in a statement to CNN.

But in the statement issued last week in response to Raine’s suicide, the company said its safeguards can sometimes become unreliable when users engage in long conversations with ChatGPT.


“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” a company spokesperson said last week. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”

The “proof will be in the pudding,” of course, but it’s good to see that OpenAI is attempting to address, rather than hide from, an issue that appears to be growing as large language models like ChatGPT continue to interact in an increasingly human-like manner. Younger users, who may be less educated on how these models really work, may be more likely to treat them as human, with clearly disastrous consequences in some instances. Let’s hope this helps.

So, what do you think? Are you surprised that parental controls are coming to ChatGPT? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
