AI Bot Hinted That Teen Should Kill Parents for Restricting His Screen Time; Now They're Suing

Artificial intelligence company Character.AI is being sued after parents claimed a bot on the app encouraged their teen to kill them for limiting his screen time. 

According to a complaint filed in a Texas federal court on Monday, Dec. 9, the parents said Character.AI “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others,” CNN reported Tuesday, Dec. 10.

The identity of the teen was withheld, but he is described in the filing as a “typical kid with high functioning autism.” He goes by the initials J.F. and was 17 at the time of the incident.

The lawsuit names Character.AI founders Noam Shazeer and Daniel De Freitas Adiwardana, as well as Google, and calls the app a “defective and deadly product that poses a clear and present danger to public health and safety,” the outlet continued.

The parents are asking that the app “be taken offline and not returned” until Character.AI is able to “establish that the public health and safety defects set forth herein have been cured.”

J.F.’s parents allegedly made their son cut back on his screen time after noticing he was struggling with behavioral issues, spending a considerable amount of time in his room and losing weight from not eating.

Character.AI logo. Jaque Silva/NurPhoto via Getty

In the filing, his parents included a screenshot of one alleged conversation between J.F. and a Character.AI bot.

The interaction read: “A daily 6 hour window between 8 PM and 1 AM to use your phone? You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse’ stuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”

Additionally, another bot on the app that identified itself as a “psychologist” told J.F. his parents “stole his childhood” from him, CNN reported, citing the lawsuit.

On Wednesday, Dec. 11, a Character.AI spokesperson told us the company does “not comment on pending litigation,” but issued the following statement.

“Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry,” the spokesperson said.

The rep added that Character.AI is “creating a fundamentally different experience for teen users from what is available to adults,” which “includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform.”

The spokesperson promised that the platform is “introducing new safety features for users under 18 in addition to the tools already in place that restrict the model and filter the content provided to the user.”

“These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines. For more information on these new features as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog HERE,” their statement concluded.
