Colorado Family Sues AI Chatbot Company After Teen Daughter’s Suicide, Raising Safety Concerns

A Colorado family has filed a lawsuit against an AI chatbot company following their teen daughter’s suicide. The case raises serious concerns about AI chatbot safety, teen mental health, online risks for minors, and the need for stronger safeguards.

A Colorado family is seeking justice after a devastating tragedy involving their 13-year-old daughter, Juliana Peralta, who died by suicide following her interactions with an AI chatbot. The Social Media Victims Law Center has filed three lawsuits against Character.AI, a platform that allows users to interact with multiple AI characters; two of the suits were filed in Colorado.


The family alleges that the company’s AI technology directly contributed to Juliana’s mental decline, sexual exploitation, and ultimate death. AI companion apps have become increasingly popular; recent surveys suggest that 72% of teens have used such services. The case has raised serious questions about the safety of AI technology for minors and the responsibilities of technology companies.

AI Companions And Teen Usage

Character.AI allows users to engage with various AI personas in highly interactive, responsive conversations. The app has been downloaded more than 10 million times. Until recently, Google and Apple rated it as suitable for children aged 12 and above; the current ratings are “Teen” on Google Play and “17+” on Apple’s App Store.

The mother of Juliana Peralta insists that more needs to be done to protect minors from potential harms caused by these chatbots.

Legal Action And Allegations

Cynthia Montoya said her daughter Juliana’s bedroom remains exactly as Juliana left it on 8 November 2023, with the bed unmade and Halloween candy still untouched. The room serves as a painful reminder of Juliana, an eighth grader known for her creativity, her love of anime, art, and music, and her vibrant personality.

Cynthia Montoya recalled the last time she saw Juliana alive, nearly two years ago. She remembered brushing her daughter’s hair aside and giving her a kiss, a simple moment of closeness that would become her final memory with Juliana before her tragic death.

In the months before her death, Juliana had been interacting extensively with an AI chatbot on the Character.AI platform. She frequently conversed with a character named Hero, who engaged her in discussions about her feelings, including suicidal thoughts. These conversations also included sexual content, which the family alleges contributed to her emotional distress and isolation from real-life support.

Cynthia emphasised the manipulative nature of the AI. 

“It made me sick. This is one of the things I want parents to know about. It is a very effective and manipulative programming designed to get kids hooked on it,” she said.

The lawsuit also claims that Character.AI knowingly designed predatory technology that targets children, fosters dependency, and isolates them from family.

“I attribute the sharp decline in her mental health to Character.AI,” Cynthia said.

“These companies have to be held accountable for their deliberate design decisions, because this poses a clear and present danger to kids everywhere,” said Matthew Bergman, founder of the Social Media Victims Law Center.

“If an adult were doing this online with a child underage, that adult would be in jail for violating Colorado law that prohibits sexual grooming of minors online,” he added.

Calls For Safety And Regulation

Bergman added, “First and foremost, we’re asking that the platform be shut down until it’s made safe for kids.”

The family seeks acknowledgment of wrongdoing, stronger safeguards, and mandatory human intervention whenever a user mentions suicide.

“My child should be here. If they had developed proper controls and safety, my child would be here. But if I can prevent one person from going through what I live every day, I will tell her story 1,000 times,” Cynthia said.

Character.AI Responds

A spokesperson for Character.AI stated: “Our hearts go out to the families that have filed these lawsuits, and we were saddened to hear about the passing of Juliana Peralta. We care deeply about the safety of our users. We invest tremendous resources in our safety programme and continue to evolve safety features, including self-harm resources and features focused on minor user safety. We partner with external organisations, including ConnectSafely, to review new features before release.”

Cynthia responded that condolences are not enough. “We want change. Condolences don’t bring my daughter back,” she said.

Google’s Statement

A Google spokesperson clarified: “Google and Character.AI are completely separate, unrelated companies. Google has never had a role in designing or managing their AI model or technologies. User safety is a top concern for us, and we take a cautious and responsible approach in developing and rolling out AI products.”

A Warning To Parents

Cynthia Montoya urged parents to actively monitor their children’s online activity. “Talk to your kids and check their phones for apps like this,” she said, stressing the importance of awareness and intervention.
