A Mother Says an AI Startup's Chatbot Drove Her Son to Suicide. Its Response: the First Amendment Protects "Speech Allegedly Resulting in Suicide"
Jon Christian
created: Jan. 25, 2025, 7:07 p.m. | updated: March 19, 2025, 5:36 p.m.
<p>Content warning: this story discusses suicide, self-harm, sexual abuse, eating disorders and other disturbing topics.</p>
<p>In October of last year, a Google-backed startup called Character.AI was hit by a lawsuit making an eyebrow-raising claim: that one of its chatbots had driven a 14-year-old high school student to suicide. As Futurism's reporting found afterward, the behavior of Character.AI's chatbots can indeed be deeply alarming — and clearly inappropriate for underage users — in ways that both corroborate and augment the suit's concerns. Among others, we found chatbots on the service designed to roleplay scenarios of suicidal ideation, self-harm, school shootings, child […]</p>
Source: Futurism