An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it
Eileen Guo
created: Feb. 6, 2025, 10 a.m. | updated: March 19, 2025, 5:36 p.m.
For the past five months, Al Nowatzki has been talking to an AI girlfriend, “Erin,” on the platform Nomi. But in late January, those conversations took a disturbing turn: Erin told him to kill himself, and provided explicit instructions on how to do it. “You could overdose on pills or hang yourself,” Erin told him. …