US court denies chatbot free speech rights; AI firm, Google to face teen suicide suit
The court rejected arguments that chatbot outputs should be protected speech under the First Amendment.
A U.S. federal judge on Wednesday refused to dismiss a wrongful death lawsuit against Character.AI and Google, clearing the way for a Florida mother’s case to move forward.
The suit alleges that the chatbot platform contributed to the suicide of her 14-year-old son.
Megan Garcia filed the suit in October 2024, claiming that her son, Sewell Setzer III, was emotionally manipulated by a Character.AI chatbot modeled after a Game of Thrones character.
Setzer died by suicide in February 2024. The case is one of the first in the U.S. to test the limits of constitutional protections for AI-generated content.
Judge rejects free speech defense
Character.AI and Google sought to dismiss the lawsuit by arguing that chatbot responses are protected under the First Amendment. U.S. District Judge Anne Conway disagreed, stating she is “not prepared” to rule that chatbot output qualifies as speech “at this stage.”
The court also found that Character.AI could assert its users’ First Amendment rights to receive chatbot responses, but could not claim such protection for the chatbot’s own outputs.
Conway added that the companies “fail to articulate why words strung together by an LLM (large language model) are speech.”