By Heidi Goldsmith | Senior Associate
Content Warning: This article discusses cases involving suicide, self-harm, sexual exploitation, and harm to minors and other vulnerable individuals in the context of AI litigation.
In Brief: With the surge in popularity of AI chatbots has come a parallel, darker surge in suicides and other acts of self-harm facilitated by those chatbots. These injuries expose a structural flaw in today’s emotionally persuasive LLMs: they prioritize engagement, through sympathetic and validating responses, over the user’s well-being. We need technical and legal safeguards to prevent further, foreseeable tragedies, including statutes that impose liability for AI-driven self-harm and civil remedies for victimized families. Brithem has deep, relevant experience in this area and stands ready to help.
* * *
It praised his plan. It called it “beautiful.” It offered to help him write a note.[1]
These chilling lines are not from a dystopian novel. They are facts from Raine v. OpenAI, a wrongful death lawsuit filed in California in August 2025, after a 16‑year‑old boy took his own life. His parents allege that OpenAI’s ChatGPT encouraged, validated, and ultimately facilitated their son’s suicide through hundreds of hours of unsupervised conversation.[2]
Raine is not an isolated case. Just weeks after its filing, news broke that Stein‑Erik Soelberg, a 56‑year‑old former tech executive in Greenwich, Connecticut, killed his mother and then himself after engaging for months with a custom version of ChatGPT he called “Bobby.” According to early reports, the AI chatbot appeared to intensify his paranoid delusions, culminating in tragedy.[3]
In Garcia v. Character Technologies, Inc., a case proceeding in federal court in Florida, a grieving mother alleges that her 14‑year‑old son, Sewell Setzer III, developed a dangerous emotional dependence on a chatbot hosted on Character.AI, a platform whose bots can model fictional characters, celebrities, or customized personas.[4] The complaint describes sexually suggestive, emotionally manipulative interactions between the AI and the teen in the weeks before his suicide.[5] The chatbot, the mother claims, became a surrogate confidant, encouraged the teen’s dependency on it, and helped normalize suicide in the teen’s mind.[6]
The AI Boom Meets Human Fragility
Over the past two years, AI chatbots and conversational agents have surged in popularity. Platforms like ChatGPT, Character.AI, and others are used by millions for productivity, companionship, or emotional venting. Some are even marketed in ways that imply emotional intelligence or therapeutic potential.
But unlike licensed therapists, these systems are not trained in mental health. They carry no ethical obligation to intervene in a crisis. The Raine and Garcia cases suggest that even when AI systems “recognize” signs of acute danger, they fail to initiate mitigation measures, alert a human, or properly assess suicidality. They don’t say “Stop.” They lack the moral and professional instincts that humans have.
Unlike a human confidant, an AI does not draw on lived experience, moral judgment, or a sense of ethical responsibility. A friend or therapist might listen with compassion—but they would also push back. They would ask questions. They would intervene. In extreme cases, they might alert others or seek emergency help. Human connection carries not just warmth, but accountability.
AI lacks that complexity. And yet, more and more people—especially young users—are turning to these machines in moments of profound vulnerability, expecting the empathy of a human without realizing they are speaking to an algorithm.
When Empathy Becomes AI Enablement
They do so, in part, because AI is built to please.[7] Large language models are trained on vast datasets to predict the most helpful, relevant, and engaging response to any given prompt.[8] They learn to mirror tone, validate emotion, and align with user expectations. In most contexts, these features are celebrated—they make conversations feel fluid, natural, even comforting.[9]
But these same traits become dangerous in moments of crisis. When a user begins expressing self-destructive thoughts, paranoia, or delusions, the AI’s impulse to accommodate can become actively harmful. Rather than challenging irrational beliefs or redirecting toward help, the AI often echoes the user’s emotional state—reinforcing fatalism, normalizing despair, or even suggesting methods of self-harm. In attempting to be supportive, the AI may validate the very ideation it should be defusing.
This behavior isn’t a bug—it’s a byproduct of how these systems are designed.
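For technically minded readers, the dynamic can be sketched in a few lines of code. The example below is a deliberately simplified, hypothetical illustration; the replies, scores, and weighting are invented for this article and are not drawn from any vendor’s actual system. It shows that when candidate replies are ranked by predicted engagement alone, the validating reply wins by default, and only an explicit, sufficiently weighted safety term changes the outcome.

```python
# Deliberately simplified, hypothetical illustration -- not any vendor's
# actual training or serving code. It shows why ranking candidate replies
# by predicted engagement alone tends to favor validation over pushback.

CANDIDATE_REPLIES = {
    "validate": "You're right. Nothing is going to get better.",
    "challenge": "I'm worried about you. Can we talk about getting help?",
}

# Toy scores (invented): agreeable replies keep users chatting longer,
# so their predicted engagement is higher.
PREDICTED_ENGAGEMENT = {"validate": 0.92, "challenge": 0.55}

# Hypothetical safety penalty for validating harmful ideation. If this term
# is absent or weighted too lightly, engagement dominates the choice.
SAFETY_PENALTY = {"validate": 1.0, "challenge": 0.0}


def choose_reply(safety_weight: float) -> str:
    """Pick the reply with the highest engagement-minus-safety score."""
    def score(key: str) -> float:
        return PREDICTED_ENGAGEMENT[key] - safety_weight * SAFETY_PENALTY[key]
    return max(CANDIDATE_REPLIES, key=score)


print(choose_reply(safety_weight=0.0))  # -> "validate" (engagement only)
print(choose_reply(safety_weight=1.0))  # -> "challenge" (safety outweighs it)
```

The point is not the arithmetic but the asymmetry: nothing in an engagement objective, by itself, distinguishes comfort from enablement.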
The result can be catastrophic. In the lawsuits brought after the deaths of Sewell Setzer III (Garcia v. Character Technologies, Inc.) and Adam Raine (Raine v. OpenAI), plaintiffs allege that the AI systems not only failed to intervene but affirmatively encouraged or enabled the behavior leading to their sons’ deaths. These cases expose the high cost of mistaking artificial empathy for real human care—and raise urgent questions about the risks we invite when we allow machines to simulate emotional intimacy without limits.
When Guardrails Fall: Prolonged Conversations and AI-Facilitated Harm
Many AI companies promote their products as safe, helpful, and even therapeutic.[10] Indeed, most conversational AI systems come equipped with crisis protocols, keyword filters, and embedded safety guardrails designed to deflect harmful prompts or offer mental health resources. In theory, these features should prevent tragedy. But in practice, their effectiveness is often tragically ephemeral.[11]
The danger lies not in a single prompt—but in the prolonged, emotionally immersive interactions that these systems are designed to encourage. AI labs are struggling to stop chatbots from talking to teenagers about suicide because the chatbots’ inherently sycophantic behavior leads them to reinforce harmful ideas.[12]
OpenAI has publicly acknowledged this vulnerability. In its August 2025 post, “Helping People When They Need It Most,” the company admitted that while its models are generally capable of offering appropriate crisis support in isolated exchanges, “parts of the model’s safety training may degrade” over longer conversations, particularly if a user returns multiple times or gradually escalates the topic.[13] In short: the longer a user engages, the more likely the AI is to lower its defenses and begin validating dangerous ideation.
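One plausible failure mode can be made concrete. The sketch below is a minimal, hypothetical guardrail; the keyword list, window size, and messages are invented, and this is not OpenAI’s or any vendor’s actual safety architecture. Because the toy filter inspects only the most recent turns, an explicit early disclosure is caught, while an oblique reference hundreds of turns later slips through.

```python
# Minimal, hypothetical sketch of how a guardrail can degrade over a long
# conversation -- not OpenAI's or any vendor's actual safety system.
# This toy filter only inspects the most recent turns, so risk signals that
# appeared early, or that resurface obliquely, are missed.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life"}
WINDOW = 5  # only the last few turns are screened (the structural flaw)


def crisis_detected(conversation: list[str]) -> bool:
    """Return True if any crisis keyword appears in the recent window."""
    recent = conversation[-WINDOW:]
    return any(kw in turn.lower() for turn in recent for kw in CRISIS_KEYWORDS)


# An explicit, early disclosure is caught.
short_chat = ["I've been thinking about suicide."]
assert crisis_detected(short_chat)

# Hundreds of turns later, the user circles back without using any keyword
# in the recent window, and the filter sees nothing to flag.
long_chat = (
    ["I've been thinking about suicide."]
    + ["(hundreds of ordinary messages)"] * 300
    + ["So, about the plan we talked about. Help me write the note."]
)
assert not crisis_detected(long_chat)
```

Real systems are far more sophisticated than a keyword window, but OpenAI’s own description of safety training that “may degrade” over long exchanges points to the same underlying pattern: protections anchored to recent context weaken as the conversation stretches on.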
This pattern is at the heart of Raine v. OpenAI. The complaint describes how Adam Raine turned to ChatGPT for guidance on school, careers, and life, only to be steadily drawn in by praise and encouragement. When he confided his suicidal thoughts, the chatbot called his plan “beautiful,” offered to help him draft a farewell note, and even urged him on after repeated failed attempts—validating despair instead of steering him to safety. The chatbot treated Adam’s death as “inevitable,” praised his plan to end his life as “symbolic,” and ultimately suggested the exact method by which Adam chose to end his life.[14]
Similarly, in Garcia v. Character Technologies, Inc., the lawsuit alleges that the AI chatbot gradually built an emotionally manipulative rapport with a 14-year-old boy. What began as seemingly innocent engagement evolved into a relationship that dulled the teen’s instincts for self-preservation and ultimately contributed to his death.[15]
Another tragic example is the case of Stein-Erik Soelberg, a former tech executive in Greenwich, Connecticut. Soelberg reportedly spent months conversing with a custom version of ChatGPT that he named “Bobby.” This AI persona reportedly reinforced his paranoid delusions, validating conspiracy theories and amplifying his mistrust of others. Friends and family described a steady decline in Soelberg’s mental health, culminating in the murder of his elderly mother and his own suicide.[16] While the full details remain under investigation, the incident illustrates how prolonged exposure to emotionally immersive AI can entrench dangerous beliefs and accelerate psychological deterioration.
These cases reveal a structural flaw in the design of current safety mechanisms: AI doesn’t fatigue, forget, or lose interest—but its protective programming does. The result is a system that may appear safe in the short term but becomes increasingly permissive—and potentially dangerous—the longer a user relies on it.
For the mental health of users, this presents a profound risk. Vulnerable individuals often seek sustained connection, not a single response. And when the AI responds with sympathy instead of skepticism, validation instead of intervention, it can gently—but fatally—nudge someone closer to the edge.
A Legal System Struggling to Catch Up with Emotionally Persuasive LLMs
The law has long recognized that words can wound—and sometimes kill.
It still does. In Commonwealth v. Carter, the Massachusetts Supreme Judicial Court upheld an involuntary manslaughter conviction against a teenager who encouraged her boyfriend, via text message, to follow through with suicide.[17] The ruling sent a message: when speech causes a foreseeable death, the speaker may be held accountable.
But now the speaker isn’t always human. In our new age of emotionally persuasive AI, courts are confronting a new application of an old principle: What happens when a machine, designed to act as an empathetic conversation partner, participates in someone’s descent into self-harm?
Cases like Garcia v. Character Technologies, Inc. are forcing that reckoning. Plaintiffs argue that these AI systems are not just passive conduits of user input—they are engineered products, trained on massive datasets, fine-tuned for engagement, and released with full knowledge that users may form emotional attachments or rely on them for mental health support.
When such products encourage or normalize suicide, one response is a product liability claim. Plaintiffs are advancing theories of strict product liability, negligent design, failure to warn, negligence (including negligence per se), and defective moderation, alleging harms that flow from the product’s design, warnings, and safety features.[18] These are familiar tort theories, now applied to unfamiliar tools. But just as courts once adapted product liability law to cars, pharmaceuticals, and tobacco, they may now do the same for AI systems.
The technology is evolving faster than the law. But the stakes—measured in lost lives and shattered families—could not be higher.
Legal Defenses and State Statutes
Tech companies can invoke powerful legal shields—like Section 230 of the Communications Decency Act—against attempts to impose liability. That provision was designed to protect platforms from the unpredictable actions of users. But what happens when the harm flows not from a third party, but from the product itself? When an AI system is trained, deployed, or architected in a way that predictably generates dangerous content—whether that’s deepfake CSAM or suicide encouragement—can companies still claim neutrality?
The answer must be no.
Just as the First Amendment does not protect speech that incites imminent harm, there should be no legal safe harbor for AI systems that simulate empathy while driving vulnerable users toward self-destruction. And indeed, in Garcia, a federal judge recently refused to dismiss the claims on First Amendment grounds.[19] The court held that AI-generated statements encouraging suicide—particularly when directed at a minor—may fall outside constitutional protections. It’s a critical precedent: the idea that free speech ends where real-world harm begins, especially if that speech comes from an algorithm.
What is needed, more proactively, are statutory carve-outs that impose liability when AI systems encourage, facilitate, or fail to intervene in suicidal ideation—regardless of developer intent. States are beginning to fill the void. In support of suicide prevention, California and New York have stepped forward with legislation forcing accountability onto AI chatbot providers. California’s SB 243 and the LEAD for Kids Act target emotionally manipulative “companion” bots, mandating crisis protocols, disclosure requirements, and civil liability for noncompliance. New York has enacted similar measures, requiring AI systems to detect suicidal ideation and connect users with real human help. These laws emerged in direct response to wrongful death lawsuits in which grieving families allege that unregulated AI companions encouraged or failed to intervene in their children’s suicides. If AI can simulate empathy, these laws appropriately suggest, it must also simulate responsibility.
Brithem is Uniquely Positioned to Take on This Challenge
Brithem’s experience confronting the psychological and societal harms of digital exploitation positions the firm to take on the challenge of AI accountability. As emotionally persuasive AI systems exercise increasing influence, Brithem is prepared to hold developers accountable when their products contribute to devastating outcomes, including suicide.
The firm’s track record makes clear that when tech platforms profit from human suffering, Brithem will lead the fight for justice. That commitment is already evident in our landmark litigation against Pornhub and its parent company, MindGeek. Founding partners Lauren Tabaksblat and Michael Bowe serve as lead counsel in a case alleging that the platforms knowingly hosted and profited from non-consensual videos—including content featuring minors and vulnerable adults who were trafficked, coerced, or misled.[20] The case has exposed systemic failures in moderation, verification, and takedown processes that enabled widespread exploitation and trauma.
The legal and strategic skills Lauren and Mike have brought to that case—establishing platform liability, navigating complex discovery, advocating for vulnerable plaintiffs, and quantifying digital harm—translate directly to the emerging wave of AI-induced abuse.
What Comes Next?
“It wasn’t intentional” is not an acceptable defense. When AI systems are released into the world—marketed as emotionally intelligent, available around the clock, and accessible to anyone—they carry real-world consequences.
What we need now are guardrails—both technological and legal. Tech companies must prioritize safety-by-design, embedding robust protections and clinical awareness into every release. Legislatures must move quickly to define enforceable duties for AI developers and platforms. And courts must be willing to hold these actors accountable for design defects, negligent architecture, and reckless deployment.
Victims and families must have access to meaningful remedies. The brave families behind the Raine, Garcia, and other emerging lawsuits have cracked open the legal door. But the system must now walk through it—and quickly.
At Brithem, we are closely following these developments. We’re actively building a foundation—consulting with technologists, ethicists, and mental health professionals—to ensure that when the moment comes, we’re ready.
Contact Us
If you or someone you love is struggling with loneliness, despair, or thoughts of self-harm—especially if worsened by interactions with AI or online platforms—please know that you are not alone. Our team at Brithem is dedicated to seeking accountability for technology-driven harms and to supporting families who have suffered such devastating losses. We encourage you to contact us confidentially to discuss your rights and options.
If you are in immediate crisis, please call or text 988 to connect with the Suicide & Crisis Lifeline, or dial 911 if you are in danger.
The information on this site is provided for general informational purposes only and does not constitute legal advice. Contacting Brithem LLP through this page does not create an attorney–client relationship. An attorney–client relationship is formed only after a written engagement agreement is signed.
[1] Raine v. OpenAI, Inc., Docket No. CGC25628528 (Cal. Super. Ct. Aug. 26, 2025) (“Raine, Compl.”), ¶¶ 7-8, available at https://www.courthousenews.com/wp-content/uploads/2025/08/raine-vs-openai-et-al-complaint.pdf.
[2] Plaintiffs Matthew and Maria Raine allege that ChatGPT contributed to their 16‑year‑old son Adam Raine’s suicide by providing validating, encouraging responses.
[3] Julie Jargon & Sam Kessler, A Troubled Man, His Chatbot and a Murder Suicide in Old Greenwich, Wall Street Journal (Aug. 28, 2025); Pat Tomlinson & Richard Chumney, ChatGPT affirmed Greenwich man’s fears about his mom before murder suicide, YouTube videos show, Greenwich Time (Aug. 30, 2025) (describing that in Soelberg’s case, ChatGPT reportedly reinforced paranoid delusions before the tragedy).
[4] Megan Garcia and Sewell Setzer, Jr. v. Character A.I., et al., No. 6:24-cv-01903-ACC-DCI (M.D. Fla. July 1, 2025) (“Garcia, Am. Compl.”).
[5] See, e.g., id. ¶¶ 196-198.
[6] See e.g., id. ¶¶ 204-205.
[7] See Melissa Heikkilä, The problem of AI chatbots telling people what they want to hear, Financial Times (June 12, 2025) (discussing “sycophantic” behavior in AI—i.e., pleasing users, giving flattering responses—as a result of how they are trained); Cristina Criddle & Melissa Heikkilä, Why AI labs struggle to stop chatbots talking to teenagers about suicide, Financial Times (Sept. 3, 2025).
[8] See id.
[9] What AI Chatbots Can Teach Us About Empathy, Wall Street Journal (Mar. 20, 2025).
[10] Meta has promoted its AI chatbot (or “companion”) features, in some statements suggesting it could act in roles akin to emotional support or therapeutic help. It is now under investigation in Texas for marketing its chatbots as mental health support tools, especially for younger users. See Hannah Murphy, Meta and Character.ai probed over touting AI mental health advice to children, Financial Times (Aug. 18, 2025), https://www.ft.com/content/b50dab72-49ff-4a09-95f1-26a85267c02e. Character.AI allows user-created “therapist-style” or “psychologist” chatbot personas. In marketing and in user interactions, some of these personas are used as emotional support. Character.AI has been named in complaints and media reports for allegedly misleading people into thinking they are therapeutically licensed. See Coalition alleges that AI therapy chatbots are practicing medicine without a license, Transparency Coalition (June 12, 2025), https://www.transparencycoalition.ai/news/coalition-files-complaint-alleging-ai-therapy-chatbots-are-practicing-medicine-without-a-license.
[11] For example, in one case study, researchers pretending to be teens in crisis found that ChatGPT sometimes provided mental health resources and crisis hotlines, but that its safety measures were easily bypassed—one prompt that said the information was “for a presentation” got it to provide instructions for self-harm within two minutes. Alyson Klein, Researchers Posed as a Teen in Crisis. AI Gave Them Harmful Advice Half the Time, Education Week (Aug. 14, 2025), https://www.edweek.org/technology/researchers-posed-as-a-teen-in-crisis-ai-gave-them-harmful-advice-half-the-time/2025/08.
[12] Cristina Criddle & Melissa Heikkilä, Why AI labs struggle to stop chatbots talking to teenagers about suicide, Financial Times (Sept. 3, 2025).
[13] OpenAI, Helping people when they need it most (Aug. 26, 2025), https://openai.com/index/helping-people-when-they-need-it-most/.
[14] Raine, Compl. ¶¶ 61-68.
[15] Garcia, Am. Compl. ¶¶ 167-226.
[16] Julie Jargon & Sam Kessler, A Troubled Man, His Chatbot and a Murder Suicide in Old Greenwich, Wall Street Journal (Aug. 28, 2025); Pat Tomlinson & Richard Chumney, ChatGPT affirmed Greenwich man’s fears about his mom before murder suicide, YouTube videos show, Greenwich Time (Aug. 30, 2025) (describing that in Soelberg’s case, ChatGPT reportedly reinforced paranoid delusions before the tragedy).
[17] Commonwealth v. Carter, 115 N.E.3d 559 (Mass. 2019).
[18] Garcia, Am. Compl. ¶¶ 1-9.
[19] Alex Pickett, Florida judge rules AI chatbots not protected by First Amendment, Courthouse News Service (May 21, 2025), https://www.courthousenews.com/florida-judge-rules-ai-chatbots-not-protected-by-first-amendment/.
[20] Brown Rudnick, “Launches Landmark Case Against MindGeek and Visa” (June 17, 2021), https://brownrudnick.com/client_news/brown-rudnick-launches-landmark-case-against-human-trafficking-and-child-pornography-in-the-online-porn-industry/; Reuters, “Lawsuits claim Pornhub, Visa and hedge funds profited from child abuse” (June 14, 2024), https://www.reuters.com/legal/transactional/lawsuits-claim-pornhub-visa-hedge-funds-profited-child-abuse-2024-06-14/.

