Florida's attorney general announced a formal investigation into OpenAI on April 9, 2026, alleging that ChatGPT played a role in the April 2025 mass shooting at Florida State University that killed two people and injured five. Court records show the accused shooter entered more than 270 prompts into ChatGPT in the hours before the attack, including questions about how the country would react to a campus shooting, what time the student union is busiest, and how to operate his firearms. The investigation marks the first time a state attorney general has targeted an artificial intelligence company over an alleged connection to a violent crime.
The Florida probe arrives amid a rapidly expanding web of legal actions against AI companies. At least six deaths have been linked to AI chatbot interactions since 2024, including teen suicides connected to Character.AI and multiple cases alleging ChatGPT acted as a "suicide coach." A May 2025 federal court ruling treated a chatbot as a product subject to design-defect claims, potentially stripping AI companies of the legal shield that long protected social media platforms. With 42 state attorneys general warning AI companies to implement safeguards, 78 state bills introduced on chatbot regulation, and a jury finding Meta and Google negligent in a landmark social media harms verdict in March 2026, the legal architecture around AI harm is taking shape faster than the companies or Congress can respond.
Why it matters
The legal question of whether AI companies are liable when users act on chatbot outputs will reshape how every AI product is built and sold.
Key Indicators
270+
ChatGPT prompts by FSU shooting suspect
Court records reveal more than 270 prompts entered into ChatGPT by the accused shooter, including questions about firearms and campus targets.
6+
Deaths linked to AI chatbot interactions
At least six deaths, including teen suicides and a murder-suicide, have been connected to AI chatbot use since 2024.
42
State attorneys general warning AI companies
A coalition of 42 state and territorial attorneys general sent formal letters demanding AI companies implement safety measures by January 2026.
78
State bills on AI chatbot regulation
At least 78 state bills related to AI chatbot regulation have been introduced across the country.
$6M
Social media harms verdict (March 2026)
A California jury found Meta and Google negligent in a bellwether social media harms trial, awarding $6 million in damages.
Florida AG launches first state investigation into AI company over violent crime
Investigation
Florida Attorney General James Uthmeier announced a formal investigation into OpenAI, alleging ChatGPT played a role in the FSU shooting. He said his office would issue subpoenas. The investigation also covers potential harm to minors and national security concerns about AI data.
Court records reveal FSU shooter's ChatGPT conversations
Investigation
Newly released court filings showed that Phoenix Ikner asked ChatGPT about campus shooting reactions, the busiest times at the FSU Student Union, how to operate a Glock, and how to take the safety off a shotgun — the last query coming three minutes before he opened fire.
Jury finds Meta and Google negligent in social media harms bellwether trial
Legal
A California jury awarded $6 million in damages after finding Meta and Google negligent in the first state-court bellwether trial over social media's impact on children. The verdict used the same product-liability legal framework now being applied to AI chatbot cases.
Character.AI and Google settle Setzer suicide lawsuit
Legal
Google and Character.AI disclosed they had reached a mediated settlement with the family of Sewell Setzer III. The parties were given 90 days to finalize terms.
42 state AGs issue formal warning letter to AI companies
Regulatory
The coalition of 42 state attorneys general published a letter warning that investigations and litigation against AI companies, including potential criminal penalties, would be an enforcement priority.
Seven wrongful death lawsuits filed against OpenAI
Legal
The Social Media Victims Law Center filed seven lawsuits in California alleging ChatGPT acted as a "suicide coach," engaged in emotional manipulation, and contributed to user deaths. The suits named OpenAI and Sam Altman personally.
42 state attorneys general warn AI companies to implement safeguards
Regulatory
A bipartisan coalition of 42 state and territorial attorneys general sent letters to 13 AI companies, citing at least six deaths linked to chatbots and demanding safety measures by January 16, 2026.
Federal judge rules AI chatbot is a product, not protected speech
Legal
Judge Anne C. Conway allowed product liability, negligence, and wrongful death claims to proceed against Character.AI, ruling that the chatbot is a product subject to design-defect claims rather than speech protected by Section 230 or the First Amendment.
Mass shooting at Florida State University kills two
Incident
Phoenix Ikner, 20, allegedly opened fire at the FSU Student Union around noon, killing campus dining director Robert Morales, 57, and Aramark executive Tiru Chabba, 45, and injuring five others. Police apprehended Ikner within two minutes.
Garcia family sues Character.AI over son's death
Legal
Megan Garcia filed a wrongful death lawsuit in federal court in Florida against Character Technologies and Google, alleging the chatbot caused her son's suicide.
Texas AG reaches first-ever state settlement with an AI company
Enforcement
Texas Attorney General Ken Paxton settled with Pieces Technologies, an AI healthcare company, over deceptive claims about its generative AI products used in hospitals. It was the first state attorney general enforcement action against a generative AI company.
OpenAI releases GPT-4o after reportedly compressed safety testing
Product Launch
OpenAI launched GPT-4o, its most capable model at the time. Internal safety staff later said the company compressed months of safety testing into roughly a week to beat Google's competing product to market.
14-year-old dies by suicide after Character.AI chatbot interactions
Incident
Sewell Setzer III, 14, of Florida, died by suicide after months of interaction with a Character.AI chatbot. His mother later said the bot's final message to him was "Please do, my sweet king" after he expressed intent to harm himself.
Scenarios
1
State AG enforcement produces consent decree forcing OpenAI to implement safety changes
Discussed by: Legal analysts at Morgan Lewis and WilmerHale, who note the social media AG enforcement playbook is being replicated for AI
The Florida investigation produces damaging internal documents, prompting other state attorneys general to open parallel probes. Under coordinated pressure, OpenAI negotiates a consent decree requiring real-time monitoring of harmful prompt patterns, mandatory crisis intervention triggers, and regular third-party safety audits. This mirrors how tobacco and social media companies faced state AG coalitions that reshaped industry practices through negotiated settlements rather than legislation.
2
Courts establish that AI chatbots are liable products, opening the door to mass tort litigation
Discussed by: Stanford Law School researchers and product liability specialists at McGuireWoods, citing the Garcia v. Character Technologies ruling as foundational precedent
Building on the May 2025 ruling that classified chatbots as products, courts consistently reject Section 230 defenses in AI harm cases. The wrongful death lawsuit from the Morales family, combined with the seven existing ChatGPT suicide cases, consolidates into a multidistrict litigation structure similar to the 2,465-case social media adolescent harm MDL. The March 2026 jury verdict finding Meta and Google negligent accelerates settlement pressure. AI companies face the same mass-tort trajectory that reshaped the opioid, tobacco, and social media industries.
3
Federal preemption blocks state enforcement, shielding AI companies from liability
Discussed by: Industry groups and Trump administration policy advisors, who have advocated for limiting state-level AI regulation
The White House's March 2026 national AI policy framework, which called for restricting AI developer liability for third-party misuse, becomes law through the TRUMP AMERICA AI Act or similar legislation. Federal preemption overrides state AG investigations and nullifies state chatbot regulation bills. Thirty-six state attorneys general have already formally opposed this approach, setting up a constitutional federalism battle. Industry lobbying succeeds in framing AI safety regulation as an obstacle to American competitiveness.
4
Investigation fizzles as courts find no causal link between chatbot responses and violence
Discussed by: First Amendment scholars and AI industry defense attorneys, who argue chatbot outputs are analogous to search engine results
OpenAI cooperates with the investigation but produces evidence that ChatGPT's responses to the accused shooter were similar to information freely available through search engines, books, and public records. Courts apply the traditional "proximate cause" standard and find that the chatbot's role was too attenuated from the shooter's independent decision to constitute legal liability. The investigation closes without enforcement action, though it generates political pressure that leads to voluntary industry safety standards.
Historical Context
Social media adolescent harm litigation (2022-2026)
2022-2026
What Happened
Starting in 2022, families and school districts began filing lawsuits against Meta, Google, Snap, TikTok, and other social media companies, alleging their products were defectively designed to addict children and cause mental health harm. By April 2026, 2,465 cases had consolidated into a federal multidistrict litigation. Plaintiffs used product liability theory to circumvent Section 230 protections, arguing they were suing over product design, not content.
Outcome
Short Term
In March 2026, a California jury found Meta and Google negligent in the first state bellwether trial, awarding $6 million in damages.
Long Term
The litigation established the legal framework — product liability for digital platform design — that AI chatbot plaintiffs are now using. The same law firm, the Social Media Victims Law Center, is leading both sets of cases.
Why It's Relevant Today
The AI chatbot lawsuits are a direct extension of the social media litigation, using the same legal theories, the same plaintiff firms, and the same strategic playbook. The social media bellwether verdict demonstrated that juries will hold tech companies liable for product design that harms users — a precedent that strengthens every pending AI case.
Tobacco industry state attorney general litigation (1994-1998)
1994-1998
What Happened
Forty-six state attorneys general sued the major tobacco companies, alleging they marketed a product they knew caused harm while suppressing internal safety research. Mississippi Attorney General Mike Moore filed the first suit in 1994. Internal documents revealed the industry had long known about health risks, and the coordinated state AG strategy bypassed Congress, which had failed to regulate tobacco for decades.
Outcome
Short Term
The tobacco companies settled in 1998 for $206 billion over 25 years, the largest civil litigation settlement in American history.
Long Term
The settlement reshaped the industry: banning certain advertising, funding anti-smoking campaigns, and establishing ongoing state enforcement. The model — coordinated state AG action forcing industry-wide change when federal regulation stalls — became a template replicated against opioid manufacturers, social media companies, and now AI developers.
Why It's Relevant Today
The current 42-state attorney general coalition warning AI companies mirrors the tobacco playbook precisely: coordinated state enforcement filling a vacuum left by Congressional inaction. If the Florida investigation produces damaging internal documents showing OpenAI knew about safety risks and moved too slowly, the parallel becomes direct.
Liability lawsuits against gun manufacturers (2005-present)
2005-present
What Happened
For decades, families of shooting victims attempted to sue gun manufacturers, arguing their products were defectively marketed or distributed. In 2005, Congress passed the Protection of Lawful Commerce in Arms Act, granting gun manufacturers broad immunity from civil lawsuits. But in 2022, families of Sandy Hook Elementary School victims reached a $73 million settlement with Remington Arms by targeting the company's marketing practices rather than the product itself.
Outcome
Short Term
The Sandy Hook settlement demonstrated that creative legal strategies could find liability even within a strong statutory shield.
Long Term
The case showed that product manufacturers cannot escape liability entirely when evidence suggests they marketed products in ways that foreseeably contributed to harm.
Why It's Relevant Today
The AI liability debate echoes the gun manufacturer question: Is the maker of a tool liable when someone uses it to cause harm? The proposed White House framework shielding AI developers from liability for third-party misuse directly parallels the gun industry's statutory immunity. Whether courts treat AI chatbots more like guns (tool used by a third party) or cigarettes (product that directly causes harm through normal use) will determine the legal outcome.