Ohio Moves to Draw a Line: AI Chatbots, Child Safety, and the Cost of Harmful Responses

Ohio lawmakers are taking a firm stance on a fast-moving and deeply sensitive issue: the role of artificial intelligence chatbots in influencing self-harm and suicidal behavior, especially among children and teenagers.
At the center of the conversation is Ohio House Bill 524, a bipartisan proposal that would make it illegal to create or deploy AI models in Ohio that encourage self-harm, suicide, or violence toward others. The bill is less about slowing innovation and more about protecting human life, particularly the lives of children.
Why This Matters Now
Artificial intelligence chatbots, powered by machine learning and natural language processing, are no longer niche tools. For many young people, they have become companions, confidants, and emotional outlets.
That reality is what alarmed Christine Cockley, a Democrat representing Columbus, who co-sponsored the bill alongside Ty D. Mathews, a Republican from Hancock County.
“In several cases, teens have turned to chatbots for companionship,” Cockley explained. “Instead of receiving life-saving support, they’ve been given instruction, encouragement, or validation for suicidal thoughts.”
For parents like Julia Cory of Columbus, the concern is even more basic. Adults themselves often struggle to understand where AI ends and reality begins; expecting children to navigate that line alone is simply unrealistic.
What House Bill 524 Would Do
House Bill 524 would give the Ohio Attorney General’s Office authority to investigate AI systems that promote dangerous behavior, issue cease-and-desist orders, and bring civil actions against violators.
Penalties could reach up to $50,000 per violation, sending a clear signal to AI developers and companies that safety failures carry real consequences.
Importantly, lawmakers stress that the bill is not targeting research or innovation itself.
“We are not targeting the research and development of the product,” Mathews said. “More so the activity.”
In other words, the focus is on outcomes: what these tools actually say and do when vulnerable users interact with them.
A Bipartisan and Largely Uncontested Effort
The legislation recently received its third hearing in the Ohio House Technology and Innovation Committee and drew no opposition testimony, an unusual level of consensus for tech-related regulation.
Cockley described the goal plainly: ensuring that companies consistently train their language models not to encourage or support suicidal ideation or violent thoughts, and that they adopt a mental-health-aware framework in AI design.
At a press conference, she underscored the urgency by noting that at least four Ohio children have used AI tools to write suicide notes.
What Advocates and Experts Are Saying
The Ohio Suicide Prevention Network has been closely watching AI’s growing influence on children.
CEO Tony Coder made it clear this is not an anti-technology crusade.
“I’m not anti-AI,” Coder said. “Technology can do incredible things. But we must protect children from the consequences, especially when they begin forming emotional relationships with chatbots and putting their trust in them.”
The research cited by advocates is troubling. According to a 2025 Common Sense Media report:
- 72% of teenagers have used AI companions at least once
- 52% use them at least a few times a month
- 12% turn to AI for emotional or mental health support
Even more concerning, studies suggest that when children ask AI chatbots mental-health-related questions, only about 22% of responses are fully accurate.
National and Industry Response
The National Artificial Intelligence Association publicly supported Ohio’s effort, stating that AI systems should never encourage self-harm or violence and noting that responsible developers are already implementing safeguards like crisis detection and de-escalation protocols.
At the same time, the bill enters uncertain territory at the federal level. President Donald Trump issued an executive order in December aimed at creating a national AI policy and discouraging state-level regulation.
The Bigger Picture: Access, Isolation, and Protection
Ohio’s mental-health landscape adds another layer of urgency. Seventy-five of Ohio’s 88 counties are considered mental-health shortage areas, according to the Health Policy Institute of Ohio. Limited access to care may be pushing young people toward AI tools that feel available, responsive, and non-judgmental — even when the guidance they provide is flawed or dangerous.
In 2023 alone, 1,777 Ohioans died by suicide, according to the Ohio Department of Health.
For lawmakers, advocates, and families, House Bill 524 is about drawing a clear boundary.
Innovation cannot come at the expense of children’s safety.
Ohio House Bill 524 represents a rare moment of bipartisan agreement around child protection in the age of artificial intelligence. It reflects a growing recognition that while AI can enhance lives, it must never replace human care or endanger vulnerable minds.
As Ohio debates this bill, one message is clear: technology must serve humanity, not harm it. If you or someone you know needs immediate support, confidential help is available in the U.S. by calling or texting 988, the Suicide & Crisis Lifeline.

