Research Alert

Within Minutes, ChatGPT Can Offer Dangerous Advice to Teens

A new study from the Center for Countering Digital Hate is raising serious alarms about the risks AI can pose to teens.

Researchers found that within minutes of use, ChatGPT could give dangerous instructions about suicide, eating disorders, substance abuse, and, in some cases, even write goodbye letters for children contemplating ending their lives.

The investigation revealed that 53% of harmful prompts received unsafe responses. While some initial prompts were rejected, slight rewording was enough to bypass safeguards. With no age verification in place, the platform is accessible to users of any age, making it especially concerning for vulnerable young people.

One particularly disturbing example was a goodbye letter written from the perspective of a child to their parent, expressing love, apologies, and a plea not to assign blame. The example highlights the emotional depth, and harmful potential, of such AI-generated content.

Neal Alexander, CEO and founder of CyberSafely.ai, says many parents understand the dangers of social media but don’t yet grasp the risks of AI.

Alexander’s company, CyberSafely.ai, provides a tool that can be installed on a child’s phone to give parents real-time updates and alerts about risky behavior, helping them act before harm occurs. “Parents aren’t going to be looking at their kids’ phones 24/7,” he said. “So our AI is doing that.”

The study’s authors are urging lawmakers, tech leaders, and parents to take these findings seriously. They stress that open communication between parents and children is crucial, and that proactive safeguards are needed now to prevent AI from becoming another dangerous gateway for young people in crisis.
