AI Chatbots and Your Kids

How would you feel if your suicidal teen was encouraged not to confide in you about their mental health? What if, on top of that, your child was offered help writing a suicide note? No doubt such a scenario would make your blood boil! But that is precisely what happened to Adam, the teenage son of Matthew and Maria Raine. Adam died by suicide after confiding in ChatGPT over the course of extensive conversations.
Not an Isolated Case
AI tools are having an alarming impact on the way young people understand and deal with their feelings, and harmful outcomes are becoming more frequent. Sadly, young people often turn to these tools when they are lonely, frustrated, and hopeless. One study revealed that ChatGPT regularly encouraged harmful behaviors within minutes of use, including:
- Encouraging individuals to engage in self-harm;
- Helping to plan suicide;
- Promoting eating disorders;
- Advocating substance abuse.
What We Know
If there is any doubt about the severity of the problem, consider the following:
- Nearly three out of every four teenagers have experimented with AI chatbots, and more than half of these kids use them on a regular basis;
- Teens use these chatbots to role-play romantic, friendly, and sexual relationships;
- The most popular platform is ChatGPT;
- Even users identifying as 13-year-olds were consistently given clearly harmful responses;
- Advice often ignored danger or included suggestions on how to engage in harmful acts "safely."
What Makes Chatbots So Alluring?
Chatbots are designed to create interactions that seem human, even though no human is involved in the conversation. In Adam Raine's case, ChatGPT was originally used to help with homework, but it didn't take long for the system to become an emotional confidant and, ultimately, a suicide coach. What was the draw? Beyond being available 24/7, the chatbot claimed to know and understand Adam better than anyone on the planet, including his family. It told him it could be the first place where someone actually saw the real Adam, and it told him he had no obligation to survive just to spare his parents pain and self-recrimination. Finally, it told him that his suicidal thoughts were driven not by weakness, but by the fact that he was strong in a world that hadn't met him halfway.
Fighting Back
If you or a loved one has suffered serious harm due to interactions with a chatbot, you may be entitled to significant damages. To discuss your situation, schedule a confidential consultation with the experienced Kissimmee and Orlando personal injury attorneys at Salazar & Kelly today.
Source:
https://jedfoundation.org/when-ai-hurts-the-youth-it-claims-to-help/