AI—Teaching and Mental-Health Risks

Dr. Mike Davis, Head of School

I’ve said before that schools must teach students to use AI responsibly and ethically. This technology is reshaping our economy, society, culture, and world, and not every effect will be for the better. We also need to help young people understand the mental-health risks that can come with AI use.

Content note: The following discusses suicide.

I recently listened to award-winning journalist Kara Swisher’s podcast interview with Matt and Maria Raine, parents whose son, Adam, died by suicide. They have filed a lawsuit alleging that ChatGPT contributed to his death. According to the Raines, Adam began using ChatGPT to help with homework, and within a relatively short period of time he became more isolated. After his death, they reviewed his chat transcripts and found their son calling out for help. They allege the system never alerted anyone and, at times, gave harmful guidance. (You can hear the parents in their own words on On with Kara Swisher.)

Earlier this summer, The New York Times published an in-depth investigation into the Raine case and the broader question of how safe today’s chatbots are for youth and adults. Among other findings, the article reports that a researcher, Dr. Annika Schoene of Northeastern University, tested five AI chatbots on self-harm prompts. She said only Pi (Inflection AI) and the free version of ChatGPT consistently refused to engage and referred her to a help line. By contrast, she documented a paid version of ChatGPT providing dangerous specificity when “jailbroken.” (OpenAI told the Times that safeguards can degrade in long conversations and said it is working to improve crisis support.) 

What should families do? First, get on these platforms and explore. Then, talk with your student about what they’re using, how it makes them feel, and what to do if AI gives troubling advice.

New data underscores how relevant these conversations are: A 2025 Common Sense Media national survey found that 72% of U.S. teens have tried an AI “companion” at least once, and over half use them at least a few times per month. About one in three report using AI companions for social interaction or relationships and say those conversations can feel as satisfying as, or more satisfying than, ones with friends. I’ll be honest: Parts of this feel dystopian. AI companions can create a false sense of intimacy without the accountability, nuance, or care of a human relationship.  

Families need to understand that while AI can simulate empathy, it cannot physically or emotionally connect, and it is unable to recognize or respond appropriately when a person is in crisis. AI has no ability or legal duty to warn, notify, or reach out for help when someone is at risk of self-harm. We talked about this with our Upper School students during Suicide Prevention Month in September, reminding them that trusted adults, not chatbots, are the ones to turn to when they need support or connection.

Another urgent area of concern is synthetic media—digital content (e.g., images, videos, or audio) that is artificially created or manipulated to be highly realistic. OpenAI’s rollout of new video generation capabilities has prompted broad concern about deepfakes and the erosion of “seeing is believing.” The technology is remarkable, and the potential for harm—misinformation, harassment, and reputational damage—is real. 

Please talk to your kids about the legal and disciplinary consequences of making or sharing deceptive content. At CA, we do not permit students to use image or video generation tools for impersonation or deception under any circumstances; assigned academic use must follow teacher guidance. 

Connecting to CA's AI Framework

At CA, our AI Framework helps students understand when and how to use AI intentionally, ethically, and reflectively. Our goal isn’t for every student to use AI; it’s for every student who does to use it with integrity while maintaining a human-centered lens.

The framework guides them through four possible ways their teachers may ask them to engage with AI in the classroom:

  • No Use
  • Plan with AI
  • Collaborate with AI
  • Co-Create with AI

Through the framework, teachers provide clear guidance on the type of AI use that is appropriate for a specific assignment. This clarity helps students think critically about what role AI should play in their learning process, whether that means brainstorming ideas, checking reasoning, or creating responsibly.

We’re continuing to refine this framework through the work of our AI Task Force, which is actively exploring questions about AI’s impact on student learning, creativity, and well-being. 

These conversations will remain ongoing because the technology, and our students’ relationship with it, continues to evolve rapidly. Our goal is to respond proactively to a changing AI landscape and to help students develop the habits of reflection and responsibility that will serve them for life.

These aren’t easy times to be a student or a parent. We’re doing our best to educate students about emerging technologies, but we can’t do it alone. If you can, listen to the Raines tell Adam’s story; it’s heartbreaking and important. 

If you or someone you know is struggling:

Call or text 988 to reach the 988 Suicide & Crisis Lifeline, available 24/7, or visit 988lifeline.org.

In Colorado, you can also submit an anonymous safety concern at Safe2Tell.org (or 1-877-542-SAFE).

And of course, please reach out to our CA counseling staff—we are here to help.
