AI in High School: Theory and Practice

Max Delgado

During our full-day faculty professional development last November, the Upper School dedicated our time exclusively to AI in the classroom. As a jumping-off place, we began with three things:

  • Students are using AI more than we realize: If current trends hold, AI use will become increasingly frictionless, making it more ubiquitous and passive in daily life. This means students will put less effort in but get more out. And they are using AI at rates that far exceed adult use.
  • The pace of change for AI is hard to predict (although many are trying): Despite headlines and confident predictions, the most honest posture is to admit that no one knows where AI is going. This creates a volatile and unpredictable moment for schools.
  • And lastly, to date, there is very little regulation: The sandbox our students are exploring has almost no external guardrails. Unless federal or state agencies create meaningful limits—and there is no indication this will happen soon—schools and families will be responsible for establishing boundaries around student use.

Some of us are excited by all this. Some are worried. Most are both.

These three realities create a new central question for schools like CA: How do we show up for kids in the middle of all this?

AI is a weather pattern

At the Upper School, we’ve decided to think about AI as a weather pattern. This isn’t the typical approach to educational questions. The usual temptation is to frame emergent movements as happy solutions or looming crises, but that framing flattens AI’s complexity. And AI doesn’t flatten; it expands. So we must acknowledge that AI is happening now, it’s happening to everyone, and the forecast keeps changing. And because you can’t change the weather, we must decide how to dress for the conditions, and assume the conditions will change again.

This requires letting go of old ideas.

Schools love solutions. They are comforting and familiar. And while we might wish AI fit into a solution paradigm, one in which it fixes, enhances, or advances a narrow and well-understood educational question, that’s not what’s happening here. Instead, our challenge is to develop a posture for a reality we can’t forecast. Any AI framework we adopt should meet this moment and remain flexible enough to keep us from the fragile temptation to treat any stance as permanent.

The framework we’ve adopted—and the one we’ve shared with students—is our best effort to answer this immediate and messy question: What clarity do students need today about how AI should inform their academic work, and what kinds of exploration are appropriate under the guidance of their teacher?

For now, the baseline rule is straightforward: Students should assume no AI use unless explicitly guided by their teacher through this framework.

Impossible questions

We’re also starting from the position that it’s safer to assume we’re wrong about what might unfold with AI than to assume we can predict its trajectory. Knowing you’re probably wrong offers a strange kind of freedom. As a division, it means we can do far more if we embrace that the real risk for the Upper School isn’t being wrong—it’s becoming rigid.

Many traditions—intellectual and faith-based alike—have built frameworks for approaching “impossible questions,” the kinds of questions that never settle into a final answer. Anyone who works in schools already knows this terrain. Raising kids generates an endless stream of these impossible questions; there is no single response sturdy enough to carry a child from infancy to adulthood and still meet the shifting developmental, emotional, and moment-specific needs that growing humans will require.

Design theorists Horst Rittel and Melvin Webber call these “wicked problems”: the kinds of realities that introduce complexity, contradiction, and discomfort without the promise of permanent resolution. As soon as you land on an answer, the question evolves, making your answer outdated or moot.

AI in education introduces a whole set of these impossible questions:

  • If AI can generate competent work instantly, what does learning mean?
  • If AI can pass our tests, what are those tests measuring?
  • If everything can be optimized, what do we lose in the process?
  • If AI reduces struggle, how do we teach students the value of struggle?

Impossible questions are ones you can’t “tap out” of. Unlike a job, city, or neighborhood you’ve outgrown; unlike an old habit that keeps holding you back; unlike a game you just can’t win—these aren’t the kinds of questions you can forfeit or sidestep by letting go, moving on, or moving somewhere new. Impossible questions are persistent and insistent. They follow you wherever you go.

Do the next right thing

Our approach is simple: Do the next right thing, recognizing that “right” today may not be “right” tomorrow. We are more interested in an AI posture than an AI policy. 

Part of doing the next right thing is to own that we’re not celebrating AI, nor are we catastrophizing it—that’s predictive thinking. Instead, we’re navigating it as a reality: one with little regulation, fast adoption, and an uncertain future.

And this is where Upper School faculty are uniquely equipped. We got into education because we want to be the adults in the room when kids need adults most: when things are unclear, when it’s hard to tell the difference between things falling apart and things being reorganized. Showing up for kids is the most reliable compass we have.

What this looks like in practice

Last August, we asked every teacher to experiment with AI in their classroom over the course of the 2025-2026 school year—not just to explore new possibilities for learning, but because we need adults who genuinely understand what students are experiencing. To anchor that work, we developed four quadrants that help teachers focus their efforts as they develop lesson plans:

  • Human-Centered Mindset: Creates opportunities for students to question, critique, and analyze AI’s influence. Highlights AI’s role in society, in specific disciplines, and in the learning process itself.
  • Disciplinary Application: Uses AI to illuminate or extend authentic disciplinary practices (e.g., scientific inquiry, literary analysis, historical thinking). Shows students how AI interacts with expert habits of mind.
  • Ethical and Responsible Use: Engages students with questions of fairness, bias, authorship, and academic integrity. Helps students recognize both the capabilities and limitations of AI.
  • Base Competencies: Integrates AI into cycles of drafting, feedback, and revision. Keeps learning and refinement—not shortcuts—at the center of AI use.

We’re grateful to our faculty presenters—Allie Bronson, Ben Hoffman, Stephanie Mendrala, Eric Sheldrake, Lisa Todd, and Amanda Zranchev—who offered their colleagues structured workshops that modeled how teachers maintain strong presence, ask hard questions, and help students develop judgment about when and how AI serves their learning—and when it doesn’t.

What we know for sure

There’s a lot we don’t know about what will happen with AI, but here’s what we do know:

  • Students need adults who are human-centered, thoughtful, and moral to help them navigate this moment.

  • Students make better choices when they explore AI with adults they trust.

  • Students need support, exposure, and skepticism—often all at the same time, and ideally from the same people.

AI will change scholarship. Our understanding will be outdated soon—possibly within the year. But our role remains constant: to be the steady adult voice that helps young people navigate uncertainty with wisdom, adaptability, and curiosity.

And that’s how we’ll dress for the weather.

We’re grateful for your partnership as we navigate this together.