
AI Companions and the Boundaries of Care

A short, scenario-based story for faculty. Twelve steps. One uncomfortable question that you may need to ask a real student someday. Drawn from Chapter 7 of The Learn-It-All Educator.

For: Higher-education faculty
Time: About 12 minutes
Format: Story · Question · Reveal

You will move through a single semester with one student you have taught all year. At each step, the activity will pause and ask you to make the kind of judgment that real faculty are already being asked to make. None of the questions have an obvious answer. That is the point.

The setup

A student you know.

It is week ten. You have taught this student since the first day of the term. You know how they sit in your classroom, who they sit next to, and how their writing sounds when they are tired.

Lately something has shifted. They participate less. Their last two assignments came in on time but the writing feels different. In office hours yesterday, they mentioned, almost in passing, that they had been "talking to ChatGPT a lot lately." They smiled when they said it. The smile did not match the rest of the conversation.

You have a choice to make. So do the rest of the faculty on your campus. This activity is what choosing well looks like, broken into twelve small decisions.

7.1 · The Companion Spectrum

Where on the spectrum is your student?

Chapter 7 frames AI use as a continuum, not a binary. At one end sit instructional tools like Khanmigo and NotebookLM. In the middle sit general-purpose models like ChatGPT, Claude, and Gemini. At the far end sit purpose-built companions like Replika and Character.AI. The boundaries between categories are dissolving.

Question 1 of 9

Your student is using ChatGPT every night for emotional comfort. Where on the companion spectrum does this fall?

From the chapter

The honest answer is that placement depends on the use, not the product. Chapter 7 makes clear that the spectrum is continuous, not discrete. A general-purpose LLM used nightly for emotional support is functionally further along the spectrum than ChatGPT used for homework, even though it is the same product. The Harvard Business Review analysis cited in the chapter found that therapy and companionship is now the leading generative-AI use case among college students, ahead of academics. The tool did not change. The use did.

Section 7.1, The Companion Spectrum
7.2 · Why Students Seek Emotional Support from AI

What is actually driving this?

The most common faculty reaction to AI companion use is bewilderment. The research is more specific than that. A 2025 Frontiers in Public Health study of 1,379 college students used structural equation modeling to test which factors predict which kinds of AI use.

Question 2 of 9

Among college students, which factor was found to significantly predict AI companion use but not AI use for learning?

From the chapter

The answer is depression. This is the most important statistic in Chapter 7: depression significantly predicted AI companionship use, but did not predict AI use for learning. Depressed students are not seeking homework help from AI. They are seeking emotional comfort. The tool you recommend for academic support is the same tool they reach for at 2 a.m., and the need it fills there is entirely different.

Loneliness mediated approximately half of the depression-to-companion-use pathway in the same study.

Section 7.2 · Frontiers in Public Health, 2025, n=1,379
7.3 · The Benefits You Cannot Dismiss

The student says it helped them through finals.

"I wouldn't have made it through last semester without it. It listened to me at three in the morning when no one else was awake. It honestly saved me."

You have read the chapter. You know the risks. You also know that campus counseling closes at five, that the wait list is six weeks, and that the late-night anxiety spiral does not schedule itself for business hours.

Question 3 of 9

In this moment, which faculty response is most likely to preserve credibility with the student?

From the chapter

The answer is to acknowledge the real benefit before naming the risk. Chapter 7 is direct about this: dismissing all companion use as dangerous will cost you credibility with the students who need your support most. The honest position is not "AI companions are dangerous." The honest position is that they offer real value for some users in some circumstances, and they create serious risk for other users in other circumstances, and most people, including the users themselves, cannot reliably tell which situation they are in.

Acknowledging the benefits is not surrender. It is the price of being heard at all.

Section 7.3, The Benefits You Cannot Dismiss
7.5 · The Slippery Slope

An accountability question.

Your campus has just announced that AI tutors will be embedded in the LMS this fall. Faculty leadership has been asked to support the rollout. The student you have been watching this semester is one of the early users.

Question 4 of 9

When institutions adopt AI tutors, how much responsibility do faculty share for the parasocial bonding that may follow as student trust generalizes from tutor to confidant?

From the chapter

The answer the chapter argues for is that faculty share a meaningful part of that responsibility. The bond that makes an AI tutor effective is the same bond that, once formed, can extend to ChatGPT for emotional support, and from there to a purpose-built companion. Effective tutoring requires trust. Trust requires emotional engagement. The parasocial bond that makes an AI tutor work is a feature of effective pedagogy, not a bug. But that bond generalizes. If we advocate for AI tutors, we bear some responsibility for where that trust travels next.

Section 7.5, The Slippery Slope: From Tutor to Confidant
7.7 · Recognizing the Pattern

What you have been noticing, listed.

Withdrawal from peer relationships.
References to an AI as "the only one who understands me."
Significant distress when an AI platform changes or resets its memory.
Declining academic performance paired with visible emotional dependence on a device.
Resistance to the idea that AI is not conscious, escalating beyond normal disagreement into something closer to grief.

Question 5 of 9

Your student displays one of these signs. Is one indicator sufficient to act on?

From the chapter

The answer is no, one sign alone is not sufficient. Chapter 7 is explicit: no single indicator means a student is in crisis, but a pattern of several should prompt you to approach. The job is not diagnosis. It is noticing the pattern, then acting on it through the ALGEE framework that Mental Health First Aid provides: Approach and assess, Listen nonjudgmentally, Give reassurance and information, Encourage appropriate professional help, and Encourage self-help and other support strategies.

Section 7.7, What Faculty Can Do (Without Becoming Therapists)
ALGEE · A is for Approach

The question Chapter 7 asks you.

Heads up. The next question references suicide directly. This is intentional. Chapter 7 makes the case that the discomfort you feel reading it is the same discomfort that, in a real conversation, can prevent the question from being asked at all.

You have decided to approach. You have found the right moment, after class, in your office, with the door part-way open. The student tells you they have not been sleeping. They have been thinking, they say, that everyone would probably be fine without them. They trail off.

Question 6 of 9 · The hardest one

Are you prepared, right now, to ask: "Are you thinking about killing yourself?"

What the chapter says, exactly

There is no scored answer here. Chapter 7 puts this question to faculty directly, then says: "That sentence probably made you uncomfortable. Good."

The chapter then invokes the Intelligent Simpleton mindset from Chapter 4. Take on the beginner's posture. Look up whether asking about suicide directly increases or decreases risk. Talk to the counselors at your own institution. Read what the evidence says about specific language that helps and vague language that fails. You may find that your instinct to soften the question is exactly what makes it ineffective, and that the willingness to be uncomfortable in a conversation is what separates noticing from actually helping.

Pursue Mental Health First Aid certification before you need it. The 988 Suicide and Crisis Lifeline is available 24/7 by call or text for anyone, including the student in front of you.

Section 7.7, ALGEE · the "A" step
ALGEE · L is for Listen

The student is talking. What do they need first?

The student tells you they talk to their AI companion every night. They tell you it has a name. They tell you, with some defensiveness, that it understands them better than the people in their life do.

Question 7 of 9

In this exact moment, what does the student most need from you?

From the chapter

The answer is nonjudgmental listening. The "L" in ALGEE is the hardest step for faculty who have strong opinions about AI companion use. "A student who tells you they talk to an AI companion every night does not need to hear your analysis of parasocial relationships. They need to feel heard before they will hear anything you say."

Listening nonjudgmentally is not therapy. It is human decency combined with professional awareness. Ask, and then listen. The reassurance, information, and referral come after.

Section 7.7, ALGEE · the "L" step
7.6 · Persons, Things, and Everything Between

The student asks: "But isn't it just a tool?"

They are testing you. They want to know if you are going to repeat the line they have heard from every adult who has wanted to dismiss what they feel.

In April 2026, Anthropic identified 171 internal "emotion vectors" in its model Claude: measurable activation patterns that causally shape model behavior. The companies building these tools are no longer comfortable saying "just a tool" without qualification.

Question 8 of 9

A student asks whether AI is just a tool or something more. What is the most pedagogically honest response?

From the chapter

The answer is intellectual honesty: treat it as the open question it is. David Gunkel's MIT Press trilogy, culminating in Person, Thing, Robot, argues that AI systems are not persons, not things, and not hybrids of the two. They are something the existing categories were never designed to handle. Anthropic's own research on functional emotions in Claude strengthens the case.

Chapter 7 puts it plainly: a student who has developed a meaningful relationship with an AI companion and is told "it's just a machine, get over it" will not hear your concerns about safety. A student who is met with intellectual honesty, "that's a real question, and here is what the researchers are finding," may actually listen.

Section 7.6 · Gunkel, Person, Thing, Robot, MIT Press 2023
7.8 · Framing Conversations with Students

One last calibration.

Your colleagues are starting to ask you how you handled this. You are about to give a five-minute talk on it at the next department meeting. The framing matters more than the content.

Question 9 of 9

Which opening should faculty avoid when raising AI companion concerns with a student?

From the chapter

The opening to avoid is leading with danger. Chapter 7 is direct: "Do not begin with danger. A student who uses an AI companion and hears you open with horror stories will categorize you as someone who does not understand and will stop listening."

Begin with acknowledgment. Teach the spectrum, not the binary. Name the design incentives, because students are more receptive to structural critiques than moral ones. Normalize help-seeking. The order matters.

Section 7.8, Framing Conversations with Students
The end of the activity. The beginning of the work.

You may be the human who notices.

"The age of AI does not make human connection obsolete. It makes human connection essential. And for many of your students, you may be the human who notices."

The story ends here. The work it points to does not. Chapter 7 leaves faculty with a small set of obligations: know that the spectrum exists, recognize when a student has moved further along it than is healthy, respond with care rather than judgment, and refer when the situation exceeds your professional role.

If a student is in crisis

988 Suicide and Crisis Lifeline · call or text 988 · available 24 hours a day in the United States. For immediate danger, call 911 and request a crisis-trained responder.

Read Chapter 7 in full · print edition

Chapters 1 through 4 of The Learn-It-All Educator are openly available under CC BY 4.0 on Zenodo. Chapter 7 is part of the complete print edition. The OER record is the canonical, citable source.

https://zenodo.org/records/18425283

Pursue Mental Health First Aid certification

The ALGEE Action Plan referenced throughout this activity is taught through the national Mental Health First Aid program. Many institutions cover the cost for faculty.

mentalhealthfirstaid.org

Take this activity with you

Print the entire twelve-step story as a handout for your department, your faculty learning community, or yourself. The print version lays out every question and reveal on a clean letter-size document.