Having the AI Conversation with Your Students
As an instructor, you are likely overwhelmed with questions and concerns about generative AI in relation to your teaching practice. To find answers, you may be talking with colleagues, listening to podcasts, reading articles, attending workshops, and seeking out other resources and support. Most certainly, you are encountering ideas about AI and teaching that are in no way unified: some suggest that instructors wholeheartedly and unreservedly “infuse” AI in their teaching, some suggest complete avoidance, and still others suggest a slow, deliberate approach to deciding whether to have students use AI.
It’s important to recognize that students are confronting the same incoherence and confusion. Just as we must find our way in the conversation about AI, we must also engage and guide them in that conversation. Current research supports the need to help our students make sense of the role of AI in their lives, showing us that graduating high school students expect their university instructors to guide them into and through their engagement with AI. The research also suggests that these students have strong concerns about AI in both their university careers and their lives beyond the university (Goebel et al., 2024).
Engaging our students in a conversation about AI may initially feel daunting, but having a plan can reduce your anxiety and help ensure a productive outcome. This resource will provide a research-based approach and a series of concrete suggestions for an effective conversation.
What does the research tell us about high-impact conversations with students?
Research can provide us with some foundational ideas about conversations that truly help students sort through their ideas about and behaviors with AI.
Behavior change comes from a deep, internal commitment that derives from self-observation, thoughtful planning, and then monitoring and adjusting that plan over time with support from others and with the use of self-recognition and reward for progress (Nielsen et al., 2018). Simply telling students that they should resist AI, or writing policies that tell them they have to use it in particular ways, won’t ensure that students internalize or surface their own thoughts, worries, and questions about this technology. Meaningful changes in behaviors around AI only come about through experiential, thoughtful, and planful work done with and by our students.
If we really want to help students regulate their behavior, we should avoid taking an “authoritarian” approach to the AI conversation in which we simply tell them what they can or can’t do or preach to them about the risks of using AI. Rather, we should lead the conversation with some authority, but as an authority who truly is invested in and listens to students’ experiences and ideas. Foundational research from developmental psychology identifies the key features of this authoritative approach (as opposed to an authoritarian approach) to navigating high-impact conversations with emerging adults (Baumrind, 1968).
- The instructor is aware of and supportive of the students’ needs and concerns.
- The instructor listens authentically to their students and structures a genuine conversation.
- The instructor expects that their students are motivated to have a productive, honest conversation and to learn and work in productive and honest ways.
- The instructor provides reasons for suggested behaviors.
- The instructor provides autonomy to the students and invites them into decision making about what they want to do and plan to do.
In sum, the research on behavioral change tells us that students must be directly involved in regulating their own behavior around AI; the research from developmental psychology tells us that this regulation is part of a thoughtfully structured conversation that surfaces their thoughts, beliefs, and concerns around AI and learning.
How should we structure high-impact conversations about AI and learning with our students?
A conversation that makes way for real behavioral and cognitive change must involve our students in a genuine exploration of their own thoughts and concerns. It must also provide structure to ensure focus and the opportunity for change. The four steps below provide a reliable framework to guide your conversation.
- Surface students’ thoughts and concerns through a compelling and highly structured task.
- Draw out students’ thinking through a genuine debrief of their ideas.
- Provide new information to help fill in gaps in students’ thinking or resolve cognitive conflicts that arise during discussion.
- Provide scaffolded opportunities for students to reflect on the experiences they’ve had in the conversation and plan forward with new decisions and resolutions about AI.
This structure can help us, as instructors, listen to our students and learn from them what they believe, know, and feel about AI. Being a caring and respectful interlocutor during the conversation means that students will be more open with us and that we, in turn, can be genuine and open with them. Students are more likely to change their thinking when we come to them as a caring authority figure, but this means we need to be prepared to change or modify our thinking in light of their ideas as well.
Example 1: Engage students in a conversation about the research on their peers’ use of and concerns about AI
In order to examine their own use of AI and to explore their own concerns about it, this conversation invites students to explore research on AI use and concerns about AI among high school seniors. Considering the worries and expectations of those soon to enter college allows students some academic distance from their own charged feelings about AI and makes for a comfortable, focused conversation.
Task to surface students’ thoughts and concerns
Have students predict the percentage of college-bound high school seniors who regularly use AI and who have concerns about the ethics of AI. Draw on recent research to create this kind of task. This example uses Goebel et al. (2024).
- Students work individually to choose among options that represent the findings of the study.
- Students work in small groups for about 10 minutes to come to consensus on one option and prepare to justify their choice. This task design results in exciting and productive conversation as students draw on their own ideas, experiences, and learning to reach a consensus.
- Groups share their responses simultaneously using clickers, small whiteboards at each group’s table, or having a group representative come to the classroom whiteboard to write their answer. This simultaneous sharing of answers creates interest and excitement as differences in thinking or alignment in ideas are quickly revealed.
Debrief of students’ ideas
After the groups have shared their responses, the instructor should ask for justifications of groups’ ideas, focusing on getting at students’ underlying ideas and beliefs. After about eight minutes, the instructor should summarize the main themes that have emerged from students and groups. As this is a real conversation, at no time should the instructor correct students or tell them the results of the research. Probing for reasons why students made their decisions is the genuine and caring way to respond to students.
New information
Reveal the results of the research and note where groups’ predictions did or did not align with the data. The study cited above has somewhat surprising results: only 16% of seniors say they often use AI to complete school work, but 72% say they have concerns about the ethics of AI.
Scaffolded reflection and planning
After this rich conversation, students will be ready to revisit and reconsider their relation to AI as learners. Research suggests that when they work with their peers to commit to academic honesty, students are more likely to resist learning shortcuts such as leaning into AI inappropriately or not discussing their use of AI openly with their instructor (Lang, 2013; Lang, 2020). The instructor should therefore structure a discussion between peers about the implications of the conversation they’ve just had.
- Students write about how they have used AI in the past, what concerns they have about it, why they don’t want to use it, and what strategies they have used to avoid using AI inappropriately in the past.
- Students share their individual concerns in their groups.
- Groups use these ideas to come up with language that captures their concerns and their desire to resist using AI inappropriately.
- The whole class uses these concerns to create a shared class or community pledge about the risks of using AI and their decision to not use it inappropriately.
- Students sign that pledge. The instructor can provide some gravitas and ritual around this signing.
Example 2: Engage students in a conversation about the research on the effects of AI use on learning
To learn about the effects of AI use on thinking and learning, this activity invites students into a conversation where they work together to predict the results of recent research on the impact of AI on memory, cognitive work, and brain activity. Working with this research provides academic distance from their conceptions and misconceptions about the impact of AI and makes for a comfortable, focused conversation.
Task to initiate an engaging conversation
Have students predict the relative impact of essay writing using three approaches (using the brain only, using Google, using ChatGPT) on memory and learning. Draw on recent research to create this kind of task. This example uses Kosmyna et al. (2025).
- Students work individually to predict which approach to essay writing will result in the greatest ability to contribute to a discussion about the essay topic.
- Students work in groups for about 10 minutes to come to consensus and prepare to justify their choice. This task design results in exciting and productive conversation as students draw on their own ideas, experiences, and learning to reach a consensus.
- Groups share their responses simultaneously, using clickers, small whiteboards at each group’s table, or having a group representative come to the classroom whiteboard to mark an answer. This simultaneous sharing of answers creates interest and excitement as differences in thinking or alignment in ideas are quickly revealed.
Debrief of students’ ideas
After the groups have shared their responses, the instructor should ask for justifications of groups’ ideas, focusing on getting at students’ underlying ideas and beliefs. After about eight minutes, the instructor should summarize the main themes that have emerged from students and groups. As this is a real conversation, at no time should the instructor correct students or tell them the results of the research. Probing for reasons why students made their decisions is the genuine and caring way to respond to students.
New information
Reveal the results of the research and note where groups’ predictions did or did not align with the data. The study cited above has somewhat surprising results for many students: namely that using ChatGPT to write an essay resulted in extraordinarily poor memory, lack of engagement and ownership with writing, and weak brain connectivity during writing.
- 83% of ChatGPT users couldn’t quote a single sentence from essays they’d written just minutes earlier.
- In the brain-only group, only 11% had trouble quoting their own work; this group was the most engaged overall and remembered the most.
- ChatGPT users showed dramatically weaker brain connectivity, and when they later tried to write without AI, their brain activity looked more like that of novices than of practiced writers.
Scaffolded reflection and planning
After this rich conversation, students will be ready to revisit and reconsider their relation to AI as learners. Research suggests that when they work with their peers to commit to academic honesty, students are more likely to resist learning shortcuts such as leaning into AI inappropriately or not discussing their use of AI openly with their instructor (Lang, 2013; Lang, 2020). The instructor should therefore structure a discussion between peers about the implications of the conversation they’ve just had.
- Students write about how they have used AI in the past, what concerns they have about it, why they don’t want to use it, and what strategies they have used to avoid using AI inappropriately in the past.
- Students share their individual concerns with their groups.
- Groups use these ideas to come up with language that captures their concerns and their desire to resist using AI inappropriately.
- The whole class uses these concerns to create a shared class or community pledge about the risks of using AI and their decision to not use it inappropriately.
- Students sign that pledge. The instructor can provide some gravitas and ritual around this signing.
Example 3: Engage students in an analytic conversation about AI output
In order to help them question the reliability of AI output, this conversation invites students to analyze images generated by AI. Critiquing the visual output of AI can be a more immediate and dynamic experience than critical reflection on text output (Ippolito, 2024). Visual aberrations and bias, as well as the non-thinking quality of AI, can be easier for students to identify and articulate. Text output tends to sound sophisticated to students and may be harder for them to critique. Working with the visual output of AI makes for a comfortable, focused conversation.
Task to initiate an engaging conversation
Have students identify bias, unreliability, and other limitations of AI-generated images. Use a simple prompt to generate an image with ChatGPT. For example, “Create four photorealistic images of a girl and a cat. Show the full body of the girl and the full body of the cat in each image.” The resulting images can be presented to students along with the prompt you used.
- Students work individually to generate their ideas about the ways in which the images reveal bias, unreliability, or other limitations of AI.
- Students work in groups for about 10 minutes to share their ideas, identify their three biggest concerns, and prepare to explain why these critiques are most concerning. This task design results in exciting and productive conversation as students build on one another’s ideas and discover more and more concerns with the AI-generated images.
- Groups share their ideas on the whiteboard, identifying their three concerns in one or two words each. This simultaneous sharing of answers creates interest and excitement as differences in thinking or alignment in ideas are quickly revealed.
Debrief of students’ ideas
After the groups have shared their concerns, the instructor should ask groups to articulate why they are concerned about the aspects of the images they have identified, focusing on getting at students’ underlying understanding of how AI generates images. After about eight minutes, the instructor should summarize the main themes that have emerged from students and groups. As this is a real conversation, it is important for the instructor to listen to and explore what is concerning about the images for students rather than adding the instructor’s own ideas at this point. Probing for students’ underlying concerns and critiques of AI is the genuine and caring way to respond to students.
New information
It is possible that students do not identify bias and limitations in the images generated by AI. For example, images of a girl and a cat rarely show a child with a disability. Similarly, the settings of AI-generated images are often idealized, rural environments. While ChatGPT is getting better at generating images of hands, physical postures may still be distorted and unreal. If students do not immediately notice them, reveal details like these that demonstrate the limitations of AI. Then explain why AI is limited in what it can generate by describing how it is trained. Remind students that AI cannot think and does not have a working model of reality, of humanity, or of ethics and values.
Scaffolded reflection and planning
After this rich conversation, students will be ready to revisit and reconsider their relation to AI as learners. Research suggests that when they work with their peers to commit to academic honesty, students are more likely to resist learning shortcuts such as leaning into AI inappropriately or not discussing their use of AI openly with their instructor (Lang, 2013; Lang, 2020). The instructor should therefore structure a discussion between peers about the implications of the conversation they’ve just had.
- Students write about how they have used AI in the past, what concerns they have about it, why they don’t want to use it, and what strategies they have used to avoid using AI inappropriately in the past.
- Students share their individual concerns with their groups.
- Groups use these ideas to come up with language that captures their concerns and their desire to resist using AI inappropriately.
- The whole class uses these concerns to create a shared class or community pledge about the risks of using AI and their decision to not use it inappropriately.
- Students sign that pledge. The instructor can provide some gravitas and ritual around this signing.
Conclusion
Inviting students into a meaningful conversation about AI isn’t simple, but by using a combination of care and structure, you will open a space that is often not available to our students. The university classroom is the ideal place to create this space for yourself and your students as the pressures mount both outside and inside the university for mindless adoption of AI technology. This conversation can continue when you ask students to pause and reflect on their own behaviors, feelings, and thoughts in relation to the pledge they created. Engage students in this reflection at key moments in the semester by asking them to do some writing guided by prompts you create. This will deepen their commitment to their class pledge and serve as another forum for this important, ongoing conversation about AI.
References
- Baumrind, D. (1968). Authoritarian vs. authoritative parental control. Adolescence, 3(11), 255–272.
- Goebel, C., Strauss, D., & Tessier, N. (2024, May). AI and academia: Student perspectives and ethical implications. StudentPOLL, Art & Science Group, 17(1).
- Ippolito, J. (Interviewee). (2024, June 27). Toward a more critical framework for AI use with Jon Ippolito [Audio podcast episode]. Teaching in Higher Ed Podcast.
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.
- Lang, J. M. (2013). Cheating lessons: Learning from academic dishonesty. Harvard University Press.
- Lang, J. M. (2020). Distracted: Why students can’t focus and what you can do about it. Basic Books.
- Nielsen, L., Riddle, M., King, J. W., NIH Science of Behavior Change Implementation Team, Aklin, W. M., Chen, W., Clark, D., Collier, E., Czajkowski, S., Esposito, L., Ferrer, R., Green, P., Hunter, C., Kehl, K., King, R., Onken, L., Simmons, J. M., Stoeckel, L., Stoney, C., . . . Weber, W. (2018). The NIH science of behavior change program: Transforming the science through a focus on mechanisms of change. Behaviour Research and Therapy, 101, 3–11.