Should I use AI detection tools in my courses?
When we suspect that students are misusing generative AI to short-circuit their learning, our frustration can lead us to focus our energy and attention on catching them in the act. Many plagiarism checkers, like Copyleaks, have AI detection features, and those tools present themselves as a quick and easy solution to the problem. However, as is often the case, turning to technology to solve a teaching challenge can have unintended consequences. Before deciding to use AI detection, it’s important to think carefully about the potential for harm.
This guide provides an overview of how AI detection tools work, how using them can negatively affect students and your relationship with them, and alternative ways to reduce students’ misuse of AI. The final section is for instructors who conclude that the potential benefits of AI detection outweigh the potential harms; it offers suggestions for using these tools cautiously, transparently, and with students’ learning in mind.
How AI detection tools work
As you decide whether to use AI detection tools in your courses, it’s important first to know how they work. While the specific methods differ from tool to tool, at their core they work in similar ways: they rely on an AI model trained on a large body of text, along with a set of rules, to analyze the properties of a text and estimate whether it is more likely to have been AI-generated or human-generated. A recent Ars Technica article (Edwards, 2023) uses GPTZero as an example. That detector looks at a text’s “perplexity” (i.e., how commonly used the phrases in that text are) and “burstiness” (i.e., the amount of variation in sentence length and construction). If many of the sentences in a text are highly predictable (i.e., have low perplexity) and are consistent or uniform in style, GPTZero will report that the text is very likely to have been generated by AI.
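To make the idea concrete, here is a toy sketch (in Python) of the two signals described above. It is not GPTZero’s actual algorithm: real detectors compute perplexity with a trained language model, whereas this sketch stands in a crude word-repetition score for predictability and measures “burstiness” as variation in sentence length. All function names and the sample text are invented for illustration.

```python
# Toy illustration only: a crude sketch of the two signals GPTZero-style
# detectors are described as using. Real detectors score each word with a
# trained language model; here we approximate "predictability" with simple
# word repetition and "burstiness" with sentence-length variation.

import re
import statistics
from collections import Counter


def sentence_lengths(text: str) -> list[int]:
    """Split text into rough sentences and count the words in each."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]


def burstiness(text: str) -> float:
    """Higher values mean more variation in sentence length
    (more 'human-like' under this heuristic)."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)


def predictability(text: str) -> float:
    """Crude stand-in for low perplexity: the share of words that are
    repeated within the text. Real tools use a language model instead."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    repeated_tokens = sum(c for c in counts.values() if c > 1)
    return repeated_tokens / len(words)


if __name__ == "__main__":
    sample = (
        "AI detection tools estimate how predictable a text is. "
        "They also look at how much sentence length varies. "
        "Uniform, predictable prose gets flagged more often."
    )
    print(f"burstiness:     {burstiness(sample):.2f}")
    print(f"predictability: {predictability(sample):.2f}")
```

Even this toy version hints at why short, formulaic writing can look “AI-like” to such heuristics, which is exactly the problem described in the next paragraph.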
As you read the description above, you may have noticed that the characteristics an AI detector is likely to flag as AI-generated are also common in the writing of novices in a field: heavy use of common phrases and little variation in sentence style. This means that these tools are not likely to be very accurate at predicting whether undergraduate student writing was generated by a human or by AI: false positives may result because our developing students write in ways that are, in fact, similar to the way AI writes! Many higher education institutions have turned off the AI detection features in their plagiarism detection tools because of the high risk of false positives and unfounded accusations of plagiarism (Coffey, 2024). Moreover, while false positives are a potential problem across the board, they are particularly likely to affect English language learners (Liang et al., 2023).
The problem with focusing on “detection”
The reliability of AI detection tools is questionable at best, but that isn’t the only reason to be cautious about using them: we also need to be mindful of the message we send to students when we choose to use these tools. Choosing to surveil our students can create an environment of distrust, casting us in the role of police officer and our students in the role of potential criminals. When students see us as their adversaries, they are unlikely to believe that we are there to support their learning.
Alternatives to AI detection
Rather than focusing on catching and punishing students who use AI inappropriately, it’s more productive to think about how we can create conditions in our courses that reduce students’ perception that they need to rely on AI to complete their work (Lang, 2013; McCabe et al., 2017). This means giving thought to the way we design our assignments, the way we help students prepare for those assignments, and the ways students recognize and reflect on that preparation. Please visit our resource How can you design assignments so that students use AI responsibly—or don’t use it at all? for a concrete set of steps to help you plan assignments that reduce students’ perceived need to rely on AI to complete their work.
Using AI detection to support student learning
If, after careful consideration of the drawbacks and alternatives listed above, you still believe that the potential benefits of using AI detection outweigh the potential harms, you should proceed with great care. This means being transparent with students, being critical of the results AI detection tools produce, and helping students reflect on their work and their learning. These three approaches are described below, along with some examples of how you might implement them.
Be transparent about your decision
Students have a right to know that you will be using AI detection tools. Your decision should be clearly stated in any assignment where you will submit students’ work to a detection tool, and it should be part of a conversation with students as well. Be clear about why you have decided to use AI detection and how you think it will support students’ learning.
When you explain why you are using AI detection, emphasize that you’ve made this choice because you believe it will support students’ learning in the course. Explain how they will use what they learn from the reports they receive to make good decisions in their work. You may also explain that you see AI detection as a tool to help them resist the temptation to rely on AI in ways that will harm their learning. Remind them, too, that you want to ensure their grades have meaning: you want to help protect the value of their program and their college experience. Finally, explain that learning to use AI effectively also means learning NOT to use AI when it will stand in the way of developing important cognitive skills. Making good decisions about using AI is something they will do throughout their academic and professional lives, and you believe that using this tool will help them do that.
Below are some examples of language you can use to communicate your decision to students.
Example of language for an assignment where students cannot use AI at all
I know how tempting it can be to let AI do some of the work for you, but I’m asking you not to use AI for this assignment because I believe it will get in the way of the learning I want you to do. I’ve taken two steps with this assignment to help reduce that temptation. First, you will be doing a lot of preparatory work on this paper over the next three weeks of the course, both in class and by writing partial drafts. This means that by the time the final draft is due, you will have done most of the work and won’t need to rely on external tools. Second, because I want to help you monitor your own use of AI, you will be submitting this assignment in Copyleaks, which has an AI detection tool. We will both analyze the report Copyleaks provides so that we can be sure that your work on this paper demonstrates your learning.
Example of language for an assignment where students can use AI for revising and editing
I have designed this assignment to help me (and you) get a sense of what you are learning in this course, so it’s important that it gives a fair and accurate picture of what you can do within the parameters I’ve described. While it’s acceptable to use AI to help you revise and edit your final draft, the ideas in your paper must be your own. The AI checker in Copyleaks can help me (and you!) determine whether you are working within the guidance I’ve provided. We will both analyze the report Copyleaks provides so that we can be sure you’re not relying on AI in ways that interfere with the learning I want you to do in this course.
Most importantly, make sure that you aren’t just talking at students about your decision: ask them to contribute to the discussion so they can tell you what their concerns are about AI generally and about AI detection specifically. You may be surprised to find that students share many of your concerns! A conversation in which you spend a good deal of time listening to what students have to say can not only help you understand them but also help them see you as someone they can trust to support their learning.
Don’t accuse students of cheating based on reports from AI detection tools
As noted above, AI detection tools are notoriously problematic. This means that you cannot just take the reports they produce at face value and make accusations. Instead, use a report that suggests that work was generated by AI as an opportunity to have a conversation with a student about their work in your course. Visit CATLOE’s guide on what to do if you suspect cheating with AI for specific guidance on how you can respond productively in these situations.
Focus on reflection, not detection
Rather than just surveilling students or attempting to catch them cheating, have students reflect on their use of AI when they submit assignments and when they review the reports that AI detection tools provide. If students did not have the option to use AI on an assignment, have them document and explain the process they undertook to complete the assignment on their own.
Below are some reflection prompts to guide students’ thinking when they are submitting assignments where they did NOT have the option to use AI.
- Describe the steps you took to research, draft, revise, and edit your work on this assignment. How long did you spend on each stage of the process?
- What is the most important thing you learned from doing the work of this assignment?
- Where do you think your work on this assignment is strongest? What are you most proud of?
- If you had more time to work on this assignment, what would you continue to work on?
If students have the option to use AI on an assignment, have them document and explain how and when they used it. Ask students to save transcripts of their interactions with AI tools like ChatGPT so they can refer back to them. (You may even ask them to submit those as addenda to their assignments!) In addition, ask students to do some written reflection on their experience of using AI on an assignment.
Below are some reflection prompts to guide students’ thinking when they are submitting assignments where they had the option to use AI.
- Describe how you used AI on this assignment. At what stage(s) of your work did you use it?
- Did using AI help you on this assignment? If so, describe how it was helpful. If you used AI but did not find it helpful, explain where it fell short.
- Will you use AI on future assignments where you have the option to do so? Why or why not?
- What will you do differently on future assignments where you have the option to use AI?
- Are there other resources you could use instead of, or in addition to, AI that would help you succeed on future assignments?
AI detection tools, which are usually embedded in broader plagiarism prevention and detection tools, typically produce an originality or matching report that gives an overall score and highlights passages suspected to have been generated by AI. As described above, these scores and flagged passages should be read with a critical eye. Most importantly, they should not be read by you alone: after students have submitted work, give them access to those reports and have them reflect on how the detection tool characterized their work.
Below are some reflection prompts to guide students’ thinking when they analyze reports from AI detection tools.
- Does anything in this report surprise you?
- Do you think this report accurately reflects your use of AI on this assignment? Point to specific instances where the report flagged use of AI accurately and places where the report flagged use of AI inaccurately. You may also point to areas where you did use AI and Copyleaks didn’t flag it.
- Does seeing this report make you think about your use of AI in new ways? For example, does this report show that you relied on AI more than you initially thought, or less than you initially thought?
- Based on what you see in this report, what will you do differently on future assignments?
Resources
Coffey, L. (2024, February 9). Professors cautious of tools to detect AI-generated writing. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/02/09/professors-proceed-caution-using-ai
Edwards, B. (2023, July 14). Why AI writing detectors don’t work. Ars Technica. https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/
Lang, J. M. (2013). Cheating lessons: Learning from academic dishonesty. Harvard University Press.
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://doi.org/10.1016/j.patter.2023.100779
McCabe, D. L., Butterfield, K. D., & Treviño, L. K. (2017). Cheating in college: Why students do it and what educators can do about it. Johns Hopkins University Press.