At ISA a couple of weeks back, I facilitated a Teaching Cafe discussion on the impact of AI and ChatGPT in our classes. Thanks to the Innovative Pedagogy Conference Committee generously allocating us space, colleagues from a variety of institutions stopped by to share their thoughts and ask questions about the ethics, practical responses, and positive aspects of this technology. I’m sharing a few of these responses in case they aid others in thinking through how AI will affect their teaching, with the caveat that AI is advancing rapidly and many of the strategies we discussed will be outdated very quickly.
I’ve categorized our conversation into three themes: how to mitigate the impact of AI in our classes; ethics and academic honesty; and leveraging AI to teach.
Mitigating Impact of AI
Perhaps the largest focus of our conversation was on how to mitigate the impact of AI on our teaching–in other words, how to adjust to easily accessible chatbots without throwing out everything we do in our classes. Some techniques shared months ago, when ChatGPT first surfaced, are less useful even now with GPT-4, and will date this blog entry very quickly. Still, they may have value for someone trying to figure out what to do right now, and for stimulating thought about future responses, so I share them here:
- Move the writing process to the classroom. We typically ask students to complete writing assignments outside of class, but adopting a ‘flipped classroom’ model where students complete parts of their assignments in the classroom not only limits the ability to use AI, but lets students engage in the writing process in a more collaborative setting and with access to their instructor. This can also help you learn their voice early on, so you can see how it develops throughout the course (and if it suddenly changes due to use of AI).
- Scaffold the writing process more. Best practices in writing suggest doing this anyway: rather than assigning a paper due at the end of class, break the assignment down into its component parts on which students receive regular feedback. This can give you insight into the writing process, and you can require students to explain how they used the feedback–reducing the ability to just have AI write the entire paper. It also provides an opportunity for students to ‘show their work’ along the way, and can be combined with best practices in metacognitive work to help students think through their learning process.
- Change prompts and assignments to those that are harder for AI to do well. ChatGPT was trained on data only through 2021, so assigning prompts about more recent events, or entirely fictional scenarios, can help. Over time, prompt design will lose its power as a way to bypass AI, but as a general rule, we should spend more time crafting our prompts than we currently do.
- Try oral or visual exams and assignments. I don’t think we do enough oral exams in political science–especially given how important public speaking and political communication are to our field. An oral exam, or having students produce podcasts, recorded debates, or other audio projects, is one way to reduce the impact of AI while also working on oral communication skills. Likewise, while AI can produce visuals, asking students to create visual concept maps or memes is currently another way to reduce the impact of AI.
- Design assignments that create an intrinsic interest for students to complete. Students are more likely to turn to shortcuts like AI when they see no value or interest in completing their assignments. Creating authentic, transparent assignments that students find interesting will help supply the motivation to actually do the work.
Ethics and Academic Honesty
We also spent some time thinking through the ethics of AI use and how to respond as academics. The conversation here started with the recognition that some students will always cheat, whether with AI, by hiring someone to do their work, or through some other method of avoiding the work. How much energy do we want to give to trying to catch the students who will find some way to cheat? For some, this recognition can reduce the feeling that we must ‘do something’ to limit AI’s impact, as we must decide how to use our limited time: to catch the cheaters, or to support the non-cheaters?
For others, though, students are taking clear advantage of chatbots already, turning in work that is far beyond their previously demonstrated abilities and that bears the clear hallmarks of AI use. Such activity may require an institutional response: extending the academic honesty code to cover the use of AI, along with specifying how that extension will be enforced. For many students, though, it may be enough to simply discuss why using AI without attribution, or as a substitute for their own work, is both unethical and bad for their learning. AI use also raises clear privacy concerns, as we do not know what the companies behind the chatbots will do with the data fed into them.
There are AI detectors, but no detector can tell you definitively how, and to what extent, AI was used in creating a piece of writing; you will get both false positives and false negatives. If you are interested in detection, you can try the detectors, but also learn the hallmarks of how current AI responds to writing prompts–the generic language, the fabricated sources, the inability to develop innovative arguments or engage in serious critical analysis–and trust in your own ability to spot them.
Leveraging AI in the Classroom
Finally, we discussed ways in which AI can be a useful tool for us as instructors. It can automate many mundane tasks–the first draft of a letter of recommendation, emails, policy statements for syllabi–and help with less mundane ones, such as writing prompts, exam questions, and even lesson plans. For example, you can feed a writing prompt into ChatGPT, have it generate sample responses, and then review those responses with students. This achieves several purposes: you see the kind of response ChatGPT creates for your prompt; you show students that you know about AI and how it can be used (potentially warning off those considering cheating this way); and, most importantly, critiquing the response helps students understand the criteria you value in their writing and get a sense of what a response might look like. Using AI to help teach the writing and revision process is one way to reduce the burden of spending time on writing in a political science classroom. We also need to consider that AI is here to stay, and we should be thinking about how to teach our students to use it ethically and effectively. Denying them that chance at skill development because a few students will cheat does them a disservice.
Much more to come in the world of AI in higher education, but I hope this outline of our conversation at ISA is useful to those instructors who are struggling with how to respond.