Generative AI in the classroom

Many contemporary conversations about innovations in teaching and learning will eventually touch on the same subject: AI and its place in higher education. There have already been some great posts on ALPS covering a range of angles for exploring AI and political science education, including protecting the integrity of assessments, the choice to adopt or resist AI in the political science classroom, AI and specification grading, and AI and the pace of technological change. ALPS is not alone in this attention to AI. PS: Political Science & Politics published a piece by Ardoin and Hicks in April detailing fears of AI-driven academic misconduct and suggesting methods for successfully using AI as a learning tool. WonkHE’s coverage of AI over the last year shows the extensive and profound effects that AI-related concerns and applications are already having across the sector.

However, one thing that has been largely overlooked to date is the idea of bringing AI into the political science classroom not as a tool for research, writing essays, or summarising texts, but as a subject of study.

The generative AI tool ChatGPT has experienced a meteoric rise in adoption. It took just 2 months for ChatGPT to reach 100 million users. Comparing this to TikTok’s 9 months, Instagram’s 30 months, and Spotify’s 55 months shows just how rapid the uptake of the chatbot has been. ChatGPT is now being hyped as a replacement for Google Search, with many using it as a first port of call for answering queries. This gives ChatGPT and its developer, OpenAI, an enormous amount of political power and opens up the potential for generative AI to be a vital subject of political science study.

A recent paper by Stefan Kehlenbach in the Journal of Political Science Education, entitled ‘The Impact of Infrastructure: Considerations of Generative AI in the Classroom’, sets out this case well. One section at the end of Kehlenbach’s article stood out to me:

All elements of political science research are impacted by AI in some way, and so by asking important questions about its usage we can leverage its popularity to ask the important questions that political science wants to answer. How does the spread of AI impact lawmaking now that policy makers are beginning to use generative AI as a part of the lawmaking process? What role does AI play in national defense? Should an AI system be allowed to make decisions about who to attack, or how to deploy troops? How might this impact the existing structures of warfare and the responsibility to protect? How does the flow of rare earth minerals, microchips and other building blocks for AI impact what we think about political economy? These questions are more interesting and more important than the questions about its pedagogical effectiveness. 

Kehlenbach (2024, 8)

I found Kehlenbach’s paper extremely thought-provoking and would really recommend giving it a read. It got me thinking about how, if ChatGPT is providing answers to political questions, it might be shaping political debates in ways that are not yet fully appreciated or engaged with. It also struck me that if we do respond to Kehlenbach’s call and make ChatGPT and other forms of generative AI a subject of study, this might also provide a foundation for students to be more critical in their own use of the tool.

To start thinking about how generative AI might be used as a subject of study in the political science classroom, I went to ChatGPT and (ironically, I know) asked ‘Should people stop using generative AI because of its negative environmental impacts?’. I’ll post the lengthy response below, but it won’t surprise you to learn that the answer was a big “no”. It is a response that I can definitely see political science students getting a lot of discussion and analysis out of in a classroom-based exercise. Interestingly, it struck me as starkly similar to some of the contemporary defences that fossil fuel and other carbon-intensive companies make: that they provide a valuable service, that they boost the economy, and that it is up to governments to provide regulation.

Having these kinds of critical conversations in first-year political science classes could be a useful way to begin analysing the power of generative AI and to develop a more sceptical approach amongst students to outputs from tools like ChatGPT. Pairing that analysis with a core piece of reading on different understandings and forms of power would allow these conversations to be analytically rigorous, rather than descending into debates about whether students like or dislike using generative AI. Terry Hathaway’s ‘Lukes Reloaded: An Actor-Centred Three-Dimensional Power Framework’ is one piece of reading on power that I have found students really enjoy engaging with, and it provides a useful basis for analysis.

It’s clear that, whether you love it or hate it, the debate around AI in education is not going away anytime soon – despite the problems that plague tools like ChatGPT. By incorporating generative AI as a subject of study, we can at least prepare students to critically analyse its broader societal impacts and the power structures it influences. This approach will not only enhance their understanding of AI but also equip them with the analytical tools needed to navigate and challenge its role in contemporary politics.

Innocuous answer or example of invisible power at play?
