Generative AI in the classroom

Many contemporary conversations about innovations in teaching and learning will eventually touch on the same subject – AI and its place in higher education. There have already been some great posts on ALPS covering a range of angles for exploring AI and political science education, including protecting the integrity of assessments, the choice to adopt or resist AI in the political science classroom, AI and specifications grading, and AI and the pace of technological change. ALPS is not alone in this attention to AI. PS: Political Science & Politics published a piece by Ardoin and Hicks in April detailing fears of AI-driven academic misconduct and suggesting methods for successfully using AI as a learning tool. Looking at WonkHE’s coverage of AI over the last year shows the extensive and profound effects that AI-related concerns and applications are already having in the sector.

However, one thing that has been largely overlooked to date is the idea of bringing AI into the political science classroom not as a tool for research, for writing essays, or for summarising texts, but as a subject of study.

The generative AI tool ChatGPT has seen a meteoric rise in adoption. It took just 2 months for ChatGPT to reach 100 million users. Comparing this to TikTok’s 9 months, Instagram’s 30 months, and Spotify’s 55 months shows just how rapid the uptake of the chatbot has been. ChatGPT is now being hyped as a replacement for Google Search, with many using it as a first port of call for answering queries. This gives ChatGPT and its developer, OpenAI, an enormous amount of political power and opens up the potential for generative AI to be a vital subject of political science study.

A recent paper by Stefan Kehlenbach, published in the Journal of Political Science Education and entitled ‘The Impact of Infrastructure: Considerations of Generative AI in the Classroom’, sets out this case well. One section at the end of Kehlenbach’s article stood out to me:

All elements of political science research are impacted by AI in some way, and so by asking important questions about its usage we can leverage its popularity to ask the important questions that political science wants to answer. How does the spread of AI impact lawmaking now that policy makers are beginning to use generative AI as a part of the lawmaking process? What role does AI play in national defense? Should an AI system be allowed to make decisions about who to attack, or how to deploy troops? How might this impact the existing structures of warfare and the responsibility to protect? How does the flow of rare earth minerals, microchips and other building blocks for AI impact what we think about political economy? These questions are more interesting and more important than the questions about its pedagogical effectiveness. 

Kehlenbach (2024, 8)

I found Kehlenbach’s paper extremely thought-provoking and would really recommend giving it a read. It got me thinking about how, if ChatGPT is providing answers to political questions, it might be shaping political debates in a way that is not yet fully appreciated or engaged with. It also struck me that if we do respond to Kehlenbach’s call and make ChatGPT and other forms of generative AI a subject of study, this might also provide a foundation for students to be more critical in their own use of the tool.

To start thinking about how generative AI might be used as a subject of study in the political science classroom, I went to ChatGPT and (ironically, I know) asked ‘Should people stop using generative AI because of its negative environmental impacts?’. I’ll post the lengthy response below, but it won’t surprise you to learn that the result was a big “no”. It is a response that I can definitely see political science students getting a lot of discussion and analysis out of in a classroom-based exercise. Interestingly, it struck me as starkly similar to some of the contemporary defences that fossil fuel and other carbon-intensive companies make – that they provide a valuable service, that they boost the economy, and that it is up to governments to provide regulation.

Having these kinds of critical conversations in first-year political science classes could be a useful way to begin analysing the power of generative AI and to develop a more sceptical approach amongst students to outputs from tools like ChatGPT. Pairing that analysis with a core piece of reading on different understandings and forms of power would allow these conversations to be analytically rigorous, rather than descending into debates over whether students like or dislike using generative AI. Terry Hathaway’s ‘Lukes Reloaded: An Actor-Centred Three-Dimensional Power Framework’ is one piece of reading on power that I have found students really enjoy engaging with, and it provides a useful basis for analysis.

It’s clear that, whether you love it or hate it, the debate around AI in education is not going away anytime soon – despite the problems that plague tools like ChatGPT. By incorporating generative AI as a subject of study, we can at least prepare students to critically analyse its broader societal impacts and the power structures it influences. This approach will not only enhance their understanding of AI but also equip them with the analytical tools needed to navigate and challenge its role in contemporary politics.

Innocuous answer or example of invisible power at play?

Obsolete By The Time You Read This

Technology marches on while our social institutions generally don’t. Eventually a force pushes the existing order beyond its tipping point, and there’s a bit of mayhem until a new equilibrium is reached. Think of knocking a wasp nest off the rail of your back porch after Aunt May has been stung for the third time. For the profession of teaching, that force is AI.

I know that I’ve been periodically yammering on this blog about the technological transformation of higher education for the last twelve years, and, by outward appearances, it may seem that the peasants haven’t yet grabbed their Instagram pitchforks and stormed the Ivory Tower. But I think AI is the qualitatively different icing on the cake, for a few reasons.

First, open-access AI has demolished the academic cheating industry. Here is the stock price of the “educational support services” company Chegg since OpenAI released ChatGPT:

Instead of buying an essay from an online paper mill in Pakistan or Kenya, students who are unwilling to learn can now get for free what we insist they deliver.

Second, AI has now improved to the point where its text output is equal to or better than that of most human students. Below is a partial screenshot of how ChatGPT analyzed “Why Are There No Arab Democracies?” by Larry Diamond (Journal of Democracy 21, no. 1). Yes, AI can now understand articles and even books that one uploads in PDF form.

AI has rendered the written assessment of student learning, as traditionally formulated, a pointless endeavor.

Third and most importantly, rapidly advancing AI will change how universities institutionalize knowledge. Many traditional fields of study, perceived by students and administrators as ossified vestiges of a pre-digital world, are already marginalized. Universities will use AI as yet another rationale for allocating greater resources to other methods of inquiry.

Last point, tangential to classroom praxis but not to higher ed as a whole: AI portends the death of academic writing as a genre, to which I say, good riddance. The overly educated often refuse to communicate in the vernacular to obfuscate the fact that they actually have nothing to say. Or that they do have something of value to say, but never learned how to say it. If all you do is spout vapid gobbledygook, you’re now redundant, because AI can accomplish the same task much more efficiently. If AI can convey your good ideas more clearly than you can, time to admit that all those years of formal education never trained you to write well in the first place. You need to up your game — possibly with the help of AI.*

*This blog post was not written by AI, though you couldn’t tell if it was.

Chat GPT and Specifications Grading

Unsophisticated use of Chat GPT tends to produce generically poor essays, with repetitive structure, little analysis, and pretty stilted prose. Whether it’s identifiable as AI or not, the reality is that an essay written that way is likely to get a poor grade. When you receive a poorly written essay in which you suspect AI use, there are two typical paths:

  1. Pursue it as a case of suspected misconduct. You might run it through a detector to check for AI use, or ask the student to submit evidence of the work as it progressed through revisions. Detectors are notorious for producing false positives, though, and students who were acting in good faith (but just have poor writing skills) can be caught up in this.
  2. Ignore the suspected use and just grade it accordingly. The essay is likely to get a C, as Devon Cantwell-Chavez pointed out in a recent tweet, so how much energy do you want to spend on trying to catch users out, when the results are poor?
Devon Cantwell-Chavez tweets on February 13, 2024 about her approach to grading assignments where Chat GPT use is suspected.

To this I wish to add a third path: use specifications grading. 


Generative AI changes teaching and learning: how to protect the integrity of assessment

This academic year, the UCL Centre for the Pedagogy of Politics (CPP) is hosting a series of online panel events. Our first event, on 30 October, was on the theme of ‘Using technology to teach politics’. In this guest post, one of the panellists at that event, Simon Sweeney (University of York), offers further reflections on the challenges involved in higher education’s embrace of generative AI, where tools such as ChatGPT call into question issues of authorship and have profound implications for assessment.

A few years ago, we were worrying about students using essay mills, a form of contract cheating that plagiarism detection software struggled to identify. The Covid-19 pandemic and the move to online delivery coincided with a reported increase in academic dishonesty (AD). The arrival in late 2022 of generative artificial intelligence (GAI) chatbots like ChatGPT poses a further challenge to the integrity of assessment.

Universities realised that banning chatbots was not feasible, as AI has become an established feature of our lives and of graduate employment. As educators, we need to respond positively to the opportunities AI presents, recognising its benefits and assimilating AI into teaching and learning practice.

This means developing strategies that accommodate students’ use of GAI while protecting assessment integrity.


Hasty thoughts on AI in the Classroom

This morning, I had a great talk with Dr. Elia Elisa Cia Alves and several of her Federal University of Paraíba colleagues about using artificial intelligence tools in the classroom.

Based solely on my observations of colleagues and the literature, I perceive a roughly even split between AI adopters and naysayers. I’ll admit that I lean towards the naysayers, in that I argue AI stifles critical thinking and effective writing skills. Yet I know educators who’ve fully embraced AI and teach students how to use it as a tool rather than a crutch. Two examples include critiquing AI-generated papers and having students compare their own writing against an AI revision of their work. This approach is lending me some ideas, such as having students generate an article on a news topic, then providing them with an actual news article on the topic and letting them compare the two (this, of course, assumes the article is genuine).

I felt better about using AI in the classroom at the end of the meeting, but with one lingering caveat: large classes. Even the AI skeptic in me can imagine running an AI-driven activity in my smaller courses (19 and 40 students), but my 150-student courses? I can barely see the back row of students in those sections without a telescope, let alone imagine wrangling an activity like that in an efficient amount of time.

This post barely scratches the surface. I’m curious to read your thoughts in the comments, and I’m open to guest submissions offering longer treatments! alps@activelearningps.com