Unsophisticated use of ChatGPT tends to produce generically poor essays: repetitive structure, little analysis, and stilted prose. Whether or not it is identifiable as AI-generated, the reality is that an essay written that way is likely to get a poor grade. When you receive a poorly written essay in which you suspect AI use, there are two typical paths:
Pursue it as a case of suspected misconduct. You might run the essay through a detector to check for AI use, or ask the student to submit evidence of the work as it progressed through revisions. Detectors are notorious for producing false positives, though, and students who acted in good faith (but simply have poor writing skills) can get caught up in this.
Ignore the suspected use and simply grade the essay on its merits. As Devon Cantwell-Chaves pointed out in a recent tweet, it is likely to get a C, so how much energy do you want to spend trying to catch users out when the results are poor anyway?
This academic year, the UCL Centre for the Pedagogy of Politics (CPP) is hosting a series of online panel events. Our first event, on 30 October, was on the theme of ‘Using technology to teach politics’. In this guest post, one of the panellists at that event, Simon Sweeney (University of York), offers further reflections on the challenges involved in higher education’s embrace of generative AI, where tools such as ChatGPT call into question issues of authorship and have profound implications for assessment.
A few years ago, we were worrying about students using essay mills, a form of contract cheating that plagiarism detection software struggled to identify. The Covid-19 pandemic and the shift to online delivery coincided with a reported increase in academic dishonesty (AD). The arrival in late 2022 of generative artificial intelligence (GAI) chatbots like ChatGPT poses a further challenge to the integrity of assessment.
Universities realised that banning chatbots was not feasible, as AI has become an established feature in our lives and graduate employment. As educators, we need to respond positively to the opportunities AI presents, recognising its benefits and assimilating AI into teaching and learning practice.
This means developing strategies that accommodate students’ use of GAI while protecting assessment integrity.
This morning, I had a great talk with Dr. Elia Elisa Cia Alves and several of her Federal University of Paraíba colleagues about using artificial intelligence tools in the classroom.
Based solely on my observations of colleagues and the literature, I perceive a roughly even split between AI adopters and naysayers. I’ll admit that I lean towards the naysayers, in that I argue AI stifles critical thinking and effective writing skills. Yet I know educators who have fully embraced AI and teach students how to use it as a tool rather than a crutch. Two examples: critiquing AI-generated papers, and having students compare their own writing against an AI revision of their work. I’ll admit this approach has given me some ideas, such as having students generate an article on a news topic, then providing them with an actual news article on that topic and letting them compare the two (this, of course, assumes the actual article is genuine).
By the end of the meeting I felt better about using AI in the classroom, but with one lingering caveat: large classes. Even the AI skeptic in me can imagine running an AI-driven activity in my smaller courses (19 and 40 students), but in my 150-student sections? I can barely see the back row of students without a telescope, let alone imagine wrangling such an activity in a reasonable amount of time.
This post barely scratches the surface. I’m curious to read your thoughts in the comments and am open to guest submissions on longer treatments! email@example.com