It’s a truism that no academic is actually interested in assessment. Sure, there are certainly academics who find assessment stimulating and engaging as a topic, but none of us has ever met one.
Except you have: me.
I never really understood the antipathy towards assessment: maybe it’s a carryover from being a student, where being tested felt, in very large part, like being punished. And I’m not going to pretend that I really liked sitting down to go through scripts either.
One big exception to that was the reflective piece that my negotiation students wrote about their work, exploring and expanding on what they had taken from the course and contextualising it in the wider literature. Every one was a fascinating insight into my students’ heads, in a way that 4,000 words on ‘IR is overly fixated on power’ never is.
Anyway. The only point where colleagues do seem to get more interested is when they have a problem with their assessment.
Right now, that problem is ChatGPT.
If you’ve somehow missed this one, ChatGPT is an AI text generator, capable of creating extensive – and seemingly well-written – responses to minimal prompts. Including things like essay titles.
To say that colleagues have been concerned is very much an understatement.
Both online and in person, I’ve seen colleagues describe the system as the death knell of the essay format in assessment. The text it produces is both of a standard that could satisfy the criteria for a passing grade and unlikely to trigger anti-plagiarism software (since it’s not cut-and-paste, but organically created).
For all the rumours that this latter software will be upgraded to pick up on such AI-generated text, the feeling is deeply pessimistic.
My own view is perhaps more measured, mainly because in all the examples I’ve seen I haven’t felt the output is that impressive, especially for anyone looking to shortcut their way to the kind of higher grade that so many students seek out.
As others have noted, the quality and rigour of such texts isn’t up to much, which means both that it’s possible to pick up on AI generation (even if evidencing a plagiarism case is still a massive pain in the neck) and that students get a dubious amount of return (in grade terms).
But the bigger point is that text is only part of how assessment works.
The choice of questions you ask and the requirements you impose on students also matter massively.
Take that negotiation class I mentioned. Because I knew all the students, saw everything they did in class and debriefed them extensively at the time about their learning, I had a very clear idea of what might be in their reflective pieces.
So if someone tried to write about stuff they hadn’t done (and a couple tried), I knew and could mark accordingly.
More generally, this all should be making us think more carefully about what assessment is for. And part of that is acknowledging that the very large majority of students don’t want to cheat on their education: sure, it’s less effort (they think), but it screws them over when they are out in the world, trying to use skills or understanding they don’t actually have.
So I leave you with this example of how we can get students to engage with these challenges. It’s not a whole solution, but it is a recognition that blind panic or utter despair aren’t helpful responses.
We’ll be coming back to this several more times, no doubt, but I’d love to hear your thoughts on the matter.