Chat GPT and Specifications Grading

Unsophisticated use of Chat GPT tends to produce generically poor essays, with repetitive structure, little real analysis, and stilted prose. Whether or not it's identifiable as AI, the reality is that an essay written that way is likely to earn a poor grade. When you receive a poorly written essay in which you suspect AI use, there are two typical paths:

  1. Pursue it as a case of suspected misconduct. You might run it through a detector to check for AI use, or ask the student to submit evidence of the work as it progressed through revisions. Detectors are notorious for producing false positives, though, and students who were acting in good faith (but just have poor writing skills) will be caught up in this.
  2. Ignore the suspected use and just grade it accordingly. The essay is likely to get a C, as Devon Cantwell-Chavez pointed out in a recent tweet, so how much energy do you want to spend trying to catch students out when the results are poor anyway?
Devon Cantwell-Chavez tweets on February 13, 2024 about her approach to grading assignments where Chat GPT use is suspected.

To this I wish to add a third path: use specifications grading.