A follow-up to my post in April about making exams into exercises in active learning:
From the very beginning of my teaching career, I’ve emphasized, or at least tried to emphasize, the importance of being able to construct evidence-based arguments. My exams are almost always intended to evaluate students’ proficiency at this task. As I mention in the post linked above, the final exam for my comparative politics course in Spring 2020 included the stock phrase:
reference course readings to support your argument.
For the final exam in Spring 2021, I substituted:
support your argument with 1) information from the Gerkhania server on Discord, and 2) cited references to at least two red and two green journal articles listed in the syllabus.
Explicitly requiring students to cite four articles that they were nominally already familiar with from previous assignments produced greater use of scholarly evidence in exam essays than I had typically seen in the past. Students sometimes used these sources in ways that didn’t actually support their arguments, but even in those cases I could tell that an attempt had been made.
However, to my surprise, not a single student referred to specific player behavior during the simulation. Apparently that is not how students interpreted “information from the Gerkhania server on Discord.” Instead, they summarized the simulation’s outcome or, worse, repeated the general background information on Gerkhania that I had provided before the simulation began. So, for 2022, the exam prompt should probably include something like:
support your argument with 1) examples of specific actions taken by players during the Gerkhania simulation, and 2) cited references to at least two red and two green journal articles listed in the syllabus.
This is all well and good, because my main purpose for the final exam is summative assessment of learning. But I also want the final exam to help me gauge whether the Gerkhania simulation contributed effectively to this learning. While the first part of my potential exam prompt gets at this question indirectly, I think more is needed. So I have been thinking about “scaffolding” the final exam around the simulation.
I typically run Gerkhania over three sessions. It occurred to me that I could assign something like the following after each session:
Which theoretical perspective best explains players’ behavior in today’s Gerkhania session? Why? Refer to specific player actions and Course Reading X in your response.
These assignments would be short pieces of writing, easy for students to complete and for me to grade. They would allow students to practice for the final exam, and they would function as a contemporaneous reflective through-briefing rather than just a post-hoc debriefing. And I would be able to observe whether students’ ability to construct evidence-based arguments about the simulation improved over time.