
One question that popped up a couple of times at EuroTLC was whether I use peer assessment in my simulations and – by extension – whether it is possible to use it. As such, it’s useful to consider this a bit further.
Essentially, there appear to be two ways to use peer assessment here: one more direct, the other more incidental.
The direct way is to set up an explicit peer assessment mechanism, in which all participants rate the performance of the other participants. To do that, several elements are needed.
Firstly, there has to be a very clear protocol about what is being evaluated and to what standard. The content would need to align with learning objectives, and would probably need to cover both substantive knowledge and assorted skills (preparation, communication, teamwork, leadership, etc.), insofar as those can be demonstrated within the simulation. The standard is either a set of benchmark statements (“A = very clear presentation skills, with repeated demonstration of active listening”, etc.) or something more relative (“Which three students showed the best presentation skills?”). The latter requires less scaffolding, but is more subjective.
Secondly, a decision is needed on who assesses whom. Do you pair students off for mutual evaluation? Do you ask everyone to assess everyone else, or a group of others? Does a student get one evaluation or several? Again, questions of practicality impinge here: the number of pairings grows quadratically with class size (20 students all assessing one another produces 380 separate evaluations), so while broader coverage might help to even out individual evaluations, it comes at a price.
This is also linked to the key question of how the peer assessment is used. Does it contribute to the final mark, or is it more formative? Does it have to be considered by a second or external examiner (in which case, can you document it appropriately)? And, importantly, do students get to know who evaluated them and with what result?
All of these points highlight that while direct peer assessment is valuable, it also comes with costs and challenges. The benefits are clear: capturing more fully what students have done in their simulation, developing their critical skills and reinforcing the active learning agenda of the simulation itself. But without proper consideration of the factors mentioned, it can undermine student buy-in, not to mention lead to gaming [sic] by individuals, which might distort the gameplay.
This then leads me to the second method of peer assessment, which I have used several times to good effect.
Rather than asking for explicit evaluations, this element is instead captured in a final reflective piece that I ask students to write after a simulation. This piece asks students to identify what they have learnt during the simulation and to relate it back to the literature. The brief can be specified further: with larger simulations, I ask students to consider not only their own performance, but also that of other players, including their team members.
The actual assessment remains with me, and is focused on their reflective ability – rather than their performance per se in the simulation – but I now have access to another level of evaluation that would otherwise be lost to me. As a result, if one member of a three-person group tells me how fantastic they were, but the other two members point out their faults, then I can use that to triangulate my grading, since the first person would appear to lack some critical depth. Such a process can obviously be extended across groups, although it tends to sit more at the relative end of things (“I was really impressed by X’s ability to use the rules of procedure”), since there is no requirement to provide systematic coverage or evaluation.
The trade-off is thus between the depth of peer assessment and student engagement in that process on the one hand, and the extent of pedagogic scaffolding required and the risk of exposing students’ judgements to criticism on the other.
Neither method is perfect, but both might offer something useful to you and your simulations.