My first-year module this semester has been a real training ground for me. Not only am I going all-in on flipping, but I’m also trialling the new assessment software that the University is thinking of using.

By extension, that also means it’s a training ground for my students, something that I’ve been very open about with them.
The flipping seems to be working and I’ll be writing up my thoughts on that later in the semester, but having come through the first use of the software I need to make some decisions now.
In part, my situation arises from wanting to push how we used the software past a conventional approach. Not only did students submit a literature review to it, but they then had to review someone else’s using the system, all in aid of a final piece of self-reflection (which we’re marking now).
Using the marking function is a bit more involved than just submitting work, and a couple of people did get a bit lost on that. But the bigger problem was that not everyone submitted work.
In the good old days (i.e. last year and before) we did all this in-class, so it was much simpler to cover (the exceptionally few) missing pieces. However, because we’d pre-selected peer reviewers, we ended up with some students having nothing to review and others not getting their work reviewed.
That’s a failing on my part: next time, I’d leave allocation until after the first submissions were in, so that everyone who submitted both got work to review and had their own work reviewed.
But that’s next time. What about now?
Already, I’ve indicated to everyone that not getting peer feedback won’t count against them in marking, but a couple of students have felt that, absent such comments, they’re not in a position to complete the self-reflection.
To that, I’ve had to underline that it’s self-reflection, so peer feedback was only ever one component of it: indeed, the whole purpose of the somewhat-convoluted exercise is to get students to become more independent and critical about their learning.
All that said, peer review was added in here to help prompt everyone to think more about what they’ve done and what they could do.
As we sit down to mark, the question will be how much we can, and should, take the circumstances into account. Until we’ve seen the full range of work, that’s going to be a tricky call to make.
However, it all highlights an important point in such situations: do we have fall-backs?
Trying new things is inherently risky – that’s why many colleagues stick with what they know – but with some risk management, that need not be a barrier to moving practice forward.
Annoying though our situation here is, it’s not fatally compromising to the endeavour: we know who’s affected and how; they’re still able to submit work; and the assessment is relatively small in the overall scheme of things.
Yes, we’ll be using the system again for the final exam, but without the aspects that have proved problematic. Indeed, the exam has already been trialled elsewhere in the University, so that’s well-understood.
So, on balance, I feel comfortable that we can manage the situation and implement the necessary changes next time around to remove the problems identified.
Which is, of course, a big part of the reason for trying it out in the first place.