Evaluating performance

One of the exciting consequences of the British system of second- and external-marking is that we get to sit down several times a year to discuss and confirm all our grades. These exam boards are not renowned for their gripping nature, as we plough through sheet after sheet of grades and stats and grapple with the application of university regulations, but they are vital to the functioning of the system.

Yesterday, we held the last board of this cycle, and for me the last such board I'll chair for some time (I'll get to sit on the university equivalent instead). It prompted some reflection about how we evaluate students' performance.

One of the big themes that we raise with students is the need to think of their studies in an integrated way, i.e. not as a series of discrete and unrelated modules, but as an evolving and expanding body of knowledge that we access at different points. We want them to see that discussions about political theory in one module have relevance for their research on a case study in another.

However, when I look at our assessment regime, we do not always practise what we preach. Colleagues set pieces of assessment and feed those marks into the database, and only at the exam board do we put it all together to evaluate performance as a whole.

Two caveats might be introduced here. Firstly, colleagues do talk to each other, so we have a good idea of what has been taught elsewhere, which shapes our expectations of students and allows us to design assessment that helps them make the connections. Secondly, the nature of the academic calendar makes it effectively impossible to monitor performance in assessments in real time.

Nonetheless, if the exam board is to have any meaning beyond simply confirming grades, then it needs to be able to understand the bigger picture of a student’s work during the year.

This is not a particularly novel insight – and certainly not for me – but it does remind me that exam boards are as much about opening debates as they are about closing them.

For us, that means several things.

Despite the passing sense of dread that an exam board induces in people, we now run more of them than before. At the end of the first semester, we review all the modules taught so far, to give us a sense of students' trajectories, which we can then discuss with them in tutorials so that they can better benchmark themselves. It also means doing the same at the start of the academic year, talking through the most recent round of assessment with our returning students.

Finally, it means talking amongst ourselves about our teaching when we can see problems with individual modules: if students generally seem to struggle with a module, can we work out why that is and how we can unblock the problem? Student input can be helpful here too, and generally module leaders will have had those discussions in class already.

I won’t pretend that I’ll miss exam boards that much, but I’ll remember the lessons that they can teach us all.

3 Replies to “Evaluating performance”

  1. Interesting blog Simon, but I would love to hear your comments about the elephant in the room that you are hinting at, which is that when you sit down and add up all the numbers, you are measuring the same thing many, many times, and not measuring several of the things you say you are developing at all. Programme-level assessment strategies are the only answer, no?

  2. Matthew, potentially that’s the case, but I know that we also spend a lot of time with our programme learning objectives, making sure that they get covered by the individual modules we run, precisely to avoid that issue. At the same time, I do accept that making holistic evaluations of performance and achievement over an entire degree is very difficult indeed.

