Proxy, proxy, proxy: out! out! out!

Right place, right time, wrong conversation

At some point we must have run out of conversational topics on holiday, because I found myself on a beautiful Croatian beach, trying to explain the ‘good degrees’ metric to my wife. To my credit, I didn’t try very hard, although it wasn’t helped by the fact that I struggled to come up with a good justification for it.

For non-UK readers, “good degrees” measures the percentage of undergraduate awards that fall in the top two classifications (firsts and upper seconds). That’s currently about 70% of them, a figure that has risen over time.

My Croatian struggle came in trying to unpack what it showed and why that might be useful.

Partly, it’s a reflection of our student body: if we have more academically able students, then they’ll be more likely to get better grades, and so a ‘good degree’.

But equally, it’s also a reflection of our assessment and grading. Are we the kind of place that rewards good work (assuming that student populations are broadly equivalent)?

So what does it show? Our selectiveness? Our accommodating assessment? Our great students? Something else?

The issue might have remained of minimal interest to anyone, were it not for the use of the metric in league tables. These aim to rank universities on the quality of their provision and have become an important part of the British HE sector, not least for recruiting new students. However, that also introduces an incentive to improve one’s ‘good degree’ metric.

The obvious responses in this situation would be to have stricter entry requirements (which is also a league table metric) and to have more lenient assessment. The first is very much in the universities’ hands and you’ve seen exactly that happening. The second is a very different kettle of fish.

Assessment in the UK differs from many other countries with its use of second- and external-examining (see this for more), which is in turn bound by a national framework of performance standards: broadly speaking, anyone at a UK university should be marked to the same kind of standard, whatever and wherever they study, against benchmarks rather than against each other. And, broadly speaking, that’s what happens.

The upshot is that any (league table) pressure to go soft on assessment is counter-weighted by the need to meet sector and disciplinary norms: as our university tells us, the norms are the norms and we shouldn’t be messing around with them.

The problem now becomes one of still wanting to improve ‘good degrees’, but losing the most obvious tool to achieve that. So instead, we try other things. These cover pretty much every part of what we do in our teaching. Are there pinch points in assessments or modules that we can redesign, to meet learning objectives in a more accessible way? Are there staff who need help with redesigning assessments or with providing support? Are there early markers of weaker performance that can trigger timely intervention? Are there extra resources we could provide that would help improve student outcomes?

All of those questions are good questions to ask of any educational body, since they all speak to the core question of ‘how do we help our students to achieve their full potential as learners?’

Unfortunately, none of those questions converts into a nice, simple metric. Instead, we fall back onto things like ‘good degrees’ and back into the question of what we are trying to do.

The issue is an old one: if you make something a metric, then people tend to work towards it, and the metric gains value in and of itself. That’s a problem if we forget that metrics are proxies, which is all too easy to do.

As we move towards a Teaching Excellence Framework here in the UK, that’s something we’re going to have to deal with more and more.
