The UK’s Office for Students has now published its decisions on student outcomes.
As I discussed earlier this year, when the consultation went out, this is an intellectually dubious way to ensure sub-standard university providers are held to account.
My doubts then stemmed from the metrics used, which focus on progression, completion and post-qualification employment: reducing the 'value' of a degree to these measures suggests a rather narrow view of what we try to do with our teaching and with students' learning.
You’ll be shocked to hear that despite much grumbling from providers, the OfS has broadly stuck with its plans, albeit with a rather less automatic penalty process.
Speaking as someone whose institution is below its benchmark on various elements, I'm aware this might all sound like motivated reasoning. But I'm equally aware that my university is easily the biggest provider in the UK, taking in mostly non-standard students (typically mature and part-time), so I do wonder why a one-size-fits-all approach was deemed appropriate.
As others have pointed out, the discretion in the application of penalties leaves us very uncertain about how any of this plays out: the OfS could take a rigid view and penalise every shortfall against its metrics to make a point about raising standards, or it could be relaxed about it all and treat this as a diagnostic tool for providing support.
Certainly, contextual factors are mentioned here, but it is also fair to say that the OfS has tended to be somewhat at loggerheads with universities about How Things Should Be, especially with a government standing over it that seems to want a recasting of Higher Education.
Even if we are [back?] in a phase of British politics being very much less than settled, it’s clear that all of this will mean more interest by providers in metrics, and that colleagues will need to keep in mind how that plays out in their subject fields.
As much as I like to say that the best and most sustainable route to good metrics is through good academic practice – i.e. not through metric-chasing – it’s also clear that we have to have a clear eye on what metrics count for internal managers and external regulators.
The more we can articulate a coherent and cohesive vision of how we build learning environments for our students, the better we can push back against the effects of reducing such things to points on a dashboard.