I’m still enough of a kid to be excited to see the place I work at mentioned in the news, especially if it’s in an outlet my mum might see.
Of course, it’d be better if the context of this particular mention were different, but I guess you can’t have it all.
This all comes off the back of the ongoing debate in government about grade inflation.
I wrote about all this last summer, and I'm not sure my thinking has moved on much since, except to note the shift in framing towards combating 'artificial' grade inflation.
While this might seem to acknowledge the other factors at play, what it singularly fails to do is set out how such inflation would be calculated in practice.
Obviously, there are changes in student characteristics that have a direct bearing, and these are relatively simple to capture: socio-economic status; entry grades; progressive performance in each year of study.
However, there are also obvious changes in the teaching environment: staffing changes; changes in pedagogic approach; changing curricula (we've made our final-year dissertation optional this year, for example); changing provision of learning resources outside the degree programme, at the library or in welfare; changes in programme regulations.
And these factors are much harder to capture when calculating their impact on student performance. We know the government thinks teaching matters – hence the joys of TEF – but I've yet to see metrics that seek to measure how any one of these factors affects outcomes.
More prosaically, even at a more local level, we don't really know what's happening, as I discussed recently with my negotiation module. If I'm not clear on the relevant factors within my own classroom, why should I expect a government body to be able to do so for every class in the country?
Two answers suggest themselves here.
The first is that, just as with REF and TEF, everyone knows that metrics are hard to come by, so the likely response is to use whatever is to hand. That means primarily student-driven data: control for its variability, produce a value-added score, then semi-randomly decide what is 'too much' and penalise accordingly. Sure, it'll hurt those bits of teaching that are very effective, but it'll 'send a message', or something.
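To make the crudeness concrete, here's a rough sketch of what I suspect such an approach boils down to: control for one student-driven variable (entry tariff), call the residual 'value added', and flag whatever sits above an arbitrary line. Everything here – programme names, numbers, the threshold – is entirely made up for illustration; this is my guess at the logic, not any official methodology.

```python
# A deliberately crude sketch of the kind of calculation I suspect is coming:
# regress the share of first-class degrees on entry tariff, call the residual
# 'value added', and penalise anything above an arbitrary cut-off.
# All data and the threshold below are hypothetical.

from statistics import mean

# Hypothetical per-programme data: (average entry tariff, % firsts awarded)
programmes = {
    "Politics": (130, 28.0),
    "Law": (145, 31.0),
    "History": (125, 35.0),
    "Economics": (150, 30.0),
}

# Fit a simple least-squares line: expected % firsts given entry tariff.
xs = [tariff for tariff, _ in programmes.values()]
ys = [firsts for _, firsts in programmes.values()]
x_bar, y_bar = mean(xs), mean(ys)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
    (x - x_bar) ** 2 for x in xs
)
intercept = y_bar - slope * x_bar

TOO_MUCH = 3.0  # the semi-random threshold, in residual percentage points

for name, (tariff, firsts) in programmes.items():
    expected = intercept + slope * tariff
    value_added = firsts - expected
    flag = "PENALISE" if value_added > TOO_MUCH else "ok"
    print(f"{name:10s} expected {expected:4.1f}%  actual {firsts:4.1f}%  "
          f"value-added {value_added:+4.1f}  -> {flag}")
```

Note what's absent from a calculation like this: every single one of the teaching-environment factors listed above.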
The second is that this is much more about politics than about good pedagogy.
Certainly, the percentage of first-class degrees has gone up markedly over time, but to pin that on grade inflation immediately forecloses any framing of the issue as one that requires careful research and understanding. Crucially, 'grade inflation' positions teachers as the baddies: sod good practice, let's just chuck out some more high grades to keep the kiddies onside.
I take exception to that, and I imagine any lecturer with a scrap of self-respect does too.
Whether our voice will count for anything in the developments to come remains to be seen.