Last week I spent an afternoon in London, taking part in a meeting of the QAA’s benchmark review group for Politics and IR. And it was a lot more fun than I’ve made it sound.
For those not familiar with the English & Welsh system, the Quality Assurance Agency is the quango charged with ensuring academic standards in universities. It does this through a variety of mechanisms, such as institutional visits, but a key tool is its set of benchmark statements.
These specify what constitutes a degree in a given subject, in terms of substantive content, skills development and pedagogical practice. They are composed by groups of academics from the subject area, on a seven-year review cycle.
Clearly the danger in any such exercise is the imposition of undue constraints on the innovative practice that might be occurring in individual institutions, so the thrust is very much about supporting creative approaches, rather than forcing programmes into a single canonical model. The 2007 benchmark for Politics expressed this very clearly, and we were very happy to endorse it again: given the essentially contested nature of much of the subject matter, criticality and creativity are rightly seen as core skills of any Politics graduate.
I won’t say much about what we covered, not least because our discussions will continue for a while yet before a public consultation, but the meeting offered a valuable insight into how our discipline has moved on.
In particular, it was striking how much more rigour we now expect of our students. The 2007 benchmark mentioned research skills, but only as one item in a longer list of things to consider. By contrast, our group was keen to bring this to the fore, underpinned by pedagogy that emphasises a wider variety of assessment forms and constructive feedback mechanisms.
Interestingly, this was only partially about quantitative methods. The quantitative turn of the past two decades has had major impacts on the way we research and the way we teach, but it is precisely because we now have a more mature sense of the need for mixed-method approaches that research skills as a whole really matter. Students need to know which methods to use when, as well as how to use them.
As someone who has used the benchmarks for many years when writing my local programme validation documents, I found the process of participating in this group very instructive. There is sometimes a feeling that benchmarks exist as hoops to be jumped through, a checklist to be ticked off.
By contrast, sitting around a table with a set of individuals from across the discipline, all dedicated to promoting high-quality teaching (and research, for that matter), made it clear that the benchmarks are meant to be a support and an encouragement to do the best that we can with our provision.
To take a mundane example, sitting in that room I learnt about two or three techniques I’d not heard of before, which I’d like to bring back to my own institution. By including those points in the benchmark, I would hope that everyone has the same opportunity to learn and develop.