As the ALPSblog team all prepare for their biennial reunion at the APSA TLC in Washington later this week, I’ve been thinking some more about how we measure success in teaching.
Partly, this is because it’s my job to think such thoughts, but also it’s because we’ve just had our own big L&T event at Surrey: ExciTeS. With a mix of colleagues and external speakers, the day was an excellent opportunity to join up some of the dots in our practice and to do some big-picture stuff. Using more appropriate jargon, naturally.
A theme that emerged from our debate was the extent to which institutional structures constrain or enable different teaching practices. For instance, the university’s requirement that all taught modules be only one semester in length meant that previous year-long modules could no longer run, but it also allowed colleagues to try out some new ideas.
More particularly, we talked about how to measure success and how best to maximise that success.
Success strikes me as a rather nebulous concept at the best of times, given that it has a deeply subjective base: if I think something is important to achieve, and I achieve it, then who is anyone else to say I’ve not been successful?
We know this is a problem, which is why many institutions target proxies for good performance: completion rates, percentage of ‘good degrees’, entry into employment post-study, and the like. Exam boards and boards of study will look at student performance at a module level, to identify problems or bottlenecks. We also use students’ evaluations of teaching (via their end-of-module questionnaires), not least because they already exist and provide a different perspective. In short, we have lots of metrics and lots of data.
This is not the place to discuss the merits or demerits of each of these, not least because it is very rare to rely on any one of them alone to make a judgement about the success of the teaching. Instead, we could more profitably ask how best to produce outcomes that work for those metrics (which often become targets) and also work for whatever wider sense of success we might hold personally.
This is what brings us back to the discussion about institutional frameworks: the institution requires us to meet certain targets, so is that a problem or an opportunity?
Too often, it’s a problem. A target becomes reified, held up as the ultimate purpose of our activity, rather than being treated as a metric, a way of measuring how we’re doing. Yes, everyone could come out with top honours if we just marked very leniently, but that would not only offend our quality-assurance mechanisms, but would also offend our sense of how best to support our students.
For me, I work in learning & teaching not to hit targets, but to help each student achieve the very best of their potential, whatever level that might be. When I teach, I try to get the most out of the sessions I run with students, because that is the most engaging thing for them, not because it improves their performance on a particular metric.
Importantly, this is not to say that metrics aren’t important, but that they are just what they are called: metrics. My classes get good metrics, but not because I’m trying to improve them: I’m trying to improve my teaching and improve students’ learning.
It’s all too easy for us to get hung up on metrics – on a quantified vision of success – but if we can recall that education requires a holistic and integrated approach, then perhaps we can create learning environments that are good for students, good for those who set targets and good for us.
The big problem here is Campbell’s Law: the more any quantitative indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor. Success is virtually impossible to actually measure, and the result is that we jump into using any indicator we can think of that might be tangentially connected to it and that is ready to hand. Validity? Well, we have to measure something, or someone else will do it for us! Campbell describes what is often the result of this kind of thinking.
This is all silly in the extreme. Your approach is the best: what counts is how well we do at helping our students achieve mastery of the discipline they have chosen. If we do that, the rest of the “metrics” will take care of themselves. The problem is that administrators often aren’t interested in the students and how they learn. It’s like they say in the UK: those who can, do; those who can’t end up running the place.