How to measure whether your teaching’s working


As we hurtle towards the summer ‘break’ and everyone remembers the deadline they cut you some slack on, it’s also the time when many of us are thinking about next semester.

For those of you with interests in making L&T a bigger part of your work, one obvious route is researching and publishing on what you do in the classroom.

Often that might be about trying out something different with students, which you think generates benefits for their learning, and might be of use to others in the same situation: we’ve published lots of such pieces from our guest authors here at ALPS.

While the thing you’re doing is the obvious centre of attention, the second element – whether it works – sometimes gets a bit lost (speaking as someone who reviews a good number of journal submissions in this field), so I thought it would be useful to reflect on this a bit more.

Measuring learning turns out to be a less-than-simple task: if it weren’t, we’d all know how to do it. The problem turns partly on the multiplicity of things we might consider, and partly on the difficulty of measuring any of them accurately or meaningfully.

Learning is not simply about knowledge, but also skills, social capital and much more. Each of those has many sub-elements, not all of which are immediately obvious to everyone, nor equally important to everyone. Likewise, learning happens at many different speeds: do you focus on the immediate gains, or on something more long-term?

The (faint) silver lining to this particular cloud is that everyone’s in the same boat. I’ve yet to see a comprehensive evaluation tool that I could recommend to you, even though there are a number of really good ideas out there (for example, this or this (which makes the good point that students’ perception of what they learn isn’t the same as teachers’ measure of what they learn)).

The important thing here is to be mindful of this from the start of any pedagogic research, embedding your measurement protocol into the design rather than hoping it’ll come to you later: a short post-course questionnaire about whether your students liked the thing they did isn’t likely to suffice.

That means thinking about what elements you focus on measuring (and why), then on how you’ll measure them. In particular, think about whether and how you can have a control for your teaching intervention: if it’s not practical to have another group of students not doing it, then will pre/post testing cover things robustly enough? Just like your other research, try to control your variables as much as you can, so you can be more confident about isolating effects.
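To make the pre/post idea concrete, here’s a minimal sketch of how you might compare the same students’ scores before and after an intervention, using a paired t-statistic. The scores are entirely hypothetical, purely for illustration, and the sketch uses only Python’s standard library (in practice you’d likely reach for something like scipy.stats.ttest_rel, which also gives you a p-value).

```python
import math
import statistics

# Hypothetical pre- and post-test scores (%) for the same eight students,
# in the same order, so each pair belongs to one student.
pre = [52, 61, 48, 70, 55, 66, 59, 63]
post = [60, 64, 55, 74, 63, 65, 68, 71]

# Per-student gains: pairing controls for each student's baseline ability.
diffs = [b - a for a, b in zip(pre, post)]

mean_gain = statistics.mean(diffs)              # average improvement
se = statistics.stdev(diffs) / math.sqrt(len(diffs))  # standard error of gains
t_stat = mean_gain / se                         # paired t-statistic

print(f"mean gain: {mean_gain:.2f} points, t = {t_stat:.2f}")
```

The design choice doing the work here is the pairing: by looking at each student’s own change rather than comparing group averages, you strip out a large source of between-student variation, which is exactly the “control your variables” point above.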

And it also means asking for help if you’re unsure. Your institution probably has a great bunch of people centrally who work on just these kinds of projects and who can give you excellent advice and support. Likewise, you can ask us here or online about specific ideas: it’s worth looking back through our posts for suggestions and for colleagues who’ve worked on similar things.

Do all that and your pedagogic research will be off to a flying start (which might be the only flying you get to do).