But really: how?

Should we sign up Bean for an FHEA-accredited course too?

Exciting times here, as the second family member undertakes training in Learning & Teaching. No, not the kids (although maybe it’d make a nice birthday present for them), but my very talented SO.

After many years working in pure research roles, she’s now starting to take on some teaching, so she’s picking up her institution’s introductory package, which is prompting some really good conversations at the dinner table.

One issue that’s coming up – for me probably more than for her – is the question of how you get from the L&T training session to the on-the-ground experience in your classroom.

This was started by a discussion of assessment and its central role within the learning process. As my SO noted, assessment is always formative and should always be linked to the feedback and adaptation process.

For her, one practical problem with that is that much of her opportunity to teach comes from giving guest lectures on modules run by other people: we’re talking here about specialised Masters programmes with lots of such input from research experts, each teaching to their particular expertise.

Obviously, that can be great for students as they get to interact with a wide range of leading people, but it’s pretty rubbish for any formative development using assessment, per the training session. My SO might only be spending a couple of hours with the group, leaving little or no time to adapt content or pedagogy to their specific needs or interests.

That raised the more general problem of the translation from theory to practice that I’ve mentioned.

As someone working on simulations and other forms of active learning, I’ve always been rather sensitive to this, since the constraints under which I work have been rather obvious. The shape of the teaching spaces I’m in; the number of students I’ve got; the flexibility of the timetable: all of these impose some really consequential limits on what I can do.

And beyond that there is the entire process of Quality Assurance. My old negotiation module ended up using a reflective writing assessment partly because it aligned well with my learning objectives, but also because it worked for our second marking/external marking regime. My initial thoughts about something much more immaterial – me marking what they did in negotiations, or getting them to negotiate with me for their grade – might have made as much sense pedagogically, but they would have fallen at the hurdle of our L&T committee, who would (reasonably) have asked me how we could be confident about the equity and consistency of such approaches.

Yes, QA has flexibility in it, just as one can find flexibility on capping numbers or working with your timetabler*, but there are ultimately going to be limits to this, as well as paths that are better-trodden than others. As I’m guessing you’re finding with the Great Jump Online, there’s an institutionalised tendency to regularise our offerings to students, be that in terms of contact time or format or assessment or whatever.

And this is where I recognise that this is something I got relatively little training in, just as I suspect most of us did. L&T development understandably has to concentrate on the fundamentals of good pedagogic practice, often matched up with a bunch of ‘handy hints’ on stuff to do in the classroom. But that misses the discussion about the articulation between those two levels.

How do we move a model of alignment or of rolling assessment/feedback into our class, when that class is not a blank sheet (even when it might not yet exist)?

I’ve tangled with this before, and even made a simulation-simulation to try and work through some key steps. It’s not great, but it does suggest a way forward.

The central concern has to be the learning objectives that we establish, be they within the context of a programme, or a module, or even just a session. From that, we then have to be willing to flex practice around our constraints to find a way that meets those objectives.

To do that, we need not only a good sense of our purpose, but also of the range of options that are open to us, coupled to a willingness to try them out. Which all sounds a bit daunting.

But this is maybe a good point to remember that students are generally willing to follow us where we want to go (pedagogically), as long as we can show that we’ve got a logic behind it. Bringing them into our design and delivery process – in part through that continuous feedback loop – can help to smooth out the edges and make it work better for everyone.

Whether and how that pans out for my SO, I’ll let you know.