On rowing and teaching

In my younger days I was a very keen rower. I caught the bug at university, and it took about another 15 years to really shake it off (and, TBH, I don’t think I really have, to this day). Towards the end of my time in the sport I lost a load of weight and switched into coxing – becoming the short shouty guy in the stern of the boat with the mic and the rudder strings – and from there got involved in coaching. And that, come to think of it, was one of my first ventures into teaching.

Now, rowing is a weird sport, in that the ‘training-to-competing’ ratio is hugely skewed towards the former. A quick comparison to rugby helps here (and may reveal some deep-seated jealousy of / animosity towards the egg-chasers). I had friends who played rugby who’d train one or two nights a week and then play a fixture every weekend – and then, true to the stereotype, spend a good few days recovering from the hangover. In contrast, we, at the rowing club, would spend nine months of the year training six days a week, then race frantically for about six weeks, and then have a month off. 

What has this to do with teaching? Well, hear me out…

Teaching is a bit like coaching sports – or rather, for the time being, coaching sport is a bit like teaching. The coach of my friends’ rugby team would be able to run them through a particular drill, or practise a particular tactic, on a Wednesday evening, and then (provided the oafs had remembered said drill / tactic) watch it in action that weekend. A review would ensue, followed by some tinkering in training the following week. The point is: the coach had a regular flow of data from his team, about how well his teaching methods were working, how well the individuals and the team were executing what he had taught them, and how to calibrate all of this in order to get the best out of the team.

I, as a rowing coach, had no such luxury. We’d spend months and months training essentially in isolation, with very little contact with opposition (apart from the occasional accidental race against the much-derided Club From The Other Side Of The River), and it would be ages before we could see if the crew were able to hold their own in an actual race.

A while back it struck me that this is quite similar to our teaching-and-assessment model. In my department, and indeed in many other political science departments around the UK, the pattern is much the same: we spend eight to ten weeks delivering material and teaching skills, and then, after a briefing and a cheery ‘Good luck!’, we send our students off to write their essays or policy briefs, or to do whatever else it is we ask them to do. Along the way, we have very little opportunity to see how well they’re learning the stuff we’re teaching them; and, for that matter, they have very little opportunity to find out how well they can do whatever skill it is we’re asking them to demonstrate.

(Before I get to the point of my piece, let me clarify something. I’m conflating ‘material’ and ‘skills’ here – and it’s deliberate. Back to my rowing analogy: I can explain to a rower, using a whiteboard and some video, how to perform a certain part of the stroke, and why it’s important that it’s done in a certain way, and that would constitute ‘material.’ That rower could understand that ‘material’, but whether or not they can actually do that thing is another matter – and that’s what I mean by ‘skill.’ TBH I’m not sure we’re clear enough with our students about this distinction, and I think we tend to assume our job is just to deliver material / transfer knowledge, and it’s their job to master the skills (subconsciously? From elsewhere? Who knows).)

There are a few fixes I can think of for this pickle. The first is to build in some form of continuous summative assessment, using spaced repetition and the frequent release of grades (and feedback) to help students learn. Although this makes me shiver and reminds me of my schooldays, I see the attraction: at least it gives us a much more constant flow of data about performance (sorry, learning), and allows us to calibrate teaching accordingly.

The second is to structure the course around a portfolio of work, which is not assessed during the course, but from which students draw material when compiling their summative submission after the course has finished. This time, the ‘data flow’ comes in the form of on-the-go formative feedback, either from the teacher or from peers (or both, for that matter).

My approach harks shamelessly back to my rowing coaching, and goes like this.

Back in the day, we’d spend weeks and weeks practising one minuscule aspect of the rowing stroke (think of those films of sprinters performing odd prancing drills in order to perfect their take-off). Then we’d switch and practise some other skill; and then much later we’d put them together, combine them, and have a crack at doing it at speed and under pressure. The point is that these ‘minuscule aspects’ were the disaggregated bits and pieces of the finished article: the individual rowing stroke, to be repeated several hundred times during the race; the start, the mid-race push, and the finish; the steering of the tricky bend; and so on. When coaching – sorry, teaching – the crew how to do each element, we’d explain in great detail where it sat in the overall scheme of things, and why it was important that it was done like this and not how The Crew From The Other Club do it, and so on.

And so it is with teaching political science. I take the final thing I want the students to do well at, and I think about all the sub-tasks behind that thing: scraping the literature, presenting and justifying a position, using and critiquing evidence, even taking notes. These are my ‘minuscule aspects’, which I then blend into the design of the seminar activities. We might, then, spend a seminar discussing a particular contention in the literature about a certain topic, where my aim is both to ‘convey material’ and to give the students a chance to learn, and refine, a particular skill – which in turn will feature in their eventual race (sorry, summative assignment). I make sure to explain how that nugget of material, and the accompanying skill, will be important further down the line.

Thinking about things this way has really helped me confront two interlocking challenges: how to handle a teaching-and-assessing pattern in which the latter is compressed into one instant at the end of term; and how to foreground the assessment without making the course all about the assessment (which, I fear, would just undermine the students’ deep learning of the material). One day I might summon up the nerve to screen a rowing race in the first week (Rio 2016 men’s four?) as I explain all this to the students…

3 Replies to “On rowing and teaching”

  1. This reminds me of the distinction between ‘assessment *for* learning’ (where the assessment is what the students do in order to learn) and ‘assessment *of* learning’ (where the point of assessment is simply to measure what/how much learning has taken place)….

    1. Interesting. What are your views about whether/when the “assessment-for” versus “assessment-of” approach is more appropriate?

      1. Oh, well, personally I think that ‘assessment *for*’ is always what we’re aiming for – students learn by doing activities and getting feedback, preferably iteratively, so they improve at whatever the core skills are. ‘Assessment *of*’ is often a necessity: students need to get a mark and that mark needs to go into the University system so they can progress and graduate and whatever. To me, this is the least interesting thing about assessment and ideally should be minimised because once students are focusing on the mark, they stop focusing on learning.
