Explain and justify

Students often think that there is a ‘right’ or a ‘wrong’ approach to every aspect of an assignment, a belief that can lead to intense anxiety during assessment periods.

There are, of course, areas in which this is true. Some things are simply ‘right’ or ‘wrong’, depending on the context and form of the assessment. If a department insists that students submit with Harvard-style referencing, for example, they should cite in Harvard rather than in footnotes.

However, students often take these concerns to a degree that limits their potential for exploring exciting and original cases or approaches, feeling the need to double-check every detail before committing: “is it right to use China as a case study for this essay question?”, “is it suitable to use liberal intergovernmentalism as a theory in my essay?”, and so on. To students worrying about whether their approach, case, or method is the ‘right’ one to employ in completing an assessment, I usually reply with the simple advice to ‘explain and justify’.


Thinking about the open exam

It’s assessment season (at least here at the University of York) and so our students will, one hopes anyway, be getting stuck into preparation for their upcoming summative assessments. For many students, this will mean getting ready to take open exams.

The open exam format was a bit of an oddity before the pandemic. Back then, taking notes into an exam meant a tiny, rolled-up piece of paper hidden in a sleeve, and if that piece of paper was ever discovered, the Academic Misconduct Authorities would be on the case in a flash. However, once the pandemic made it impossible to gather dozens of students in one place for an exam, and therefore to police their behaviour, the open exam format quickly came into vogue and became institutionalised.

To quickly specify the differences in format, I think it’s worth highlighting three different types of exams:


Participatory research as active learning

Over the last semester, I have been running a research project, funded by the University of York’s Learning and Teaching Fund, exploring assessment norms and innovations utilised on Politics and International Relations degrees.

The project has been my first foray into participatory research with students. Throughout the project, six students took on the role of ‘Student Partners’ to help design, deliver, and conclude the research activities – activities that included running focus groups with other students. It made sense from a methodological standpoint to have this level of student involvement – after all, I was hoping to gain a student-centred understanding of the challenges and opportunities involved in innovating assessment practices.

Last week, we had our final research activity – a half-day workshop with the Student Partners to discuss research findings and to conclude the project. We also had some time to discuss the experiences of the students in collaborating with staff and each other on the research project. That discussion was an eye-opening one and made me keenly aware of how beneficial this kind of hands-on research experience can be for students as a form of active learning.


Guest Post: Feminist Pedagogy within Constraints: Teaching Reflective Writing in a UK Higher Education Institution

Dr Cherry Miller
Dr Jenny Morrison

Today’s guest post is by Cherry Miller from the University of Helsinki and Jenny Morrison from the University of Glasgow.

Assessments are a core area for feminist teaching. Traditional assessments such as the essay or exam can reinforce gendered and other hierarchies in education by favouring ‘masculine’ forms of learning that prioritise dispassionate, objective expression. Thus, feminists have called for greater diversity of assessments in general, and for assessments that value the personal alongside the ‘objective’ political in particular. While feminists engage with a range of assessments, the reflective journal (RJ) has become commonplace on feminist courses.

Nonetheless, research cautions against the assumption that ‘alternative’ assessment automatically fosters more effective learning and teaching. Rather, all academic assessments include implicit expectations and exist within the constraints of the academy. That the reflective journal remains a less familiar – albeit growing – format in higher education means there can be greater uncertainty regarding the expectations of such an assessment. Therefore, we believe the drive to diversify assessments leads to certain tensions for feminist pedagogy.


Chance, Skill, and Grading Classroom Games

One academic hill I periodically die on is my adamant opposition to assigning grades to classroom games.

A game’s win conditions are based on varying blends of chance versus skill. Pure chance games (such as roulette) are simply bets against which players cannot build effective long-term strategies. Monopoly, for example, is almost pure chance: stripped of the optional auction rules, the most effective strategy for winning Monopoly is consistently rolling a 7 with the two dice. Barring weighted dice or building a dice-rolling wrist over 10,000 hours, winning or losing Monopoly comes down to the luck of the dice. It might be a good game for illustrating wealth inequality in the classroom (which was the original designer’s intent), but if you grade students based on winning rounds of Monopoly, then you’re grading them on almost pure, uncut chance.
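
If you want a quick sanity check on that claim, here is a minimal Python sketch (my own illustration, not from any Monopoly rulebook) that enumerates the 36 equally likely outcomes of rolling two six-sided dice:

    from collections import Counter

    # Tally each possible total across all 36 equally likely two-dice outcomes
    totals = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

    # Print each total with its probability and a simple histogram bar
    for total in sorted(totals):
        print(f"{total:2d}: {totals[total] / 36:6.1%} {'#' * totals[total]}")

Seven accounts for 6 of the 36 outcomes (about 16.7%), more than any other total – which is exactly the ‘luck of the dice’ at issue.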

On the other end are games of pure skill: think Go or Chess. In a game of pure skill, masters will almost always defeat amateurs (note that I said “almost always”). In classroom games based on pure skill – especially without repeat play opportunities – your most prepared students will not just almost always defeat your least prepared students, they will almost always crush them.

A well-designed classroom game leans heavily towards skill, as the game presents students with an opportunity to demonstrate their knowledge. But to avoid the “crush them through pure skill” problem, the game should include elements of chance. Students should feel as if they can win all the way up to the last minute of class. Chance cuts both ways, though: students can still lose at the last minute to a close competitor, even with well-prepared and otherwise effective strategies. Grading an outcome that rests on even a fraction of chance has a chilling effect on students taking risks and diminishes their sense of course mastery. Losing without a grade penalty encourages students to reflect on future strategies and thus remain engaged with the material. But with a grade applied? All the students might remember is that they received a B- for the course because “they lost a stupid game.”

Instead, I have my students write short reflection essays (say, 2-3 pages) about the course learning objective that was measured in the game: what the game’s win condition was, their processes for crafting a strategy to achieve it, and how they either overcame obstacles to succeed or what they would change if they failed. You can also use reflection essays for pure chance games, such as the above-mentioned Monopoly, for students to reflect on what the game was intended to illustrate (or ask, “what would you have done differently if you could have applied a viable strategy?”). I’ll do a longer post in the future on debriefings and after-action reports.

All that said, I do know educators who place small point values on their games: negligible for the student’s overall course grade (think 10 points in a 1,000-point course), yet enough to inspire a sense of meaningful competition. This is particularly useful in games with more than one winner or multiple proportional win conditions, which encourage students to scrap over every last point; they might lose but still walk away with 5/10 points. Even knowing that, however, I want my students to remember the game’s lesson points, not the game’s grade points.

Generative AI changes teaching and learning: how to protect the integrity of assessment

This academic year, the UCL Centre for the Pedagogy of Politics (CPP) is hosting a series of online panel events. Our first event, on 30 October, was on the theme of ‘Using technology to teach politics’. In this guest post, one of the panellists at that event, Simon Sweeney (University of York), offers further reflections on the challenges involved in higher education’s embrace of generative AI, where tools such as ChatGPT call into question issues of authorship and have profound implications for assessment.

A few years ago, we were worrying about students using essay mills, a form of contract cheating that plagiarism detection software struggled to identify. The Covid-19 pandemic and online delivery coincided with a reported increase in academic dishonesty (AD). The arrival in late 2022 of generative artificial intelligence (GAI) chatbots like ChatGPT posed a further challenge to the integrity of assessment.

Universities realised that banning chatbots was not feasible, as AI has become an established feature in our lives and graduate employment. As educators, we need to respond positively to the opportunities AI presents, recognising its benefits and assimilating AI into teaching and learning practice.

This means developing strategies that accommodate students’ use of GAI while protecting assessment integrity.


Little by little: Challenging awarding gaps in UK higher education

We have another great guest post by Jeremy F. G. Moulton at the University of York!

The Financial Times recently published an analysis of data from HESA (the UK’s Higher Education Statistics Agency) which revealed a troubling trend in UK universities. As the analysis’s headline read: ‘Non-EU international students at UK universities less likely to get top grades’. The data was clear: undergraduates from outside the EU were twice as likely as UK students to receive a lower-second or third-class degree in 2021-22, thereby missing out on the upper-second or first-class degrees that employers most favour.

This trend is especially concerning given the significant growth in the number of non-EU international students at UK universities in recent years, with almost three times as many studying in the UK today as in 2007. With these students making up a larger proportion of those in our classrooms, there is a need to ensure that universities are not providing them with a second-class education.

The problem with awarding gaps is not, however, limited to the difference in degree outcomes between non-EU international students and EU and “home” students. There is a wealth of data already covering racialised awarding gaps, socio-economic awarding gaps, and disability awarding gaps, for example. 

The question is – what can be done to help challenge these trends?

I work at an institution which has the ambition of ‘eradicating’ awarding gaps by 2030. This has given those working on teaching and learning a serious opportunity to engage with the issue, to understand the challenges, and to make some early steps to address those gaps.

Of course, there are many factors that shape awarding outcomes where we have limited power to respond. Structural issues will often be the deciding factor in, for instance, whether students have to take on part-time work to fill the increasing gulf between the costs of student life and the student loans provided.

This is not a niche issue: over half of undergraduates in the UK are now in part-time employment. Some universities are already experimenting with compacting students’ timetables so that classes cover only three days of the week. This gives students clear days for paid work and, hopefully, ensures that they don’t miss any teaching. This compartmentalisation of the week might also benefit academics by providing set days for research, writing, and tackling that ever-growing pile of emails.

However, there are also opportunities for individual, course-level initiatives to achieve the ambition of combatting awarding gaps.

The first opportunity is to reflect on how we are assessing our students. Optionality, in particular, could be an effective path forward. Letting students select from a range of assessment formats, tailored to each module, would allow students to choose the assessment that best suits their learning style and the skills they are seeking to develop through their studies. A recent report on optionality in assessment highlighted its potential to limit awarding gaps, recommending that ‘Educational institutions should prioritise the introduction of diverse assessment formats to explicitly address accessibility and concerns about fairness, ensuring access to necessary resources and skills development to prevent the unintentional widening of awarding gaps’.

The second opportunity is to take the time to consider the range of cultures and approaches to learning present in our increasingly diverse classrooms, and to engage students on these matters. It is easy to take for granted that the approaches to teaching and learning we were taught or socialised into are in some way the ‘standard’, but this assumption risks alienating some students.

For example, research on the learning styles of students from China has highlighted that they often have little experience of, or enjoyment in, some of the norms of active learning that mark much teaching and learning in Western institutions. Engaging all students at the start of a course with the hows and whys of the approaches to learning that will be used therefore provides at least some shared understanding of the expectations and benefits of getting involved in seminar discussions.

Finally, the opportunity to support students’ confidence in engaging with learning should never be missed. Learning students’ names; encouraging students to interact and learn with each other outside of the classroom; giving students the chance to speak individually (ending teaching sessions a few minutes early but sticking around gives students the chance to ask questions they might be self-conscious about asking in front of a group) – these are all relatively easy ways to give students more confidence in themselves, as well as in the learning process. This, in turn, has the potential to increase engagement and attainment.

In sum, the suggestions above boil down to one simple but key idea: that we should find space for empathy in our teaching. Certainly, there are a host of other pathways to begin combatting awarding gaps. Whilst sector- and institution-level data is vital in identifying problematic trends, it will take a more granular approach to understand the specific, course-level issues and responses. This is a big issue, but it is also one that academics can begin engaging with and combatting by instituting small changes.

Is it in the assessment criteria?

I zoomed into an excellent QAA event this week on implementing racially inclusive practice in assessment, based on a project at the University of Leicester in partnership with Birmingham City University and the University of Wolverhampton. I’d very much recommend that you have a good look at their report in detail. The take-home for me was that whilst an inclusive or decolonised curriculum and role models are incredibly important for engagement and for inspiring students, particularly racially minoritised students, if you want to tackle race awarding gaps, the solution is pedagogical.

Their approach is deceptively simple. They focused on making the hidden curriculum visible for all students and the tacit explicit: students understand exactly what they have to do to succeed, with no guessing games involved and clear documentation of what is required, and all assessment criteria are clearly and transparently explained, with examples of what good or less good work against those criteria would look like. One of the staff who had implemented the intervention very disarmingly said that he felt a bit embarrassed that he and his colleagues hadn’t been doing this already! He also said that although there was some initial resistance because of worries about ‘spoonfeeding’, the improvement he saw in the students’ work and the way they engaged allayed most of those fears. They found that by doing this, they could reduce awarding gaps significantly, improve student enjoyment and confidence, and also improve staff experience of teaching and assessing!

There is a lot to learn from in the report. Personally, I’ve already thought a lot about assessment criteria over the years, in an attempt to be inclusive, yes, but also because I just wanted to communicate with students what I wanted them to do, so they would learn better and I could read better work when assessing. As a less experienced teacher, I realised that I was marking work down for not doing things that I had never taught or told the students to do – which offended my sense of justice. But I knew I did want the students to do those things (such as make a coherent argument, evaluate evidence, use examples, write for an audience, use appropriate referencing), so it got me to thinking about how I might teach those things in the context of a disciplinary and substantive module. I came to the conclusion that having transparent criteria and spending some time making sure that everyone understands them would help me communicate what skills I wanted to see and how they might develop them. It turns out to be a practice that serves all students – not just those who have been previously disadvantaged, but also the ones who keep doing pretty well, but don’t know why.

As we know that tutors are often looking for different things in their students’ work, it usually doesn’t work in a discipline like ours to have generic or departmental criteria. It is an incredibly useful exercise to sit down, as a tutor, and write out what it is you are looking for in students’ work; doing so clarifies my expectations and helps me think about what and how I will teach. When team-teaching, working with other tutors to clarify not only what the assessment criteria are but also what they mean in practice is extremely useful for making sure that teaching and marking are fair and consistent. And working with students to help them understand marking criteria doesn’t so much help them ‘tick the right boxes’ in a spoon-feeding way but, much more importantly, helps them understand what skills they are learning and why.

For my current module, the assessment is a portfolio, and the assessment criteria are as follows (although I do allow students to negotiate them, which I won’t dwell on here but will come back to another day):

  • Depth of understanding of how politics and power are shaped by, and shape, the natural world
  • Ability to weave together ideas from the module into your own coherent text
  • Depth and originality of critical evaluation of your own relationship with the natural world
  • Ability to argue for your perspective on how nature should be governed or cared for, by whom and in what ways, including use of reasons and evidence
  • Appropriate selection of multimedia on the portfolio
  • Ability to write appropriately for a particular audience (please specify: eg visitors to an exhibition, policy-makers, everyday readers of narrative non-fiction)
  • Creativity of your work on the portfolio
  • Evidence of learning and development over time in the module
  • Depth of critical engagement with the module materials and readings
  • Extent of additional research and further reading
  • Craft of writing, including readability, spelling and grammar
  • Accuracy of bibliographic materials

I like the approach of starting with a noun plus preposition, like ‘depth of’ or ‘ability to’, because it demonstrates that these are skills one can be better or worse at in a qualitative sense. Thus, this is not a box-ticking exercise for students but rather an invitation to engage in deep and dialogical reflection on what, for example, the ‘ability to argue’ or ‘appropriate selection of multimedia’ really looks like in practice.

It’s very important not to stop with listing the assessment criteria, of course, but rather to make them the centre of an ongoing conversation. Here is my top tip: every time a student asks a question about the assessment, or about what ‘good work’ might look like, I bring it back to the assessment criteria. So, let’s say they ask, ‘does my portfolio need to be consistent week by week?’ I will say, ‘Is that in the assessment criteria? No. So, I won’t be looking for that. If it’s something you want to learn, that is, how to create your own consistent style, that’s great – you can do so and add it to the assessment criteria for your self-assessment. But it’s not necessary from my point-of-view.’


Or let’s say they ask, ‘Can my writing be more personal?’ I will say, ‘Is it in the assessment criteria?’ This is a longer conversation – the answer is, yes, I am asking them to give an account of their relationship with the natural world, so more personal writing in the first person is clearly appropriate. However, if they are using part of their portfolio to write for policy-makers, this can lead to a deeper conversation about what sort of writing, evidence and argument a policy-maker might be interested in. Distinguishing these different crafts of writing and talking about when they are appropriate, or not, is much more useful for learning than just prohibiting one of them without explaining why.

Other ways of getting students to engage deeply with the assessment criteria might include:

  1. Guided marking exercises where students mark examples of work with reference to the assessment criteria. Your aim here is to get them to focus on the criteria and not make the sorts of vague comments (‘this was not well structured’) that they have probably experienced themselves at times.
  2. Peer feedback where the focus is on giving each other feedback according to one or more of the assessment criteria.
  3. Formative feedback from the tutor where they have to tell you which criteria they want feedback on. (I have a form and they can’t have their feedback unless they tell me which criteria they are particularly interested in.)
  4. Self-assessment where students have to tell you how well they met the criteria, and where they could have done better.
  5. Any other discussion with examples of the criteria and what they mean, preferably iteratively, so they can improve over time.

Summative feedback should also, of course, refer constantly and closely to the assessment criteria. But by that point, this is just an exercise in demonstrating that you could be trusted to do what you said you were going to do. To return to the QAA discussion on racially inclusive criteria, the return of summative work should not be an opportunity to say: ‘Ta-DAH! This is what you should have done.’ What the students should have done should be clear right from the get-go, or else how can they learn how to do it?

Tools for Discussion: An Interpersonal Growth Toolkit

Today’s guest post is from Dr. Allison Anoll at Vanderbilt University!

Classroom discussion is a common pedagogical tool, but many instructors and students alike find themselves lost when it lacks structure. Equally problematic, standard approaches to grading participation that simply count how often students speak can (re)produce racial and gender disparities.

How can we, as instructors, lead effective discussions? How can we help students grow in their interpersonal skills while also ensuring classrooms are inclusive spaces?

In my small seminar classes, I use a tool for structuring and assessing participation that I call the Interpersonal Growth Toolkit. This tool relies heavily on a framework developed by Stephen Brookfield in The Skillful Teacher, where he argues that classroom discussion skills need to be taught just like any other learning goal. The Interpersonal Growth Toolkit provides students with learning goals for classroom discussion, tools for achieving these goals, and regular feedback from the instructor about progress.

Here’s how it works. I start by identifying skills I want students to learn and practice in the discussion setting. In my classes, that’s: 1) humility; 2) confidence; and 3) social science thinking. I provide students with a detailed list of tools for how they can practice each of these themes. For instance, to practice humility, students can,

  • Provide credit to classmates for ideas and inspiration by using their name in comments.
  • Ask the group for a moment of silence to slow the pace of discussion and allow you (and others) time to gather your thoughts.
  • Find a way to express appreciation for what is new, interesting, or challenging in the discussion. Be specific about what has helped you understand something new.
  • Make comments or ask questions that encourage others to elaborate on their ideas.

I then ask my students to identify which of the three areas—humility, confidence, or social science thinking—they are weakest in as a discussion participant. My students then develop an individualized growth plan by identifying 2-3 skills in their area of weakness that they want to try out in class. They write these skills on an index card that I hand back to them at the beginning of each class for the first two weeks. At mid-term, they write a brief reflection on how they think they are doing in their area of growth and what they want to work on for the rest of the semester. In combination with my own notes, I prepare mid-term feedback for each student about their performance in the three areas and tools to try in the second half of the semester. Students receive a final grade for their participation based on their growth and performance in each of the three areas.

This approach immensely increases the quality of classroom discussions. Not only do students and the instructor have a clear sense of what people are supposed to be doing during discussion time, but students themselves can become advocates of inclusivity in the classroom. Students who lean towards dominating a discussion are asked to think about how they can use their skills to draw others in; students who are nervous about speaking up are provided with skills for finding their space and ways into a discussion. Using this technique, I have seen students grow over the course of a semester, with quieter students getting bolder and more dominant students using their skills to build bridges between others’ ideas. In my seminar courses, it is the norm to hear from every student multiple times over the course of a discussion.

This approach and assessment are well suited to relatively small classes (fewer than 20 students). In larger classes, it is more difficult to build a community of trust that fosters vulnerability and to find enough time for all the students to practice their skills. However, this tool is suitable for any level of instruction: I use it with freshmen, seniors, and even new graduate students. Instructors should feel empowered to adapt the skills they want students to learn in discussion, as long as they also provide specific tools to complement those skills. Want students to practice intellectual curiosity as a core skill instead of humility? Develop a list of tools that students can practice in a discussion to build that skill. You can look at chapter 8 of Brookfield’s The Skillful Teacher for more tools to consider.

To implement this assessment, remember to include a learning objective in your syllabus that highlights interpersonal growth as a key goal of the course. I use: “By the end of the course, you will be able to demonstrate growth in discussion techniques including close listening, speaking with evidence, challenging with respect, and summarizing others’ contributions.”

Keeping track of student contributions and how they fit into each assessment category can be difficult to do on your own while also leading the discussion. When I use this tool, I hire an advanced undergraduate or graduate student to sit in class with me and keep track of student contributions. I use these notes to develop feedback and grades for my students.

You can find details about The Interpersonal Growth Tool Kit here. The document also includes a rubric. (In general, I’m a big fan of rubrics, but that’s a topic for another day.) Happy discussing!

On rowing and teaching

In my younger days I was a very keen rower. I caught the bug at university, and it took about another 15 years to really shake it off (and, TBH, I don’t think I really have, to this day). Towards the end of my time in the sport I lost a load of weight and switched into coxing – becoming the short shouty guy in the stern of the boat with the mic and the rudder strings – and from there got involved in coaching. And that, come to think of it, was one of my first ventures into teaching.

Now, rowing is a weird sport, in that the ‘training-to-competing’ ratio is hugely skewed towards the former. A quick comparison to rugby helps here (and may reveal some deep-seated jealousy of / animosity towards the egg-chasers). I had friends who played rugby who’d train one or two nights a week and then play a fixture every weekend – and then, true to the stereotype, spend a good few days recovering from the hangover. In contrast, we at the rowing club would spend nine months of the year training six days a week, then race frantically for about six weeks, and then have a month off.

What has this to do with teaching? Well, hear me out…

Teaching is a bit like coaching sport – or rather, for the time being, coaching sport is a bit like teaching. The coach of my friends’ rugby team could run them through a particular drill, or practise a particular tactic, on a Wednesday evening, and then (provided the oafs had remembered said drill / tactic) watch it in action that weekend. A review would ensue, followed by some tinkering in training the following week. The point is: the coach had a regular flow of data from his team about how well his teaching methods were working, how well the individuals and the team were executing what he had taught them, and how to calibrate all of this to get the best out of the team.

I, as a rowing coach, had no such luxury. We’d spend months and months training essentially in isolation, with very little contact with opposition (apart from the occasional accidental race against the much-derided Club From The Other Side Of The River), and it would be ages before we could see whether the crew could hold their own in an actual race.

A while back it struck me that this is quite similar to our teaching-and-assessment model. In my department, and indeed in many other political science departments around the UK, the pattern is much the same: we spend eight to ten weeks delivering material and teaching skills, and then, after a briefing and a cheery ‘Good luck!’, we send our students off to write their essays / policy briefs / to do whatever it is we ask them to do. Along the way, we have very little opportunity to see how well they’re learning the stuff we’re teaching them; and, for that matter, they have very little opportunity to find out how well they can do whatever skill it is we’re asking them to demonstrate.

(Before I get to the point of my piece, let me clarify something. There’s a conflating of ‘material’ and ‘skills’ here – and it’s deliberate. Back to my rowing analogy: I can explain to a rower, using a whiteboard and some video, how to perform a certain part of the stroke, and why it’s important that it’s done in a certain way, and that would constitute ‘material’. That rower could understand that ‘material’, but whether or not they can actually do that thing is another matter – and that’s what I mean by ‘skill’. TBH I’m not sure we’re clear enough with our students about this distinction, and I think we tend to assume our job is just to deliver material / transfer knowledge, and that it’s their job to master the skills (subconsciously? From elsewhere? Who knows?).)

There are a few fixes I can think of for this pickle. The first is to build in some form of continuous summative assessment, using spaced repetition and the frequent release of grades (and feedback) to help students learn. Although this makes me shiver and reminds me of my schooldays, I see the attraction: at least it gives us a much more constant flow of data about performance (sorry, learning), and allows us to calibrate teaching accordingly.

The second is to structure the course around a portfolio of work, which is not assessed during the course, but from which students draw material when compiling their summative submission after the course has finished. This time, the ‘data flow’ comes in the form of on-the-go formative feedback, either from the teacher or from peers (or both, for that matter).

My approach harks shamelessly back to my rowing coaching, and goes like this.

Back in the day, we’d spend weeks and weeks practising one minuscule aspect of the rowing stroke (think of those films of sprinters performing odd prancing drills in order to perfect their take-off). Then we’d switch and practise some other skill; and much later we’d put them together, combine them, and have a crack at doing it at speed and under pressure. The point is that these ‘minuscule aspects’ were the disaggregated bits and pieces of the finished article: the individual rowing stroke, to be repeated several hundred times during the race; the start, the mid-race push, and the finish; the steering of the tricky bend; and so on. When coaching – sorry, teaching – the crew how to do each element, we’d explain in great detail where it sat in the overall scheme of things, why it was important that it was done like this and not how The Crew From The Other Club did it, and so on.

And so it is with teaching political science. I take the final thing I want the students to do well, and I think about all the sub-tasks behind that thing: scraping the literature, presenting and justifying a position, using and critiquing evidence, even taking notes. These are my ‘minuscule aspects’, which I then blend into the design of the seminar activities. We might, then, spend a seminar discussing a particular contention in the literature on a certain topic, where my aim is both to ‘convey material’ and to give the students a chance to learn, and refine, a particular skill – one which will in turn feature in their eventual race (sorry, summative assignment). I make sure to explain how that nugget of material, and the accompanying skill, will be important further down the line.

Thinking about things this way has really helped me confront two interlocking challenges: how to handle a teaching-and-assessing pattern in which the latter is compressed into one instant at the end of term; and how to foreground the assessment without making the course all about the assessment (which, I fear, would just undermine the students’ deep learning of the material). One day I might summon up the nerve to screen a rowing race in the first week (Rio 2016 men’s four?) as I explain all this to the students…