One thing that has been really good about being part of ALPS has been the community around it.
For example, this week’s post is inspired by my former colleague and general force of nature, Maxine David, who pushed out this thread the other day (click to read it all):
Essentially, Maxine’s asking the same question that I think we’ve all asked at some point: what are we trying to achieve in our classes?
As you’ll see from the responses to the thread, I started to sketch out a position, but I’d like to expand on it here some more.
Amanda and Nina have long championed failure in the classroom as a valuable learning experience for students. Their argument – which I also hold to – is that hitting nominal targets is good, but not a complete education: not hitting them encourages students to reflect more on the process of learning (and application) that they’ve undertaken. Think of it as being analogous to playing a game, where not hitting the (rather different) target makes you go back and try again, with the thought of why it didn’t work before in your mind.
This model requires us to acknowledge that learning has multiple targets.
Yes, we want students to know stuff and know how to do stuff (which we can catch with summative assessments), but we also want students to know how to know all this. Becoming a reflexive learner and a critical thinker is a core skill for building capacity to learn throughout the rest of one’s life and it’s a skill that has no easy metric, nor any obvious threshold.
And thresholds were my first thought when I read Maxine’s thread.
When we assess, we typically look for evidence of meeting some threshold: does the student demonstrate that they know enough about X or enough about how to do Y? Those thresholds are present in our grading and those institutional matrices that benchmark us all to common standards.
Maxine rightly points out that we cannot really ever separate out the formative and summative elements of assessment: if we genuinely value the development of reflexive learning, then we absolutely shouldn’t be trying to separate them out in the first place.
But this position is vanishingly rare in academia these days. Yes, I tell my doctoral students that a good viva should see every single person coming out of the room having learnt something, but even that’s not a given.
Easy as it would be to blame the pressures of QA culture and metrification for all this, we also have to recognise that we often don’t create opportunities within our own classes. Even if we aren’t allowed to make adjustments for support received (as Maxine suggests), we should still be trying to instil a culture of collaboration, reflection and development among our students and between them and us.
In so doing we might start to reclaim some of that learning opportunity that will serve everyone in the class well, wherever they are and whatever they do.
You might have seen that England is going through some very pointed discussions about racism, following the European football championships. This tweet from one of the national team players exactly captures the point:
As you will have noticed, there’s a pandemic going on.
I mention this mainly because I’ve spent the past week listening to/watching/reading people give their opinion/knowledge/expertise about what to do with Covid and I realise I’ve become a bit inured to the difference between opinion, knowledge and expertise.
The trigger came a few evenings ago, when someone I know to be usually pretty good about these things got drawn into questions from a presenter about whether certain government advice was sensible or not. To be clear, pandemics are not their research area at all.
Mostly, they just about hovered on the right side of framing things in terms of what they did know about, but it was a pretty close shave.
Perhaps it was that closeness that made me reflect again about all the times we have people in the media opine on such questions when they have no evidence base and (seemingly) little understanding of relevant issues.
Your country might have the same problem.
This matters for academics because we might become part of a response to this. Our work is grounded in a degree of rigour, and presented with a good understanding of what we do and don’t know and what caveats might apply.
But it’s not always that simple.
Most obviously, the more we specialise, the more we know that we don’t know; or rather, the more we know that someone else could give a more useful answer.
Unfortunately, the media doesn’t work like that. Journalists know who is likely to be available and worth contacting, based on previous experiences, to generate impactful comment. Your self-discipline might not be someone else’s.
Of course, the counter-argument to that is that if you decline to comment, then some motor-mouth might fill the gap, so shouldn’t we be trying to avoid that?
Some thoughts occur here.
Firstly, we all need to remember that academic work has public value in some way. Research is not only for research’s sake, but helps to advance human understanding. That implies that we all have some societal responsibility to translate our activity back into forms that can be shared more widely.
Secondly, we should all be making sure that we feel suitably prepared for engaging with the media. No, it’s not always the most pleasant experience, but I would say that the large majority of journalists have a genuine interest in understanding things better and want to know what you know. If your institution offers media training (and it probably does), then get it: it can take the edge off your concerns.
Thirdly, while we need to respect the limits of our knowledge and expertise, that shouldn’t stop us helping the media to work through their questions. In particular, we can help journalists understand that the way they frame debates might be misleading: walking them through a different way of looking at things can be as important for them as getting a quote. So think about what you might bring to the table, especially if the initial contact strikes you as an odd way of approaching a topic.
Finally, if you can’t help a journalist, then direct them to someone who can. If we want to break journalists out of their eco-systems of contacts, then we are really well-placed to help them do that. So reply with a polite decline, but also a couple of people to try instead. Think about the diversity of those names too, be that for gender or seniority.
If we want the media to do a better job of discussing issues, then we have to play our part too.
This past weekend, I went for a walk with my son. Older readers will remember him from his Lego days: he’s a bit taller now.
Our walk went from central London, along the river Thames, heading back to our home. If we’d completed it, then we’d have hit 100,000 steps and walked about 50 miles (80km).
Spoiler: we didn’t complete it.
But what’s this to you?
As I sit here now, with my feet still somewhat tender, I’m thinking about motivation and where we get it from. That applies as much to the classroom as it does to walks.
In the latter case, we tried to do it for a variety of reasons. These included:
100,000 is the highest badge that Fitbit offer for daily step count;
Last year, we did some similar long walks and only got up to 60,000 steps;
Covid – there’s been a lack of other things we might do;
It’s nice to sometimes turn the chat into action;
It’s nice to have a joint thing to do, together;
We each think we’re fitter than the other one.
Now, none of these are particularly good reasons to wake up before 4am to catch a train to walk for 10 hours solid, but they were our reasons.
To use more formal language, there’s a mix of intrinsic and extrinsic motivation going on here: more the former than the latter, since we’re not usually that bothered about Fitbit badges. Essentially, we did it because we wanted to do it.
A comparison with our classes is instructive here, because while students are typically not obliged to take a degree programme, they often are bound by course requirements, our syllabus and class set-up to do much as we say.
Even when we try to use active learning, we have to recognise that the boundaries of that are quite narrowly defined. It’s really rare to be offering up something that is broadly unstructured for students to make of it as they will.
For me and my son, our aim was roughly to see if we could get to 100,000 steps in a day. Last year’s effort had involved walking around and around our house (seriously), so we wanted to try something less demoralising. But here’s where we can see the two sides of such potential flexibility.
On the one hand, we could pick any route, done in any way, at any time. Yes, I suggested the flattest possible option, and one with multiple bail-out points (luckily), but we might just as well have headed the other way from our house and made for the seaside. That scope to try whatever we want can be very liberating, and also enlightening, since our discussions beforehand made us think a lot about the various factors we’d need to consider (food, drink, loo stops, weather, scenery, maybe walking at night, etc.).
But that freedom can also be inhibiting. To get to the starting line (in both senses) requires much more engagement and reflection. For some that would be too much, too daunting.
In the course of getting ready for this, we both did some research. Mine leaned more on online resources about managing feet during long walks; his more on YouTube videos. One thing I did find was a site that organises events, including a river Thames walk of 50 miles. If we’d signed up for that (it’s in a few weeks), we’d have been supported all the way, with proper meals and stewards and a broom wagon to collect us.
But it wouldn’t have been the same. And it wouldn’t have been what we wanted.
Which might be the final point to consider: your motivation isn’t someone else’s.
Look back at that list, up top. It’s my list, not our list. I think it’s not so different from my son’s but that’s for him to know and to articulate (he declined the option to co-author this post). But that difference didn’t stop us from doing the walk, or from enjoying it, or from learning about our current limits.
Maybe the lesson here is that everyone comes to learning experiences with their own priorities and motivations, and as educators it’s for us to work with that. Like a good undergrad, I could note that education literally means ‘drawing out’, which is what this is all about.
If we can recognise what everyone brings and if we can create spaces that resonate with those differences, then we can all gain from it, both students and educators.
One to think about, as we wait for the train back home, if only to take our mind off our soles.
As we hurtle towards the summer ‘break’ and everyone remembers the deadline they cut you some slack on, it’s also a time when we’re often thinking about next semester.
For those of you with interests in making L&T a bigger part of your work, one obvious route is researching and publishing on what you do in the classroom.
Often that might be about trying out something different with students, which you think generates benefits for their learning, and might be of use to others in the same situation: we’ve published lots of such pieces from our guest authors here at ALPS.
While the thing you’re doing is the obvious centre of attention, the second element – whether it works – sometimes gets a bit lost (speaking as someone who reviews a good number of journal submissions in this field), so I thought it would be useful to think a bit more about this.
Measuring learning turns out to be a less-than-simple task: if it weren’t, then we’d all know about how to do it. The problem turns in part on the multiplicity of things we might consider, and in part on the difficulty in making any accurate/meaningful measure of these things.
Learning is not simply about knowledge, but also skills, social capital and much more. Each of those itself has many sub-elements, not all of which might be immediately obvious to anyone, nor equally important to everyone. Likewise, learning happens at lots of different speeds, so do you focus on the immediate gains, or something more long-term?
The (faint) silver lining to this particular cloud is that everyone’s in the same boat. I’m yet to see a comprehensive evaluation tool that I could recommend to you, even though there are a number of really good ideas out there (for example, this or this (which makes the good point that students’ perception of what they learn isn’t the same as teachers’ measure of what they learn)).
The important thing here is to be mindful of this from the start of any pedagogic research, embedding your measurement protocol into the design from the start, rather than hoping it’ll come to you later: a short post-course questionnaire about whether your students liked the thing they did isn’t likely to suffice.
That means thinking about what elements you focus on measuring (and why), then on how you’ll measure them. In particular, think about whether and how you can have a control for your teaching intervention: if it’s not practical to have another group of students not doing it, then will pre/post testing cover things robustly enough? Just like your other research, try to control your variables as much as you can, so you can be more confident about isolating effects.
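As a minimal sketch of what a pre/post comparison might look like once the marks are in, here is a paired gain calculation in Python. To be clear, the scores below are invented for illustration, and a simple paired t-statistic like this is only a starting point, not a full measurement protocol:

```python
import math
import statistics

# Hypothetical pre- and post-intervention scores for one class.
# These numbers are made up purely to illustrate the calculation.
pre = [52, 48, 61, 55, 43, 58, 50, 64]
post = [58, 55, 60, 63, 50, 66, 49, 70]

# Per-student gains: pairing each student with themselves is what
# makes this design usable when no separate control group exists.
gains = [b - a for a, b in zip(pre, post)]

mean_gain = statistics.mean(gains)
sd_gain = statistics.stdev(gains)

# Paired t-statistic: mean gain divided by its standard error.
t_stat = mean_gain / (sd_gain / math.sqrt(len(gains)))

print(f"mean gain: {mean_gain:.2f}, paired t: {t_stat:.2f}")
# prints: mean gain: 5.00, paired t: 3.74
```

Even a rough sketch like this makes the design point concrete: you need the pre-scores captured before the intervention starts, which is exactly why the measurement has to be planned from the outset rather than bolted on afterwards.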
And it also means asking for help if you’re unsure. Your institution probably has a great bunch of people centrally who work on just these kinds of projects and who can give you excellent advice and support. Likewise, you can ask us here or online about specific ideas: it’s worth looking back at our posts for suggestions and colleagues who’ve worked on similar things.
Do all that and your pedagogic research will be off to a flying start (which might be the only flying you get to do).
This is something that’s been a question for me for a very long time and I’m still not sure that I know what the answer is.
On the one hand, you would reasonably expect those who teach you to know what they’re teaching about; otherwise how could you feel confident that what they teach you is correct?
But on the other, given the wide range of transferable knowledge and skills that those who teach typically have, are we really saying that you need to have direct and intimate knowledge of an area before you can instruct?
This might feel like semantics about the gap between ‘knowing’ and ‘being a specialist’ but it’s a challenge that programme leaders and university departments regularly have to juggle.
In my case, I have taught a surprisingly wide range of courses – from terrorism studies to research methods to theory of knowledge to German politics – about which I knew little more than the basics before I started. Conversely, I’ve never run a course on euroscepticism, on which I wrote my PhD and have published extensively, nor on Brexit, even though I’ve done a big pile of work helping non-student audiences to understand issues and dynamics.
There are good reasons for that: as part of what was a small department, we all needed to chip into covering particular subjects at particular points, because it was necessary for the overall needs of the programme. In the case of my research specialisations, there wasn’t enough student interest to justify running a course when other options were available.
Personally, I was quite happy not to spend even more of my life talking about Brexit in recent years than I already was, but that might just be me.
But we come back to the dilemma: how much do you need to know to teach?
In the most extreme case I had, the terrorism course, I embraced my limitations by placing students in a very central role in the class. After a couple of weeks of introductory debates, they each picked topics of interest to them, I grouped them into cogent areas and then the rest of the semester was spent with student-led sessions, with all of us reflecting and developing our understanding as we went. I was very open about how much I knew, and flagged up useful readings as we went, allowing everyone to get meaningful feedback on their topic before using it to produce a final paper on that subject.
For the two years I ran that, it worked. The external examiner (who did know more about the subject) was very positive about both approach and content; the students gave positive formal and informal feedback; and I didn’t see any significant problems. But still the doubt about whether it was acceptable lingers: certainly it wasn’t as rich an experience as could have been offered by someone with more of a background in the subject.
The tension here is between knowledge and teaching.
Yes, you need to know your way around a subject, to understand the connections within it and beyond it. But you also need to be willing and able to share that with students.
I’m guessing we’ve all encountered the great name of a field in person, giving a plenary or out in the world, clearly highly knowledgeable but unable to communicate that in terms that are accessible. I still have vivid memories of a 2-hour (!) plenary speech by a highly eminent authority at a conference that told me (and probably anyone else, to judge from the slumped forms around me) nothing of any use, even though it touched on several relevant areas.
This is not at all a call for enthusiasm/drive over ability – there’s more than enough bullshit out there as it is – but it is to ask for recognition that effective teaching cannot be reduced down to ‘just knowing stuff’: pedagogy matters, especially the willingness to understand learners’ needs.
That’s a big part of why I really like active learning in all its variety: rather than me talking about what interests me, I get to talk about what interests them, contextualising and developing it into a broader set of knowledge and skills that can be of use down the line.
But I’d like to read your views on this, because it still leaves a lot very open to discuss.
I had my first writing workshop last week. We were sharing bits of text as part of our work towards a new Masters programme in IR, mainly to make sure we were on the same page on how we go about communicating with our students.
To recap, our programmes are distance-learning only, so it’s a mixture of textbook-like text, online activities, audio and video elements, all bundled up on an online platform. The textbook-type elements are pretty central to all this, as the main location of content delivery, so having a style and structure that is accessible and appropriate is really important.
Obviously, I struggled.
As much as I write a lot of text, it’s for rather different contexts. Here I’m essentially writing to myself and an imagined community of colleagues: it’s very informal and variable in its structuring and content, not least because I can always write another piece next week to unpack anything that didn’t work out now.
Both journal articles and practitioner documents like briefings have pretty well-specified audiences too, so it’s relatively easy to slip into the conventions of those genres.
But here I’m trying to think about creating text that sits within a broader package of content, co-authored with half-a-dozen other people, all going out to a very diverse student body, who’ll be consuming it at distance.
Part of the challenge is finding a voice that works not only for yourself and the student, but also for the other authors. A striking outcome of the workshop was thus having to think about both the substantive content and the register you adopt.
Right now, it’s the latter that is giving me pause for thought, since I’m towards the more relaxed end of the spectrum. Yes, I can communicate the content clearly enough and at a level that feels appropriate, but the way I do that sits rather awkwardly with others’ texts.
Of course, some of this is down to preference. Some people don’t like ‘you’s and ‘I’s in their academic writing, others can’t stand slang (or oblique references to memes). My personal preference has always been to try to keep things as simple as possible and to draw people in with things that might not be the most obvious ways in.
That’s all legitimate, but still doesn’t get to an answer about how to draw that together with other approaches, so that students aren’t experiencing radical changes in voice and style. Which is why the programme leaders are now writing author guidelines about just such questions for us to discuss and agree.
And this is another example of where this method of teaching is perhaps more rigorous than in-person equivalents: all programmes taught by more than one person have this multiplicity within them, but it’s very rare that we explicitly sit down to discuss whether and how that works for students.
In-person teaching tends to leave the question at the point of the value of diversity, when we might usefully think more about the challenges it creates too.
Something that’d need more than the one workshop, I’m guessing.
Exciting times here, as the second family member undertakes training in Learning & Teaching. No, not the kids (although maybe it’d make a nice birthday present for them), but my very talented SO.
After many years working in pure research roles, she’s now starting to take on some teaching, so she’s picking up her institution’s introductory package, which is prompting some really good conversations at the dinner table.
One issue that’s coming up – for me probably more than for her – is the question of how one gets from the L&T training session to the on-the-ground experience in your classroom.
This was started by a discussion of assessment and its central role within the learning process. As my SO noted, assessment is always formative and should be always linked to the feedback and adaptation process.
For her, one practical problem is that much of her opportunity to teach comes from giving guest lectures on modules run by other people: we’re talking here about specialised Masters programmes with lots of such input from research experts, teaching to their particular expertise.
Obviously, that can be great for students as they get to interact with a wide range of leading people, but it’s pretty rubbish for any formative development using assessment, per the training session. My SO might only be spending a couple of hours with the group, leaving little or no time to adapt content or pedagogy to their specific needs or interests.
That raised the more general problem of the translation from theory to practice that I’ve mentioned.
As someone working on simulations and other forms of active learning, I’ve always been rather sensitive to this, since the constraints under which I work have been rather obvious. The shape of the teaching spaces I’m in; the number of students I’ve got; the flexibility of the timetable: all of these impose some really consequential limits on what I can do.
And beyond that there is the entire process of Quality Assurance. My old negotiation module ended up using a reflective writing assessment partly because it aligned well with my learning objectives, but also because it worked for our second marking/external marking regime. My initial thoughts about something much more immaterial – me marking what they did in negotiations, or getting them to negotiate with me for their grade – might have made as much sense pedagogically, but they would have fallen at the hurdle of our L&T committee, who would (reasonably) have asked me how we could be confident about the equity and consistency of such approaches.
Yes, QA has flexibility in it, just as one can find flexibility on capping numbers or working with your timetabler*, but there are ultimately going to be limits to this, as well as paths that are better-trodden than others. As I’m guessing you’re finding with the Great Jump Online, there’s an institutionalised tendency to regularise our offerings to students, be that in terms of contact time or format or assessment or whatever.
And this is where I recognise that this is something I got relatively little training in, just as I suspect most of us did. L&T development understandably has to make sure it concentrates on the fundamentals of good pedagogic practice, often matched up with a bunch of ‘handy hints’ on stuff to do in the classroom. But that misses the discussion about the articulation between those two levels.
How do we move a model of alignment or of rolling assessment/feedback into our class, when that class is not a blank sheet (even when it might not yet exist)?
I’ve tangled with this before, and even made a simulation-simulation to try and work through some key steps. It’s not great, but it does suggest a way forward.
The central concern has to be the learning objectives that we establish, be they within the context of a programme, or a module, or even just a session. From that, we then have to be willing to flex practice around our constraints to find a way that meets those objectives.
To do that, we need not only a good sense of our purpose, but also of the range of options that are open to us, coupled to a willingness to try them out. Which all sounds a bit daunting.
But this is maybe a good point to remember that students are generally willing to follow us where we want to go (pedagogically), as long as we can show that we’ve got a logic behind it. Bringing them into our design and delivery process – in part through that continuous feedback loop – can help to smooth out the edges and make it work better for everyone.
Whether and how that pans out for my SO, I’ll let you know.
When it comes to culture (and work), I’m a neophile.
The lure of the new is understandable: pushing into experiences you’ve not had before, discovering things that engage in their novelty, making you reconsider what you already know.
That doesn’t need to be anything particularly radical (as those who’ve met me can attest, I’m scarcely living on the bleeding edge of existence), but a willingness to make steps into new territory can open so much of value.
Which is why this past week has been something of an oddity for me.
First, my ‘move’ into my new home office has stuck some old favourites from the book shelves back in my line of sight, reminding me to re-read them. Which I’ve been doing.
And now, a first family weekend break away from home in 18 months has resulted in a day of watching (re-watching for the adults) a couple of seasons of Line of Duty with the kids as the rain pelted down.
In both cases, the experience of revisiting these materials has been a very positive one.
Partly, it’s the rediscovery of things that I’ve consumed and internalised over the years, but haven’t necessarily focused on too hard. That comes with an understanding that I’ve taken elements or narratives from that and fed them into the general morass of ‘stuff’ that fills my head. The sharp images of distinctive moments or examples might be relatively clear and (usually) accurate, be that the reformulation of the notion of time or the panicked chase to the flyover.
But much more it’s been about the realisation of what I had forgotten or just never noticed first time around.
Yes, for a TV series that’s had several more series since, the revisiting of older episodes benefits from hindsight (you know much better where to look), but so too with academic literature.
The last time I read many of these books was maybe 20 years ago, when the world – and my world – looked rather different. The things I’m interested in academically might have some similarity to back then, but pretty obviously things have moved on. Re-reading sources comes with that different lens and opens up new points of understanding.
This is, I know, somewhat mundane, especially if you’ve been in the habit of doing this already. But for me it has been a reminder that part of moving forward and exploring new things is the necessity to also check back in with what’s already been done. If nothing else, it helps to reduce the chances of having to reinvent the wheel; something that’s as common in popular culture as it is in academia.
The great thing about being a historical institutionalist is that – for the vast majority of the time – one’s assumption that things aren’t going to change pays off.
How it was yesterday is how it is today is how it’ll be tomorrow. Simple.
Until it’s not.
And right now I’m in the middle of a big rupture which, even though I’ve known about it and planned for it for some months, is turning out to be more of a pain than I’d thought.
Moving jobs is hardly the most unusual of life events, but it has underlined to me the need to plan carefully in managing your digital footprint. After a decade of blogging, not only here on ALPS but also on my departmental pages (where I seem to have left a less-than-ideal last post that I now can’t delete), there’s a question about how to (re)make a place for me to share my ideas.
And this is the central point: social media is about sharing and discussing, so you need to be able to make contributions and others need to find them and respond.
In the case of ALPS that’s been a long process of building up an audience over 10 years, to the point that we have a pretty good profile within our community (if still far short of that of professional associations), mainly through word-of-mouth as we’ve passed through endless conferences and talks. And as long as Chad keeps on bringing in the support to pay for the hosting, we’ve got a long-term proposition.
Universities are long-term propositions too (pace Chad’s regular posts to the contrary), but unless you’ve got a very particular arrangement with your VC/Provost/whatever, your relationship with that university might turn out to be rather short. So are institutional blogs the best bet?