Yesterday, I was invited to talk to a colleague’s class in another university. As always, it was good to get out and about and see other places (even if that place was a tad unprepossessing), but it was the discussion over a sandwich afterwards that was most enlightening.
Discussion ranged over a number of topics, until one of the party said that they were always surprised by how conservative students have become, in the sense of disliking anything other than conventional, lecture-and-seminar formats. This came after someone else had related how they felt obliged to provide the more passive lecture content in addition to the active learning, so that their (final-year) students wouldn’t feel too deprived of knowledge.
Very occasionally, different parts of one’s life collide with each other, often after you’ve had a bit too much to drink, which only contributes further to the unravelling of the evening.
In this case, no drinking (except of cups of tea) was involved for me this week, when I attended a ‘wargame’ of the British renegotiation of European Union membership. Run by Open Europe, the day had two parts, each intended to cast some light on what might happen in, respectively, the current renegotiation and then in the event of a ‘no’ vote in the referendum.
This was a high-rent production. Fancy City venue, live-streaming of the entire event, plus (most importantly) roles being played by People (former ministers and ambassadors): the British were represented by a former Foreign Secretary, Malcolm Rifkind, and a former Chancellor, Norman Lamont. All of this built on a previous event in 2013, and given the number of TV camera crews from across Europe, it was not your usual event.
Around these parts, we tend to make much of innovation. We write posts about cool new things we do, partly because we like telling people about such stuff, partly because we think you’d like to try such stuff too.
It’s really only fitting that the presence of the QAA at work should drive this week’s reflection and activity. As their audit team spend the week meeting with staff and students, and working through the huge pile of documentation that we’ve given them, we’ve necessarily had to make some adjustments to our usual order of things. In my case, that meant bringing my class to an end a bit early yesterday, so that I could get to my meeting on time.
I’ll admit that when I first got the schedule, I ummed and ahhed about whether I should simply move the class altogether: that would mean less rushing about for me, and more class time for the students. But since I’ve been prepping for this visit for about a year and a half, I doubted that the extra time would do anything much to help. This is part of my classic model of exam-stress management: it’s usually too early to stress, until it’s too late to stress. As long as I don’t think about it in the transitional (‘it’s the right time to stress’) period, I’m fine.
Anyway, more importantly, I was also thinking about what this particular session with the students was trying to achieve. It’s about negotiation in practice, following on from two sessions on negotiation theory, and is basically there to manage the transition into the much more active phase of the module, by highlighting the difficulties of using theory in practice.
In previous years, I did the following: recapped the key messages from the theory, showed them a video I’d made of a haggle, discussed how I was good and bad at using theory (bad, mainly), shared my key practical tips on negotiation, then got them to play a small crisis game (like this one). In short, I built linearly from theory through my failings (to make them comfortable with reflective practice), then discussed their practice in the crisis game.
That structure made a degree of sense, but it is also rather gentle in immersing students in what is to come, and probably unnecessarily so, on the evidence of all the cohorts I’ve taught, who seem to grasp the need to look in on their practice and then articulate that externally.
So I chopped it all around. I got them to watch the video before class (also saving me the hassle of IT/speakers/etc.), then launched straight into the crisis game as they arrived in the classroom, debriefed them using the theory, before moving on to discussion of the video, and finally my tips.
This refocusing on their practice meant the session was a much more constructive opportunity for them to consider what they did and do, still with the knowledge that (via the video) I’m also capable of self-critique (not least of my acting/directing skills). Giving them the practical experience at the top of the class really helped to give them something to hang their thoughts on.
The bonus of all this was that it also shortened the session, because the video was moved out, so next year I can work on adding more content/activity to the session.
If there’s a general point behind this, then it’s to think about how and why we scaffold our students’ learning. Often, there’s sound reasoning in building up to more advanced activity, but it’s also worth reflecting on whether that later activity is really any more advanced, or just different from the norm. As I’ve discussed in other contexts, sometimes we need to credit our students with more resilience and capability than we do: isn’t self-reflection something we want to be developing in our students from day one?
You probably don’t have the impetus of a QAA visit to help you try moving things around, but it’s still worth a try.
British universities are now all back up and running for the academic year, after our traditional summer break, which we like to use to laugh at American colleagues who had to start back in August (August!!).
As pay-back for such rude behaviour, we’re also now about to receive the government’s latest thinking about Learning & Teaching, in the form of a Green Paper.
No one is sure what it’ll contain, but it’s likely to set out plans for a Teaching Excellence Framework (TEF), linked to student funding and to assorted other reforms of the sector. I’ve written about TEF before, but I’d like to return to the theme, since it looks set to become the defining feature of L&T in this country for the next decade or more, just like its research counterpart.
Part of the thinking that now seems to be floating around is a progressive building of TEF from a starting point of basic metrics (progression, completion, good degrees, employment, etc.) into something with more of a qualitative element. It’s this second part that I’d like to consider some more.
Personally, I find the concept of ‘high quality teaching’ rather problematic. How does one judge such a thing? Is it about the quality of the outputs, or the inputs, or the improvement between the two? Does it need to be grounded in pedagogic research, or is it sufficient that ‘it works’? Do we judge the individual elements of learning & teaching (whatever that might mean) or the whole package (whatever that means too)? Do we stick to the formal curriculum or to the softer parts of a university experience?
All are valid questions to ask, and all provide very different answers and incentives to act in a particular way.
Consider a challenge that my home department faces. We tend not to get a terribly high percentage of first class and upper second degrees (‘good degrees’ in the jargon), compared to the rest of the university. It’s not a big difference, but enough to be noticed. One of the main reasons that we’ve identified is that we use a wide range of forms of assessment compared to others: not just essays and exams, but also presentations, group projects, negotiating briefs, policy briefs, reflective pieces, and so on. We do that because we believe it develops more useful skills for our students, since they are much more likely to encounter those kinds of things in the outside world.
The problem is that our range of assessment means that students don’t get the depth of practice that they would if they only did essays and exams, which in turn tends to depress their grades. Having seen examples of such assessment monocultures (bicultures, more precisely), I can attest to their value in maximising performance in those assessment forms.
So the question is whether that’s a good thing or a bad thing. And one could see how the argument could be made both ways. The real challenge comes in working out how to balance different claims in any evaluation exercise.
The riposte to this would be that the combination of quantitative and qualitative elements in TEF would exactly allow for universities to contextualise and clarify their situations. But that’s not the same as offering a mechanism for balancing out the different parts, not least for the inevitable league table that will result from all of this.
As I discussed last time, I feel rather conflicted about TEF. On the one hand, it’s an excellent opportunity for L&T to be brought to the foreground and given the attention it requires and deserves. On the other, I worry that it will tend to push us back into small-c conservatism of practice, where innovation and risk are discouraged, because of the potential upsetting of the basic metrics that will weigh heavily in the exercise. We’ve seen something similar with REF, where the gaming of the rules is the dominant mode.
If there were a way to embed an incentive to challenge ‘traditional’ practice and to create spaces for trying out new ideas, then I’d be very happy, concerns over innovation for innovation’s sake notwithstanding. Even just something that required us to justify the choices we make would be a start, maybe in the form of a statement by each teaching unit of its pedagogical underpinnings. Much of it might be rather vapid stuff to start with, but the exercise would at least force people to engage with the research and literature that’s out there and get that ball rolling.
Another possibility might be to require universities to host a minimum number of L&T testbeds, to try out new approaches, with their outcomes ring-fenced from the metrics, so as to allow for some risk-taking and to generate internal potential for more widespread change. Certainly, there’s an issue of how students might feel being used as guinea-pigs, but that’s manageable.
These are just random thoughts, but I hope they point towards some of the tensions that have to be resolved with TEF. That the Green Paper has been delayed suggests that the politicians have not agreed on what should be included: academics need to be ready to engage and contribute to shaping whatever comes out.
When I talk with colleagues who teach, but who aren’t really into teaching (if you know what I mean), there’s often a sense that the L&T crowd are out to make life difficult for them. Regardless of whether that’s true or not (as discussed last week), the image is one of ‘us’ wanting to make ‘them’ do ever more complicated things, for no good reason.
This is actually a big issue, but for now we’ll focus on some parts of it and see how we go. Within that, one issue is what I will be calling the promotion/representativeness problem*. This is just the tension between us trying to share our work and often picking out the most glamorous or extensive example, rather than the simple thing that might be of more general interest.
To give you an example, for all my work on simulations, I’ve had just as much positive feedback (and adoption, a more useful measure) for my ABC post-it note technique.
The problem is that the things we talk about – the things that we have often put a lot of time and effort into, to be sure – are not the things that colleagues might find most useful. To use a more pertinent example, I’m giving a workshop on active learning later this week and I’d like to recommend a great demonstration of this, but it’s being run by the university down the road and it certainly wouldn’t work in two places so close.
All of this is a more general issue in talking about active learning. It doesn’t have to be huge simulations, or totally re-creating a curriculum; it can be a pile of small activities, bundled in with what already happens.
Of course, the solution to this is to go all jujitsu and use weaknesses as strengths.
If we’re in the game of trying to convince colleagues that L&T is as important as research – which I’ll suggest is our usual tussle – then why not make L&T look more like research?
Students-as-Researchers is not only a great handle, it can also be great work. And it doesn’t have to be about working to formal outputs, although you can do that: it can be about getting students to engage with the research process in more explicit ways. And that can start from the first semester: last year, I got my Liberal Arts & Sciences students to present a research proposal for their first piece of assessment and used that as a way to help them get into the mindset of research. The projects might not have got funding in the real world, but as a first step, it was very helpful indeed.
Maybe the way to approach all of this is to treat it as we would a class: first question is – as always – what do we want to achieve?
If we talk to colleagues and get a proper sense of what they look for students to learn, then we can offer something from our range of pedagogies and techniques that might suit them, rather than throwing out idea after idea at them in the vague hope that they spot something they like.
That’s not about jargon, and it’s not about the fanciest thing you’ve ever done, and it’s a slog, but in terms of securing lasting improvement in our collective practice, it’s probably the best way forward.
I’m thinking that if I dress it up a bit, then it’ll sound like something that the research-focused might consider.**
** – obviously, such dressing-up is also a problem, for general accessibility
At some point we must have run out of conversational topics on holiday, because I found myself on a beautiful Croatian beach, trying to explain the ‘good degrees’ metric to my wife. To my credit, I didn’t try very hard, although it wasn’t helped by the fact that I struggled to come up with a good justification for it.
For non-UK readers, “good degrees” measures the percentage of awards in the top two categories (firsts and upper seconds) for undergraduates. That’s about 70% of them, a figure that’s risen over time.
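For readers who like things concrete, the calculation itself is trivial; a minimal sketch (the function name and toy cohort are mine, purely illustrative, not an official formula):

```python
def good_degrees_rate(classifications):
    """Share of awards (as a percentage) in the top two degree classes."""
    good = {"first", "upper second"}  # the 'good degrees' categories
    return 100 * sum(1 for c in classifications if c in good) / len(classifications)

# Toy cohort of five graduates: three fall in the top two categories.
cohort = ["first", "upper second", "upper second", "lower second", "third"]
print(good_degrees_rate(cohort))  # 60.0
```

The simplicity is rather the point: everything interesting about the metric lies in what feeds into it, not in the arithmetic.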
My Croatian struggle came in trying to unpack what it showed and why that might be useful.
Partly, it’s a reflection of our student body: if we have more academically able students, then they’ll be more likely to get better grades, and so a ‘good degree.’
But equally, it’s also a reflection of our assessment and grading. Are we the kind of place that rewards good work (assuming that student populations are broadly equivalent)?
So what does it show? Our selectiveness? Our accommodating assessment? Our great students? Something else?
The issue is one that might have remained of minimal interest to anyone, but for the use of the metric in league tables. These aim to rank universities on the quality of their provision and have become an important part of the British HE sector, not least for recruiting new students. However, that also introduces an incentive to improve one’s ‘good degree’ metric.
The obvious responses in this situation would be to have stricter entry requirements (which is also a league table metric) and to have more lenient assessment. The first is very much in the universities’ hands and you’ve seen exactly that happening. The second is a very different kettle of fish.
Assessment in the UK differs from many other countries with its use of second- and external-examining (see this for more), which is in turn bound by a national framework of performance standards: broadly speaking, anyone at a UK university should be marked to the same kind of standard, whatever and wherever they study, against benchmarks, rather than against each other. And, broadly speaking, that’s what happens.
The upshot is that any (league table) pressure to go soft on assessment is counter-weighted by the need to meet sector and disciplinary norms: as our university tells us, the norms are the norms and we shouldn’t be messing around with them.
The problem now becomes one of still wanting to improve ‘good degrees’, but losing the most obvious tool to achieve that. So instead, we try other things. These cover pretty much every part of what we do in our teaching. Are there pinch points in assessments or modules that we can redesign, to meet learning objectives in a more accessible way? Are there staff who need help with redesigning assessments or with providing support? Are there early markers of weaker performance that can trigger timely intervention? Are there extra resources we could provide that would help improve student outcomes?
All of those questions are good questions to ask of any educational body, since they all speak to the core question of ‘how do we help our students to achieve their full potential as learners?’
Unfortunately, all of those questions are not ones that convert into nice, simple metrics. Instead we fall back onto things like ‘good degrees’ and back into the question of what we are trying to do.
The issue is an old one: if you make something a metric, then people tend to work towards that metric, and it gains value in and of itself. That’s a problem if we forget that metrics are proxies, which is all too easy to do.