Biting off more than you can chew?

Kerry and Lavrov, with senior advisers, negotiate the chemical weapons agreement, 14 September 2013
We can do better than this. Probably

A while back, I wrote about running a sim on Brexit, the UK’s departure from the EU. (Obviously, ‘departure’ makes it sound grand and stately, rather than the big old mess it more closely resembles, but I digress.) In it, I asked if anyone was up for working on this together.

Well, we have a winner: Matthew LeRiche from Memorial in Newfoundland. Matthew joined us in Surrey earlier this year for our ALPS workshops, and this doesn’t seem to have discouraged him from working with us again.

New Brave New World (again)

A university, yesterday

Like the ticking of a rather slow clock, we’ve just had the publication of a White Paper on Higher Education here in England and Wales. This comes after a long consultation process that stretches back into the previous Parliament and which leaves as many questions open as it addresses.

I’m not going to run through all of it because a) it’s very long, b) not all of it relates to L&T and c) I can’t face it: there are lots of good pieces here and here. But I will update on the famous Teaching Excellence Framework (TEF), which I’ve written about before (here, here and here).

For those of you who have managed to avoid this idea so far, firstly my congratulations: essentially, it’s seen as an equivalent of the Research Excellence Framework (REF), trying to provide some judgement on the quality of teaching provision in a way that is comparable across institutions and disciplines. In so doing, it aims to increase transparency for students and to drive up standards.

As with all good ideas, the operationalisation has been the problem: how do you actually do this?

What makes for ‘quality teaching’?

Nope

British universities are now all back up and running for the academic year, after our traditional summer break, which we like to use for laughing at American colleagues who had to start back in August (August!!).

As payback for such rude behaviour, we’re also now about to receive the government’s latest thinking about Learning & Teaching, in the form of a Green Paper.

No one is sure what it’ll contain, but it’s likely to set out plans for a Teaching Excellence Framework (TEF), linked to student funding and to assorted other reforms of the sector. I’ve written about TEF before, but I’d like to return to the theme, since it looks set to become the defining feature of L&T in this country for the next decade or more, just like its research counterpart.

Part of the thinking that now seems to be floating around is a progressive building of TEF from a starting point of basic metrics (progression, completion, good degrees, employment, etc.) into something with more of a qualitative element. It’s this second part that I’d like to consider some more.

Personally, I find the concept of ‘high quality teaching’ rather problematic. How does one judge such a thing? Is it about the quality of the outputs, or the inputs, or the improvement between the two? Does it need to be grounded in pedagogic research, or is it sufficient that ‘it works’? Do we judge the individual elements of learning & teaching (whatever that might mean) or the whole package (whatever that means too)? Do we stick to the formal curriculum or to the softer parts of a university experience?

All are valid questions to ask, and all provide very different answers and incentives to act in a particular way.

Consider a challenge that my home department faces. We tend not to get a terribly high percentage of first class and upper second degrees (‘good degrees’ in the jargon), compared to the rest of the university. It’s not a big difference, but enough to be noticed. One of the main reasons that we’ve identified is that we use a wide range of forms of assessment compared to others: not just essays and exams, but also presentations, group projects, negotiating briefs, policy briefs, reflective pieces, and so on. We do that because we believe it develops more useful skills for our students, since they are much more likely to encounter those kinds of things in the outside world.

The problem is that our range of assessment means that students don’t get the depth of practice that they would if they only did essays and exams, which in turn tends to depress their grades. Having seen examples of such assessment monocultures (bicultures, more precisely), I can attest to their value in maximising performance in those assessment forms.

So the question is whether that’s a good thing or a bad thing. And one could see how the argument could be made both ways. The real challenge comes in working out how to balance different claims in any evaluation exercise.

The riposte to this would be that the combination of quantitative and qualitative elements in TEF is precisely what would allow universities to contextualise and clarify their situations. But that’s not the same as offering a mechanism for balancing out the different parts, not least for the inevitable league table that will result from all of this.

As I discussed last time, I feel rather conflicted about TEF. On the one hand, it’s an excellent opportunity for L&T to be brought to the foreground and given the attention it requires and deserves. On the other, I worry that it will tend to push us back into small-c conservatism of practice, where innovation and risk are discouraged, because of the potential upsetting of the basic metrics that will weigh heavily in the exercise. We’ve seen something similar with REF, where the gaming of the rules is the dominant mode.

If there were a way to embed an incentive to challenge ‘traditional’ practice and to create spaces for trying out new ideas, then I’d be very happy, concerns over innovation for innovation’s sake notwithstanding. Even just something that required us to justify the choices we make would be a start, maybe in the form of a statement by each teaching unit of its pedagogical underpinnings. Much of it might be rather vapid stuff to start with, but the exercise would at least force people to engage with the research and literature that’s out there and get that ball rolling.

Another possibility might be to require universities to host a minimum number of L&T testbeds, to try out new approaches, with their outcomes ring-fenced from the metrics, so as to allow for some risk-taking and to generate internal potential for more widespread change. Certainly, there’s an issue of how students might feel about being used as guinea-pigs, but that’s manageable.

These are just random thoughts, but I hope they point towards some of the tensions that have to be resolved with TEF. That the Green Paper has been delayed suggests that the politicians have not agreed on what should be included: academics need to be ready to engage and contribute to shaping whatever comes out.

Proxy, proxy, proxy: out! out! out!

Right place, right time, wrong conversation

At some point we must have run out of conversational topics on holiday, because I found myself on a beautiful Croatian beach, trying to explain the ‘good degrees’ metric to my wife. To my credit, I didn’t try very hard, although it wasn’t helped by the fact that I struggled to come up with a good justification for it.

For non-UK readers, “good degrees” measures the percentage of undergraduate awards in the top two categories (firsts and upper seconds). That’s about 70% of them, a figure that’s risen over time.
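For anyone who wants the arithmetic spelled out, here’s a minimal sketch of the calculation (a toy example: the classification labels and data are invented, not drawn from any real cohort):

```python
# Toy illustration of the 'good degrees' metric: the share of awards
# falling in the top two degree classifications. Data are invented.
awards = ["first", "upper second", "lower second", "upper second",
          "first", "third", "upper second", "lower second"]

good = sum(1 for a in awards if a in ("first", "upper second"))
print(f"Good degrees: {100 * good / len(awards):.1f}%")  # 62.5% here
```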

My Croatian struggle came in trying to unpack what it showed and why that might be useful.

Partly, it’s a reflection of our student body: if we have more academically able students, then they’ll be more likely to get better grades, and so a ‘good degree.’

But equally, it’s also a reflection of our assessment and grading. Are we the kind of place that rewards good work (assuming that student populations are broadly equivalent)?

So what does it show? Our selectiveness? Our accommodating assessment? Our great students? Something else?

The issue is one that might have remained of minimal interest to anyone, but for the use of the metric in league tables. These aim to rank universities on the quality of their provision and have become an important part of the British HE sector, not least for recruiting new students. However, that also introduces an incentive to improve one’s ‘good degree’ metric.

The obvious responses in this situation would be to have stricter entry requirements (which is also a league table metric) and to have more lenient assessment. The first is very much in the universities’ hands, and we’ve seen exactly that happening. The second is a very different kettle of fish.

Assessment in the UK differs from many other countries with its use of second- and external-examining (see this for more), which is in turn bound by a national framework of performance standards: broadly speaking, anyone at a UK university should be marked to the same kind of standard, whatever and wherever they study, against benchmarks rather than against each other. And, broadly speaking, that’s what happens.

The upshot is that any (league table) pressure to go soft on assessment is counter-weighted by the need to meet sector and disciplinary norms: as our university tells us, the norms are the norms and we shouldn’t be messing around with them.

The problem now becomes one of still wanting to improve ‘good degrees’, but losing the most obvious tool to achieve that. So instead, we try other things. These cover pretty much every part of what we do in our teaching. Are there pinch points in assessments or modules that we can redesign, to meet learning objectives in a more accessible way? Are there staff who need help with redesigning assessments or with providing support? Are there early markers of weaker performance that can trigger timely intervention? Are there extra resources we could provide that would help improve student outcomes?

All of those questions are good questions to ask of any educational body, since they all speak to the core question of ‘how do we help our students to achieve their full potential as learners?’

Unfortunately, none of those questions converts into a nice, simple metric. Instead we fall back onto things like ‘good degrees’, and back into the question of what we are trying to do.

The issue is an old one: if you make something a metric, then people tend to work towards the metric, and it gains value in and of itself. That’s a problem if we forget that metrics are proxies, which is all too easy to do.

As we move towards a Teaching Excellence Framework here in the UK, that’s something we’re going to have to deal with more and more.

Teaching Excellence Frameworks, and other things to think about over the summer

A soft ruler
This measures everything. Everything.

One of the mutterings that has flitted about the HE sector here in the UK in recent months has been the idea of a Teaching Excellence Framework, an equivalent of the Research Excellence Framework (REF) that has either come, or is coming, to your local university.

As with REF, the idea would be to use a range of metrics to identify, measure and encourage high quality learning & teaching. That might include completion rates, ‘value added’, dissemination of good practice and case studies of innovation and general excellence.

The idea is one that got a big boost last week, when the new government announced that it would allow British universities to increase their fees if their teaching was of high quality.

Now you might imagine that I’d be all for that, since I could write an impact case study right now and my institution does very well in L&T metrics.

However, I find myself being deeply ambivalent about it all.

In part, that ambivalence is driven by my general concern about any one-size-fits-all approach, especially because – just as in research – there are very many ways to make a useful contribution and (conversely) just as many ways to not capture that. How do you compare a MOOC with lab work with a simulation?

But it also comes from a concern about the abstraction of L&T from its context.

When I run a module, I run it with the particular group of students in mind. Each class or session finds me (like it would most colleagues) trying to adapt the material to meet the needs of the students as we progress. Most mundanely, the confused face will prompt us to rework some section, to unpack it and help that confusion pass. Less mundanely, I might rework an entire module to accommodate the needs of a group (as indeed I will be doing this summer).

In REF, individual researchers submit four pieces of work for peer-evaluation: this forms the bulk of the exercise, alongside some elements dealing with the research environment and its wider impact.

In a TEF, it’s inconceivable that we’d do the same: I submit my four best students? My four best modules? My four best sessions?

No, there’d be a bunch of metrics at programme or departmental or university level, describing some of those things I’ve mentioned above. Then there’d be some kind of case study approach.

This anonymises – or, rather, depersonalises – what we do. As a useful piece by John Canning points out, a TEF might start out with good intentions, but quickly institutions would move towards playing the game and optimising the metrics.

This already happens: institutions in the UK spend a lot of time and effort in trying to improve their league table rankings and that includes targeting poorly-performing indicators. That’s not a bad thing – my own university now has much better systems for supporting students who might fail modules, for instance – but the tendency is always to focus on the metric, rather than the students themselves.

I appreciate that in all of this, I’m just the sort of person who should be getting involved, as someone who cares about, and has a substantial job role in, L&T. So I’m going to try to use my ambivalence to good effect. That’s why I’m going to try and be part of the discussion about TEF wherever I can, so that even if it’s not perfect, it will still be less imperfect than otherwise. And I’d say the same to colleagues: rather than just grumble about what’s being done to you, try and shape it.

All that said, I’m about to have a couple of weeks of not thinking about work at all. Doubtless I’ll return with some moment of inspiration from my break, but I really wouldn’t hold your breath. See you on the other side.

3 ways to teach the 2015 General Election

UK opinion polling, 2010–2015

For a very long time, teaching about British general elections was hampered by one simple fact: you were never quite sure when they would happen. Unless there was a clear play to sit out the maximum term (as in 1992), then students and instructors would be at the whim of the occupant of No.10.

So the introduction of fixed-term parliaments by the current administration is a real boon: we have a date (7 May), which also falls nicely towards the end of the teaching period in the spring semester. This lets us think about some creative ways we might teach about it.

I’m assuming that those devoting entire modules/courses to the subject will already be up and running, but for the rest of us, there’s still the opportunity to import aspects of what’s happening into the classroom. Consider these three starting points.

1: real-time information

The most obvious thing to do is to get students aware of, and discussing, what’s happening right now. Major news outlets are setting up micro-sites for the election, aggregating content. Likewise, there are some excellent resources, including those handling polling data (here and here) or more general academic comment.

What’s becoming evident is that there is a huge amount of near-instantaneous material/comment available, so setting up a module account on Twitter to collect and share it is a great way to capture this and funnel it into the class: perhaps you set aside the last half-hour of a session to discuss what’s happening.

2: active debate

Last semester I got my students to role-play a debate on the television debates, playing different parties. The topic is still ongoing and gets students thinking about the media, the status of parties and the semantics of public debate, in a form that is readily accessible.

If that’s not your thing, then you could run a hustings, or a putative post-election coalition negotiation. All of these active-learning models force students to take their knowledge and apply it to a situation, where they can see how positions interact with each other and where presentational (and representational) factors can strongly influence the course of events.

3: comparative approaches

Aware as I am that most of our readers aren’t UK-based, it’s also worth thinking about using the British case as a comparator for your own national system. As I’ve noted, the quantity and quality of material available is extremely high, and students’ awareness of the basic features of the system should be pretty good (at least in the relative sense). Couple this to the obvious differences between your country and the UK (electoral system, parliamentary government, party structure, media structure, etc.) and you have a great opportunity to leverage more insight into how national polities work.

This might mean comparing electoral campaigning, or issue development, or even just how your country sees what is going on elsewhere in the world (this last one might well be most suitable for those places that think they are the centre of the world (ahem)).

As I say, this is just a first effort, but if you have more suggestions, either of techniques or of resources, then do please post below in the comment box.

Sustaining excellence in learning & teaching

Autumn colours surround the Tu Hwnt i'r Bont tearooms on the banks of the River Conwy
If I were out, it’d probably all be looking like this.

In my institution at least, the drawing in of the nights can mean only one thing: reviewing the previous academic year’s teaching and drawing up action plans. As the shards of autumnal light slant through the office window and my inner poet gets overly confident, I’m busy looking at management reports.

This is part of the condition of British universities, where matters of student teaching have both intrinsic and extrinsic value. The former is something that any educational body should be bothered with, while the latter is important for the famous league tables by which so much stock is set: students make decisions on such tables, so there is a material benefit as much as anything else.

I’m in the very fortunate position that Surrey has done very well in these tables, to the point that we put links on our website, so anyone can see. Our sharp upward trajectory has been driven by some intense management focus on picking up on failings and pushing good practice as widely as possible, aided (to some extent) by a stronger intake of students.

That’s all well and (very) good, but we now find ourselves having to think about the matter in a rather different way, namely sustainability.

In the initial context, sustainability didn’t really come into it: if there was a problem, then a solution was sought and much attention brought to bear. Typically, such solutions have involved closer engagement of teaching staff, be it in training, teaching delivery, assessment and feedback, pastoral care and more general administrative activity (i.e. everything).

As colleagues would observe from time to time, while individual actions might not be too much of a burden, taken together they represent a considerable demand on time and effort. To take my own example, while moving up to my new post has demanded much of me, it has also made me appreciate how much time I used to spend dealing with student matters of various kinds.

Surrey is not alone in this: the general standard of provision has improved markedly across the HE sector in the UK in recent years, further pushing everyone to make yet more improvements in order to remain ‘excellent’.

The difficulty will be working out what matters.

As regular readers of this blog will know, a key issue in most areas of pedagogic practice is the sparseness of the evidence base upon which to make informed choices. Almost certainly, we’re doing something that doesn’t produce any benefit for students and we’re not doing something that would help them: the problem is we don’t know what falls in which category.

With this in mind, I’m going to be spending more time reading research as well as management reports, in order to better understand the situation. How far it will help remains to be seen, but now that I have better oversight of a range of departmental practices and cultures, I hope that I will be able to marry that up to produce interventions that will help everyone, staff and students alike.

Subjectivity in Assessment

Over the past couple of weeks, I’ve been experiencing something new for me: marking someone else’s assessment. More precisely, first-marking their assessment.

One of my colleagues has been on extended leave and so wasn’t in a position to mark the final essay for one of their modules, for final year undergraduates. Since the students need the work marked to allow them to complete their degrees this summer, our School was in a bit of a fix.

On the one hand, the usual practice is that the module leader does the first marking: they’ve run the module and know the students and material best. On the other, with no certainty about when the colleague would be able to mark, and with students needing to know where they stood going into their final set of modules, there was a clear need to move the marking on.

At a technical level, this was actually much simpler and more problem-free than might be imagined. By coincidence, the module concerned covered much of the same subject matter as a module that I teach to Masters students, and even the form of assessment was nearly identical to my own. In addition, it is now our practice to provide students with both a generic and a specific rubric of our expectations for each piece of assessment, so I had not only the title of the work, but also extended comments on what it involved, as well as the university’s descriptions of classifications.

In addition, we followed the usual practice of British universities, and a colleague second-marked all of the scripts (the role I would normally have played in this module), and an external examiner (from another university) will look at a sample of the marking for the module. Both the second and external markers are there to provide a benchmark of practice and standards, to help ensure that students are treated fairly and comparably, not only across their programme of study, but also in comparison with other students on similar programmes in other institutions. In this case, the external will know about this situation, so they are fully informed.

At this point, some American readers will be shaking their heads in disbelief at the regulation around this: indeed, it’s one of my favourite pastimes at US L&T conferences to mention it.

However, from the students’ perspective, none of these procedures really mattered. Several came to see me in the days after the marks were released, to suggest – very politely, it must be stressed – that my marking wasn’t the same as my colleague’s would have been.

In the words of one student: “I wrote that work for them, not for you.”

At one level, I completely sympathise with such comments: we do, after all, make a point of stressing to students that each of my colleagues has different views and approaches, so it might be fair enough to suggest that each of us has different things we look for in assessment.

However, this is to miss the deeper point that we do not ask students to follow our approaches, but rather to develop their own ideas and worldviews. Neither I nor my colleague would want a student to slavishly reproduce our own thoughts in their assessment, as if they were some truer truth. I marked – like we all mark – not on whether I agreed with the approach, but on whether the approach worked and was supported by evidence.

The entire model of British assessment practice is to try to minimise any effect of subjectivity. Indeed, students (and others) often forget that all our marking is subject to revision by people who often stand at much greater distance from the teaching than I have with my colleague: external examiners are able to move entire spreads of marks, if they see fit.

The bigger point that I have taken from this is that despite such mechanisms to bolster objectivity, if students do not see and feel that, then it becomes a moot point. I am working with them, to try and underline this, but it remains a work in progress.

Learning isn’t a contact sport

Here in the UK, we increasingly worry about contact with our students, about the simple question of how much time we spend with them in class or supervision.

The driver of this has been the government’s introduction of Key Information Sets (KIS), which provide basic metrics for each and every programme offering in England and Wales: here’s a typical one for a programme that I’ve randomly picked out of nowhere.

As you’ll see, it has a worthy aim at heart: to provide benchmarks for what a student can expect, in terms of costs, teaching delivery and career prospects. Equally obviously, I am going to question the utility of this data.

Firstly, it assumes that the aggregated performance of previous cohorts bears some resemblance to the individual performance of future students. Spending, as I do, a lot of time with individual students and their specific learning requirements, I am dubious about the proposition, especially when I consider how different the demographics of our current intake look compared to the students we recruited three or four years ago (whose data you now see).

Secondly, it mixes programme-, School- and University-level data, in ways that are not immediately apparent and which again do not give an applicant a meaningful understanding of what things are like. That’s why we invite all our applicants for a day on campus, so that they can make up their own minds.

But it is the third issue that I would like to develop.  One of the metrics is the percentage of ‘time spent in lectures, seminars or similar’, as compared to independent study.  The discourse around this (across the HE sector, I should point out) is that more contact time is A Good Thing.

Certainly, for subjects where lab work is an essential element, it’s easy to see the logic behind this. But for social sciences, much of what we are doing is developing students’ abilities to be self-reflective and critical thinkers, who are able to articulate their own views. In part, this requires opportunities to do that articulation (be it through discussion, or coursework), but it also requires time for students to pursue self-study, in order to begin to know their own mind.

If we want to build students’ self-reliance, then surely we also want to be giving them space to self-organise. This might seem a forlorn hope at times, but over-structuring of time can be counter-productive to initiative.

Ultimately, there is an irony that while observers and league tables might value more contact time, students actually on the programme often don’t treat it as such: despite it offering a time-efficient way to get into issues and debate, there are always students who see it either as a distraction or as an irrelevance. Perhaps if we can understand that better, then we might be able to make the most of however much contact time we have.