Sustaining excellence in learning & teaching

Autumn colours surround the Tu Hwnt i'r Bont tearooms on the banks of the River Conwy
If I were out, it’d probably all be looking like this.

In my institution at least, the drawing in of the nights can mean only one thing: reviewing the previous academic year’s teaching and drawing up action plans. As the shards of autumnal light slant through the office window and my inner poet gets overly confident, I’m busy looking at management reports.

This is part of the condition of British universities, where matters of student teaching have both intrinsic and extrinsic value. The former is something that any educational body should be bothered with, while the latter is important for the famous league tables by which so much stock is set: students make decisions on such tables, so there is a material benefit as much as anything else.

I’m in the very fortunate position that Surrey has done very well in these tables, to the point that we put links on our website, so anyone can see. Our sharp upward trajectory has been driven by some intense management focus on picking up on failings and pushing good practice as widely as possible, aided (to some extent) by a stronger intake of students.

That’s all well and (very) good, but we now find ourselves having to think about the matter in a rather different way, namely that of sustainability.

In the initial context, sustainability didn’t really come into it: if there was a problem, then a solution was sought and much attention brought to bear. Typically, such solutions have involved closer engagement of teaching staff, be it in training, teaching delivery, assessment and feedback, pastoral care and more general administrative activity (i.e. everything).

As colleagues would observe from time to time, while individual actions might not be too much of a burden, taken together they represent a considerable demand on time and effort. To take my own example, while moving up to my new post has demanded much of me, it has also made me appreciate how much time I used to spend dealing with student matters of various kinds.

Surrey is not alone in this: the general standard of provision has improved markedly across the HE sector in the UK in recent years, further pushing everyone to make yet more improvements in order to remain ‘excellent’.

The difficulty will be working out what matters.

As regular readers of this blog will know, a key issue in most areas of pedagogic practice is the sparseness of the evidence base upon which to make informed choices. Almost certainly, we’re doing something that doesn’t produce any benefit for students and we’re not doing something that would help them: the problem is we don’t know what falls in which category.

Therefore, with this in mind, as well as management reports, I’m going to be spending more time reading research, in order to better understand the situation. How far it will help remains to be seen, but now I have better oversight of a range of departmental practices and cultures, I hope that I will be able to marry that up to produce interventions that will help everyone, staff and students alike.

Subjectivity in Assessment

Over the past couple of weeks, I’ve been experiencing something new for me: marking someone else’s assessment. More precisely, first-marking their assessment.

One of my colleagues has been on extended leave and so wasn’t in a position to mark the final essay for one of their modules, for final year undergraduates. Since the students need the work marked to allow them to complete their degrees this summer, our School was in a bit of a fix.

On the one hand, the usual practice is that the module leader does the first marking: they’ve run the module and know the students and material best. On the other, with no certainty about when the colleague would be able to mark, and with students needing to know their situation for their final set of modules, there was a clear need to move marking on.

At a technical level, this was actually much simpler and more problem-free than might be imagined. By coincidence, the module concerned covered much of the same subject matter as a module that I teach to Masters students, and even the form of assessment was nearly identical to my own. In addition, it is now our practice to provide students with both a generic and a specific rubric setting out our expectations for each piece of assessment, so I had not only the title of the work, but also extended comments on what that involved, as well as the university’s descriptions of classifications.

In addition, we followed the usual practice of British universities: a colleague second-marked all of the scripts (the role I would normally have played in this module), and an external examiner (from another university) will look at a sample of the marking for the module. Both the second and external markers are there to provide a benchmark of practice and standards, to help ensure that students are treated fairly and comparably, not only across their programme of study, but also in comparison with students on similar programmes at other institutions. In this case, the external examiner will be told about the situation, so they are fully informed.

At this point, some American readers will be shaking their heads in disbelief at the regulation around this: indeed, it’s one of my favourite pastimes at US L&T conferences to mention it.

However, from the students’ perspective, none of these procedures really mattered. Several came to see me over the days after the marks were released, to suggest – very politely, it must be stressed – that my marking wasn’t the same as my colleague’s would have been.

In the words of one student: “I wrote that work for them, not for you.”

At one level, I completely sympathise with such comments: we do, after all, make a point of stressing to students that each of my colleagues has different views and approaches, so it might be fair enough to suggest that each of us looks for different things in assessment.

However, this is to miss the deeper point that we do not ask students to follow our approaches, but rather to develop their own ideas and worldviews. Neither I nor my colleague would want a student to slavishly reproduce our own thoughts in their assessment, as if it were some truer truth. I marked – as we all mark – not on whether I agreed with the approach, but on whether the approach worked and was supported by evidence.

The entire model of British assessment practice is to try to minimise any effect of subjectivity. Indeed, students (and others) often forget that all our marking is subject to revision by people who often stand at much greater distance from the teaching than I have with my colleague: external examiners are able to move entire spreads of marks, if they see fit.

The bigger point that I have taken from this is that despite such mechanisms to bolster objectivity, if students do not see and feel that, then it becomes a moot point. I am working with them, to try and underline this, but it remains a work in progress.

Learning isn’t a contact sport

Here in the UK, we increasingly worry about contact with our students, about the simple question of how much time we spend with them in class or supervision.

The driver of this has been the government’s introduction of Key Information Sets (KIS), which provide basic metrics for each and every programme offering in England and Wales: here’s a typical one for a programme that I’ve randomly picked out of nowhere.

As you’ll see, it has a worthy aim at heart: to provide benchmarks for what a student can expect, in terms of costs, teaching delivery and career prospects. Equally obviously, I am going to question the utility of this data.

Firstly, it assumes that the aggregated performance of previous cohorts bears some resemblance to the individual performance of future students. Spending, as I do, a lot of time with individual students and their specific learning requirements, I am dubious about the proposition, especially when I consider how the demographics of our current intake look very different to the students we recruited three or four years ago (whose data you now see).

Secondly, it mixes programme-, School- and University-level data, in ways that are not immediately apparent and which again do not give an applicant a meaningful understanding of what things are like. That’s why we invite all our applicants for a day on campus, so that they can make up their own minds.

But it is the third issue that I would like to develop. One of the metrics is the percentage of ‘time spent in lectures, seminars or similar’, as compared to independent study. The discourse around this (across the HE sector, I should point out) is that more contact time is A Good Thing.

Certainly, for subjects where lab work is an essential element, it’s easy to see the logic behind this. But for social sciences, much of what we are doing is developing students’ abilities to be self-reflective and critical thinkers, who are able to articulate their own views. In part, this requires opportunities to do that articulation (be it through discussion, or coursework), but it also requires time for students to pursue self-study, in order to begin to know their own mind.

If we want to build students’ self-reliance, then surely we also want to be giving them space to self-organise. This might seem a forlorn hope at times, but over-structuring of time can be counter-productive to initiative.

Ultimately, there is an irony that while observers and league tables might value more contact time, students actually on the programme often don’t treat it as such: despite offering a time-efficient way to get into issues and debate, there are always students who see it either as a distraction or as an irrelevance. Perhaps if we can understand that better, then we might be able to make the most of however much contact time we have.