Structuring feedback on marking

Among my various duties here, I am responsible for overseeing undergraduate dissertations. This means I run group sessions during the year, and organise and collate marks from the colleagues supervising individual students. It’s something that I’ve done for many years and it’s a generally rewarding experience, seeing how far our students can take their own research agendas.

However, there has been one aspect of this that has particularly struck me this year, namely feedback.

For reasons that I will not dwell upon, I released marks to students before I could return to them the feedback forms produced by the markers. Normally, we would do this at the same time, to try and maximise the engagement with the feedback, but it wasn’t possible this time.

In my email telling students that their marks were posted, I explained the double-blind marking process. I’ll assume most of you will know this, but if not, it simply means giving the first marker (the supervisor) and the second marker (another colleague) copies to mark at the same time, which they do without any sight of the other’s marking: differences are then reconciled at a meeting afterwards, to produce an agreed mark.

Despite explaining this, all of the responses that I received from students to the mark release asked how their mark had been arrived at. A consequence of double-blind marking is that there is no anchoring of marks (as when one moderates), hence there is more mobility of marks (up or down) from any interim feedback process (of which we have much with dissertations).

In all those cases, I replied, explaining the system again, followed shortly by the feedback forms from both markers. Even when I did this, I had another query from a student about the differences between markers and the potential impact that might have with any external moderation.

All of this suggests a number of things.

A pile of dissertations, today

Firstly, double-blind marking (and probably every kind of marking protocol) needs to be explained repeatedly throughout modules and programmes of study, so that students might more reasonably understand the logistics and effects of the approach. This might sound mundane, but I’ve had enough conversations with academics about the difference between marking and moderation to know that mundane things are not the same as simple things.

Secondly, it has highlighted the importance of contextualising marks with feedback. In this particular case, many of our students are thinking of graduate study, so their performance in the dissertation is particularly important, and good quality, constructive feedback is essential. My concern in this case is that the mark will be what is remembered. Indeed, it is noticeable that I had more comments back about marks than about the feedback this time, and I would venture to say that this is a general pattern.

Thirdly, the process has once again pointed up one of the difficulties of giving useful feedback to final-year students in their final semester. In previous years, much of the feedback I and my colleagues produce for these students at this time of year (for dissertations or coursework) never gets back to students, because they have already left and have little motivation to collect feedback on work that’s now behind them. We have tried sending out feedback, but with moderate impact. Perhaps we need to think again about how we can close the loop of studying, so that our students get the full benefit of their time with us.

4 thoughts on “Structuring feedback on marking”

  1. I wonder what you think about this article: http://www.uwa.edu.au/__data/assets/pdf_file/0012/1888608/Bloxham-2002.pdf

    Regarding your second and third points — we require a preliminary thesis for our majors in their third year of study. In theory, the feedback from the third-year thesis gets incorporated into the student’s final thesis in the fourth year. Each thesis is essentially a one-on-one exercise with a faculty advisor who helps shape the thesis along the way, making sure it’s of sufficient quality. The problem is that the process is not very efficient — regular conferences with each student throughout a semester work out to the equivalent of dozens of independent studies where the same problems (for example, “this is not a scholarly source”) are repeatedly visited.

    From anecdotal information, students regard the experience mostly as a rite of passage. They find the experience challenging, but I have no idea whether they leave the program having learned from the feedback they’ve received (post-graduation assessment?).

    What is the scale of marks? Here it’s supposed to be A-F, though no one’s going to fail by the end of the process. Do you use a “more than sufficient, sufficient, not sufficient” system?

  2. I think that there’s a lot to that article, although it’s hard to see how any one university could break out of the current system without some penalty.

    We do lots of formative work during the final year, which in turn builds on modules in earlier years on research skills and design. Hence also the group sessions that I run in the earlier part of the year, to pick up those generic issues.

    I agree about the rite of passage aspect: it’s a major block of work, largely self-directed. But it’s just that upon which I feel we could do more to aid reflection. However, if I were a student, I’d think it odd to get post-graduation feedback too…

    Marking is on the usual British scale (0-100%), like all our assessment.

  3. It seems to me that part of the problem is that we need to change the way in which we and students see this piece of work. We’ve been trying to re-design our dissertation process to mirror the academic research process: proposal, conference/poster presentation, publication. One of the intentions has been to make it more of a communal process: we do research and share it with a wider audience of our peers. The second is that it helps with milestones for the less organized. At present I’m working up the idea of a conference with colleagues in other institutions, which might feed into a poster publication at the PSA. We are also developing a publication strategy for the best work – see this for an interesting example (http://biohorizons.oxfordjournals.org/) – and a repository so that we are at least keeping this work.

  4. Chris, I think a lot of these ideas are really important. We too have been talking about milestones: we ask students to submit abstracts, chapter outlines and literature reviews before Christmas, as a way of helping them focus their ideas, which seems to have been a very positive process. If we’re trying to get students to think about future study, then engagement with the academic community via presentation and dissemination is very useful too and I’d be interested to see how your work pans out.
