It’s graduation week here, so there’s lots of family and friends milling around, as well as former students all robed-up. Seeing them has been making me think about the whole notion of personal development and learning that’s bound up in the word ‘graduation’.
However, it’s too hot to get too far into that, so I’m focusing today on feedback forms. There’s also the more prosaic reason that the University is reviewing its practice in this area, and we’re being asked to reflect on how feedback should be given to students in a more consistent manner.
Rather than get into the detail of that, I offer some reflections on our current feedback here in the School. It’s been interesting how, since our strong performance in the Guardian league tables, various colleagues from other institutions have been in contact to ask about this particular point, as it’s generally an area of weaker performance.
I’m not telling you anything too secret when I say that the feedback form is not, in and of itself, the most important part of feedback, but it is often the most visible, and we’ve put a lot of time and effort into making it work better. Indeed, for something so simple (in principle), we spend much time each year reviewing and amending it.
For us, the form does three things. Firstly, it provides an indication of achievement in the relevant piece of coursework. Secondly, it provides constructive comments on how to improve performance in future assessment. Thirdly, it provides external examiners with an indication of how internal marking reached a conclusion.
To do that, our form has three sections, which roughly map onto those three purposes.
In the first, there are tickboxes for ‘Structure’, ‘Language/grammar’, ‘Referencing’ and ‘Resource base’. The marker can choose from five options in each, from ‘excellent’ to ‘weak/none’. Tickboxes were chosen here because we felt it was useful to give students a benchmark of performance, in the broadest sense, on some key metrics, and in these aspects there is typically not much more to be said.
In addition, there is a free-text box for “Addressing the question; quality of evaluation; evidence and argument”, where we give a more detailed evaluation of those points, which usually require some explanation and unpacking of issues.
In the second element, we have a free-text box for ‘advice for improvement’, where we try to show students how they can overcome issues or barriers, so that the evaluative element is paired with a constructive and positive message.
Finally, on a separate sheet, we provide space for first and second markers to indicate marks and how they have resolved any differences between them, to produce a final internal mark. This second sheet is not returned to students, who only see that final mark on the sheet with the other elements listed above.
Putting this all together – plus some other elements – produces a single sheet feedback form that places the mark literally at the bottom, so that students (hopefully) will read their feedback (and feedforward) before getting to that part.
Our experience with the form has been generally very positive, with students noting the value of the forward-looking elements and the sense of where they stand.
However, there are still issues to be addressed. Firstly, there is a natural tendency to seek (and provide) a uniform evaluation, in the sense that if the work is ‘good’ then one might expect every element of the feedback to be good. This is self-evidently not always the case, but if all the tickboxes are in one category and the mark is in another, then students do notice and will ask about it.
Secondly, we do not specify what each element ‘means’, so there are as many interpretations of ‘addressing the question’ (for example) as there are markers. This is something we have discussed, but without a clear outcome: because the form is used across a wide variety of assessment types and titles, there can be very different legitimate interpretations, and specifying at School level what each element means risks losing a lot of the fine detail that colleagues bring to their feedback. Put differently, the openness gives colleagues buy-in to their feedback (rather than just pumping material into a rigid design), which in turn allows students to get feedback that is genuinely engaged and engaging.
By no means have we cracked the form, and the University’s discussions might take us in a different direction again this autumn, but I can heartily encourage you all to think about how feedback works for you, be that at the level of a form, or in how feedback is given (which is a whole different issue).
Alas, whatever feedback is given, however it is given, and whatever the system used, most students are focused only on the mark at the bottom. Most don’t read the feedback more than superficially and don’t see why they should spend much time thinking about it. There is also a huge gap in understanding and expectations between staff and students, which well-meaning, formulaic feedback forms do little to address. When students say they want better feedback, many really just want a magic bullet to improve their academic performance. The quickest route to greater student satisfaction is probably a dose of judicious grade inflation rather than more feedback-ology.
Lecky – I do recognise that issue and would love to say that we’ve solved it. However, we haven’t. A much larger part of our work on feedback has been to make sure that students get a strong, consistent message about what feedback is (and isn’t). That means being very explicit in saying ‘this is feedback’ and connecting all the individual elements during tutor meetings and group discussions. As for grade inflation, I can see the logic, but it’s not a solution at all, either in QA terms or in self-development terms.