Open Book, Open Mouth MCQ Exam

Yesterday saw my first run at a multiple-choice in-class exam in which students could bring in any materials and talk to each other. The idea came from my good colleagues on this blog, who suggested that even with such obvious advantages there would still be a range of performances. And they were right.

After some struggle in the writing, I finally had my 45 questions for the 45 minutes. I had previously suggested that, rationally, the group would divide the questions up and share answers, though I said so in the belief that such cooperation was unlikely. On this point, I have to admit, I was wrong. In the week beforehand, about two-thirds of the students had set up a Facebook group and agreed who would tackle which questions. About two-thirds brought in a laptop, a quarter a course textbook, and about a fifth had handwritten notes (only one mind map, though). Throughout the exam, there was gentle discussion.

Let’s start with the results. The average grade was about 73%, roughly 10% higher than we’d usually find. Moreover, the standard deviation was very small, about half what we’d normally see. This suggests that students did collaborate and that the collaboration was (on average) beneficial to their performance.

However, given what we knew of the level of cooperation, the surprise is that the grades weren’t even higher. And this was where neither I nor the students had fully thought things through. Sharing out questions works only as long as you trust your peers to find the right answers. Talking with students afterwards, it was apparent that not everyone had pulled their weight in the group, either by not posting at all or by posting an answer that was debatable (at best). Looking at the pattern of wrong answers, it’s clear that the group (and others too) typically made the same mistakes. In part this was down to not reading the question properly, but in part to the absence of robust verification mechanisms.

Some students haven’t been happy about the free-riding (coincidentally, one of the topics in the exam), although I’m still impressed that the group was willing to give it a go in the first place. While some students were clear that they didn’t want to join the group because they wanted to earn their grade on their own merits, most did respond to the incentive to work together.

So what have I learnt from this? Firstly, it has vindicated the original idea that access to materials is not the sole determinant of results. Secondly, the groups of students who have been most active in seminar classes achieved better results on average, suggesting that such engagement plays some role in supporting good preparation. Thirdly, those with weaker language skills or with no contact with other students suffered most: the former because of the phrasing of the questions, the latter because they got no guidance from peers (even if that guidance was not always unambiguous). Fourthly, reading the question is always important.

My main concern is that this exam became more about the practice of sitting exams than about the substance. Several comments were made to the effect that MCQs are not the best format for testing interpretative knowledge, with which I largely agree. I’m inclined to retain the exam for next year, but in a revised form: I’ll be getting more feedback from students in the coming weeks, as I want to explore further what they actually did.
