Discussion sections (‘tutorials’ at my institution) are frequently my most and least favorite part of teaching. When they work well, they can be more than the sum of their parts. Informed by the week’s readings, students will drive discussions in unexpected directions, critiquing the work and applying it to new examples. Small group activities will be dynamic and student-led, leaving participants with a deeper appreciation of the material and their own preconceptions.
When they fail, they fall to the lowest common denominator. Unprepared students waste valuable time derailing conversations, trying to hide the fact that they haven’t done the readings. Well-prepared students become resentful when, class after class, their peers scramble to extract Cliff’s Notes versions of the material from them. The best outcome to be hoped for in such a class is that students come away with a basic understanding of the essential arguments in the readings.
Other than avoiding the dreaded after-lunch time slot for sections, I’ve not been able to do much to ensure that a section runs smoothly. Some classes just worked and some didn’t. Some students simply don’t seem to be motivated by the threat of a poor participation grade or the promise of a vigorous discussion.
Last Fall, I tried a new strategy: grading preparation instead of participation. I allowed students in my States and Markets class to self-report their level of preparedness and claim a commensurate grade for each class session. I would then spend the first 10-15 minutes of the session calling on three random students (assisted by an on-screen random number generator) to verify that they were prepared. After the verification portion of the class was finished, I led the section as usual — sometimes with discussion and sometimes with activities.
The results were fantastic.
Since the grades were established at the beginning of each class session, students were much more willing to drive discussion and take it to a higher level. Though there were a fair number of students who did not do the readings each week, they clearly indicated this in the sign-in sheets and did not feel the need to disrupt the flow of the sessions.
Grading was easier, fairer, and more transparent. The quality of discussion was consistently better than in previous years. Ten students also favorably mentioned the ‘Preparation’ style of discussion section in their written feedback for the class (which also received higher quantitative student evaluations than previously).
I suspect the observed positive outcomes were due to: 1) students being afraid of being called on in front of their peers if they misrepresented their preparedness level; 2) students not feeling the need to divert the discussions; and, 3) well-prepared students getting rewarded with lively, informed discussions (several students referred to this in the written comments).
Though the positive outcomes may have been a fluke, unrelated to the change in assessment structure, I found the transparency and fairness of this assessment sufficient to make me want to use it in place of participation grades in the future. If it continues to lead to amazing discussion sessions, so much the better!
For the instructions that I provided to the students at the start of each class and the written comments that mention the assessment, click here.