Crowd-Sourced Studying

Similar to Simon’s experience with the MCQ exam, when my students did last semester’s take-out quizzes and connections exercise, they typically formed groups and divided the questions among members, but they never verified each other’s answers. This semester, students in my comparative politics course are forming study groups on their own time yet blindly accepting their peers’ output: I can see the same incorrect answers propagating across the exams of multiple students.

To me this is extremely inefficient, especially given the extrinsic, grade-based motivation of most of my students, and I wonder why it happens. Are students applying a rational choice model in which the cost of verifying an answer outweighs the expected cost of submitting a wrong one? Would reducing the exam to a single question, on which students would score either 100 or 0, alter their behavior?

I’m reminded of something I just read in Revolution 2.0, by Wael Ghonim. A Facebook page he created was instrumental in launching the Egyptian revolution. As part of his strategy to build a sense of community among readers and get them to participate in events, he frequently conducted online polls. Perhaps getting students to survey themselves — across the entire class rather than just within a study circle — would enable them to detect erroneous answers.

I’m also reminded that the organizational behavior of many universities is often not a good model for the kind of collaboration we want to promote among students. It can be very difficult to get faculty, staff, and administrators to communicate across institutional units and exchange information.