As previously discussed, this semester I am attempting to research whether metacognitive exercises improve students’ learning — as measured by exam scores. My class completed the first survey and exam. A few initial impressions about the data and the process of collecting it:
Eighteen students completed the pre-exam survey a total of twenty-seven times. Two students submitted responses three times each. This demonstrates the importance of requiring that students include some kind of identifying information when they complete the survey, so that duplicate submissions can be removed from the data set.
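With identifying information attached, de-duplication is a one-pass cleanup. A minimal sketch, assuming each response carries a student identifier and a submission timestamp (the field names and sample values here are hypothetical, not the actual class data):

```python
# Keep only each student's most recent survey submission.
# Field names ("student_id", "submitted_at") are illustrative.
responses = [
    {"student_id": "s01", "submitted_at": "2024-09-01", "total": 38},
    {"student_id": "s01", "submitted_at": "2024-09-02", "total": 41},
    {"student_id": "s02", "submitted_at": "2024-09-01", "total": 35},
]

latest = {}
for r in sorted(responses, key=lambda r: r["submitted_at"]):
    latest[r["student_id"]] = r  # later submissions overwrite earlier ones

deduped = list(latest.values())
print(len(deduped))  # one row per student
```

Keeping the most recent submission is just one policy; keeping the first, or averaging duplicates, would work the same way.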
I suspect the survey data are skewed by the above-average effect or some other form of subject bias. Responses were coded from 1 to 5, with 1 being “never” and 5 being “always,” so the highest possible sum score across the ten survey questions is 50. The average for this survey was 40. I doubt students actually engaged in the study strategies referenced in the survey as frequently as they said they did.
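The coding scheme above can be sketched in a few lines. The Likert labels for the middle values (2 through 4) are my assumption; the source only specifies the endpoints:

```python
# Map Likert labels to the 1-5 coding; intermediate labels are assumed.
LIKERT = {"never": 1, "rarely": 2, "sometimes": 3, "often": 4, "always": 5}

def sum_score(answers):
    """Total across the ten survey items; ranges from 10 to 50."""
    return sum(LIKERT[a] for a in answers)

# A student answering "often" on every item lands at the class average.
print(sum_score(["often"] * 10))  # 40
```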
The average total score on the exam’s five multiple-choice questions was 7.7 out of 10. Given the small sample and the nature of the data, a statistical analysis comparing these scores against survey responses isn’t meaningful, but I did run a correlation in Excel, which yielded an unimpressive r of -0.12.
The exams in this course are extremely low stakes: the first and second exams are worth 25 points each, and the final exam is worth only 40 points, out of more than 1,000 points available from all graded items. That might have affected how diligently students studied for the exam.
Given the small size of the class and the usual host of possible confounding variables, I can already guess that I won’t be able to identify a relationship between the metacognition surveys and students’ exam performance. Repeatedly asking students about their study techniques might help them learn more, but I’m not going to be able to demonstrate it.