The spring semester is coming to an end. I’m starting to mentally review what didn’t work well, or at least didn’t work as well as I had hoped it would. I’ve written before on the need to align assignment directions with a course’s intended learning outcomes. I’ve also written about the pros and cons of rubrics. While doing some grading recently, I had an “a-ha!” moment about how the links between learning outcomes, assignment directions, and grading rubrics need to be seamless.
My university uses Canvas by Instructure* as its so-called learning management system. Canvas has the neat feature of clickable rubrics. Although I still also write free-form comments, clicking around on a rubric is a very easy means of generating a numerical grade for a student’s work. So I’ve embedded a rubric with every assignment, and I’ve put “read the rubric below” at the beginning of every assignment’s directions.
I realized very late in the semester that in some assignments I failed to “close the loop” as the assessment specialists say. Assignment directions contained language that I lifted from the appropriate learning outcomes, but the rubrics didn’t.
For example, in one assignment I ask students to compare and contrast arguments contained in a book chapter and a peer-reviewed journal article. Per one of the learning outcomes for the course, students are supposed to discuss the methods used by each author. But this criterion is completely absent from the rubric.
Why did this happen? I had taken a rubric I created long ago for another course, modified the criteria and point distributions slightly, and slapped it into the assignment's webpage. I should have thought harder about how rubrics are supposed to make my life easier and make the evaluation process more transparent for students. Making sure my rubrics are appropriately worded is now on my to-do list for the summer. Yes, it's a very exciting life we academics lead.