I’ve written previously about students learning less by not taking notes. My university is introducing first-year seminars in the fall semester, and I’m teaching one of them. Probably most of the incoming students can benefit from explicit instruction on study skills, and I’m thinking about how to incorporate study skills training into the course. This guide has given me some good ideas, as have the webpages linked in the post mentioned above.
Perhaps even more difficult than altering students’ study habits is changing their perception of failure. Students come to college thinking that they need to avoid failure rather than embrace it, recover from it, and learn from it. As I’ve mentioned previously, the quality of failure assignment at the end of the semester has been incredibly useful in this regard. Starting in the fall, I’ll also be putting what I’m calling a knowledge plan assignment at the beginning of the semester — students will identify the goals they have for a course and write about how they might achieve them. But I should also be teaching about the usefulness of failure throughout the semester rather than just referencing it at either end. Anne Sobel has some great advice on how to do this in a column in the Chronicle of Higher Education.
One of the best ways to teach a concept is to model it, and this is difficult for instructors who embody the pinnacle of achievement in institutionalized learning. Telling students that it is important to fail while keeping failure shrouded in secrecy is not very useful either. It is probably much more valuable to create a classroom environment in which students openly fail in front of their peers and feel that it is safe to do so.
To these ends, I’ve created a series of collaborative projects in my first-year seminar with student teams evaluating the work of other teams. I’m being deliberately mysterious about the exact nature of these projects because I plan on writing about them in more detail this fall, but they combine the use of a technology with creative writing. Students will beta test each other’s projects, scoring them with a rubric on form and function that I’m hoping they themselves will create at the beginning of the course.
As Anne Sobel points out, I need to ensure that students don’t evaluate their work solely on the basis of whether an end product exemplifies success. There needs to be a process in which students repeatedly practice and get feedback so that failure becomes a routine, easily recognizable event. The trick will be to design a system in which this feedback comes from the students themselves.
The spring semester is coming to an end. I’m starting to mentally review what didn’t work well, or at least didn’t work as well as I had hoped it would. I’ve written before on the need to align assignment directions with a course’s intended learning outcomes. I’ve also written about the pros and cons of rubrics. While doing some grading recently, I had an “a-ha!” moment about how the links between learning outcomes, assignment directions, and grading rubrics need to be seamless.
My university uses Canvas by Instructure* as its so-called learning management system. Canvas has the neat feature of clickable rubrics. Although I still also write free-form comments, clicking around on a rubric is a very easy means of generating a numerical grade for a student’s work. So I’ve embedded a rubric with every assignment, and I’ve put “read the rubric below” at the beginning of every assignment’s directions.
I realized very late in the semester that in some assignments I failed to “close the loop” as the assessment specialists say. Assignment directions contained language that I lifted from the appropriate learning outcomes, but the rubrics didn’t.
For example, in one assignment I ask students to compare and contrast arguments contained in a book chapter and a peer-reviewed journal article. Per one of the learning outcomes for the course, students are supposed to discuss the methods used by each author. But this criterion is completely absent from the rubric.
Why did this happen? I had taken a rubric that I had created long ago for another course, modified the criteria and point distributions slightly, and slapped it into the assignment’s webpage. I should have been thinking harder about how rubrics are supposed to make my life easier and make the evaluation process more transparent for the students. Making sure my rubrics are appropriately worded is now on my to-do list for the summer. Yes, it’s a very exciting life we academics lead.
*I have no financial interest in Instructure, although I wish the company would hire me.
As I’ve mentioned before, I use rubrics a lot, but students seem to rarely check their work against them as they are writing. Students often don’t even read rubrics before starting an assignment. Nor have I found it beneficial to have students review each other’s work with rubrics as guidance.
One of my graduate students recently submitted something that might work better than a rubric in getting undergraduates to self-evaluate their writing. The graduate student asked “Why is my work good writing?” and provided three answers, referring to himself in the third person:
- The author systematically negates the argument [presented in the assigned readings] for uncontrolled immigration.
- The author discusses the topic of immigration in a pro and con fashion.
- The statistics used [in the assigned readings] support the author’s argument about immigration.
I can see requiring students to come up with three answers of this sort to the above question for their own written work. The complexity of thought reflected by a student’s answers is probably a good indication of how skilled that student is at writing, or at least how much the student has thought about what he or she is going to submit to me for grading.
I spend a lot of time creating rubrics, but I regularly encounter students who disagree with the content of a rubric after they’ve earned a grade that is lower than they expected. I’ve stumbled across a website that makes it easy to create checklists with which students can evaluate themselves: PBL checklists.
Yes, these checklists are for K-12 students, but the criteria still apply at the university level — for example, “The sequence of ideas is logical.”
If the checklist is for an activity requiring collaboration or presentation, I recommend creating both a first-person version (“I . . .”) and another version for other students (“The presenter . . .”) to compensate for the better-than-average effect.
Something of a response to Simon’s last post on recognizing a situation for what it is and using the tools at hand to improve it . . .
It’s commonly acknowledged in online course design that no two people will navigate a website in exactly the same manner. Placing the same information in multiple locations on a course website increases the probability that students will find it.
But, as with horses, leading a student to water does not mean he or she will necessarily drink.
This past week I received numerous emailed queries about the quality of failure assignment that I’ve made a part of all my courses this semester. Students were essentially asking, “What is this assignment and how do I do it?”
I’ve written before about my reluctance to return to testing students on the syllabus at the beginning of every semester. In an attempt to avoid this unpleasant task this time around, when creating each of my course websites I pasted the directions from the syllabus into the description for each assignment. I also included a rubric that appears at the bottom of each assignment’s webpage.
From a discussion with a colleague, I accidentally discovered that students, at least on this campus, don’t bother to read rubrics. At all. So now I’m inserting “Read the rubric below” at the top of each webpage — it will be the first thing they see when they go through the process of uploading and submitting an assignment.
Whether this will motivate students to read directions and evaluate their own work against a rubric before submitting it to me, I don’t know. But I’m starting to wonder whether the proliferation of instructions, encouragements, and good wishes only reinforces the learned helplessness that my students already possess in abundance.
The light at the end of the tunnel is getting brighter — this is the last full week of classes this semester. While doing some preliminary grade calculations, I noticed something odd.
As I’ve mentioned previously, I often structure my courses around reading responses. In my undergraduate courses this semester, my syllabi stated that students had to submit a certain number of responses and that, if any of these responses did not earn full marks, they could submit additional responses to earn the maximum amount allowed for that portion of their final grade. For example, if reading responses accounted for thirty percent of the final grade, and fifteen responses were required, students could continue to submit responses after their fifteenth in the hopes of earning that full thirty percent. I said nothing in my syllabi about “the more you turn in over and above the required number, the more extra credit you can earn.”
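The capping policy above amounts to a simple calculation. As a sketch (the function name and the two-points-per-response scale are my own illustration, derived from the thirty-percent, fifteen-response example in the text):

```python
def reading_response_points(scores, max_points=30.0):
    """Sum a student's reading-response scores, capped at the
    maximum allowed for that portion of the final grade."""
    return min(sum(scores), max_points)

# Fifteen required responses worth 2 points each, one of which
# earned only 1.5 points: 29.5 of a possible 30.
required = [2.0] * 14 + [1.5]

# An extra response can only recover the missing 0.5 points;
# further submissions have no effect on the grade.
print(reading_response_points(required + [2.0]))  # 30.0
```

Under this policy there is no incentive to submit beyond the cap, which is what makes the continued submissions described below so surprising.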
Yet approximately a third of my students continued to submit reading responses after they had earned the maximum amount allowed — they were reading and writing even though doing so had no effect on their grade. It’s possible that some were doing this because they were so excited by the assignments, but I doubt it. Probably most, if not all, just misinterpreted the syllabus in a way that (at least in theory) benefited them in the long run.
I’ll offer up my darling and beautiful wife’s experience as a counter-example. She is also an academic, and her courses are also writing-centered. For one course, she built an extensive set of rubrics on the course’s website; a rubric was in plain view just below the directions for each assignment. Rubrics were also visible when students submitted assignments. Nothing to click on to open a pop-up window; the rubrics were right there, smack in the middle of the webpage.
In an end-of-semester evaluation, she asked the class whether they had found the rubrics useful. According to what students wrote on the evaluation forms, not a single one of them had read the rubrics.
In the early years of my teaching career, I adopted rubrics to speed up grading of student writing, but I’d see the same mistakes, from the same students, on paper after paper throughout the semester. The content of the rubric would leave as little of an impression on students’ minds as the inked comments on their papers. And there were the usual end-of-semester complaints of “Why didn’t I get a good grade on this?”
Now I have students create their own rubrics (sort of). About a week before the first major writing assignment is due, I distribute a few short writing samples to students. Each sample is a modified anonymous passage written by students in prior semesters. Each passage contains a variety of writing errors — a vague or hard-to-find thesis statement, illogical organization, run-on sentences, spelling mistakes, etc. I usually hand out five or six different samples. Students read and write comments on the samples and then form groups with other students who have the same sample.
While in groups, students compare notes and discuss how the passage they’ve examined can be improved. In the final stage of the exercise, one student from each group reports on the group’s findings to the rest of the class. I jot down notes and ask questions.
I then tell the class that I will create a rubric based on what they have identified as indicators of good and bad writing. Students are therefore responsible for following their own recommendations.
Students make similar comments every semester, so I don’t have to change my rubric much, if at all.