Specifications Grading

The Chronicle of Higher Education had a great post this past week about Specifications Grading, a style of grading that moves away from holistic (“too light–feels like a C”) approaches into the world of contract grading, where specific criteria are set that, if met, earn students a particular grade.

Thornton Melon definitely believed that college instructors grade by weight.

I hate grading, so I was eager to read more. This style of grading sets clear, achievable metrics for success on a given assignment and ties that success to the learning outcomes for both the assignment and the course. Grading itself becomes simple: either students met the metrics or they did not. Students are given more choice in how to approach their work for the course, but at the same time they must complete enough work at a satisfactory level to pass the class or earn their desired grade. This system, if done correctly, can eliminate student grade complaints, put students in charge of their own learning, AND reduce faculty grading time (YES!). What's not to love?
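To make the contract logic concrete, here is a minimal sketch, in Python, of how a specs-graded course might map satisfactory work onto final grades. The bundle sizes and thresholds below are entirely invented for illustration, not anything from the Chronicle piece or a real syllabus:

```python
# Hypothetical illustration of specifications grading: each piece of work
# either meets the spec (True) or it does not (False); the final grade
# depends only on how many satisfactory pieces a student completes.

GRADE_THRESHOLDS = [  # (minimum satisfactory assignments, grade) -- invented numbers
    (12, "A"),
    (10, "B"),
    (8, "C"),
    (6, "D"),
]

def final_grade(assignments_met_spec: list[bool]) -> str:
    """Return the highest grade whose threshold the student has reached."""
    satisfactory = sum(assignments_met_spec)
    for minimum, grade in GRADE_THRESHOLDS:
        if satisfactory >= minimum:
            return grade
    return "F"

# Example: a student who meets the spec on 10 of 13 assignments earns a B.
print(final_grade([True] * 10 + [False] * 3))  # -> "B"
```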


Authenticity in Classroom Presentations

Nina’s recent post about students’ tendency to default to infopedia-style presentations reminded me of my own horrible experiences with students incoherently regurgitating information. It occurred to me that many of us — because of the nature of our academic experiences — incorrectly regard writing and presentation design as unrelated processes. They both are forms of communication, so why not apply the principles of one to the other?

I’ve written previously about how a good writing assignment clearly specifies the role of the author, the audience for whom the author is writing, and the format that the author is supposed to follow. We can construct guidelines for student presentations the same way:

  • What role are you performing when giving this presentation? What’s the presentation’s purpose?
  • To whom will you be delivering this presentation?
  • What is the format with which you will deliver the presentation?

If your instructions for presentations allow a class to answer the above questions with “student, other students, data dump,” then that’s what you’re going to get.

I find it easiest to grade format, so I assess that first. Student teams upload files of their presentations before they deliver them, and I grade the files using this rubric:

Presentation Rubric

After my feedback via the rubric, teams have time to improve format if they wish before delivering the presentations in class.

The audience for the presentations is the rest of the class, not me, so I use peer evaluation. Teams compete and whichever team students select as the best earns points. For a sense of how these competitions have evolved over time, check out my posts here, here, here, and here, and there’s also Amanda’s and Simon’s big-picture posts on the subject. Or just click on the Presentations category in the right-hand sidebar column on your screen.

As for role, I’ll be the first to admit that it can be very difficult to get students to step outside of themselves and take on the identities of government officials, corporate executives, or journalists. But providing clear expectations about audience and format goes a long way toward helping students achieve this third goal.

More on Twine

After my last post on building an entire course around the creation of interactive digital texts, Jason Enia asked if I had additional materials to share. Here they are:

The first-year seminars, of which this course is but one example, are part of my university’s recently-revised general education curriculum. The following learning outcomes are specified for the seminars:

  • Utilize the liberal arts skills to analyze and evaluate significant texts and investigate mathematical and scientific processes.
  • Speak about significant issues in a cogent, analytical, and persuasive manner.
  • Write about significant issues in a cogent, analytical, and persuasive manner.

At the end of the semester, I am expected to assess students’ work on the course’s final Twine text against these learning outcomes with this rubric. I don’t use this rubric for grading. Instead, as I mention in my previous post, student teams grade each other’s work using a different rubric that I’ve designed specifically for this type of assignment.

My syllabus contains these passages:

Course Description

In a disaster, who lives, who dies, and why? This course examines the life and death decision making of individuals who have survived hurricanes, earthquakes, civil war, and genocide. With story-boarding and free software, you will learn how to design digital interactive stories on this subject that evolve according to the decisions made by the reader. The texts that you read and create will allow you to explore ethically complex decision making in high-risk, worst-case environments—the same kinds of situations faced by humanitarian aid workers, diplomats, military personnel, and, all too frequently, ordinary individuals.

Course Objectives

  • Practice recognizing and creating thesis statements.
  • Analyze arguments.
  • Create and communicate persuasive, evidence-based narratives orally and in writing.
  • Reflect on different cultures by examining biographical depictions of historical events.
  • Evaluate the work of oneself and others.

I use frequent but brief writing assignments to ensure that students steadily read the book associated with each Twine text.

That is basically everything — anything else specific to the Twine process is described in my last post.

Failure As A Skill

I’ve written previously about students learning less by not taking notes. My university is introducing first-year seminars in the fall semester, and I’m teaching one of them. Most of the incoming students can probably benefit from explicit instruction on study skills, and I’m thinking about how to incorporate study skills training into the course. This guide has given me some good ideas, as have the webpages linked in the post I mention above.

Perhaps even more difficult than altering students’ study habits is changing their perception of failure. Students come to college thinking that they need to avoid failure rather than embrace it, recover from it, and learn from it. As I’ve mentioned previously, the quality of failure assignment at the end of the semester has been incredibly useful in this regard. Starting in the fall, I’ll also be putting what I’m calling a knowledge plan assignment at the beginning of the semester: students will identify the goals they have for a course and write about how they might achieve them. But I should also be teaching about the usefulness of failure throughout the semester rather than just referencing it at either end. Anne Sobel has some great advice on how to do this in a column in the Chronicle of Higher Education.

One of the best ways to teach a concept is to model it, and this is difficult for instructors who personify the pinnacle of achievement in institutionalized learning. Telling students that it is important to fail while keeping failure shrouded in secrecy is not very useful either. It is probably much more valuable to create a classroom environment in which students openly fail in front of their peers and feel that it is safe to do so.

To these ends, I’ve created a series of collaborative projects in my first-year seminar with student teams evaluating the work of other teams. I’m being deliberately mysterious about the exact nature of these projects because I plan on writing about them in more detail this fall, but they combine the use of a technology with creative writing. Students will beta test each other’s projects, scoring them with a rubric on form and function that I’m hoping they themselves will create at the beginning of the course.

As Anne Sobel points out, I need to ensure that students don’t evaluate their work solely on the basis of whether an end product exemplifies success. There needs to be a process in which students repeatedly practice and get feedback so that failure becomes a routine, easily-recognizable event. The trick will be to design a system in which this feedback comes from the students themselves.

Getting Loopy

The spring semester is coming to an end. I’m starting to mentally review what didn’t work well, or at least didn’t work as well as I had hoped it would. I’ve written before on the need to align assignment directions with a course’s intended learning outcomes. I’ve also written about the pros and cons of rubrics. While doing some grading recently, I had an “a-ha!” moment about how the links between learning outcomes, assignment directions, and grading rubrics need to be seamless.

My university uses Canvas by Instructure* as its so-called learning management system. Canvas has the neat feature of clickable rubrics. Although I still also write free-form comments, clicking around on a rubric is a very easy means of generating a numerical grade for a student’s work. So I’ve embedded a rubric with every assignment, and I’ve put “read the rubric below” at the beginning of every assignment’s directions.
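For what it’s worth, the arithmetic behind a clickable rubric is nothing fancier than summing the points of whichever rating you select for each criterion. Here is a rough sketch of that idea in plain Python; this is not Canvas’s actual API, and the criterion names and point values are made up:

```python
# Rough sketch of rubric-based scoring: for each criterion the grader picks
# one rating, and the assignment score is the sum of the chosen point values.
# Criterion names and points are invented; this is not Canvas's API.

rubric = {
    "Thesis":       {"clear": 5, "vague": 3, "missing": 0},
    "Evidence":     {"strong": 5, "thin": 2, "missing": 0},
    "Organization": {"logical": 5, "disjointed": 2},
}

def score(selections: dict[str, str]) -> int:
    """Sum the points for the rating selected under each criterion."""
    return sum(rubric[criterion][rating] for criterion, rating in selections.items())

# "Clicking" one rating per criterion yields the numerical grade.
print(score({"Thesis": "clear", "Evidence": "thin", "Organization": "logical"}))  # -> 12
```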

I realized very late in the semester that in some assignments I failed to “close the loop” as the assessment specialists say. Assignment directions contained language that I lifted from the appropriate learning outcomes, but the rubrics didn’t.

For example, in one assignment I ask students to compare and contrast arguments contained in a book chapter and a peer-reviewed journal article. Per one of the learning outcomes for the course, students are supposed to discuss the methods used by each author. But this criterion is completely absent from the rubric.

Why did this happen? I had taken a rubric that I had created long ago for another course, modified the criteria and point distributions slightly, and slapped it into the assignment’s webpage. I should have been thinking harder about how rubrics are supposed to make my life easier and make the evaluation process more transparent for the students. Making sure my rubrics are appropriately worded is now on my to-do list for the summer. Yes, it’s a very exciting life we academics lead.

*I have no financial interest in Instructure, although I wish the company would hire me.

Rubrics Schmubrics

As I’ve mentioned before, I use rubrics a lot, but students rarely seem to check their work against them as they are writing. Students often don’t even read rubrics before starting an assignment. Nor have I found it beneficial to have students use rubrics as guidance when reviewing each other’s work.

One of my graduate students recently submitted something that might work better than a rubric in getting undergraduates to self-evaluate their writing. The graduate student asked “Why is my work good writing?” and provided three answers, referring to himself in the third person:

  • The author systematically negates the argument [presented in the assigned readings] for uncontrolled immigration.
  • The author discusses the topic of immigration in a pro and con fashion.
  • The statistics used [in the assigned readings] support the author’s argument about immigration.

I can see requiring students to come up with three answers of this sort to the above question for their own written work. The complexity of thought reflected by a student’s answers is probably a good indication of how skilled that student is at writing, or at least how much the student has thought about what he or she is going to submit to me for grading.

From Instructor To Student Evaluation

I spend a lot of time creating rubrics, but I regularly encounter students who disagree with the content of a rubric after they’ve earned a grade that is lower than they expected. I’ve stumbled across a website that makes it easy to create checklists with which students can evaluate themselves: PBL checklists.

Yes, these checklists are for K-12 students, but the criteria still apply at the university level — for example, “The sequence of ideas is logical.”

If the checklist is for an activity requiring collaboration or presentation, I recommend creating both a first-person version (“I . . .”) and another version for other students (“The presenter . . .”) to compensate for the better-than-average effect.