Authenticity in Classroom Presentations

Nina’s recent post about students’ tendency to default to infopedia-style presentations reminded me of my own horrible experiences with students incoherently regurgitating information. It occurred to me that many of us — because of the nature of our academic experiences — incorrectly regard writing and presentation design as unrelated processes. Both are forms of communication, so why not apply the principles of one to the other?

I’ve written previously about how a good writing assignment clearly specifies the role of the author, the audience for whom the author is writing, and the format that the author is supposed to follow. We can construct guidelines for student presentations the same way:

  • What role are you performing when giving this presentation? What’s the presentation’s purpose?
  • To whom will you be delivering this presentation?
  • What is the format with which you will deliver the presentation?

If your instructions for presentations allow a class to answer the above questions with “student, other students, data dump,” then that’s what you’re going to get.

I find it easiest to grade format, so I assess that first. Student teams upload files of their presentations before they deliver them, and I grade the files using this rubric:

Presentation Rubric

After my feedback via the rubric, teams have time to improve format if they wish before delivering the presentations in class.

The audience for the presentations is the rest of the class, not me, so I use peer evaluation. Teams compete, and whichever team students select as the best earns points. For a sense of how these competitions have evolved over time, check out my posts here, here, here, and here; there are also Amanda’s and Simon’s big-picture posts on the subject. Or just click on the Presentations category in the right-hand sidebar.

As for role, I’ll be the first to admit that it can be very difficult to get students to step outside of themselves and take on the identities of government officials, corporate executives, or journalists. But providing clear expectations about audience and format goes a long way toward helping students achieve this third goal.

More on Twine

After my last post on building an entire course around the creation of interactive digital texts, Jason Enia asked if I had additional materials to share. Here they are:

The first-year seminars, of which this course is but one example, are part of my university’s recently revised general education curriculum. The following learning outcomes are specified for the seminars:

  • Utilize the liberal arts skills to analyze and evaluate significant texts and investigate mathematical and scientific processes.
  • Speak about significant issues in a cogent, analytical, and persuasive manner.
  • Write about significant issues in a cogent, analytical, and persuasive manner.

At the end of the semester, I am expected to assess students’ work on the course’s final Twine text against these learning outcomes with this rubric. I don’t use this rubric for grading. Instead, as I mention in my previous post, student teams grade each other’s work using a different rubric that I’ve designed specifically for this type of assignment.

My syllabus contains these passages:

Course Description

In a disaster, who lives, who dies, and why? This course examines the life and death decision making of individuals who have survived hurricanes, earthquakes, civil war, and genocide. With story-boarding and free software, you will learn how to design digital interactive stories on this subject that evolve according to the decisions made by the reader. The texts that you read and create will allow you to explore ethically complex decision making in high-risk, worst-case environments—the same kinds of situations faced by humanitarian aid workers, diplomats, military personnel, and, all too frequently, ordinary individuals.

Course Objectives

• Practice recognizing and creating thesis statements.
• Analyze arguments.
• Create and communicate persuasive, evidence-based narratives orally and in writing.
• Reflect on different cultures by examining biographical depictions of historical events.
• Evaluate the work of oneself and others.

Frequent but brief writing assignments ensure that students steadily read the book associated with each Twine.

That is basically everything — anything else specific to the Twine process is described in my last post.

Failure As A Skill

I’ve written previously about students learning less by not taking notes. My university is introducing first-year seminars in the fall semester, and I’m teaching one of them. Most of the incoming students can probably benefit from explicit instruction on study skills, and I’m thinking about how to incorporate study skills training into the course. This guide has given me some good ideas, as have the webpages linked in the post mentioned above.

Perhaps even more difficult than altering students’ study habits is changing their perception of failure. Students come to college thinking that they need to avoid failure rather than embrace it, recover from it, and learn from it. As I’ve mentioned previously, the quality of failure assignment at the end of the semester has been incredibly useful in this regard. Starting in the fall, I’ll also be putting what I’m calling a knowledge plan assignment at the beginning of the semester — students will identify the goals they have for a course and write about how they might achieve them. But I should also be teaching about the usefulness of failure throughout the semester rather than just referencing it at either end. Anne Sobel has some great advice on how to do this in a column in the Chronicle of Higher Education.

One of the best ways to teach a concept is to model it, and this is difficult for instructors who embody the pinnacle of achievement in institutionalized learning. Telling students that it is important to fail while keeping failure shrouded in secrecy is not very useful either. It is probably much more valuable to create a classroom environment in which students openly fail in front of their peers and feel that it is safe to do so.

To these ends, I’ve created a series of collaborative projects in my first-year seminar with student teams evaluating the work of other teams. I’m being deliberately mysterious about the exact nature of these projects because I plan on writing about them in more detail this fall, but they combine the use of a technology with creative writing. Students will beta test each other’s projects, scoring them with a rubric on form and function that I’m hoping they themselves will create at the beginning of the course.

As Anne Sobel points out, I need to ensure that students don’t evaluate their work solely on the basis of whether an end product exemplifies success. There needs to be a process in which students repeatedly practice and get feedback so that failure becomes a routine, easily-recognizable event. The trick will be to design a system in which this feedback comes from the students themselves.

Getting Loopy

The spring semester is coming to an end. I’m starting to mentally review what didn’t work well, or at least didn’t work as well as I had hoped it would. I’ve written before on the need to align assignment directions with a course’s intended learning outcomes. I’ve also written about the pros and cons of rubrics. While doing some grading recently, I had an “a-ha!” moment about how the links between learning outcomes, assignment directions, and grading rubrics need to be seamless.

My university uses Canvas by Instructure* as its so-called learning management system. Canvas has the neat feature of clickable rubrics. Although I still also write free-form comments, clicking around on a rubric is a very easy means of generating a numerical grade for a student’s work. So I’ve embedded a rubric with every assignment, and I’ve put “read the rubric below” at the beginning of every assignment’s directions.

I realized very late in the semester that in some assignments I failed to “close the loop” as the assessment specialists say. Assignment directions contained language that I lifted from the appropriate learning outcomes, but the rubrics didn’t.

For example, in one assignment I ask students to compare and contrast arguments contained in a book chapter and a peer-reviewed journal article. Per one of the learning outcomes for the course, students are supposed to discuss the methods used by each author. But this criterion is completely absent from the rubric.

Why did this happen? I had taken a rubric that I had created long ago for another course, modified the criteria and point distributions slightly, and slapped it into the assignment’s webpage. I should have been thinking harder about how rubrics are supposed to make my life easier and make the evaluation process more transparent for the students. Making sure my rubrics are appropriately worded is now on my to-do list for the summer. Yes, it’s a very exciting life we academics lead.

*I have no financial interest in Instructure, although I wish the company would hire me.

Rubrics Schmubrics

As I’ve mentioned before, I use rubrics a lot, but students rarely seem to check their work against them as they write. Students often don’t even read rubrics before starting an assignment. Nor have I found it beneficial to have students review each other’s work with rubrics for guidance.

One of my graduate students recently submitted something that might work better than a rubric in getting undergraduates to self-evaluate their writing. The graduate student asked “Why is my work good writing?” and provided three answers, referring to himself in the third person:

  • The author systematically negates the argument [presented in the assigned readings] for uncontrolled immigration.
  • The author discusses the topic of immigration in a pro and con fashion.
  • The statistics used [in the assigned readings] support the author’s argument about immigration.

I can see requiring students to answer the same question about their own written work with three responses of this sort. The complexity of thought reflected in a student’s answers is probably a good indication of how skilled that student is at writing, or at least of how much the student has thought about what he or she is going to submit to me for grading.

From Instructor To Student Evaluation

I spend a lot of time creating rubrics, but I regularly encounter students who disagree with the content of a rubric after they’ve earned a grade that is lower than they expected. I’ve stumbled across a website that makes it easy to create checklists with which students can evaluate themselves: PBL checklists.

Yes, these checklists are for K-12 students, but the criteria still apply at the university level — for example, “The sequence of ideas is logical.”

If the checklist is for an activity requiring collaboration or presentation, I recommend creating both a first-person version (“I . . .”) and another version for other students (“The presenter . . .”) to compensate for the better-than-average effect.

The MacGyver Mindset or Learned Helplessness?

Something of a response to Simon’s last post on recognizing a situation for what it is and using the tools at hand to improve it . . .

It’s commonly acknowledged in online course design that no two people will navigate a website in exactly the same manner. Placing the same information in multiple locations on a course website increases the probability that students will find it.

But, as with horses, leading a student to water does not mean he or she will necessarily drink.

This past week I received numerous emailed queries about the quality of failure assignment that I’ve made a part of all my courses this semester. Students were essentially asking, “What is this assignment and how do I do it?”

I’ve written before about my reluctance to return to testing students on the syllabus at the beginning of every semester. In an attempt to avoid this unpleasant task this time around, when creating each of my course websites I pasted the directions from the syllabus into the description for each assignment. I also included a rubric that appears at the bottom of each assignment’s webpage.

During a discussion with a colleague, I accidentally discovered that students, at least on this campus, don’t bother to read rubrics. At all. So now I’m inserting “Read the rubric below” at the top of each webpage — it will be the first thing they see when they go through the process of uploading and submitting an assignment.

Whether this will motivate students to read directions and evaluate their own work in relation to a rubric before submitting it to me, I don’t know, but I’m starting to wonder if the proliferation of instructions, encouragements, and good wishes only serves to reinforce the learned helplessness that my students already possess in abundance.

When Not Reading Is A Good Thing . . . And When It’s Bad

The light at the end of the tunnel is getting brighter — this is the last full week of classes this semester. While doing some preliminary grade calculations, I noticed something odd.

As I’ve mentioned previously, I often structure my courses around reading responses. In my undergraduate courses this semester, my syllabi stated that 1) students had to submit a certain number of responses, but if any of these responses did not earn full marks, they could 2) submit additional responses to earn the maximum amount allowed for that portion of their final grade. For example, if reading responses accounted for thirty percent of the final grade, and fifteen responses were required, students could continue to submit responses after their fifteenth in the hopes of earning that full thirty percent. I said nothing in my syllabi about “the more you turn in over and above the required number, the more extra credit you can earn.”
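The capped scoring scheme described above is easy to sketch in a few lines. This is purely my own illustration of the arithmetic — the function name, the fraction-based per-response scores, and the specific numbers are hypothetical, not anything from an actual gradebook:

```python
def reading_response_score(scores, required=15, weight=30.0):
    """Total the reading-response portion of a final grade.

    Each entry in `scores` is the fraction of full marks (0.0-1.0)
    earned on one response. Extra submissions beyond `required` can
    recover missed points but can never push the total past `weight`.
    """
    points_per_response = weight / required
    earned = sum(s * points_per_response for s in scores)
    return min(earned, weight)  # the cap: no extra credit past the maximum

# A student with 14 perfect responses and 3 half-credit ones has
# nominally earned 31 points, but the total is capped at 30.
print(reading_response_score([1.0] * 14 + [0.5] * 3))  # → 30.0
```

The point of the `min()` at the end is exactly the policy in the syllabus: additional responses only matter while the student is still below the maximum.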

Yet approximately a third of my students continued to submit reading responses after they had earned the maximum amount allowed — they were reading and writing even though doing so had no effect on their grade. It’s possible that some were doing this because they were so excited by the assignments, but I doubt it. Probably most, if not all, just misinterpreted the syllabus in a way that (at least in theory) benefited them in the long run.

I’ll offer up my darling and beautiful wife’s experience as a counter-example. She is also an academic, and her courses are also writing-centered. For one course, she built an extensive set of rubrics on the course’s website; a rubric was in plain view just below the directions for each assignment. Rubrics were also visible when students submitted assignments. Nothing to click on to open a pop-up window; the rubrics were right there, smack in the middle of the webpage.

In an end-of-semester evaluation, she asked the class whether they had found the rubrics useful. According to what students wrote on the evaluation forms, not a single one of them had read the rubrics.

Using Rubrics As Teaching Tools

In the early years of my teaching career, I adopted rubrics to speed up grading of student writing, but I’d see the same mistakes, from the same students, on paper after paper throughout the semester. The content of the rubric would leave as little an impression on students’ minds as the inked comments on their papers. And there were the usual end-of-semester complaints of “Why didn’t I get a good grade on this?”

Now I have students create their own rubrics (sort of). About a week before the first major writing assignment is due, I distribute a few short writing samples to students. Each sample is a modified anonymous passage written by students in prior semesters. Each passage contains a variety of writing errors — a vague or hard-to-find thesis statement, illogical organization, run-on sentences, spelling mistakes, etc. I usually hand out five or six different samples. Students read and write comments on the samples and then form groups with other students who have the same one.

While in groups, students compare notes and discuss how the passage they’ve examined can be improved. In the final stage of the exercise, one student from each group reports on the group’s findings to the rest of the class. I jot down notes and ask questions.

I then tell the class that I will create a rubric based on what they have identified as indicators of good and bad writing. Students are therefore responsible for following their own recommendations.

Students make similar comments every semester, so I don’t have to change my rubric much, if at all.