Fall 2020: Looking Backward and Forward

Time once again to evaluate my teaching successes and failures. As usual, I will focus on the failures, but to start, a happy accident: discovering the settings options in Google Forms. I still use Google Forms for students’ teammate evaluations. I gave students these instructions for this past semester’s evaluations:

You have 10 points to distribute across members of your team according to each person’s contribution to team projects. These projects include course notes on Google Docs, reading response breakout discussions, and presentations. For example: If one person did all the work, award that person 10 points and the other members of the team 0 points. If one person did 70 percent of the work, a second person did 30 percent of the work, and the rest of the team did 0 percent of the work, award the first person 7 points, the second person 3 points, and everyone else 0 points. Total points awarded across all members of your team must equal 10 or your response will be discarded. I will use people’s responses to calculate an average ranking for each member of your team. This ranking determines the teammate evaluation portion of your course grade [as many as 50 points out of more than 1,000, or approximately five percent].

So, in my mind, this is a very low-stakes assessment, without forced ranking. The five members of one team, in fact, noticed that they would all earn the full 50 points if they gave each other scores of 2. To me it was yet one more sign of their ability to collaborate productively.
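To make the arithmetic concrete, here is a rough sketch in Python. The discard rule comes straight from the instructions above; the scaling rule, in which an average equal to an even split of the 10 points earns the full 50, is just one plausible mapping consistent with that team’s reasoning, not a formula spelled out anywhere in the syllabus.

```python
# Rough sketch of the teammate evaluation arithmetic (illustrative only).
# Assumption: an average score equal to an even split (10 / team size)
# earns the full 50 points; lower averages earn proportionally fewer.

def teammate_points(responses, max_points=50):
    """responses: one dict per respondent, mapping teammate name -> points awarded."""
    # Per the instructions, discard any response whose points do not total 10.
    valid = [r for r in responses if sum(r.values()) == 10]
    team = valid[0].keys()
    fair_share = 10 / len(team)  # e.g., 2 points per person on a five-member team
    return {
        member: round(min(sum(r[member] for r in valid) / len(valid) / fair_share, 1) * max_points)
        for member in team
    }

# The five-member team that gave everyone a score of 2:
even_split = [{m: 2 for m in "ABCDE"} for _ in range(5)]
print(teammate_points(even_split))  # every member earns the full 50 points
```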

But as usual, some other students submitted a completed Google Form multiple times or awarded their team members scores that didn’t total 10, such as 10, 9, 9, 9, and 9. Then, while designing a Google Form for another class, I serendipitously clicked the settings (gear) icon while trying to do something else and was greeted by this dialog box:

As shown, users can capture respondents’ email addresses and limit people to one response. Checking these boxes makes it easy to prevent, or at least diagnose, the errors described above. One can even convert a Google Form into an auto-graded quiz by going to the Quizzes tab. Maybe this is all quite familiar to you, but it was new to me. I’ll be making use of these settings from this point forward.

Eight students (out of fifty-four) emailed me complaints about the teammate evaluation portion of the grade once I had entered the information in the Canvas LMS. They perceived earning 20 or 30 points out of 50 as “failing,” even though I explicitly inform students in multiple ways that the final course grade is based solely on total points accumulated over the semester, not on the results of individual assessment instruments. I think students’ mistaking perception for reality is in part due to my listing the teammate evaluation as a 50-point assignment on Canvas. When students don’t earn the maximum possible points shown, they react as if I’m at fault for their performance. Next semester, I will see whether Canvas allows me to label this item as worth 0 points, to make it look like the teammate rankings are “bonus” points.

When Students Ignore Feedback II

This semester I am teaching fifty-four undergraduate students in two courses. At present, ten of these students have D or F averages. One of them has already stated that he will withdraw; it’s not my responsibility to ensure that he actually follows through before the semester ends, so I will limit my analysis to the remaining nine students.

Both courses have had six assignments so far, which makes fifty-four separate opportunities for the nine students to receive feedback on their writing. But only forty-one of these assignments, or approximately 76%, were submitted.

Students saw my feedback, in the form of brief comments and a marked rubric, on only nine of the forty-one submitted assignments, or approximately 22%. How do I know this? The “student viewed” time stamp feature of Canvas, discussed previously and shown below. If there isn’t a “Student Viewed Document” message, the student didn’t look at the assignment after I graded it.
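For anyone checking my math, the arithmetic behind those percentages is nothing fancier than this, using the counts given above:

```python
# Feedback opportunities for the nine students with D or F averages.
students, assignments = 9, 6
opportunities = students * assignments            # 54 chances to receive feedback
submitted, feedback_viewed = 41, 9

print(f"submitted: {submitted / opportunities:.0%}")          # 76%
print(f"feedback viewed: {feedback_viewed / submitted:.0%}")  # 22%
```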

Six of the nine students have not looked at any of my feedback on past assignments, despite earning failing grades. I will now email them to explain the likely relationship between their grades and their unwillingness to take advantage of the feedback I provide. This action on my part assumes that students will read the email, which isn’t guaranteed. But if this doesn’t catch their attention, mid-semester grade reports might.

Creating Community III: Memo Exercise

More on creating community in an online course (previous posts are here and here): a collaborative writing exercise.

For several years, I’ve used memos as an authentic writing assignment. Or tried to. Often the results haven’t met my expectations. I have had students write multiple complete memos in a course, all in the same format, assuming that they would apply my feedback from the previous memo to the next one. Instead, students repeat the same mistakes over and over again.

For the upcoming semester, I decided to turn memo writing into a group activity, on the chance that collaborating with peers might produce better results. As in previous semesters, I will provide source material and the prompt. In a graded individual assignment, each student will write only specific portions of a memo, described below in the sample instructions and rubric. Students will be able to use a new memo template and guidelines that will be available on Canvas.


Looking Backward and Forward

Expanding on my last post on failures from this semester:

From where I stand, information literacy skills are important, because they help one identify and demolish specious claims made by authority figures. An assignment that, for example, forces students to locate three peer-reviewed journal articles is practice in finding credible information. It also allows students to determine whether a topic is suitable for a semester-long research project.

To me, these outcomes are both beneficial and rather obvious. But from the students’ perspective, the assignment could simply be yet another meaningless hoop to jump through on the way to getting another A+ on a transcript. Given the sources many students cited in the different stages of their storymap projects, it looks like too many of them customarily treat research as exactly that kind of hoop-jumping.

Therefore, in future courses that involve research projects, I should create assignments that are limited to the task of locating scholarly sources and place those assignments at the beginning of the semester. I should demonstrate why this skill is useful outside of the classroom.

I’ve noticed a similar problem with student writing — really basic errors that indicate a lack of proofreading. I don’t expend more effort evaluating a student’s work than the student did creating it. But I do know that sloppy writing indicates sloppy thinking and that the former advertises one’s propensity for the latter to the rest of the world. Again, I should demonstrate early in the semester why it’s important to proofread one’s work before it reaches an audience. My favorite example? The missing Oxford comma that cost a dairy company US$5 million.

I’m also seeing, from the last few journal article worksheets students are submitting, that many still do not have a clear understanding of how evidence-based arguments are constructed in academic literature. An author typically poses a research hypothesis or question at the beginning of a journal article and concludes with the same hypothesis or question reworded as a declarative statement. For example, “Why is the sky blue?” in the introduction becomes “The sky is blue because . . .” in the conclusion. Yet on worksheets some students are writing that the hypothesis is about one thing while the conclusion is about some other thing. So again, students need practice in understanding the components of a written argument in scholarly literature, and that practice needs to happen early in the semester.

In principle I’m talking about scaffolding. But many of my assignments are attempts at getting students to build several different skills simultaneously. I think I need to disentangle my goals for these assignments so that they target only one skill at a time.

More Reasons to Go Back to Basics?

Per my philosophy of never letting a good crisis go to waste, I’ve already started thinking about what has and hasn’t worked in this unusual semester.

In my globalization course, students seem to have sufficiently mastered the tools needed to create storymaps. To my surprise, nearly all of them learned how to use the software during two in-class lessons led by our digital scholarship librarian.

Yet with the semester almost over, many still don’t seem to understand that U.S. News and Forbes are not peer-reviewed academic journals. If I teach this course again — it’s rotating to a colleague next year — I should probably include assignments at the beginning of the semester in which students are explicitly graded on their ability to locate appropriate sources. Currently this information literacy skill is assessed only through the rubric attached to the three storymap assignments.

In my comparative politics course, I will soon try to run my Gerkhania simulation online for the first time. To make things even more interesting, the class is down to eight students and the simulation is heavily modified from previous versions. I’ll report what happens in a few weeks.

When I moved this course online a month ago, I converted a classroom exercise in analyzing journal articles into several graded assignments. In this type of assignment, students have to answer these questions:

  1. Article’s subject—what is the question, puzzle, or problem examined?
  2. What and where is the thesis?
  3. What is the theoretical perspective (rational actor, culture, structure)? How do you know this?
  4. What are the independent variables (causes) examined?
  5. What is the dependent variable (effect) examined?
  6. What is the conclusion of the author(s)?

My reason for doing this, other than filling up the remainder of an extended semester? It had become clear before the campus closed that students were often skipping over assigned journal articles and reading only the accompanying news stories that illustrated the articles’ theoretical arguments.

Some students are still unable to correctly identify an author’s thesis or conclusions — despite the classroom exercises during the first half of the semester. So in the future, students are going to get more instruction and more (graded) practice in how to read academic literature.

Yet more tips on moving teaching online

Chad and Amanda have already given their ideas on the crash move to online, so here’s my version too, in graphic form.

You can download a PDF version here.

As the others have said, you need to triage your teaching: what absolutely must happen and what’s just nice to have?

If you’re struggling, then ask colleagues, both within your institution and beyond: there are lots of great people on Twitter (start with the @ALPSblog follow list).

When Students Ignore Feedback

While I don’t comment on student writing nearly as much as some professors do, I expect students to at least read what I do write. A colleague recently pointed out that our Canvas LMS displays a date stamp at the top right of the SpeedGrader web page when a student has viewed a previously submitted assignment after an instructor has commented on it. I had never noticed this before, I guess because the date stamp’s font is rather small. Here it is, indicated by the red arrow:

This feature became very useful in a course in which students are required to write a series of memos that all have the same format. Last week, a student taking the course sent me this email:

I’m not sure what is expected from the assignments, my memo 3 was completely different from 2 yet your comment says see comments about memo 2. I am a second semester senior doing grad classes that had a 3.6 gpa last semester. Somehow I’m failing every single assignment in a freshman level class, while still attending every single class except one and participating in class basically every session. 

I looked at the student’s submissions for memos 1, 2, and 3 — no date stamp. My comments had been ignored. My reply to the student’s email:

The memo is a standard method of efficiently communicating information that is used in a wide variety of professional environments. I’m surprised you haven’t yet had much of an opportunity to practice this form of writing, so here is what I am willing to do: you can earn up to 10 points by emailing me by noon on Friday a memo that discusses how well you incorporated my feedback on your Memo 1, provided by my comments on your work on Canvas, into your Memo 2, and the same for Memo 3 in respect to my comments on your Memo 2.

Completion of my “extra credit opportunity” would have required the student to admit that he had not read my comments and thus ignored the feedback I had provided.

The student did not respond.

Design Fail or Attention Fail?

I recently graded a writing assignment for one of my courses and I’m wondering if it’s an example of “You can lead students to the education but you can’t make them learn.”

The instructions for the assignment:

You have been given the task of submitting a memo to the National Security Council that answers the following question:

  • Will Nigeria remain a single state or divide into separate states?

Download the memo template; use it to format your work. Turn the question above into a declarative sentence and use it as the memo’s executive summary. Write two subsections that support your argument using evidence taken from course readings. Each subsection should be a single paragraph. Reference sources with in-text parenthetical citations.

The information above was repeated in the memo template itself and in the rubric attached to the assignment. From my perspective, the assignment is totally straightforward and the assessment criteria are completely transparent. Yet . . .

Several students wrote memos on the question of “Should?” rather than the actual question of “Will?”

Many students also failed the “Turn the question above into a declarative sentence . . .” part. A few representative examples of what they submitted as an executive summary:

  • “In the current course, Nigeria will see a deterioration in multiple states because of the lack of agreement over these issues and the failure of the government to uphold a true democracy. Nigeria is a fledgling state on the verge of an inner collapse due to current trends.”
  • “The United States should help Nigeria dissolve into multiple sovereign states, by mediating the separation process. Nigeria is currently facing a slew of ethnic and social conflicts the country is made up of 36 states which legally sew divisions between regional outsiders and regional natives, this has sparked ethnic and religious conflicts.”
  • “The best path forward for Nigeria is to remain a single state. Splitting the nation up now would only be detrimental to Nigeria’s ability to control its sphere of influence across the African continent. Splitting Nigeria into multiple states would challenge the work that has gone on for years to make it politically equitable and would not account for the vast cultural differences of the nation.”

And so on.

I’m wondering how I should interpret what happened. Is this simply a case of failing to follow directions? If not, I don’t know how I can make things more obvious.

Modeling “Good” Writing

Today we have a guest post from Colin Brown, assistant teaching professor in the Department of Political Science at Northeastern University. He can be reached at colin [dot] brown (at) northeastern [dot] edu.

A few weeks ago, Shana Gadarian made a point on Twitter about explicitly teaching writing that I strongly endorsed. Getting our students to write better will, at minimum, make our job easier. If we’re lucky, it might actually promote better thinking.

For upper-level students, very open-ended prompts sometimes lead to really creative and insightful thinking—and a dissertation is of course the vaguest prompt of all. But our expectations often rest on implicit assumptions about what we think of as “good” political science. Amanda Rosen has written about this in the context of transparency. As she points out, telling students “write a five-page essay” doesn’t acknowledge that “essay” means different things in different fields, and many of our students will not yet know what this even means in political science.

Clarity is critical for essay prompts, especially for introductory students. While long, detailed instructions might help point students toward what they should think about, students new to a field often don’t have the context to know what is most important in a long prompt. To them, any sentence with a question mark might appear to be equally important—causing them to focus on what we thought was a minor point and producing a disconnect between what we want to assess and what we actually assess when our implicit expectations aren’t met.

Here are what I think were a failed attempt and a relatively successful one at doing this in my past semester’s intro comparative politics class. Students told me that while the instructions in the first were explicit, it was hard to know where to start and which parts to emphasize. With the latter prompt, they said it was clear not only what to do but also why they were doing it.

One question I’ve found to be a bit polarizing in talking with colleagues is whether to provide model papers or templates. Is it a better way to make our aims clear, or does it cause students to just parrot back the template? I’ve always found myself on the side of providing models. Graff and Birkenstein’s They Say, I Say has solidified my thinking about why. They identify common rhetorical devices that mirror the most common academic ways of thinking, which they turn into templates that, for example, force students to write down the most obvious counterarguments. Experienced writers have read enough in the field to be able to pick up on these techniques implicitly, but beginners by definition have not. Graff and Birkenstein argue, and I think rightly, that this goes beyond rhetoric to actually learning the ways of thinking. Some students may not learn how to think about the flaws in their own argument, or even that they need to, until they are forced to write them down with this kind of template.

In my own teaching, I’ve found it hard to explain in an abstract sense the need for clear writing over “beautiful” writing—and what students think is “beautiful” writing often feels cluttered and verbose to us. But when students see each other’s models and observe how much they actually understand from clear writing versus what they think is “good” writing, they start to self-diagnose their own excessive prose.

One challenge is that writing for politics requires some discipline-specific skills that might be more or less amenable to templates. Sarah James, George Soroka and I have a forthcoming JPSE piece on adapting tools from K-12 and composition studies for political science writing. But what we actually want from “good” political science writing often seems to be more folk wisdom than anything clearly defined—Ian Anson’s 2017 article on meaning-making is a notable and valuable exception—so as part of showing students what we want, there’s room for us to make this clearer to ourselves.

 Sarah, George and I will be leading a workshop at APSA TLC 2020 on implementing rubrics to improve student writing—Friday Feb. 7 @ 4:15 pm.

Writing as Learning

My last post discussed writing as a professional endeavor. Today: writing as a device for learning; i.e., why and how we as teachers assign writing to students.

Generally we present our students with some form of expository writing task. Perhaps we call it thesis-driven, discipline-oriented, argumentative, or research-based. Regardless of the label, there is an assumption that students will locate relevant primary data using methods they understand how to use, evaluate the data appropriately while remaining aware of their own assumptions, reach some conclusion, and effectively communicate all of this to an audience.

That’s the ideal. The reality? Students often don’t know how to find primary data, or which methods are best suited for analyzing it. They may not even know what methods are. They assume either that there is one right answer or that all possible answers are equally valid, because they don’t understand that some answers can be more strongly supported by data than others, and that even better answers might await discovery in the future.

And so we default to assignments that direct students to preferred secondary or tertiary sources (a “text”), tell them to organize their explanations as competitions between two artificial, diametrically opposed positions, or, sometimes, encourage them to dredge up arguments that arrive at positions they already favor. Students learn to hang evidence on a predetermined conclusion rather than derive a conclusion from the evidence.

This type of deductive exercise has been used by teachers since the age of the agora to build students’ rhetorical skills. Today, unfortunately, it can produce people with a facile ability to argue any position at any time without veering from a worldview that they hold to be sacrosanct.

So what’s the solution? I don’t really have one. Too few of the students I encounter are willing or able to draw reasonable conclusions from evidence they have independently located, so writing exercises that involve inductive reasoning get chucked out the window. It’s frustrating.