Changing a Comparative Politics Course, Part 4

My previous post explained how students will complete the template that identifies possible causes of either increased democracy or increased authoritarianism in two nation-states from 2000 to 2020. The next step in this project is for students to work in teams to produce qualitative comparative analyses. Here are my instructions for this collaborative assignment:

Continue reading “Changing a Comparative Politics Course, Part 4”

Changing a Comparative Politics Course, Part 3

As promised in my last post, here is Comparison 3, an assignment in which each student chooses independent variables and operationalizes them for the two cases chosen in Comparison 1. The instructions for the assignment:

Using the template that you submitted for Comparison 2, replace “Variable 1” and “Variable 2” with two independent variables that you think have a causal relationship with the dependent variable. Replace “Indicator 1” and “Indicator 2” with indicators of each of the independent variables you have selected — indicators are the means by which you will measure changes in your variables. Enter the data for the indicators for 2000 and ~2020 as shown in the template. Enter + or – to show whether each measure increased or decreased.
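For instance (a hypothetical illustration on my part, not an excerpt from any student’s work): a student might replace “Variable 1” with GDP per capita, use the World Bank’s GDP per capita series as Indicator 1, record each country’s value for 2000 and ~2020, and then enter + or – for the direction of change in each case.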

Data on many indicators for independent variables can be found at:

Continue reading “Changing a Comparative Politics Course, Part 3”

Changing a Comparative Politics Course, Part 2

In Part 1 of this series, I discussed changing my approach to teaching students how to analyze the arguments contained in journal articles. I also think it is important for students to actually do some discipline-related research rather than just read about it. Previously in this course, my students compared two nation-states using either a most similar systems or most different systems design. That assignment never worked very well because of student confusion about the basic nature of cause and effect. I’ve decided to replace this with a scaffolded process culminating in a team-produced qualitative comparative analysis.

There are three individual assignments that I’m calling Comparison 1, 2, and 3. For Comparison 1, each student chooses two nation-states from a list. That’s it. The list comes from Freedom House’s rankings of citizen freedom in countries around the world; I selected a subset of states for which scores differed between 2000 and 2019 — so that students choose cases where the dependent variable varies over time.
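In code-like terms, the case selection amounts to a simple filter. Here is a minimal Python sketch; the country names and scores are invented placeholders, not actual Freedom House data:

```python
# Minimal sketch of the case-selection filter: keep only states whose
# Freedom House scores changed between 2000 and 2019, so the dependent
# variable varies over time. Names and scores below are placeholders.
freedom_scores = {
    "Country A": {"2000": 55, "2019": 30},
    "Country B": {"2000": 42, "2019": 42},  # no change, so excluded
    "Country C": {"2000": 28, "2019": 61},
}

eligible_cases = [
    country
    for country, s in freedom_scores.items()
    if s["2000"] != s["2019"]
]

print(eligible_cases)  # ['Country A', 'Country C']
```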

For Comparison 2, students calculate a value for the dependent variable. Here are the instructions for the assignment:

Continue reading “Changing a Comparative Politics Course, Part 2”

Changing a Comparative Politics Course

Looking back at Spring 2020, despite that semester’s pandemic-induced weirdness, and making changes accordingly for 2021:

I decided to use Helen Brown Coverdale’s study huddle technique, in the hopes that it will allow students to become more proficient at decoding academic literature. I am dividing the class into teams of 4-5 students each. Half of each team will be “green” and half will be “red.” Each week, students are responsible for analyzing a journal article of the corresponding color. I chose to use green and red font in the syllabus instead of red/blue because my hyperlinks are blue, and I did not want students to be confused. In addition to the font color, I have included the words “green” and “red” for the benefit of any students with color blindness.

For the analysis assignments, students will be completing this template, which I believe is simpler than the worksheet I used last spring. I also expect it to be easier for me to grade, given my rubric, shown below:

Continue reading “Changing a Comparative Politics Course”

Increasing Buyer Bewareness

As an example of one of my points in my last post — making the connection between assignments and course learning objectives explicit to students — I have created an ungraded, anonymous survey for the first day of class. One of the survey items is “I want to learn how to . . . ” Students can choose from the following responses:

(1) decode scholarly literature about comparative politics through frequent practice.

(2) improve my argumentative writing skills through frequent practice.

(3) improve my reading comprehension skills through frequent practice.

(4) None of the above.

Once students complete the survey, I will explain to them how the question corresponds to course assignments. On Mondays, students will submit journal article analyses (options 1 and 3). Wednesdays will be dedicated to reading responses (options 2 and 3). This will be the weekly routine across most of the semester. I will encourage anyone who chooses option 4 to drop the course.

I will post details about my Monday and Wednesday assignments in the coming weeks.

The Busywork Dilemma

The pandemic-induced shift to online instruction has caused a rise in student complaints about “busywork” — the perception that assignments are meaningless tasks unrelated to their learning, rather than activities deliberately designed by instructors to promote understanding, engagement, and accountability. Julaine Fowlin, Assistant Director for Instructional Design at Vanderbilt University’s Center for Teaching, has written about how to prevent these misperceptions; I encourage you to read her piece in full. A summary of her advice:

First, make the connection between assignments and course learning objectives explicit to students. This may mean creating course content that explains the purpose of an assignment.

Second, assignments should reflect real-life applications of the kind students will encounter in professional contexts; authentic writing tasks are one example.

Third, help students stay engaged and motivated. This can be achieved in a variety of ways: by situating assignments in different sensory environments (e.g., debates, games, role play), by providing formative feedback quickly enough for them to adjust and feel a sense of accomplishment, and with assignments that remind students of their learning goals so that they reflect on what they are doing to achieve them.

Last, be true to yourself, by using assignments that communicate your own enthusiasm and passion for the subject matter.

Fall 2020: Looking Backward and Forward, Part 3

One last post about successes and failures from the previous semester: last summer a colleague pointed me toward Knight Lab Timeline JS, and, inspired by Matthew Wilson’s work on crowd-sourcing and self-instruction, I decided to include a timeline project in my undergraduate course on the Middle East. Setting up the project was relatively simple:

Students were already divided into teams for breakout discussions, presentations, and note-taking; I used the same teams for the timelines. I chose five Middle Eastern countries that featured prominently in assigned readings — Egypt, Iran, Iraq, Lebanon, and Saudi Arabia — and created corresponding files in Google Sheets using the spreadsheet template provided by Knight Lab. I gave each team access to its Google Sheet.
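For readers unfamiliar with the tool: each row of the Knight Lab spreadsheet becomes one event on the published timeline, with columns for the date (year, and optionally month and day), a headline, descriptive text, optional media links, and a group label. A hypothetical row for the Egypt timeline might be: Year 2011, Month 1, Day 25, Headline “Protests erupt in Cairo,” Text describing the demonstrations against the Mubarak government, Group “Egypt.”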

Students completed five graded individual assignments, designed to prevent free riding and to guarantee that teams worked on the project throughout the semester rather than in a frenzied rush at the end. Here are the instructions for the assignments and the associated rubric:

Continue reading “Fall 2020: Looking Backward and Forward, Part 3”

Fall 2020: Looking Backward and Forward, Part 2

To continue evaluating my successes and failures from last semester: the attempt to create community in synchronous online undergraduate courses by dividing students into teams for breakout discussions, note-taking, and memo-writing.*

Zoom breakout discussions for reading responses worked fairly well. Before the semester started, I created a Google Slides file for each team to use for building presentations, and I randomly selected one team to present its conclusions once the Zoom breakout rooms closed. I screen-shared the presentation from my computer, since I had access to all the files. Students who did not participate in breakout discussions or in creating presentations were held accountable by their teammates in the evaluations completed at the end of the semester. One aspect of breakout discussions needs to change for next semester, and it applies to synchronous classes in general: students need to turn on their webcams. Video of faces is much better at facilitating community than black boxes.

Teams were allowed only one slide per presentation, but often the slides were badly designed — too much text, font too small, etc. In the future, I should require that students follow a specific format.

The Google Slides files ended up serving as a written record of each team’s breakout room discussions; however, I don’t know whether students used them as notes. Students definitely didn’t collaboratively write notes in the Google Docs files I had created. Teams either left these files blank or just pasted screen captures from my PowerPoint presentations into them. Yet another example of students’ weak note-taking skills.

The memo exercises were also a failure. In an individual graded assignment, students were supposed to make a recommendation in response to a prompt, and provide two different reasons in support of that recommendation. In teams, they were supposed to write a draft of a complete memo, guided by a template I had provided. I then chose one team’s memo at random to discuss as an example with the whole class. There were five iterations of this process. In the individual assignments, students sometimes submitted one reason, just stated in two different ways, in support of their recommendation. The drafts of complete memos produced by teams were usually disorganized and unpersuasive, and the quality of the writing did not improve with successive iterations. Most undergraduates simply lack the writing skills necessary for collaborating effectively on a task like this. Students should instead each write a single memo over the entire semester, in a step-by-step process requiring multiple revisions.

*Additional posts that were part of this series are here and here.

Fall 2020: Looking Backward and Forward

Time once again to evaluate my teaching successes and failures. As usual, I will focus on the failures, but to start, a happy accident: discovering the settings options in Google Forms. I still use Google Forms for students’ teammate evaluations. I gave students these instructions for this past semester’s evaluations:

You have 10 points to distribute across members of your team according to each person’s contribution to team projects. These projects include course notes on Google Docs, reading response breakout discussions, and presentations. For example: If one person did all the work, award that person 10 points and the other members of the team 0 points. If one person did 70 percent of the work, a second person did 30 percent of the work, and the rest of the team did 0 percent of the work, award the first person 7 points, the second person 3 points, and everyone else 0 points. Total points awarded across all members of your team must equal 10 or your response will be discarded. I will use people’s responses to calculate an average ranking for each member of your team. This ranking determines the teammate evaluation portion of your course grade [as many as 50 points out of more than 1,000, or approximately five percent].

So, in my mind, this is a very low-stakes assessment, without forced ranking. The five members of one team, in fact, noticed that they would all earn the full 50 points if they gave each other scores of 2. To me, that was yet one more sign of their ability to collaborate productively.
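For the curious, the arithmetic behind the grade is just averaging and rescaling. Here is a rough Python sketch; the rescaling shown (average × 25, capped at 50) is a simplification chosen to match the all-twos example above, not a precise rendering of my gradebook formula:

```python
# Rough sketch: average the points a student received from teammates,
# then rescale to the 50-point maximum. The "average * 25, capped at 50"
# rescaling is an assumption for illustration, consistent with a
# five-person team earning full credit on all-twos responses.
def teammate_grade(points_received: list[int], max_grade: float = 50.0) -> float:
    average = sum(points_received) / len(points_received)
    return min(average * 25, max_grade)

# A student on a five-person team who got 2 points from each teammate:
print(teammate_grade([2, 2, 2, 2]))  # prints 50.0
```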

But as usual, some other students submitted a completed Google Form multiple times or ranked their team members with, for example, scores of 10, 9, 9, 9, and 9 (a total far above the allowed 10). However, when designing the Google Form for another class, I serendipitously clicked on settings (the gear icon) while trying to do something else. I was greeted by this dialog box:

As shown, users can capture respondents’ email addresses and limit people to one response. Checking these boxes makes it easy to prevent such errors and to identify who made them. One can even convert a Google Form into an auto-graded quiz by going to the Quizzes tab. Maybe this is all quite familiar to you, but it was new to me. I’ll be making use of these settings from this point forward.

Eight students (out of fifty-four) emailed me complaints about the teammate evaluation portion of the grade once I had entered the information in the Canvas LMS. They perceived earning 20 or 30 points out of 50 as “failing,” even though I explicitly inform students in multiple ways that the final course grade is based solely on total points accumulated over the semester, not on the results of individual assessment instruments. I think students’ mistaking perception for reality is partly due to my listing the teammate evaluation as a 50-point assignment on Canvas. When students don’t earn the maximum possible points shown, they react as if I’m at fault for their performance. Next semester, I will see whether Canvas allows me to label this item as worth 0 points, to make it look like the teammate rankings are “bonus” points.

When Students Ignore Feedback II

This semester I am teaching fifty-four undergraduate students in two courses. At present, ten of these students have D or F averages. One of them has said that he will withdraw; since it’s not my responsibility to ensure that he actually follows through before the semester ends, I will limit my analysis to the remaining nine students.

Both courses have had six assignments so far, which means fifty-four separate opportunities for the nine students to receive feedback on their writing (nine students times six assignments). But only forty-one of these assignments, or approximately 76%, were submitted.

Students saw my feedback, in the form of brief comments and a marked rubric, on only nine of the forty-one submitted assignments — approximately 22%. How do I know this? The “student viewed” time stamp feature of Canvas, discussed previously and shown below. If there isn’t a “Student Viewed Document” message, the student didn’t look at the assignment after I graded it.

Six of the nine students have not looked at any of my feedback on past assignments, despite earning failing grades. I will now email them to explain the likely relationship between their grades and their unwillingness to take advantage of the feedback I provide. This action on my part assumes that students will read the email, which isn’t guaranteed. But if this doesn’t catch their attention, mid-semester grade reports might.