Changing a Comparative Politics Course

Looking back at Spring 2020, despite that semester’s pandemic-induced weirdness, and making changes accordingly for 2021:

I decided to use Helen Brown Coverdale’s study huddle technique, in the hope that it will help students become more proficient at decoding academic literature. I am dividing the class into teams of 4-5 students each. Half of each team will be “green” and half will be “red.” Each week, students are responsible for analyzing a journal article of the corresponding color. I chose green and red font in the syllabus instead of red/blue because my hyperlinks are blue, and I did not want students to be confused. In addition to the font color, I have included the words “green” and “red” for the benefit of any students with color blindness.

For the analysis assignments, students will complete this template, which I believe is simpler than the worksheet I used last spring. I also expect it to be easier for me to grade, given my rubric.


Incentivizing Engagement

A short note about a column in Inside Higher Ed on student engagement in flipped courses. Chandralekha Singh, a physicist and director of the Science Education Research Center at the University of Pittsburgh, reports that she and her colleagues interviewed thirty-seven science majors about remote instruction. These students said that they simply did not do the ungraded in-class and out-of-class components of flipped courses. For example, they did not watch videos or complete self-assessment exercises on a weekly basis; instead, these tasks were completed, if at all, right before exams. Synchronous class meetings, in which the students were expected to discuss this work, became useless.

This parallels my own experience: if it’s not graded, students won’t do it. Anything that doesn’t explicitly affect the final course grade is perceived as useless busywork.

Increasing Buyer Bewareness

As an example of one of my points in my last post — making the connection between assignments and course learning objectives explicit to students — I have created an ungraded, anonymous survey for the first day of class. One of the survey items is “I want to learn how to . . . ” Students can choose from the following responses:

(1) decode scholarly literature about comparative politics through frequent practice.

(2) improve my argumentative writing skills through frequent practice.

(3) improve my reading comprehension skills through frequent practice.

(4) None of the above.

Once students complete the survey, I will explain how the question corresponds to course assignments. On Mondays, students will submit journal article analyses (options 1 and 3). Wednesdays will be dedicated to reading responses (options 2 and 3). This will be the weekly routine for most of the semester. If anyone in the class chooses option 4, I will encourage that student to drop the course.

I will post details about my Monday and Wednesday assignments in the coming weeks.

The Busywork Dilemma

The pandemic-induced shift to online instruction has caused a rise in student complaints about “busywork” — the perception that assignments are meaningless tasks unrelated to their learning, rather than activities deliberately designed by instructors to promote understanding, engagement, and accountability. Julaine Fowlin, Assistant Director for Instructional Design at Vanderbilt University’s Center for Teaching, has written about how to prevent these misperceptions; I encourage you to read her advice in full. A summary:

First, make the connection between assignments and course learning objectives explicit to students. This may mean creating course content that explains the purpose of an assignment.

Second, design assignments that reflect real-life applications typically encountered in a professional context, such as authentic writing tasks.

Third, help students stay engaged and motivated. This can be achieved in a variety of ways: by situating assignments in different sensory environments (e.g., debates, games, role play), by providing formative feedback quickly enough for students to adjust and feel a sense of accomplishment, and by designing assignments that remind students of their learning goals so that they reflect on what they are doing to achieve them.

Last, be true to yourself, by using assignments that communicate your own enthusiasm and passion for the subject matter.

Explaining the Exploding Clown Car

I had something else scheduled to appear today, but since readers might be making last minute changes to their syllabi to reflect recent events in the USA, here are some potentially useful resources for undergraduate instruction:

Zara Abrams, “What do we know about conspiracy theories?” American Psychological Association, 18 November 2020.

Reza Aslan, “Is the Trump presidency a religious cult?” Big Think, 15 April 2018.

Benedict Carey, “A Theory About Conspiracy Theories,” The New York Times, 28 September 2020.

Nicky Case, “To Build a Better Ballot: An Interactive Guide to Alternative Voting Systems,” December 2016.

Tyler Cowen, “The Theory of the Median Voter,” Marginal Revolution University.

Anna Kusmer and Carol Hills, “‘Even if a coup fails, it still damages your government’: What the US can learn from Sri Lanka’s coup attempt,” The World, 8 January 2021.

Richard Moulding et al., “Better the devil you know than a world you don’t? Intolerance of uncertainty and worldview explanations for belief in conspiracy theories,” Personality and Individual Differences 98 (2016): 345-354.

Radiolab, “Tweak the Vote,” 5 November 2018.

James Purtill, “This model forecast the US’s current unrest a decade ago. It now says ‘civil war’,” ABC Triple J Hack, 17 June 2020.

Steve Saideman, “Why Do We Care About Ethnic Outbidding?” Saideman’s Semi-Spew, 9 December 2015.

Milan W. Svolik, “Polarization versus Democracy,” Journal of Democracy 30, 3 (July 2019): 20-32.

Amanda Taub, “The Rise of American Authoritarianism,” Vox.com, 1 March 2016.

Brian Winter, “System Failure: Behind the Rise of Jair Bolsonaro,” Americas Quarterly, 24 January 2018.

Fall 2020: Looking Backward and Forward, Part 3

One last post about successes and failures from the previous semester: last summer a colleague pointed me toward Knight Lab Timeline JS, and, inspired by Matthew Wilson’s work on crowd-sourcing and self-instruction, I decided to include a timeline project in my undergraduate course on the Middle East. Setting up the project was relatively simple:

Students were already divided into teams for breakout discussions, presentations, and note-taking; I used the same teams for the timelines. I chose five Middle Eastern countries that featured prominently in assigned readings — Egypt, Iran, Iraq, Lebanon, and Saudi Arabia — and created corresponding files in Google Sheets using the spreadsheet template provided by Knight Lab. I gave each team access to its Google Sheet.
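For anyone replicating the setup, each published sheet becomes a viewable timeline through Knight Lab’s embed link, whose source parameter takes the sheet’s ID. A rough sketch of generating those links, with placeholder sheet IDs rather than the actual course files:

```python
# Rough sketch: build the TimelineJS embed link for each team's published
# Google Sheet. The sheet IDs are placeholders, not the actual course files,
# and each sheet must use the Knight Lab template and be published to the web.
from urllib.parse import urlencode

EMBED_BASE = "https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html"

team_sheets = {
    "Egypt": "SHEET_ID_EGYPT",
    "Iran": "SHEET_ID_IRAN",
    "Iraq": "SHEET_ID_IRAQ",
    "Lebanon": "SHEET_ID_LEBANON",
    "Saudi Arabia": "SHEET_ID_SAUDI_ARABIA",
}

def embed_url(sheet_id: str, height: int = 650) -> str:
    """Return a TimelineJS embed URL for a published Google Sheet."""
    params = {"source": sheet_id, "font": "Default", "lang": "en", "height": height}
    return f"{EMBED_BASE}?{urlencode(params)}"

for country, sheet_id in team_sheets.items():
    print(country, embed_url(sheet_id))
```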

Students completed five graded individual assignments, each with instructions and an associated rubric, designed to prevent free riding and guarantee that teams worked on the project throughout the semester rather than only in a frenzied rush at the end.


Fall 2020: Looking Backward and Forward, Part 2

To continue evaluating my successes and failures from last semester: the attempt to create community in synchronous online undergraduate courses by dividing students into teams for breakout discussions, note-taking, and memo-writing.*

Zoom breakout discussions for reading responses worked fairly well. Before the semester started, I created a Google Slides file for each team to use for building presentations, and I randomly selected one team to present its conclusions once the Zoom breakout rooms closed. I screen-shared the presentation from my computer, since I had access to all the files. Students who did not participate in breakout discussions or in creating presentations were held accountable by their teammates in the evaluations completed at the end of the semester. The one aspect of breakout discussions that needs to change for next semester is also true for synchronous classes in general: students need to turn on their webcams. Video of faces is much better at facilitating community than black boxes.

Teams were allowed only one slide per presentation, but often the slides were badly designed — too much text, font too small, etc. In the future, I should require that students follow a specific format.

The Google Slides files ended up being a written record of each team’s breakout room discussions; however, I don’t know if students used them as notes. Students definitely didn’t collaboratively write notes in the Google Docs files I had created. Teams either left these files blank or just pasted screen captures from my PowerPoint presentations into them. Yet another example of students’ lack of note-taking skills.

The memo exercises were also a failure. In an individual graded assignment, students were supposed to make a recommendation in response to a prompt, and provide two different reasons in support of that recommendation. In teams, they were supposed to write a draft of a complete memo, guided by a template I had provided. I then chose one team’s memo at random to discuss as an example with the whole class. There were five iterations of this process. In the individual assignments, students sometimes submitted one reason, just stated in two different ways, in support of their recommendation. The drafts of complete memos produced by teams were usually disorganized and unpersuasive, and the quality of the writing did not improve with successive iterations. Most undergraduates simply lack the writing skills necessary for collaborating effectively on a task like this. Students should instead each write a single memo over the entire semester, in a step-by-step process requiring multiple revisions.

*Additional posts that were part of this series are here and here.

Fall 2020: Looking Backward and Forward

Time once again to evaluate my teaching successes and failures. As usual, I will focus on the failures, but to start, a happy accident: discovering the settings options in Google Forms. I still use Google Forms for students’ teammate evaluations. I gave students these instructions for this past semester’s evaluations:

You have 10 points to distribute across members of your team according to each person’s contribution to team projects. These projects include course notes on Google Docs, reading response breakout discussions, and presentations. For example: If one person did all the work, award that person 10 points and the other members of the team 0 points. If one person did 70 percent of the work, a second person did 30 percent of the work, and the rest of the team did 0 percent of the work, award the first person 7 points, the second person 3 points, and everyone else 0 points. Total points awarded across all members of your team must equal 10 or your response will be discarded. I will use people’s responses to calculate an average ranking for each member of your team. This ranking determines the teammate evaluation portion of your course grade [as many as 50 points out of more than 1,000, or approximately five percent].

So, in my mind, this is a very low-stakes assessment, without forced ranking. The five members of one team, in fact, noticed that they would all earn the full 50 points if they gave each other scores of 2. To me it was yet one more sign of their ability to collaborate productively.
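For those curious about the arithmetic, here is a rough sketch of one way to turn the responses into course points. It assumes (my instructions above don’t spell this out) that an average award equal to the fair share of 10 divided by team size earns the full 50 points, with higher averages capped at 50 and lower averages scaled proportionally:

```python
# Rough sketch of the teammate evaluation arithmetic. Assumption (not stated
# in the instructions above): an average award equal to the fair share of
# 10 / team size earns the full 50 points; higher averages are capped at 50,
# lower averages scale proportionally.
from statistics import mean

MAX_POINTS = 50  # teammate evaluation portion of the course grade

def evaluation_points(awards: list[int], team_size: int) -> float:
    """Convert the points a student received on teammates' forms into course points."""
    fair_share = 10 / team_size
    return min(mean(awards) / fair_share, 1.0) * MAX_POINTS

# The five-person team in which every member awarded everyone 2 points:
print(evaluation_points([2, 2, 2, 2, 2], team_size=5))  # 50.0

# A student awarded 2, 1, 1, 2, and 0 points on the five submitted forms:
print(evaluation_points([2, 1, 1, 2, 0], team_size=5))  # 30.0
```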

But as usual, some other students submitted a completed Google Form multiple times or ranked their team members with, for example, scores of 10, 9, 9, 9, and 9. However, while designing the Google Form for another class, I serendipitously clicked on the settings (gear) icon and was greeted by a dialog box of options.

In that dialog, users can capture respondents’ email addresses and limit people to one response. Checking these boxes makes it easy to prevent and decipher students’ errors. One can even convert a Google Form into an auto-graded quiz from the Quizzes tab. Maybe this is quite familiar to you, but it was new to me. I’ll be making use of these settings options from this point forward.

Eight students (out of fifty-four) emailed me complaints about the teammate evaluation portion of the grade once I had entered the information in the Canvas LMS. They perceived earning 20 or 30 points out of 50 as “failing,” even though I explicitly inform students in multiple ways that the final course grade is based solely on total points accumulated over the semester, not on the results of individual assessment instruments. I think students’ mistaking perception for reality is in part due to my listing the teammate evaluation as a 50-point assignment on Canvas. When students don’t earn the maximum possible points shown, they react as if I’m at fault for their performance. Next semester, I will see if Canvas allows me to label this item as worth 0 points, to make it look like the teammate rankings are “bonus” points.

Collaborative Quiz Experiment

Last week I gave a surprise collaborative quiz to one class, as a test run for possibly using this exercise in my synchronous online courses next semester. The quiz consisted of five multiple-choice questions on basic concepts, deployed in three iterations. First, students took the quiz individually on Canvas, which auto-graded students’ answers but did not reveal which were correct. The individual quiz was worth up to half a percent toward the course grade.

Second, I sent students into team breakout rooms to confer and decide which answers to submit as a group. This version of the quiz was also worth up to half a percent of the course grade. I pasted the quiz into each team’s notes on Google Docs. Because the Canvas quiz tool does not have a “groups” setting, I had already created a Canvas assignment through which each team could submit its answers. Again, students did not know which answers were correct — after class I had to read what the teams had submitted and manually enter a quiz score for every student who had been present for the breakout room discussions.

Third, after breakout rooms closed, students answered the quiz’s questions yet again in the form of a Zoom poll. After closing the poll and sharing the results, I explained which answers were correct and offered to answer any questions.

Twenty-nine undergraduates are in the course. Three were completely “absent” — they never signed into Zoom during class that day. A fourth student logged out before I announced the group version of the quiz. For the remaining twenty-five students: twelve, or nearly fifty percent, scored higher on the collaborative quiz than on the individual quiz. Great! Three students, all members of the same team, scored lower on the former than on the latter. Ten students’ scores were unchanged.

Finally, the poll, which did not contribute to the course grade: One student left class by disconnecting from Zoom when breakout rooms closed. Of the remaining twenty-four students, nine got the same number of questions correct on the poll and the individual quiz. Ok. Three students did better on the former than they did on the latter. Good. Twelve scored worse on the poll. Terrible! I have no idea why this happened, given the improvement in scores on the collaborative quiz.
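For anyone who wants to run the same tally, here is a rough sketch of the comparison; the score dictionaries below are hypothetical placeholders, not the actual class data:

```python
# Rough sketch of the tally: compare each student's individual quiz,
# collaborative quiz, and poll scores. These dictionaries are hypothetical
# placeholders, not the actual class data.
def compare(before: dict[str, int], after: dict[str, int]) -> dict[str, int]:
    """Count how many students scored higher, the same, or lower."""
    tally = {"higher": 0, "same": 0, "lower": 0}
    for student in before.keys() & after.keys():  # only students present for both
        if after[student] > before[student]:
            tally["higher"] += 1
        elif after[student] == before[student]:
            tally["same"] += 1
        else:
            tally["lower"] += 1
    return tally

individual = {"s1": 3, "s2": 4, "s3": 2}  # scores out of 5 (placeholders)
group = {"s1": 5, "s2": 4, "s3": 3}
poll = {"s1": 3, "s2": 2, "s3": 3}

print(compare(individual, group))  # individual vs. collaborative quiz
print(compare(individual, poll))   # individual vs. poll
```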

Study Buddies and Study Huddles

Today we have a guest post from Helen Brown Coverdale, a lecturer in political theory at University College London. She can be contacted at h [dot] coverdale [at] ucl [dot] ac [dot] uk.

The biggest challenge of moving my contemporary political philosophy courses online for the pandemic has been peer learning. How do I create opportunities for students to interact, especially when they are in different time zones?

For the last three years, I have paired postgraduate students with complementary skill sets at the start of the course. As part of this buddy system, I divide required readings into three categories: everyone, Team Red, and Team Blue. Within each pair, one student is part of Team Red and the other is part of Team Blue. I let the students in each pair decide who joins which team. Students exchange notes on readings with their study buddies. The practice is intended to be supportive, not onerous. I tell students that the notes they share need to identify a reading’s key thesis, the arguments it makes, and a strength or weakness, all in no more than half a page.

Because Red and Blue texts differ, study buddies are exposed to different perspectives on the same topic, but the volume of texts becomes more manageable—allowing students to engage in more close reading and less skimming. While students may lose some breadth, they gain a deeper understanding of what they have read by teaching their peers about it, and I have found that generally they do better academically.

However, there are always a few students for whom the study buddy arrangement is not as effective. To address this, and the inability to serendipitously form study groups during the pandemic, I have paired up the pairs. Two pairs of study buddies equal one study huddle.

I use huddles for breakout discussions, peer marking exercises, and engaging with asynchronous lecture content. Red and blue teams are perfect for in-class debates.

The feedback I have received from students about this system has been very positive: they feel supported. The huddles give them a more resilient method of getting through a course or module. With two members of each team in each huddle, there is always someone to discuss readings with or get notes from if one person gets sick. It’s also harder to be the one apathetic person in a group of four, especially since students learn about the wrongs of free riding early in the term.