My Latest Iteration of Teammate Evaluations

A brief note about end-of-semester teammate evaluations:

I again used Google Forms to distribute a survey for students to evaluate each other’s contributions to team projects, but I changed how I calculated this component of the course grade. Each student had twelve points to distribute across all team members, including themselves. The more valuable a person’s contribution to the team project, the more points that person was supposed to get. People who made equivalent contributions could have been awarded the same number of points, and if a person was judged as having made no contribution at all, he or she could have been given zero points.

When the Google Form closed, I computed an average number of points received for each student. I then divided this mean score by twelve and multiplied it by fifty (the teammate evaluation was worth 50 out of 1,000 points in the course). I used this formula because teams were larger than in previous semesters, and I assumed a few members of each team would do the heavy lifting with the rest doing little or no work. If the resulting number was fifty or higher, a student earned the full fifty points toward his or her course grade. If the result was below ten, the student earned nothing. For any number in between, I rounded to the nearest ten.
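The calculation described above can be sketched in a few lines of code. This is my own illustration of the procedure, not the author's actual workflow (the grading was done by hand from Google Forms data); the function name and parameters are invented, and since the post doesn't say how a tie like 25 was rounded, I've assumed conventional half-up rounding.

```python
def teammate_score(mean_points, budget=12, max_grade=50, floor=10):
    """Convert a student's mean peer-evaluation score into course points.

    Each rater distributes `budget` points across the team, so
    `mean_points` is the average a student received from all raters.
    """
    raw = mean_points / budget * max_grade
    if raw >= max_grade:   # a result of fifty or higher earns full credit
        return max_grade
    if raw < floor:        # a result below ten earns nothing
        return 0
    # round anything in between to the nearest ten
    # (assumed half-up rounding; the post doesn't specify how ties fall)
    return int(raw / 10 + 0.5) * 10
```

For example, a mean of 9 points yields 9/12 × 50 = 37.5, which rounds to 40 course points.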

This past semester, I had a total of thirty-seven undergraduate students in two courses. Only thirty completed the evaluation. Four of the thirty completed the survey incorrectly — the scores they distributed across team members did not sum to twelve. I deleted their responses, as I had specified in email and in the Google Form’s directions.
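The deletion rule described above — keep a response only if its scores sum to twelve — could be automated along these lines. The function name and the data shape are hypothetical; the author processed the Google Forms responses manually.

```python
def usable_responses(responses, budget=12):
    """Drop survey responses whose distributed points don't sum to the budget.

    `responses` maps each rater's name to a dict of {teammate: points}.
    """
    return {
        rater: scores
        for rater, scores in responses.items()
        if sum(scores.values()) == budget
    }

# Hypothetical example: Bob's points sum to 15, so his response is deleted.
survey = {
    "Alice": {"Alice": 4, "Bob": 4, "Carol": 4},  # sums to 12: kept
    "Bob": {"Alice": 5, "Bob": 5, "Carol": 5},    # sums to 15: deleted
}
```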

In sum, approximately thirty percent of my students did not perform a simple task that could have benefited their own course grades.

As I speculated at the end of the Fall 2020 semester, I was able to label the teammate evaluation as being worth zero points on Canvas. Maybe that partially explains why no students have (so far) complained about this portion of the course grade.

Identifying How Assumptions Meet Reality

Four weeks until classes end, and I’m noticing some of the same problems in my comparative politics course that I saw a year ago. First, some students are not able to consistently locate a journal article’s main thesis, even though I simplified the assignment’s format, students discuss their work among themselves when creating presentations about the articles, and I review the organization of each article after the presentations. Second, students aren’t sharing notes about assigned articles despite my adaptation of Helen Brown Coverdale’s study huddle system. Since collaborative notetaking with Google Docs didn’t work, I assumed that students would at least share their completed article analyses with their green or red teammates. Nope. While the analyses are graded as individual assignments, the “sharing” aspect is not, so probably students see no reason to do it.

Seven years ago, I wrote about mistakenly assuming that students knew the meaning of “methods” in social science research. A similar problem might be occurring with “thesis.” Although students have probably heard the term since ninth grade English, maybe they still don’t really understand it. Or, even if they do understand, they could be unwilling to make the effort required to identify what and where it is in a text. As a more direct colleague put it, the problem can originate with stupidity, laziness, or a combination of both.

A solution might be to ask students to find where in the body of an article its title has been converted into a cause and effect statement. For example, I recently assigned “Authoritarian Survival: Why Maduro Hasn’t Fallen” by Javier Corrales (Journal of Democracy 31, 3). The thesis is essentially “Maduro hasn’t fallen because . . .”

As for the unwillingness of students to share their ideas about readings via collaborative notetaking, I would not be surprised if this stems from being taught since early childhood that reading is an isolated rather than a social activity. That is, the ideal reading environment involves a room of one’s own, a blanket, a cup of tea, and possibly a cat, to ponder silently the meaning of what one has just read. This technique works fine for people like ourselves, because academia self-selects for the highly literate. But the average undergraduate student probably doesn’t really know how to think about what they’re reading while they’re reading it. According to colleagues who know much more about this subject than I do, if reading is instead a public activity, the metacognition that occurs in the truly literate becomes visible and transferable to others. Social interaction facilitates a better understanding of the text.

Luckily we live in an era of digital tools that allow a reader to easily interact with a text and with other readers. One of these tools is Perusall, which a couple of English professors on my campus have been raving about. I have asked our IT support unit to link Perusall to my Canvas account so that I can start experimenting with it, hopefully before the semester ends. If that happens, I’ll report my observations here.

Forward, Not Back

A brief response* to Simon’s last post about not slipping back into old habits:

The Chronicle of Higher Education recently ran this article (paywalled) about how the pandemic might have permanently altered campuses. People interviewed for the article believed that the use of a hybrid delivery format that includes online, asynchronous components will persist, because even full-time, traditionally aged students like the convenience and flexibility.

Another reason to continue the practice: hybrid design enables students to get out of a passive environment and into more active experiences. For example, last fall my previously 100% face-to-face course on economic development went on Zoom. I do not teach by lecturing in fifty-minute increments three times per week; students in my classes experience a lot of interaction with peers. However, many of these activities can’t be easily replicated in an online environment given the usual tools I have at my disposal. So, as a substitute, I created a series of assignments in which students documented evidence of economic inequality in the local community with photos and presented their findings in online asynchronous discussions. The assignments met my intended objectives, and the students were really engaged, so I’m going to use them again in Fall 2021, when (most likely) the course will once again be delivered face-to-face on campus. But the assignments can’t be completed effectively in fifty-minute time blocks. My proposed solution? Just cut the students loose — not hold class on certain days. My hope is that the institutional hype about maximizing student learning matches reality, and my plan isn’t quashed by higher-ups.

If you’re interested in modifying any of your courses in a similar fashion, The Aga Khan University has produced an excellent step-by-step guide on the development, design, and implementation of online courses. The guide also applies to hybrid courses.

*The title of this post is also a very obscure reference to the phrase uttered by Muammar Qaddafi during the now decade-old Arab Spring — إلى الأمام (“forward”) — immortalized in Zenga Zenga.

Changing a Comparative Politics Course, Part 4

My previous post explained how students will complete the template that identifies possible causes of either increased democracy or increased authoritarianism in two nation-states from 2000 to 2020. The next step in this project is for students to work in teams to produce qualitative comparative analyses. Here are my instructions for this collaborative assignment:


Changing a Comparative Politics Course, Part 3

As promised in my last post, here is Comparison 3, an assignment in which each student chooses independent variables and operationalizes them for the two cases chosen in Comparison 1. The instructions for the assignment:

Using the template that you submitted for Comparison 2, replace “Variable 1” and “Variable 2” with two independent variables that you think have a causal relationship with the dependent variable. Replace “Indicator 1” and “Indicator 2” with indicators of each of the independent variables you have selected — indicators are the means by which you will measure changes in your variables. Enter the data for the indicators for 2000 and ~2020 as shown in the template. Enter + or – to show whether these measures increased or decreased.

Data on many indicators for independent variables can be found at:


Changing a Comparative Politics Course, Part 2

In Part 1 of this series, I discussed changing my approach to teaching students how to analyze the arguments contained in journal articles. I also think it is important for students to actually do some discipline-related research rather than just read about it. Previously in this course, my students compared two nation-states using either a most similar systems or most different systems design. That assignment never worked very well because of student confusion about the basic nature of cause and effect. I’ve decided to replace this with a scaffolded process culminating in a team-produced qualitative comparative analysis.

There are three individual assignments that I’m calling Comparison 1, 2, and 3. For Comparison 1, each student chooses two nation-states from a list. That’s it. The list comes from Freedom House’s rankings of citizen freedom in countries around the world; I selected a subset of states for which scores differed between 2000 and 2019 — so that students choose cases where the dependent variable varies over time.

For Comparison 2, students calculate a value for the dependent variable. Here are the instructions for the assignment:


Changing a Comparative Politics Course

Looking back at Spring 2020, and making changes accordingly for 2021, despite that semester’s pandemic-induced weirdness:

I decided to use Helen Brown Coverdale’s study huddle technique, in the hope that it will allow students to become more proficient in decoding academic literature. I am dividing the class into teams of 4-5 students each. Half of each team will be “green” and half will be “red.” Each week, students are responsible for analyzing a journal article of the corresponding color. I chose to use green and red font in the syllabus instead of red/blue because my hyperlinks are blue, and I did not want students to be confused. In addition to the font color, I have included the words “green” and “red” for the benefit of students with colorblindness.

For the analysis assignments, students will be completing this template, which I believe is simpler than the worksheet I used last spring. I also expect it to be easier for me to grade, given my rubric, shown below:


Increasing Buyer Bewareness

As an example of one of my points in my last post — making the connection between assignments and course learning objectives explicit to students — I have created an ungraded, anonymous survey for the first day of class. One of the survey items is “I want to learn how to . . . ” Students can choose from the following responses:

(1) decode scholarly literature about comparative politics through frequent practice.

(2) improve my argumentative writing skills through frequent practice.

(3) improve my reading comprehension skills through frequent practice.

(4) None of the above.

Once students complete the survey, I will explain to them how the question corresponds to course assignments. On Mondays, students will submit journal article analyses (options 1 and 3). Wednesdays will be dedicated to reading responses (options 2 and 3). This will be the weekly routine across most of the semester. If anyone in the class chooses option 4, I will encourage those people to drop the course.

I will post details about my Monday and Wednesday assignments in the coming weeks.

The Busywork Dilemma

The pandemic-induced shift to online instruction has caused a rise in student complaints about “busywork” — the perception that assignments are meaningless tasks unrelated to their learning, rather than activities deliberately designed by instructors to promote understanding, engagement, and accountability. Julaine Fowlin, Assistant Director for Instructional Design at Vanderbilt University’s Center for Teaching, has written about how to prevent these misperceptions, and I encourage you to read her piece. A summary of her advice:

First, make the connection between assignments and course learning objectives explicit to students. This may mean creating course content that explains the purpose of an assignment.

Second, design assignments to reflect real-life applications that are typically encountered in a professional context; for example, authentic writing.

Third, help students stay engaged and motivated. This can be achieved in a variety of ways: by situating assignments in different sensory environments (e.g., debates, games, role play), by providing formative feedback quickly enough for them to adjust and feel a sense of accomplishment, and with assignments that remind students of their learning goals so that they reflect on what they are doing to achieve them.

Last, be true to yourself, by using assignments that communicate your own enthusiasm and passion for the subject matter.

Fall 2020: Looking Backward and Forward, Part 3

One last post about successes and failures from the previous semester: last summer a colleague pointed me toward Knight Lab Timeline JS, and, inspired by Matthew Wilson’s work on crowd-sourcing and self-instruction, I decided to include a timeline project in my undergraduate course on the Middle East. Setting up the project was relatively simple:

Students were already divided into teams for breakout discussions, presentations, and note-taking; I used the same teams for the timelines. I chose five Middle Eastern countries that featured prominently in assigned readings — Egypt, Iran, Iraq, Lebanon, and Saudi Arabia — and created corresponding files in Google Sheets using the spreadsheet template provided by Knight Lab. I gave each team access to its Google Sheet.

Students completed five graded individual assignments that were designed to prevent free riders and guarantee that teams were working on the project throughout the semester rather than only in a frenzied rush at the end. Here are the instructions for the assignment and its associated rubric:
