A brief post this week about the televised hearings of the U.S. House of Representatives’ January 6 committee.*
I teach democracy from a comparative perspective, a challenge when students have had the ideology of American exceptionalism drilled into them since birth.
Watching the second installment of the hearings, I realized that they could serve as a reality check for students who tend to see “democracy” as a purely American phenomenon and whose culminating undergraduate achievement is a legalistic rehash of a 19th-century Supreme Court opinion on the U.S. constitution’s Establishment Clause.
In my opinion, a much more meaningful exercise would be for students to research forms of democracy and threats to it globally. A class could be divided into teams with each team analyzing a different country in relation to the USA. Testimony from the hearings could be used to identify pivotal events that might or might not parallel what has happened in, for example, Venezuela.**
It just so happens that there are plenty of people who already thought of this kind of project — the folks at Democratic Erosion. Check out their sample syllabus for a semester-long course.
* full name: Select Committee to Investigate the January 6 Attack on the United States Capitol
** with readings such as Javier Corrales, “Authoritarian Survival: Why Maduro Hasn’t Fallen,” and Milan W. Svolik, “Polarization versus Democracy,” which appeared in Journal of Democracy in 2020 and 2019, respectively.
For the upcoming fall semester, I’m making another tweak to the system. Instead of ranking teammates or distributing a set number of points, students will rate each other’s contributions on a three-level scale. And rather than email each team a link to a different Google Form, I have one Google Form for the entire class. I can either email the link to the whole class, or — more likely because it’s easier on my end — I can post the link in the Canvas LMS. Or, as I discussed in my last post, I can embed the Form’s iframe into a Canvas assignment.
Since I’ve set the Form to collect students’ email addresses, I’ll be able to discard the responses of any student who rates a team he or she does not belong to.
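That filtering step can be sketched in a few lines of Python. The roster, field names, and sample responses below are invented for illustration; in practice the responses would come from an export of the Google Form (e.g., the CSV of its linked Sheet):

```python
# Hypothetical roster mapping each student's email to his or her team.
roster = {
    "ana@example.edu": "Team 1",
    "ben@example.edu": "Team 2",
    "carla@example.edu": "Team 2",
}

# Hypothetical Form responses: the rater's email (collected automatically
# by the Form) and the team whose members he or she rated.
responses = [
    {"email": "ana@example.edu", "team_rated": "Team 1"},  # own team
    {"email": "ben@example.edu", "team_rated": "Team 1"},  # wrong team
]

# Keep only responses in which students rated their own team.
valid = [r for r in responses if roster.get(r["email"]) == r["team_rated"]]
```

With one Form for the whole class, this check is what makes the collected email address do the work that separate per-team Forms used to do.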
The evaluation is worth up to 50 points out of 1,000 in the course grading scale; the last item in the Form is simply a method of encouraging students to reflect on how well they and their teammates collaborated (instead of mindlessly entering numbers). As I did last semester, I will set the corresponding assignment in the Canvas gradebook as worth nothing, to avoid complaints about “losing” points because of their peers’ evaluation of their work.
A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:
In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.
I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:
- No more than two pages per part, which would limit the final report to eight pages.
- Each part must include at least one data visualization — a chart or graph.
Looking at student performance in the 2020-2021 academic year, I see evidence that team research projects due at the end of the semester can’t be scaffolded solely around individually-graded assignments completed throughout the semester. For example, in my Middle East politics course, each student shared four individually-completed assignments with their teammates for use in their team’s historical timeline. In my research methods course, there were ten individual assignments that teammates were supposed to share with each other as drafts of sections of team research reports. While this approach does decrease free riding and encourage collaboration, it apparently does not ensure high quality research in the final product. Four of the five timelines that teams created in the Middle East course lacked mention of significant events. None of the four teams in the research methods course collected information from coffee farmers, processors, or distributors in Central America, despite my instructions to do so, nor did the final reports resemble the industry exemplars I had provided.
It seems that in students’ minds, my formative assessment of their individual work is totally unconnected to the summative assessment of their collaborative work. I probably need to break the team project into discrete, graded chunks, with each chunk layered on top of some of the individual assignments. Teams can use the feedback they receive on each successive chunk of the project to improve the quality of the final product.
A brief note about end-of-semester teammate evaluations:
I again used Google Forms to distribute a survey for students to evaluate each other’s contributions to team projects, but I changed how I calculated this component of the course grade. Each student had twelve points to distribute across all team members, including themselves. The more valuable a person’s contribution to the team project, the more points that person was supposed to get. People who made equivalent contributions could have been awarded the same number of points, and if a person was judged as having made no contribution at all, he or she could have been given zero points.
When the Google Form closed, I computed an average number of points received for each student. I then divided this mean score by twelve and multiplied it by fifty (the teammate evaluation was worth 50 out of 1,000 points in the course). I used this formula because teams were larger than in previous semesters, and I assumed a few members of each team would do the heavy lifting with the rest doing little or no work. If the resulting number was fifty or higher, a student earned the full fifty points toward his or her course grade. If the result was below ten, the student earned nothing. For any number in between, I rounded to the nearest ten.
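A minimal sketch of this calculation in Python, assuming the point totals have already been pulled out of the Form responses (the function name and the half-up rounding rule are my own choices; the thresholds come from the description above):

```python
def evaluation_points(points_received, budget=12, max_award=50):
    """Convert peer-evaluation points into a course-grade award.

    points_received: the point totals one student got from each rater.
    Each rater distributed `budget` points across the whole team, so a
    single rating ranges from 0 to `budget`.
    """
    mean = sum(points_received) / len(points_received)
    raw = mean / budget * max_award       # scale the 0-12 mean to 0-50
    if raw >= max_award:
        return max_award                  # full credit
    if raw < 10:
        return 0                          # no credit
    return int(raw / 10 + 0.5) * 10       # round to the nearest ten
```

For example, a student who averages 9 of 12 points scores 9/12 × 50 = 37.5, which rounds to 40 of the 50 available points.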
This past semester, I had a total of thirty-seven undergraduate students in two courses. Only thirty completed the evaluation. Four of the thirty completed the survey incorrectly — the scores they distributed across team members did not sum to twelve. I deleted their responses, as I had specified in email and in the Google Form’s directions.
In sum, approximately thirty percent of my students did not perform a simple task that could have benefited their own course grades.
As I speculated at the end of the Fall 2020 semester, I was able to label the teammate evaluation as being worth zero points on Canvas. Maybe that partially explains why no students have (so far) complained about this portion of the course grade.
My previous post explained how students will complete the template that identifies possible causes of either increased democracy or increased authoritarianism in two nation-states from 2000 to 2020. The next step in this project is for students to work in teams to produce qualitative comparative analyses. Here are my instructions for this collaborative assignment:
One last post about successes and failures from the previous semester: last summer a colleague pointed me toward Knight Lab Timeline JS, and, inspired by Matthew Wilson’s work on crowd-sourcing and self-instruction, I decided to include a timeline project in my undergraduate course on the Middle East. Setting up the project was relatively simple:
Students were already divided into teams for breakout discussions, presentations, and note-taking; I used the same teams for the timelines. I chose five Middle Eastern countries that featured prominently in assigned readings — Egypt, Iran, Iraq, Lebanon, and Saudi Arabia — and created corresponding files in Google Sheets using the spreadsheet template provided by Knight Lab. I gave each team access to its Google Sheet.
Students completed five graded individual assignments that were designed to prevent free riders and guarantee that teams were working on the project throughout the semester rather than only in a frenzied rush at the end. Here are the instructions for the assignment and its associated rubric:
Another example of why it’s good to consult with librarians:
For the last several years in my globalization course, I’ve had student teams create and deliver presentations on their commodity chain analyses and ethnographies of consumption. Generally, students build PowerPoint files for these assignments; occasionally someone uses Prezi. Simple rubrics make grading this work very easy. But the end products aren’t going to make recent graduates stand out from the competition when interviewing with prospective employers. It’s also difficult to convey the content of the entire project in a single presentation without showing a mind-numbing number of slides. Enter the storymap . . .
One of our librarians, a specialist in digital scholarship whom I’ll be working with next semester, introduced me to the digital storytelling tool from Esri,* a.k.a. the Environmental Systems Research Institute, which allows a person to create a multi-media presentation with ArcGIS. Rather than describe what this looks like, I’ll show you:
As I mentioned in my last post about changes to my globalization course, my original plan of assigning an ethnography in conjunction with a project for a community partner no longer seemed likely to serve its intended purpose, so I removed it mid-semester. As a replacement, I have assigned students the task of creating infographics, first individually, and then in teams of four. I will turn over the latter products to the community partner as one of the deliverables from the project. Directions for the individual assignment are as follows:
The semester is half over, and it has become apparent that I need to make some on-the-spot changes to my globalization course. The first change is quite minor: students have added or dropped the course, necessitating an edit to my Canvas LMS survey for the Project Contribution Award. As I mentioned previously, the mechanics of this procedure would be extremely time-consuming with a large class.
The second change is much more . . . extensive. As part of a foundation grant, the class is formally partnered with a local non-profit organization, Aquidneck Community Table (ACT). Students are collecting and analyzing data on the food consumption patterns of local residents by means of face-to-face interviews and supermarket receipts. Course assignments related to this project include a food ethnography and a single class-wide report for ACT.