My impressions so far of the political forecasting project (described here, here, and here) that I created for the undergraduate course on the Middle East that I’m teaching this semester:
The quality of student writing on the different elements of CHAMP has improved over the first three iterations of the assignment. Their analyses are more specific and relevant to what they are trying to predict. I’m hoping that this is a sign of students’ increasing interest in the project. Citation of sources remains problematic for some students. No, a string of URLs is not a bibliography.
The quality of no-stakes classroom presentations has similarly improved. After each forecast, one member of each team presents his or her team’s collective conclusions without using any visual aids. The first two rounds of presentations were terrible. The third round was adequate. I’m now wondering, if I use this project in the future, whether I should grade the presentations against a simple rubric and introduce a rule that a different team member presents each round.
As for the forecasts themselves, here are the average probabilities assigned to each event for the three rounds of predictions students in the class have made so far:
For a fall semester course assignment, I scanned a book chapter and uploaded the resulting pdf to Perusall. I discovered that I could not accurately highlight any portion of the pdf using Perusall’s Annotate Text tool. I could, however, highlight rectangular areas of text using the Annotate Figure tool, shown below with the green underline. Apparently Perusall reads the pdf of the scanned document as an image file. I created a note in the assignment to inform students about which annotation tool they would need to use.
I put Perusall assignments into an online graduate course that I’m currently teaching. For the course’s first two weeks, students’ assignment scores were not syncing with the Canvas gradebook, nor were they visible to students in Perusall, until after the assignment deadline had passed. I had to manually release scores for each assignment. Perusall was not functioning as it had with my undergraduate courses in the spring semester, when assignment scores were always visible to students and were updated continuously in real time.
I eventually found the cause of the problem. I had not selected when to release scores to students in the settings page of the instructor’s dashboard:
Either this setting’s default had changed after the spring semester from “immediately, as students submit work” to one of the other options, or I had forgotten that I needed to change it when I was building the course on Perusall. Either way, the problem was easily solved. To this absent-minded professor, it was another demonstration of how easy Perusall is to use.
As promised, here is the prompt for the collaborative portion of the forecasting project in my upcoming Middle East course. There are two of these team deliverables — a draft report due after students have submitted the first three of their individual CHAMP assignments, and a final version (shown below) due at the end of the semester. In terms of contribution to course grade, the draft and final versions together are worth only a third of what the five individual assignments are worth. Also, a portion of the course grade will come from teammate evaluations.
Your team is interviewing for a job with Eurasia Group. The interview process requires that your team submit a jointly-written report on your team’s Forecasting Project topic using the CHAMP framework:
Your team’s final report should apply all five components of the CHAMP framework in a forecast that is no more than five pages of double-spaced 11- or 12-point font text. Do not use quotations of sources in the report. Reference source material using footnotes. See the list of syllabus readings for proper bibliographic format. Footnotes can be in 10-point font.
Only one member of your team needs to submit the document for this assignment.
Your team’s work will be assessed using the rubric below.
Now I just need to create rubrics for the project’s individual and team assignments . . .
As promised in my last post, here is an example of iterating so that students repeatedly practice the same skills.
As I’ve previously mentioned, I’m putting a forecasting project into my fall semester Middle East course. The project’s constituent assignments will be based on the CHAMP system recommended by people like Phil Tetlock. A brief description of CHAMP is at the end of this Financial Times article by the economist Tim Harford.
My prompt for the first CHAMP assignment reads:
You are interviewing for a job with Eurasia Group. The interview process requires that you submit a forecast on your team’s Forecasting Project topic. The forecast needs to use the CHAMP framework:
In a one-page, double-spaced, 11- or 12-point font document, answer these questions for the Comparisons portion of your forecast:
What other cases are comparable to this situation? How do they indicate what will happen this time?
My guiding questions for the other CHAMP assignments are:
What individuals, groups, and institutions played key roles in similar events in the past? How are these “power players” likely to influence the current situation?
What are the experts predicting about this situation? What is the view that lies in the middle of their assessments?
Are there mathematical models or empirical measures that can be used to gain insight into this situation? What do these models or measures indicate?
How has your thinking been affected by emotion and personal preference? How have you adjusted your analysis to account for these biases?
I’ll talk about the team-based aspects of this project in a future post.
I’ve begun integrating Perusall into my online, asynchronous graduate international relations courses. First up is a course in our master’s degree program that starts next month. I’ve chosen to start with this one because I typically assign an analysis of a peer-reviewed journal article in lieu of a midterm exam, and the questions in my Perusall assignments for undergraduates mirror my instructions for the article analysis. Regular Perusall assignments will give them opportunities to develop skills they will need for the article analysis.
While practice improves performance generally, in this case I see it as particularly important. A growing proportion of our M.A. students are undergrads who have opted for a fifth-year master’s degree. They begin taking graduate courses in their fourth year of college. My four-person department only has about ten political science majors per year, but given the organization of the department’s curriculum, I encounter only about half of these majors in the classroom prior to their graduation. This means a wide variation in content knowledge and writing ability among the majors who enter the five-year program and first pop up in my M.A. courses. Making the situation even more complicated: the two-year M.A. students are often mid-career military officers who have first-hand international experience and are very academically talented.
These courses are seven weeks long. Previously I assigned an extensive list of readings, two writing prompts, and discussion board participation each week. I’ve replaced one of the writing prompts with two Perusall assignments in each week. I’m hoping that this change will help build a sense of community among the students, which is more difficult to achieve in an asynchronous online environment than it is in a physical classroom. At minimum the use of Perusall should cause students to notice the superior skills of some of their classmates and stimulate them to increase their own efforts.
I like each of my undergraduate courses to have at least one authentic skill-based deliverable; for example, historical timelines or ArcGIS storymaps. I’m teaching the Middle East in the fall semester, and, having recently read books like Superforecasting by Tetlock and Noise by Kahneman, Sibony, and Sunstein, I hit upon the idea of having students try to predict events in the region.
I’ll have students assign percentage probabilities to a list of potential events and give them opportunities to periodically adjust these probabilities. At the end of the semester, students will earn points if the events have happened and lose points if they haven’t, with point amounts weighted by the associated percentages. My formula:
((Sum of the percentage probabilities the student chose for each of his or her correct forecasts) – (Sum of the percentage probabilities the student chose for each of his or her incorrect forecasts)) × 50.
A hypothetical example, showing the probability the student assigned to each event and whether the event occurred:
The sun rises in the eastern sky (1.0, happened)
The capital of Turkey changes from Ankara to Istanbul (0.3, did not happen)
Barham Salih remains president of Iraq (0.7, happened)
Iran fires an ICBM with a nuclear warhead at Saudi Arabia (0.5, did not happen)
The points earned by this student would be: ((1.0 + 0.7) – (0.3 + 0.5)) × 50 = 45.
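The scoring arithmetic is simple enough to sketch in a few lines of Python (a quick illustration; the function name and data layout are my own, not part of the assignment):

```python
# Sketch of the forecast scoring formula: sum the probabilities assigned
# to events that happened, subtract the probabilities assigned to events
# that did not happen, and multiply the difference by 50.

def forecast_points(forecasts):
    """forecasts: list of (probability, event_happened) pairs for one student."""
    correct = sum(p for p, happened in forecasts if happened)
    incorrect = sum(p for p, happened in forecasts if not happened)
    return round((correct - incorrect) * 50, 1)

# The hypothetical student above:
student = [(1.0, True), (0.3, False), (0.7, True), (0.5, False)]
print(forecast_points(student))  # 45.0
```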
The negative effect of incorrect predictions should prevent students from trying to game the process by blindly assigning 100% probability to every event on the list. But they won’t actually suffer from incorrect predictions, since the scale for the final course grade tops out at 950 and more than 1,000 points will be available from all assignments, quizzes, and exams. If an event on the list happens before the semester ends, I will simply use each student’s most recent forecast to calculate point values.
Since forecasting accuracy improves when estimates are averaged, I will show the class the aggregate results of students’ forecasts each time they are updated. I’m also going to divide the class into teams, with the members of each team taking the lead — through additional research and class presentations — on becoming more expert on one topic from the list. Potential events that I might put on the list:
The Turkish lira weakens past 22 lira to the U.S. dollar.
In Iran, the Assembly of Experts names a new Supreme Leader.
An anti-government protest in Cairo results in at least twenty demonstrators arrested, injured, and/or killed.
The president or prime minister of Lebanon is assassinated.
Turkey ends its occupation of Syrian territory.
I’m still working out more of the details on this idea, but I think it could be interesting for me and the students.
Another reflection on last semester’s comparative politics course . . .
I noticed a loose association between final course grades and students’ Perusall activity, so the cost-benefit of engaging or not engaging with Perusall assignments ought to be transparent to students.* Another plus: because Perusall scores student activity automatically with an AI algorithm, the assignments are basically “set and forget” on my end. This aspect was very convenient when I didn’t have the time or inclination to read all of the students’ annotations on certain assignments.
I’m so pleased with how Perusall functions that I’m going to incorporate it into my fall semester undergraduate courses.
They then had to answer this question: What mistakes are Nevada officials making with data about community college students?
As written, the Inside Higher Ed story describes people who should know better falling victim to omitted variable bias and confusing correlation with causation. Although I might be making similar mistakes in evaluating in-class discussion about the assignment, I think that students found it more interesting than most because the assignment was about other students.
Soon afterward, two similar items came across my radar:
In one of his recent contributions to this blog, Chad asks why students should attend class. In his experience
[C]lass attendance and academic performance are positively correlated for the undergraduate population that I teach. But I can’t say that the former causes the latter given all of the confounding variables.
Last year we reported on an accidental experiment in one of Patrick’s courses that allowed us to compare the impact of attendance and task submission in online and on-campus groups in Maastricht University’s Bachelor in European Studies. We observed that attendance appeared to matter more for the on-campus students, whereas handing in tasks was important for the online students.
This year the same course was fully taught on-campus again, although students were allowed to join online when they displayed symptoms of or had tested positive for Covid-19 (this ad-hoc online participation was, unfortunately, not tracked). We did the same research again and there are some notable conclusions to be drawn.
In the first-year BA course that we looked at, students learn how to write a research proposal (see here). The course is set up as a PBL course, so it does not come as a big surprise that attendance once again significantly impacted students’ chances of passing the course.