I put Perusall assignments into an online graduate course that I’m currently teaching. During the course’s first two weeks, students’ assignment scores neither synced with the Canvas gradebook nor became visible to students in Perusall until after each assignment’s deadline had passed, so I had to manually release scores for every assignment. This was not how Perusall had functioned in my undergraduate courses in the spring semester, when assignment scores were always visible to students and updated continuously in real time.
I eventually found the cause of the problem: I had not selected when to release scores to students on the settings page of the instructor’s dashboard.
Either this setting’s default had changed after the spring semester from “immediately, as students submit work” to one of the other options, or I had forgotten that I needed to change it when I was building the course on Perusall. Either way, the problem was easily solved. To this absent-minded professor, it was another demonstration of how easy Perusall is to use.
As promised, here is the prompt for the collaborative portion of the forecasting project in my upcoming Middle East course. There are two of these team deliverables — a draft report due after students have submitted the first three of their individual CHAMP assignments, and a final version (shown below) due at the end of the semester. In terms of contribution to course grade, the draft and final versions together are worth only a third of what the five individual assignments are worth. Also, a portion of the course grade will come from teammate evaluations.
Your team is interviewing for a job with Eurasia Group. The interview process requires that your team submit a jointly written report on your team’s Forecasting Project topic using the CHAMP framework:
Comparisons
Historical Trends
Average Opinion
Mathematical Models
Predictable Biases
Your team’s final report should apply all five components of the CHAMP framework in a forecast of no more than five pages of double-spaced, 11- or 12-point font text. Do not quote directly from sources in the report. Reference source material using footnotes; see the list of syllabus readings for proper bibliographic format. Footnotes can be in 10-point font.
Only one member of your team needs to submit the document for this assignment.
Your team’s work will be assessed using the rubric below.
Now I just need to create rubrics for the project’s individual and team assignments . . .
As promised in my last post, here is an example of iterating so that students repeatedly practice the same skills.
As I’ve previously mentioned, I’m putting a forecasting project into my fall semester Middle East course. The project’s constituent assignments will be based on the CHAMP system recommended by people like Phil Tetlock. A brief description of CHAMP is at the end of this Financial Times article by the economist Tim Harford.
My prompt for the first CHAMP assignment reads:
You are interviewing for a job with Eurasia Group. The interview process requires that you submit a forecast on your team’s Forecasting Project topic. The forecast needs to use the CHAMP framework:
Comparisons
Historical Trends
Average Opinion
Mathematical Models
Predictable Biases
In a one-page, double-spaced, 11- or 12-point font document, answer these questions for the Comparisons portion of your forecast:
What other cases are comparable to this situation? How do they indicate what will happen this time?
My guiding questions for the other CHAMP assignments are:
Historical Trends
What individuals, groups, and institutions played key roles in similar events in the past? How are these “power players” likely to influence the current situation?
Average Opinion
What are the experts predicting about this situation? What is the view that lies in the middle of their assessments?
Mathematical Models
Are there mathematical models or empirical measures that can be used to gain insight into this situation? What do these models or measures indicate?
Predictable Biases
How has your thinking been affected by emotion and personal preference? How have you adjusted your analysis to account for these biases?
I’ll talk about the team-based aspects of this project in a future post.
I’ve begun integrating Perusall into my online, asynchronous graduate international relations courses. First up is a course in our master’s degree program that starts next month. I’ve chosen to start with this one because I typically assign an analysis of a peer-reviewed journal article in lieu of a midterm exam, and the questions in my Perusall assignments for undergraduates mirror my instructions for the article analysis. Regular Perusall assignments will give students opportunities to develop the skills they will need for the article analysis.
While practice improves performance generally, in this case I see it as particularly important. A growing proportion of our M.A. students are undergrads who have opted for a fifth-year master’s degree; they begin taking graduate courses in their fourth year of college. My four-person department only has about ten political science majors per year, but given the organization of the department’s curriculum, I encounter only about half of these majors in the classroom prior to their graduation. This means there is wide variation in content knowledge and writing ability among the majors who enter the five-year program and first pop up in my M.A. courses. Making the situation even more complicated: the two-year M.A. students are often mid-career military officers who have first-hand international experience and are very academically talented.
These courses are seven weeks long. Previously I assigned an extensive list of readings, two writing prompts, and discussion board participation each week. I’ve now replaced one of the writing prompts with two Perusall assignments per week. I’m hoping that this change will help build a sense of community among the students, which is more difficult to achieve in an asynchronous online environment than in a physical classroom. At minimum, the use of Perusall should cause students to notice the superior skills of some of their classmates and stimulate them to increase their own efforts.
I like each of my undergraduate courses to have at least one authentic skill-based deliverable; for example, historical timelines or ArcGIS storymaps. I’m teaching the Middle East in the fall semester, and, having recently read books like Superforecasting by Tetlock and Noise by Kahneman, Sibony, and Sunstein, I hit upon the idea of having students try to predict events in the region.
I’ll have students assign percentage probabilities to a list of potential events and give them opportunities to periodically adjust these probabilities. At the end of the semester, students will earn points for listed events that happened and lose points for those that didn’t, with point amounts weighted by the probabilities they assigned. My formula, with probabilities expressed as decimals:
(sum of the probabilities the student assigned to correct forecasts – sum of the probabilities the student assigned to incorrect forecasts) × 50.
A hypothetical example:
| % Probability | Event | Outcome |
| --- | --- | --- |
| 100 | The sun rises in the eastern sky | Correct |
| 30 | The capital of Turkey changes from Ankara to Istanbul | Incorrect |
| 70 | Barham Salih remains president of Iraq | Correct |
| 50 | Iran fires an ICBM with a nuclear warhead at Saudi Arabia | Incorrect |
The points earned by this student would be: ((1.0 + 0.7) – (0.3 + 0.5)) * 50 = 45.
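For anyone who wants to automate this arithmetic, here is a minimal sketch in Python; the list of (probability, outcome) pairs is hypothetical and simply mirrors the table above.

```python
# A minimal sketch of the scoring formula. Probabilities are kept as whole
# percentages to avoid floating-point drift; data mirror the table above.
forecasts = [
    (100, True),   # The sun rises in the eastern sky
    (30, False),   # The capital of Turkey changes from Ankara to Istanbul
    (70, True),    # Barham Salih remains president of Iraq
    (50, False),   # Iran fires an ICBM with a nuclear warhead at Saudi Arabia
]

correct = sum(p for p, happened in forecasts if happened)        # 170
incorrect = sum(p for p, happened in forecasts if not happened)  # 80
points = (correct - incorrect) * 50 / 100
print(points)  # 45.0, matching the worked example above
```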
The negative effect of incorrect predictions should prevent students from trying to game the process by blindly assigning 100% probability to every event on the list. But incorrect predictions won’t actually hurt students’ final grades, since the scale for the final course grade tops out at 950 points and more than 1,000 points will be available from all assignments, quizzes, and exams. If an event on the list happens before the semester ends, I will simply use each student’s most recent forecast to calculate point values.
Since forecasting accuracy improves when estimates are averaged, I will show the class the aggregate results of students’ forecasts each time they are updated; a sketch of this averaging step appears after the list below. I’m also going to divide the class into teams, with the members of each team taking the lead — through additional research and class presentations — on becoming more expert on one topic from the list. Potential events that I might put on the list:
The value of the Turkish lira weakens to more than 22 lira per U.S. dollar.
In Iran, the Assembly of Experts names a new Supreme Leader.
An anti-government protest in Cairo results in at least twenty demonstrators arrested, injured, and/or killed.
The president or prime minister of Lebanon is assassinated.
Turkey ends its occupation of Syrian territory.
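Here is a minimal sketch of that averaging step, assuming each student’s most recent forecasts are stored as a dictionary keyed by event; all names and numbers below are hypothetical.

```python
# Average each event's probability across students' most recent forecasts.
# The event keys and probabilities below are hypothetical.
student_forecasts = [
    {"lira_weakens_past_22": 0.6, "new_supreme_leader": 0.2},
    {"lira_weakens_past_22": 0.8, "new_supreme_leader": 0.1},
    {"lira_weakens_past_22": 0.7, "new_supreme_leader": 0.3},
]

aggregate = {
    event: round(sum(f[event] for f in student_forecasts) / len(student_forecasts), 2)
    for event in student_forecasts[0]
}
print(aggregate)  # {'lira_weakens_past_22': 0.7, 'new_supreme_leader': 0.2}
```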
I’m still working out more of the details on this idea, but I think it could be interesting for me and the students.
Another reflection on last semester’s comparative politics course . . .
I noticed a loose association between students’ Perusall activity and their final course grades, so the costs and benefits of engaging with Perusall assignments ought to be transparent to students. Another plus: because Perusall scores student activity automatically with an AI algorithm, the assignments are basically “set and forget” on my end. This was very convenient when I didn’t have the time or inclination to read all of the students’ annotations on certain assignments.
I’m so pleased with how Perusall functions that I’m going to incorporate it into my fall semester undergraduate courses.
Students then had to answer this question: What mistakes are Nevada officials making with data about community college students?
As written, the Inside Higher Ed story describes people who should know better falling victim to omitted variable bias and confusing correlation with causation. Although I might be making similar mistakes when evaluating the in-class discussion, I think students found this assignment more interesting than most because it was about other students.
Soon afterward, two similar items came across my radar:
In one of his recent contributions to this blog, Chad asks why students should attend class. In his experience
[C]lass attendance and academic performance are positively correlated for the undergraduate population that I teach. But I can’t say that the former causes the latter given all of the confounding variables.
Last year we reported on an accidental experiment in one of Patrick’s courses that allowed us to compare the impact of attendance and the submission of tasks in online and on-campus groups in Maastricht University’s Bachelor in European Studies. We observed that attendance appeared to matter more for the on-campus students, whereas handing in tasks was important for the online students.
This year the same course was fully taught on-campus again, although students were allowed to join online when they displayed symptoms of or had tested positive for Covid-19 (this ad-hoc online participation was, unfortunately, not tracked). We did the same research again and there are some notable conclusions to be drawn.
In the first-year BA course that we looked at, students learn how to write a research proposal (see here). The course is set up as a problem-based learning (PBL) course, so it does not come as a big surprise that attendance once again significantly impacted students’ chances of passing the course.
I decided to survey my comparative politics class on their opinions about Perusall after the first exam. Of a total of thirteen students, only eight were in class on the day of the survey, so the results are in no way statistically representative. But here they are anyway. Each survey item was on a five-point scale, with 1 equal to “strongly disagree” and 5 as “strongly agree.”
| Statement | Ave. Score |
| --- | --- |
| Reading other people’s annotations helps me understand assigned readings. | 4.1 |
| The university should continue to offer Perusall as an option for undergraduate courses. | 3.2 |
| I find Perusall difficult to use. | 2.4 |
| I’m more likely to read assigned journal articles that are on Perusall. | 3.3 |
| Perusall helped me complete reading responses. | 3.6 |
| Perusall helped me study for the exam. | 3.4 |
No obvious warning signs in the results. And the statement that reflected my main objective in using Perusall — increasing students’ understanding of assigned readings — was the one with which students most strongly agreed.
The class has averaged 80% on Perusall assignments so far. In my opinion, this is a sign that Perusall’s assessment algorithm fairly evaluates the quality of students’ interaction with assigned readings. Since the marking process involves no effort on my part, it’s a win-win situation. I’m now thinking about how I can incorporate Perusall into other courses.
As I noted in my first post about Perusall and in previous comments about teaching comparative politics, students have not demonstrated a sufficient level of engagement with or understanding of journal articles I’ve assigned. While collaboratively annotating journal articles ought to help solve this problem, I’m hoping to make the learning benefits of the process more transparent to students by connecting each Perusall assignment to one of my traditional reading responses.
Here is the prompt for all of the Perusall assignments:
Annotate the article to answer these questions:
What is the article’s subject (the question, puzzle, or problem being examined)?
What and where is the thesis?
What are the independent variables (causes) and how are they examined?
How are the independent variables related to the dependent variable (effect)?
What is the conclusion of the author(s)?
Here is an example of a reading response — the journal article in the Perusall assignment is at the top:
Why did the Arab Spring “succeed” in Tunisia but “fail” in Egypt and Libya?
The Perusall annotations and the reading response are due an hour before the start of the class in which the material will be discussed.
In today’s class, the first of the semester, students will be doing an ungraded practice run at using Perusall. The first graded Perusall assignment, along with its associated reading response, is due Wednesday morning. We’ll see how this goes.