American Autogolpe

A brief post this week about the televised hearings of the U.S. House of Representatives’ January 6 committee.*

Republic

I teach democracy from a comparative perspective, a challenge when students have had the ideology of American exceptionalism drilled into them since birth.

While watching the second installment of the hearings, I realized that they could serve as a reality check for students who tend to see “democracy” as a purely American phenomenon and whose culminating undergraduate achievement is a legalistic rehash of a 19th century Supreme Court opinion on the U.S. constitution’s Establishment Clause.

In my opinion, a much more meaningful exercise would be for students to research forms of democracy and threats to it globally. A class could be divided into teams with each team analyzing a different country in relation to the USA. Testimony from the hearings could be used to identify pivotal events that might or might not parallel what has happened in, for example, Venezuela.**

It just so happens that there are plenty of people who already thought of this kind of project — the folks at Democratic Erosion. Check out their sample syllabus for a semester-long course.

* full name: Select Committee to Investigate the January 6 Attack on the United States Capitol

** with readings such as Javier Corrales, “Authoritarian Survival: Why Maduro Hasn’t Fallen,” and Milan W. Svolik, “Polarization versus Democracy,” which appeared in Journal of Democracy in 2020 and 2019, respectively.

A Tempting Experiment

You’re not trained for that

I recently listened to this episode of the Hidden Brain podcast, on using audible clickers to train humans how to throw a frisbee and perform surgery. Clickers seem to be very effective in part because they substitute for other, possibly emotion-laden reactions from the trainer.

I wondered how I might use clickers as a teaching tool, and had an email conversation with a psychology colleague who specializes in behavioral training. Here is the gist of the conversation:

Clickers provide immediate positive feedback for a specific, discrete action within a complex chain of behaviors, without the need to interrupt the chain as it unfolds.

Any process that is composed of multiple, discrete behaviors is amenable to clicker training, as long as the process can be observed by the teacher and the clicks can be delivered within a second or two of observing the targeted response. An activity like writing is probably not suitable for clickers, because the writing process can’t easily be separated into a series of precisely-defined behaviors, and it would require that the teacher continuously observe the student perform the writing task from start to finish.

However, the technique could be applied to something like class presentations — with clicks delivered when students complete important components of the presentation, such as using terminology correctly, answering an important question, speaking at an adequate volume, or making eye contact with the audience.

Clicks should initially be delivered each time the targeted behavior is displayed, but then systematically delivered less often as the behavior becomes more frequent. Likewise, they should only be used to indicate support for a desirable behavior rather than to signal disapproval of an undesirable behavior.

Last, teachers should obtain consent or buy-in from students about the process and its goals.

Can Students Predict the Future?

I like each of my undergraduate courses to have at least one authentic skill-based deliverable; for example, historical timelines or ArcGIS storymaps. I’m teaching the Middle East in the fall semester, and, having recently read books like Superforecasting by Tetlock and Noise by Kahneman, Sibony, and Sunstein, I hit upon the idea of having students try to predict events in the region.

I’ll have students assign percentage probabilities to a list of potential events and give them opportunities to periodically adjust these probabilities. At the end of the semester, students will earn points for events that happened and lose points for events that didn’t, with point amounts weighted by the probabilities they assigned. My formula:

((Sum of the percentage probabilities the student chose for his or her correct forecasts) – (Sum of the percentage probabilities the student chose for his or her incorrect forecasts)) * 50, with percentages expressed as decimals in the calculation.

A hypothetical example:

% Probability | Event | Outcome
100 | The sun rises in the eastern sky | Correct
30 | The capital of Turkey changes from Ankara to Istanbul | Incorrect
70 | Barham Salih remains president of Iraq | Correct
50 | Iran fires an ICBM with a nuclear warhead at Saudi Arabia | Incorrect

The points earned by this student would be: ((1.0 + 0.7) – (0.3 + 0.5)) * 50 = 45.
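
For anyone who wants to automate the grading arithmetic, here is a minimal sketch of the formula in Python (the function name and data layout are my own inventions for illustration; probabilities are expressed as decimals, as in the example above):

```python
# A minimal sketch of the scoring formula. The function name and the
# (probability, correct) data layout are assumptions for illustration;
# probabilities are expressed as decimals, per the worked example above.
def forecast_score(forecasts):
    correct = sum(p for p, was_correct in forecasts if was_correct)
    incorrect = sum(p for p, was_correct in forecasts if not was_correct)
    return (correct - incorrect) * 50

# The hypothetical student above:
example = [(1.0, True), (0.3, False), (0.7, True), (0.5, False)]
print(forecast_score(example))  # 45.0
```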

The negative effect of incorrect predictions should prevent students from trying to game the process by blindly assigning 100% probability to every event on the list. But they won’t actually suffer from incorrect predictions, since the scale for the final course grade tops out at 950 and more than 1,000 points will be available from all assignments, quizzes, and exams. If an event on the list happens before the semester ends, I will simply use each student’s most recent forecast to calculate point values.

Since forecasting accuracy improves when estimates are averaged, I will show the class the aggregate results of students’ forecasts each time they are updated (a small sketch of this aggregation follows the list below). I’m also going to divide the class into teams, with the members of each team taking the lead — through additional research and class presentations — on becoming more expert on one topic from the list. Potential events that I might put on the list:

  • The Turkish lira depreciates past 22 lira to the U.S. dollar.
  • In Iran, the Assembly of Experts names a new Supreme Leader.
  • An anti-government protest in Cairo results in at least twenty demonstrators arrested, injured, and/or killed.
  • The president or prime minister of Lebanon is assassinated.
  • Turkey ends its occupation of Syrian territory.
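
Here is the rough sketch of the class-level averaging mentioned above (the event labels and probabilities below are purely hypothetical):

```python
# Hypothetical class-level aggregation: average each event's probability
# across students' most recent forecasts (all values invented for illustration).
class_forecasts = {
    "Lira depreciates past 22:1": [0.60, 0.85, 0.40],
    "New Supreme Leader named in Iran": [0.10, 0.25, 0.15],
}
for event, probs in class_forecasts.items():
    print(f"{event}: {sum(probs) / len(probs):.0%}")
```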

I’m still working out more of the details on this idea, but I think it could be interesting for me and the students.

Thorstein Veblen Was Right?*

Not your area, get back to work

Tangentially related to Simon’s last post about the use of metrics to rank universities . . .

I am not one who subscribes to the idea that academic employment is the pursuit of a “life of the mind.” I find no satisfaction in constructing my self-identity around my job. But others do. So off we go on another stroll down the yellow brick road.

According to a recent report by the American Academy of Arts and Sciences, fewer people are obtaining graduate degrees in the humanities. The number of humanities master’s and doctoral degrees peaked in 2012 and 2015, respectively. Walk backward from these peaks using average time to degree: more people enrolled in graduate school to avoid the terrible labor market caused by the 2008 market collapse and subsequent recession. Although the numbers of awarded master’s and doctoral humanities degrees have decreased substantially from their peaks, the number of advertised tenure-line jobs in these fields has fallen by much more — 40 to 60 percent.

Focused study of the humanities has also dropped precipitously at the undergraduate level, for the reasons detailed by the author of that linked essay. Fewer students majoring in these fields means fewer PhD-holders are needed to teach them.

Meanwhile, on the other side of the metaphorical disciplinary aisle, we have a study showing that fewer than 30 percent of people awarded PhDs in the sciences, engineering, and mathematics were in tenure-line academic positions. Most were in nonacademic careers. For these folks, the grass outside the Ivory Tower is greener.

But you might be thinking: how does this relate to me, given that I’m in the social sciences? Let’s take a look at the work environment for the people, regardless of academic specialty, who do become university faculty. Last year, the American Association of University Professors (AAUP) published the results of a survey on faculty governance; charts of the results are embedded below. Respondents felt that they were in charge of evaluating student performance and generating curricular content, but thought that they had far less authority over program delivery, intellectual property, and admissions policies.

The picture is far worse when it comes to faculty perceptions of managerial hiring, planning, and budgeting processes.

It seems that a large portion of university faculty in the USA see their work environment as one in which they are told, “You do the teaching and grading, we’ll make all of the important decisions ourselves.” This gap between how faculty would like universities to operate and how they actually operate is, for me, additional evidence of higher education’s long-term structural transformation.

*For those of you who don’t get the reference, a summary.

Perusall 4

Another reflection on last semester’s comparative politics course . . .

I noticed a loose association between final course grades and students’ Perusall activity, which suggests that the cost-benefit of engaging or not engaging with Perusall assignments ought to be transparent to students.* Another plus: because Perusall scores student activity automatically with an AI algorithm, the assignments are basically “set and forget” on my end. This aspect was very convenient when I didn’t have the time or inclination to read all of the students’ annotations on certain assignments.

I’m so pleased with how Perusall functions that I’m going to incorporate it into my fall semester undergraduate courses.

Previous posts on Perusall:

Perusall

Perusall 2

Perusall 3

*With only twelve students in the course by the end of the semester, I’m not going to bother to try to calculate correlation coefficients.

Creating Wicked Students 3

Time to reflect on the previous semester’s successes and failures:

I might be on to something with the Wicked Problems that I created for my comparative politics course. Previous posts on the subject are here and here. A brief synopsis of the activity: in class, teams of students have to quickly determine and present a possible solution to an unstructured, authentic problem. I put four of these exercises into the course:

  • Political risk consultants recommend to Volkswagen executives which of two sub-Saharan African states is most suitable for establishing a new automobile manufacturing site and sales network.
  • Defense Intelligence Agency analysts identify which of three Latin American U.S. allies is most susceptible to a Russian GRU election disinformation campaign.
  • The United States Institute of Peace delivers a conference speech on constitutional design for leaders of Libya’s major political parties that compares constitutionally established institutions of government across four states.
  • Members of Iran’s Mujahedin-e-Khalq create a strategy for overthrowing the Islamic Republic by examining revolutionary movements in four other states.

Students found the exercises engaging. My exams included a question that asked students to reflect on what they learned about their problem-solving ability from each Wicked Problem, and their answers indicated a reasonable degree of meta-cognition.

But it was obvious that students failed to use the methods of comparison that I repeatedly demonstrated during class discussions. I expected students to organize their cases and variables into a simple table, like I had, but they didn’t. So, for example, instead of something like this:

Variable | Botswana | Nigeria
Ethnically heterogeneous | No | Yes
Prior civil war | No | Yes
Major oil exporter | No | Yes
High level of political risk | No | Yes

students presented the equivalent of this:

Nigeria has a large population and represents a larger automobile market than Rwanda, so Volkswagen should site its new operation in Nigeria.

I suppose the solution is to require that students create their presentations by filling in a blank table, which will force them to select cases and variables in a logical manner.

TLC @ APSA

This year’s Teaching and Learning Conference will be held on Saturday, September 17, as part of the APSA’s annual meeting in Montreal. Full details on the program and registration process are here. The early bird rate for conference registration ends on July 11.

Scientific Teaching: A Review

A colleague who was cleaning out his office gave me a copy of Scientific Teaching by Jo Handelsman, Sarah Miller, and Christine Pfund (W.H. Freeman and Co., 2008). Intrigued by the title, I gave it a quick read. The book contains some general information on active learning and presents a template for organizing faculty development workshops on topics like assessment, but it was not the guide to effective teaching that I had expected. The book does not discuss empirically-backed research on how people learn. At all.

Instead, Handelsman, Miller, and Pfund favorably discuss learning styles, a zombie educational concept that refuses to die. They heavily reference Learning Styles and Pedagogy in Post-16 Learning: A Systematic and Critical Review by Frank Coffield, David Moseley, Elaine Hall, and Kathryn Ecclestone (Learning and Skills Research Centre, 2004) as support for their argument. In the process, they fundamentally mischaracterize the report’s findings.

For example, on page 9, they write that Coffield et al. (2004) “identified over 70 unique approaches to learning styles . . . [that] range from models that explain learning styles as innate . . . ‘flexibly stable’ or . . . that contribute to learning efficacy.” Coffield et al. (2004) state very clearly that these are claims made by those who advocate for the concept of learning styles, not that evidence exists for those claims. In fact, when Coffield et al. (2004) examined thirteen commonly used learning-style inventories, they found that twelve did not meet one or more basic criteria for internal consistency, test-retest reliability, construct validity, and predictive validity. They conclude that the field of learning styles “is bedeviled by vested interests because some of the leading developers of learning style instruments have themselves conducted the research into the psychometric properties of their own tests, which they are simultaneously offering for sale in the marketplace . . . After more than 30 years of research, no consensus has been reached about the most effective instrument for measuring learning styles and no agreement about the most appropriate pedagogical interventions” (p. 137).

The lack of evidence for the existence of learning styles was also discussed in detail by Harold Pashler, Mark McDaniel, Doug Rohrer, and Robert Bjork in “Learning Styles: Concepts and Evidence” (Psychological Science in the Public Interest 9, 3 [2008]). They note in this article that adjusting teaching techniques to students’ expressed preferences for particular forms of instruction (i.e., learning styles) does not correlate with observable cognitive or skill aptitudes, and that only a handful of published studies citing the existence of learning styles had conducted valid experimental tests. The cognitive psychologist Daniel Willingham made the same point in this 2009 interview.

In sum, Scientific Teaching’s reliance on a concept that was widely discredited both before and soon after its publication renders it misleading and, therefore, useless.

The Marshmallow Tower Game

Along the lines of my last post, I’ve tweaked another game that I have used previously — the marshmallow challenge. My goal was to illustrate how economic development can be considered a collective action problem in which trust plays a key role. Here are the rules of the game:

  • Each team has 18 minutes to build a tower topped by a marshmallow using the materials provided.
  • The members of the team that builds the tallest tower earn 25 points each.
  • A “Red” player secretly placed on your team gets 25 points if their real team wins.
  • If a team correctly identifies its Red player, each team member wins 25 points. Only one guess per team.

The debriefing discussion included my brief description of Rousseau’s stag hunt scenario, and these questions:

  • If one considers the height of a tower as an indicator of a society’s level of economic development, why did some societies (teams) develop more quickly than others?
  • Did cultural values promote trust among team members?
  • What was in each person’s best interest? Were these interests achieved?
  • How did having a Red on your team affect your team’s behavior?
  • Who do you think the Reds were? Why?
  • How does it feel to be accused of being a Red?

At the very end of the discussion, I revealed that there were no Red players.

The class had ten students, whom I divided into three teams. One team’s tower collapsed when time expired, but none of the teams exhibited a high degree of dysfunction due to suspicions about the identity of their Red player. As usual, I think the game would work better in a class with more students.

The Bandit Game

In an attempt to rectify the failure of my previous classroom game on ethnic heterogeneity, democracy, and dictatorship, I created another game that included a loss aversion component. I intended the game to demonstrate the concepts found in Mancur Olson’s 1993 article, “Dictatorship, Democracy, and Development” (The American Political Science Review 87, 3). Here are the rules for the game’s initial version (a brief simulation sketch follows the Version 1 list):

  • Each person gets a playing card and 4 chips.
  • The class is divided into small groups.
  • The person with the highest card value in each group is a bandit.
  • The game has five rounds.
  • Each round, each group’s bandit decides how many chips (1, 2, 3, or 4) to confiscate from every other group member. The bandit must take at least 1 chip from each member who still has one.
  • After round 1, 2, 3, and 4, each non-bandit gets 1 additional chip if they have ended the round with > 0 chips.
  • The person in each group with the most chips after round 5 earns points equivalent to the number of chips in their possession.
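
As promised above, here is a minimal Python rendering of Version 1’s chip accounting (the uniform-random confiscation rule and all names are my own simplifications; real bandits choose strategically and may take different amounts from different members):

```python
import random

# Minimal simulation of Version 1's chip accounting. The random per-round
# demand is a simplification; real bandits choose strategically and may vary
# the amount taken from each member.
def play_version_1(group_size=4, rounds=5, start_chips=4):
    chips = [start_chips] * group_size        # index 0 is the bandit
    for r in range(rounds):
        demand = random.randint(1, 4)         # chips demanded from each member
        for i in range(1, group_size):
            seized = min(demand, chips[i])    # take at least 1 if the member has any
            chips[i] -= seized
            chips[0] += seized
        if r < rounds - 1:                    # replenish after rounds 1-4 only
            for i in range(1, group_size):
                if chips[i] > 0:
                    chips[i] += 1
    return chips

final = play_version_1()
print(final, "winner's score:", max(final))   # most chips after round 5 wins
```

Running this a few times shows why taking everything at once backfires: a member reduced to zero chips never receives replenishment, so the bandit’s future income dries up.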

Version 2 of the game has the same rules as Version 1, plus:

  • A bandit can switch to a different group after each of rounds 1-4. If the incoming bandit holds the higher-value card, the other group’s bandit becomes an ordinary person.
  • The new bandit takes the eliminated bandit’s chips and can keep them or distribute some or all of them in any manner to members of their new group.

Version 3 has the same rules as Versions 1 and 2, plus:

  • Members of a group can eliminate a bandit if (a) they have card suits different from the bandit’s suit, and (b) the combined value of their cards exceeds the value of the bandit’s card. If a bandit is eliminated, the bandit’s chips are distributed equally among the challengers.
  • A bandit can retain control if (a) group members with cards of the same suit as the bandit’s decide to ally with the bandit and (b) the combined value of cards of this suit exceeds that of the bandit’s challengers.

Before play started, I stacked the deck with cards from only three suits because of the small class size — thirteen students were registered for the course, but only eleven showed up. I divided these eleven students into three groups.

For all versions of the game, all bandits confiscated the same number of chips from their group’s members in each round, even though the rules did not specify that they had to do this. In Version 1, one bandit confiscated all the chips from every group member in one round, which ended that group’s game play for the remaining rounds — demonstrating that it’s better for a stationary bandit to extract only a portion of wealth from the populace at any given time. During Version 2, no bandit changed groups, and in Version 3, no one tried to eliminate a bandit.

This game worked better than the last one, but it still needs a much larger number of participants for it to function as intended.