Statecraft in the International Relations Classroom

Today we have a guest post from Eric Cox, an associate professor at Texas Christian University. He can be contacted at e[dot]cox[at]tcu[dot]edu.

Does the online Statecraft simulation improve student learning when used as a key component of international relations classes? I explored this question in a Journal of Political Science Education article through a controlled comparison of two IR course sections taught during the same semester. One section was randomly chosen to participate in Statecraft, while the other was assigned a research paper. The primary finding of the study was that students in both sections performed similarly on exams when controlling for other factors.

Statecraft is a turn-based simulation that divides students into “countries” that they govern. Each country must choose its form of government, economic system, and other attributes. Players also choose whether to focus on domestic spending priorities such as schools, hospitals and railroads, or on military capabilities. They must deal with terrorism, the melting of Ice Mountain, pirates, and rumors. The simulation is, to put it mildly, complex. I have been using it for just over a decade.

To try to put the students doing the research paper on an equal footing with those engaged with Statecraft, I dedicated several days of class to instruction in research writing skills and peer review. The students in this section spent roughly the same amount of time in class on their paper as the students in the Statecraft section did on the simulation. Both groups also wrote about the same amount.

At the end of the semester, I compared class performance on three exams and gave students a brief survey on their experiences. The initial findings were surprising: the research paper class did much better on exams but was less satisfied with the research assignment than the Statecraft students were with the simulation. I then obtained access to students' GPAs at the time they entered the course and re-ran my analysis with GPA, whether students were taking the course for a grade, and whether students were political science majors as controls. Once these controls were introduced, the effect of Statecraft disappeared. The strongest predictor of course performance was a student's incoming GPA: students with high prior GPAs made As, B students made Bs, and so on. Academic performance was independent of the research paper or Statecraft assignment. However, students in the Statecraft section showed a strong preference for the simulation over a traditional research paper, and students in the research paper section indicated they would have rather done Statecraft. Subsequent student evaluations have also demonstrated the relative popularity of Statecraft.
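For readers curious what this kind of analysis looks like in practice, here is a minimal sketch of an exam-score regression with controls. The variable names and data file are hypothetical; the article itself reports the actual model and estimates.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per student, with exam score, section,
# incoming GPA, graded/audit status, and major. Column names are illustrative.
df = pd.read_csv("exam_scores.csv")

# Exam performance regressed on the Statecraft treatment plus the controls
# described above (incoming GPA, taking the course for a grade, major).
model = smf.ols(
    "exam_score ~ statecraft_section + prior_gpa + for_grade + polisci_major",
    data=df,
).fit()
print(model.summary())
```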

That said, my use of Statecraft has evolved, something I discuss in detail in my chapter of Teaching International Relations. Foremost, I dedicate class time to the simulation, and draw examples from the simulation when discussing IR theory, issue areas, and current events. Students have indicated that the simulation gives them a greater appreciation for the complexity of international relations and the challenges leaders face. 

Editor’s note: previous posts on Statecraft can be found here.

Observing Observation

Two weeks ago, students in my economic development and environmental politics course played my simulation on freshwater resource scarcity in Asia. If my memory is correct, it was the first time I had run the simulation in a physical classroom, and I was interested in whether students behaved differently in the face-to-face environment compared to a prior iteration of the simulation that occurred online.

You can lead the students to knowledge . . .

The underlying mechanics of the simulation were unchanged: six teams, each representing a different country with one or more transnational rivers crossing its territory. Turn by turn, the population expands, more food must be produced, and water demand increases, yet countries are building dams upriver and rainfall declines because of climate change. Eventually a country has a famine and millions of refugees spill into its neighbors.

This time around I added a victory condition: members of the team with the greatest percentage growth in GDP per capita when the simulation ended earned five points (out of a thousand) toward their final grades. I posted on the LMS a copy of the simulation's spreadsheet, which shows how teams' actions affect water availability, food production, hydroelectricity generation, and GDP, and encouraged students to experiment with it before the simulation started.

Students did seem more engaged with the simulation in the classroom than they had been online, though it was also far easier for me to observe their interactions. The real surprise was how baffled students were by the cause-and-effect relationships built into the spreadsheet. Growth in GDP requires growth in hydroelectric capacity, which only comes from building dams. Yet teams were hesitant to build dams. By the end of the simulation, China, for example, had stockpiled a large enough reserve to have constructed over one hundred dams, yet it had built only a handful. The largest change in GDP among the six teams? Only 1.1 percent over a twelve-year period.
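To make the kind of relationship at issue concrete, here is a deliberately simplified sketch of the causal chain described above. The coefficients and functional form are invented for illustration only; the actual spreadsheet's formulas are more elaborate.

```python
# A toy version of the spreadsheet's logic: dams feed hydroelectric capacity,
# and GDP growth is constrained by that capacity and by available water.
# All numbers are hypothetical and only illustrate the direction of the links.
def gdp_growth(dams_built, water_available=1.0, capacity_per_dam=0.5):
    hydro_capacity = dams_built * capacity_per_dam
    # No new dams means no new hydroelectric capacity, and therefore no growth.
    if hydro_capacity == 0:
        return 0.0
    # Growth scales with capacity but is limited by available water.
    return min(hydro_capacity, water_available) * 2.0  # percent per turn

print(gdp_growth(dams_built=0))  # 0.0 -- the pattern most teams produced
print(gdp_growth(dams_built=3))  # 2.0 -- growth capped by water availability
```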

Students clearly had not tried to figure out the spreadsheet before the simulation started, and none of them seemed to understand the relationship between economic growth, food, and water. Consequently, many of them flailed about helplessly as their country's water supply steadily dwindled. When I asked during the debriefing why they chose inaction instead of action, I got mostly blank looks. As I've noted before, many students seem to have little understanding of cause and effect; instead, in their worlds, stuff just happens. While I would prefer not to add multiple assignments to the course to force students to work with the simulation's causal relationships before the simulation actually begins, it might be necessary.

(Very) asynchronous online negotiating


As I've mentioned before, part of my new role involves designing a negotiation exercise for an online, asynchronous programme.

This presents a number of rather basic problems, so consider this post part of my attempt to work them out.

First up is the asynchronicity.

A fundamental part of negotiating is interaction, so if you can't do that there-and-then, you have to deal with a major challenge. In this case, our usual cycle for students is a week, within which we set work for them to fit around their other commitments. Since most of our students are working or have other major life obligations, it's really hard to ask for anything speedier.

Even if most could turn things around in a matter of days, we can’t be certain that everyone can, so those not able to would suffer in the exercise.

Secondly, there is a debriefing issue.

The materials we produce are intended to be used for several years: our role is relatively separate from delivery, as our associate lecturers handle most of the pedagogic queries and support. If we accept that negotiation must have debriefing (and I certainly do), then how do we fit that into this system? Is it my work, or the associates', or do we have some generic points to reflect upon? And how would any of these models operate?

Finally, we have the tiny question of scale.

I don’t know how many students will be using this exercise in any given presentation (as we call our delivery), so I need a negotiation that can cope with both a large number of participants and a varying number of participants. Our plans say 80, but that’s neither here nor there, except in the most general of terms.

Oh, and I have to assume none of the students will have any prior experience in negotiating.

So what to do?

I've been working through some different abstract options for handling all of this, and it might be useful to consider them here. They vary by how much of a 'negotiation' they involve, since that interaction issue strikes me as the most fundamental one.

Obviously, the starting model is a set-up with direct student-to-student negotiation: it's prototypical and best allows them to develop practical skills. But it needs a lot of time, support, and debriefing. Plus you have to work out roles.

So maybe you could instead have a 'negotiation' with an automated interlocutor: effectively a 'choose-your-own-adventure' approach, but with a computer programme rather than a paper-based text. It can be played individually, and paths and outcomes are fixed so feedback is easy, but it's not very much like actual negotiating.

A different direction would be to ask students to do the prep work for a negotiation: drawing up negotiating briefs, setting out positions, and the like. This is a crucial part of negotiating, so it's prototypical, but it lacks the pointy end of testing out ideas. It's more manageable for support and debriefing, but probably isn't as engaging.

And most distantly of all, you could ask students to study a real-world negotiation, through the lens of some theory. That’s also a good skill to learn, but it’s not so hands-on as any of the others.

In short, it’s a world of compromises.

For our purposes, we really want to build practical skills, so we’re currently closest to the first option: the ‘proper’ negotiation. As we often discuss here, the purpose of the exercise needs to be clear to you and to the student, otherwise it’s pointless making choices. In that sense, having the discussion with the rest of the team was an essential step in moving this on.

My tentative model right now looks like the following, working within the constraints I have.

In my four-week block for this, and alongside other work they need to do, I'm planning to give students a crash course in how to negotiate (week 1); two (and maybe three) rounds of negotiating (weeks 2-4); and some debriefing (week 4).

The block topic is international challenges to political stability, so I'll be using climate change as the substantive focus, which also allows me to use a UNFCCC-style format, with a couple of hundred roles that I can allocate to individuals. Those roles will have a priority order, so we start by populating key representative states (in terms of their different preferences) and then work through to everyone else, which lets us accommodate the varying numbers. That probably means making a generic position pack, plus some headlines for each role, with a requirement to expand on it through their own research.
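As an illustration of how that priority ordering could cope with variable enrolment, here is a minimal sketch. The role names, and the idea of doing this in code at all, are my own assumptions rather than anything decided for the module.

```python
# A minimal sketch of priority-ordered role allocation: key states come first
# in the list, so they are always filled regardless of how many students enrol
# in a given presentation. Role names here are purely illustrative.
def allocate_roles(students, ordered_roles):
    return dict(zip(students, ordered_roles))

# With only four students, only the four highest-priority roles are used.
cohort = ["Student A", "Student B", "Student C", "Student D"]
roles = ["China", "United States", "European Union", "India", "Tuvalu", "Brazil"]
print(allocate_roles(cohort, roles))
```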

The training would be some materials on practical negotiating, plus an option to download a small crisis game, to play offline with friends/family or even just to muse upon.

The main section would then require students to post positions/text on a forum each week, ideally to build a single text for final approval. This will require relatively simple technology, but does rely on students to be able to build coalitions and engage in discussion, which will be an issue for some.

To keep debrief viable, we’d probably need to start with a draft text – to keep things within relatively clear bounds – then provide cues to students to aid their own reflection, with some debrief points that could track key issues within the draft. This should make it more possible to keep associates on top of what’s gone on.

And that’s about as far as I’ve got on this.

There are lots of practicalities to work through, at all steps, but we think the basic design is viable. As I work through those, I’ll write more, but I’d love to hear thoughts.

The Challenge Game


Today we have a guest post from Elia Elisa Cia Alves, Federal University of Paraíba (UFPB), and Ana Paula Maielo Silva and Gabriela Gonçalves Barbosa, State University of Paraíba (UEPB), of Brazil. Elia Elisa Cia Alves can be contacted at eliacia [at] gmail [dot] com.

The Challenge Game was developed by a group of professors at the State University of Paraiba and the Mettrica Lab in Brazil. It is suitable for teaching concepts in international relations theory, such as state survival within an anarchic system, the security dilemma, alliances and the balance of power, and hegemony.


To play this game in the classroom, you will need 1) approximately 8 to 50 students who can play either individually or in teams, depending on the purpose to which the game is put, 2) candy, points, or some other reward that can be distributed, and 3) a method of determining the winner of a challenge between two parties, such as dice (high roll wins), rock-paper-scissors, or an online random number generator. Also, the rules of the game should be visible to students during the game.

The game is played in four rounds of approximately ten minutes each. A challenge is a one-candy bet (a loss results in one piece of candy being taken away) with a 50% probability of winning. Any individual or team that is challenged must participate in the challenge. Only one challenge should occur at a time so that the instructor can note what happens. A student or team that ends up with zero candy can no longer issue challenges; they are “dead” for the remainder of the round.

Round 1: Each student starts with one piece of candy. The winner of a challenge takes one piece of candy from the loser and can then challenge someone else. Any student who loses all of his or her candy is out of the game for the round. Depending on class size, the instructor may want to limit each student to a maximum number of challenges.

Round 2: Candy is distributed unequally among students. Most students should have 1-2 candies, a few students should have 3, and only a couple of students should have 4. The instructor may want to allow students to form alliances, in which case allied students can lend candies to each other if needed. However, lending is optional.

Round 3: Group students into teams. Distribute candy unequally among teams as in Round 2. Each team represents a nation-state. Students within a team decide, using any decision making method they choose, whether the team challenges any other team. As in Round 2, the instructor might allow teams to form alliances.

Round 4: Group students into teams and distribute candy as in Round 3. The professor grants special rules only to the teams with the greatest number of candies, such as altering their odds of winning a challenge. After the game, the professor should debrief the class to link theoretical international relations concepts to students' experiences of the game. In our JPSE article, we suggest several questions that can be used as part of the debriefing.
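For instructors who prefer a random number generator over dice or rock-paper-scissors, here is a minimal sketch of how a single challenge could be resolved digitally. The function name and structure are my own; the game itself specifies only the one-candy stake and the 50% odds.

```python
import random

def resolve_challenge(challenger_candy, defender_candy):
    """Resolve a one-candy bet with a 50% chance of winning, as in the rules above.
    Returns the players' updated candy counts (challenger first)."""
    if challenger_candy == 0 or defender_candy == 0:
        raise ValueError("A player with zero candy is out for the round.")
    if random.random() < 0.5:  # challenger wins and takes one candy
        return challenger_candy + 1, defender_candy - 1
    return challenger_candy - 1, defender_candy + 1

print(resolve_challenge(2, 1))  # e.g. (3, 0) or (1, 2)
```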

Designing to constraints


It’s summertime, so in between the flood warnings (seriously), it’s time to be doing some Big Thinking about teaching.

As part of my new role at the Open University, I’m contributing to a new Masters in IR, including the development of a simulation exercise.

I’ll be writing a lot more about this simulation in the next couple of years, mainly because the constraints are very different from those I’ve worked to before, with a big pile of knock-on consequences.

As a completely new programme, we've got relatively more room to manoeuvre than would usually be the case, but the constraints still loom rather large. As such, I'm dwelling on the third step of my usual approach to such situations.

For those unfamiliar with the OU, it’s the UK’s largest university (nearly the enrolment of the University of California system) working almost entirely on a distance-learning model. We have a lot of adult learners and a very flexible approach to working up to qualifications: you take a module at a time.

The new Masters will be entirely remote, with a taught module that runs for 36 weeks, followed by a dissertation. For most of that 36 weeks, we provide a collection of teaching materials – written and audio/visual – through our website, with structured activities for students, building up to interim and final pieces of assessment.

My role, as part of the central teaching staff, is to create those materials, which have to be able to stand being used by students for several years before a refresh, with activities supervised and moderated by a large team of associates, who handle the bulk of the direct interactions with students.

The upshot here is that I’ve been trying to work up a negotiation simulation that fits a number of requirements that are usually not that conducive to such things:

  • Student numbers will be variable across iterations;
  • I can’t assume all students will be doing this via our website (we have a significant number of students with various accessibility challenges, so they might only be able to learn via a printed version of our materials);
  • As such, synchronous interaction is not an option;
  • Even asynchronous interaction will be a problem for some;
  • And I can’t assume any prior knowledge of negotiation.

As the old joke about getting directions in Ireland goes, you wouldn’t start from here.

But that’s been precisely why I’ve enjoyed my first months here: it’s not run-of-the-mill and I’m being forced to think about how to manage the situation, rather than simply reinvent the wheel.

For those of you not moving jobs, remember that you too are working to constraints; you might just have internalised them to a degree. None of us gets a completely free hand, or even something close to one.

The response here is to work with the constraints, not against them.

Whether it's an oddly-shaped room, a limit on your timetabled time with students, making necessary adjustments for students with disabilities, building in assessment obligations, or a departmental edict against X, Y, or Z, it's the same thing. Whatever might be blocked, other things become possible.

The beauty of education is that it’s not uniform and that there’s no one correct way to do it: variety is a good thing, for so many reasons.

In my case, I’ve used those constraints to explore the options with the rest of the team. That meant presenting a number of basic models to them, with their benefits and disadvantages, all grounded in the question of what purpose this simulation is fulfilling within the programme.

Off the back of that discussion, I'm now working up an approach that combines at least two of those models, which we'll discuss again in September. And as we settle on things, I'll write more about how that might work and the further integration and delivery challenges that have to be addressed.

Improving Simulation Efficacy With a Scaffolded Final Exam

A follow-up to my post in April about making exams exercises in active learning:

From the very beginning of my teaching career, I’ve emphasized, or at least tried to emphasize, the importance of being able to construct evidence-based arguments. My exams are almost always intended to evaluate students’ proficiency at this task. As I mention in the post linked to above, the final exam for my comparative politics course in Spring 2020 included the stock phrase of:

reference course readings to support your argument.

For the final exam in Spring 2021, I substituted:

support your argument with 1) information from the Gerkhania server on Discord, and 2) cited references to at least two red and two green journal articles listed in the syllabus.

Explicitly requiring the citation of four articles that students were nominally already familiar with from previous assignments resulted in greater use of scholarly evidence in exam essays than had typically occurred in the past. Students sometimes didn’t use these sources in ways that actually supported their arguments, but in these cases I could tell that at least an attempt had been made.

However, to my surprise, not a single student referred to specific player behavior during the simulation. That is not how students read “information from the Gerkhania server on Discord.” Instead, they summarized the simulation’s outcome or, worse, repeated the general background information on Gerkhania that I had provided before the simulation began. So, for 2022, the exam prompt should probably include something like:

support your argument with 1) examples of specific actions made by players during the Gerkhania simulation, and 2) cited references to at least two red and two green journal articles listed in the syllabus.

This is all well and good, because my main purpose for the final exam is summative assessment of learning. But I also want the final exam to help me gauge whether the Gerkhania simulation contributed effectively to this learning. While the first part of my potential exam prompt gets at this question indirectly, I think more is needed. So I have been thinking about “scaffolding” the final exam around the simulation.

I typically run Gerkhania over three sessions. It occurred to me that I could assign something like the following after each session:

Which theoretical perspective best explains players’ behavior in today’s Gerkhania session? Why? Refer to specific player actions and Course Reading X in your response.

These assignments would be short pieces of writing, easy for students to complete and for me to grade. They would allow students to practice for the final exam, and they would function as a contemporaneous reflective through-briefing rather than just a post-hoc debriefing. And I would be able to observe whether students’ ability to construct evidence-based arguments about the simulation improved over time.

Discord With Gerkhania

In my comparative politics course this past semester, I ran my usual Gerkhania simulation on Discord as an experiment. Discord is a free social media platform that Amanda has discussed previously. It was a positive experience, for the following reasons:

I had never used Discord before, yet it was easy to figure out. Discord's design is intuitive, and setting up the simulation was simple. Students also found Discord easy to learn.

Students interacted more with each other than they did last year when I used Webex, despite a similarly small class. Webex does not allow for spontaneous communication between participants except for one-to-one chat messages. When building the Discord server, I granted students access to different communication channels according to their roles in the simulation. For example, a student representing an ethnic Khan who practiced the Montian religion had access to channels devoted to each group and could automatically message other Khans or Montians at any time. As server host, I could observe and participate in these conversations in real time.

Discord permits text, voice, and video communication. I deliberately chose not to use its videoconferencing capability, and none of the students used it either. We communicated with each other solely through text messages. I believe this enhanced rather than degraded the experience in comparison to Webex: no black boxes instead of faces, and no interrupted video or audio because of low-bandwidth internet connections. A user interface that facilitates text communication also means Discord is suitable for running a simulation like Gerkhania asynchronously rather than synchronously, something that isn't realistic with video-based platforms.

My use of Discord also meant that students automatically had a complete record of the simulation’s events that they could reference for the final exam. I did not have to take any additional steps, like create and share a recording, for the class to have a history of what had transpired.

Active Learning is More than Just Simulations

I’ve been in a few sessions recently where well-meaning faculty point out how important active learning is—true!—and then immediately mention ‘simulations and games’ as key examples of active learning (AL). Also true! But let’s be clear, simulations and games aren’t the only kind of active learning. They aren’t the most common kind, the easiest to do, or even what I would recommend that most faculty start with. When the right simulation or game is chosen, executed well, and debriefed effectively, it can be a great learning tool. But games and simulations are neither necessary nor sufficient for active learning, and I want to encourage everyone to think more broadly about how to increase AL in their classes.

Active learning is any tool, technique, or approach that calls on learners to actively engage in the learning process. The point is not the tool itself, but adopting a learner-centric approach that ensures that students are not simply passive recipients of information. 'Activating' the students, then, is about asking them to think, process, and make connections with the material, rather than just listen, read, or write down information. In some cases, a passive approach makes sense! Sometimes you really do just have to transmit information. The problem arises when we consistently turn to passive approaches without considering and experimenting with active approaches, which have a solid record of producing better engagement and learning. See, for example, Deslauriers et al. (2019), where even students who thought they learned more from a more passive approach actually learned more from an active one.

Simulations and games, then, can be active or passive, depending on whether everyone has the tools to effectively participate or actively watch and listen. Watching others play a game is only active if the observers are prompted to provide comments and input based on their observations. In such cases, they are active observers. Even participation doesn’t necessarily make the experience ‘active’. A simulation or role-play exercise where a student is too anxious about their performance or grade to pay attention and fully participate is not active for that student. So AL is not just about the activity you do, but how you use it and help students learn from it.

Moreover, AL encompasses so much more than simulations and games. Structuring a lecture around a provocative question, where students are encouraged to think through the steps as you go along, can be active. So can asking good discussion questions that lead to dynamic student-to-student debates. Asking students at the end of class to reflect on what they learned that day (or what is still confusing) is a method of active learning, and it can be done in one minute at the end of class, or as a written, audio, or video journal they create throughout the term.

When you consider that active learning can really be just small interventions in teaching (as Jim Lang puts it in his book, Small Teaching: Everyday Lessons from the Science of Learning), it suddenly becomes achievable for everyone. Simulations and games are sometimes a tough sell: they can seem juvenile or take too much time away from other content. But active learning? The benefits are clear, and centering such techniques doesn't actually require much work or time.

Even this blog makes this mistake—we are Active Learning in Political Science, and yet most of our coverage is on games and simulations. So consider this a call for a broader approach, one that brings legions more faculty into the world of active learning, without requiring a conversion to the gaming world. Let’s look for the small interventions that anyone can use—from a great discussion question to a good group activity to great reflective prompts—and be more careful with how we define and explain what active learning really is.

Online Simulations & Games Using Discord

I mentioned Discord about a year ago as we were all turning to virtual instruction at the start of the pandemic. I want to return to it specifically in the games and simulations context, though, as it has some really useful properties that can aid those instructors looking for a way to run their online simulations. If you are ready to start thinking about how to run Model UN, Diplomacy, or other complex simulations online, you should really consider Discord.

Discord is a social media platform used by gamers, podcasters, and other content creators to connect with their communities. Each group has its own Discord 'server', a private space that you can only enter with an invitation. Inside, you can create text- and voice-based 'channels' that let you structure conversations by topic. These channels can be open to everyone on the server, or private and hidden. As the server creator or administrator, you also have a lot of latitude for customizing settings, such as making a channel read-only or enabling 'slow mode', which prevents any one person from dominating the conversation. And server members can message each other individually or create small groups for private conversation. Text conversation is asynchronous, but it is easy to jump into a voice channel for voice-only or video conversations.

This kind of format lends itself very well to running complex simulations. There are several key needs for running an online simulation:

  • Instructors must be able to review rules and procedures, share documents and updates, and take questions from students, publicly and privately.
  • Students need to be able to post in-character public messages for other participants to see.
  • Students need to be able to post privately to their teammates, if they have them.
  • Students need to be able to send private messages to other students for secret negotiations.
  • Students may need to post files or links, share their screen, or jump onto a quick voice conversation.

It is easy to do all of this in Discord, without the constraints of a standard learning management system/virtual learning environment. By creating ‘roles’ in the server with different permissions, you can divide students by their teams or in-game roles and set channels that only they can access and that can identify them within the server. This makes communication much easier. For example, if you are running a UN Security Council simulation, you can create a ‘role’ for each country in Discord. You might not need to set up private channels for each country if there is only one person in each role, but this allows students to message each other without having to check a list of who is playing what role. They could also have a public channel for making speeches, and another where they upload and discuss the wording on resolutions. If you are running a full UN simulation with many different committees, you can have channels dedicated to the General Assembly and each committee, and private channels dedicated to each country so members of the same team can talk privately and share information. Discord therefore supports simulations both large and small.
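For instructors who want to automate that setup rather than clicking through Discord's interface, here is a minimal sketch using the discord.py library. The post describes configuring everything by hand; the bot command, country names, and channel names below are illustrative assumptions.

```python
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # required for prefix commands in discord.py 2.x
bot = commands.Bot(command_prefix="!", intents=intents)

# Illustrative role list for a small Security Council-style simulation.
COUNTRIES = ["France", "Russia", "United Kingdom"]

@bot.command()
@commands.has_permissions(administrator=True)
async def setup_sim(ctx):
    """Create one role per country, a private team channel, and a public channel."""
    guild = ctx.guild
    for name in COUNTRIES:
        role = await guild.create_role(name=name)
        # Hide the team channel from everyone except that country's role.
        overwrites = {
            guild.default_role: discord.PermissionOverwrite(read_messages=False),
            role: discord.PermissionOverwrite(read_messages=True),
        }
        await guild.create_text_channel(
            f"{name.lower().replace(' ', '-')}-private", overwrites=overwrites
        )
    # Public channel for in-character speeches, visible to all participants.
    await guild.create_text_channel("public-speeches")

bot.run("YOUR_BOT_TOKEN")  # placeholder token
```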

I’m using Discord right now to run a game of Diplomacy in my ISA Career Course on Games and Simulations in International Relations (with Victor Asal). There are plenty of online platforms that you can use, but I chose to use Discord because I didn’t know in advance if I would have more than 7 players. Most online platforms don’t allow for teams–but Discord does. Here is what the server looks like:

As you can see, I have a general channel for administrative purposes. I've since created a new read-only channel called 'maps and results' where I post the outcomes of each game turn along with updated maps. The public channels, text and voice, are open to all players if they want to communicate openly. Italy has made a call for peace and protection of the status quo, but no responses so far! The other channels are organized by country category. Each country has a private text and voice channel open only to their team and the facilitators. They also have a private 'orders' channel where they submit orders for their units each turn. I use those channels to adjudicate each turn. If they want to message another team, all they have to do is right-click on the name of the person they want to message (their country name appears next to their name), select 'message', and that will open up a private conversation for negotiations. The person-shaped icon in the top right of the screen pops up the list of server members for this purpose. It will also tell you who is online in case you want to invite them into a voice chat.

Running the game this way instead of over email or through an online game system gives me several advantages as an instructor. I can keep tabs on most of the gameplay, although some private conversations I would only see if I’m invited to join them (something you can require if you want). I also have a record after the gameplay of everything that happened, which is useful for debriefing, grading, and assessment. The interface is easy to use, and once students get familiar with it, you can reuse it for different games and exercises throughout your course. I can also allow ‘observers’–people who want to watch but not play. I can give them as much access as I want–for example, I can limit them to read- and listen-only so they can’t interfere with the game play.

I've used Discord to run a monthly trivia game as well as a 200+ person, multi-day conference, so I can attest to its robust capabilities. It is free, accessible from outside the US, pretty easy to learn, and has a robust mobile app that makes it accessible to students. The main downsides are that the server creator needs to put in a bit of work to figure out how to set up the server to meet your needs, and that the video and screen-sharing systems aren't always reliable. Asynchronous text channels and voice channels work just fine, though.

I know a lot of faculty want to run simulations but are restricted by social distancing or virtual classrooms. If you are ready to try something new, try Discord. I have no relationship with the company and am not being compensated by them for this post–I just want to recommend something that I’ve found very useful in my own teaching.

Benefits of Student Reflection

Today we have a guest post from Colin Brown, assistant teaching professor, and Jennifer Ostojski, Ph.D. candidate, from the political science department at Northeastern University. They can be contacted at colin [dot] brown [at] northeastern [dot] edu and ostojski [dot] j [at] northeastern [dot] edu.

This year we have had to adapt to the virtual environment the short, focused simulations we like to use in the classroom to reinforce material. This adaptation has caused us to think more about the value of independent student reflection in relation to group debriefings.

Colin had previously developed a simulation of coalition-building in Germany (available here at APSA Educate) for introductory comparative politics, which had two main learning objectives: (1) gain familiarity with German political parties as an example of multipartism, and (2) understand that big, centrist parties can still exert a lot of agenda-setting power in sometimes-chaotic multiparty systems. A key part of the exercise is the bargaining that occurs as students walk around the physical classroom.

In Spring 2020, we switched to online teaching two weeks before Colin had scheduled the simulation in his course. He made it an optional extra-credit online exercise, in which about one-third of the class participated. In lieu of a debriefing, students submitted ungraded answers to three questions:

1. What did you find hardest about reaching a coalition agreement?

2. What new perspective does this give you on the German case in particular?

3. What might be some of the strengths and weaknesses of coalition governments, and how did those play out here?

We used slightly different online versions of the simulation in Fall 2020. In Colin’s course, students stayed muted/invisible and used the private chat function to communicate during simulation sessions. Jennifer’s larger class used breakout rooms with students communicating with one another behind the scenes via Zoom chat, a classroom Slack channel, and social media (which more directly simulated the more intentionally chaotic in-person discussions). Colin assigned students to parties right as the simulation began while Jennifer provided students with party roles beforehand.

Based on the written responses and discussions, students in our courses learned the central lessons of the simulation equally well, and as well as students had in the in-person format in prior years, despite the differences in communication methods and the timing of role assignments. However, Colin's Spring cohort seemed to demonstrate better knowledge of both the specifics of the German system and broader concepts about multipartism, whereas the students in our Fall courses displayed more learning of broad concepts than of specific details. We found it interesting that the Spring students seemed to pick up more details from the simulation despite it being, well, March 2020. Our hunch is that writing responses to the reflection questions caused students to spend some minimal amount of time and effort checking whether they were correctly using relevant concepts. Although it is hard to rule out selection effects, engaging in independent reflection might benefit students' learning whether the simulation is online or in-person, even if it is not the most memorable or visible part of the exercise.