Model Diplomacy is a series of free National Security Council simulations put out by the Council on Foreign Relations. Michelle used it in her class last year, and I decided based on her positive experience to try it in my intro to IR class this year. In this post I’m going to explain the basics of Model Diplomacy, discuss my experience using it, and give my recommendation. Spoiler Alert: I loved it.
In a post last year, I talked about the potential of using annotation software like CritiqueIt to make the reading process more collaborative. In short, by creating a single copy of the reading that students can mark up together online, there’s the potential for creating discussion prior to and during class, and also for getting students to see course readings as statements in a dialogue.
My first use of CritiqueIt was promising, but I’m less satisfied after using it further in two undergraduate seminars and a graduate-level continuing education course.
Two things have continued to work, probably still making the tool a net positive. First, as a diagnostic tool CritiqueIt makes class prep easier, because it gives me a window into what students find interesting or are struggling with. Students indicate their interest implicitly or explicitly, and they also seem relatively fine with using their comments to signal that something doesn’t make sense—especially useful when they’re having difficulty with something I didn’t expect. Second, they seem to like it. Students seem to perceive it as a cool new gimmick, and I seem to get credit for trying it.
However, while CritiqueIt lets me know what students want the conversation in class to be about, it hasn’t generated a conversation among students on its own. Students have posted a few responses to other students’ annotations, but the kind of exchange I mentioned in the original post hasn’t happened consistently. Students seem to be completing the assignment because it signals to me that they have, in fact, engaged with the reading. That feedback is useful, as mentioned above, but it was not my ultimate reason for using the tool.
Since I want students to see political science writings as part of an ongoing exchange of ideas, there are three changes that I’ll be implementing next semester, thanks to insights from my colleague Daniel Smail, who has been experimenting with the same tool in his history courses:
- Build CritiqueIt into the entire semester. Students need time to get used to the tool, and the expectation that it’s an integral part of their work.
- Assign early readers. If everyone reads the night or morning before class, there’s less incentive to start a dialogue that none of their peers will respond to. By dividing up the collaborative readings and having one or two students make their annotations three or four days before class, there will be more time for students to jump into the conversation.
- Work CritiqueIt into summative assessment. This also normalizes the use of the tool and gives students an incentive to develop better commenting skills. Students will need several days to virtually hand the document back and forth, so this has to be accounted for when scheduling other assignments. But giving them a longer piece of journalism on the broad course theme and having them react to it, and then to each other, knowing that their comments will be graded on an explicit rubric, might be a better way to tease out their ability to respond critically to arguments—and actually use something they learned from class.
I’ve been telling students in my first-year seminar that the design of a good game often simultaneously combines chance, strategy, competition, and cooperation. About a month ago I invented a simple game to demonstrate how this could be accomplished.
I took the class outside, defined starting and finishing lines — about twenty-five meters apart — and divided the class into teams. The game had only one rule: every person on a team had to keep his or her left hand on the right foot of another teammate. First team to reach the finish line “won” the game.
Earlier this summer I wrote about two changes that I made to my five-week online summer course, Law, Courts and Politics: using Slack for class communication and specifications grading. Both experiments were a success.
Slack was a great addition. I found it easy to set up and to use. Students liked it. Thanks to the resources I noted in my earlier post, I created a simple structure: a channel for each week served as the home of announcements, files, links, and discussion — the center of the course. The introduction channel gave students the ability to practice, and the questions forum got some use, especially early in the term.
Because Slack has excellent apps for all mobile and computer platforms, I hoped that it would encourage regular communication, which it did. Total posts in the weekly channels ranged from 62 in week 2 to 90 in week 4. I posted reminders and introduced topics, but most posts were from the students. Nine or ten students were active each week; one student never posted in the weekly discussion forums. I was pleased that a group of students began posting mid-week and continued through the end of the week. Students picked up quickly on hashtags for topics and connecting to their fellow students via the @ symbol, which facilitated interaction. Posts were fairly long too, especially when you consider they were writing on their phones. I had expected phone use to result in short responses to comments, but that didn’t happen.
Do online learning teams help students recall facts and/or analyze political problems? Instructors often use online chat rooms and discussions as convenient tools for engaging teams of students in the process of writing, reading, and reacting, helping them recall information, and giving them opportunities for analyzing problems. Despite much agreement about the benefits of online teams, there is little evidence supporting their value.
I collected data on this subject from four sections of my introductory world politics course. Two sections from 2014 with a total of 75 students employed online learning teams, and two sections from the Fall of 2015 and Spring of 2016 with a total enrollment of 86 did not. The classes met at the same time, the same days, in the same classroom, and received the same lectures.
The learning teams were created by randomly assigning five or six students to groups, with each group given its own online chat room. Students received careful instructions regarding the use of chat rooms. They were required to post chat room comments at least twice each week, first with a response to the week’s assignment and second with their reaction to other students’ posts. This activity counted for ten percent of the final grade. Students in the sections that did not use online learning teams were instructed to study material on their own.
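The random assignment described above can be sketched in a few lines of code. This is only an illustration — the roster names, seed, and helper function are hypothetical, not part of the original study:

```python
import random

def assign_learning_teams(roster, min_size=5, seed=None):
    """Randomly split a class roster into learning teams of
    min_size or min_size + 1 students (here, five or six)."""
    rng = random.Random(seed)
    students = list(roster)
    rng.shuffle(students)
    # Striding through the shuffled list yields groups whose
    # sizes differ by at most one student.
    n_groups = max(1, len(students) // min_size)
    return [students[i::n_groups] for i in range(n_groups)]

# Example: a hypothetical roster of 38 students becomes seven
# teams of five or six, each of which would get its own chat room.
teams = assign_learning_teams([f"Student {i}" for i in range(1, 39)], seed=42)
```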
I assessed students’ learning with in-class exams on conceptual definitions and a take-home essay in which they used theory to analyze an historical event—for example, applying deterrence theory to the outbreak of the Gulf War in 1990.
A chi square analysis showed that students’ grades on exams and the paper assignment did not significantly differ between the sections with the online learning teams and those without. The findings imply that the online learning teams had no effect on student performance.
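For readers unfamiliar with the test, a chi-square comparison of grade distributions can be computed by hand. The sketch below uses invented counts purely for illustration — the actual grade data from the study are not reproduced here:

```python
def chi_square(table):
    """Chi-square statistic for a contingency table given as a list
    of rows of observed counts (e.g. grade bands by course section)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts of A / B / C-or-below grades for the
# team sections (n=75) and non-team sections (n=86).
observed = [[20, 32, 23],
            [22, 37, 27]]
stat = chi_square(observed)
```

With two rows and three grade bands, df = (2 − 1)(3 − 1) = 2, so the statistic would have to exceed the 5.991 critical value to be significant at the .05 level; distributions this similar fall far short of that.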
These results invite reflection as to why online teams did not lead to evidence of superior factual recall or analytical ability. More than two-thirds of students across all sections earned a B or better on the exams, indicating good recall. While fewer students earned an A grade on the analytical paper, only a small number earned grades below a C+. Perhaps the lectures, problem exercises, and readings that were part of all course sections were enough to generate this result.
Further research should explore the relative efficacy of online teams and weekly study guides. Given that my research did not control for variables like a student’s prior GPA, year in college, major, or extra-curricular activities, it also might be useful to include this kind of data in future investigations.
For additional information, please refer to John M. Rothgeb, “The Efficacy of Learning Teams: A Comparative Analysis,” Journal of Political Science Education 9, 3 (2013): 336-344, or contact him at the email address listed above.
I’m taking a break from specs grading this week–not because I don’t have anything to write about, but actually because I’m too busy writing specs and grading homework modules to write up everything that’s going on. Plus we are in the midst of a search, and I’m buried in applications. I’ll be back on topic next week with my thoughts about grading, and some micro adjustments I had to make to the course as a result of my reflections.
When I’m not talking about specs grading, I try to share some quick and easy ideas for teaching that can make a big difference. These often fall into the vein of James Lang’s Small Changes in Teaching, both his book and his series over at the Chronicle of Higher Ed (definitely worth checking out!).
Today’s idea is about using audio and oral assignments in the classroom.
I spent the majority of my first class of the semester – or I should say, the students spent the majority of their first class – creating community norms, or classroom guidelines. This exercise was valuable for a number of reasons.
As classes start up again for the fall semester, this might be a good time to revisit some great ALPS posts on how to approach the first days and weeks of the term. We recommend:
Start the way you finish–A reminder that active learning–regardless of when you may have scheduled a simulation or other activity–begins on day one.
Picture it! A game for the first year students that teaches students to create and read maps while learning how to get around campus.
Close reading a syllabus–why not quiz your students on the syllabus to incentivize them to read it thoroughly?
Government in our lives--an idea for starting that first class session of American politics.
Today’s lucky winner is… and The other side of presenting–two posts that look at how to approach student presentations, from requiring all students to show up ready to present on the day’s topic to teaching students how to be the audience for a presentation.
Happy Fall Semester!
Discussion sections (‘tutorials’ at my institution) are frequently my most and least favorite part of teaching. When they work well, they can be more than the sum of their parts. Informed by the week’s readings, students will drive discussions in unexpected directions, critiquing the work and applying it to new examples. Small group activities will be dynamic and student-led, leaving participants with a deeper appreciation of the material and their own preconceptions.
When they fail, they fall to the lowest common denominator. Unprepared students waste valuable time derailing conversations, trying to hide the fact that they haven’t done the readings. Well prepared students become resentful when, class after class, their peers scramble to extract Cliff’s Notes versions of the material from them. The best outcome to hope for in such a class is that students come away with a basic understanding of the essential arguments in the readings.
Other than avoiding the dreaded after-lunch time slot for sections, I’ve not been able to do much to ensure that a section runs smoothly. Some classes just worked and some didn’t. Some students simply don’t seem to be motivated by the threat of a poor participation grade or the promise of a vigorous discussion.
Last Fall, I tried a new strategy: grading preparation instead of participation. I allowed students in my States and Markets class to self-report their level of preparedness and claim a commensurate grade for each class session. I would then spend the first 10-15 minutes of the session calling on three random students (assisted by an on-screen random number generator) to verify that they were prepared. After the verification portion of the class was finished, I led the section as usual — sometimes with discussion and sometimes with activities.
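The cold-call draw described above amounts to sampling without replacement. A minimal sketch, assuming a simple list of names (the roster and function name are hypothetical):

```python
import random

def pick_students_to_verify(roster, k=3, seed=None):
    """Draw k distinct students to verify their self-reported
    preparation, mirroring an on-screen random number generator."""
    rng = random.Random(seed)
    # random.sample guarantees no student is called twice in one session.
    return rng.sample(list(roster), k)

# Example with a hypothetical roster:
called = pick_students_to_verify(["Ana", "Ben", "Chloe", "Dev", "Emma"], seed=7)
```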
Taking Simon’s recent post about encouraging student feedback in a different direction:
Yes, students often perceive and understand differently than I do, and I agree that removing barriers to their acquisition of knowledge is an important part of my job. But in many cases students are as different from one another as they are from me, and some of them are simply not interested in learning.
For example, I’m still using the Quality of Failure essay in all my courses as an end-of-semester exercise in meta-cognitive reflection. Compare these quotes from essays written by two students in a course that just ended:
“When I realized that we only really went over the homework in class, I mentally decided that I didn’t really want to participate because I had already written my response and it had already been graded.”
“While I feel that I have achieved my goal of learning about new populations, I also feel that this was achieved for other reasons than what I previously mentioned. For instance, the one thing that I never really took into consideration was the fact that discussions with my peers would end up being the most influential factor in learning what I did this semester.”
The first student decided early on that she would learn nothing from hearing about the perspectives of her peers during classroom discussions, while the second student was surprised to find that this aspect of the course was by far the most valuable.
The pedagogical “experts” might say that I should meet all students where they are and adjust to all the ways in which students define their interests. But I refuse to accommodate those who are too close-minded to try something that challenges their own view of themselves.