Last week I attended the first APSA Centennial Center Teaching Workshop in Washington, DC, an event organized by APSA staff (thanks Julia!), Joyce Kaufman of Whittier College, and Victor Asal of the University at Albany, SUNY. The subject of the workshop? Teaching international relations.
A few thoughts about the event:
- The participants came from institutions with wildly different enrollments and missions, but teaching was central to their professional lives. They approached the praxis of teaching with intentionality and an interest in continuous improvement, despite changing student demographics, declining resources, and organizational inertia. Several of us felt that a stark difference exists between the notion of political science as a community of scholars and the realities of the workplace. For more on this topic, see Jennifer Hochschild's recent letter to the editors of PS, the "Mismatch between (Some of) APSA and (Some) Political Scientists."
- Many undergraduate students could benefit from basic training in epistemology. They are often ignorant of the difference between cause and effect, the explanatory and predictive functions of theory, and the role of the scientific method in evaluating truth claims. Students typically don't know which questions are the right ones to ask or how to interpret the answers they get.
- People use a variety of course frameworks to expose students to international relations theories and methods. Some employ a critical-issues focus, in which topics like climate change and human rights function as springboards for analysis. Others build their courses around case studies or simulations. This diversity of approach points to the disutility of a one-size-fits-all, canonically oriented textbook.
- International relations can help students better understand human behavior and become more adept at social interaction. Traditionally aged undergraduates want to perceive themselves as unbiased adults capable of thinking strategically, yet games can easily elicit quite a different response. Placing students in situations where the system is rigged against them can help them more fully grasp the individual effects of discrimination and structural inequality, as well as the importance of civil discourse in a democratic society.
The workshop gave me some insight into what other people consider to be best practices in the teaching of international relations. The conversations were productive and enjoyable. I hope APSA continues to organize this type of workshop.
Even when there is an institutional effort to promote pedagogical innovation, it often comes to little. I used to teach at a small liberal arts college that took teaching very seriously. Some there – me, for instance – had a research program, but most did not. And with an eight-course-a-year teaching load, that isn't much of a surprise.
Well, you might say, the perfect environment for sharing teaching innovations. Not. There were two problems. First and foremost, pedagogy differs considerably across disciplines, because each discipline develops epistemes that don't lend themselves to sharing with others. Both chemists and biologists do labs, but the differences are greater than the similarities. Political science and sociology both use statistics, but the ways they use them and the problems they address are so different that there isn't much overlap when it comes to technique. We often found ourselves listening to presentations that had little to tell us about how to improve our own teaching.
Second, the standards for what constituted a useful innovation differed as greatly as the epistemes did. I'm a political scientist, and I introduced the use of extended simulations – Reacting to the Past games – at my campus. I immediately used the evaluation instruments for our freshman seminar to test whether using Reacting games made a difference and, lo and behold, it did, and I demonstrated it with proper statistics. Other disciplines, especially in the humanities but also in the hard sciences, presented us with innovations that they had made no attempt to verify at all. This made their presentations much less useful, unless I could find more systematic studies to back them up. (I did in a couple of cases.) Further, the resistance to adopting even well-established techniques was considerable, since a) no one wanted to do the extra work and b) everyone thought the effects found in the studies were localized ("University A isn't like us, so the results don't apply.").
I think the work of validating and spreading teaching innovations is up to the disciplines themselves rather than the universities.