Something of a live update from the Simulations & Role Play track at the 2015 TLC, and since I’m trying to emulate the debonair Susherwood, imagine this being voiced with an Austin Powers accent.
A snapshot of my conference paper and presentation this year:
Last fall I taught two undergraduate courses, introduction to international relations and a section of a new first-year seminar. Both courses had student learning outcomes related to global empathy. Global empathy is like “regular” empathy–an awareness of the mental states of others and an emotional response to others’ emotions–but in situations where those others have ethnic, economic, or geographic backgrounds different from our own. In essence it’s the ability to sense how and why images like the ones embedded in this post might cause negative reactions in people with different cultural backgrounds.
I organized each course around a different kind of simulation: Statecraft and Chasing Chaos negotiation scenarios for the IR course, and Twine for the first-year seminar.
I hypothesized that a pretest/posttest survey instrument would enable me to demonstrate the following:
- that students were meeting empathy-related learning outcomes
- which class showed the greatest improvement in global empathy indicators
I thought this would give me a sense of which type of simulation was most effective at promoting global empathy. For the pretest/posttest survey instrument, I used the Alexandrian Inventory, a survey based on one used in a previous research collaboration with past TLC presenters.
As is typical for me, the results of the Alexandrian Inventory did not give me any evidence for determining which course might have had a greater positive effect on students’ global empathy. Scores on the survey’s questions generally declined from the pretest to the posttest. Only two changes were statistically significant at p < .05; both occurred in the first-year seminar, and both were in the negative direction.
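For readers unfamiliar with how significance gets assessed in a pretest/posttest design like this one, here is a minimal sketch of a paired t-test. The scores below are invented for illustration only–they are not my survey data–and I’m assuming the common setup of per-student mean scores on a Likert-scale instrument.

```python
# Hypothetical paired t-test on pretest vs. posttest scores.
# All numbers here are made up for illustration; they are not the
# Alexandrian Inventory results discussed in the post.
from statistics import mean, stdev

pretest = [4.0, 3.5, 4.2, 3.8, 4.1, 3.9, 4.4, 3.6]   # invented Likert means
posttest = [3.7, 3.4, 4.0, 3.5, 4.2, 3.6, 4.1, 3.3]

# Paired design: test whether the mean within-student change differs from zero.
diffs = [pre - post for pre, post in zip(pretest, posttest)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / n ** 0.5)

# Two-tailed critical value for df = n - 1 = 7 at alpha = .05.
T_CRIT = 2.365
significant = abs(t_stat) > T_CRIT
```

With a dedicated stats package you would get an exact p-value instead of comparing against a critical value, but the logic is the same: small samples like mine make it hard for real effects to clear this bar.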
My research design was flawed for several reasons. First, my sample sizes for the pretest/posttest were small because the two classes I tried to compare had relatively few students. Second, I had no control group that did not participate in a simulation. While students’ global empathy indicators might have plunged even further in the posttest if they had not participated in any simulation, there is no way to determine this. Third, my pretest/posttest design can’t account for the numerous other influences on student attitudes during a semester–such as job stress, their academic performance or lack thereof in other courses, failed romantic relationships, or what students ate for breakfast on the day of the posttest. Unfortunately, given the university I work at, these constraints are difficult to bypass.
My paper is available on APSAConnect, and I’ll be uploading it within the next week or two to SSRN and Digital Commons. Or you can contact me here if you’re interested and I’ll email you a copy.
2 Replies to “Global Empathy and Simulations”
It’s worth saying that Chad’s work here at TLC has been really helpful in guiding us to think about how we test the utility of simulations. It’s fair to say that while we have moved beyond the “why should we do simulations” debate, we still lack some nice clear data that point to unequivocal gains for students over other pedagogies. Chad’s work is a step towards producing that.
Which is ‘groovy’ [because I so very much talk like that]