I very rarely incorporate feature films into my courses — Dr. Strangelove has been one of the few exceptions — but anyone who is teaching political psychology should take a look at Inside Out from Pixar. The film, for which the psychologists Paul Ekman and Dacher Keltner served as consultants, is a visual representation of an 11-year-old girl’s mind. Emotions take center stage, especially in regard to memory formation and retrieval, but imagination, attention, and other processes also play into the film’s plot. More detail on the science incorporated into Inside Out is here.
The New York Times recently published an interactive illustration of confirmation bias: guess the rule obeyed by a sequence of three numbers. I won’t go into detail about the game other than to say that it’s wonderfully simple and I definitely fell into the mind trap. Some implications for politics and business are presented after players submit their answers, which can provide a launch point for class discussion.
The puzzle nicely complements the Zendo game in which players create hypotheses about an arrangement of blocks — by demonstrating the cognitive biases that affect much of our decision making.
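The mechanics of the puzzle are easy to reproduce for an in-class demonstration. Here is a minimal Python sketch of the classic version of the task (Wason’s 2-4-6 problem); the hidden rule and sample guesses below are my own illustration, not the Times’s implementation:

```python
# Minimal sketch of a rule-guessing confirmation-bias game (Wason's 2-4-6 task).
# The hidden rule and the sample guesses are illustrative, not the Times's.

def hidden_rule(seq):
    """The actual rule: any strictly increasing sequence of numbers."""
    return all(a < b for a, b in zip(seq, seq[1:]))

# A player shown (2, 4, 6) typically hypothesizes "numbers increasing by 2"
# and then tests only sequences that CONFIRM that narrower hypothesis:
confirming_tests = [(8, 10, 12), (100, 102, 104), (1, 3, 5)]
print(all(hidden_rule(t) for t in confirming_tests))  # True -- every test "passes"

# The trap: only a disconfirming test would reveal the broader rule.
print(hidden_rule((1, 2, 50)))  # True -- not an increase-by-2 sequence, yet it fits
```

Every confirming guess comes back “yes,” so the player grows more certain of the wrong rule — which is exactly the bias the interactive illustrates.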
In honor of the 800th anniversary of the Magna Carta, the New York Times has created this interactive online quiz. The quiz is a good example of how testing one’s memory can be both useful and fun.
A while back I wrote a series of posts on reworking my first-year seminar. My assumption was that this fall’s version would meet three days a week, as happened in the course’s initial iteration. I recently learned that instead it will meet on Tuesdays and Thursdays. Given that much of the course involves student-to-student interaction in the classroom, the new schedule necessitated further changes. To start, I dropped the book that I had originally fit into the last third of the semester, and with it plans for a class-wide Twine project. The course now looks like this:
- Team-based Twines on the book An Ordinary Man (Rwanda).
- Simulation exercises on the first four cases in Chasing Chaos (Rwanda, Sudan, Sri Lanka, Sierra Leone).
- Team-based Twines on the last case in Chasing Chaos (Haiti).
Since this is a course for incoming college students, I added The New Science of Learning and some other meta-cognitive content on skills for academic success. This means that students will have on average three writing assignments on readings per week even though the class only meets twice a week, which I think is a good thing. Students won’t be able to forget about the course between Thursdays and Tuesdays.
As I discussed in my informal assessment back in January, I had a problematic formulation for the briefing memo that prepared students for each Chasing Chaos simulation. I’ve rewritten the assignment instructions accordingly, and created a new sample memo for students to use as a guide. The effort that I’m putting into the design of this course reflects something about how college works that I’ll discuss in more detail in my next two posts.
Terry Doyle and Todd Zakrajsek, The New Science of Learning: How to Learn in Harmony with Your Brain, Stylus Publishing, 2013.
The New Science of Learning is a very concise and easy-to-read advice guide for undergraduates that is based on the findings of cognitive science research. I’ll be using it this fall in my first-year seminar. My hope is that it will help students, many of whom are not that well prepared for college, improve their academic performance. Here is one of the book’s authors speaking at Quinnipiac University on how to teach more effectively.
I’ve created these writing assignments that correspond to the chapters of the book:
- Of the different practices that help people learn more effectively, which is the one that you currently use the least frequently? What would you need to change in your life so that you used it more frequently?
- Think about the last three nights. How well did you sleep on each of these nights? What changes would enable you to sleep better? How can you implement these changes?
- Think about the last three days. At what times were you physically active and for how long? How did your levels of mental alertness change during the day? Do you notice any pattern between physical activity and alertness?
- Do you write notes by hand in your college courses? Do you annotate text that is assigned in these courses? Why? Given the benefits of note-taking and annotating reading assignments for learning, how well will you perform academically this semester?
- Describe an assignment in one of your courses this semester that reflects the pattern recognition principle of similarity/difference, proximity, figure-ground, or cause/effect. What is the assignment and how does it reflect the principle? What will be the effect on your understanding or memory of the material?
- Name an activity in which you use either distributed practice or elaboration. What is a specific change that you can make in your daily behavior to better incorporate either one into your college experience?
- What is your approach to failure? Do you embrace the possibility of it or try to avoid it at all costs? When you fail, what is your reaction? Based on your answers to these questions, do you have a growth mindset or a fixed mindset toward learning? Why?
- Do you engage in task shifting? When? What is a specific change that you can make in your daily behavior to reduce task shifting?
This is the fifth semester that I have used the quality of failure essay as an end-of-semester exercise in meta-cognitive reflection, after Amanda* first posted about the concept almost three years ago.
Even though the essay is only two to three pages, I make it worth the equivalent of half of a letter grade because it forces students to acknowledge that they are responsible for their own learning. Or so I thought.
Out of my forty-five students this semester, sixteen failed to submit the assignment on time or at all. This occurred despite an eight-day window in which students could upload their work to the course website and my continuing effort to prevent them from engaging in learned helplessness. Those who did not meet the deadline ranged from freshmen to seniors and most of them had performed at a mediocre (or worse) level throughout the semester. Some approached me after the final to ask if they could turn in this and other unsubmitted assignments to salvage their grades (no). Conversely and as usual, the best students submitted their essays the earliest.
To avoid the stream of “Uh, I forgot” emails next semester, I’m creating a “failure to learn from failure” assignment, worth half as much as the quality of failure essay. This second essay will be only for those students who do not meet the deadline of the first one, and I’ll set the window of availability for forty-eight hours. Students will have to specify why they failed to take advantage of the first essay in addition to following the same instructions on the essay’s content.
Although my perceptions might be getting clouded by age, I believe that my experience this semester is part of an increasingly stark bimodal distribution among students at my employer. Some come in already possessing the motivation and skills necessary to succeed in college. A nearly equal portion arrive assuming that they will never be held accountable for their willed ignorance. Unfortunately some in the latter category apparently never discover in four years of college that this assumption is false.
*recently tenured and promoted, evidence that good things can happen to good people.
More on this semester’s comparative politics course:
After doing the in-class exercise on how to produce a concise thesis statement, I created this template for students to use when writing essays in which they apply theory to historical events. We shall see if students take advantage of it on the next essay assignment.
Student presentations have also been problematic. My instructions for this task have been:
Your team’s presentation needs to discuss which theoretical perspective (rational actor, structure, or culture) best matches the readings for your theme for this geographic region. Include specific examples from the readings to support your argument.
Teams create their presentations after their members have individually written essays that accomplish the same function, a deliberate sequence on my part. However, the presentations have been terribly organized — no clear thesis statement and few to no examples drawn from readings that actually support whatever argument students think they are communicating.
It’s possible that my directions are still too broad and that students need more step-by-step instructions; if so, the easy solution is to modify the template that I created for essays and require that teams use it when designing presentations.
I don’t really want to do this. I prefer students to be creative in their approach to solving problems and to take responsibility for their learning. To continuously break tasks down into smaller pieces and decrease the need for effort or ingenuity risks turning students into box-checking monkeys. Yet without enough structure it is unlikely that an assignment will serve its intended purpose.
This tension reflects the difficulty of overcoming the problem of transfer. My assignments — which in this course include fourteen one-page responses to readings, five multi-page essays, and five presentations — represent multiple opportunities for each student to develop a single skill, the effective communication of an argument. Yet students don’t see this. They are blind to the possibility that a technique that they have learned to use in one context can be successfully applied in another. After fifteen years as a professor, I am still trying to figure out how to move students from needing a list of steps to follow to being able to recognize that they already have the tools needed to figure things out for themselves.
Courtesy of Charles Gleek at Games Without Frontiers, I became aware of an interesting discussion of grade distributions at TPRS Q & A:
It’s a bit long but worth reading. The author, Chris Stolz, points out that perceptions about “proper” grade distributions are sometimes based on ignorance of basic statistical principles. Classes frequently are not statistically representative — they contain too few students, and students choose to enroll in them for non-random reasons. Assessment instruments often produce a ceiling effect that masks evidence of improvement in students who come into a course already possessing a high level of proficiency.
The end result can be a class composed of students who are either predisposed to do well in the course (possibly the main reason they enrolled in it to begin with) or who learn enough over the course of a semester to earn high marks on summative assessments. This reduces variation in the grade distribution and shifts its bulk to the right — instead of a normal distribution from F to A, with most students getting a C, the majority of the class ends up with A’s and B’s. A person who does not understand statistics assumes this happened because of grade inflation.
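The combined effect of self-selection and an assessment ceiling can be made concrete with a toy simulation — all of the numbers below are invented for illustration:

```python
import random

random.seed(1)

# Toy illustration of self-selection plus a ceiling effect (all numbers invented).
# Students who self-select into a course start with high "true proficiency";
# the assessment caps observed scores at 100, so gains at the top are invisible.
true_ability = [random.gauss(88, 10) for _ in range(30)]  # self-selected class
observed = [min(100.0, a) for a in true_ability]          # assessment ceiling

a_or_b_share = sum(1 for s in observed if s >= 80) / len(observed)
print(f"Share of class at A/B level: {a_or_b_share:.0%}")
```

With a class like this, most students cluster at the top of the scale no matter how well the course is taught — which is the point Stolz makes about reading a “skewed” distribution as grade inflation.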
Several years ago I abandoned grading students against a normal distribution curve for these reasons. I also became much less concerned with testing students’ ability to reproduce factual information on “objective” exams because I knew that the vast majority of what they regurgitated would never move into their long-term memories. I did not (and still don’t) believe that their lives would be fundamentally altered for the worse if they failed to remember that the Turks captured Constantinople in 1453 or that realist IR theory derives in large part from the writings of Thucydides, Hobbes, and Machiavelli.
I thought that students would benefit more from multiple opportunities to demonstrate how well they could apply concepts in novel ways and effectively communicate their findings. How does this look in reality? Below is my grading system for a course that I’m teaching now.
In this course, final letter grades are based on a 1,000 point scale, in which students need only earn 950 points to obtain an A. Obviously with a total of 1,080 points available, it’s quite possible for a student to earn a high grade if he or she simply keeps plugging away at all the various assignments. But this is exactly what I want — for many students, continuous effort will result in improvement across the semester. Constant practice also makes it more likely that students retain something after the course ends. And students feel better about themselves and their environment with frequent feedback on their performance.
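The arithmetic behind the built-in buffer is simple enough to sketch. Only the A cutoff (950) and the totals (a 1,000-point scale with 1,080 points offered) come from my syllabus; the lower cutoffs below are hypothetical placeholders:

```python
# Sketch of a point-based grading scale. The 950-point A cutoff and the
# 1,080 points offered are from the course described above; the lower
# cutoffs are hypothetical placeholders for illustration.
CUTOFFS = [(950, "A"), (900, "A-"), (850, "B+"), (800, "B")]
POINTS_OFFERED = 1080

def letter_grade(points_earned):
    for cutoff, letter in CUTOFFS:
        if points_earned >= cutoff:
            return letter
    return "below B"

# A student can lose 130 of the 1,080 available points and still earn an A:
print(POINTS_OFFERED - 950)   # 130 points of slack
print(letter_grade(960))      # A
```

The slack is what rewards steady effort: no single assignment can sink a student who keeps plugging away.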
Since this system of assessment makes it more likely that students will be able to demonstrate proficiency by the end of the semester, my grade distribution shifts to the right. Is this grade inflation? I will argue that it isn’t, because the student’s final grade is not based on a hastily thrown together end-of-semester essay that the instructor simply marks as an A or B.
A snapshot of my conference paper and presentation this year:
Last fall I taught two undergraduate courses, introduction to international relations and a section of a new first-year seminar. Both courses had student learning outcomes related to global empathy. Global empathy is like “regular” empathy–an awareness of the mental states of others and an emotional response to others’ emotions–but in situations where those others have ethnic, economic, or geographic backgrounds that are different from our own. In essence it’s an ability to sense how and why images like the ones embedded in this post might cause negative reactions in people with different cultural backgrounds.
I hypothesized that a pretest/posttest survey instrument would enable me to demonstrate the following:
- that students were meeting empathy-related learning outcomes
- which class showed the greatest improvement in global empathy indicators
I thought this would give me a sense of which of the simulations I used was most effective at promoting global empathy. For the pretest/posttest survey instrument, I used the Alexandrian Inventory, a survey based on one used in a previous research collaboration with past TLC presenters.
As is typical for me, the results of the Alexandrian Inventory did not give me any evidence for determining which course might have had a greater positive effect on students’ global empathy. Scores on the survey’s questions generally declined from the pretest to the posttest. Only two changes were statistically significant at p < .05, both of these changes occurred in the first-year seminar, and both were in the negative direction.
My research design was flawed for several reasons. First, my sample sizes for the pretest/posttest were small because the two classes I tried to compare had relatively few students. Second, I had no control group where there was no simulation. While students’ global empathy indicators might have plunged even further in the posttest if they had not participated in any simulation, there is no way to determine this. Third, my pretest/posttest design can’t account for the numerous other influences on student attitudes during a semester–such as job stress, their academic performance or lack thereof in other courses, failed romantic relationships, or what students ate for breakfast on the day of the posttest. Unfortunately it’s difficult, given the university I work at, to bypass these constraints.
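For readers who want to replicate this kind of analysis, the core of a pretest/posttest comparison is a paired t-test on per-student score differences. A minimal sketch — the scores below are invented, since the actual Alexandrian Inventory data are not reproduced here:

```python
import math
import statistics

# Sketch of a pretest/posttest comparison using a paired t-test on
# per-student differences. The scores below are invented placeholders;
# the actual Alexandrian Inventory data are not reproduced in the post.
pretest  = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1, 3.7]
posttest = [3.9, 3.6, 4.4, 3.8, 3.5, 4.2, 4.0, 3.4]

diffs = [post - pre for pre, post in zip(pretest, posttest)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Compare |t| to the two-tailed critical value for df = 7 at alpha = .05 (2.365).
print(f"t = {t_stat:.2f}, significant: {abs(t_stat) > 2.365}")
```

A negative, significant t here mirrors the pattern I found: posttest scores lower than pretest scores. Of course, as the design flaws above make clear, significance says nothing about *why* the scores moved.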
My paper is available on APSAConnect, and I’ll be uploading it within the next week or two to SSRN and Digital Commons. Or you can contact me here if you’re interested and I’ll email you a copy.
Being a fan of scientific research, I often complete surveys. I recently received a survey by email that asked me to choose between fictional medical treatment options — A and B. In each of the survey’s scenarios, figures for the risk of infection, cancer, and surgery associated with each treatment option were provided. A screen capture of one of these scenarios in the survey is below.
The survey was sent to me by physicians at a premier public university in the USA who were acting as the principal investigators for the project, which was sponsored by a national non-profit patient advocacy organization. One assumes that the survey was seen by many sets of highly educated eyeballs before it was distributed to the public, so I was stunned to see that it had a very basic methodological flaw.