Presentation Fail

More on this semester’s comparative politics course:

After doing the in-class exercise on how to produce a concise thesis statement, I created this template for students to use when writing essays in which they apply theory to historical events. We shall see if students take advantage of it on the next essay assignment.

Student presentations have also been problematic. My instructions for this task have been:

Your team’s presentation needs to discuss which theoretical perspective (rational actor, structure, or culture) best matches the readings for your theme for this geographic region. Include specific examples from the readings to support your argument.

Teams create their presentations after their members have individually written essays that accomplish the same function, a deliberate sequence on my part. However, the presentations have been terribly organized — no clear thesis statement and few to no examples drawn from readings that actually support whatever argument students think they are communicating.

It’s possible that my directions are still too broad and that students need more step-by-step instructions; if so, the easy solution is to modify the template that I created for essays and require that teams use it when designing presentations. 

I don’t really want to do this. I prefer students to be creative in their approach to solving problems and to take responsibility for their learning. To continuously break tasks down into smaller pieces and decrease the need for effort or ingenuity risks turning students into box-checking monkeys. Yet without enough structure it is unlikely that an assignment will serve its intended purpose.

This tension reflects the difficulty of overcoming the problem of transfer. My assignments — which in this course include fourteen one-page responses to readings, five multi-page essays, and five presentations — represent multiple opportunities for each student to develop a single skill: the effective communication of an argument. Yet students don’t see this. They are blind to the possibility that a technique they have learned to use in one context can be successfully applied in another. After fifteen years as a professor, I am still trying to figure out how to move students from needing a list of steps to follow to recognizing that they already have the tools to figure things out for themselves.

When Grade Inflation Isn’t


Courtesy of Charles Gleek at Games Without Frontiers, I became aware of an interesting discussion of grade distributions at TPRS Q & A:

What grades should kids get? Notes on Evaluation for the Mathematically Challenged.

It’s a bit long but worth reading. The author, Chris Stolz, points out that perceptions about “proper” grade distributions are sometimes based on ignorance of basic statistical principles. Classes frequently are not statistically representative — they contain too few students, and students choose to enroll in them for non-random reasons. Assessment instruments often produce a ceiling effect that masks evidence of improvement in students who come into a course already possessing a high level of proficiency.

The end result can be a class composed of students who either are predisposed to do well in the course (possibly the main reason they enrolled in it to begin with) or learn enough over the semester to earn high marks on summative assessments. This reduces variation in the grade distribution and shifts the curve toward the high end — instead of a normal distribution from F to A, with most students getting a C, the majority of the class ends up with A’s and B’s. Someone who does not understand statistics assumes this happened because of grade inflation.
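To make the mechanism concrete, here is a minimal sketch, not from Stolz’s post and using entirely invented numbers, of how non-random enrollment plus a ceiling effect can pile grades up at the top of the scale without any change in grading standards:

```python
# Hypothetical illustration: self-selected enrollment plus a score ceiling.
# All numbers are invented for demonstration purposes.
import numpy as np

rng = np.random.default_rng(42)

# A large hypothetical pool of potential students, ability scored 0-100.
population = rng.normal(70, 15, size=10_000)

# Self-selection: those who enroll tend to be predisposed to do well,
# so keep only people above an (assumed) ability threshold; class size 30.
enrolled = population[population > 65][:30]

# Assessment with a ceiling: scores cannot exceed 100, so gains among
# already-proficient students are partly invisible.
scores = np.clip(enrolled + rng.normal(10, 5, size=enrolled.size), 0, 100)

print(f"class mean: {scores.mean():.1f}")
print(f"share of A's and B's (>= 80): {np.mean(scores >= 80):.0%}")
```

Run this with different thresholds and the pattern holds: the class average sits well above the population average and most grades cluster near the top, with no grade inflation involved.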

Several years ago I abandoned grading students against a normal distribution curve for these reasons. I also became much less concerned with testing students’ ability to reproduce factual information on “objective” exams because I knew that the vast majority of what they regurgitated would never move into their long-term memories. I did not (and still don’t) believe that their lives would be fundamentally altered for the worse if they failed to remember that the Turks captured Constantinople in 1453 or that realist IR theory derives in large part from the writings of Thucydides, Hobbes, and Machiavelli. 

I thought that students would benefit more from multiple opportunities to demonstrate how well they could apply concepts in novel ways and effectively communicate their findings. How does this look in reality? Below is my grading system for a course that I’m teaching now. 

Grading Scheme

In this course, final letter grades are based on a 1,000-point scale, in which students need only earn 950 points to obtain an A. Obviously, with a total of 1,080 points available, it’s quite possible for a student to earn a high grade if he or she simply keeps plugging away at all the various assignments. But this is exactly what I want — for many students, continuous effort will result in improvement across the semester. Constant practice also makes it more likely that students retain something after the course ends. And students feel better about themselves and their environment when they get frequent feedback on their performance.
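For readers who want to see the arithmetic, here is a minimal sketch of the points-to-grades mapping. Only the 950-point A cutoff and the 1,080 available points come from the scheme described above; the lower cutoffs are hypothetical placeholders:

```python
# Sketch of a cumulative-points grading scheme. Only the A cutoff (950) and
# the 1,080 available points come from the post; other cutoffs are assumed.
TOTAL_AVAILABLE = 1080

CUTOFFS = [          # (minimum points, letter grade)
    (950, "A"),      # from the post
    (850, "B"),      # assumed
    (750, "C"),      # assumed
    (650, "D"),      # assumed
]

def letter_grade(points_earned: int) -> str:
    """Map accumulated points to a letter grade on the 1,000-point scale."""
    for minimum, grade in CUTOFFS:
        if points_earned >= minimum:
            return grade
    return "F"

# A student can miss 130 of the 1,080 available points and still earn an A.
print(letter_grade(TOTAL_AVAILABLE - 130))   # -> A
```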

Since this system of assessment makes it more likely that students will be able to demonstrate proficiency by the end of the semester, my grade distribution shifts to the right. Is this grade inflation? I will argue that it isn’t, because the student’s final grade is not based on a hastily thrown together end-of-semester essay that the instructor simply marks as an A or B.

Global Empathy and Simulations

Something of a live update from the Simulations & Role Play track at the 2015 TLC, and since I’m trying to emulate the debonair Susherwood, imagine this being voiced with an Austin Powers accent.

A snapshot of my conference paper and presentation this year:

Last fall I taught two undergraduate courses, introduction to international relations and a section of a new first-year seminar. Both courses had student learning outcomes related to global empathy. Global empathy is like “regular” empathy–an awareness of the mental states of others and an emotional response to others’ emotions–but in situations where those others have ethnic, economic, or geographic backgrounds different from our own. In essence it’s an ability to sense how and why images like the ones embedded in this post might cause negative reactions in people from different cultural backgrounds.

I organized each course around a different kind of simulation: Statecraft and Chasing Chaos negotiation scenarios for the IR course, and Twine for the first-year seminar.

I hypothesized that a pretest/posttest survey instrument would enable me to demonstrate the following:

  • that students were meeting empathy-related learning outcomes
  • which class showed the greatest improvement in global empathy indicators

I thought this would give me a sense of which type of simulation was most effective at promoting global empathy. For the pretest/posttest instrument, I used the Alexandrian Inventory, a survey based on one used in a previous research collaboration with past TLC presenters.

As is typical for me, the results of the Alexandrian Inventory did not give me any evidence for determining which course might have had a greater positive effect on students’ global empathy. Scores on the survey’s questions generally declined from the pretest to the posttest. Only two changes were statistically significant (p < .05); both occurred in the first-year seminar, and both were in the negative direction.
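For anyone unfamiliar with how a pretest/posttest change of this kind is usually tested, here is a minimal sketch using a paired t-test on invented scores; this is not the actual Alexandrian Inventory data, and the paired test is assumed only for illustration:

```python
# Hypothetical pretest/posttest comparison for one survey item,
# using a paired t-test on invented Likert-style scores (same 12 students).
from scipy import stats

pretest  = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4, 3, 4]
posttest = [3, 4, 3, 3, 4, 4, 2, 3, 4, 3, 3, 3]

t_stat, p_value = stats.ttest_rel(pretest, posttest)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A p-value below .05 would mark the (here, negative) change as
# statistically significant, as with the two items reported above.
```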

My research design was flawed for several reasons. First, my sample sizes for the pretest/posttest were small because the two classes I tried to compare had relatively few students. Second, I had no control group that did not participate in a simulation. Students’ global empathy indicators might have declined even further on the posttest had they not participated in any simulation, but there is no way to determine this. Third, my pretest/posttest design can’t account for the numerous other influences on student attitudes during a semester–such as job stress, academic performance or lack thereof in other courses, failed romantic relationships, or what students ate for breakfast on the day of the posttest. Unfortunately, given the university I work at, it’s difficult to overcome these constraints.

My paper is available on APSAConnect, and I’ll be uploading it within the next week or two to SSRN and Digital Commons. Or you can contact me here if you’re interested and I’ll email you a copy.

Madness to the Method

Being a fan of scientific research, I often complete surveys. I recently received a survey by email that asked me to choose between fictional medical treatment options — A and B. In each of the survey’s scenarios, figures for the risks of infection, cancer, and surgery associated with each treatment option were provided. A screen capture of one of these scenarios from the survey is below.

The survey was sent to me by physicians at a premier public university in the USA who were acting as the principal investigators for the project, which was sponsored by a national non-profit patient advocacy organization. One assumes that the survey was seen by many sets of highly-educated eyeballs before it was distributed to the public, so I was stunned to see that it contained a very basic methodological flaw.


Data Musings

As another fall semester begins, I thought I’d toss out two recent items about data.

First, an article in the New York Times describes how students are circumventing universities’ fortress-like approach to data management to get the information they need in ways that are the most useful to them. Meanwhile many universities don’t even understand what they are trying to regulate. (More on the Yale experience here.)

Second, an article in the Chronicle of Higher Education points out that attempts to tailor instruction according to data on students’ learning styles, personality types, or average ability level fail because of fundamental attribution error and the ecological fallacy.

This Is Your Brain On Learning

I’ve written before about the need for educators to know something about the cognitive basis for learning. Otherwise our students learn a lot less than they could. I recently stumbled upon this excellent editorial on the subject written by Arthur Graesser, a psychology professor at the University of Memphis. The editorial presents principles of learning that come from two reports that are worth reading. One of these is Organizing Instruction and Study to Improve Student Learning, a list of recommended instructional practices along with the degree of evidence that exists in support of each practice. The other is 25 Principles to Guide Pedagogy and the Design of Learning Environments.

Here are a few of the learning principles they mention:

  • Space learning over time.
  • Help students learn how to effectively manage the time they spend studying.
  • Testing enhances learning, particularly when the tests are aligned with important content.
  • Stories and example cases are typically remembered better than facts and abstract principles — information should be embedded within a narrative.
  • Motivation to learn is higher when content and skills are anchored in real world problems that matter to the student.
  • Deep learning is stimulated by problems that create cognitive disequilibrium, such as obstacles to goals, contradictions, conflict, and anomalies.

Mars Attacks

Writing = Active Learning

John C. Bean

I’ve been reading Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom, by John C. Bean (2nd edition, 2011). I wish someone had given me this book while I was in graduate school. Here are a few of the many useful pieces of information that it contains:

  • Data indicates that student engagement in a course correlates with the amount of writing assigned in it. More writing, more engagement.
  • Content coverage vs. writing skills training is a false dichotomy. Students learn better if they think about content in ways other than rote memorization, and writing is a cognitively more demanding form of thinking. When writing, students use important disciplinary concepts while simultaneously practicing methods of inquiry and argumentation (p. 90).
  • More writing does not necessarily lead to more time grading for the instructor. If writing assignments are properly designed then grading time can actually decrease.
  • The end-of-semester research paper is wasted effort except for highly-skilled, upper-level undergraduates who are already familiar with disciplinary conventions (p. 91).
  • Effective writing assignments present students with a contextualized problem — a task — that immediately gives them a role to fulfill, an audience to communicate with, and a format to follow (p. 98).

My one criticism of the book is that it presents critical thinking the way that most academics do — as a single skill. In the book, critical thinking is (so far) not clearly defined or operationalized, perhaps because what academics often refer to as critical thinking is actually a whole set of cognitive processes, some of which are easier to develop than others.

Study Tips for Online Learning

A while back I wrote about providing explicit study skills training in a new first-year seminar. It occurred to me that the vast majority of study skills advice is written for students who attend face-to-face courses on a physical campus. Meanwhile enrollment in online courses continues to increase. Here is a brief list of tips for online learners based on my experience teaching online for the last six years.

  • Determine whether you can realistically commit the necessary time and effort to do well in a course that is likely to move much faster than its on-campus equivalent.
  • Make sure your hardware and software meet the technological prerequisites for the course. Find out well before the course begins what versions of which software you will need. Make sure installed plug-ins, extensions, and security settings are current. You can contact the university’s designated staff for assistance; the course instructor is unlikely to know what particular technological updates might be needed.
  • Have a technological emergency back-up plan. What will you do if your computing device stops working or you lose internet access at home? Not being able to access the course site or upload documents for even a few days can ruin your performance.
  • Securely store your work across multiple devices and/or the cloud. Products like Google Drive, Dropbox, and Evernote are all free. Use them. If you are not using a product that automatically updates your work, set a timer that reminds you to save what you’ve done every thirty minutes.
  • Ask the instructor for a copy of the syllabus before the course begins. Read it. Get a sense of how the course is organized.
  • If at all possible, create an ergonomically efficient working environment — screen at eye height, fingers on the keyboard with a 90 degree angle in the elbows, back straight. If your body is positioned comfortably, learning will be easier.
  • Plan to devote enough time to do well. You will not be able to ignore reading, writing, and online discussions all week and then “catch up” on Sundays. If you can’t commit the necessary time, don’t take the course.
  • If you work on the course at multiple points throughout the week, and vary your activities within each time period — by mixing reading, writing, online discussion, and research — you will learn more. Your brain will have more frequent and more efficient opportunities to strengthen memory.
  • Take notes, preferably handwritten, of what you read. I won’t go into detail here about the cognitive reasons for why writing notes by hand benefits learning, but highlighting simply doesn’t help you retain information.
  • Read strategically. Read the introduction first. Identify the main argument(s) that the author will be presenting. Read the conclusions. Then go back to find what evidence the author has presented in the body of the reading assignment to support the conclusions. Argument-conclusion-evidence is what you need to write notes about.
  • Minimize distractions. Shut down email and Facebook, turn off the TV and cell phone while working.
  • Writing requires the synthesis of information, which is more cognitively taxing than reading; therefore, it is especially important to carve out distraction-free blocks of time for this task. Reading is easier to accomplish in small chunks, so take advantage of time when you are otherwise not occupied — for example, while in a waiting room or riding a bus. Always have reading and note-taking material with you.
  • Enlist the support of family and friends. If you have children who do homework every weeknight after dinner, do yours with them. You’ll be setting the example that learning is a life-long process and they will respect you for it. Get your spouse to do more of the housework. Organize study dates with a friend at a cafe.
  • Make an effort to connect with classmates, whether through discussion or when working on collaborative assignments. The more you can create a sense of community in the course, the more you will get out of it.

Helping Students Learn How to Learn

I’ve been thinking a lot lately about what we know about how people learn. As a long-time reader and now contributor to this blog, it probably doesn’t come as a surprise that I believe active learning is one piece of the puzzle. But, as a good social scientist, I’m always looking for evidence. I’m happy to see books using cognitive psychology and neuroscience and related fields to give us more insight into how students learn.

A friend recently shared this article and I’m excited to pick up the book Make it Stick: The Science of Successful Learning and read it this summer.

The article includes a handy graphic that I plan to share with my students this fall, listing five facts to teach them how to learn. Of the five facts, the second (“Apply the new material to your own life”) and fourth (“Use your whole brain”) most closely relate to why I incorporate active learning into my teaching. I hope that, by sharing this research, I’ll purchase some “buy-in” from the students for these methods.

Adversity Indicators

If you haven’t read Who Gets to Graduate?, a New York Times article by Paul Tough, you should. The article points out that some relatively simple interventions being used at the University of Texas are associated with significant improvements in retention and graduation rates for college students who belong to minority or economically-disadvantaged groups. These are the very students who represent an increasing proportion of the total undergraduate population in the USA but are most at risk of not completing a college education.

The treatments that have been so effective are designed to strengthen students’ beliefs that they belong in college and that they have the ability to succeed there. So logical it’s obvious, you might think, but you should ask yourself these questions:

  • How much of your university’s curricular content and delivery, especially for the courses that students encounter in the first year of college, sends the opposite message?
  • Does your university collect data so that it knows not just which students fail, but why, and at what point the downward slide begins?
  • What do you do in and outside of the classroom that encourages a sense of belonging and ability among different groups of students? Do you even know what backgrounds students might be coming from?