Discussion-Based Quizzes 2

First, I hope to see folks at some of the pedagogy-oriented sessions at this week’s ISA meeting in Montreal. Feel free to chat with me or the illustrious Dr. Simon Usherwood.

Second, a follow-up to my post last month about no-stakes quizzes and class discussion.

I gave students another quiz, on survivorship bias, a topic of the reading assigned earlier in the week. Here is the prompt for the questions (note that the scenario is a fictionalized version of a famous World War II example):

The United States has gone to war against Zambia. A high percentage of U.S. warplanes are being shot down by the Zambian military. You work for the U.S. Department of Defense as an operations research analyst. You have been given the task of recommending where additional armor plating should be installed on U.S. warplanes to better protect them from enemy fire. The image below shows a representative pattern of damage from Zambian anti-aircraft fire to U.S. warplanes that have made it back to home base. 

Responses to question 1:

Responses to question 2:

Despite the tiny sample, I think the quiz scores indicate how easy it is to memorize a concept’s definition while being unable to meaningfully apply it. Students frequently equate memorization with knowing, and hence with learning, when mostly it is neither.
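For anyone who wants a concrete version of the inference the quiz was testing, here is a minimal simulation sketch in Python. The section names and return probabilities are invented purely for illustration; they are not drawn from the quiz or from any real data.

```python
# Sketch of survivorship bias, loosely following the WWII armor example.
# All probabilities below are invented for illustration only.
import random

random.seed(42)

SECTIONS = ["engine", "cockpit", "fuel tank", "fuselage", "wings", "tail"]
# Planes hit in critical sections are far less likely to make it back.
RETURN_PROB = {"engine": 0.2, "cockpit": 0.2, "fuel tank": 0.3,
               "fuselage": 0.9, "wings": 0.85, "tail": 0.8}

hits_all = {s: 0 for s in SECTIONS}        # damage across every plane that was hit
hits_returned = {s: 0 for s in SECTIONS}   # damage visible on planes that returned

for _ in range(10_000):
    section = random.choice(SECTIONS)      # assume hits land uniformly at random
    hits_all[section] += 1
    if random.random() < RETURN_PROB[section]:
        hits_returned[section] += 1

print(f"{'section':<10}{'all hits':>10}{'returned':>10}")
for s in SECTIONS:
    print(f"{s:<10}{hits_all[s]:>10}{hits_returned[s]:>10}")

# The returned-plane tally is dominated by fuselage, wing, and tail damage even
# though every section was hit equally often, so the extra armor belongs on the
# sections that look clean on the survivors.
```

A student who has only memorized the definition will still tend to recommend armoring wherever the surviving planes show the most damage, which is exactly the inference this toy example shows to be backwards.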

Discussion-Based Quizzes

Alternative title: Stuff Just Happens, a persistent student mental paradigm that I’ve written about before (here and here).

I’m teaching an undergraduate introduction to research methods course this semester, for which I created a series of practice quizzes that contribute nothing to the final course grade. I expected the quizzes, which range from one to three questions, to function as retrieval practice and launch pads for class discussion. This is the same idea behind polling students to check their comprehension while lecturing, but I used the Canvas LMS quiz tool instead of a polling platform because 1) I was too lazy to learn the latter, and 2) I wanted all course content housed in one place.

The activity is not working as well as I thought it would, for a few reasons. First, Canvas identifies the correct answer to each question when displaying quiz results, as shown below, which shuts down discussion about which option is correct and why. A pie chart that doesn’t label the correct answer, like the one Google Forms produces, would work better.

Second, this is a small class that meets at 8:00 a.m. The quantity and quality of discussion decline markedly when even a few students are absent, which is usually the case.

But most importantly, given their answers to the quiz questions, students are still having a lot of difficulty forming plausible and potentially valid causal inferences from the data that I present. I’ve given six practice quizzes so far, and on average students answer questions correctly only about 50% of the time. Here is a typical example from a prior quiz:

Based on the visual representation of data below, a logical hypothesis is:

Serendipity in Research Methods

Sometimes it is easier to demonstrate real-world relevance than at other times.

Last week students in my research methods course read Charles Wheelan, Naked Statistics, Ch. 12, and Ashley A. Smith, “Students Taking More Credit Courses and Introductory Math Faring Well,” Inside Higher Ed, 7 December 2018.

They then had to answer this question: What mistakes are Nevada officials making with data about community college students?

As written, the Inside Higher Ed story describes people who should know better falling victim to omitted variable bias and confusing correlation with causation. Although I might be making similar mistakes in evaluating the in-class discussion of the assignment, I think students found it more interesting than most assignments because it was about other students.
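For what it’s worth, the pitfall in play can be demonstrated in a few lines of code. The sketch below uses fabricated numbers, with student motivation as a hypothetical omitted variable driving both credit load and pass rates; it is a classroom illustration, not a model of the Nevada data.

```python
# Sketch: an omitted variable (motivation) drives both how many credits a
# student takes and whether they pass introductory math, producing a
# correlation between credit load and passing that is not causal.
# All numbers are fabricated for illustration.
import random

random.seed(0)

def simulate_student():
    motivation = random.random()                          # hidden (omitted) variable
    full_time = motivation + random.gauss(0, 0.2) > 0.5   # motivated students take more credits
    passes = random.random() < 0.3 + 0.5 * motivation     # motivation also drives passing
    return full_time, passes

students = [simulate_student() for _ in range(10_000)]
full_time = [s for s in students if s[0]]
part_time = [s for s in students if not s[0]]

def pass_rate(group):
    return sum(passed for _, passed in group) / len(group)

print(f"Pass rate, full-time: {pass_rate(full_time):.2f}")
print(f"Pass rate, part-time: {pass_rate(part_time):.2f}")

# Credit load has no causal effect here, yet full-time students pass at a much
# higher rate; reading that gap as "taking more credits causes success" is
# omitted variable bias and correlation treated as causation.
```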

Soon afterward, two similar items came across my radar:

Students prefer mixing and matching online with on-campus courses.

Common premises about college students are wrong.

I shared these with my students, as additional examples of analyzing (or not) data about their peers.

Critical thinking and the Ukraine invasion

I’m not an IR person, and I know it.

Unfortunately, a lot of the people I follow on social media do think they are now specialists in warfare, diplomacy or the operations of civil nuclear facilities. These people were also once ‘experts’ in epidemiology, Brexit, macroeconomics, US presidential politics, populism, immigration and many other things besides.

I have my doubts.

This is probably also a problem you face as you try to make sense of the world around us: yes, you know some people who do actually really know stuff, but they get buried in a big pile of hot takes, motivated reasoning and even propaganda.

So what to do?

I’m guessing that Ukraine is an easier case for the readership of the ALPS blog to handle, since it’s closer to many of our research interests: even if we don’t work on relevant topics ourselves, we know the people who do and can tap into their expertise.

Of course, as the whole Mearsheimer thing has shown in the past week, even very competent people come up with dubious positions, although you at least get lots of material for your next IR theory class.

(For my part, I’ve limited myself to working up the one element I do feel competent to speak on).

However, for your students this might still be at the edge of their knowledge, abilities and confidence, so how can we help them parse the situation?

For me, task number one has to be a strong refresher on how to evaluate information (and it’ll be a refresher, because of course you teach this as a matter of course, right?).

That means making sure they understand the importance of verification, of triangulation, of expertise and of all the other things that we have probably internalised over the years. If I were running a class that needed to engage with this, I’d be asking students to locate good guides on how to do this, then pulling them together into a master document that they can all use for their subsequent research.

In a case as fluid as an active conflict, information is incomplete and often contradictory, so giving students the tools to determine what they know and what it means is essential. The growing OSINT community is a really good starting point for looking at the operational end of things, while the more strategic reasoning requires engagement with those working in a number of different domains, including Russian politics, military doctrine and sanctions.

As we’ve seen in recent years with whatever crisis you care to imagine, there is a huge potential to access properly informed and well-evidenced specialists on any given topic. But that means cutting through the guff and being able to contextualise what we read.

And that’s a great life-skill to be developing in our students, regardless.

What You Think Depends On Where You Stand

Our superb librarians survey students and faculty annually. Results from this year’s survey are in. Student responses to one of the questions:

Faculty responses:

Notice that the frequencies of responses from these two groups are essentially mirror images of each other. Students are extrinsically motivated by grades, so they think in instrumental terms: I need correctly formatted citations and the specified minimum number of sources. Otherwise my grade will be negatively affected. Knowing whether a source is reputable is far less important. Faculty think the reverse: the ability to locate scholarly source material and analyze information for bias matters most.

I have tried to solve this problem in the past and could not find a satisfactory solution. Consequently, I have focused more on curating quality content for students to consume than on marking students down for relying on websites that appear at the top of Google searches. In fact, it’s one of the reasons I decided to stop assigning traditional research papers.

Given the survey results though, the problem extends far beyond my small corner of the curriculum. I’m not going to solve it independently.

Readers might find these other posts on information literacy skills to be of interest:

The Methods Silo Effect and Fixing Poor Research Skills

Googling

Write Your Own Headlines Activity

Write Your Own Headlines Activity

This post comes from Chelsea Kaufman, assistant professor of political science at Wingate University. She can be contacted at c[dot]kaufman[at]wingate[dot]edu.

In teaching undergraduate research methods, I often find that the students are intimidated by the subject matter and don’t see its relevance to their lives. I have increasingly emphasized to students that it prepares them to be savvy consumers of political information wherever they might encounter it. This approach introduces an additional challenge, however: students often lack the information literacy skills to evaluate the sources that they access. If I want students to have the skills to evaluate the political information they encounter, I obviously need to teach them these skills. How exactly can this be accomplished? 

It is not enough to tell students which sources are acceptable, because people tend to trust information that aligns with their political predispositions. Simply lecturing to students about the dangers of misinformation can reinforce false beliefs and increase their distrust of reliable sources. 

To avoid this conundrum, I have students write their own headlines based on public opinion poll data. I first find a poll with results covered in several media outlets. I then send students a link to (or print out) the results of the poll, without providing them any context as to how it was covered in the media. After writing the headlines, students share them and compare theirs with those of their classmates and with published headlines about the data. Students learn to interpret data and evaluate whether it was given accurate coverage in the media. As the final part of the lesson, I then ask them to evaluate the polling methods used to obtain the data, by, for example, considering how a question’s wording might have impacted the responses. 
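If it helps anchor the methods part of the discussion, a margin-of-error calculation is one concrete thing students can do with a poll’s reported sample size. The sketch below assumes a simple random sample at 95% confidence, which real polls only approximate, and the poll figures in it are hypothetical.

```python
# Rough 95% margin of error for a reported poll percentage, assuming a
# simple random sample (real polls use weighting, so treat this as a floor).
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """p is the reported proportion (e.g. 0.52), n the number of respondents."""
    return z * sqrt(p * (1 - p) / n)

# Hypothetical poll: 52% support among 1,000 respondents.
p, n = 0.52, 1000
print(f"{p:.0%} +/- {margin_of_error(p, n):.1%}")   # about 52% +/- 3.1%
```

Comparing that interval against the certainty implied by the published headlines usually sparks exactly the conversation about question wording and sampling that the activity is after.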

You can view detailed instructions for the activity on APSA Educate. You can also read more about this topic and find examples of additional activities in my article “Civic Education in a Fake News Era: Lessons for the Methods Classroom” or my chapter in The Palgrave Handbook of Political Research Pedagogy.

Possible Improvement To Team Research Projects

A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:

In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.

I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:

No more than two pages per part, which would limit the final report to eight pages.

Each part must include at least one data visualization — a chart or graph.

No photographic images.

How to measure whether your teaching’s working


As we hurtle towards the summer ‘break’ and everyone remembers the deadline they cut you some slack on, it’s also a time when we’re often thinking about next semester.

For those of you with interests in making L&T a bigger part of your work, one obvious route is researching and publishing on what you do in the classroom.

Often that might be about trying out something different with students, which you think generates benefits for their learning, and might be of use to others in the same situation: we’ve published lots of such pieces from our guest authors here at ALPS.

While the thing you’re doing is the obvious centre of attention, the second element – whether it works – sometimes gets a bit lost (speaking as someone who reviews a good number of journal submissions in this field), so I thought it would be useful to think a bit more about this.

Measuring learning turns out to be a less-than-simple task: if it were simple, we’d all already know how to do it. The problem turns in part on the multiplicity of things we might consider, and in part on the difficulty of making any accurate or meaningful measure of them.

Learning is not simply about knowledge, but also skills, social capital and much more. Each of those itself has many sub-elements, not all of which might be immediately obvious to anyone, nor equally important to everyone. Likewise, learning happens at lots of different speeds, so do you focus on the immediate gains, or something more long-term?

The (faint) silver lining to this particular cloud is that everyone’s in the same boat. I’m yet to see a comprehensive evaluation tool that I could recommend to you, even though there are a number of really good ideas out there (for example, this or this, which makes the good point that students’ perception of what they learn isn’t the same as teachers’ measure of what they learn).

The important thing here is to be mindful of this from the start of any pedagogic research, embedding your measurement protocol into the design rather than hoping it’ll come to you later: a short post-course questionnaire about whether your students liked the thing they did isn’t likely to suffice.

That means thinking about what elements you focus on measuring (and why), then on how you’ll measure them. In particular, think about whether and how you can have a control for your teaching intervention: if it’s not practical to have another group of students not doing it, then will pre/post testing cover things robustly enough? Just like your other research, try to control your variables as much as you can, so you can be more confident about isolating effects.
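If it’s useful, here is one minimal way to set that comparison up in code. The scores are placeholders you would replace with your own pre/post measures, and the simple difference-in-gains comparison is only one of several defensible analyses.

```python
# Sketch: comparing pre/post learning gains between an intervention group and
# a comparison group. The score lists are placeholders for your own data.
from statistics import mean, stdev

intervention_pre  = [55, 60, 48, 70, 62, 58, 65, 50]
intervention_post = [70, 72, 60, 85, 75, 66, 80, 64]
comparison_pre    = [57, 61, 50, 68, 60, 59, 63, 52]
comparison_post   = [62, 66, 55, 72, 64, 61, 68, 55]

def gains(pre, post):
    return [after - before for before, after in zip(pre, post)]

g_int = gains(intervention_pre, intervention_post)
g_cmp = gains(comparison_pre, comparison_post)

print(f"Mean gain, intervention: {mean(g_int):.1f} (sd {stdev(g_int):.1f})")
print(f"Mean gain, comparison:   {mean(g_cmp):.1f} (sd {stdev(g_cmp):.1f})")
print(f"Difference in mean gains: {mean(g_int) - mean(g_cmp):.1f}")

# A fuller analysis would add a significance test on the two gain lists
# (e.g. an independent-samples t-test) and a check that the groups were
# comparable at pre-test.
```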

And it also means asking for help if you’re unsure. Your institution probably has a great bunch of people centrally who work on just these kinds of projects and who can give you excellent advice and support. Likewise, you can ask us here or online about specific ideas: it’s worth looking back at our posts for suggestions and colleagues who’ve worked on similar things.

Do all that and your pedagogic research will be off to a flying start (which might be the only flying you get to do).

A Lesson Learned About Team Research Projects

Looking at student performance in the 2020-2021 academic year, I see evidence that team research projects due at the end of the semester can’t be scaffolded solely around individually-graded assignments completed throughout the semester. For example, in my Middle East politics course, each student shared four individually-completed assignments with their teammates for use in their team’s historical timeline. In my research methods course, there were ten individual assignments that teammates were supposed to share with each other as drafts of sections of team research reports. While this approach does decrease free riding and encourage collaboration, it apparently does not ensure high quality research in the final product. Four of the five timelines that teams created in the Middle East course lacked mention of significant events. None of the four teams in the research methods course collected information from coffee farmers, processors, or distributors in Central America, despite my instructions to do so, nor did the final reports resemble the industry exemplars I had provided.

It seems that in students’ minds, my formative assessment of their individual work is totally unconnected to the summative assessment of their collaborative work. I probably need to break the team project into discrete, graded chunks, with each chunk layered on top of some of the individual assignments. Teams can use the feedback they receive on each successive chunk of the project to improve the quality of the final product.