Discussion-Based Quizzes 2

First, I hope to see folks at some of the pedagogy-oriented sessions at this week’s ISA meeting in Montreal. Feel free to chat with me or the illustrious Dr. Simon Usherwood.

Second, a follow-up to my post last month about no-stakes quizzes and class discussion.

I gave students another quiz, on survivorship bias, a topic of the reading assigned earlier in the week. Here is the prompt for the questions (note that the scenario is a fictionalized version of a famous World War II example):

The United States has gone to war against Zambia. A high percentage of U.S. warplanes are being shot down by the Zambian military. You work for the U.S. Department of Defense as an operations research analyst. You have been given the task of recommending where additional armor plating should be installed on U.S. warplanes to better protect them from enemy fire. The image below shows a representative pattern of damage from Zambian anti-aircraft fire to U.S. warplanes that have made it back to home base. 

Responses to question 1:

Responses to question 2:

Despite the tiny sample, I think the quiz scores indicate how easy it is to memorize a concept’s definition while being unable to meaningfully apply it. Students frequently equate memorization with knowing, and hence with learning, when it is mostly neither.

Performing as academics

What’s your evidence base for this?

I had the too-rare pleasure to chat with an anthropologist last night.

We are both part of a big European-funded project that is having its final conference in Brussels this week, with a focus on connecting to policy-makers and other practitioners in the field of refugee protection.

It’s a big deal, for me as much as anyone: I’ve been leading on the conference planning for the past 18 months as we bring together speakers from at least three continents and juggle a very large number of moving parts.

The conclusion of the first day has shown that planning does pay off, from bringing some marker pens that got used within ten minutes of arriving to thinking about the structure of the sessions.

Let’s focus on the second of these elements here, important though ready access to marker pens might be.

From the start, I was clear with myself that I didn’t want this to be an academic conference.

By that I mean the format you know from pretty much every single big conference you go to: 90-minute sessions with four or five papers, hopefully not just read out in a monotone but quite possibly so, with five minutes at the end for someone to make more of a comment than a question.

The world has plenty of those kinds of things already and this project does not need to add to the pile.

Our work has a very clear and evident policy application: studying refugee protection in Europe and beyond was already a big thing three years ago when we started, even before the effects of covid or the war in Ukraine shook things up even more.

So I wanted to break out of the mold as much as possible.

My initial suggestion that we should just have an afternoon where we sat in an informal space and attendees came up to chat with us was not popular, so we rowed back to the format we have now: some keynotes, some roundtables with significant non-academic presence, and lots of time for chat/networking.

In broad terms, this seems to be working in breaking down barriers between academics and practitioners: our lead investigator did a great job of opening the event with a clear single message about what our work shows to practitioners, for example.

But it’s also hard.

At points, we drift back into the comfort of roles – and this is where the anthropologist and I found common cause – rather than challenging ourselves to push back and to make things work for us.

This isn’t even so much about trying not to talk in jargon or to argue on precise points of definition (although also that), as it is about falling into classroom modes, with lecturers and students, each doing what they are supposed to.

If we want to build more effective and significant links from academia into other communities then we have to be mindful of this; we have to try really hard to change the incentives for our colleagues to adapt appropriately.

That means more work on formats, more challenging of assumptions, more reflection about the specific needs of others.

I’m taking this conference as a win even before it ends, but I know I could do better and next time I will try harder.

Getting your bearings

Not in full-time education, but still interested in learning

My household is full of people involved in education, either as students or as teachers. However, it’s only this academic year that my (much) better half has moved into a regular teaching role, after years of research-only activity.

It’s been great to see how she has moved from anxiety about what to do to reflection on how best to make her classes work and how it all marries up to her research activity, not least because it’s also given me an opportunity to reflect on what I do too.

Indeed, it was this process itself that was perhaps the most stimulating, since it highlighted one difficulty for a new teacher, namely what – practically – to do.

Training programmes typically run on the go, using your classroom experience to stimulate reflection on pedagogy, learning, and theory-level stuff. But how do you get to the point of being confident about stepping into that classroom in the first place?

What seems to be missing is a simple ‘how to’ guide that offers practical advice on really mundane stuff, like dealing with questions, or designing your first lecture, or what to keep in mind for your first session.

All the stuff you have internalised from your teaching, basically.

Looking around, I don’t see a resource like this, which surprises me (and also makes me wonder if I’m just looking in the wrong places): we all have to start somewhere.

In my case, a long time ago, it came from a lot of watching people teach/try to teach me and reflecting on what did and didn’t work. But that’s not a normal path, I’m aware.

Maybe you came to this some other way, in which case I’d love to hear about it, or indeed anything like the resource I describe.

This comes back to some of the themes we touched on in Baltimore, namely lowering the costs of entry to better pedagogy: it’s as true for ‘basic’ stuff as it is for fancier approaches like active learning. One reason why teaching doesn’t have the same reputation as research is precisely that we don’t institute the same level of training in the former.

But that might be another post for another day.

Discussion-Based Quizzes

Alternative title: Stuff Just Happens, a persistent student mental paradigm that I’ve written about before (here and here).

I’m teaching an undergraduate introduction to research methods course this semester, for which I created a series of practice quizzes that contribute nothing to the final course grade. I expected the quizzes, which range from one to three questions, to function as retrieval practice and launch pads for class discussion. This is the same idea behind polling students to check their comprehension while lecturing, but I used the Canvas LMS quiz tool instead of a polling platform because 1) I was too lazy to learn the latter, and 2) I wanted all course content housed in one place.

The activity is not working as well as I thought it would, for a few reasons. First, Canvas identifies the correct answer to each question when displaying quiz results, as shown below, which shuts down discussion about which option is correct and why. A pie chart that doesn’t label the correct answer, like the one Google Forms produces, would work better.

Second, this is a small class that meets at 8:00 a.m. The quantity and quality of discussion decline markedly when even a few students are absent, which is usually the case.

But most importantly, given their answers to the quiz questions, students are still having a lot of difficulty forming plausible and potentially valid causal inferences from the data that I present. I’ve given six practice quizzes so far, and on average students answer questions correctly only about 50% of the time. Here is a typical example from a prior quiz:

Based on the visual representation of data below, a logical hypothesis is:

Really? Actually really?


With some more time to digest the lessons of Baltimore’s TLC, I’m coming back once again to a big theme (for me) about how we use simulations and games.

At several points during the conference, I found myself pondering how we reconnect sims and games back into the rest of our teaching practice, or even whether we do that at all.

The argument I’d make (and did make) is that sims are essentialisations of the world: they take some aspect – an event, a relationship, a dynamic – and place it front and centre, stripped of all the other stuff so we can see it more clearly, for our edification.

That essentialisation gives us the core of the gameplay of the sim. The mechanism we want to focus on logically also informs the choice of how we focus on it. So if we want to understand voting dynamics, we build a sim with voting in it, plus whatever elements we want students to consider as factors.

None of this should be controversial – it’s how all teaching works, in practice. We draw attention to something, to allow us to think about, discuss and learn about it.

Sims are just a more overt example of this.

But that essentialisation also implies a need to have an active reconnection, post-activity.

It’s why we talk so much about debriefing: we are trying to draw lessons out of the activity, back into the other modes of teaching we use, both within a course and more generally.

By definition, active learning places the locus of learning within the student, so to know what they have learnt, we (as teachers/instructors/facilitators) have to work to pull that out of them.

But what does that involve?

Typically, when we talk about debrief we are looking at ‘what did you learn?’ type questions. Asking students to vocalise their understandings and experiences in a more-or-less structured manner.

Obviously here, we can cover all the usual bases of substantive knowledge, practical skills and broader socialisation into communities.

However, the whole essentialisation process means we also need to ask a different type of question too: ‘how did this experience match up to the actual thing you’re studying?’

One of the first papers I wrote on sims was about realism and the ways we can make our sims verisimilitudinous, but sims are always approximations of reality and we need students to be both aware of and thoughtful about this.

In essentialising, we strip away ‘the other stuff’ because it’s not that important to our objective. But ‘not that important’ doesn’t mean ‘unimportant’.

So part of our debriefing needs to ask students whether and how the simplifications of the sim distort our understanding of what matters. To go back to that voting dynamics exercise I mentioned earlier, if we exclude a particular element, do students now think that element isn’t relevant?

One way I’ve tried to explore this is by asking students to come up with ways they could make the sim work in a more realistic manner, and often I’ll use those suggestions the next time out. This process invites either constructive additions or thoughtful reflection about why something can’t be included (and the impact thereof).

None of this is to say that sims aren’t any good; only that they are imperfect tools (like all pedagogies) and so we have to ensure that students recognise and reflect on that.

The persistent gap in simulation design literature

About eight years ago, I wrote the following:

…there remains a severe lack of basic ‘how-to’ guides (whether grounded in higher pedagogic theory or not); instead, the typical process of spreading use of simulations is by word of mouth and the adaptation of existing models to new situations.

I was arguing that while there was a metric tonne of individual case studies of the ‘show and tell’ type and another pile of more theoretical musings on the nature and purpose of simulation games, there was a gap in-between.

The theory pieces were great for that paragraph at the top of your paper justifying ‘why simulations’, but didn’t tell you how to go about actually doing one, while the ‘show and tell’ stuff, even when it gave you enough to re-run that particular sim, was never quite what you needed.

No tools, no easy way forward.

All of this came flooding back during TLC in Baltimore this last weekend, with many of the colleagues on the track talking about starting out on their sim-building journey and looking for help.

Even I, with a pretty healthy dose of experience, found that when I was designing an online, asynchronous sim, there was basically nothing at all that worked through the options and mechanics. If you read my paper, you’ll see that much of it ended up being worked out from first principles.

This persistence of a lack of operational literature is frustrating.

It’s also quite understandable, in that there aren’t that many people who are bothered enough not just to build their own sim, but also to abstract from that out to general design and implementation principles.

And yes, before you ask, I did write various things, gathered here. But that’s not really enough.

Maybe you need something like Classroom Assessment Techniques, the classic handbook from Cross, or maybe you need something else, built online in a more dynamic and interactive way. But as long as we lack these kinds of resources, we’re doomed to lots of people spending lots of time they don’t need to, reinventing wheels that have been reinvented many, many times before, or, worse, just giving up because it’s too tough.

As usual with such thoughts, while I can identify the problem, I can’t offer an easy solution.

Maybe you can, in which case I’d love to hear from you. As would many others.

Advice for Job-Fillers

Although probably too late in the calendar year to put into practice, here is a flip-side follow-up to my prior post about campus interviews:

It’s time to stop the costly performative rituals that are contributing to the disintegration of the academy.

Move to a hiring cycle that runs two or even three times per year. Academia is the only industry I know of that limits hiring to an annual schedule. If you operate your searches on a staggered timeline, your applicant pool will probably improve and your top choices will be less likely to be snagged by competitors.

Stop requiring recommendation letters at the beginning of the search process. Demanding them up front wastes the time of letter writers and applicants. We all know that an overwhelming percentage of these letters are never read, because the applications they are part of quickly get tossed. Get references only for those on your short list, and then actually check them.

While the science supports eliminating job interviews entirely, this probably isn’t going to happen, so at least make them less onerous. The pandemic demonstrated that there is no need to bring finalists to campus. And there is no demonstrated benefit in subjecting them to one to two days of back-to-back meetings with people who have no direct effect on the specified duties of the position. Is it essential for every candidate to have a 30-minute conversation with the Associate Director of Strategic Student Development Initiatives? No one who interviews for an IT or facilities management staff position has to suffer through this, and those units function perfectly well.

Finally, per the article linked to above, structure the applicant evaluation process to minimize bias and noise. Use rubrics to score candidates on clearly defined criteria. Collect the results, average them, and distribute this information to the search committee before discussion of the applicants’ relative merits. This will help prevent any single person in the room from unreasonably affecting the outcome.
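To make the collect-and-average step concrete, here is a minimal sketch of aggregating rubric scores before the committee meets. The candidate names, criteria, and scores are all invented for illustration, and a real process would of course also keep the per-criterion breakdowns:

```python
def average_scores(ratings):
    """Average all rubric scores per candidate.

    ratings maps candidate -> reviewer -> criterion -> score.
    Returns candidate -> mean of every score given to that candidate.
    """
    averages = {}
    for candidate, reviewers in ratings.items():
        # Flatten every reviewer's criterion scores into one list.
        all_scores = [score
                      for criteria in reviewers.values()
                      for score in criteria.values()]
        averages[candidate] = sum(all_scores) / len(all_scores)
    return averages


# Hypothetical rubric scores on a 1-5 scale.
ratings = {
    "Candidate A": {
        "Reviewer 1": {"teaching": 4, "research": 3, "service": 5},
        "Reviewer 2": {"teaching": 5, "research": 4, "service": 4},
    },
    "Candidate B": {
        "Reviewer 1": {"teaching": 3, "research": 5, "service": 3},
        "Reviewer 2": {"teaching": 2, "research": 4, "service": 4},
    },
}

print(average_scores(ratings))
```

Distributing these averages before discussion, rather than during it, is the point: the numbers anchor the conversation before the loudest voice in the room does.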

#TLC2023 prep time

I open by freely admitting that this post is totally self-serving, but since it’s the first time in ages that I actually wrote the paper ahead of the conference, I’m going to extol the virtues of writing your paper ahead of the conference.

APSA’s TLC is nearly upon us, and thanks to the heritage of its track model, the practice of pre-written and pre-circulated papers is still going strong. You can check out the submissions so far here.

As an opportunity to get focused and specific feedback on your work, especially if you’re looking to publish, this is pretty hard to beat: getting a bunch of people working in the same field into a room for most of two days, sharing ideas, isn’t something you get to do that often at conferences.

And if, cough, one of those people is an editor on a leading L&T journal, then all the better.

Having already started to work my way through the papers on my track, I can see we’ll be having a lot of very productive debate in Baltimore, which we’ll be doing our best to share with you all.

How To Identify Problem Students

While going down the YouTube rabbit hole early last September, I stumbled across this video by a Canadian lawyer about the three types of clients to avoid. These clients display urgency, flattery, or (lack of) responsibility — often simultaneously. As stated in the video, these signals occur in any customer service industry. I’ve certainly seen them, and probably you have, too.

Urgency — a student claims to have an emergency that requires your immediate action. Questions for you to ask: “Is this a real or perceived emergency? Did the situation arise because of the student’s behavior?” In a serendipitous example, two weeks after watching the video, I received an email from a student with “URGENT CONCERN” in the subject line. It wasn’t urgent, nor was it my concern.

Flattery — a student says that you are the only professor who can resolve their problem. It is an attempt to distract you from the real cause of the situation. E.g., “This is my favorite course, but it’s the only one I’m doing badly in this semester, and if my GPA drops below X, I will lose my scholarship and have to drop out of college. Are there any extra credit assignments?”

Responsibility — nothing is the student’s fault. For example (actual email I received last month): “The wi-fi is completely shut down on campus and I can’t submit anything, I’ve been trying to for the past hour. I know our assignment is due and I’ve tried submitting it but I don’t know what to do. I can attach the writing here but can’t upload anything to Canvas.” My response? “The library has computers on every floor.”

How soon is too soon?

We’re in the process of building some new undergraduate modules for our degrees here and it’s been reminding me of a rather basic dilemma in programme design: when should we start students on the ‘hard stuff’?

The air quotes point to a first issue, namely that there are different kinds of ‘hard stuff’ out there.

Part of it is topics that require more advanced skills or more detailed knowledge, often applied, and that seem to rest on a strong foundation of core competences: think advanced quants modelling or very particular policy issues.

But it also includes openly structured, student-led work in general. The obvious case in point: we put the dissertation at the end, so the student can explore a subject in their own way, having first got a bunch of stuff from us.

You’ll be shocked* to learn that I disagree with many people about this model.

Firstly, we typically get students joining us who’ve been socialised by their schools into a particular learning model – much of it about learning to hit the ‘right’ elements of the curriculum – and we try to shift them towards more critical and self-reflective approaches, so why not work on that from Day One?

Secondly, as the examples above suggest, dropping students straight into more ‘advanced’ work might be a strong incentive for them to buy into different ways of working; plus, if you know your students are new to it, you can make some allowances for how far they can go.

Case in point: I’ve been looking at courses with my son and there’s one that involves an independent project every year of study. It’s a good way to highlight the applied value of the other content, practice for more ambitious work down the line and a training ground for thinking and reflection.

Finally, given how most degree classifications put weight on later stages of study, does it make utilitarian sense only to be introducing ‘hard stuff’ in those later stages, instead of early on, when they can iron out the wrinkles?

In practical terms, my experience has been that first years are more willing to do something new (because they don’t know ‘how things are’) and they have more ability than we typically recognise, even if their achievement isn’t the same as a final year student.

If we’re using active learning systems, then that’s all fine, because these aren’t so predicated on prior learning, but instead give space to students to take things as far as they can. Moreover, the applied nature of many such environments also makes it easier for new students to understand the wider value of what they’re doing.

Yes, it also means having a robust system of student support as they make the transition, but that should be something that we provide in any case. Indeed, the values of more advanced study – reflection, criticality, resilience – might ultimately help them to feel better able to support themselves.

None of this is particularly novel (as Amanda could tell you), but it’s still good to be reminded of it as we build new content.

Our confidence in building ambitious learning environments for our students is likely to translate into students who can become more confident about their learning.

* Not shocked