Designing to constraints

Seamless integration

It’s summertime, so in between the flood warnings (seriously), it’s time to be doing some Big Thinking about teaching.

As part of my new role at the Open University, I’m contributing to a new Masters in IR, including the development of a simulation exercise.

I’ll be writing a lot more about this simulation in the next couple of years, mainly because the constraints are very different from those I’ve worked to before, with a big pile of knock-on consequences.

As a completely new programme, we’ve got relatively more space to manoeuvre than would usually be the case, but the constraints still loom rather large. As such, I’m dwelling on the third step of my usual approach to such situations.

For those unfamiliar with the OU, it’s the UK’s largest university (nearly the enrolment of the University of California system) working almost entirely on a distance-learning model. We have a lot of adult learners and a very flexible approach to working up to qualifications: you take a module at a time.

The new Masters will be entirely remote, with a taught module that runs for 36 weeks, followed by a dissertation. For most of that 36 weeks, we provide a collection of teaching materials – written and audio/visual – through our website, with structured activities for students, building up to interim and final pieces of assessment.

My role, as part of the central teaching staff, is to create those materials, which have to stand up to several years of use by students before a refresh, with activities supervised and moderated by a large team of associates, who handle the bulk of the direct interactions with students.

The upshot here is that I’ve been trying to work up a negotiation simulation that fits a number of requirements that are usually not that conducive to such things:

  • Student numbers will be variable across iterations;
  • I can’t assume all students will be doing this via our website (we have a significant number of students with various accessibility challenges, so they might only be able to learn via a printed version of our materials);
  • As such, synchronous interaction is not an option;
  • Even asynchronous interaction will be a problem for some;
  • And I can’t assume any prior knowledge of negotiation.

As the old joke about getting directions in Ireland goes, you wouldn’t start from here.

But that’s been precisely why I’ve enjoyed my first months here: it’s not run-of-the-mill and I’m being forced to think about how to manage the situation, rather than simply reinvent the wheel.

For those of you not moving jobs, remember that you too are working to constraints; you might just have internalised them to a degree. None of us gets a completely free hand, or even something close to one.

The response here is to work with the constraints, not against them.

Whether it’s an oddly-shaped room, a limit on your timetabled time with students, necessary adjustments for students with disabilities, built-in assessment obligations, or a departmental edict against X, Y or Z, it’s the same thing. Whatever might be blocked, other things become possible.

The beauty of education is that it’s not uniform and that there’s no one correct way to do it: variety is a good thing, for so many reasons.

In my case, I’ve used those constraints to explore the options with the rest of the team. That meant presenting a number of basic models to them, with their benefits and disadvantages, all grounded in the question of what purpose this simulation is fulfilling within the programme.

Off the back of that discussion, I’m now working up an approach that combines at least two of those models, which we’ll discuss again in September. And as we settle on things, I’ll write more about how that might work and the further integration and delivery challenges that have to be addressed.

Review of McGuire’s Teach Students How to Learn

I stumbled across Teach Students How to Learn by Saundra Yancy McGuire (Stylus, 2015). Like The New Science of Learning by Doyle and Zakrajsek, it contains some useful advice. Here is a brief review:

The bad

The book has an excessive amount of personal anecdote — such as conversations with and exam scores of individual students — but no presentation of statistically significant findings on overall changes in students’ performance. The author also favorably discusses learning styles and the Myers-Briggs inventory, neither of which is scientifically supported. A more concise presentation with a greater emphasis on empirical evidence would be more persuasive.

The good

McGuire’s focus is on teaching students about the benefits of metacognition, including a specific method of introducing them to Bloom’s taxonomy (Chapter 4). Why is this effective? In high school, students earn high grades without much effort, so they enter college suffering from illusory superiority and ignorant of the actual learning process. Coaching students on specific study strategies (Chapter 5) will therefore benefit them. One example: as professors, we typically know what shortcuts to employ to efficiently find and retain information contained in a book. Students, in contrast, may not know what an index is or how to use one. McGuire also rightly discusses the role of motivation in student learning (Chapters 7-9), and she points out that there are both student-related and professor-related barriers to motivation. These barriers can be mitigated by the instructor.

A final comment

The underlying assumption of this book is that students want to learn, and that if they are equipped with the right tools, college becomes a more valuable and rewarding experience for them and their professors. While I think this is a noble and generally accurate sentiment, I’m seeing an increasing number of U.S. undergraduate students for whom college is simply a credentialing process. For these students, the diploma is the goal; learning is not.

‘sup, D?

As we roll into the summer break, I have little to offer in the way of insight on matters pedagogic, so I’ll leave you to ponder the following:

This got a lot of attention over on Twitter last week, with opinion less than settled about whether it was A Good Take or A Bad Take. You can read the many, many comments by clicking through.

Me, I’m a pretty relaxed person about names, not least because of several decades of mangling of my surname and occasional uncertainty about what my actual title is.

I know I’d rather have a student contact me to ask something and call me Simon (even Si), than have them be too worried about honorifics to email at all.

Yes, there are pitfalls all over the place – from hierarchies of power to false matey-ness – so probably the best thing is to talk to students about it directly, when you first meet them.

And as the OP put it, it’s not something for the syllabus, because no-one reads that, apparently.

Which might raise some questions about why your syllabus isn’t working. But that can be one for next time.

Call for Editor(s): Journal of Political Science Education

The American Political Science Association is seeking applications and nominations for editorship of the Journal of Political Science Education. Applications can be from individuals or teams, and are due by September 1. Full details are here.

A big thank you to the outgoing editorial team for their excellent management of this journal over the last few years.

Failing to succeed

One thing that has been really good about being part of ALPS has been the community around it.

For example, this week’s post is inspired by my former colleague and general force of nature, Maxine David, who pushed out this thread the other day (click to read it all):

Essentially, Maxine’s asking the same question that I think we’ve all asked at some point: what are we trying to achieve in our classes?

As you’ll see from the responses to the thread, I started to sketch out a position, but I’d like to expand on it here some more.

Amanda and Nina have long championed failure in the classroom as a valuable learning experience for students. Their argument – which I also hold to – is that hitting nominal targets is good, but not a complete education: not hitting them encourages students to reflect more on the process of learning (and application) that they’ve undertaken. Think of it as analogous to playing a game, where missing the (rather different) target makes you go back and try again, with the question of why it didn’t work before in your mind.

This model requires us to acknowledge that learning has multiple targets.

Yes, we want students to know stuff and know how to do stuff (which we can catch with summative assessments), but we also want students to know how to know all this. Becoming a reflexive learner and a critical thinker is a core skill for building capacity to learn throughout the rest of one’s life and it’s a skill that has no easy metric, nor any obvious threshold.

And thresholds were my first thought when I read Maxine’s thread.

When we assess, we typically look for evidence of meeting some threshold: does the student demonstrate that they know enough about X or enough about how to do Y? Those thresholds are present in our grading and those institutional matrices that benchmark us all to common standards.

[Cough]

Maxine rightly points out that we cannot really ever separate out the formative and summative elements of assessment: if we genuinely value the development of reflexive learning, then we absolutely shouldn’t be trying to separate them out in the first place.

But this position is vanishingly rare in academia these days. Yes, I tell my doctoral students that a good viva should see every single person coming out of the room having learnt something, but even that’s not a given.

Easy as it would be to blame the pressures of QA culture and metrification for all this, we also have to recognise that we often don’t create opportunities within our own classes. Even if we aren’t allowed to make adjustments for support received (as Maxine suggests), we should still be trying to instil a culture of collaboration, reflection and development among our students and between them and us.

In so doing we might start to reclaim some of that learning opportunity that will serve everyone in the class well, wherever they are and whatever they do.

UPDATE:

You might have seen that England is going through some very pointed discussions about racism, following the European football championships. This tweet from one of the national team players exactly captures the point:

Formative Assessment: Abort, Retry, Fail?

Two can play this game

Something of a response to Simon’s June 1 post on transitioning from pedagogical theory to teaching practice: he wrote, in part, “assessment is always formative and should be always linked to the feedback and adaptation process.” In theory, I agree. In practice, while I can lead students to feedback, I am still unable to make them read it.

As I’ve written before, the Canvas LMS has a “student viewed” time stamp feature that shows whether a student looks at my feedback on an assignment — my comments and a tabular rubric with cells that I’ve highlighted — after I have graded it. Judging by the lack of time stamps, many students simply ignore this information. An example, with data: my annual spring semester course on comparative politics. In 2018 and 2019, I taught this course in the physical classroom. In 2020, the latter half of the course was online because of the coronavirus pandemic. In 2021, the course was delivered online for the entire semester. For each iteration, I tallied the number of students who looked at the first three, the third to last, and the second to last reading responses after I graded them. Results are below. N is the number of students in the class; not every student in a class completed every assignment. The eyeball columns indicate how many students viewed an assignment after I had graded it; the eyeball with a slash is the opposite.
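For anyone curious about the mechanics of the tally, here is a minimal sketch of the counting step. The CSV file, its name, and its column layout are all my own assumptions for illustration — Canvas doesn’t hand you such a file in one step, so you’d assemble it yourself from the per-assignment view data — but the logic is simply: count a view only if it happened after grading.

```python
# Minimal sketch of the tally. The file feedback_views.csv and its
# columns (assignment, graded_at, viewed_at) are hypothetical; you'd
# build this export yourself from the Canvas "student viewed" data.
import csv
from collections import defaultdict
from datetime import datetime

viewed = defaultdict(int)    # assignment -> students who viewed after grading
unviewed = defaultdict(int)  # assignment -> students who never did

with open("feedback_views.csv", newline="") as f:
    for row in csv.DictReader(f):
        graded = datetime.fromisoformat(row["graded_at"])
        seen = row["viewed_at"]  # empty string if the student never looked
        if seen and datetime.fromisoformat(seen) > graded:
            viewed[row["assignment"]] += 1
        else:
            unviewed[row["assignment"]] += 1

for a in sorted(set(viewed) | set(unviewed)):
    print(f"{a}: {viewed[a]} viewed, {unviewed[a]} did not")
```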

While I can understand students not bothering to revisit assignments that they earned full marks on, I don’t understand why students who earn less than full marks frequently ignore information that would allow them to do better in the future. Anyone have an explanation?

A comment on commenting

When the camera-man turns up at your home

As you will have noticed, there’s a pandemic going on.

I mention this mainly because I’ve spent the past week listening to/watching/reading people give their opinion/knowledge/expertise about what to do with Covid and I realise I’ve become a bit inured to the difference between opinion, knowledge and expertise.

The trigger came a few evenings ago, when someone who I know is usually pretty good about these things got drawn into questions from a presenter about whether certain government advice was sensible or not. To be clear, pandemics are not their research area at all.

Mostly, they just about hovered on the right side of framing things in terms of what they did know about, but it was a pretty close shave.

Perhaps it was that closeness that made me reflect again about all the times we have people in the media opine on such questions when they have no evidence base and (seemingly) little understanding of relevant issues.

Your country might have the same problem.

This matters for academics because we might become part of a response to this. Our work is grounded in a degree of rigour, and presented with a good understanding of what we do and don’t know and what caveats might apply.

But it’s not always that simple.

Most obviously, the more we specialise, the more we know what we don’t know; or rather, the more we know that someone else could give a more useful answer.

Unfortunately, the media doesn’t work like that. Journalists know who is likely to be available and worth contacting, based on previous experiences, to generate impactful comment. Your self-discipline might not be someone else’s.

Of course, the counter-argument to that is that if you decline to comment, then some motor-mouth might fill the gap, so shouldn’t we be trying to avoid that?

Some thoughts occur here.

Firstly, we all need to remember that academic work has public value in some way. Research is not only for research’s sake, but helps to advance human understanding. That implies that we all have some societal responsibility to translate our activity back into forms that can be shared more widely.

Secondly, we should all be making sure that we feel suitably prepared for engaging with the media. No, it’s not always the most pleasant experience, but I would say that the large majority of journalists have a genuine interest in understanding things better and want to know what you know. If your institution offers media training (and it probably does), then get it: it can take the edge off your concerns.

Thirdly, while we need to respect the limits of our knowledge and expertise, that shouldn’t stop us helping the media to work through their questions. In particular, we can help journalists understand that the way they frame debates might be misleading: walking them through a different way of looking at things can be as important for them as getting a quote. So think about what you might bring to the table, especially if the initial contact strikes you as an odd way of approaching a topic.

Finally, if you can’t help a journalist, then direct them to someone who can. If we want to break journalists out of their eco-systems of contacts, then we are really well-placed to help them do that. So reply with a polite decline, but also a couple of people to try instead. Think about the diversity of those names too, be that for gender or seniority.

If we want the media to do a better job of discussing issues, then we have to play our part too.

Possible Improvement To Team Research Projects

A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:

In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.

I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:

  • No more than two pages per part, which would limit the final report to eight pages;
  • Each part must include at least one data visualization — a chart or graph;
  • No photographic images.

Finding motivation

Start…

This past weekend, I went for a walk with my son. Older readers will remember him from his Lego days: he’s a bit taller now.

Our walk went from central London, along the river Thames, heading back to our home. If we’d completed it, then we’d have hit 100,000 steps and walked about 50 miles (80km).

Spoiler: we didn’t complete it.

But what’s this to you?

As I sit here now, with my feet still somewhat tender, I’m thinking about motivation and where we get it from. That applies as much to the classroom as it does to walks.

In the latter case, we tried it for a variety of reasons. These included:

  • 100,000 is the highest badge that Fitbit offer for daily step count;
  • Last year, we did some similar long walks and only got up to 60,000 steps;
  • Covid – there’s been a lack of other things we might do;
  • It’s nice to sometimes turn the chat into action;
  • It’s nice to have a joint thing to do, together;
  • We each think we’re fitter than the other one.

Now, none of these are particularly good reasons to wake up before 4am to catch a train to walk for 10 hours solid, but they were our reasons.

To use more formal language, there’s a mix of intrinsic and extrinsic motivation going on here: more the former than the latter, since we’re not usually that bothered about Fitbit badges. Essentially, we did it because we wanted to do it.

…middle…

A comparison with our classes is instructive here, because while students are typically not obliged to take a degree programme, they often are bound by course requirements, our syllabus and class set-up to do much as we say.

Even when we try to use active learning, we have to recognise that the boundaries of that are quite narrowly defined. It’s really rare to be offering up something that is broadly unstructured for students to make of it as they will.

For me and my son, our aim was roughly to see if we could get to 100,000 steps in a day. Last year’s effort had involved walking around and around our house (seriously), so we wanted to try something less demoralising. But here’s where we can see the two sides of such potential flexibility.

On the one hand, we could pick any route, done any way, at any time. Yes, I suggested the flattest possible option, and one with multiple bail-out points (luckily), but we might just as well have headed the other way from our house and made for the seaside. That scope to try whatever we want can be very liberating, and also enlightening, since our discussions beforehand made us think a lot about the various factors we’d need to consider (food, drink, loo stops, weather, scenery, maybe walking at night, etc.).

But that freedom can also be inhibiting. To get to the starting line (in both senses) requires much more engagement and reflection. For some, that can be too much, too daunting.

In the course of getting ready for this, we both did some research. Mine leant more on online resources about managing feet during long walks; his, more on YouTube videos. One thing I did find was a site that organises events, including a river Thames walk of 50 miles. If we’d signed up for that (it’s in a few weeks), we’d have been supported all the way, with proper meals and stewards and a broom wagon to collect us.

But it wouldn’t have been the same. And it wouldn’t have been what we wanted.

Which might be the final point to consider: your motivation isn’t someone else’s.

Look back at that list, up top. It’s my list, not our list. I think it’s not so different from my son’s but that’s for him to know and to articulate (he declined the option to co-author this post). But that difference didn’t stop us from doing the walk, or from enjoying it, or from learning about our current limits.

Maybe the lesson here is that everyone comes to learning experiences with their own priorities and motivations, and as educators it’s for us to work with that. Like a good undergrad, I could note that education literally means ‘drawing out’, which is what this is all about.

If we can recognise what everyone brings and if we can create spaces that resonate with those differences, then we can all gain from it, both students and educators.

One to think about, as we wait for the train back home, if only to take our mind off our soles.

…end

Comparative Politics 2021: An Additional Post-Mortem

In January and February, I wrote about changes I had made in my annual comparative politics course.

I’ve already written about the problematic results from one assignment and a potential solution, and a way of improving the final exam prompt.

Student output in the collaborative qualitative comparative analyses, an assignment discussed in the Part 4 post linked above, proved disappointing. Despite extensive scaffolding, students did not demonstrate an understanding of how to determine cause and effect. Here is a simplified version of what one team produced; the other team’s QCA was nearly identical:

                                       Philippines   Egypt   Venezuela   Hungary
Less Economic Growth                   Yes           Yes     Yes         Yes
Less Citizen Political Participation   Yes           Yes     Yes         Yes
Higher Disease Mortality               Yes           Yes     Yes         Yes
Greater Authoritarianism               Yes           Yes     Yes         Yes

There is no variation in the values for the independent or dependent variables. Nor are the independent variables likely to be causes, rather than effects, of the dependent variable — the direction of causality is ambiguous. The QCA provides no logical explanation for increased authoritarianism.

So next time around, I will have to specify that 1) variables must vary, and 2) causes can’t be effects.
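Requirement 1 is at least easy to check mechanically. Here is a minimal sketch — my own illustration with invented data, not part of the assignment as given — that flags any condition in a QCA-style truth table whose value never varies across cases. Requirement 2, that causes can’t be effects, still needs human judgement.

```python
# Hypothetical truth table: condition -> {case: value}.
# The data here is invented purely for illustration.
truth_table = {
    "Less Economic Growth": {"Philippines": True, "Egypt": True,
                             "Venezuela": True, "Hungary": True},
    "Higher Disease Mortality": {"Philippines": True, "Egypt": False,
                                 "Venezuela": True, "Hungary": True},
}

for condition, cases in truth_table.items():
    if len(set(cases.values())) == 1:
        print(f"'{condition}' never varies across cases, so it cannot "
              "explain variation in the outcome.")
```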

From my perspective, these kinds of assignments get more prescriptive and less analytical with each tweak I make to them. While I don’t want them to devolve into mindless fill-in-the-blank worksheet exercises, it seems that is where they are headed.