Red flag

The first week of any course/module matters. It’s your first and best chance to make a good impression on your students, to engage them with what is to come.

So you might imagine that I was a bit concerned to find that less than half my students turned up for the lecture last week.

And you’d be right.

Yes, 0900 on a Friday is a shitty timeslot, especially if you’re the kind of student whose weekend starts on Thursday evening (as I seem to recall mine did), but that’s hardly enough to explain it.

Looking at the VLE, a lot of students still haven’t visited the module pages, so it also can’t be that they saw we were flipping and decided lectures were dispensable (see last week’s post on this).

Oddly, my seminar tutor tells me turnout was not bad through the rest of the day for the seminars.

My concern is that, having started off on a not-turning-up foot, students will only continue down that path, even with all the great stuff that’s going on in those sessions.

This isn’t so much narcissism as it is anxiety that if most students miss the sessions where we explain how the online assessment tool works, then we’ll have a bit of a car crash in a month’s time when they have to use, um, the online assessment tool.

Nil desperandum.

Usually, my hope is that this is where one of you comes up with a good idea, but in the meantime, I’ve got a couple of strategies to try out.

Firstly, I’m going to be making more of the lectures, to demonstrate their value to those present, in the hope that they’ll spread the word. Part of that will be thinking about how only those who attend can get easy access to the graphics we’re building together (see last week’s post again).

Secondly, I’ll be upping my work in messaging to everyone on the module why the lecture is useful to them, via emails and the VLE. I’ll also be talking with my seminar tutor about how we can make a stronger link of substantive content between lecture and seminar.

I travel hopefully, but also realistically: to have missed out on being able to hook people in week 1 is a big challenge, but let’s see what we can do with it all.

Advice From Journal Editors

This post is based on an APSA TLC 2020 presentation by the editorial teams of the Journal of Political Science Education and European Political Science. Any errors are my own.

Prior to submitting a manuscript, authors should check whether its subject matter and length corresponds to the aims and scope of the journal. JPSE will publish material that fits into any one of four clearly-defined categories: SoTL, political science instruction, reflections on teaching and the academy, and reviews of educational resources. EPS has a similar list of the types of articles it publishes. A manuscript on a topic that falls outside of a specified category of interest will likely be rejected before it is sent out for review.

From my own experience, skimming through the contents of a journal’s recent issues can be very helpful in determining whether that journal is an appropriate choice for a manuscript submission.

Similarly, volunteering to act as an anonymous reviewer for JPSE or EPS gives one some insight into what others are submitting and what in the end appears in print. Both journals need more potential reviewers to accelerate the review process. Please contact their editorial boards to volunteer.

Journals often receive many submissions about certain topics but few to no submissions about others, making it difficult for editors to publish diverse content. For JPSE, these topics include civic engagement and intersectionality. The editors encouraged people to submit manuscripts that present innovative approaches to these subjects.

Curriculum design in European Studies, International Relations and Political Science

This guest post comes from Patrick Bijsmans & Afke Groen, Maastricht University.

We are going to be honest with you from the outset: this blog is not concerned with our teaching experience, but rather with an ongoing research project that we are working on with our colleague Johan Adriaensen and our student assistant Caterina Pozzi (both also Maastricht University). And it gets worse: this is a blog that ends with a cry for help.

We are working on a research project studying undergraduate curriculum design in European Studies, International Relations and Political Science. Surprisingly, there is relatively little research on actual curriculum design within the Scholarship of Teaching and Learning, in particular when it comes to such broad fields.

Sure, there has been a debate about what curriculums in these fields should look like. Some of our colleagues have, for instance, asked whether there is, or should be, such a thing as a core curriculum in European Studies, while others have looked at interdisciplinarity in the field of Politics. Similarly, at the policy level there have been some attempts to flesh out benchmarks and standards in European Studies, and International Relations and Politics.

But what is missing is a thorough attempt to build a database of programmes in European Studies, International Relations and Politics, and to compare the characteristics of these programmes.

This is where our ongoing research project comes in. The project builds on previous work by Johan and us, published in the Journal of Contemporary European Studies and European Political Science (in production). Both articles concern the training and monitoring of generic skills in active learning environments. Our new project takes a broader perspective on skills and methods in curriculum design. We conduct a meta-study of undergraduate programmes offered by the member institutions of APSA, ECPR and UACES. We particularly explore three key themes: (1) the teaching of skills, practical experience and employability; (2) the degree of interdisciplinarity; and (3) the flexibility and coherence of the programme.

All in all, we hope to provide (1) a unique and comprehensive database of how curricula are organised in practice. On this basis, (2) we will distinguish various types of curriculums and evaluate their advantages and disadvantages. Our final objective is to (3) formulate best practices for university teachers and programme developers. As such, the database also promises to be a useful resource for university policies, in particular in light of challenges such as the constantly changing objects of study in European Studies, International Relations and Politics and an increasingly diverse and international student body.

Although we are still in the phase of gathering data, we can already share a couple of interesting observations with you. For one, while some universities seem to think that programmes in European Studies, Politics and International Relations are no longer really necessary, it is good to see that prospective students can still choose from a wide array of such programmes.

Indeed, the curriculums that we have coded so far look quite different. For instance, our own BA in European Studies seems to pay much more specific attention to methods and skills development through separate courses (and many of them). Another striking difference between programmes is the extent of choice offered to students; while some programmes consist mostly of large, compulsory courses, others include a wide array of electives or ‘tracks’ from diverse fields of study (sometimes with over 100 or even 200 optional courses!).

The latter is also one of our main challenges: it is not always clear what exactly constitutes a programme’s curriculum. Often, the respective websites are not very clear – generally university websites are rather dense – and it is impossible to find core programme documents that might help us here. This is particularly the case for Eastern European and US programmes, which often revolve around a major/minor set-up.

Hence, we need your help! If you are based at a university and/or are teaching in a programme that is a member of APSA, ECPR or UACES, your input would be very welcome. If there is any documentation that you think might help us code Eastern European and US programmes, we would be very grateful if you could send it to patrick.bijsmans@maastrichtuniversity.nl.

We do offer something in return. First, we will keep you posted through Twitter and blogs. Second, we hope to organise panels and workshops on curriculum design at conferences, such as during this year’s European Teaching & Learning Conference in Amsterdam. If you would like to contribute to such get-togethers, do let us know. Finally, our aim is to eventually provide colleagues with access to our database, starting with those of you who help us move the project forward!

Trying to make more of our time together

For reasons now lost in the minutes of a senior suite meeting, it’s the first week of our second semester here. On the plus side, the weekend’s storm didn’t do any damage; on the minus, it’s been nearly two months since our students last sat in class.

Bold

Second semester means it’s also time for my first-year UG module on European integration. You might recall that last year I flipped it all around.

One of the big issues with that format was that very few students ended up coming to class. A large part of that seemed to be that they felt they were getting enough information from the video lecture, not least as I was using the lecture slot to deal with Q&A [not much Q, a lot of extemporising A].

With that in mind, I’m going to try a slightly different approach this time round.

I’ll still be leaving space for Q&A in the lecture, but most of the session will be filled with getting the students to draw assorted visual representations of elements of the European Union.

Thus, one week I might ask them to produce a diagram of the EU, or the factors that need to be considered when analysing it.

Since I’ve got 120 bodies in the class, my idea is to have a Google Doc they can access and then upload their picture to (either by drawing directly on it, or by adding a photo of something they’ve done by hand). That way, I’ll get their work in real time and can display it back to them for discussion and further refinement.

At the end of the lecture, I plan to go away and produce something to summarise their contributions: maybe with a little commentary too.

Doing this will, I hope, generate more interest than simply waiting for someone to ask a question, and produce material that cuts across the rest of the provision, so they see value in contributing.

The danger is, of course, that if I’m summarising afterwards, then students might not see so much point in attending the lecture session itself, since they’ll still get access to it all. But I’ll cross that bridge as and when we get to it.

In the meantime, it’s off to the QR code generator…

The Muddiest Point, Updated

Many of you are probably already acquainted with the muddiest point technique — asking students to identify the one aspect of a lesson or assignment that they are the most confused by. Often this is accomplished by distributing index cards for students to write on. This semester I’m using an electronic version in a 200-level honors course on Asia: a survey on our Canvas LMS, completed in the last few minutes of class on days for which some kind of lecture or discussion is scheduled. The survey consists of the question “What are you most curious or confused about from class today?” Students automatically earn one point toward the final grade by answering it.

With a paperless process, I don’t have to try to decipher students’ handwriting. And I have an archive of students’ responses that I don’t have to transport or store.

Far more importantly, the surveys are demonstrating the difference between my knowledge base and that of my students — which I otherwise would be mostly oblivious to.

For example, my mind automatically defaults to thinking in terms of power, authority, and legitimacy whenever I’m confronted with the task of analyzing an authoritarian state. Or I recall concepts like ethnic identity when discussing nationalism. Or I know that geography is political rather than an immutable law of the universe — as demonstrated by the origins of labels like Far East, Middle East, and Near East. This is not the case with the majority of students in the class, given their survey responses so far.

Habit-forming

At least it’s not filthy

As APSA TLC heaves into sight once more, I’m reminded that it was the last time it rolled into Albuquerque that the idea for this blog was formed. Possibly over the consumption of various items of local cuisine.

I can’t attend in person this year, due to the weight of obligations back here, but it’s still a good moment to reflect on the nine years (!) that have followed.

In particular, I’m struck by the way in which I’ve formed a habit around posting over the years. And it’s something that I’ve been asked about several times recently.

As I’ve possibly related before, we started off with a weekly rota, since we recognised that content is king. I got Tuesdays, and I did it for a couple of months, very assiduously, as did we all.

Then I went on holiday – it’s a European thing – and didn’t have posts lined up. This was commented on, and I was sufficiently peeved at being called out on it that I made sure I posted every single week for the next couple of years (including other periods of leave, having discovered the ‘delay posting’ option).

In retrospect, that was possibly the best nudge I could have got to stick with this.

I’m a bit more sensible about it all now, taking breaks when I’m away, but this is now one of the bedrocks of my diary, along with my Thursday morning slot for my other blog. And my Monday morning reminder to do a vlog, and my Friday morning note about adding stuff to ResearchFish (if you don’t know, don’t ask).

As my resident psychologist tells me, it takes a long time for habits to form and stick and that’s certainly been true here.

With time, it’s gotten easier to write a blog post, in terms of just getting going and pulling it together quickly, even as it’s gotten harder to find new things to say. Indeed, I have a vague sense that I’ve written something like this before at some point.

Practically speaking, there is a pattern that seems to emerge. At first, it’s new and fun and you have things you know you want to do or say, so it’s not a problem. But then there’s the sticky patch, where you’ve satisfied your initial curiosity and where the harder issues creep in: the most obvious is that the new thing takes time away from other things.

It’s only by working through that patch that one gets to the habit stage: where you find a new balance and the more structural benefit of what you do.

And this isn’t just about blogging, but the sum of your practice. I’ve been the same with trying new teaching methods or with new elements in my research.

So as much as I always encourage people to try new things, I’d also encourage you to stick with them beyond that first rush.

If I’d given up on this blog, then I’d probably not have gotten into half the other stuff I’ve done since, and I’d have missed out on a bunch of great experiences.

You’ve gotta start somewhere and you’ve gotta start sometime, so why not now?

Thoughts on Thought Experiments

Today we have a guest post from Adam Irish, an assistant professor of political science at California State University, Chico. He can be contacted through his faculty webpage at https://www.csuchico.edu/pols/people/tenure-line-faculty/irish-adam.shtml.

During graduate school I worked at the University of Illinois’ Center for Innovation in Teaching and Learning. Fresh from Teach For America and surrounded by EdDs, I internalized the pedagogical research that supports active learning. As I sifted through the variety of techniques – each promising a marginal gain – I began to fill my lessons with more and more learning activities. Initially, this strategy of accumulation worked. It differentiated me from other TAs, won me teaching awards, landed me a teaching post-doc, and then a tenure-track job at a teaching-focused university.

Yet designing and teaching classes that leap from activity to activity can be exhausting – start with a quick write, then a mini-lecture, next a think-pair-share, now group discussions, back to whole class review and on and on. Lately I find myself asking: does including more learning activities equal better teaching?

My suspicion is that, in many cases, less may be more.

Consider the humble thought experiment. A student imagines a given scenario and reasons through it. Popular among ancient Greek philosophers and turn-of-the-century physicists alike, thought experiments persist in today’s classrooms. For example, Harvard professor Michael Sandel begins his popular course – Justice – with the Trolley Problem. You are aboard a runaway trolley, standing at the controls. You could turn the trolley down a side track and kill one lone worker, or allow the trolley to barrel into five workers straight ahead. What is the right thing to do? Every semester in a packed lecture hall, hundreds take hold of the trolley controls, reasoning about justice – no trolley required.

But could a well-crafted thought experiment generate enough discussion for an entire political science class? I have found Peter Singer’s “drowning child” experiment pairs well with foreign aid and John Rawls’ “veil of ignorance” drapes easily over discussions of welfare state politics. Beyond borrowing from philosophers, we can create our own thought experiments: Imagine you awoke this morning to find that five years had passed and the U.S. is no longer a democracy. What events do you suspect caused US democracy to collapse? In this authoritarian U.S., how might your life be different?

I recently designed a thought experiment to encourage thinking like a multi-national corporation (MNC) – a perennially difficult perspective for my students.

“Imagine that you just had a striking insight. You figured out something the world desperately needs. Fast forward through time. See the montage of days and nights spent building up your business, hiring and firing people, spending your first million dollars. Did you buy a massive house? How about a nice car? Chartered jets for vacations abroad? Where did you go? Good.

But wait, the global economy is highly competitive. Earnings statements are due every quarter. Your company has taken on shareholders. They want more profit, more growth – so you opened offices and factories abroad. Still your competitors are circling, threatening to steal customers or that next big contract. The media is digging into your business practices. If you want to keep your job as CEO, you have to beat your competitors. Can you see yourself sitting at that gleaming conference table, leading this month’s global strategy meeting?”

In the hour-long discussion that follows, I ask what sort of business empires my students imagined building. From there we explore the overlapping and conflicting interests of MNCs and host states. Repeatedly, we return to their imagined Facebook-like successes to analyze the fraught international relations of MNCs. Beyond examples from the reading, this activity lures students into the position of a CEO – one perhaps willing to fight against environmental or labor regulations.

In my experience, doing more with less via thought experiments slows down the classroom. Students need time to create a rich narrative to draw from, whether they are steering a trolley, wandering an authoritarian U.S., or running their own MNC. Likewise, professors must spend time crafting and then presenting robust narrative structures that students can inhabit. For example, see how Sandel builds tension in the trolley problem.

What if the next time you sat down to plan a lesson – a coffee cup steaming beside your keyboard, notes and books scattered about – you tried building that lesson around a single activity? Imagine that.

Seen Exams

Everyone’s working with seen papers…

This past semester I got to try out using a seen exam for the first time.

For those of you unfamiliar with it, you publish the exam paper some time ahead of the sitting date (a week, in this case), so students can prepare their responses, which they then write under controlled exam conditions (without notes or materials to hand).

The logic of this is that it provides a more meaningful test of students’ abilities, since they still have to revise, plan and produce, but without the added peril of “I can’t find a question I can do” or “I answered the question wrong”.

Having inherited the format from a colleague, I was keen to try it out, especially since last year’s use of an open-book, online exam had worked very well. Indeed, this year’s module was with the same students.

The practicalities are very simple indeed: an email to the class and a posting on the VLE at the appropriate time, plus being available through the week to answer any queries or clarifications.

The day before the exam I emailed everyone again, just to run through any points that had come up and to remind them again that the format meant some things were different from a ‘normal’ exam.

Firstly, my expectations on factual accuracy would be higher, since they’d have had time to prepare.

Secondly, I’d like to see more references to the literature: not direct quotes, but certainly mention of relevant authors.

And most importantly, I’d expect clear organisation and argument in each of their answers.

So?

Having now finished my marking, I’m able to say a bit about how this all played out.

As with the other format, this approach seems to be good for pulling up the tail of students who might otherwise have found things difficult: even the worst-performing student still produced relevant answers with some detail.

Likewise, the almost total absence of factual errors and of very short answers was a pleasant development, suggesting everyone had actually done work for the exam.

So the knowledge front seems to be positive.

Having seen a few students straight after the exam, I’m not sure that they found it any less stressful though: yes, they knew what the questions would be, but they were also conscious that I would be marking in line with that, so maybe their extra work wouldn’t count for anything.

While we’ve yet to complete the full feedback cycle, I think that anxiety is understandable, but hasn’t played out. Instead, the performance of the class has been strengthened and their capacity in the subject will be that bit greater for future modules they take.

In sum, this exam has further convinced me that closed-book, unseen exams aren’t that useful, either in measuring knowledge or in managing student stress: unless I have to use them in future, I’m not going to.

Modeling “Good” Writing

Today we have a guest post from Colin Brown, assistant teaching professor in the Department of Political Science at Northeastern University. He can be reached at colin [dot] brown (at) northeastern [dot] edu.

A few weeks ago, Shana Gadarian made a point on Twitter about explicitly teaching writing that I strongly endorsed. Getting our students to write better will, at minimum, make our job easier. If we’re lucky, it might actually promote better thinking.

For upper-level students, very open-ended prompts sometimes lead to really creative and insightful thinking—and a dissertation is of course the vaguest prompt of all. But our expectations often rest on implicit assumptions about what we think of as “good” political science. Amanda Rosen has written about this in the context of transparency. As she points out, telling students “write a five-page essay” doesn’t acknowledge that essay means different things in different fields, and many of our students will not yet know what this even means in political science.

Clarity is critical for essay prompts, especially for introductory students. While long, detailed instructions might help point students toward what they should think about, students new to a field often don’t have the context to know what is most important in a long prompt. To them, any sentence with a question mark might appear to be equally important—causing them to focus on what we thought was a minor point and producing a disconnect between what we want to assess and what we actually assess when our implicit expectations aren’t met.

Here are what I think were a failed attempt and a relatively successful one at doing this in my past semester’s intro comparative politics class. Students told me that while the instructions in the first were explicit, it was hard to know where to start and which parts to emphasize. With the latter prompt, they said it was clear not only what to do but also why they were doing it.

One question I’ve found to be a bit polarizing in talking with colleagues is whether to provide model papers or templates. Is it a better way to make our aims clear, or does it cause students to just parrot back the template? I’ve always found myself on the side of providing models. Graff and Birkenstein’s They Say, I Say has solidified my thinking on why. They identify common rhetorical devices that mirror the most common academic ways of thinking, which they turn into templates that, for example, force students to write down the most obvious counterarguments. Experienced writers have read enough in the field to be able to pick up on these techniques implicitly, but beginners by definition have not. Graff and Birkenstein argue, and I think rightly, that this goes beyond rhetoric to actually learning the ways of thinking. Some students may not learn how to think about the flaws in their own argument, or even that they need to, until they are forced to write them down with this kind of template.

In my own teaching, I’ve found it hard to explain in an abstract sense the need for clear writing over “beautiful” writing—and what students think is “beautiful” writing often feels cluttered and verbose to us. But when students see each other’s models and observe how much they actually understand from clear writing versus what they think is “good” writing, they start to self-diagnose their own excessive prose.

One challenge is that writing for politics requires some discipline-specific skills that might be more or less amenable to templates. Sarah James, George Soroka and I have a forthcoming JPSE piece on adapting tools from K-12 and composition studies for political science writing. But defining what we actually want from “good” political science writing seems often more folk wisdom than clearly defined—Ian Anson’s 2017 article on meaning-making is a notable and valuable exception—so as part of showing students what we want, there’s room for us to make this clearer to ourselves.

Sarah, George and I will be leading a workshop at APSA TLC 2020 on implementing rubrics to improve student writing—Friday Feb. 7 @ 4:15 pm.

Stress and active learning

One for the EU specialists…

This weekend I caught up with an old friend. He works for a software company, overseeing the sales team.

Recently, he’s been doing some work with occupational psychologists, to get a better handle on the team’s stress levels. He told me about all this over a cuppa, including the SCARF model, which I’d not heard of.

SCARF is a diagnostic framework for identifying sources of stress, where individuals encounter challenges to their Status, Certainty, Autonomy, Relatedness (being part of the group) and Fairness.

There’s a useful summary (and diagram) here.

Listening to my friend telling me how this works for his team (status is the big thing, apparently), I was struck by how it might play out in the educational context.

For example, one of the reasons why assessment is so stressful is that it hits most of these areas: students might feel success brings status with teaching staff, it’s relatively uncertain, it’s out of their control, and it’s not necessarily a fair way to judge achievement. The gain of a shared experience with other students pales next to all this.

Clearly, there are general lessons about student welfare to be picked up from this model, but it’s also useful to consider how it relates to active learning.

In traditional, transmission-centred approaches, life might appear to be relatively stress-free: most of the time you sit there, soaking up material, with occasional bouts of panic at assessment time.

By contrast, active learning might be more challenging.

The biggest issue is likely to be the increased requirement for autonomy: active learning requires participation and the production of contributions on a rolling basis. This front-loads requirements on students, at a point where they might feel they know relatively little (raising issues of status (you want to look good in front of friends) and relatedness (you don’t want to get marginalised in the group if you fail)).

Similarly, the relative absence of the instructor means students have to self-regulate more than usual, so fairness might become more of a factor than in a situation where fairness gets imposed from above.

And it’s also worth highlighting that the model points to active learning being more stressful for teaching staff too, with lower status, higher uncertainty and a big hit to autonomy: no longer is everyone doing just what you want of them.

Despite this, I think that active learning’s benefits outweigh these costs.

Firstly, precisely because students are brought actively into the process from the start, they have much more time to prepare themselves for any summative assessment, both in terms of having to consider materials and of practising producing ideas. The stress is spread out, rather than concentrated at the back end.

But equally, if stress is managed properly, it also comes with raised engagement. If we are making our active learning spaces safe (as we always should be), then we are offering students both the opportunity and the tools to manage stress better, which not only points them to thinking more about the matter in hand, but also how to deal with other sources of stress in their life.

We’re helping our students to learn about the world and how to engage with it. That means skills matter at least as much as substantive knowledge. And handling stress is one of those skills. Yes, active learning is more stressful for all involved, but the benefits that flow from that are ones that might serve us all well.