Advice From Journal Editors

This post is based on an APSA TLC 2020 presentation by the editorial teams of the Journal of Political Science Education and European Political Science. Any errors are my own.

Prior to submitting a manuscript, authors should check whether its subject matter and length correspond to the aims and scope of the journal. JPSE publishes material that fits into any one of four clearly defined categories: SoTL, political science instruction, reflections on teaching and the academy, and reviews of educational resources. EPS has a similar list of the types of articles it publishes. A manuscript on a topic that falls outside of a specified category of interest will likely be rejected before it is sent out for review.

From my own experience, skimming through the contents of a journal’s recent issues can be very helpful in determining whether that journal is an appropriate choice for a manuscript submission.

Similarly, volunteering to act as an anonymous reviewer for JPSE or EPS gives one some insight into what others are submitting and what in the end appears in print. Both journals need more potential reviewers to accelerate the review process. Please contact their editorial boards to volunteer.

Journals often receive many submissions about certain topics but few to no submissions about others, making it difficult for editors to publish diverse content. For JPSE, these topics include civic engagement and intersectionality. The editors encouraged people to submit manuscripts that present innovative approaches to these subjects.

The Muddiest Point, Updated

Many of you are probably already acquainted with the muddiest point technique — asking students to identify the one aspect of a lesson or assignment that most confuses them. Often this is accomplished by distributing index cards for students to write on. This semester I’m using an electronic version in a 200-level honors course on Asia: a survey on our Canvas LMS, completed in the last few minutes of class on days when some kind of lecture or discussion is scheduled. The survey consists of the question “What are you most curious or confused about from class today?” Students automatically earn one point toward the final grade by answering it.

With a paperless process, I don’t have to try to decipher students’ handwriting. And I have an archive of students’ responses that I don’t have to transport or store.
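
Since the responses accumulate as machine-readable text, they are also easy to scan for recurring themes. Here is a minimal sketch in Python, assuming the responses have been exported from the LMS as a CSV file with one free-text answer per row; the filename, column name, and keyword list are all hypothetical stand-ins:

    import csv
    from collections import Counter

    # Tally recurring keywords in muddiest-point survey responses.
    # Assumes a CSV export with one free-text answer per row in a
    # "response" column; filename, column, and keywords are placeholders.
    KEYWORDS = ["legitimacy", "authority", "nationalism", "geography"]

    counts = Counter()
    with open("muddiest_point_responses.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = row["response"].lower()
            for keyword in KEYWORDS:
                if keyword in text:
                    counts[keyword] += 1

    for keyword, total in counts.most_common():
        print(f"{keyword}: {total} response(s)")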

Far more importantly, the surveys are demonstrating the difference between my knowledge base and that of my students — which I otherwise would be mostly oblivious to.

For example, my mind automatically defaults to thinking in terms of power, authority, and legitimacy whenever I’m confronted with the task of analyzing an authoritarian state. Or I recall concepts like ethnic identity when discussing nationalism. Or I know that geography is political rather than an immutable law of the universe — as demonstrated by the origins of labels like Far East, Middle East, and Near East. This is not the case with the majority of students in the class, given their survey responses so far.

Thoughts on Thought Experiments

Today we have a guest post from Adam Irish, an assistant professor of political science at California State University, Chico. He can be contacted through his faculty webpage at https://www.csuchico.edu/pols/people/tenure-line-faculty/irish-adam.shtml.

During graduate school I worked at the University of Illinois’ Center for Innovation in Teaching and Learning. Fresh from Teach For America and surrounded by EdDs, I internalized the pedagogical research that supports active learning. As I sifted through the variety of techniques – each promising a marginal gain – I began to fill my lessons with more and more learning activities. Initially, this strategy of accumulation worked. It differentiated me from other TAs, won me teaching awards, landed me a teaching post-doc, and then a tenure-track job at a teaching-focused university.

Yet designing and teaching classes that leap from activity to activity can be exhausting – start with a quick write, then a mini-lecture, next a think-pair-share, now group discussions, back to whole-class review, and on and on. Lately I find myself asking: does including more learning activities equal better teaching?

My suspicion is that, in many cases, less may be more.

Consider the humble thought experiment. A student imagines a given scenario and reasons through it. Popular among ancient Greek philosophers and turn-of-the-century physicists alike, thought experiments persist in today’s classrooms. For example, Harvard professor Michael Sandel begins his popular course – Justice – with the Trolley Problem. You are aboard a runaway trolley, standing at the controls. You could turn the trolley down a sidetrack and kill one lone worker, or allow the trolley to barrel into five workers straight ahead. What is the right thing to do? Every semester in a packed lecture hall, hundreds take hold of the trolley controls, reasoning about justice – no trolley required.

But could a well-crafted thought experiment generate enough discussion for an entire political science class? I have found that Peter Singer’s “drowning child” experiment pairs well with foreign aid and that John Rawls’ “veil of ignorance” drapes easily over discussions of welfare state politics. Beyond borrowing from philosophers, we can create our own thought experiments: Imagine you awoke this morning to find that five years had passed and the U.S. is no longer a democracy. What events do you suspect caused U.S. democracy to collapse? In this authoritarian U.S., how might your life be different?

I recently designed a thought experiment to encourage thinking like a multi-national corporation (MNC) – a perennially difficult perspective for my students.

“Imagine that you just had a striking insight. You figured out something the world desperately needs. Fast forward through time. See the montage of days and nights spent building up your business, hiring and firing people, spending your first million dollars. Did you buy a massive house? How about a nice car? Chartered jets for vacations abroad? Where did you go? Good.

But wait, the global economy is highly competitive. Earnings statements are due every quarter. Your company has taken on shareholders. They want more profit, more growth – so you opened offices and factories abroad. Still your competitors are circling, threatening to steal customers or that next big contract. The media is digging into your business practices. If you want to keep your job as CEO, you have to beat your competitors. Can you see yourself sitting at that gleaming conference table, leading this month’s global strategy meeting?”

In the hour-long discussion that follows I ask what sort of business empires my students imagined building. From there we explore the overlapping and conflicting interests of MNCs and host states. Repeatedly, we return to their imagined Facebook-like successes to analyze the fraught international relations of MNCs. Beyond examples from the reading, this activity lures students into the position of a CEO – one perhaps willing to fight against environmental or labor regulations.

In my experience, doing more with less via thought experiments slows down the classroom. Students need time to create a rich narrative to draw from, whether they are steering a trolley, wandering an authoritarian U.S., or running their own MNC. Likewise, professors must spend time crafting and then presenting robust narrative structures that students can inhabit. For example, see how Sandel builds tension in the trolley problem.

What if the next time you sat down to plan a lesson – a coffee cup steaming beside your keyboard, notes and books scattered about – you tried building that lesson around a single activity? Imagine that.

Modeling “Good” Writing

Today we have a guest post from Colin Brown, assistant teaching professor in the Department of Political Science at Northeastern University. He can be reached at colin [dot] brown (at) northeastern [dot] edu.

A few weeks ago, Shana Gadarian made a point on Twitter about explicitly teaching writing that I strongly endorsed. Getting our students to write better will, at minimum, make our job easier. If we’re lucky, it might actually promote better thinking.

For upper-level students, very open-ended prompts sometimes lead to really creative and insightful thinking—and a dissertation is of course the vaguest prompt of all. But our expectations often rest on implicit assumptions about what we think of as “good” political science. Amanda Rosen has written about this in the context of transparency. As she points out, telling students to “write a five-page essay” doesn’t acknowledge that “essay” means different things in different fields, and many of our students will not yet know what it means in political science.

Clarity is critical for essay prompts, especially for introductory students. While long, detailed instructions might help point students toward what they should think about, students new to a field often don’t have the context to know what is most important in a long prompt. To them, any sentence with a question mark might appear to be equally important—causing them to focus on what we thought was a minor point and producing a disconnect between what we want to assess and what we actually assess when our implicit expectations aren’t met.

Here are what I consider a failed attempt and a relatively successful attempt to do this from my past semester’s intro comparative politics class. Students told me that while the instructions in the first were explicit, it was hard to know where to start and which parts to emphasize. With the latter prompt, they said it was clear not only what to do but also why they were doing it.

One question I’ve found to be a bit polarizing in talking with colleagues is whether to provide model papers or templates. Is it a better way to make our aims clear, or does it cause students to just parrot back the template? I’ve always found myself on the side of providing models. Graff and Birkenstein’s They Say, I Say has solidified my thinking about why. They identify common rhetorical devices that mirror the most common academic ways of thinking, which they distill into templates that, for example, force students to write down the most obvious counterarguments. Experienced writers have read enough in the field to be able to pick up on these techniques implicitly, but beginners by definition have not. Graff and Birkenstein argue, and I think rightly, that this goes beyond rhetoric to actually learning the ways of thinking. Some students may not learn how to think about the flaws in their own argument, or even that they need to, until they are forced to write them out with this kind of template.

In my own teaching, I’ve found it hard to explain in an abstract sense the need for clear writing over “beautiful” writing—and what students think is “beautiful” writing often feels cluttered and verbose to us. But when students see each other’s models and observe how much they actually understand from clear writing versus what they think is “good” writing, they start to self-diagnose their own excessive prose.

One challenge is that writing for politics requires some discipline-specific skills that might be more or less amenable to templates. Sarah James, George Soroka and I have a forthcoming JPSE piece on adapting tools from K-12 and composition studies for political science writing. But what we actually want from “good” political science writing often seems more folk wisdom than clear definition—Ian Anson’s 2017 article on meaning-making is a notable and valuable exception—so as part of showing students what we want, there’s room for us to make this clearer to ourselves.

Sarah, George and I will be leading a workshop at APSA TLC 2020 on implementing rubrics to improve student writing—Friday, Feb. 7, at 4:15 p.m.

Writing as Assessment

To illustrate the dilemma I presented in my last post: the possible devolution of the final exam for one of my courses.

My Fall 2018 exam was an attempt at an authentic writing exercise, but students had to choose one of two positions and use course readings. This meant I supplied the available arguments and evidence, and the exam was actually an assessment of rhetorical skill. Students didn’t demonstrate an ability to use concepts that I thought were crucial for the arguments they had developed.

For the final exam in Fall 2019, I ended up giving students a choice of arguments — “basis for aid policy to Egypt” and “plan for Louisiana’s future” — and I added this to the instructions for the exam:

Apply relevant concepts like discounting the future, moral hazard, etc.

Students still had to select one of two predetermined positions regardless of the argument chosen, and again I specified the pool of evidence they could draw from. And students still didn’t demonstrate knowledge of concepts listed in the exam’s instructions.

What next? I could have a final exam that asks students to, for example, “make an evidence-based determination of whether moral hazard exists in a location affected by climate change.” But this type of exam prompt might introduce even more problems.

Writing as Learning

My last post discussed writing as a professional endeavor. Today: writing as a device for learning; i.e., why and how we as teachers assign writing to students.

Generally we present our students with some form of expository writing task. Perhaps we call it thesis-driven, discipline-oriented, argumentative, or research-based. Regardless of the label, the assumption is that students will locate relevant primary data with methods they understand how to use, evaluate the data appropriately while remaining aware of their own assumptions, reach some conclusion, and effectively communicate all of this to an audience.

That’s the ideal. The reality? Students often don’t know how to find primary data, or which methods are best suited for analyzing it. They may not even know what methods are. They assume there is either one right answer, or that all possible answers are equal, because they don’t understand that some answers can be more strongly supported by data than others while even better answers await discovery in the future.

And so we default to assignments that direct students to preferred secondary or tertiary sources (a “text”), tell them to organize their explanations as competitions between two artificial, diametrically opposed positions, or, sometimes, encourage them to dredge up arguments that arrive at positions they already favor. Students learn to hang evidence on a predetermined conclusion rather than derive a conclusion from the evidence.

This type of deductive exercise has been used by teachers since the age of the agora to build students’ rhetorical skills. Today, unfortunately, it can produce people with a facile ability to argue any position at any time without veering from a worldview that they hold to be sacrosanct.

So what’s the solution? I don’t really have one. Too few of the students I encounter are willing or able to draw reasonable conclusions from evidence they have independently located, so writing exercises that involve inductive reasoning get chucked out the window. It’s frustrating.

Writing as Project

If you’re like me — a contractual teaching load of seven courses per academic year, plus overloads, committee work, and administrative duties — you tell yourself that you’ll work diligently on those unfinished conference papers and journal manuscripts during the winter holidays. And then life happens, time slips away, and suddenly the spring semester is about to begin.

There are simple tools — which aren’t part of the standard graduate program curriculum, but should be — that can help you become a more productive writer. I’ll mention two.

Stretch & SMART

The stretch goal is your ultimate objective or ambition: the whole project. For example, write a complete draft of a book chapter. SMART is an acronym that describes the actions that need to be taken to reach one’s objective:

  • Specific — actions must be defined and discrete, such as create a literature review that will be part of the book chapter.
  • Measurable — actions must be countable so that progress can be gauged. Find and read twelve peer-reviewed articles that are relevant to the book chapter. For each article, write a phrase or sentence on its methods, findings, and quality.
  • Achievable — create the conditions needed to complete the above tasks. Clear morning schedule, turn off email.
  • Realistic — ensure that the tasks can actually be accomplished. Don’t go down rabbit holes; on the first day select which journals will be searched, a date range, and other limiting criteria.
  • Timeline — establish a schedule with an endpoint. I am devoting one hour each morning to the literature review. If I define my search on Monday and then locate four articles per day, I will have a total of twelve articles by the end of the allotted time on Thursday and can begin writing the literature review on Friday morning (see the sketch after this list).
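
To make the timeline concrete, here is a minimal sketch of that arithmetic in Python; the day-by-day schedule is purely illustrative:

    # Day-by-day arithmetic for the literature-review timeline above:
    # Monday defines the search; each following day adds four articles
    # until the twelve-article target is reached, then writing begins.
    TARGET_ARTICLES = 12
    ARTICLES_PER_DAY = 4

    collected = 0
    for day in ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]:
        if day == "Monday":
            print(f"{day}: define search criteria (journals, date range)")
        elif collected < TARGET_ARTICLES:
            collected += ARTICLES_PER_DAY
            print(f"{day}: read and annotate {ARTICLES_PER_DAY} articles (total: {collected})")
        else:
            print(f"{day}: begin writing the literature review")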

There are many definitions of Stretch & SMART; if the one above is unclear, others can be found with a quick internet search.

Front Forty & Back Forty

Front Forty & Back Forty maps the tasks that are part of a project and tracks which of those tasks have been completed. The technique was invented by my colleague and illustrator extraordinaire, Susannah Strong. An explanation is here. Make sure to scroll down to the sample map.

Simulating the EU’s Foreign Affairs Council, Part 2

Today we have the second of two posts from Nicholas Wright, of the political science department at University College London. He can be reached at nicholas [dot] wright [at] ucl [dot] ac [dot] uk.

Some key insights from the simulation:

  • Talking to colleagues matters. While all students produced negotiation briefs, very few used their briefs as a basis for preliminary discussions with colleagues. Some did seek to build alliances with like-minded partners, but few outside the formal leadership group of HR/VP, Presidency and institutions attempted to build bridges to those they thought would have opposing positions. All recognised the importance of knowing what their partners wanted, but not all acted on it. Those who did felt better prepared for the meeting.
  • Respond to the draft agenda. Several students felt that the agenda did not sufficiently reflect the issues. However, even though it was circulated well in advance, none of the member states engaged directly with the HR/VP to discuss whether it could be amended, even though it was deliberately anodyne to enable flexibility.
  • Time flies. Everyone felt the pressure of time, especially in the second hour. They all thought they had more time for discussion than they actually did, and they did not account for the time allocated to the debrief. Despite the Chair’s encouragement to move as swiftly as possible to the second question, it was neglected.
  • Being heard matters . . . but so does taking part. With any group of students there will be those who are more forthright and vocal, so part of the challenge is to encourage everyone to participate as fully as possible. Ultimately, the time is theirs, and this year everyone made at least some contribution. France, Germany and Hungary were all quite active, while Ireland was less so. The UK representative struggled to get the attention of the chair, partly because of the table layout, but also because she felt constrained by the impact of Brexit—thereby, wittingly or not, reflecting the reality of these meetings since 2016!

I drew three lessons from the simulation that I can apply to the future:

  • Picking a good leadership group matters. This is quite a challenge as roles are assigned early in the term and it is not always clear at that stage who will have the requisite skills to manage the meeting. But this year, I feel my choice was vindicated – the HR/VP was effective and was ably supported by the EEAS and Commission.
  • Time management is crucial. This year I deliberately reduced the number of questions to two to allow even more time for discussion and negotiation, but did not anticipate that the discussions would become so dominated by the first question. Next year I will reduce the initial tour de table from 3 to 2 minutes and in the pre-briefing with the HR/VP really emphasise the need to be strict on time. We’ll see if that makes a difference.
  • Build on the negotiation component of the module. There is an opportunity to include more training in negotiation prior to the simulation. Adding a short exercise in preceding weeks that deals with specific negotiating scenarios would improve the practical aspects of the module and probably the simulation itself.

Simulating the EU’s Foreign Affairs Council, Part 1

Today we have the first of two guest posts from Nicholas Wright, of the political science department at University College London. He can be reached at nicholas [dot] wright [at] ucl [dot] ac [dot] uk.

My favourite class of the autumn term is when we simulate a crisis meeting of the EU’s Foreign Affairs Council (FAC) as part of my ‘EU in the World’ postgraduate module. It’s a great opportunity to turn from (sometimes quite dry) conceptual and theoretical discussions about EU actorness (or lack thereof) and test out in practice some of our assumptions – e.g. about how negotiation and decision-making work in the FAC, how far traditional power attributes bestow advantage, etc. It’s also a great opportunity for the students to take the lead while I remain on the sidelines and observe.

This year, our meeting focused on Turkey’s military deployment in northern Syria. The students had just two and a quarter hours to reach a consensus and draft answers to two questions:

  • Should the EU respond any further to the crisis?
  • Does the EU need to reset its longer-term relationship with Turkey, particularly regarding Turkey’s path to possible future EU membership?

The outcome of the meeting was interesting. Beyond rejecting any form of EU Common Security and Defence Policy (CSDP) military or civilian mission, the group could not reach consensus on anything. Caught up in discussion of the first question, which I had intended to be resolved swiftly, they had insufficient time to adequately address the second. The resulting diplomatic statement offered little in the way of concrete action while substantive discussion of the EU-Turkey relationship was postponed for a future meeting.

This outcome was initially considered a failure in the subsequent debriefing. But in dissecting what happened, the ‘failure’ highlighted to very good effect the challenges posed by this kind of policy discussion, especially when actors with clear status quo positions seek to prevent a more proactive policy response.

Using the simulation:

The simulation takes place in Week 7. In Week 3 students are informed about the topic, provided with briefing documents, and assigned their roles. The class is capped at twenty students, so not all EU member states can be represented. One student is assigned the role of HR/VP (High Representative of the Union for Foreign Affairs and Security Policy), who acts as chair; one each acts as the holder of the rotating presidency, the European External Action Service (EEAS), and the relevant Commission Directorate-General. The remaining students are assigned a member state, and wherever possible they represent their home state.

As preparation, students submit a 1,000-word negotiation brief in Week 5. Although the simulation itself is formative, the brief is assessed and must set out the historical role of their actor in relation to the Common Foreign and Security Policy (CFSP), their goals and red lines, and a strategy for how the meeting will be approached. Students may disclose the contents of their briefs to one another, but only after submission.

In Week 6, the HR/VP, in consultation with the rotating presidency, EEAS, and Commission, produces a preliminary agenda for the meeting. From that point, students are actively encouraged to consult with each other up until the simulation starts. To facilitate this, I created a discussion forum on the module’s Moodle page, and this year students also used WhatsApp.

The simulation starts with a brief introduction where I remind them of the ground rules including the time limit. Then the HR/VP takes over the task of facilitating the discussions, beginning with an opening tour de table.

Approximately twenty minutes before the end of the simulation, I introduce a ‘live’ update that is intended to disrupt their deliberations, test their ability to think on their feet, and get them to demonstrate their understanding of their actor’s interests and priorities. In this case it was a Turkish decision to suspend the 2016 refugee agreement with the EU, which resulted in a hastily drafted additional paragraph at the end of students’ conclusions.

We conduct a twenty-five-minute debriefing after the simulation. Students consider whether they achieved the goals they had identified in their original briefs, why they did or did not, and what they could have done differently.

The Joy of Documenting a Job Done

Not knowing whether one has actually helped students learn is one of the most frustrating aspects of teaching. Assuming an absence of megalomania or the Dunning-Kruger effect, indications that we’ve made a difference are typically quite vague and ambiguous. So I was pleasantly surprised — as in, “hey, maybe students really did benefit from this” — by the results of a knowledge probe that I launched at the beginning and end of the semester in my course on economic development and environmental politics.

The knowledge probe was an ungraded quiz that asked questions about a few basic economic concepts, administered through Google Forms in the first and last weeks of the semester. Results, showing percentage of respondents answering correctly, are below.

Concept | Pre (N = 21) | Post (N = 16) | % Change
Poverty Trap | 52 | 100 | 92
Diminishing Returns to Capital | 52 | 75 | 44
Skill Matching | 5 | 88 | 1,660
Common Pool Resource Problem | 48 | 81 | 69
Moral Hazard | 38 | 100 | 163

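The % Change column is simply the relative gain from pre-test to post-test. Here is a quick sketch of the calculation in Python, with the values transcribed from the table above:

    # Percent change from pre-test to post-test correct-response rates.
    # Values are transcribed from the table above.
    results = {
        "Poverty Trap": (52, 100),
        "Diminishing Returns to Capital": (52, 75),
        "Skill Matching": (5, 88),
        "Common Pool Resource Problem": (48, 81),
        "Moral Hazard": (38, 100),
    }

    for concept, (pre, post) in results.items():
        change = (post - pre) / pre * 100
        print(f"{concept}: {pre}% -> {post}% ({change:+.0f}%)")
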
Obviously this wasn’t a perfect tool. The samples are too small to support statistical inference. And a slightly different proportion of students reported previously taking a university economics course on the pre-test than on the post-test. But the numbers at minimum suggest that students learned something over the semester, which gave me a sense of satisfaction that I otherwise wouldn’t have had.