Challenges in Using Policy Briefs for Assessment

Today we have a guest post from Simon Lightfoot, Professor of Politics and the Faculty of Social Science’s Pro-Dean for Student Education, University of Leeds. He can be contacted at S [dot] J [dot] Lightfoot [at] leeds [dot] ac [dot] uk.

In a recent article I reflected on my use of policy briefs as an assessment task in a final-year module, Politics of Aid. What surprised me as I was writing the article was how many expected and unexpected challenges arose from decisions I had made in the design of the module.

I expected there to be some student concern about a relatively novel assessment task introduced in the final year. To counter that, I at first encouraged, and then made obligatory, the submission of a draft. The unexpected challenge was that many students were unsure how to deal with drafts. There were always calls for more generic advice, despite the fact that each student was able to get tailored individual feedback on their draft.

I thought that students would welcome individual feedback but overlooked the fact that personalised feedback can be specific and very personal. Just as academics bemoan the infamous Reviewer 2 (who turns out to have a Twitter feed dedicated to their special brand of reviews), students take feedback as personally as we do. It became clear that we need to ensure that students know how to use feedback if drafts are to be beneficial, and that students need to understand that revision of one’s writing is a central part of the research process.

The drafting/redrafting issue has grown in significance now that the policy brief is 100% of the module assessment. Though intended to reduce the assessment burden on students, the change just raised the stakes—it became a one-strike assessment task, which caused some students to feel more pressured. At 1,500 words, the policy brief is very short compared to many other assessment tasks, yet it must demonstrate a high level of research, synthesis, and structure, which requires time. Convincing students to dedicate enough time to their writing proved to be another challenge. As Chagas-Bastos and Burges (2018) found, ‘it is consequently necessary to continuously emphasize the importance of revising and editing, actively encouraging students to deliberately think in terms of drafts’ if they are to produce good policy briefs.

Monday Morning Advice

A few items from elsewhere that might be of help for instructors who are new to teaching online:

First, here are five reasons not to use timed exams. In my opinion, these arguments apply whether one is teaching online or face-to-face.

Second, here are some very practical, easy-to-implement techniques for becoming a better writer — suitable for both professors and students.

Last, here are methods of identifying whether your employer’s financial viability has suddenly gotten much worse — editorials that I wrote in 2017 and 2019.

The Benefits of Rubrics

Today we have a guest post from Sarah E. James (sarah [dot] james [at] g [dot] harvard [dot] edu), Colin Brown (colin [dot] brown [at] northeastern [dot] edu), and George Soroka (soroka [at] fas [dot] harvard [dot] edu).

At the 2020 APSA Teaching and Learning Conference, we led a workshop on writing rubrics for the political science classroom. Rubrics may not be a cure-all for student writing, but we do believe that college faculty underestimate their benefits for teaching advanced writing skills. They can also be a powerful tool for developing a mindful teaching practice.

The use of rubrics is extremely common at the K-12 level (at least here in the U.S.), and there is considerable evidence to support their effectiveness at teaching beginning writing skills (Heidi Andrade’s work is a good starting point, for example, here, here, and here). There has been less evidence for their effectiveness at the university level, but the few existing studies point in the same general direction as the elementary ed studies:

  • Rubrics help students learn specific skills, and make them aware of their own learning processes.
  • Rubrics make grading more efficient, while simultaneously increasing student perceptions of how fair the grading is.
  • Rubrics help expose aspects of the “hidden curriculum” in our writing expectations and may offer the greatest benefit to the most disadvantaged writers.

Our key takeaway for participants in our workshop: rubrics let you deal with subjective measures, but in a transparent way and without being arbitrary. Generating a good rubric requires you to be clear about what you actually value and about what you expect students to be able to demonstrate. From the students’ side, this is a clear signal of where you want them to spend most of their time. From the instructor’s side, this is a good way to make sure that you’re following the adage of “if you didn’t teach it, don’t test it.” And when we think of the kind of genre-specific writing skills we demand of students, this sort of clarity can be extremely helpful for students who may “write well” in a general sense but who have no experience in how things like evidence, counterarguments, citations, and literature reviews work in political science specifically.

Rubrics can only capture so much, and when you use one, you are limited to only a certain number of skills or aspects in your assessment. At our TLC workshop, the most common concern our participants had was, “what happens if we end up having to give a good grade to a bad paper?” This is a (small) risk, but we encouraged our participants to step back for a second and think about the rubric as a mindful teaching exercise. If a paper feels like it should get a lower grade than the rubric suggests, are there skills that should be included explicitly in the rubric? (They can be added next time!) If not, then what’s causing you to think the grade should be lower—and is it really something that should be entering into your assessment?

For those interested in designing their own rubrics, we provided examples for an introductory and an upper-level course, as well as a worksheet to help in setting it up. Our examples are designed to focus much more on discipline-specific skills (using evidence, critical thinking, professional norms) than on the quality of prose itself, and our instinct (tested to limited effect in our JPSE article) is that this is the most productive use of rubrics in the college-level classroom. But the structure of rubrics allows them to be adapted to the instructor’s aims, whatever they are—and they force the instructor to make those aims clear to themselves and to their students.

When Students Ignore Feedback

While I don’t comment on student writing nearly as much as some professors do, I expect students to at least read what I do write. A colleague recently pointed out that our Canvas LMS displays a date stamp at the top right of the SpeedGrader web page when a student has viewed a previously submitted assignment after an instructor has commented on it. I had never noticed this before, I guess because the date stamp’s font is rather small. Here it is, indicated by the red arrow:

This feature became very useful in a course in which students are required to write a series of memos that all have the same format. Last week, a student taking the course sent me this email:

I’m not sure what is expected from the assignments, my memo 3 was completely different from 2 yet your comment says see comments about memo 2. I am a second semester senior doing grad classes that had a 3.6 gpa last semester. Somehow I’m failing every single assignment in a freshman level class, while still attending every single class except one and participating in class basically every session. 

I looked at the student’s submissions for memos 1, 2, and 3 — no date stamp. My comments had been ignored. My reply to the student’s email:

The memo is a standard method of efficiently communicating information that is used in a wide variety of professional environments. I’m surprised you haven’t yet had much of an opportunity to practice this form of writing, so here is what I am willing to do: you can earn up to 10 points by emailing me by noon on Friday a memo that discusses how well you incorporated my feedback on your Memo 1, provided by my comments on your work on Canvas, into your Memo 2, and the same for Memo 3 in respect to my comments on your Memo 2.

Completion of my “extra credit opportunity” would have required the student to admit that he had not read my comments and thus ignored the feedback I had provided.

The student did not respond.

Design Fail or Attention Fail?

I recently graded a writing assignment for one of my courses and I’m wondering if it’s an example of “You can lead students to the education but you can’t make them learn.”

The instructions for the assignment:

You have been given the task of submitting a memo to the National Security Council that answers the following question:

  • Will Nigeria remain a single state or divide into separate states?

Download the memo template; use it to format your work. Turn the question above into a declarative sentence and use it as the memo’s executive summary. Write two subsections that support your argument using evidence taken from course readings. Each subsection should be a single paragraph. Reference sources with in-text parenthetical citations.

The information above was repeated in the memo template itself and in the rubric attached to the assignment. From my perspective, the assignment is totally straightforward and the assessment criteria are completely transparent. Yet . . .

Several students wrote memos on the question of “Should?” rather than the actual question of “Will?”

Many students also failed the “Turn the question above into a declarative sentence . . .” part. A few representative examples of what they submitted as an executive summary:

  • “In the current course, Nigeria will see a deterioration in multiple states because of the lack of agreement over these issues and the failure of the government to uphold a true democracy. Nigeria is a fledgling state on the verge of an inner collapse due to current trends.”
  • “The United States should help Nigeria dissolve into multiple sovereign states, by mediating the separation process. Nigeria is currently facing a slew of ethnic and social conflicts the country is made up of 36 states which legally sew divisions between regional outsiders and regional natives, this has sparked ethnic and religious conflicts.”
  • “The best path forward for Nigeria is to remain a single state. Splitting the nation up now would only be detrimental to Nigeria’s ability to control its sphere of influence across the African continent. Splitting Nigeria into multiple states would challenge the work that has gone on for years to make it politically equitable and would not account for the vast cultural differences of the nation.”

And so on.

I’m wondering how I should interpret what happened. Is this simply a case of failing to follow directions? If not, I don’t know how I can make things more obvious.

Modeling “Good” Writing

Today we have a guest post from Colin Brown, assistant teaching professor in the Department of Political Science at Northeastern University. He can be reached at colin [dot] brown (at) northeastern [dot] edu.

A few weeks ago, Shana Gadarian made a point on Twitter about explicitly teaching writing that I strongly endorsed. Getting our students to write better will, at minimum, make our job easier. If we’re lucky, it might actually promote better thinking.

For upper-level students, very open-ended prompts sometimes lead to really creative and insightful thinking—and a dissertation is of course the vaguest prompt of all. But our expectations often rest on implicit assumptions about what we think of as “good” political science. Amanda Rosen has written about this in the context of transparency. As she points out, telling students “write a five-page essay” doesn’t acknowledge that “essay” means different things in different fields, and many of our students will not yet know what this even means in political science.

Clarity is critical for essay prompts, especially for introductory students. While long, detailed instructions might help point students toward what they should think about, students new to a field often don’t have the context to know what is most important in a long prompt. To them, any sentence with a question mark might appear to be equally important—causing them to focus on what we thought was a minor point and producing a disconnect between what we want to assess and what we actually assess when our implicit expectations aren’t met.

Here are what I think were a failed attempt and a relatively successful one from my past semester’s intro comparative politics class. Students told me that while the instructions in the first were explicit, it was hard to know where to start and which parts to emphasize. With the latter prompt, they said it was not only clear what to do but also why they were doing it.

One question I’ve found to be a bit polarizing in talking with colleagues is whether to provide model papers or templates. Is it a better way to make our aims clear, or does it cause students to just parrot back the template? I’ve always found myself on the side of providing models. Graff and Birkenstein’s They Say, I Say has solidified my thinking about why. They identify common rhetorical devices that mirror the most common academic ways of thinking, which they turn into templates that, for example, force students to write down the most obvious counterarguments. Experienced writers have read enough in the field to be able to pick up on these techniques implicitly, but beginners by definition have not. Graff and Birkenstein argue, and I think rightly, that this goes beyond rhetoric to actually learning the ways of thinking. Some students may not learn how to think about the flaws in their own argument, or even that they need to, until they are forced to write them down with this kind of template.

In my own teaching, I’ve found it hard to explain in an abstract sense the need for clear writing over “beautiful” writing—and what students think is “beautiful” writing often feels cluttered and verbose to us. But when students see each other’s models and observe how much they actually understand from clear writing versus what they think is “good” writing, they start to self-diagnose their own excessive prose.

One challenge is that writing for politics requires some discipline-specific skills that might be more or less amenable to templates. Sarah James, George Soroka and I have a forthcoming JPSE piece on adapting tools from K-12 and composition studies for political science writing. But what we actually want from “good” political science writing often seems more a matter of folk wisdom than something clearly defined—Ian Anson’s 2017 article on meaning-making is a notable and valuable exception—so as part of showing students what we want, there’s room for us to make this clearer to ourselves.

Sarah, George and I will be leading a workshop at APSA TLC 2020 on implementing rubrics to improve student writing—Friday, Feb. 7, at 4:15 pm.

Writing as Assessment

To illustrate the dilemma I presented in my last post: the possible devolution of the final exam for one of my courses.

My Fall 2018 exam was an attempt at an authentic writing exercise, but students had to choose one of two positions and use course readings. This meant I supplied the available arguments and evidence, and the exam was actually an assessment of rhetorical skill. Students didn’t demonstrate an ability to use concepts that I thought were crucial for the arguments they had developed.

For the final exam in Fall 2019, I ended up giving students a choice of arguments — “basis for aid policy to Egypt” and “plan for Louisiana’s future” — and I added this to the instructions for the exam:

Apply relevant concepts like discounting the future, moral hazard, etc.

Students still had to select one of two predetermined positions regardless of the argument chosen, and again I specified the pool of evidence they could draw from. And students still didn’t demonstrate knowledge of concepts listed in the exam’s instructions.

What next? I could have a final exam that asks students to, for example, “make an evidence-based determination of whether moral hazard exists in a location affected by climate change.” But this type of exam prompt might introduce even more problems.

Writing as Learning

My last post discussed writing as a professional endeavor. Today: writing as a device for learning; i.e., why and how we as teachers assign writing to students.

Generally we present our students with some form of expository writing task. Perhaps we call it thesis-driven, discipline-oriented, argumentative, or research-based. Regardless of the label, the assumption is that students will locate relevant primary data using methods they know how to apply, evaluate the data in an appropriate manner while being aware of their own assumptions, reach some conclusion, and effectively communicate all of this to an audience.

That’s the ideal. The reality? Students often don’t know how to find primary data, or which methods are best suited for analyzing it. They may not even know what methods are. They assume either that there is one right answer or that all possible answers are equal, because they don’t understand that some answers can be more strongly supported by data than others while even better answers await discovery in the future.

And so we default to assignments that direct students to preferred secondary or tertiary sources (a “text”), tell them to organize their explanations as competitions between two artificial, diametrically opposed positions, or, sometimes, encourage them to dredge up arguments that arrive at positions they already favor. Students learn to hang evidence on a predetermined conclusion rather than derive a conclusion from the evidence.

This type of deductive exercise has been used by teachers since the age of the agora to build students’ rhetorical skills. Today, unfortunately, it can produce people with a facile ability to argue any position at any time without veering from a worldview that they hold to be sacrosanct.

So what’s the solution? I don’t really have one. Too few of the students I encounter are willing or able to draw reasonable conclusions from evidence they have independently located, so writing exercises that involve inductive reasoning get chucked out the window. It’s frustrating.

Writing as Project

If you’re like me — with a contractual teaching load of seven courses per academic year, plus overloads, committee work, and administrative duties — you tell yourself that you’ll work diligently on those unfinished conference papers and journal manuscripts during the winter holidays. And then life happens, time slips away, and suddenly the spring semester is about to begin.

There are simple tools — which aren’t part of the standard graduate program curriculum, but should be — that can help you become a more productive writer. I’ll mention two.

Stretch & SMART

The stretch goal is your ultimate objective or ambition: the whole project. For example, write a complete draft of a book chapter. SMART is an acronym that describes the actions that need to be taken to reach that objective:

  • Specific — actions must be defined and discrete, such as create a literature review that will be part of the book chapter.
  • Measurable — actions must be countable so that progress can be gauged. Find and read a total of twelve peer-reviewed articles that are relevant to the book chapter. For each article, write a phrase or sentence on, respectively, its methods, findings, and quality.
  • Achievable — create the conditions needed to complete the above tasks. Clear morning schedule, turn off email.
  • Realistic — ensure that the tasks can actually be accomplished. Don’t go down rabbit holes; on the first day select which journals will be searched, a date range, and other limiting criteria.
  • Timeline — establish a schedule with an endpoint. I am devoting one hour each morning to the literature review. If I define my search on Monday and then locate four articles per day, I will have a total of twelve articles by the end of the allotted time on Thursday and can begin writing the literature review on Friday morning.

There are many definitions of Stretch & SMART; if the one above is unclear, others can be found with a quick internet search.

Front Forty & Back Forty

Front Forty & Back Forty maps the tasks that are part of a project and tracks which of those tasks have been completed. The technique was invented by my colleague and illustrator extraordinaire, Susannah Strong. An explanation is here. Make sure to scroll down to the sample map.

Does the Question Determine the Answer?

Regular readers of this blog know that I sometimes ponder the clarity of my assignment and exam prompts (some past posts on this subject are here, here, and here). Students sometimes miss what, in my mind, the question targets, so I revise in the hopes of creating a prompt that is more transparent. But I don’t want prompts to be answerable with a Jeopardy-like regurgitation of facts. I want students to exert some cognitive effort to figure out how to apply concepts that are relevant to the question at hand.

Usually this situation occurs with my undergraduates, but I’m noticing it more frequently with master’s degree students. A recent example is an assignment from my graduate-level introduction to comparative politics course:
