Challenges in Using Policy Briefs for Assessment

Today we have a guest post from Simon Lightfoot, Professor of Politics and the Faculty of Social Science’s Pro-Dean for Student Education, University of Leeds. He can be contacted at S [dot] J [dot] Lightfoot [at] leeds [dot] ac [dot] uk.

In a recent article I reflected on my use of policy briefs as an assessment task on a final-year module, Politics of Aid. What surprised me as I was writing the article was how many expected and unexpected challenges arose from decisions I had made in the design of the module.

I expected there to be some student concern about a relatively novel assessment task introduced in the final year. To counter that, I first encouraged and later made obligatory the submission of a draft. The unexpected challenge was that many students were unsure how to deal with drafts. There were always calls for more generic advice, despite the fact that each student could get tailored individual feedback on their draft.

I thought that students would welcome individual feedback but overlooked the fact that personalised feedback can be both specific and very personal. Just as academics bemoan the infamous Reviewer 2 (who turns out to have a Twitter feed dedicated to their special brand of reviews), students take feedback just as personally as we do. It became clear that we need to ensure that students know how to use feedback if drafts are to be beneficial, and that students need to understand that revising one's writing is a central part of the research process.

The drafting/redrafting issue has grown in significance now that the policy brief is 100% of the module assessment. Though intended to reduce the assessment burden on students, the change just raised the stakes: it became a one-strike assessment task, which caused some students to feel more pressured. At 1,500 words, the policy brief is very short compared to many other assessment tasks, yet it must demonstrate a high level of research, synthesis, and structure, which requires time. Convincing students to dedicate enough time to their writing proved to be another challenge. As Chagas-Bastos and Burges (2018) found, 'it is consequently necessary to continuously emphasize the importance of revising and editing, actively encouraging students to deliberately think in terms of drafts' if they are to produce good policy briefs.

Monday Morning Advice

A few items from elsewhere that might be of help for instructors who are new to teaching online:

First, here are five reasons not to use timed exams. In my opinion these arguments apply whether one is teaching online or face-to-face.

Second, here are some very practical, easy-to-implement techniques for becoming a better writer — suitable for both professors and students.

Last, here are methods of identifying whether your employer's financial viability has suddenly gotten much worse, from editorials that I wrote in 2017 and 2019.

The Benefits of Rubrics

Today we have a guest post from Sarah E. James (sarah [dot] james [at] g [dot] harvard [dot] edu), Colin Brown (colin [dot] brown [at] northeastern [dot] edu), and George Soroka (soroka [at] fas [dot] harvard [dot] edu).

At the 2020 APSA Teaching and Learning Conference, we led a workshop on writing rubrics for the political science classroom. Rubrics may not be a cure-all for student writing, but we do believe that college faculty underestimate their benefits for teaching advanced writing skills. They can also be a powerful tool for developing a mindful teaching practice.

The use of rubrics is extremely common at the K-12 level (at least here in the U.S.), and there is considerable evidence to support their effectiveness at teaching beginning writing skills (Heidi Andrade’s work is a good starting point, for example, here, here, and here). There has been less evidence for their effectiveness at the university level, but the few existing studies point in the same general direction as the elementary ed studies:

  • Rubrics help students learn specific skills, and make them aware of their own learning processes.
  • Rubrics make grading more efficient, while simultaneously increasing student perceptions of how fair the grading is.
  • Rubrics help expose aspects of the “hidden curriculum” in our writing expectations and may help the most disadvantaged writers the most.

Our key takeaway for participants in our workshop: rubrics let you deal with subjective measures in a transparent way, without being arbitrary. Generating a good rubric requires you to be clear about what you actually value and what you expect students to be able to demonstrate. From the students’ side, this is a clear signal of where you want them to spend most of their time. From the instructor’s side, this is a good way to make sure that you’re following the adage of “if you didn’t teach it, don’t test it.” And when we think of the genre-specific writing skills we demand of students, this sort of clarity can be extremely helpful for students who may “write well” in a general sense but have no experience of how things like evidence, counterarguments, citations, and literature reviews work in political science specifically.

Rubrics can only capture so much, and when you use one, you are limited to only a certain number of skills or aspects in your assessment. At our TLC workshop, the most common concern our participants had was, “what happens if we end up having to give a good grade to a bad paper?” This is a (small) risk, but we encouraged our participants to step back for a second and think about the rubric as a mindful teaching exercise. If a paper feels like it should get a lower grade than the rubric suggests, are there skills that should be included explicitly in the rubric? (They can be added next time!) If not, then what’s causing you to think the grade should be lower—and is it really something that should be entering into your assessment?

For those interested in designing their own rubrics, we provided examples for an introductory and an upper-level course, as well as a worksheet to help in setting it up. Our examples are designed to focus much more on discipline-specific skills (using evidence, critical thinking, professional norms) than on the quality of prose itself, and our instinct (tested to limited effect in our JPSE article) is that this is the most productive use of rubrics in the college-level classroom. But the structure of rubrics allows them to be adapted to the instructor’s aims, whatever they are—and they force the instructor to make those aims clear to themselves and to their students.

The Muddiest Point, Updated

Many of you are probably already acquainted with the muddiest point technique, which asks students to identify the one aspect of a lesson or assignment that they find most confusing. Often this is accomplished by distributing index cards for students to write on. This semester I’m using an electronic version in a 200-level honors course on Asia: a survey on our Canvas LMS, completed in the last few minutes of class on days when some kind of lecture or discussion is scheduled. The survey consists of the question “What are you most curious or confused about from class today?” Students automatically earn one point toward the final grade by answering it.

With a paperless process, I don’t have to try to decipher students’ handwriting. And I have an archive of students’ responses that I don’t have to transport or store.
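
One small bonus of that archive: because the responses live in a structured export rather than a shoebox of index cards, they can be skimmed programmatically for recurring themes. Below is a minimal sketch under assumed conditions; the file name and the "response" column are hypothetical stand-ins for whatever your LMS export actually produces, not Canvas's real export format.

```python
# Minimal sketch: tallying recurring words in an archive of muddiest-point responses.
# The file name and "response" column are hypothetical; adjust to match your own export.
import csv
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "i", "it",
             "that", "was", "what", "how", "about", "for", "we", "on", "why"}

def top_terms(path, column="response", n=15):
    """Return the n most frequent non-trivial words across all responses."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for word in row[column].lower().split():
                word = word.strip(".,;:?!\"'()")
                if word and word not in STOPWORDS:
                    counts[word] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for term, freq in top_terms("muddiest_point_responses.csv"):
        print(f"{term}: {freq}")
```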

Far more importantly, the surveys are demonstrating the difference between my knowledge base and that of my students, a difference I would otherwise be mostly oblivious to.

For example, my mind automatically defaults to thinking in terms of power, authority, and legitimacy whenever I’m confronted with the task of analyzing an authoritarian state. Or I recall concepts like ethnic identity when discussing nationalism. Or I know that geography is political rather than an immutable law of the universe — as demonstrated by the origins of labels like Far East, Middle East, and Near East. This is not the case with the majority of students in the class, given their survey responses so far.

Seen Exams

Everyone’s working with seen papers…

This past semester I got to try out using a seen exam for the first time.

For those of you unfamiliar with it, you publish the exam paper some time ahead of the sitting date (a week, in this case), so students can prepare their responses, which they then write under controlled exam conditions (without notes or materials to hand).

The logic of this is that it provides a more meaningful test of students’ abilities, since they still have to revise, plan and produce, but without the added peril of “I can’t find a question I can do” or “I answered the question wrong”.

Having inherited the format from a colleague, I was keen to try it out, especially since last year’s use of an open-book, online exam had worked very well. Indeed, this year’s module was with the same students.

The practicalities are very simple indeed: an email to the class and a posting on the VLE at the appropriate time, plus being available through the week to answer any queries or clarifications.

The day before the exam I emailed everyone again, just to run through any points that had come up and to remind them that the format meant some things were different from a ‘normal’ exam.

Firstly, my expectations on factual accuracy would be higher, since they’d have had time to prepare.

Secondly, I’d like to see more references to the literature: not direct quotes, but certainly mention of relevant authors.

And most importantly, I’d expect clear organisation and argument in each of their answers.

So?

Having now finished my marking, I’m able to say a bit about how this all played out.

As with the other format, this approach seems to be good for pulling up the tail of students who might otherwise have found things difficult: even the worst-performing student still produced relevant answers with some detail.

Likewise, the almost total absence of factual errors and of very short answers was a pleasant development, suggesting everyone had actually done work for the exam.

So the knowledge front seems to be positive.

Having seen a few students straight after the exam, I’m not sure that they found it any less stressful though: yes, they knew what the questions would be, but they were also conscious that I would be marking in line with that, so maybe their extra work wouldn’t count for anything.

While we’ve yet to complete all the feedback cycle, I think that anxiety is understandable, but hasn’t played out. Instead, the performance of the class has been strengthened and their capacity in the subject will be that bit more for future modules they take.

In sum, this exam has further convinced me that closed-book, unseen exams aren’t that useful, either in measuring knowledge or in managing student stress: unless I have to use them in future, I’m not going to.

Writing as Assessment

To illustrate the dilemma I presented in my last post: the possible devolution of the final exam for one of my courses.

My Fall 2018 exam was an attempt at an authentic writing exercise, but students had to choose one of two positions and use course readings. This meant I supplied the available arguments and evidence, and the exam was actually an assessment of rhetorical skill. Students didn’t demonstrate an ability to use concepts that I thought were crucial for the arguments they had developed.

For the final exam in Fall 2019, I ended up giving students a choice of arguments — “basis for aid policy to Egypt” and “plan for Louisiana’s future” — and I added this to the instructions for the exam:

Apply relevant concepts like discounting the future, moral hazard, etc.

Students still had to select one of two predetermined positions regardless of the argument chosen, and again I specified the pool of evidence they could draw from. And students still didn’t demonstrate knowledge of concepts listed in the exam’s instructions.

What next? I could have a final exam that asks students to, for example, “make an evidence-based determination of whether moral hazard exists in a location affected by climate change.” But this type of exam prompt might introduce even more problems.

The Joy of Documenting a Job Done

Not knowing whether one has actually helped students learn is one of the most frustrating aspects of teaching. Assuming an absence of megalomania or the Dunning-Kruger effect, indications that we’ve made a difference are typically quite vague and ambiguous. So I was pleasantly surprised — as in, “hey, maybe students really did benefit from this” — by the results of a knowledge probe that I launched at the beginning and end of the semester in my course on economic development and environmental politics.

The knowledge probe was an ungraded quiz that asked questions about a few basic economic concepts, administered through Google Forms in the first and last weeks of the semester. Results, showing percentage of respondents answering correctly, are below.

Concept                          Pre (N = 21)   Post (N = 16)   % Change
Poverty Trap                           52             100              92
Diminishing Returns to Capital         52              75              44
Skill Matching                          5              88           1,660
Common Pool Resource Problem           48              81              69
Moral Hazard                           38             100             163

Obviously this wasn’t the perfect tool. Sample sizes are too small for statistical significance. And a slightly different proportion of students reported previously taking a university economics course on the pre-test than on the post-test. But the numbers at minimum suggest that students learned something over the semester, which gave me a sense of satisfaction that I otherwise wouldn’t have.

Petty Officer Usherwood?

It’s nearly Christmas, if anyone’s short of ideas

Yesterday I had about a third of my students turn up for class. Possibly that was related to the deadline for an essay due for me later that day – certainly the number who turned up was about the same as the number who’d already submitted the work.

Since I’ve known that this was going to be an awkward timing since the start of semester, back in early October, I’d left some of the session open, so I could be flexible about what to do, including not asking for any specific prep beyond the general reading.

In the event, I spent a block of the class talking about assessment. Unsurprisingly, since they’d already submitted, none of the students who turned up wanted to talk about the essay, but they did want to talk about the exam, which’ll be after the Christmas break.

So we discussed how that would work (we’re doing a seen-paper format, so they get it a week beforehand) and what I was looking for.

So what’s the problem?

Well, the people who turned up yesterday are the ones who most likely didn’t need the discussion, either because they’d have worked out the salient points already, or because they’d have asked. Indeed, the student who asked about the exam some weeks ago was there.

The issue is for those who didn’t turn up: the ones still working on their essay a couple of hours ahead of the deadline, the ones with poor attendance throughout the semester.

This is a classic case of the Matthew Principle: those that have, get more. And it’s not really helped by me being a bit petty-minded.

I could have waited until next week’s final class to discuss the exam – and probably someone who wasn’t there yesterday will ask about it – but I have also spent two months trying to reinforce the message that the rational choice for a student who’s finding it hard going is to come to class, because that’s the best place to get the essentials together, and to get questions answered.

Partly, this is about incentives. For my other class, on negotiation, I have great attendance, mainly because the classwork is very active and because the assessment is about what you’ve done in class. In this case, the work is more mixed and it’s not directly linked.

Maybe I need to be thinking about whether I can change that, in a way that works for the subject matter.

But maybe I also need to think more about how much this is a case of taking horses to water: where do my responsibilities lie and where do they end?

Maybe one for class discussion next week.

Exam Study Skills Advice

If you’re like me, you are finding that more students need help with basic study skills than previously. You might also find it difficult to explain learning strategies that for many academics have been automatic behaviors since elementary school. Loleen Berdahl of the University of Saskatchewan has created a handy screen-capture video about studying effectively for final exams, available here, just in time for the end of the semester in the USA and Canada. Feel free to share it with your students.

Helping students do better in essays: what are the options of the seminar tutor?

This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Liudmila Mikalayeva.

Daniela Jaklová Střihavková tells her story as a highly motivated seminar tutor in a first-year course on social work. The problem she sets out to solve is a mismatch between the grader’s expectations and the students’ performance: in 2016-2017 the average grade for exam essays was between 16 and 18 points, far from the maximum of 30. What can the tutor do to close this gap between expectations and performance?

The setup of the course will be familiar to many young instructors: while a senior professor gives weekly content-focused lectures and grades students’ work at the end of the course, a junior tutor is in charge of interactive seminars where students discuss and apply knowledge. The examination consists mainly of two essays checking students’ understanding of the nature of social work and of the social worker’s role, formulated in proper academic language. The essays are challenging for students, since they receive little training in writing them in the course itself.

Jaklová Střihavková very reasonably suggests improving the course through constructive alignment: course design should guarantee the fit between teaching goals, teaching methods and assignments. So if a major teaching goal is to enable students to define the role of the social worker in a concrete case, they should receive guidance on how to do it, have an opportunity to practice it and receive feedback before the exam.

Practically, the author introduced two additional seminar sessions where students practiced a task similar to that of the exam essay. Combining individual and group work, she provided space for the students to confront the complex task and reflect on their own performance. While she cannot prove that the essays received better grades because of these changes, both the grader and the students were happier with the learning outcome.

The seminar tutor’s effort to bridge the gap between the grader’s expectations and students’ actual work was, however, only partly successful. Even after the additional seminars, students continued to feel unsure about what the grader expected from them, and the grader was still unhappy with how they used disciplinary vocabulary. I see three issues explaining the persistence of the gap.

A relatively minor point is that oral exercises may not be effective enough to support students’ success in written tasks. A much more important drawback, underlined by the author herself, is the absence of clear and explicit grading criteria: the professor would need to make the effort to spell out the requirements. And, most significantly, the course structure itself is at the core of the problem: the person grading students’ work is not directly in touch with the students and is senior enough to have forgotten how challenging it is for undergraduates to understand and use academic jargon and to navigate the often-implicit expectations placed on their work.

Jaklová Střihavková is right to point out that students need space to reflect on the curriculum if learning outcomes are to improve, but junior and senior instructors alike should become more reflective as well. Clarifying expectations, aligning content, teaching approaches, and assignments, and communicating among themselves and with students are key, and cannot be replaced by teaching experience alone. Students as well as instructors will benefit.