Modeling “Good” Writing

Today we have a guest post from Colin Brown, assistant teaching professor in the Department of Political Science at Northeastern University. He can be reached at colin [dot] brown (at) northeastern [dot] edu.

A few weeks ago, Shana Gadarian made a point on Twitter about explicitly teaching writing that I strongly endorsed. Getting our students to write better will, at minimum, make our job easier. If we’re lucky, it might actually promote better thinking.

For upper-level students, very open-ended prompts sometimes lead to really creative and insightful thinking—and a dissertation is of course the vaguest prompt of all. But our expectations often rest on implicit assumptions about what we think of as “good” political science. Amanda Rosen has written about this in the context of transparency. As she points out, telling students “write a five-page essay” doesn’t acknowledge that “essay” means different things in different fields, and many of our students will not yet know what this even means in political science.

Clarity is critical for essay prompts, especially for introductory students. While long, detailed instructions might help point students toward what they should think about, students new to a field often don’t have the context to know what is most important in a long prompt. To them, any sentence with a question mark might appear to be equally important—causing them to focus on what we thought was a minor point and producing a disconnect between what we want to assess and what we actually assess when our implicit expectations aren’t met.

Here are two attempts at this from my past semester’s intro comparative politics class: one that I think failed, and one that was relatively successful. Students told me that while the instructions in the first were explicit, it was hard to know where to start and which parts to emphasize. With the latter prompt, they said it was clear not only what to do but why they were doing it.

One question I’ve found to be a bit polarizing in talking with colleagues is whether to provide model papers or templates. Is it a better way to make our aims clear, or does it cause students to just parrot back the template? I’ve always found myself on the side of providing models. Graff and Birkenstein’s They Say, I Say has solidified my thinking about why. They identify common rhetorical devices that mirror the most common academic ways of thinking, which they distill into templates that, for example, force students to write down the most obvious counterarguments. Experienced writers have read enough in the field to be able to pick up on these techniques implicitly, but beginners by definition have not. Graff and Birkenstein argue, and I think rightly, that this goes beyond rhetoric to actually learning the ways of thinking. Some students may not learn how to think about the flaws in their own argument, or even that they need to, until they are forced to write them out with this kind of template.

In my own teaching, I’ve found it hard to explain in an abstract sense the need for clear writing over “beautiful” writing—and what students think is “beautiful” writing often feels cluttered and verbose to us. But when students see each other’s models and observe how much they actually understand from clear writing versus what they think is “good” writing, they start to self-diagnose their own excessive prose.

One challenge is that writing for politics requires some discipline-specific skills that might be more or less amenable to templates. Sarah James, George Soroka, and I have a forthcoming JPSE piece on adapting tools from K-12 and composition studies for political science writing. But what we actually want from “good” political science writing often seems more like folk wisdom than a clearly defined standard—Ian Anson’s 2017 article on meaning-making is a notable and valuable exception—so as part of showing students what we want, there’s room for us to make this clearer to ourselves.

 Sarah, George and I will be leading a workshop at APSA TLC 2020 on implementing rubrics to improve student writing—Friday Feb. 7 @ 4:15 pm.

Writing as Assessment

To illustrate the dilemma I presented in my last post, here is the possible devolution of the final exam for one of my courses.

My Fall 2018 exam was an attempt at an authentic writing exercise, but students had to choose one of two positions and use course readings. This meant I supplied the available arguments and evidence, and the exam was actually an assessment of rhetorical skill. Students didn’t demonstrate an ability to use concepts that I thought were crucial for the arguments they had developed.

For the final exam in Fall 2019, I ended up giving students a choice of arguments — “basis for aid policy to Egypt” and “plan for Louisiana’s future” — and I added this to the instructions for the exam:

Apply relevant concepts like discounting the future, moral hazard, etc.

Students still had to select one of two predetermined positions regardless of the argument chosen, and again I specified the pool of evidence they could draw from. And students still didn’t demonstrate knowledge of concepts listed in the exam’s instructions.

What next? I could have a final exam that asks students to, for example, “make an evidence-based determination of whether moral hazard exists in a location affected by climate change.” But this type of exam prompt might introduce even more problems.

Writing as Learning

My last post discussed writing as a professional endeavor. Today: writing as a device for learning; i.e., why and how we as teachers assign writing to students.

Generally we present our students with some form of expository writing task. Perhaps we call it thesis-driven, discipline-oriented, argumentative, or research-based. Regardless of the label, the assumption is that students will locate relevant primary data with methods they understand how to use, evaluate the data in an appropriate manner while staying aware of their own assumptions, reach some conclusion, and effectively communicate all of this to an audience.

That’s the ideal. The reality? Students often don’t know how to find primary data, or which methods are best suited for analyzing it. They may not even know what methods are. They assume there is either one right answer, or that all possible answers are equal, because they don’t understand that some answers can be more strongly supported by data than others while even better answers await discovery in the future.

And so we default to assignments that direct students to preferred secondary or tertiary sources (a “text”), tell them to organize their explanations as competitions between two artificial, diametrically opposed positions, or, sometimes, encourage them to dredge up arguments that arrive at positions they already favor. Students learn to hang evidence on a predetermined conclusion rather than derive a conclusion from the evidence.

This type of deductive exercise has been used by teachers since the age of the agora to build students’ rhetorical skills. Today, unfortunately, it can produce people with a facile ability to argue any position at any time without veering from a worldview that they hold to be sacrosanct.

So what’s the solution? I don’t really have one. Too few of the students I encounter are willing or able to draw reasonable conclusions from evidence they have independently located, so writing exercises that involve inductive reasoning get chucked out the window. It’s frustrating.

Writing as Project

If you’re like me — a contractual teaching load of seven courses per academic year, plus overloads, committee work, and administrative duties — you tell yourself that you’ll work diligently on those unfinished conference papers and journal manuscripts during the winter holidays. And then life happens, time slips away, and suddenly the spring semester is about to begin.

There are simple tools — which aren’t part of the standard graduate program curriculum, but should be — that can help you become a more productive writer. I’ll mention two.

Stretch & SMART

The stretch goal is your ultimate objective or ambition: the whole project. For example, write a complete draft of a book chapter. SMART is an acronym that describes the actions that need to be taken to reach one’s objective:

  • Specific — actions must be defined and discrete, such as create a literature review that will be part of the book chapter.
  • Measurable — actions must be countable so that progress can be gauged. Find and read twelve peer-reviewed articles that are relevant to the book chapter. For each article, write a phrase or sentence on, respectively, its methods, findings, and quality.
  • Achievable — create the conditions needed to complete the above tasks. Clear morning schedule, turn off email.
  • Realistic — ensure that the tasks can actually be accomplished. Don’t go down rabbit holes; on the first day select which journals will be searched, a date range, and other limiting criteria.
  • Timeline — establish a schedule with an endpoint. I am devoting one hour each morning to the literature review. If I define my search on Monday and then locate four articles per day, I will have a total of twelve articles by the end of the allotted time on Thursday and can begin writing the literature review on Friday morning (the arithmetic is sketched below).
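To make the Timeline arithmetic concrete, here is a minimal sketch in Python. The numbers (a Monday setup day, four articles per day, a twelve-article target) come from the example above; the function and its names are my own illustration, not part of any formal SMART tool.

```python
# A minimal sketch of the Timeline arithmetic above: day 1 (Monday) is
# spent defining the search, then four articles are read per day until
# the twelve-article target is reached.

def reading_schedule(target_articles=12, per_day=4, setup_days=1):
    """Return (day_number, articles_read_so_far) for each reading day."""
    schedule = []
    read = 0
    day = 1 + setup_days  # day 1 is reserved for defining the search
    while read < target_articles:
        read = min(read + per_day, target_articles)
        schedule.append((day, read))
        day += 1
    return schedule

for day, total in reading_schedule():
    print(f"Day {day}: {total} articles read so far")
# Day 2: 4, Day 3: 8, Day 4: 12 -> begin writing on day 5 (Friday)
```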

There are many definitions of Stretch & SMART; if the one above is unclear, others can be found with a quick internet search.

Front Forty & Back Forty

Front Forty & Back Forty maps the tasks that are part of a project and tracks which of those tasks have been completed. The technique was invented by my colleague and illustrator extraordinaire, Susannah Strong. An explanation is here. Make sure to scroll down to the sample map.

Does the Question Determine the Answer?

Regular readers of this blog know that I sometimes ponder the clarity of my assignment and exam prompts (some past posts on this subject are here, here, and here). Students sometimes don’t hit what, in my mind, the question targets, so I revise in the hopes of creating a prompt that is more transparent. But I don’t want prompts to be answerable with a Jeopardy-like regurgitation of facts. I want students to exert some cognitive effort to figure out how to apply concepts that are relevant to the question at hand.

Usually this situation occurs with my undergraduates, but I’m noticing it more frequently with master’s degree students. A recent example is an assignment from my graduate-level introduction to comparative politics course.


What Do Grades Mean?

What do grades actually mean? I began pondering this question while designing a course for the fall semester. Theoretically a grade indicates the amount of knowledge or skill that a student possesses. But really? Those of us working in the USA are quite familiar with grade inflation. A final grade of C today probably doesn’t indicate the same level of knowledge or skill proficiency as a C from fifty years ago. There is also the persistent problem of knowing whether our assessment tools are measuring the kinds of learning that we think, or want, them to measure. And it is probably safe to assume that, both in and out of the classroom, a lot of learning is happening that we just aren’t interested in trying to measure. The situation gets even more complex given that — again, in the USA — a “learning activity” often won’t function as intended if students believe that it has no discernible effect on their course grades.

I structure my syllabi so that the sum total of points available from all assessed work is greater than what is needed for any particular final grade. For example, a student might need to accumulate at least 950 points over the semester for an A, but there could be 1,040 points available. I do this to deliberately create wiggle room for students — with so many assignments, students don’t need to get perfect scores on, or complete, all of them. While this leads to higher grades in my courses than if I graded strictly on a bell curve, I want to give students plenty of opportunities to practice, fail, and improve. And I firmly believe that sloppy writing indicates sloppy thinking, while good writing indicates the converse. So in reality what I’m doing with most of my assignments is evaluating the writing abilities of my students.
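As a concrete illustration of that wiggle room, here is a minimal sketch in Python. The 950-point A threshold and the 1,040 available points come from the example above; the remaining cutoffs are hypothetical placeholders, not my actual scale.

```python
# A minimal sketch of the points-based scheme described above. The
# 950-point A threshold and 1,040 available points come from the text;
# the remaining cutoffs are hypothetical placeholders.

TOTAL_AVAILABLE = 1040

CUTOFFS = [  # (minimum points, letter grade), highest threshold first
    (950, "A"),
    (900, "A-"),  # hypothetical
    (870, "B+"),  # hypothetical
    (830, "B"),   # hypothetical
    (800, "B-"),  # hypothetical
]

def final_grade(points_earned: int) -> str:
    """Map accumulated points to a letter grade."""
    for minimum, letter in CUTOFFS:
        if points_earned >= minimum:
            return letter
    return "below B-"

# The wiggle room: a student can skip or lose 90 points' worth of work
# and still earn an A, since 1,040 - 90 = 950.
print(final_grade(TOTAL_AVAILABLE - 90))  # -> A
```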

This system often produces a bimodal grade distribution weighted toward the high end. Expend a lot of effort and demonstrate a certain level of proficiency, and you will get a grade somewhere between an A and a B-. Choose not to expend the effort, or consistently demonstrate an inability to perform at a minimum level, and you will get a D or an F. I’m comfortable with this result, in part because I know from the cognitive science research on learning that repeated exposure and frequent testing build long-term memory.

This leads me to the reason for doubting that grades in my courses mean the same thing as they do in courses where the only assessment is done through mid-term and final exams composed of multiple-choice questions. Yes, the proportion of A’s in the latter might be lower than in the former, but I bet on average my students are retaining more. At least I like to think that’s the case. There is no way for me to be sure.

Research the Write Way

Credit for the subject of today’s post goes to Lindsay LaChapelle, writing center acting director, and Alicia Vaandering, education and instructional design librarian, at my university. They can be contacted, respectively, at lindsay [dot] lachapelle [at] salve [dot] edu and alicia [dot] vaandering [at] salve [dot] edu.

I attended a presentation by the above-mentioned individuals on leveraging the expertise of academic support staff when using research-driven writing assignments. Undergraduates trying to complete these assignments typically lack proficiency in critical skills like:

  • Identifying peer-reviewed journals for literature reviews.
  • Knowing when and how to cite sources.
  • Decoding the rhetorical techniques of scholarly literature.
  • Applying the authentic writing principles of role, audience, and format.

For example, students often locate a pile of sources that they believe are pertinent, and only then begin writing. Or they do the writing first and then seek out what they think are a sufficient number of not-really-that-relevant sources to cite in the text.

Incorporating instruction from your university’s writing and research specialists can help solve these problems. The first step in the process is to consult with writing center and library staff: share assignment information, identify your goals and expectations for the assignment, plan classroom workshop content, and schedule the workshops in relation to assignment due dates.

Workshop topics should reflect the four phases of the research writing process: exploration, reflection, revision, and becoming a creator of information. Each workshop can target one or more of the specific objectives for each phase, as shown in this table. “L” indicates a practice that normally falls within a librarian’s area of expertise, while “W” indicates a focus on writing. Think of the table as a menu from which to choose the skills in which your students most need training.

The classroom workshops allow both academic support staff and the course instructor to interact with students about their work as they proceed through the research and writing process. They can include probes of students’ pre-existing knowledge of college-level research and writing principles and the review of students’ work as they complete different components of the assignment.

Collaborating with librarians and writing specialists benefits both me and my students because it:

  • Reduces redundancy in instruction.
  • Shares the workload for lesson planning, delivering instruction, and assessment.
  • Reduces the instructor-to-student ratio in the classroom.
  • Models the collaborative nature of academic research.
  • Allows writing center staff to identify research-related problems that require librarian support, and vice versa — instead of leaving students in the dark about the kind of specialized help they might need and who can provide that help.

When Education Harms

Alternative title for this post:

Why Many Teacher-Training Programs Should Be Abolished

Perhaps some of you — at least in the USA — have noticed the phenomenon of college students using multi-syllabic words out of context. The student clearly does not know what the word he or she inserted into the sentence actually means.

I used to think this was an attempt to impress me in the hopes of getting a higher grade on the writing assignment — pull a complicated-sounding but inaccurate word from an online thesaurus instead of using something simpler. But perhaps the behavior is really a sign that the student is deficient in some basic literacy skills.

As pointed out in this National Public Radio story, millions of children in the USA do not learn how to read well at an early age because of the unscientific and ineffective methods used by their teachers. If children fall behind in the first few years of primary school, it’s probably difficult for them to become proficient readers later on. I’m now wondering if these deficits in literacy persist all the way into college.

Negatives of Nuance

Related to Simon’s and Amanda’s recent posts about failing to get the results one expects, here is a very simple example.

My first writing assignment prompt this semester for my course on economic development and environmental change read as follows:

Purpose of this response: learn about the nature of poverty and perceptions of the poor.

1. Read the rubric below.

2. Read/watch:

  • MRU: Basic Facts of Growth and Development.
  • Easterly, Ch. 1 and Intermezzo, pp. 5-19.
  • Emily Badger and Quoc Trung Bui, “In 83 Million Eviction Records, a Sweeping and Intimate New Look at Housing in America,” The New York Times.
  • Hans Rosling, “The Magic Washing Machine,” TEDWomen, December 2010.

3. Answer the following question:

  • William Easterly writes, “When those of us from rich countries look at poor countries today, we see our own past poverty” (p. 19). Is this an accurate view of poverty outside and inside the USA? Why?

My intent in asking the question — which I thought was obvious from the “statement of purpose” that prefaced it — was to get students to think about the differences between how they think about the poor at home and about the poor in countries that they’ve probably only seen on the news, if at all. In other words, I wanted them to start thinking about why our perceptions of the poor often depend on where the poor are located.

Instead students wrote about wealth and poverty from a historical perspective — that the USA once had a low level of economic development, as measured by GDP or income per capita, and that’s where other countries are today. The USA developed into a high-income, industrialized society, so probably other countries, even the poorest ones, will eventually do the same. There was no real discussion of the nature of poverty in the USA or why it exists.

So I need to ask a better question, something like “Are the causes of poverty in the USA the same as in other countries? Why?” or “If the USA is an economically developed country, why does it still have poor people?”

Beyond the Essay: Briefing Memos

Today we have a guest post from Vincent Druliolle, an assistant professor at Universidad Carlos III de Madrid. He can be reached at Vincent[dot]Druliolle[at]gmail[dot]com.

Undergraduates are repeatedly told that what they study is somehow relevant for practice, yet most assignments are structured as academic essays—even though only a handful of them will end up opting for an academic career. A few years ago, I decided that my students should have the opportunity to develop non-academic writing skills, and started assigning a briefing memo about an ongoing conflict.

The briefing memo is indeed a format widely used in government, international organisations, consultancies, and NGOs. However, because of the large range of topics and theoretical perspectives covered by my module and the limited number of teaching weeks, I had to find a way of integrating such an activity into my small-group seminars. I came up with the idea of making the memo a preparatory activity for my in-class simulation on peacebuilding and transitional justice.

The briefing memo differs from the traditional essay in both content and format. It is policy-oriented, because it is aimed at practitioners and decision-makers, and it presents information in a concise and attractive manner. It requires critically analysing source material beyond the standard academic literature, selecting what’s most relevant, and presenting it in a way that can convey the complexities of the conflict analysed.

Most students have never written a memo, but I don’t give them any guidelines. Instead, I ask them to look up examples that they can use as models. I prefer to ask the students to present their memos in class and discuss the difficulties of writing them. The first seminar of the simulation is thus about comparing and learning from the work of one’s fellow classmates. For class discussion, I recommend selecting at least one very good memo, a (very) bad one, and a few with significantly different formats and/or content. The greater the variety of memos, the better. I want the students to learn from each other, so I adopt the role of a facilitator, asking them to explain why they’ve chosen a given format and/or content, and fostering a class discussion about these aspects.

Many students admit that, as I warn them beforehand, it’s difficult at first to figure out how to write the memo. Instead of assessing it at this stage, I ask the students to submit a revised version after the simulation that reflects what they’ve learnt from their classmates’ memos. Guidelines about how to write a memo can be provided at this stage or even afterward as part of a debriefing.

While writing the memo is an activity in its own right, in this case it is also a way for the students to acquire the knowledge necessary to participate in the simulation. They learn what information the memo should include because they have to put themselves in the shoes of the actors for whom the memo is written in the first place. In this way, the memo prepares students for the simulation, while the simulation provides an opportunity for them to reflect on the writing of the memo. And for the instructor, memos are quicker (and less boring) to mark than essays.