Writing as Assessment

To illustrate the dilemma I presented in my last post: the possible devolution of the final exam for one of my courses.

My Fall 2018 exam was an attempt at an authentic writing exercise, but students had to choose one of two positions and support it with course readings. Since I supplied both the available arguments and the evidence, the exam was really an assessment of rhetorical skill: students never demonstrated an ability to use the concepts that I thought were crucial to the arguments they developed.

For the final exam in Fall 2019, I ended up giving students a choice of arguments — “basis for aid policy to Egypt” and “plan for Louisiana’s future” — and I added this to the instructions for the exam:

Apply relevant concepts like discounting the future, moral hazard, etc.

Students still had to select one of two predetermined positions regardless of the argument chosen, and again I specified the pool of evidence they could draw from. And students still didn’t demonstrate knowledge of concepts listed in the exam’s instructions.

What next? I could have a final exam that asks students to, for example, “make an evidence-based determination of whether moral hazard exists in a location affected by climate change.” But this type of exam prompt might introduce even more problems.

Writing as Learning

My last post discussed writing as a professional endeavor. Today: writing as a device for learning; i.e., why and how we as teachers assign writing to students.

Generally we present our students with some form of expository writing task. Perhaps we call it thesis-driven, discipline-oriented, argumentative, or research-based. Regardless of the label, there is an assumption of students locating relevant primary data by means of different methods that they understand how to use, evaluating the data in an appropriate manner while being aware of their own assumptions, reaching some conclusion, and effectively communicating all of this to an audience.

That’s the ideal. The reality? Students often don’t know how to find primary data, or which methods are best suited for analyzing it. They may not even know what methods are. They assume there is either one right answer, or that all possible answers are equal, because they don’t understand that some answers can be more strongly supported by data than others while even better answers await discovery in the future.

And so we default to assignments that direct students to preferred secondary or tertiary sources (a “text”), tell them to organize their explanations as competitions between two artificial, diametrically opposed positions, or, sometimes, encourage them to dredge up arguments that arrive at positions they already favor. Students learn to hang evidence on a predetermined conclusion rather than derive a conclusion from the evidence.

This type of deductive exercise has been used by teachers since the age of the agora to build students’ rhetorical skills. Today, unfortunately, it can produce people with a facile ability to argue any position at any time without veering from a worldview that they hold to be sacrosanct.

So what’s the solution? I don’t really have one. Too few of the students I encounter are willing or able to draw reasonable conclusions from evidence they have independently located, so writing exercises that involve inductive reasoning get chucked out the window. It’s frustrating.

Writing as Project

If you’re like me — a contractual teaching load of seven courses per academic year, plus overloads, committee work, and administrative duties — you tell yourself that you’ll work diligently on those unfinished conference papers and journal manuscripts during the winter holidays. And then life happens, time slips away, and suddenly the spring semester is about to begin.

There are simple tools — which aren’t part of the standard graduate program curriculum, but should be — that can help you become a more productive writer. I’ll mention two.

Stretch & SMART

The stretch goal is your ultimate objective or ambition: the whole project. For example, write a complete draft of a book chapter. SMART is an acronym describing the actions that need to be taken to reach that objective:

  • Specific — actions must be defined and discrete, such as create a literature review that will be part of the book chapter.
  • Measurable — actions must be countable so that progress can be gauged. Find and read twelve peer-reviewed articles relevant to the book chapter; for each article, write a phrase or sentence on, respectively, its methods, findings, and quality.
  • Achievable — create the conditions needed to complete the above tasks. Clear morning schedule, turn off email.
  • Realistic — ensure that the tasks can actually be accomplished. Don’t go down rabbit holes; on the first day select which journals will be searched, a date range, and other limiting criteria.
  • Timeline — establish a schedule with an endpoint. I am devoting one hour each morning to the literature review: if I define my search on Monday and then locate four articles per day, I will have all twelve articles by the end of the allotted time on Thursday and can begin writing the literature review on Friday morning.

There are many definitions of Stretch & SMART; if the one above is unclear, others can be found with a quick internet search.

Front Forty & Back Forty

Front Forty & Back Forty maps the tasks that are part of a project and tracks which of those tasks have been completed. The technique was invented by my colleague and illustrator extraordinaire, Susannah Strong. An explanation is here. Make sure to scroll down to the sample map.

The Joy of Documenting a Job Done

Not knowing whether one has actually helped students learn is one of the most frustrating aspects of teaching. Assuming an absence of megalomania or the Dunning-Kruger effect, indications that we’ve made a difference are typically quite vague and ambiguous. So I was pleasantly surprised — as in, “hey, maybe students really did benefit from this” — by the results of a knowledge probe that I launched at the beginning and end of the semester in my course on economic development and environmental politics.

The knowledge probe was an ungraded quiz that asked questions about a few basic economic concepts, administered through Google Forms in the first and last weeks of the semester. Results, showing percentage of respondents answering correctly, are below.

Concept                          Pre (N = 21)   Post (N = 16)   % Change
Poverty Trap                          52             100              92
Diminishing Returns to Capital        52              75              44
Skill Matching                         5              88           1,660
Common Pool Resource Problem          48              81              69
Moral Hazard                          38             100             163
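The % Change column is simply the relative change between the pre- and post-test percentages of correct answers. A quick sketch in Python, using values transcribed from the table above, reproduces the column:

```python
# Percentage of respondents answering correctly on the pre-test (N = 21)
# and post-test (N = 16), transcribed from the table above.
pre = {
    "Poverty Trap": 52,
    "Diminishing Returns to Capital": 52,
    "Skill Matching": 5,
    "Common Pool Resource Problem": 48,
    "Moral Hazard": 38,
}
post = {
    "Poverty Trap": 100,
    "Diminishing Returns to Capital": 75,
    "Skill Matching": 88,
    "Common Pool Resource Problem": 81,
    "Moral Hazard": 100,
}

# Relative change: (post - pre) / pre, expressed as a percentage.
pct_change = {k: round((post[k] - pre[k]) / pre[k] * 100) for k in pre}

for concept, change in pct_change.items():
    print(f"{concept}: {change}%")
```

The enormous figure for Skill Matching reflects its tiny pre-test baseline (5%), a reminder that percent change exaggerates gains on small denominators.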

Obviously this wasn’t the perfect tool. Sample sizes are too small for statistical significance. And a slightly different proportion of students reported previously taking a university economics course on the pre-test than on the post-test. But the numbers at minimum suggest that students learned something over the semester, which gave me a sense of satisfaction that I otherwise wouldn’t have.

Registration Deadline TLC 2020

A reminder that the early bird registration deadline for the 2020 APSA Teaching and Learning Conference is December 14.

As I have said before, this conference is not the standard sequence of tedious, badly-attended panel sessions. Attendees join a working group on a particular topic for the length of the conference, and there are hands-on workshops between sessions. And this TLC will convene in glorious Albuquerque, New Mexico, where in 2011 a conversation led to the creation of this blog. Full conference details are at the APSA’s TLC webpage.

Syllabus Design

Last week a colleague and I led a workshop on syllabus design for junior faculty. The workshop focused on a method that I call EA2 — engage, apply, and assess.

The first step in building a syllabus for a course is to identify the essential student learning outcomes (SLOs). For each SLO, students:

  1. Engage with corresponding content.
  2. Practice applying knowledge or skills associated with the SLO.
  3. Get assessed on how well they have achieved the SLO.

Here is an example from my comparative politics syllabus:

Red box is the SLO. Blue box is the content. Green box is the exercise in application — an argumentative writing assignment. Purple box is the assessment.

This sequence is used for each of the course’s SLOs, turning the syllabus into a map that shows exactly what the course consists of and why.

What Do Faculty Think?

As my university’s director of faculty development, charged with designing a new Center for Teaching & Learning, I surveyed faculty to try to get a sense of how they felt about their jobs. Survey results are in and I have done a preliminary sort of the data. Here are my initial impressions:

  • Both full- and part-time faculty derive much satisfaction from helping students learn and from seeing signs that their teaching has had an effect. But not a single respondent referred to student evaluations of teaching. The instrument simply isn’t on instructors’ radar as an informative, useful tool. (Probably because it’s not.)
  • Only 2 of the 79 full-time faculty who completed the survey mentioned collaborating with colleagues to foster student achievement. Teaching seems to be regarded, in the end, as a solitary endeavor.
  • On Likert-scaled questions about teaching, research, and service, full-time faculty were the most satisfied with their teaching (4.3 out of 5) and the least satisfied with their research (3.2). Perhaps this explains why only a handful of both full- and part-time faculty expressed a desire for pedagogical training. Since respondents frequently cited high teaching loads as the main impediment to engaging in more research, opportunities to learn how to teach more efficiently — for example, by spending less time on grading — might be well-received.
  • Although satisfaction with research had the lowest numerical score, responses to open-ended questions about committee service were far more negative than comments about teaching or research. Faculty signaled frustration with the inequitable distribution of service commitments, meetings that were badly managed and time-consuming, and a general lack of concrete outcomes from committee work.
  • In general, faculty feel that there are too many conflicting demands on their time. As a result, they feel forced to reduce the scholarship that — in their minds — is inherent to being a professor. Notable in its absence is any mention of the scholarship of teaching and learning.

The Article Summary

(Photo credit: Joanne H. Lee, Santa Clara University)

Today we have a guest post about teaching the research process by Anne Baker, assistant professor of political science at Santa Clara University. She can be reached at aebaker [at] scu [dot] edu.

Getting students to use academic articles for research papers can be a challenge. In my experience, many students, even those in upper-level courses, are not familiar with search engines such as JSTOR, LexisNexis, or Political Science Complete. And if students do happen to use Google Scholar, they frequently rely on excerpts rather than on full articles, which they may not have access to. So, what can be done to replace these habits with better practices?

In my advanced writing course on the presidency, I have developed a class activity that provides students with the skills they will need to successfully locate and use academic references for their research papers. First, I want them to be able to use the library’s website to access search engines. Second, I want them to understand that research is an iterative process. Sometimes you don’t find what you need, and you should be able to determine why: human error, the need for a wider search net, or the fact that no one has written on the topic (this last possibility always surprises the Google generation). Third, students need to become acquainted with the literature on the presidency, including the subfield’s primary journal, by discovering how research practices in political science have changed over time, even in a subfield that remains largely qualitative.

I have students work in pairs and provide them with two search terms related to the institution of the presidency (e.g., signing statements, executive orders, oath of office). I pick the search terms carefully, knowing that some topics have no scholarship and represent dead ends, while others have later but not earlier scholarship, or vice versa. The first step of the activity provides instructions on how to locate JSTOR on the library’s website and then how to access Presidential Studies Quarterly using JSTOR’s advanced search options. Helpfully, for the purposes of this activity, JSTOR’s holdings of the journal end in 2000; to access later issues, students must use the Wiley database, which they have to figure out how to find.

For each search term, I have students locate one article published in the last few years and another published between 1995 and 2000, for a total of four articles. Next, students identify each article’s research question and the method the authors used, noting whether it is qualitative or quantitative, the sources of data regardless of method, the type of analysis (e.g., text, interviews, statistical), and the date of publication. After they have their four articles and perform this analysis, I ask them to compare the results of both searches. Finally, we have a class discussion in which we explore the roadblocks and challenges they encountered and review how the field has changed over time.

I have found that this activity makes students more likely to cite academic articles in their final research papers and use them more effectively to support their arguments. Students also exhibit a much better understanding of the subfield and are more likely to use the other search engines that they encountered while on the library’s website. And they learn that research takes time and requires shifting your strategies to find the information you need.

More On What Students (Don’t) See

A recent meeting with a student inspired a follow-up to my last post about how students do and do not respond to information in a course syllabus. A month into the semester, the student said that he hadn’t been regularly submitting assignments because he was broke and reluctant to ask his parents for money to buy textbooks.

Here is the relevant section of the syllabus:

Readings

  • This course requires a basic digital subscription to The New York Times: http://www.nytimes.com. Use your university email address for the academic discount.
  • Abhijit Banerjee and Esther Duflo, Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty, Public Affairs, 2011.
  • Several chapters from William Easterly, The Elusive Quest For Growth: Economists’ Adventures and Misadventures in the Tropics, MIT Press, Cambridge, 2001, available for free as an e-book from the library.
  • Articles on the library’s journal databases, or at indicated webpages.

Here is what I had to explain to the student:

  • The New York Times gives free access to up to ten articles per month, and an unlimited number of articles can be read by using the library’s computers.
  • Poor Economics can be checked out from the library at no cost and the book’s full text can be found as a free download after a few seconds of internet searching.
  • As stated in the syllabus, The Elusive Quest For Growth is also available for free via the library’s catalog.

The student’s reaction when I said this? Astonishment. All he had seen — or rather, bothered to investigate — was the price of Poor Economics at the campus bookstore, because that’s what was listed there as the required book for the course.

I can accept a small amount of responsibility for this situation because I discarded the syllabus quiz when I completely retooled the course in 2018. But mostly it seems to be an extreme case of learned helplessness. I was a first-generation college student for whom the expense of college was a major concern, and I have met many people over the years who, like me, found the cost of textbooks prohibitive — long before the existence of rental textbooks, digital editions, and eBay. Our first stop at the beginning of every semester was the library to see if required textbooks were available for check out or on reserve. We also searched local used bookstores, or borrowed books from other students.

So, next year, the syllabus quiz returns, and it will include questions about where to find books.

What Do Students See?

What do you conclude about the organization of this course, specifically the quizzes, based on the image below — part of the homepage for the course website?

The document containing the course syllabus is formatted in a similar manner.

To me, the course obviously contains a series of topical units, each ending with a quiz that tests knowledge of that unit.

Given the number of students who are emailing me questions like “What will Quiz X cover?”, it’s not so obvious to many of them. Apparently students don’t know how to read a syllabus, even when they do read it.