Regular readers of ALPS might notice error messages at the top of webpages and other glitches. Our web hosting service migrated to a newer version of PHP that is apparently incompatible with certain aspects of WordPress. We are trying to get this sorted out and meanwhile minimize any disruptions to the blog.
I recently graded a writing assignment for one of my courses and I’m wondering if it’s an example of “You can lead students to the education but you can’t make them learn.”
The instructions for the assignment:
You have been given the task of submitting a memo to the National Security Council that answers the following question:
- Will Nigeria remain a single state or divide into separate states?
Download the memo template; use it to format your work. Turn the question above into a declarative sentence and use it as the memo’s executive summary. Write two subsections that support your argument using evidence taken from course readings. Each subsection should be a single paragraph. Reference sources with in-text parenthetical citations.
The information above was repeated by the memo template itself and by the rubric attached to the assignment. From my perspective, the assignment is totally straightforward and the assessment criteria are completely transparent. Yet . . .
Several students wrote memos on the question of “Should?” rather than the actual question of “Will?”
Many students also failed the “Turn the question above into a declarative sentence . . .” part. A few representative examples of what they submitted as an executive summary:
- “In the current course, Nigeria will see a deterioration in multiple states because of the lack of agreement over these issues and the failure of the government to uphold a true democracy. Nigeria is a fledgling state on the verge of an inner collapse due to current trends.”
- “The United States should help Nigeria dissolve into multiple sovereign states, by mediating the separation process. Nigeria is currently facing a slew of ethnic and social conflicts the country is made up of 36 states which legally sew divisions between regional outsiders and regional natives, this has sparked ethnic and religious conflicts.”
- “The best path forward for Nigeria is to remain a single state. Splitting the nation up now would only be detrimental to Nigeria’s ability to control its sphere of influence across the African continent. Splitting Nigeria into multiple states would challenge the work that has gone on for years to make it politically equitable and would not account for the vast cultural differences of the nation.”
And so on.
I’m wondering how I should interpret what happened. Is this simply a case of failing to follow directions? If not, I don’t know how I can make things more obvious.
Prior to submitting a manuscript, authors should check whether its subject matter and length correspond to the aims and scope of the journal. JPSE will publish material that fits into any one of four clearly defined categories: SoTL, political science instruction, reflections on teaching and the academy, and reviews of educational resources. EPS has a similar list of the types of articles it publishes. A manuscript on a topic that falls outside a specified category of interest will likely be rejected before it is sent out for review.
From my own experience, skimming through the contents of a journal’s recent issues can be very helpful in determining whether that journal is an appropriate choice for a manuscript submission.
Similarly, volunteering to act as an anonymous reviewer for JPSE or EPS gives one some insight into what others are submitting and what in the end appears in print. Both journals need more potential reviewers to accelerate the review process. Please contact their editorial boards to volunteer.
Journals often receive many submissions on certain topics but few to none on others, making it difficult for editors to publish diverse content. For JPSE, these topics include civic engagement and intersectionality. The editors encourage people to submit manuscripts that present innovative approaches to these subjects.
Many of you are probably already acquainted with the muddiest point technique — asking students to identify the one aspect of a lesson or assignment that they are the most confused by. Often this is accomplished by distributing index cards for students to write on. This semester I’m using an electronic version in a 200-level honors course on Asia: a survey on our Canvas LMS, completed in the last few minutes of class on days for which some kind of lecture or discussion is scheduled. The survey consists of the question “What are you most curious or confused about from class today?” Students automatically earn one point toward the final grade by answering it.
With a paperless process, I don’t have to try to decipher students’ handwriting. And I have an archive of students’ responses that I don’t have to transport or store.
Far more importantly, the surveys are demonstrating the difference between my knowledge base and that of my students — which I otherwise would be mostly oblivious to.
For example, my mind automatically defaults to thinking in terms of power, authority, and legitimacy whenever I’m confronted with the task of analyzing an authoritarian state. Or I recall concepts like ethnic identity when discussing nationalism. Or I know that geography is political rather than an immutable law of the universe — as demonstrated by the origins of labels like Far East, Middle East, and Near East. This is not the case with the majority of students in the class, given their survey responses so far.
To illustrate the dilemma I presented in my last post: the possible devolution of the final exam for one of my courses.
My Fall 2018 exam was an attempt at an authentic writing exercise, but students had to choose one of two positions and use course readings. This meant I supplied the available arguments and evidence, and the exam was actually an assessment of rhetorical skill. Students didn’t demonstrate an ability to use concepts that I thought were crucial for the arguments they had developed.
For the final exam in Fall 2019, I ended up giving students a choice of arguments — “basis for aid policy to Egypt” and “plan for Louisiana’s future” — and I added this to the instructions for the exam:
Apply relevant concepts like discounting the future, moral hazard, etc.
Students still had to select one of two predetermined positions regardless of the argument chosen, and again I specified the pool of evidence they could draw from. And students still didn’t demonstrate knowledge of concepts listed in the exam’s instructions.
What next? I could have a final exam that asks students to, for example, “make an evidence-based determination of whether moral hazard exists in a location affected by climate change.” But this type of exam prompt might introduce even more problems.
My last post discussed writing as a professional endeavor. Today: writing as a device for learning; i.e., why and how we as teachers assign writing to students.
Generally we present our students with some form of expository writing task. Perhaps we call it thesis-driven, discipline-oriented, argumentative, or research-based. Regardless of the label, we assume that students will locate relevant primary data using methods they know how to apply, evaluate that data appropriately while remaining aware of their own assumptions, reach some conclusion, and effectively communicate all of this to an audience.
That’s the ideal. The reality? Students often don’t know how to find primary data, or which methods are best suited for analyzing it. They may not even know what methods are. They assume there is either one right answer, or that all possible answers are equal, because they don’t understand that some answers can be more strongly supported by data than others while even better answers await discovery in the future.
And so we default to assignments that direct students to preferred secondary or tertiary sources (a “text”), tell them to organize their explanations as competitions between two artificial, diametrically-opposed positions, or, sometimes, encourage them to dredge up arguments that arrive at positions they already favor. Students learn to hang evidence on a predetermined conclusion rather than derive a conclusion from the evidence.
This type of deductive exercise has been used by teachers since the age of the agora to build students’ rhetorical skills. Today, unfortunately, it can produce people with a facile ability to argue any position at any time without veering from a worldview that they hold to be sacrosanct.
So what’s the solution? I don’t really have one. Too few of the students I encounter are willing or able to draw reasonable conclusions from evidence they have independently located, so writing exercises that involve inductive reasoning get chucked out the window. It’s frustrating.
If you’re like me — a contractual teaching load of seven courses per academic year, plus overloads, committee work, and administrative duties — you tell yourself that you’ll work diligently on those unfinished conference papers and journal manuscripts during the winter holidays. And then life happens, time slips away, and suddenly the spring semester is about to begin.
There are simple tools — which aren’t part of the standard graduate program curriculum, but should be — that can help you become a more productive writer. I’ll mention two.
Stretch & SMART
The stretch goal is your ultimate objective or ambition; the whole project. For example, write a complete draft of a book chapter. SMART is an acronym that describes the actions that need to be taken to reach one’s objective:
- Specific — actions must be defined and discrete, such as create a literature review that will be part of the book chapter.
- Measurable — actions must be countable so that progress can be gauged. Each morning, find and read four peer-reviewed articles that are relevant to the book chapter, for a total of twelve. For each article, write a phrase or sentence on its methods, its findings, and its quality.
- Achievable — create the conditions needed to complete the above tasks. Clear morning schedule, turn off email.
- Realistic — ensure that the tasks can actually be accomplished. Don’t go down rabbit holes; on the first day select which journals will be searched, a date range, and other limiting criteria.
- Timeline — establish a schedule with an endpoint. I am devoting one hour each morning to the literature review. If I define my search on Monday, and then subsequently locate four articles per day, then I will have a total of twelve articles by the end of the allotted time on Thursday and can begin writing the literature review on Friday morning.
There are many definitions of Stretch & SMART; if the one above is unclear, others can be found with a quick internet search.
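The timeline in the example above is just arithmetic: define the search on Monday, then locate four articles per day until the target of twelve is reached on Thursday. A minimal sketch of that schedule (the day names, per-day rate, and target are taken from the example, not from any tool):

```python
# Sketch of the SMART timeline above: search defined Monday, then four
# articles located per workday until the twelve-article target is met.

def articles_by_day(per_day=4, target=12, start_day="Tuesday"):
    days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
    total = 0
    schedule = {}
    for day in days[days.index(start_day):]:
        if total >= target:
            break  # target met; remaining days are free for writing
        total = min(total + per_day, target)
        schedule[day] = total
    return schedule

print(articles_by_day())
# {'Tuesday': 4, 'Wednesday': 8, 'Thursday': 12}
```

The point of making the goal measurable is exactly this: the cumulative count per day tells you immediately whether you are on schedule.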
Front Forty & Back Forty
Front Forty & Back Forty maps the tasks that are part of a project and tracks which of those tasks have been completed. The technique was invented by my colleague and illustrator extraordinaire, Susannah Strong. An explanation is here. Make sure to scroll down to the sample map.
Not knowing whether one has actually helped students learn is one of the most frustrating aspects of teaching. Assuming an absence of megalomania or the Dunning-Kruger effect, indications that we’ve made a difference are typically quite vague and ambiguous. So I was pleasantly surprised — as in, “hey, maybe students really did benefit from this” — by the results of a knowledge probe that I launched at the beginning and end of the semester in my course on economic development and environmental politics.
The knowledge probe was an ungraded quiz that asked questions about a few basic economic concepts, administered through Google Forms in the first and last weeks of the semester. Results, showing percentage of respondents answering correctly, are below.
| Concept | Pre-test, % correct (N = 21) | Post-test, % correct (N = 16) | % change |
|---|---|---|---|
| Diminishing Returns to Capital | 52 | 75 | 44 |
| Common Pool Resource Problem | 48 | 81 | 69 |
Obviously this wasn't a perfect tool. The sample sizes are too small for meaningful statistical inference, and a slightly different proportion of students reported having previously taken a university economics course on the pre-test than on the post-test. But the numbers at minimum suggest that students learned something over the semester, which gave me a sense of satisfaction I otherwise wouldn't have had.
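For what it's worth, the third number in each row of the results is consistent with the percent change in the share of students answering correctly from pre-test to post-test, rounded to a whole percentage. A quick check of that reading:

```python
# Percent change in the share answering correctly, pre-test to post-test.
# The input figures are taken from the results above; rounding to whole
# percentages reproduces the third number reported for each concept.

def pct_change(pre, post):
    return round((post - pre) / pre * 100)

print(pct_change(52, 75))  # Diminishing Returns to Capital -> 44
print(pct_change(48, 81))  # Common Pool Resource Problem  -> 69
```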
A reminder that the early bird registration deadline for the 2020 APSA Teaching and Learning Conference is December 14.
As I have said before, this conference is not the standard sequence of tedious, badly attended panel sessions. Attendees join a working group on a particular topic for the length of the conference, and there are hands-on workshops between sessions. This TLC will convene in glorious Albuquerque, New Mexico, where a conversation in 2011 led to the creation of this blog. Full conference details are at the APSA's TLC webpage.
Last week a colleague and I led a workshop on syllabus design for junior faculty. The workshop focused on a method that I call EA2 — engage, apply, and assess.
The first step in building a syllabus for a course is to identify the essential student learning outcomes (SLOs). For each SLO, students:
- Engage with corresponding content.
- Practice applying knowledge or skills associated with the SLO.
- Get assessed on how well they have achieved the SLO.
Here is an example from my comparative politics syllabus:
Red box is the SLO. Blue box is the content. Green box is the exercise in application — an argumentative writing assignment. Purple box is the assessment.
This sequence is used for each of the course’s SLOs, turning the syllabus into a map that shows exactly what the course consists of and why.