This semester I am teaching fifty-four undergraduate students in two courses. At present, ten of these students have D or F averages. One of them has already stated that he will withdraw; since it’s not my responsibility to ensure that he actually follows through before the semester ends, I will limit my analysis to the remaining nine students.
Both courses have had six assignments so far, so fifty-four separate opportunities for the nine students to receive feedback on their writing. But only forty-one of these assignments, or approximately 75%, were submitted.
Students saw my feedback, in the form of brief comments and a marked rubric, on only nine of the forty-one submitted assignments — approximately 22%. How do I know this? The “student viewed” time stamp feature of Canvas, discussed previously and shown below. If there isn’t a “Student Viewed Document” message, the student didn’t look at the assignment after I graded it.
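The arithmetic behind these percentages is simple enough to check in a few lines. This is just a sketch using the figures stated in this post; nothing here comes from Canvas itself:

```python
# Back-of-the-envelope check of the feedback numbers in the post.
students = 9            # D/F students still enrolled
assignments_each = 6    # assignments per student so far
submitted = 41          # assignments actually turned in
feedback_viewed = 9     # submissions whose feedback was opened in Canvas

opportunities = students * assignments_each   # 9 x 6 = 54
submission_rate = submitted / opportunities   # roughly 0.76
viewing_rate = feedback_viewed / submitted    # roughly 0.22

print(f"{submission_rate:.0%} submitted; {viewing_rate:.0%} of those viewed feedback")
```

Rounded, that is 76% of assignments submitted and 22% of submissions whose feedback was ever opened.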
Six of the nine students have not looked at any of my feedback on past assignments, despite earning failing grades. I will now email them to explain the likely relationship between their grades and their unwillingness to take advantage of the feedback I provide. This action on my part assumes that students will read the email, which isn’t guaranteed. But if this doesn’t catch their attention, mid-semester grade reports might.
More on creating community in an online course (previous posts are here and here): a collaborative writing exercise.
For several years, I’ve used memos as an authentic writing assignment. Or tried to. Often the results haven’t met my expectations. I have had students write multiple complete memos in a course, all in the same format, assuming that they will apply my feedback from the previous memo to the next one. Instead, students repeat the same mistakes over and over again.
For the upcoming semester, I decided to turn memo writing into a group activity, on the chance that collaborating with peers might produce better results. As in previous semesters, I will provide source material and the prompt. In a graded individual assignment, each student will write only specific portions of a memo, described below in the sample instructions and rubric. Students will be able to use a new memo template and guidelines that will be available on Canvas.
From where I stand, information literacy skills are important, because they help one identify and demolish specious claims made by authority figures. An assignment that, for example, forces students to locate three peer-reviewed journal articles is practice in finding credible information. It also allows students to determine whether a topic is suitable for a semester-long research project.
To me, these outcomes are both beneficial and rather obvious. But from the students’ perspective, the assignment could simply be yet another meaningless hoop to jump through on the way to getting another A+ on a transcript. Given the sources many students cited in the different stages of their storymap projects, it looks like too many of them customarily take the latter approach to research.
Therefore, in future courses that involve research projects, I should create assignments that are limited to the task of locating scholarly sources and place those assignments at the beginning of the semester. I should demonstrate why this skill is useful outside of the classroom.
I’ve noticed a similar problem with student writing — really basic errors that indicate a lack of proofreading. I don’t expend more effort evaluating a student’s work than the student did creating it. But I do know that sloppy writing indicates sloppy thinking and that the former advertises one’s propensity for the latter to the rest of the world. Again, I should demonstrate early in the semester why it’s important to proofread one’s work before it reaches an audience. My favorite example? The missing Oxford comma that cost a dairy company US$5 million.
I’m also seeing, from the last few journal article worksheets students are submitting, that many still do not have a clear understanding of how evidence-based arguments are constructed in academic literature. An author typically poses a research hypothesis or question at the beginning of a journal article and concludes with the same hypothesis or question reworded as a declarative statement. For example, “Why is the sky blue?” in the introduction becomes “The sky is blue because . . . ” in the conclusion. Yet on worksheets some students are writing that the hypothesis is about one thing while the conclusion is about some other thing. So again, students need practice in understanding the components of a written argument in scholarly literature, and that practice needs to happen early in the semester.
In principle I’m talking about scaffolding. But many of my assignments are attempts at getting students to build several different skills simultaneously. I think I need to disentangle my goals for these assignments so that they target only one skill at a time.
Per my philosophy of never letting a good crisis go to waste, I’ve already started thinking about what has and hasn’t worked in this unusual semester.
In my globalization course, students seem to have sufficiently mastered the tools needed to create storymaps. To my surprise, nearly all of them learned how to use the software during two in-class lessons led by our digital scholarship librarian.
Yet with the semester almost over, many still don’t seem to understand that U.S. News and Forbes are not peer-reviewed academic journals. If I teach this course again — it’s rotating to a colleague next year — I should probably include assignments at the beginning of the semester in which students are explicitly graded on their ability to locate appropriate sources. Currently this information literacy skill is only assessed through the rubric attached to the three storymap assignments.
In my comparative politics course, I will soon try to run my Gerkhania simulation online for the first time. To make things even more interesting, the class is down to eight students and the simulation is heavily modified from previous versions. I’ll report what happens in a few weeks.
Article’s subject—what is the question, puzzle, or problem examined?
What and where is the thesis?
What is the theoretical perspective (rational actor, culture, structure)? How do you know this?
What are the independent variables (causes) examined?
What is the dependent variable (effect) examined?
What is the conclusion of the author(s)?
My reason for doing this, other than filling up the remainder of an extended semester? It had become clear before the campus closed that students were often skipping over assigned journal articles and reading only the accompanying news stories that illustrated the articles’ theoretical arguments.
Some students are still unable to correctly identify an author’s thesis or conclusions — despite the classroom exercises during the first half of the semester. So in the future, students are going to get more instruction and more (graded) practice in how to read academic literature.
While I don’t comment on student writing nearly as much as some professors do, I expect students to at least read what I do write. A colleague recently pointed out that our Canvas LMS displays a date stamp at the top right of the Speedgrader web page when a student has viewed a previously submitted assignment after an instructor has commented on it. I had never noticed this before, I guess because the date stamp’s font is rather small. Here it is, indicated by the red arrow:
This feature became very useful in a course in which students are required to write a series of memos that all have the same format. Last week, a student taking the course sent me this email:
I’m not sure what is expected from the assignments, my memo 3 was completely different from 2 yet your comment says see comments about memo 2. I am a second semester senior doing grad classes that had a 3.6 gpa last semester. Somehow I’m failing every single assignment in a freshman level class, while still attending every single class except one and participating in class basically every session.
I looked at the student’s submissions for memos 1, 2, and 3 — no date stamp. My comments had been ignored. My reply to the student’s email:
The memo is a standard method of efficiently communicating information that is used in a wide variety of professional environments. I’m surprised you haven’t yet had much of an opportunity to practice this form of writing, so here is what I am willing to do: you can earn up to 10 points by emailing me by noon on Friday a memo that discusses how well you incorporated my feedback on your Memo 1, provided by my comments on your work on Canvas, into your Memo 2, and the same for Memo 3 in respect to my comments on your Memo 2.
Completion of my “extra credit opportunity” would have required the student to admit that he had not read my comments and thus ignored the feedback I had provided.
I recently graded a writing assignment for one of my courses and I’m wondering if it’s an example of “You can lead students to the education but you can’t make them learn.”
The instructions for the assignment:
You have been given the task of submitting a memo to the National Security Council that answers the following question:
Will Nigeria remain a single state or divide into separate states?
Download the memo template; use it to format your work. Turn the question above into a declarative sentence and use it as the memo’s executive summary. Write two subsections that support your argument using evidence taken from course readings. Each subsection should be a single paragraph. Reference sources with in-text parenthetical citations.
The information above was repeated in the memo template itself and in the rubric attached to the assignment. From my perspective, the assignment is totally straightforward and the assessment criteria are completely transparent. Yet . . .
Several students wrote memos on the question of “Should?” rather than the actual question of “Will?”
Many students also failed the “Turn the question above into a declarative sentence . . .” part. A few representative examples of what they submitted as an executive summary:
“In the current course, Nigeria will see a deterioration in multiple states because of the lack of agreement over these issues and the failure of the government to uphold a true democracy. Nigeria is a fledgling state on the verge of an inner collapse due to current trends.”
“The United States should help Nigeria dissolve into multiple sovereign states, by mediating the separation process. Nigeria is currently facing a slew of ethnic and social conflicts the country is made up of 36 states which legally sew divisions between regional outsiders and regional natives, this has sparked ethnic and religious conflicts.”
“The best path forward for Nigeria is to remain a single state. Splitting the nation up now would only be detrimental to Nigeria’s ability to control its sphere of influence across the African continent. Splitting Nigeria into multiple states would challenge the work that has gone on for years to make it politically equitable and would not account for the vast cultural differences of the nation.”
And so on.
I’m wondering how I should interpret what happened. Is this simply a case of failing to follow directions? If not, I don’t know how I can make things more obvious.
Today we have a guest post from Colin Brown, assistant teaching professor in the Department of Political Science at Northeastern University. He can be reached at colin [dot] brown (at) northeastern [dot] edu.
A few weeks ago, Shana Gadarian made a point on Twitter about explicitly teaching writing that I strongly endorsed. Getting our students to write better will, at minimum, make our job easier. If we’re lucky, it might actually promote better thinking.
For upper-level students, very open-ended prompts sometimes lead to really creative and insightful thinking—and a dissertation is of course the vaguest prompt of all. But our expectations often rest on implicit assumptions about what we think of as “good” political science. Amanda Rosen has written about this in the context of transparency. As she points out, telling students “write a five-page essay” doesn’t acknowledge that essay means different things in different fields, and many of our students will not yet know what this even means in political science.
Clarity is critical for essay prompts, especially for introductory students. While long, detailed instructions might help point students toward what they should think about, students new to a field often don’t have the context to know what is most important in a long prompt. To them, any sentence with a question mark might appear equally important, causing them to focus on what we thought was a minor point. When our implicit expectations aren’t met, a disconnect opens up between what we want to assess and what we actually assess.
Here is what I think was a failed attempt and a relatively successful one at doing this in my past semester’s intro comparative politics class. Students told me that while the instructions in the first prompt were explicit, it was hard to know where to start and which parts to emphasize. With the latter prompt, they said it was clear not only what to do but also why they were doing it.
One question I’ve found to be a bit polarizing in talking with colleagues is whether to provide model papers or templates. Are they a better way to make our aims clear, or do they cause students to just parrot back the template? I’ve always found myself on the side of providing models. Graff and Birkenstein’s They Say, I Say has solidified my thinking about why. They identify common rhetorical devices that mirror the most common academic ways of thinking, which they turn into templates that, for example, force students to write down the most obvious counterarguments. Experienced writers have read enough in the field to pick up on these techniques implicitly, but beginners by definition have not. Graff and Birkenstein argue, and I think rightly, that this goes beyond rhetoric to actually learning the ways of thinking. Some students may not learn how to think about the flaws in their own argument, or even that they need to, until a template like this forces them to write those flaws down.
In my own teaching, I’ve found it hard to explain in an abstract sense the need for clear writing over “beautiful” writing—and what students think is “beautiful” writing often feels cluttered and verbose to us. But when students see each other’s models and observe how much they actually understand from clear writing versus what they think is “good” writing, they start to self-diagnose their own excessive prose.
One challenge is that writing for politics requires some discipline-specific skills that may be more or less amenable to templates. Sarah James, George Soroka and I have a forthcoming JPSE piece on adapting tools from K-12 and composition studies for political science writing. But what we actually want from “good” political science writing often seems more folk wisdom than clear definition—Ian Anson’s 2017 article on meaning-making is a notable and valuable exception—so as part of showing students what we want, there’s room for us to make this clearer to ourselves.
Sarah, George and I will be leading a workshop at APSA TLC 2020 on implementing rubrics to improve student writing—Friday Feb. 7 @ 4:15 pm.
My last post discussed writing as a professional endeavor. Today: writing as a device for learning; i.e., why and how we as teachers assign writing to students.
Generally we present our students with some form of expository writing task. Perhaps we call it thesis-driven, discipline-oriented, argumentative, or research-based. Regardless of the label, there is an assumption of students locating relevant primary data by means of different methods that they understand how to use, evaluating the data in an appropriate manner while being aware of their own assumptions, reaching some conclusion, and effectively communicating all of this to an audience.
That’s the ideal. The reality? Students often don’t know how to find primary data, or which methods are best suited for analyzing it. They may not even know what methods are. They assume there is either one right answer, or that all possible answers are equal, because they don’t understand that some answers can be more strongly supported by data than others while even better answers await discovery in the future.
And so we default to assignments that direct students to preferred secondary or tertiary sources (a “text”), tell them to organize their explanations as competitions between two artificial, diametrically opposed positions, or, sometimes, encourage them to dredge up arguments that arrive at positions they already favor. Students learn to hang evidence on a predetermined conclusion rather than derive a conclusion from the evidence.
This type of deductive exercise has been used by teachers since the age of the agora to build students’ rhetorical skills. Today, unfortunately, it can produce people with a facile ability to argue any position at any time without veering from a worldview that they hold to be sacrosanct.
So what’s the solution? I don’t really have one. Too few of the students I encounter are willing or able to draw reasonable conclusions from evidence they have independently located, so writing exercises that involve inductive reasoning get chucked out the window. It’s frustrating.
I’ve always considered myself an approachable teacher: someone students can come to with questions or worries, or just to talk. And from what I hear, I am considered to be approachable.
Still, I am noticing something that worries me. I have been holding open office hours for about nine years now, but fewer students have been showing up. Weeks go by when no one comes, even in periods when I am teaching and coordinating courses.
Turnout was again low during the first weeks of this year, when I coordinated and taught a first-year course on academic research and writing. At the end of the course, students write a short paper. These are randomly distributed among the teaching staff, myself plus 10 other colleagues – together we teach 25 problem-based learning groups of about 12 students. As soon as results are out, all students, whether they have failed or passed, are invited to meet with the person who marked their paper to discuss the assessment during scheduled open office hours.
This year I asked colleagues to inform me about the number of students that had shown up. The table below shows the data for those who failed the course. Interestingly, one colleague had to hold her open office hours via Skype; no fewer than 7 out of 9 students showed up. Yet there is some research suggesting that using technology does not make a huge difference.
[Table: for each marker, the number of failed students and the number of those students who attended open office hours.]
Why did so few students show up?
I decided to ask the students themselves some simple questions during a session in our mentor programme. The approximately 100 students who attended (out of nearly 300) might not be representative of the group of students that does not turn up in my office. But I still learned something interesting.
Of the 86 students completing questions via an online survey tool, 36 had failed the course and 29 had attended the open office hours. Those who attended generally did so to get clarification regarding their paper’s assessment.
Of those who did not attend, some simply stated that they passed the course and saw no need to discuss the feedback. Others referred to having been sick, stressed and/or busy with the new courses – when asked, quite a few of these students did not write to staff to ask for another appointment.
Asked why they thought others had not come, some answered that these must be lazy students, or that they lacked motivation because they already knew what they had done wrong.
But quite a few answers touched upon something that we might all too easily overlook, namely students’ expectations regarding feedback opportunities. These answers did not just concern not knowing what to do with feedback. For instance, one student wrote that students who did not show up might be “insecure and/or uncomfortable with getting feedback”. Another student wrote that “you have limited time with the tutors and tutors often have a lot of work and not much time for you”.
This is something that I want to explore in more detail. I have already briefly discussed this with our academic writing advisor, and we may want to see whether we can specifically address this issue in a forthcoming curriculum review.
But what about solutions for the here and now? There are many ways in which open office hours are organised, but what works best?
One colleague suggested changing times. Admittedly, my open office hours are Wednesdays from 08:30-09:30, but this has never been a problem before – and the feedback open office hours during the aforementioned course were scheduled in the afternoon. Elsewhere in cyberspace people have been suggesting other solutions, including a rethink of faculty office space. I’d love to squeeze in a couch, but my office is rather tiny.
On Twitter someone suggested that the wording ‘open office hours’ is unclear to students and that ‘student drop-in hours’ may make more sense. So, the name plate next to my door now mentions my student drop-in hours and so does the syllabus of an upcoming course.
Let’s see what happens. I hope students will come and talk to me again. The door’s open, simply turn up at the stated time!