Sometimes the best way to find out why students do what they do is to ask them.
During a recent lunchtime conversation with a colleague, I learned about the “one-word check-in” — asking each student to describe, with a single adjective, how he or she felt at that moment. I decided to incorporate this into a data collection exercise that I hoped would demonstrate one benefit of taking notes in class — getting students to take notes being a problem for which I still haven’t found a solution.
My hypothesis: students who took notes — a more cognitively engaging activity than just listening — would be more likely to feel better by the end of class.
I collected data in my course on globalization, which meets twice a week in seventy-five minute sessions from 9:30 a.m. to 10:45 a.m. The class, when everyone attends, has only twenty-five students, so my sample is far too small to yield statistically meaningful results.
As students entered the classroom and settled into their chairs, I gave each person three Post-it notes, along with a playing card dealt from a stacked deck (more on this further down). I told everyone to mark their Post-it notes with the suit and number of the playing card each had received. This allowed me to sort the Post-its by individual student afterward. Students should also have numbered each Post-it with a 1, 2, or 3, to simplify keeping them in the correct sequence after class. I didn’t think of this at the time, but luckily I kept each pile of Post-it notes separate after they were collected.
At the beginning of class, students wrote a one-word check-in on Post-it #1.
After the discussion of that day’s reading response, students wrote on Post-it #2 answers to “Have I written any notes during today’s class?” and “Why?”
Students then clustered into teams to discuss plans for an upcoming project assignment. Note that this introduces a methodological flaw in my research design, but it turned out to be irrelevant.
At the end of class, students wrote a one-word check-out on Post-it #3.
After students had finished writing on each Post-it, a different randomly selected student collected that set of notes and placed them face down on a table. The goal here was to make it obvious that I was trying to preserve the anonymity of students’ responses. However, I had dealt cards from a stacked deck (low-value cards on the bottom) so that I could identify which responses came from men and which came from women — because I expected that women would be more likely to take notes.
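The bookkeeping this scheme enables can be sketched in a few lines of code. Everything here is my own illustrative reconstruction — the card cutoff, the data layout, and the sample responses are all assumptions, not my actual class data. The card label links a student’s three Post-its into one sequence, and the stacked deck lets card value stand in for group membership:

```python
# Sketch of sorting Post-it responses by the playing card written on each one.
# The cutoff value and all responses below are made up for illustration.

from collections import defaultdict

def sort_responses(postits, low_value_cutoff=7):
    """Group Post-its by card label and split students into two groups by card value.

    postits: list of (card_suit, card_value, sequence_number, text) tuples.
    Returns (by_student, low_group, high_group).
    """
    by_student = defaultdict(dict)
    for suit, value, seq, text in postits:
        # The (suit, value) pair identifies one student; seq is Post-it 1, 2, or 3.
        by_student[(suit, value)][seq] = text

    low_group = {k: v for k, v in by_student.items() if k[1] <= low_value_cutoff}
    high_group = {k: v for k, v in by_student.items() if k[1] > low_value_cutoff}
    return dict(by_student), low_group, high_group

# Illustrative example with invented responses:
notes = [
    ("hearts", 3, 1, "tired"),
    ("hearts", 3, 2, "yes - to remember key terms"),
    ("hearts", 3, 3, "curious"),
    ("spades", 10, 1, "anxious"),
    ("spades", 10, 3, "relieved"),
]
by_student, low, high = sort_responses(notes)
```

The point of the sketch is that the card label does double duty: it preserves anonymity at the individual level while still allowing responses to be linked across the three check-ins and split by the stacked-deck grouping.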
Now for the results. Out of 23 students who were in class that day . . .
Postscript to my February 14 post on colleges and universities in financial trouble: the College of New Rochelle, whose enrollment woes I profiled in 2017, will probably close this summer. Details are contained in this Inside Higher Ed story.
According to its federal tax filings, the College of New Rochelle had a negative net operating margin for every single fiscal year from 2011 through 2016. During this period its operational expenses per FTE undergraduate increased by almost 45 percent. The increase for fiscal years 2012 through 2016 was nearly 53 percent.
New Rochelle is yet another example of a private, non-profit college that did not sufficiently reduce its operational costs when enrollment plummeted after the Great Recession. Expenses per student ballooned until the college became insolvent.
One of the joys of being department chair is creating a curriculum map for information literacy learning outcomes — as part of a five-year program review for a department that is only two years old. Since I’m teaching research methods, a requirement for students in all three of the department’s interdisciplinary majors, I decided to make information literacy a focus of the course. I designed several brief assignments based on chapters in Charles Wheelan’s Naked Statistics that pertain to evaluating information sources for authority, reliability, and relevance. These tasks in turn complement, in my mind at least, two larger assignments: Amanda’s Best Breakfast in Town project and writing a research proposal.
I thought I’d post some of those assignments here on the blog along with an assessment of how well students did on them. First topic on the list is hypothesis construction:
Given the availability of mobile phone coverage in Ghana, Nigeria, and Tanzania, how can we infer which country is the most violent? Why? (Generate a hypothesis about a relationship between mobile phone coverage and violence.)
Students did a good job thinking of possible causal relationships between mobile phone use and violence. Class discussion included ways to operationalize the concepts of violence, wealth, and happiness, which we did with some quick internet research. Students did not find an association between homicide rate and the amount of mobile phone coverage in Ghana, Nigeria, and Tanzania, which then led to the topic of sample size. The assignment seemed to work as I had intended.
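The sample-size problem is easy to demonstrate concretely. The sketch below uses placeholder numbers, not the figures students actually found; the point is that with only three countries, a correlation estimate is extremely fragile — changing a single data point can flip its sign:

```python
# Pearson correlation computed by hand for a three-country "sample."
# All figures below are invented placeholders for illustration only.

from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values: mobile subscriptions per 100 people, homicides per 100k.
coverage = [130, 83, 77]
homicide = [2.1, 9.8, 6.9]

r = pearson_r(coverage, homicide)          # negative with these numbers

# Changing one country's homicide figure reverses the apparent relationship:
r_alt = pearson_r(coverage, [20.0, 9.8, 6.9])   # now positive
```

With n = 3, "no association" and "strong association" are both one revised data point away, which is exactly why the class discussion turned to sample size.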
Something of an update to my last post on the slow-motion tsunami in U.S. higher education: Green Mountain College and Oregon College of Art and Craft will close at the end of this semester. Essentially the same fate will befall Hampshire College, because its board of trustees has limited the Fall 2019 incoming class to only about seventy deferred and early decision admits. Few of them will enroll and current students will transfer out, hastening Hampshire College’s impending insolvency.
Applying my measurement of change in annual total expenses per FTE undergraduate from fiscal years 2011 to 2016 to these schools, I get the following percentages:
27: Green Mountain College
24: Oregon College of Art and Craft
25: Hampshire College
Note that these figures are far lower than those for several of the colleges and universities listed in my last post. Does an increase of 25 percent or more over a six-year period in the average cost per full-time undergraduate indicate that a private, tuition-dependent, small-enrollment institution is at high risk of closure? I’ll say, “Yes.”
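The risk measure I am applying here is simple arithmetic: the percent change in total expenses per full-time-equivalent (FTE) undergraduate between two fiscal years. A minimal sketch, using invented dollar figures rather than any college’s actual filings, shows how flat spending combined with falling enrollment pushes the metric past the 25 percent threshold:

```python
# Percent change in expenses per FTE undergraduate between two fiscal years.
# The dollar and enrollment figures below are placeholders, not real filings.

def pct_change_per_fte(expenses_start, fte_start, expenses_end, fte_end):
    """Percent change in total expenses per FTE undergraduate."""
    per_fte_start = expenses_start / fte_start
    per_fte_end = expenses_end / fte_end
    return 100 * (per_fte_end - per_fte_start) / per_fte_start

# Hypothetical college: spending stays flat at $30 million, but FTE
# enrollment falls from 1,000 to 800 — cost per student rises 25 percent.
change = pct_change_per_fte(30_000_000, 1000, 30_000_000, 800)
```

Note what the hypothetical illustrates: a college does not need to increase spending at all to hit the threshold — a 20 percent enrollment decline with unchanged expenses is enough.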
What’s the figure for the college or university at which you work?
Some comments on a recent study of active learning published in the journal PLOS One — “Knowing is half the battle” by Shaw et al. The study reports on data gathered in an introductory biology course that was taught with active learning techniques from 2013 to 2016. Post-course scores on a concept and skill inventory were significantly higher than pre-course scores, which the authors take as an indication that students learned. Inventory scores from traditionally taught iterations of the course are not reported. Without a control group, we have no idea whether the new pedagogy is more effective at generating desired learning outcomes than the old one. This is the typical flaw in research on active learning.
But there is a silver lining to this study. The researchers also measured student perceptions. Over time, students increasingly attributed their learning to course structure and pedagogy. Student course evaluations usually correlate with grades, but in this case, grades did not significantly change from year to year. So it appears that students’ expectations about the course eventually aligned more closely with how the course was taught.
This points to a phenomenon that I have noticed anecdotally: if you suddenly adopt an active learning pedagogy, prepare to be criticized initially by students, especially if all the other instructors that students encounter continue to teach in a traditional way.
Why Many Teacher-Training Programs Should Be Abolished
Perhaps some of you — at least in the USA — have noticed the phenomenon of college students using multi-syllabic words out of context. The student clearly does not know what the word he or she inserted into the sentence actually means.
I used to think this was an attempt to impress me in the hopes of getting a higher grade on the writing assignment — pull a complicated-sounding but inaccurate word from an online thesaurus instead of using something simpler. But perhaps the behavior is really a sign that the student is deficient in some basic literacy skills.
As pointed out in this National Public Radio story, millions of children in the USA do not learn how to read well at an early age because of the unscientific and ineffective methods used by their teachers. If children fall behind in the first few years of primary school, it’s probably difficult for them to become proficient readers later on. I’m now wondering if these deficits in literacy persist all the way into college.
Another post on changes this year in my comparative politics course:
As usual, students are reading a lot of academic journal articles, especially from the Journal of Democracy. Although the writing in this journal is very user-friendly — concise sentences, little jargon — students lack the kind of familiarity with the genre that I have. Identifying and evaluating the elements of an author’s argument is a skill that gets better with practice, and the undergraduate students that I see need a lot of practice.
I regularly assign journal article analyses in my graduate courses. My original instructions for this assignment were too long so I simplified them. But I can’t assume that the process of analyzing the argument made in a text is immediately understandable to the average undergraduate. Years ago, I used an in-class exercise in textual analysis in an attempt to give undergrads some training in this skill. An actual example of the exercise can be found here. But I was never quite satisfied with the results.
On the first day of class this semester, I tried a new exercise, in part to prepare students for Seymour Martin Lipset’s “The Social Requisites of Democracy Revisited: 1993 Presidential Address,” from American Sociological Review 59, 1. This article includes an abstract that handily functions as a summary for the reader. Journal of Democracy articles don’t have abstracts, so I redacted it. I projected the article’s introduction on the wall screen and asked the class to examine each paragraph in sequence to identify Lipset’s subject (which is stated at the end of the first page and the beginning of the second page).
I then divided the class into groups of two or three students each, and gave each group copies of a different section of the article. Each section presents a particular set of characteristics that, in Lipset’s opinion, facilitates the institutionalization of democracy. I asked students in each group to identify the characteristics discussed in the section that group had been given. Each group then reported its findings to the class, which I wrote on the board.
The exercise seemed to work well in terms of demonstrating how to pull apart a journal article’s argument, and it made the first day of class a lot more productive than it usually is. The challenge will be to engage students in this type of exercise using articles that have a more complex structure.
In addition to creating new writing prompts for my comparative politics course this year, I have re-arranged the order in which students encounter different topics. Last year’s version of the course was sequenced as follows:
Why the change? Last year I found myself lecturing about research methods used in comparative politics before students had any significant exposure to what actually gets compared. Instead of encountering puzzling real-world situations that might have excited their curiosity, they had to fixate on the mechanics of doing a most similar systems design or a qualitative comparative analysis.
This year these assignments won’t begin until the second third of the semester. I won’t have to rush through my material on methods, and I will have more opportunities in class to ask students “What kind of research design might allow us to compare these cases in a way that satisfactorily answers the question?”
In line with the first and third bullet points in my post last year about teaching comparative politics, I’ve tried to make the relationships between course learning objectives, readings, and writing assignments more transparent to students. I’ve done this in part by making writing prompts refer more explicitly to what I want students to learn. For example, here is last year’s assignment about Venezuela, which I placed in the section of the course about democracy:
Scott Mainwaring and Timothy R. Scully, “Latin America: Eight Lessons for Governance,” Journal of Democracy 19, 3 (July 2008): 113-127.
Uri Friedman, “How Populism Helped Wreck Venezuela,” The Atlantic, 4 June 2017.
Moisés Naím and Francisco Toro, “Venezuela Is Falling Apart,” The Atlantic, 12 May 2016.
Juan Cristobal Nagel, “Venezuela’s Constitutional Crisis,” Caracas Chronicles, 12 January 2016.
Meridith Kohut and Isayen Herrera, “As Venezuela Collapses, Children Are Dying of Hunger,” The New York Times, 17 December 2017.
Of Mainwaring and Scully’s eight lessons, which is most relevant for Venezuela? Why?
Answering the above question requires reading the Journal of Democracy article, which is good. Yet the question also demands that students apply a general framework to a specific context that is totally unfamiliar to them. A few newspaper and magazine articles aren’t enough to give students a clear sense of what is happening in Venezuela’s political system. The end result is a badly constructed rhetorical situation likely to generate answers that aren’t relevant to the learning objectives behind the assignment.
Here is the 2019 version of the assignment, which I have placed in the section of the course on political protest:
One last post about teaching my redesigned course on development last semester:
Is the ability to follow directions what distinguishes the excellent from the average student?
Writing assignments in my courses require students to synthesize information from a variety of source material into a single, cohesive argument. Exams are no different. My instructions for the final exam included “refer to relevant course readings” and “see the rubric below for guidance on how your work will be evaluated.” The rubric contained the criterion “use of a variety of relevant course readings.”
I assumed that these statements would translate in students’ minds as “my exam grade will suffer tremendously if I don’t reference any of the course readings.” Yet nine of the fifteen students who took the exam did not use any readings, despite having written about them earlier in the semester. Four others referred to only a single reading. Only two students incorporated information from several different readings.
Maybe I’m wrong, but I don’t think I’m at fault here.