Comparative Politics 2019, Part 3

Another post on changes this year in my comparative politics course:

As usual, students are reading a lot of academic journal articles, especially from the Journal of Democracy. Although the writing in this journal is very user-friendly (concise sentences, little jargon), students lack the familiarity with the genre that I have. Identifying and evaluating the elements of an author’s argument is a skill that improves with practice, and the undergraduates I see need a lot of practice.

I regularly assign journal article analyses in my graduate courses. My original instructions for this assignment were too long, so I simplified them. But I can’t assume that the process of analyzing the argument made in a text is immediately obvious to the average undergraduate. Years ago, I used an in-class exercise in textual analysis in an attempt to give undergrads some training in this skill. An actual example of the exercise can be found here. But I was never quite satisfied with the results.

On the first day of class this semester, I tried a new exercise, in part to prepare students for Seymour Martin Lipset’s “The Social Requisites of Democracy Revisited: 1993 Presidential Address,” from American Sociological Review 59, 1. This article includes an abstract that handily functions as a summary for the reader; Journal of Democracy articles don’t have abstracts, so I redacted it. I projected the article’s introduction on the wall screen and asked the class to examine each paragraph in sequence to identify Lipset’s subject (which is stated at the end of the first page and the beginning of the second).

I then divided the class into groups of two or three students each, and gave each group copies of a different section of the article. Each section presents a particular set of characteristics that, in Lipset’s opinion, facilitates the institutionalization of democracy. I asked students in each group to identify the characteristics discussed in the section that group had been given. Each group then reported its findings to the class, which I wrote on the board.

The exercise seemed to work well as a demonstration of how to pull apart a journal article’s argument, and it made the first day of class a lot more productive than it usually is. The challenge will be to engage students in this type of exercise using articles that have a more complex structure.

Comparative Politics 2019, Part 2

In addition to creating new writing prompts for my comparative politics course this year, I have re-arranged the order in which students encounter different topics. Last year’s version of the course was sequenced as follows:

  • Methods
  • Theory
  • State and Nation
  • Democracy
  • Authoritarianism
  • Political Transitions
  • Political Economy
  • Resistance, Rebellion, and Revolution
  • Gerkhania simulation

This time around the sequence is:

  • Theory
  • Nation and State
  • Democracy
  • Methods
  • Authoritarianism
  • Political Transitions
  • Resistance, Rebellion, and Revolution
  • Political Economy
  • Gerkhania simulation

Why the change? Last year I found myself explicating research methods used in comparative politics before students had any significant exposure to what actually gets compared. Instead of encountering puzzling real-world situations that might have excited their curiosity, they had to fixate on the mechanics of conducting a most similar systems design or a qualitative comparative analysis.

This year these assignments won’t begin until the second third of the semester. I won’t have to rush through my material on methods, and I will have more opportunities in class to ask students, “What kind of research design might allow us to compare these cases in a way that satisfactorily answers the question?”

Comparative Politics 2019, Part 1

In line with the first and third bullet points in my post last year about teaching comparative politics, I’ve tried to make the relationships between course learning objectives, readings, and writing assignments more transparent to students. I’ve done this in part by making writing prompts refer more explicitly to what I want students to learn. For example, here is last year’s assignment about Venezuela, which I placed in the section of the course about democracy:

Read:

  • Scott Mainwaring and Timothy R. Scully, “Latin America: Eight Lessons for Governance,” Journal of Democracy 19, 3 (July 2008): 113-127.
  • Uri Friedman, “How Populism Helped Wreck Venezuela,” The Atlantic, 4 Jun 2017.
  • Moisés Naím and Francisco Toro, “Venezuela Is Falling Apart,” The Atlantic, 12 May 2016.
  • Juan Cristobal Nagel, “Venezuela’s Constitutional Crisis,” Caracas Chronicles, 12 January 2016.
  • Meridith Kohut and Isayen Herrera, “As Venezuela Collapses, Children Are Dying of Hunger,” The New York Times, 17 December 2017.

Of Mainwaring and Scully’s eight lessons, which is most relevant for Venezuela? Why?

Answering the above question requires reading the Journal of Democracy article, which is good. Yet the question also demands that students apply a general framework to a specific context that is totally unfamiliar to them. A few newspaper and magazine articles aren’t enough to give students a clear sense of what is happening in Venezuela’s political system. The end result is a badly-constructed rhetorical situation likely to generate answers that aren’t relevant to the learning objectives behind the assignment.

Here is the 2019 version of the assignment, which I have placed in the section of the course on political protest:


The Difference Between Good and Bad?

One last post about teaching my redesigned course on development last semester:

Is the ability to follow directions what distinguishes the excellent from the average student?

Writing assignments in my courses require students to synthesize information from a variety of source material into a single, cohesive argument. Exams are no different. My instructions for the final exam included “refer to relevant course readings” and “see the rubric below for guidance on how your work will be evaluated.” The rubric contained the criterion “use of a variety of relevant course readings.”

I assumed that these statements would translate in students’ minds as “my exam grade will suffer tremendously if I don’t reference any of the course readings.” Yet nine of the fifteen students who took the exam did not use any readings, despite having written about them earlier in the semester. Four others only referred to a single reading. Only two students incorporated information from several different readings.  

Maybe I’m wrong, but I don’t think I’m at fault here.



Life Planning

For most academics, the gears of course planning grind exceedingly fine. We tinker with projects, lectures, and assignments, trying to create what we imagine as the ideal learning experience. But that’s frequently not what we do outside of the classroom.

The winter holiday break is a good time to take stock of one’s life and position oneself better for the future. Although it’s never too late, the sooner you begin taking charge of your personal affairs, the better. So, some basics:

I ask these questions because, if your experience has been anything like mine, you didn’t get trained in personal financial management while in graduate school, and you probably haven’t utilized whatever training might be available through your employer.  

Happy Holidays 2018

Last month I wrote about the multi-year death spiral at Iowa Wesleyan University. My 2017 column for Inside Higher Ed discussed four broad signs that a small college or university is headed toward failure. But how can a faculty member employed by a tuition-dependent institution like Iowa Wesleyan get a firmer grip on his or her employer’s financial health?

One way to do this is to calculate the percentage change over time in a school’s annual total expenses per full-time equivalent (FTE) undergraduate. The larger the expansion in expenses per student, the worse the school’s financial condition and the lower the chances of its long-term survival. 

Is this measurement the only sign of serious trouble? No, but it’s a good rule of thumb that is simple to calculate. Enrollment data can be obtained from the Integrated Postsecondary Education Data System (IPEDS). Not-for-profit colleges and universities report their operational expenses on Line 18 of IRS Form 990; these filings are available from organizations like ProPublica and Charity Navigator.
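The calculation itself is easy to sketch. Here is a minimal example in Python; the dollar amounts and enrollment figures are invented purely for illustration and do not describe any actual school:

```python
def pct_change_expenses_per_fte(expenses_start, fte_start, expenses_end, fte_end):
    """Percentage change in total annual expenses per FTE undergraduate.

    expenses_*: total expenses (IRS Form 990, Line 18) for the first
    and last fiscal years examined.
    fte_*: FTE undergraduate enrollment (from IPEDS) for the same years.
    """
    per_fte_start = expenses_start / fte_start
    per_fte_end = expenses_end / fte_end
    return (per_fte_end - per_fte_start) / per_fte_start * 100

# Hypothetical school: expenses rise while enrollment falls.
change = pct_change_expenses_per_fte(
    expenses_start=30_000_000, fte_start=1_200,  # FY 2011
    expenses_end=32_000_000, fte_end=900,        # FY 2016
)
print(f"{change:+.1f}%")  # → +42.2%
```

In this made-up case, expenses per student grew by roughly 42 percent over the period, the kind of increase that, by this rule of thumb, would flag a tuition-dependent institution as being at risk.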

I decided to apply this rule of thumb to several colleges and universities that had suffered declining enrollment, eliminated academic programs, or were otherwise reported to be in financial difficulty. My analysis uses data from fiscal years 2011 through 2016. Why this time span? Prior to FY 2011, colleges and universities were coping with the immediate effects of the Great Recession, which, as I have previously argued, accelerated what are probably near-permanent changes in undergraduate enrollment. It seemed fair to give schools six years after the economy had begun to stabilize to adjust to the new normal. Finally, when I began my analysis, the most recent publicly available federal tax filings were from FY 2016.

Below are my results, ordered from the smallest increase in expenses per FTE undergraduate to the largest. I bear none of these schools any ill will. Many have histories of serving marginalized populations. But I predict that at least half of them will close within the next five years. 


Iterating Student Game Design

More final thoughts on my heavily-revised course on development from last semester: as explained in Parts 4 through 6 below, I included a scaffolded series of assignments on design thinking through SCAMPER, a method for creative problem-solving. In a debriefing discussion on the last day of class, one student expressed frustration that the game she and her team had built was not graded. I only graded how well students had written their evaluations of other teams’ games. 

I thought this was a fair point, and said so. But my past use of peer review of student-designed games had proven to be useless — teams simply gave other teams’ games full marks regardless of the games’ actual quality.  And I really did not want to get involved in the minutiae of assessing the quality of all the games that students had created.

Then I thought of applying the last phase of design thinking — experimentation and iteration — to the problem at hand, and this plan came to mind:

  • Compress teaching about design thinking and the related preparatory assignments into a shorter period of time (e.g., the first half of the semester).
  • Teams of students design games.
  • Each team plays and evaluates a game created by another team.
  • I provide the evaluations of each game to its creators.
  • Each team then uses the evaluations as feedback to improve the design of its game. 
  • There is a second, final round of game play. This time each team scores the other team’s game against a rubric. The rubric focuses on how well the second version of the game incorporated the feedback on the initial design.

This sequence might satisfy students’ expectation that everything they do must be graded. 


To Quiz or Not to Quiz, Part 3

Some final thoughts on adding in-class quizzes to my course on economic development:

For six of the nine quizzes administered so far, students answered only half of the questions correctly. Given the results of my survey on students’ study habits, I am increasingly convinced that the problem of transfer is contributing to their poor performance. Perhaps I should create a series of real-world practice exercises for next year’s iteration of this course. These exercises could serve as an additional connection to the reading assignments.

Even though each quiz has a maximum of four questions, the quiz-taking eats up a significant amount of classroom time. Perhaps I should impose a time limit. If I put the quizzes online for completion outside of class, students will be able to search for correct answers, which defeats my purpose of testing recall to strengthen memory.

The quizzes have helped me identify what students still don’t know. Reviewing questions in class after grading each quiz might have helped students better understand the concepts that they had been tested on. But the final exam that I created for the course (Part 8 below) will allow me to only indirectly infer whether this occurred. Maybe next year I should repeat some of the same questions across multiple quizzes, or introduce summative exams, to get a better idea of whether students are in fact learning what they are being quizzed about.


Ethically Simulating

This post was inspired by a story in The New York Times: a U.S. Marine discovered his daughters reading a “choose your own adventure” book with a chapter about a battle in Afghanistan in which he actually fought. He thought the book presented a superficial view of war and wrote an editorial on the subject. The book’s publisher decided to stop selling it and halt production of four other similar titles.

The story got me thinking about my own teaching. I often place students in simulated environments that in real life are horrendous, such as genocide, civil war, and natural disasters.* I do this because I believe it gets students more invested in the subject than reading a text alone does, and helps them develop a less-biased understanding of others. In the past these simulations have even included the digital equivalents of choose your own adventure books.

Yet I probably don’t pay enough attention to the risk that these exercises can come across as trite games completely divorced from reality. On the one hand, I teach undergraduates who in many cases have lived a materially comfortable life within a psychologically-comfortable bubble. Their world is diametrically opposed to the one that I am hoping they are learning about, and that probably gives them far less of an ability to empathize than I would like. On the other hand, my graduate courses are filled with active and former military personnel. Many of them have direct experience thinking through situations that my simulations attempt to artificially replicate.  

Perhaps I should be asking students “Did this simulation respect reality in a way that contributed to your learning?”

*Inside Disaster: Haiti, the death of which I reported in 2015, has apparently been resurrected via subscription-only access. While its history demonstrates the inherent problems of online simulations, I strongly recommend this product.

Alternatives to Traditional Research Papers

Today we have another guest post by Charity Butcher, Associate Professor of Political Science at Kennesaw State University. She can be reached at cbutche2[at]kennesaw[dot]edu.

Research papers are a common tool used to help students learn about a particular topic. However, students have become accustomed to using information in different ways, and will also be expected to present information differently in their future careers. I therefore decided to give students in my American Foreign Policy course the option of writing a traditional research paper or completing the same research project in a different format – a podcast, video, or poster. Nearly half the students in the class chose one of the alternative formats.

For the assignment, students were asked to choose a current American foreign policy issue, such as U.S. relations with a specific country or a broader foreign policy topic like development aid, human trafficking, climate change, or terrorism. Students first submitted proposals that outlined their topic, included a preliminary bibliography, and identified which format they had chosen. The end product had to describe the foreign policy issue and its importance to the U.S., analyze past U.S. foreign policies on the subject, and recommend future policy. Regardless of the format, students were evaluated on how well they addressed these elements.

Each of the alternative formats had pros and cons. For podcasts, students could include information that was similar in quantity to the traditional research paper. On the other hand, some students first wrote a paper and then read it for the podcast, making some podcasts less dynamic and creative than I had hoped. Overall the podcast option seemed to generate the same effects as a research paper, but added an extra step for students.  

The videos were more dynamic than the podcasts and generally included the same amount of content as a traditional paper. Students were very creative in how they presented information, signaling a bit more thinking than the traditional paper. The downside was that the videos were significantly more time consuming than papers for students to produce. Several students experienced technical problems.

Posters, which students had to present in class, were quite successful. The poster option allowed students to practice their presentation skills, though this occupied class time. It was also more difficult for students to include as much information on a poster as in a paper, though some of this additional information did get communicated in presentations.

Overall, I felt this experiment was successful. In the future, I will eliminate the podcast option and have more specific grading rubrics for each project format. Grades for video and poster formats should incorporate criteria on visual design and presentation delivery. I may also add other presentation options, such as Prezi. I may even add a blogging option!