Call for Editor(s): Journal of Political Science Education

The American Political Science Association is seeking applications and nominations for editorship of the Journal of Political Science Education. Applications can be from individuals or teams, and are due by September 1. Full details are here.

A big thank you to the outgoing editorial team for their excellent management of this journal over the last few years.

Formative Assessment: Abort, Retry, Fail?

Two can play this game

Something of a response to Simon’s June 1 post on transitioning from pedagogical theory to teaching practice: he wrote, in part, “assessment is always formative and should be always linked to the feedback and adaptation process.” In theory, I agree. In practice, while I can lead students to feedback, I am still unable to make them read it.

As I’ve written before, the Canvas LMS has a “student viewed” time stamp feature that shows whether a student has looked at my feedback on an assignment (my comments and a tabular rubric with cells that I’ve highlighted) after I have graded it. Judging by the frequent absence of these time stamps, many students simply ignore this information. An example, with data: my annual spring semester course on comparative politics. In 2018 and 2019, I taught this course in the physical classroom. In 2020, the latter half of the course was online because of the coronavirus pandemic. In 2021, the course was delivered online for the entire semester. For each iteration, I tallied the number of students who looked at the first three, the third to last, and the second to last reading responses after I graded them. Results are below. N is the number of students in the class; not every student in a class completed every assignment. The eyeball columns indicate how many students viewed an assignment after I had graded it; the eyeball with a slash indicates the opposite.
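For anyone who wants to replicate this kind of tally, here is a minimal sketch in Python. It assumes the view data has already been copied by hand into a hypothetical CSV file named feedback_views.csv with year, assignment, and viewed columns; Canvas does not produce such a file automatically.

```python
# Minimal sketch: count, per year and assignment, how many students viewed graded feedback.
# Assumes a hypothetical CSV export ("feedback_views.csv") with columns:
#   year, assignment, viewed   (viewed is "yes" or "no")
import csv
from collections import defaultdict

counts = defaultdict(lambda: {"viewed": 0, "not_viewed": 0})

with open("feedback_views.csv", newline="") as f:
    for row in csv.DictReader(f):
        key = (row["year"], row["assignment"])
        if row["viewed"].strip().lower() == "yes":
            counts[key]["viewed"] += 1
        else:
            counts[key]["not_viewed"] += 1

for (year, assignment), tally in sorted(counts.items()):
    print(f"{year}  {assignment}: {tally['viewed']} viewed, {tally['not_viewed']} did not")
```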

While I can understand students not bothering to revisit assignments that they earned full marks on, I don’t understand why students who earn less than full marks frequently ignore information that would allow them to do better in the future. Anyone have an explanation?

Possible Improvement To Team Research Projects

A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:

In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.

I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:

No more than two pages per part, which would limit the final report to eight pages.

Each part must include at least one data visualization — a chart or graph.

No photographic images.

Comparative Politics 2021: An Additional Post-Mortem

In January and February, I wrote a series of posts about changes I had made in my annual comparative politics course.

I’ve already written about the problematic results from one assignment and a potential solution, and a way of improving the final exam prompt.

Student output in the collaborative qualitative comparative analyses, an assignment discussed in the Part 4 post linked above, proved disappointing. Despite extensive scaffolding, students did not demonstrate an understanding of how to determine cause and effect. Here is a simplified version of what one team produced; the other team’s QCA was nearly identical:

                                      Philippines  Egypt  Venezuela  Hungary
Less Economic Growth                  Yes          Yes    Yes        Yes
Less Citizen Political Participation  Yes          Yes    Yes        Yes
Higher Disease Mortality              Yes          Yes    Yes        Yes
Greater Authoritarianism              Yes          Yes    Yes        Yes

There is no variation in the values for the independent or dependent variables. Nor are the independent variables likely to be causes, rather than effects, of the dependent variable — the direction of causality is ambiguous. The QCA provides no logical explanation for increased authoritarianism.

So next time around, I will have to specify that 1) variables must vary, and 2) causes can’t be effects.
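One way to make requirement 1) concrete for students is a quick script that flags any condition with no variation across cases. A minimal sketch, using hypothetical values that mirror the table above:

```python
# Minimal sketch: flag QCA conditions (and the outcome) that show no variation across cases.
# The values below are hypothetical and mirror the problematic student table above.
cases = ["Philippines", "Egypt", "Venezuela", "Hungary"]

table = {
    "Less Economic Growth":                 ["Yes", "Yes", "Yes", "Yes"],
    "Less Citizen Political Participation": ["Yes", "Yes", "Yes", "Yes"],
    "Higher Disease Mortality":             ["Yes", "Yes", "Yes", "Yes"],
    "Greater Authoritarianism":             ["Yes", "Yes", "Yes", "Yes"],  # the outcome
}

for name, values in table.items():
    if len(set(values)) == 1:
        print(f"'{name}' is '{values[0]}' for every case, so it cannot explain variation in the outcome.")
```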

From my perspective, these kinds of assignments get more prescriptive and less analytical with each tweak I make to them. While I don’t want them to devolve into mindless fill-in-the-blank worksheet exercises, it seems that is where they are headed.

Designing For a Resilient Fall Semester

Maybe the first half of 2020 taught you the value of preparing for the unexpected. Or maybe it didn’t. Either way, regardless of where you are located on the planet, there is a chance that your autumn classroom instruction will be interrupted, whether by a campus outbreak of Covid-19, a hurricane, a wildfire, or something else.

Here are two tools to help you make your teaching more resilient:

My table-top exercise on strengthening instructional continuity, which can be played alone or with colleagues.

An animated online game on course design, created by Gray Kane of the Gertrude C. Ford Center for Teaching and Learning at Delta State University.

Nobody expected the Spanish Inquisition.

A Lesson Learned About Team Research Projects

Looking at student performance in the 2020-2021 academic year, I see evidence that team research projects due at the end of the semester can’t be scaffolded solely around individually graded assignments completed throughout the semester. For example, in my Middle East politics course, each student shared four individually completed assignments with their teammates for use in their team’s historical timeline. In my research methods course, there were ten individual assignments that teammates were supposed to share with each other as drafts of sections of team research reports. While this approach does decrease free riding and encourage collaboration, it apparently does not ensure high-quality research in the final product. Four of the five timelines that teams created in the Middle East course lacked mention of significant events. None of the four teams in the research methods course collected information from coffee farmers, processors, or distributors in Central America, despite my instructions to do so, nor did the final reports resemble the industry exemplars I had provided.

It seems that in students’ minds, my formative assessment of their individual work is totally unconnected to the summative assessment of their collaborative work. I probably need to break the team project into discrete, graded chunks, with each chunk layered on top of some of the individual assignments. Teams can use the feedback they receive on each successive chunk of the project to improve the quality of the final product.

Improving Simulation Efficacy With a Scaffolded Final Exam

A follow-up to my post in April about making exams exercises in active learning:

From the very beginning of my teaching career, I’ve emphasized, or at least tried to emphasize, the importance of being able to construct evidence-based arguments. My exams are almost always intended to evaluate students’ proficiency at this task. As I mention in the post linked above, the final exam for my comparative politics course in Spring 2020 included this stock phrase:

reference course readings to support your argument.

For the final exam in Spring 2021, I substituted:

support your argument with 1) information from the Gerkhania server on Discord, and 2) cited references to at least two red and two green journal articles listed in the syllabus.

Explicitly requiring the citation of four articles that students were nominally already familiar with from previous assignments resulted in greater use of scholarly evidence in exam essays than had typically occurred in the past. Students sometimes didn’t use these sources in ways that actually supported their arguments, but in these cases I could tell that at least an attempt had been made.

However, to my surprise, not a single student referred to specific player behavior during the simulation. That is not how students read “information from the Gerkhania server on Discord.” Instead, they summarized the simulation’s outcome or, worse, repeated the general background information on Gerkhania that I had provided before the simulation began. So, for 2022, the exam prompt should probably include something like:

support your argument with 1) examples of specific actions made by players during the Gerkhania simulation, and 2) cited references to at least two red and two green journal articles listed in the syllabus.

This is all well and good, because my main purpose for the final exam is summative assessment of learning. But I also want the final exam to help me gauge whether the Gerkhania simulation contributed effectively to this learning. While the first part of my potential exam prompt gets at this question indirectly, I think more is needed. So I have been thinking about “scaffolding” the final exam around the simulation.

I typically run Gerkhania over three sessions. It occurred to me that I could assign something like the following after each session:

Which theoretical perspective best explains players’ behavior in today’s Gerkhania session? Why? Refer to specific player actions and Course Reading X in your response.

These assignments would be short pieces of writing, easy for students to complete and for me to grade. They would allow students to practice for the final exam, and they would function as a contemporaneous reflective through-briefing rather than just a post-hoc debriefing. And I would be able to observe whether students’ ability to construct evidence-based arguments about the simulation improved over time.

From the Back Office

A few updates related to maintaining this blog, per Simon’s recent post about institutionalizing one’s digital footprint:

The web host for ALPS is upgrading its server software to PHP version 7.4 in a few weeks. Supposedly this change is compatible with the blog’s current WordPress layout and will not introduce any errors. Fingers crossed, given what happened a little more than a year ago.

A few months ago, as part of an effort to distribute the blog’s content to a wider audience, I had to hire a programmer to disable the default WordPress RSS feed and create a new feed:

https://activelearningps.com/?call_custom_simple_rss=1

So if you use the RSS feed instead of being notified of new posts by email, and ALPS suddenly disappeared from your RSS reader, that’s why.
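If you want to confirm that the new feed is the one your reader should point at, here is a minimal sketch that fetches it with Python’s standard library and prints recent post titles, assuming the feed follows the standard RSS 2.0 layout:

```python
# Minimal sketch: fetch the custom RSS feed and print recent post titles.
# Assumes the feed follows the standard RSS 2.0 layout (<rss><channel><item>...).
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://activelearningps.com/?call_custom_simple_rss=1"

with urllib.request.urlopen(FEED_URL) as response:
    root = ET.parse(response).getroot()

for item in root.findall("./channel/item"):
    title = item.findtext("title", default="(untitled)")
    pub_date = item.findtext("pubDate", default="")
    print(f"{pub_date}  {title}")
```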

While the above information might make me sound like a tech guru, I’m not. Part of maintaining this website is figuring out what I can learn on short notice and when to seek help for what I can’t.

A tip of the hat to our long-running sponsor, Soomo Learning. We couldn’t keep the lights on without Soomo’s support. They are good people with a great product. Check them out.

Last, but not least, if all goes according to plan, my favorite colleague and most trusted adviser returns to the USA later today. After six months apart, it will be nice living with my wife again.

Students’ Experiences With Technology During the Pandemic

The nonprofit association EDUCAUSE has released two reports on a study conducted in Fall 2020 about students’ pandemic experiences with:

  • Connectivity and technology.
  • Learning with technology.

Both reports can be accessed for free here.

For me, the most telling part of the second report:

“students’ learning experiences were undermined in myriad ways by poor decisions in the delivery and management of courses. On the pedagogical side, students complained of long lectures with massive slide decks . . . assignments with little scaffolding or connections to learning outcomes . . . and generally trying to replicate face-to-face experiences in online learning environments.” (italics mine)

Discord With Gerkhania

In my comparative politics course this past semester, I ran my usual Gerkhania simulation on Discord as an experiment. Discord is a free social media platform that Amanda has discussed previously. It was a positive experience, for the following reasons:

I had never used Discord before, yet it was easy to figure out. Its design is intuitive, and setting up the simulation was simple. Students also found Discord easy to learn.

Students interacted more with each other than they did last year when I used Webex, despite a similarly small class. Webex does not allow for spontaneous communication between participants except for one-to-one chat messages. When building the Discord server, I granted students access to different communication channels according to their roles in the simulation. For example, a student representing an ethnic Khan who practiced the Montian religion had access to channels devoted to each group and could automatically message other Khans or Montians at any time. As server host, I could observe and participate in these conversations in real time.
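For readers curious about what this role-based channel structure looks like in code, here is an illustrative sketch using the discord.py library. I am not claiming this is how I built the server; the same structure can be set up entirely through Discord’s own interface, and the role and channel names below are hypothetical.

```python
# Illustrative sketch: create roles and role-gated text channels with the discord.py library.
# The role and channel names are hypothetical; a real bot token is needed to run this.
import discord

intents = discord.Intents.default()
client = discord.Client(intents=intents)

@client.event
async def on_ready():
    guild = client.guilds[0]  # the simulation's server

    khan = await guild.create_role(name="Khan")
    montian = await guild.create_role(name="Montian")

    # Hide each caucus channel from @everyone; grant access only to the matching role.
    # (The server owner retains access to every channel regardless of overwrites.)
    for role, channel_name in [(khan, "khan-caucus"), (montian, "montian-caucus")]:
        overwrites = {
            guild.default_role: discord.PermissionOverwrite(view_channel=False),
            role: discord.PermissionOverwrite(view_channel=True),
        }
        await guild.create_text_channel(channel_name, overwrites=overwrites)

client.run("YOUR_BOT_TOKEN")  # placeholder, not a real token
```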

Discord permits text, voice, and video communication. I deliberately chose not to use its videoconferencing capability, and none of the students used it either. We communicated with each other solely through text messages. I believe this enhanced rather than degraded the experience in comparison to Webex: no black boxes instead of faces, and no interrupted video or audio because of low-bandwidth internet connections. A user interface that facilitates text communication also means Discord is suitable for running a simulation like Gerkhania asynchronously rather than synchronously, something that isn’t realistic with video-based platforms.

My use of Discord also meant that students automatically had a complete record of the simulation’s events that they could reference for the final exam. I did not have to take any additional steps, such as creating and sharing a recording, for the class to have a history of what had transpired.