A Game Design Checklist

As usual, teams of students in my economic development & environmental change course are building games in a semester-long project. I created this game design checklist as an individual low-stakes assignment. I told students to share their checklists with teammates so that they could collaboratively identify faulty aspects of their team’s game and fix them. My hope is that the checklist will help ensure that students follow the design criteria that I have specified, something that has previously been a problem.

I have twenty students in this course, and seventeen submitted the assignment. Of those, one uploaded a blank checklist to Canvas, and another copied and pasted my design criteria into the checklist without writing anything specific about the game her team is building. Counting the three students who submitted nothing at all, a total of five students, twenty-five percent of the class, earned a zero on the assignment. Looks like the pandemic of learned helplessness continues.

The Perils of Small Classes Revisited

Returning to a subject that I wrote about in 2015: when low enrollment makes collaborative assignments difficult. Last semester I taught an undergraduate course built around a team-based research project. Three teams, each focused on a different country. The course had only eleven students, resulting in two teams of four and one team of three. Not ideal, but minimally sufficient, or so I thought at the beginning of the semester.

I hadn’t planned on three students performing at the D/F level and making little to no contribution to their teams’ research projects. Compounding the problem, one-quarter to one-half of the class was absent on any given day.

The roster for one of my fall semester courses currently shows only nine students. If the course does run, I might have to abandon the collaborative project I was thinking of including. Given what happened in the spring, I can’t form teams of only three students, and having just two teams somewhat defeats the purpose of the project.

The easy option is to simply replace team projects entirely with individual assignments, even though that’s not how the world works. Students might in fact prefer this, given the death of curiosity.

My situation is complicated by several environmental factors over which I have very little control. First, interest in the humanities and social sciences is declining nationally, a problem exacerbated by my university’s quite small undergraduate enrollment. The small number of students interested in learning about the subjects that I teach has steadily gotten smaller.

Second, the structure of the curriculum privileges some areas of the world and bodies of knowledge over others. The majors that are most popular with students demand 60 to 80 credits, while general education requirements present the history of human civilization as a straight line from Athens to Rome to St. Thomas Aquinas. The organization of the curriculum sends students the message that college is about checking off a list of boxes. The courses that I teach aren’t on that list.

Third, the delivery of the curriculum makes my courses less attractive to students, even if they are interested in the content. Each academic department must schedule some courses every semester at 8:00 a.m. Since I’m an early riser, I’m happy to take this time slot, but most undergraduates are not. More consequential is students’ false perception that they have learned more when the work of learning feels easy, a point recently made by my favorite cognitive psychologist, Daniel Willingham. Students resist active learning methods because those methods feel harder than passive instruction, despite being far more effective. When given a choice between a course that I teach using active learning and a course taught by someone else that consists of lectures and multiple-choice exams, most students will choose the latter.

So that’s my situation. I have no easy solutions to the problem.

Engaging Students Through Collaborative Research Projects

Today we have a guest post from Rebecca A. Glazier at the School of Public Affairs at the University of Arkansas at Little Rock (rebecca [dot] glazier [at] gmail [dot] com) and Matthew Pietryka at Florida State University’s political science department (mpietryka [at] fsu [dot] edu).

Many professors are struggling to reach students who are disengaged and burned out. To address these issues and improve student retention, universities are increasingly turning to edtech solutions or big data, everything from predictive analytics to chatbots in discussion boards. These remedies tend to be far removed from students’ daily lives. In contrast, as professors, we are with students in the classroom every day, and that experience often makes us the ones who know best how to engage them.

In a new, open-access article we just published in Education Sciences, “Learning through Collaborative Data Projects: Engaging Students and Building Rapport,” we illustrate how faculty can engage students through collaborative data projects. Rather than relying on top-down university solutions, faculty can use the content of their own courses to involve students in collaborative projects that build rapport and make them feel included and engaged in the course. We see these collaborative data projects as another kind of active learning—getting students thinking outside of the textbook and involved in contributing to a project that is bigger than themselves.

We used data from more than 120 students over two semesters, and our results suggest that most students find these collaborative data projects more enjoyable than typical college assignments. Students also report that the projects make them feel the professor is invested in their learning.

The article we wrote detailing these projects is open access. It provides advice on implementing the projects as well as the R code we used to create individualized reports for participating students. The individualized reports help develop rapport between the professor and each student, and this programmatic approach lets professors scale the reports up to classes with hundreds of students. Building rapport and using active learning are often considered possible only in smaller classes, but our approach demonstrates how they can be done in large classes as well, with significantly positive results.
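The code in the article itself is written in R; purely as an illustration of the pattern, here is a hypothetical Python sketch of the same idea, computing a class-level summary once and then rendering one short report per student. The data fields, template, and file names below are invented, not taken from the article.

```python
# Hypothetical Python sketch of the individualized-report pattern the
# article implements in R: summarize class-level data once, then render
# one short report per student. All names and fields here are invented.
from statistics import mean

# Assumed data: one record per student's contribution to the project.
records = [
    {"name": "Student A", "netid": "sa01", "items_coded": 42},
    {"name": "Student B", "netid": "sb02", "items_coded": 58},
    {"name": "Student C", "netid": "sc03", "items_coded": 35},
]

class_avg = mean(r["items_coded"] for r in records)

TEMPLATE = (
    "Dear {name},\n\n"
    "Thanks for contributing to our class data project. You coded "
    "{items_coded} items; the class average was {class_avg:.1f}. "
    "Your work is part of a dataset the whole class built together.\n"
)

# Write one report per student; these could be posted or emailed.
for r in records:
    with open(f"report_{r['netid']}.txt", "w") as f:
        f.write(TEMPLATE.format(class_avg=class_avg, **r))
```

Because the reports come out of a loop rather than being written by hand, the same approach scales from a seminar to a lecture course, which is the point about large classes.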

At a time when many faculty members are struggling to engage students, we can take matters into our own hands by designing projects for our classes that draw students in and build rapport with them. It doesn’t take expensive edtech solutions or top-down directives. Mostly, it takes thoughtful pedagogy and prioritizing student connection.

Open Access article link: https://www.mdpi.com/2227-7102/12/12/897.

Recent episode on the Teaching in Higher Ed Podcast on this research: https://teachinginhighered.com/podcast/engaging-students-through-collaborative-research-projects/.

Update on CHAMP Forecasting Project

My impressions so far of the political forecasting project (described here, here, and here) that I created for the undergraduate course on the Middle East that I’m teaching this semester:

The quality of student writing on the different elements of CHAMP has improved over the first three iterations of the assignment. Their analyses are more specific and relevant to what they are trying to predict. I’m hoping that this is a sign of students’ increasing interest in the project. Citation of sources remains problematic for some students. No, a string of URLs is not a bibliography.

The quality of no-stakes classroom presentations has similarly improved. After each forecast, one member of each team presents his or her team’s collective conclusions without using any visual aids. The first two rounds of presentations were terrible. The third round was adequate. I’m now wondering, if I use this project in the future, whether I should grade the presentations against a simple rubric and introduce a rule that a different team member presents each round.

As for the forecasts themselves, here are the average probabilities assigned to each event for the three rounds of predictions students in the class have made so far:

[chart showing the average probability assigned to each event in each of the three rounds]
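The arithmetic behind those averages is simple. A minimal sketch, assuming each team records one probability between 0 and 1 per event in each round; the numbers below are invented placeholders, not my students’ actual forecasts:

```python
# Minimal sketch: average the probabilities teams assigned to each
# event, round by round. All values here are invented placeholders.
from statistics import mean

# event -> round number -> probabilities submitted by each team
forecasts = {
    "Event A": {1: [0.60, 0.70, 0.50], 2: [0.65, 0.55, 0.60], 3: [0.70, 0.75, 0.65]},
    "Event B": {1: [0.20, 0.30, 0.25], 2: [0.30, 0.20, 0.35], 3: [0.40, 0.30, 0.35]},
}

for event, rounds in forecasts.items():
    averages = {rnd: round(mean(probs), 2) for rnd, probs in rounds.items()}
    print(event, averages)
```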

Team CHAMP

As promised, here is the prompt for the collaborative portion of the forecasting project in my upcoming Middle East course. There are two of these team deliverables — a draft report due after students have submitted the first three of their individual CHAMP assignments, and a final version (shown below) due at the end of the semester. In terms of contribution to course grade, the draft and final versions together are worth only a third of what the five individual assignments are worth. Also, a portion of the course grade will come from teammate evaluations.

Your team is interviewing for a job with Eurasia Group. The interview process requires that your team submit a jointly-written report on your team’s Forecasting Project topic using the CHAMP framework:

  • Comparisons
  • Historical Trends
  • Average Opinion
  • Mathematical Models
  • Predictable Biases

Your team’s final report should apply all five components of the CHAMP framework in a forecast that is no more than five pages of double-spaced, 11- or 12-point text. Do not quote sources in the report; reference source material using footnotes, which can be in 10-point font. See the list of syllabus readings for proper bibliographic format.

Only one member of your team needs to submit the document for this assignment.

Your team’s work will be assessed using the rubric below.

Now I just need to create rubrics for the project’s individual and team assignments . . .

American Autogolpe

A brief post this week about the televised hearings of the U.S. House of Representatives’ January 6 committee.*

I teach democracy from a comparative perspective, a challenge when students have had the ideology of American exceptionalism drilled into them since birth.

While watching the second installment of the hearings, I realized that they could serve as a reality check for students who tend to see “democracy” as a purely American phenomenon and whose culminating undergraduate achievement is a legalistic rehash of a 19th-century Supreme Court opinion on the U.S. Constitution’s Establishment Clause.

In my opinion, a much more meaningful exercise would be for students to research forms of democracy and threats to it globally. A class could be divided into teams with each team analyzing a different country in relation to the USA. Testimony from the hearings could be used to identify pivotal events that might or might not parallel what has happened in, for example, Venezuela.**

It just so happens that there are plenty of people who already thought of this kind of project — the folks at Democratic Erosion. Check out their sample syllabus for a semester-long course.

* full name: Select Committee to Investigate the January 6 Attack on the United States Capitol

** with readings such as Javier Corrales, “Authoritarian Survival: Why Maduro Hasn’t Fallen,” and Milan W. Svolik, “Polarization versus Democracy,” which appeared in Journal of Democracy in 2020 and 2019, respectively.

Another Change to Teammate Evaluations

Jumping into the timecrowave again. Past posts on teammate evaluations:

  • Simplifying my life with Google Forms
  • What most students thought was a mysterious calculation
  • Distributing points instead of forced ranking
  • Calculating differently

For the upcoming fall semester, I’m making another tweak to the system. Instead of ranking teammates or distributing a set number of points, students will rate each other’s contributions on a three-level scale. And rather than email each team a link to a different Google Form, I have one Google Form for the entire class. I can either email the link to the whole class or, more likely because it’s easier on my end, post the link in the Canvas LMS. Or, as I discussed in my last post, I can embed the Form’s iframe into a Canvas assignment.

Since I’ve set the Form to collect students’ email addresses, I’ll be able to discard the responses of any student who rates a team he or she does not belong to.

The evaluation is worth up to 50 points out of 1,000 in the course grading scale. The last item in the Form is simply a way of encouraging students to reflect on how well they and their teammates collaborated, instead of mindlessly entering numbers. As I did last semester, I will set the corresponding assignment in the Canvas gradebook as worth zero points, to avoid complaints from students about “losing” points because of their peers’ evaluations of their work.
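As a sketch of how the scoring could work once responses come in, assuming the Form’s responses are exported to CSV; the column names, the roster, and the rule for converting a 1-to-3 average into points are all my invention for illustration, not a fixed feature of the system:

```python
# Hypothetical sketch of scoring the teammate evaluations from the
# Form's exported CSV. Column names, the roster, and the conversion
# of a 1-3 average into points are invented for illustration.
import csv
from statistics import mean

ROSTER = {  # rater's email -> team that student actually belongs to
    "ann@example.edu": "Team 1",
    "bob@example.edu": "Team 1",
    "cam@example.edu": "Team 2",
}

received = {}  # teammate's name -> ratings received (1, 2, or 3)
with open("responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Discard responses from anyone rating a team they don't belong to.
        if ROSTER.get(row["Email Address"]) != row["Team"]:
            continue
        received.setdefault(row["Teammate"], []).append(int(row["Rating"]))

# Scale each average rating (1-3) to the 50 points the evaluation is worth.
for teammate, ratings in received.items():
    points = round(mean(ratings) / 3 * 50)
    print(teammate, points)
```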

Possible Improvement to Team Research Projects

A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:

In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.

I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly, similar to what has worked well with student presentations. I can envision the following criteria:

  • No more than two pages per part, which would limit the final report to eight pages.
  • Each part must include at least one data visualization, such as a chart or graph.
  • No photographic images.

A Lesson Learned About Team Research Projects

Looking at student performance in the 2020-2021 academic year, I see evidence that team research projects due at the end of the semester can’t be scaffolded solely around individually graded assignments completed throughout the semester. For example, in my Middle East politics course, each student shared four individually completed assignments with teammates for use in their team’s historical timeline. In my research methods course, there were ten individual assignments that teammates were supposed to share with each other as drafts of sections of team research reports. While this approach does decrease free riding and encourage collaboration, it apparently does not ensure high-quality research in the final product. Four of the five timelines that teams created in the Middle East course lacked mention of significant events. None of the four teams in the research methods course collected information from coffee farmers, processors, or distributors in Central America, despite my instructions to do so, nor did the final reports resemble the industry exemplars I had provided.

It seems that in students’ minds, my formative assessment of their individual work is totally unconnected to the summative assessment of their collaborative work. I probably need to break the team project into discrete, graded chunks, with each chunk layered on top of some of the individual assignments. Teams can use the feedback they receive on each successive chunk of the project to improve the quality of the final product.