A Game Design Checklist

As usual, teams of students in my economic development & environmental change course are building games in a semester-long project. I created this game design checklist as an individual low-stakes assignment. I told students to share their checklists with teammates so that they could collaboratively identify faulty aspects of their team’s game and fix them. My hope is that the checklist will help ensure that students follow the design criteria that I have specified, something that has previously been a problem.

I have twenty students in this course. Seventeen completed the assignment. Of those, one uploaded a blank checklist to Canvas, and another copied and pasted my design criteria into the checklist without writing anything specific about the game her team is building. Counting the three students who never submitted, a total of five students, twenty-five percent of the class, earned a zero on the assignment. Looks like the pandemic of learned helplessness continues.

Engaging Students Through Collaborative Research Projects

Today we have a guest post from Rebecca A. Glazier at the School of Public Affairs at the University of Arkansas at Little Rock (rebecca [dot] glazier [at] gmail [dot] com) and Matthew Pietryka at Florida State University’s political science department (mpietryka [at] fsu [dot] edu).

Rebecca Glazier

Many professors are struggling to reach students who are disengaged and burned out. To address these problems and improve student retention, universities are increasingly turning to edtech solutions or big data—everything from predictive analytics to chatbots in discussion boards. These remedies tend to be far removed from students’ daily lives. In contrast, as professors, we are with students in the classroom every day, and this experience often positions us to know best how to engage them.

Matthew Pietryka

In a new, open-access article we just published in Education Sciences, “Learning through Collaborative Data Projects: Engaging Students and Building Rapport,” we illustrate how faculty can engage students through collaborative data projects. Rather than relying on top-down university solutions, faculty can use the content of their own courses to involve students in collaborative projects that build rapport and make them feel included and engaged in the course. We see these collaborative data projects as another kind of active learning—getting students thinking outside of the textbook and involved in contributing to a project that is bigger than themselves.

We used data from more than 120 students over two semesters, and our results suggest that most students find these collaborative data projects more enjoyable than typical college assignments. Students also report that the projects make them feel the professor is invested in their learning.

The article we wrote detailing these projects is open access. It provides advice on implementing these projects as well as the R code used to create individualized reports for students participating in the collaborative data projects. The individualized reports help develop rapport between the professor and each student, and this programmatic approach allows professors to scale up the reports to accommodate classes with hundreds of students. Building rapport and doing active learning are often considered possible only in smaller classes, but our approach demonstrates how they can be done in large classes as well—with significantly positive results.
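The article provides the authors' actual R code for these reports. As a rough illustration of the underlying idea in Python — the data, names, and survey question here are invented for the example, not drawn from the study — a short personalized report can be generated for every student by comparing each response against the pooled class data:

```python
# Hypothetical sketch: build an individualized report for each student by
# comparing their survey answer to the class as a whole. The column names
# (name, pet_preference) and the data are invented for illustration.
from collections import Counter

def build_reports(records):
    """Return a short personalized report string for each student."""
    totals = Counter(r["pet_preference"] for r in records)
    n = len(records)
    reports = {}
    for r in records:
        share = totals[r["pet_preference"]] / n
        reports[r["name"]] = (
            f"Dear {r['name']}, you chose '{r['pet_preference']}'; "
            f"{share:.0%} of the class agreed with you."
        )
    return reports

class_data = [
    {"name": "Ana", "pet_preference": "cats"},
    {"name": "Ben", "pet_preference": "dogs"},
    {"name": "Cal", "pet_preference": "cats"},
    {"name": "Dee", "pet_preference": "cats"},
]
reports = build_reports(class_data)
print(reports["Ana"])
# → Dear Ana, you chose 'cats'; 75% of the class agreed with you.
```

Because the reports are generated in a loop over the class roster, the same script scales from twenty students to several hundred with no extra effort, which is the point the authors make about large classes.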

At a time when many faculty members are struggling to engage students, we can take matters into our own hands by designing projects for our classes that draw students in and build rapport with them. It doesn’t take expensive edtech solutions or top-down directives. Mostly, it takes thoughtful pedagogy and prioritizing student connection.

Open Access article link: https://www.mdpi.com/2227-7102/12/12/897.

Recent episode on the Teaching in Higher Ed Podcast on this research: https://teachinginhighered.com/podcast/engaging-students-through-collaborative-research-projects/.

Another Change to Teammate Evaluations

Jumping into the timecrowave again. Past posts on teammate evaluations:

Simplifying my life with Google Forms

What most students thought was a mysterious calculation

Distributing points instead of forced ranking

Calculating differently

For the upcoming fall semester, I’m making another tweak to the system. Instead of ranking or distributing a set number of points, students will rate each other’s contributions on a three-level scale. And rather than email each team a link to a different Google Form, I have one Google Form for the entire class. I can either email the link to the whole class, or — more likely because it’s easier on my end — I can post the link in the Canvas LMS. Or, as I discussed in my last post, I can embed the Form’s iframe into a Canvas assignment.

Since I’ve set the Form to collect students’ email addresses, I’ll be able to discard the responses of any student who rates a team he or she does not belong to.

The evaluation is worth up to 50 points out of 1,000 in the course grading scale; the last item in the Form is simply a method of encouraging students to reflect on how well they and their teammates collaborated (instead of mindlessly entering numbers). As I did last semester, I will set the corresponding assignment in the Canvas gradebook as worth nothing, to avoid complaints about “losing” points because of their peers’ evaluation of their work.
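The two mechanics described above — discarding ratings from students who evaluate a team they don't belong to, and converting the three-level ratings into a score out of 50 — can be sketched in code. To be clear, this is my own hypothetical illustration: the level-to-points mapping and the roster structure are invented assumptions, not the actual formula used in the course.

```python
# Hedged sketch of scoring three-level peer ratings out of 50 points.
# The LEVEL_POINTS mapping is an assumption for illustration only.
LEVEL_POINTS = {
    "below expectations": 25,
    "met expectations": 40,
    "exceeded expectations": 50,
}

def peer_scores(responses, roster):
    """responses: list of (rater_email, rated_name, level) tuples.
    roster: dict mapping each rater's email to the set of their teammates.
    Ratings of non-teammates are discarded; each student's score is the
    average of the valid ratings they received."""
    received = {}
    for rater, rated, level in responses:
        if rated not in roster.get(rater, set()):
            continue  # rater evaluated someone outside their own team
        received.setdefault(rated, []).append(LEVEL_POINTS[level])
    return {name: sum(pts) / len(pts) for name, pts in received.items()}

roster = {"a@x.edu": {"Ben", "Cal"}, "b@x.edu": {"Ana", "Cal"}}
responses = [
    ("a@x.edu", "Ben", "exceeded expectations"),
    ("b@x.edu", "Ana", "met expectations"),
    ("a@x.edu", "Zoe", "met expectations"),  # not a teammate: discarded
]
print(peer_scores(responses, roster))
# → {'Ben': 50.0, 'Ana': 40.0}
```

Collecting email addresses in the Form is what makes the roster check possible: each response can be matched against the team membership list before it counts toward anyone's score.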

Identifying How Assumptions Meet Reality

Four weeks until classes end, and I’m noticing some of the same problems in my comparative politics course that I saw a year ago. First, some students are not able to consistently locate a journal article’s main thesis, even though I simplified the assignment’s format, students discuss their work among themselves when creating presentations about the articles, and I review the organization of each article after the presentations. Second, students aren’t sharing notes about assigned articles despite my adaptation of Helen Brown Coverdale’s study huddle system. Since collaborative notetaking with Google Docs didn’t work, I assumed that students would at least share their completed article analyses with their green or red teammates. Nope. While the analyses are graded as individual assignments, the “sharing” aspect is not, so probably students see no reason to do it.

Seven years ago, I wrote about mistakenly assuming that students knew the meaning of methods in social science research. A similar problem might be occurring with thesis. Although students have probably heard the term since ninth grade English, maybe they still don’t really understand it. Or, even if they do understand, they could be unwilling to make the effort required to identify what and where it is in a text. As a more direct colleague put it, the problem can originate with stupidity, laziness, or both.

A solution might be to ask students to find where in the body of an article its title has been converted into a cause and effect statement. For example, I recently assigned “Authoritarian Survival: Why Maduro Hasn’t Fallen” by Javier Corrales (Journal of Democracy 31, 3). The thesis is essentially “Maduro hasn’t fallen because . . .”

As for the unwillingness of students to share their ideas about readings via collaborative notetaking, I would not be surprised if this stems from being taught since early childhood that reading is an isolated rather than a social activity. I.e., the ideal reading environment involves a room of one’s own, a blanket, a cup of tea, and possibly a cat, to ponder silently the meaning of what one has just read. This technique works fine for people like ourselves, because academia self-selects for the highly literate. But the average undergraduate student probably doesn’t really know how to think about what they’re reading while they’re reading it. According to colleagues who know much more about this subject than I do, if reading is instead a public activity, the metacognition that occurs in the truly literate becomes visible and transferable to others. Social interaction facilitates a better understanding of the text.

Luckily we live in an era of digital tools that allow a reader to easily interact with a text and with other readers. One of these tools is Perusall, which a couple of English professors on my campus have been raving about. I have asked our IT support unit to link Perusall to my Canvas account so that I can start experimenting with it, hopefully before the semester ends. If that happens, I’ll report my observations here.

Changing a Comparative Politics Course, Part 4

My previous post explained how students will complete the template that identifies possible causes of either increased democracy or increased authoritarianism in two nation-states from 2000 to 2020. The next step in this project is for students to work in teams to produce qualitative comparative analyses. Here are my instructions for this collaborative assignment:

Continue reading “Changing a Comparative Politics Course, Part 4”

Changing a Comparative Politics Course, Part 2

In Part 1 of this series, I discussed changing my approach to teaching students how to analyze the arguments contained in journal articles. I also think it is important for students to actually do some discipline-related research rather than just read about it. Previously in this course, my students compared two nation-states using either a most similar systems or most different systems design. That assignment never worked very well because of student confusion about the basic nature of cause and effect. I’ve decided to replace this with a scaffolded process culminating in a team-produced qualitative comparative analysis.

There are three individual assignments that I’m calling Comparison 1, 2, and 3. For Comparison 1, each student chooses two nation-states from a list. That’s it. The list comes from Freedom House’s rankings of citizen freedom in countries around the world; I selected a subset of states for which scores differed between 2000 and 2019 — so that students choose cases where the dependent variable varies over time.
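The case-selection rule behind Comparison 1 — keep only countries whose freedom score changed between 2000 and 2019, so that the dependent variable varies over time — is simple enough to sketch. The scores below are invented placeholders, not real Freedom House data:

```python
# Illustrative sketch of the case-selection rule: a country is eligible
# only if its freedom score differs between 2000 and 2019. Scores here
# are made up for the example, not actual Freedom House rankings.
scores = {
    "Country A": {2000: 45, 2019: 70},  # freer over time: eligible
    "Country B": {2000: 60, 2019: 60},  # no change: excluded
    "Country C": {2000: 80, 2019: 55},  # less free over time: eligible
}

eligible = sorted(c for c, s in scores.items() if s[2000] != s[2019])
print(eligible)
# → ['Country A', 'Country C']
```

Filtering this way guarantees that every case a student can choose exhibits variation on the dependent variable, which is what makes the later qualitative comparison meaningful.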

For Comparison 2, students calculate a value for the dependent variable. Here are the instructions for the assignment:

Continue reading “Changing a Comparative Politics Course, Part 2”

Changing a Comparative Politics Course

Looking back at Spring 2020, and making changes accordingly for 2021, despite that semester’s pandemic-induced weirdness:

I decided to use Helen Brown Coverdale’s study huddle technique, in the hopes that it will allow students to become more proficient in decoding academic literature. I am dividing the class into teams of 4-5 students each. Half of each team will be “green” and half will be “red.” Each week, students are responsible for analyzing a journal article of the corresponding color. I chose to use green and red font in the syllabus instead of red/blue because my hyperlinks are blue, and I did not want students to be confused. In addition to the font color, I have included the words “green” and “red” for the benefit of any colorblind students.

For the analysis assignments, students will be completing this template, which I believe is simpler than the worksheet I used last spring. I also expect it to be easier for me to grade, given my rubric, shown below:

Continue reading “Changing a Comparative Politics Course”

Fall 2020: Looking Backward and Forward, Part 3

One last post about successes and failures from the previous semester: last summer a colleague pointed me toward Knight Lab Timeline JS, and, inspired by Matthew Wilson’s work on crowd-sourcing and self-instruction, I decided to include a timeline project in my undergraduate course on the Middle East. Setting up the project was relatively simple:

Students were already divided into teams for breakout discussions, presentations, and note-taking; I used the same teams for the timelines. I chose five Middle Eastern countries that featured prominently in assigned readings — Egypt, Iran, Iraq, Lebanon, and Saudi Arabia — and created corresponding files in Google Sheets using the spreadsheet template provided by Knight Lab. I gave each team access to its Google Sheet.

Students completed five graded individual assignments that were designed to prevent free riders and guarantee that teams were working on the project throughout the semester rather than only in a frenzied rush at the end. Here are the instructions for the assignment and its associated rubric:

Continue reading “Fall 2020: Looking Backward and Forward, Part 3”

Fall 2020: Looking Backward and Forward, Part 2

To continue evaluating my successes and failures from last semester: the attempt to create community in synchronous online undergraduate courses by dividing students into teams for breakout discussions, note-taking, and memo-writing.*

Zoom breakout discussions for reading responses worked fairly well. Before the semester started, I created a Google Slide file for each team to use for building presentations, and I randomly selected one team to present its conclusions once Zoom breakout rooms closed. I screen-shared the presentation from my computer, since I had access to all the files. Students who did not participate in breakout discussions or in creating presentations were held accountable by their teammates in the evaluations completed at the end of the semester. The one aspect of breakout discussions that needs to change for next semester is also true for synchronous classes in general: students need to turn on their webcams. Video of faces is much better at facilitating community than black boxes.

Teams were allowed only one slide per presentation, but often the slides were badly designed — too much text, font too small, etc. In the future, I should require that students follow a specific format.

The Google Slide files ended up being a written record of breakout room discussions for each team; however, I don’t know if students used them as notes. Students definitely didn’t collaboratively write notes in the Google Doc files I had created. Teams either left these files blank, or just pasted screen captures from my PowerPoint presentations into them. Yet another example of students’ lack of note-taking skills.

The memo exercises were also a failure. In an individual graded assignment, students were supposed to make a recommendation in response to a prompt, and provide two different reasons in support of that recommendation. In teams, they were supposed to write a draft of a complete memo, guided by a template I had provided. I then chose one team’s memo at random to discuss as an example with the whole class. There were five iterations of this process. In the individual assignments, students sometimes submitted one reason, just stated in two different ways, in support of their recommendation. The drafts of complete memos produced by teams were usually disorganized and unpersuasive, and the quality of the writing did not improve with successive iterations. Most undergraduates simply lack the writing skills necessary for collaborating effectively on a task like this. Students should instead each write a single memo over the entire semester, in a step-by-step process requiring multiple revisions.

*Additional posts that were part of this series are here and here.