Engaging Students Through Collaborative Research Projects

Today we have a guest post from Rebecca A. Glazier at the School of Public Affairs at the University of Arkansas at Little Rock (rebecca [dot] glazier [at] gmail [dot] com) and Matthew Pietryka at Florida State University’s political science department (mpietryka [at] fsu [dot] edu).

Many professors are struggling to reach students who are disengaged and burned out. To address these problems and improve retention, universities are increasingly turning to edtech solutions and big data—everything from predictive analytics to chatbots in discussion boards. These remedies tend to be far removed from students’ daily lives. As professors, by contrast, we are with students in the classroom every day, and that experience often makes us the ones best positioned to know how to engage them.

In a new, open-access article we just published in Education Sciences, “Learning through Collaborative Data Projects: Engaging Students and Building Rapport,” we illustrate how faculty can engage students through collaborative data projects. Rather than relying on top-down university solutions, faculty can use the content of their own courses to involve students in collaborative projects that build rapport and make them feel included and engaged in the course. We see these collaborative data projects as another kind of active learning—getting students thinking outside of the textbook and involved in contributing to a project that is bigger than themselves.

We collected data from more than 120 students over two semesters, and our results suggest that most students find these collaborative data projects more enjoyable than typical college assignments. Students also report that the projects make them feel the professor is invested in their learning.

The article provides advice on implementing these projects, along with the R code we used to create individualized reports for participating students. The individualized reports help develop rapport between the professor and each student, and this programmatic approach lets professors scale the reports to classes with hundreds of students. Building rapport and doing active learning are often considered possible only in smaller classes, but our approach demonstrates how they can be done in large classes as well—with significantly positive results.
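For a sense of the pattern, here is a minimal sketch (not the actual code from the article; the file, template, and column names are placeholders) that renders a parameterized R Markdown template once per student:

```r
# Sketch: render one individualized report per student.
# "report_template.Rmd" would declare `student` in its params field;
# "class_data.csv" stands in for the real class dataset.
library(rmarkdown)

students <- read.csv("class_data.csv")  # one row per student

for (i in seq_len(nrow(students))) {
  render(
    "report_template.Rmd",
    params = list(student = students[i, ]),
    output_file = paste0("report_", students$id[i], ".html"),
    quiet = TRUE
  )
}
```

Because the loop does the repetitive work, the same template scales from a small seminar to a lecture class of several hundred.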

At a time when many faculty members are struggling to engage students, we can take matters into our own hands by designing projects for our classes that draw students in and build rapport with them. It doesn’t take expensive edtech solutions or top-down directives. Mostly, it takes thoughtful pedagogy and prioritizing student connection.

Open Access article link: https://www.mdpi.com/2227-7102/12/12/897.

Recent episode on the Teaching in Higher Ed Podcast on this research: https://teachinginhighered.com/podcast/engaging-students-through-collaborative-research-projects/.

Update on CHAMP Forecasting Project

My impressions so far of the political forecasting project (described here, here, and here) that I created for the undergraduate course on the Middle East that I’m teaching this semester:

The quality of student writing on the different elements of CHAMP has improved over the first three iterations of the assignment. Their analyses are more specific and relevant to what they are trying to predict. I’m hoping that this is a sign of students’ increasing interest in the project. Citation of sources remains problematic for some students. No, a string of URLs is not a bibliography.

The quality of no-stakes classroom presentations has similarly improved. After each forecast, one member of each team presents his or her team’s collective conclusions without using any visual aids. The first two rounds of presentations were terrible. The third round was adequate. I’m now wondering, if I use this project in the future, whether I should grade the presentations against a simple rubric and introduce a rule that a different team member presents each round.

As for the forecasts themselves, here are the average probabilities assigned to each event for the three rounds of predictions students in the class have made so far:

Team CHAMP

As promised, here is the prompt for the collaborative portion of the forecasting project in my upcoming Middle East course. There are two of these team deliverables — a draft report due after students have submitted the first three of their individual CHAMP assignments, and a final version (shown below) due at the end of the semester. In terms of contribution to course grade, the draft and final versions together are worth only a third of what the five individual assignments are worth. Also, a portion of the course grade will come from teammate evaluations.

Your team is interviewing for a job with Eurasia Group. The interview process requires that your team submit a jointly-written report on your team’s Forecasting Project topic using the CHAMP framework:

  • Comparisons
  • Historical Trends
  • Average Opinion
  • Mathematical Models
  • Predictable Biases

Your team’s final report should apply all five components of the CHAMP framework in a forecast of no more than five pages of double-spaced 11- or 12-point text. Do not quote sources in the report; reference source material in footnotes, which may be in 10-point font. See the list of syllabus readings for proper bibliographic format.

Only one member of your team needs to submit the document for this assignment.

Your team’s work will be assessed using the rubric below.

Now I just need to create rubrics for the project’s individual and team assignments . . .

American Autogolpe

A brief post this week about the televised hearings of the U.S. House of Representatives’ January 6 committee.*

I teach democracy from a comparative perspective, a challenge when students have had the ideology of American exceptionalism drilled into them since birth.

While watching the second installment of the hearings, I realized that they could serve as a reality check for students who tend to see “democracy” as a purely American phenomenon and whose culminating undergraduate achievement is a legalistic rehash of a 19th-century Supreme Court opinion on the U.S. Constitution’s Establishment Clause.

In my opinion, a much more meaningful exercise would be for students to research forms of democracy and threats to it globally. A class could be divided into teams with each team analyzing a different country in relation to the USA. Testimony from the hearings could be used to identify pivotal events that might or might not parallel what has happened in, for example, Venezuela.**

It just so happens that there are plenty of people who already thought of this kind of project — the folks at Democratic Erosion. Check out their sample syllabus for a semester-long course.

* full name: Select Committee to Investigate the January 6 Attack on the United States Capitol

** with readings such as Javier Corrales, “Authoritarian Survival: Why Maduro Hasn’t Fallen,” and Milan W. Svolik, “Polarization versus Democracy,” which appeared in the Journal of Democracy in 2020 and 2019, respectively.

Another Change to Teammate Evaluations

Jumping into the timecrowave again. Past posts on teammate evaluations:

  • Simplifying my life with Google Forms
  • What most students thought was a mysterious calculation
  • Distributing points instead of forced ranking
  • Calculating differently

For the upcoming fall semester, I’m making another tweak to the system. Instead of ranking teammates or distributing a set number of points, students will rate each other’s contributions on a three-level scale. And rather than emailing each team a link to a different Google Form, I have one Google Form for the entire class. I can either email the link to the whole class or — more likely, because it’s easier on my end — post the link in the Canvas LMS. Or, as I discussed in my last post, I can embed the Form’s iframe in a Canvas assignment.

Since I’ve set the Form to collect students’ email addresses, I’ll be able to discard the responses of any student who rates a team he or she does not belong to.

The evaluation is worth up to 50 points out of 1,000 in the course grading scale; the last item in the Form is simply a method of encouraging students to reflect on how well they and their teammates collaborated (instead of mindlessly entering numbers). As I did last semester, I will set the corresponding assignment in the Canvas gradebook as worth nothing, to avoid complaints about “losing” points because of their peers’ evaluation of their work.
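The bookkeeping for this scheme is light. Here is a rough sketch in R (the column names are hypothetical stand-ins for the Form’s actual export, and scaling the three-level ratings to 50 points is one possible mapping, not a fixed rule):

```r
# Sketch: score the class-wide evaluation Form's CSV export.
responses <- read.csv("form_responses.csv")  # rater email, team rated, ratee, rating (1-3)
roster    <- read.csv("roster.csv")          # each student's email and team

# Drop responses from students who rated a team they don't belong to
responses <- merge(responses, roster, by = "email")
responses <- subset(responses, team_rated == team)

# Average the 1-3 ratings each student received, then scale to 50 points
received <- aggregate(rating ~ ratee, data = responses, FUN = mean)
received$points <- round(received$rating / 3 * 50)
```

Collecting email addresses in the Form is what makes the roster join, and therefore the filtering, possible.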

Possible Improvement To Team Research Projects

A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:

In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.

I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:

  • No more than two pages per part, which would limit the final report to eight pages.
  • Each part must include at least one data visualization — a chart or graph.
  • No photographic images.

A Lesson Learned About Team Research Projects

Looking at student performance in the 2020–2021 academic year, I see evidence that team research projects due at the end of the semester can’t be scaffolded solely around individually graded assignments completed throughout the semester. For example, in my Middle East politics course, each student shared four individually completed assignments with their teammates for use in their team’s historical timeline. In my research methods course, there were ten individual assignments that teammates were supposed to share with each other as drafts of sections of team research reports. While this approach does decrease free riding and encourage collaboration, it apparently does not ensure high-quality research in the final product. Four of the five timelines that teams created in the Middle East course omitted significant events. None of the four teams in the research methods course collected information from coffee farmers, processors, or distributors in Central America, despite my instructions to do so, nor did the final reports resemble the industry exemplars I had provided.

It seems that in students’ minds, my formative assessment of their individual work is totally unconnected to the summative assessment of their collaborative work. I probably need to break the team project into discrete, graded chunks, with each chunk layered on top of some of the individual assignments. Teams can use the feedback they receive on each successive chunk of the project to improve the quality of the final product.

My Latest Iteration of Teammate Evaluations

A brief note about end-of-semester teammate evaluations:

I again used Google Forms to distribute a survey for students to evaluate each other’s contributions to team projects, but I changed how I calculated this component of the course grade. Each student had twelve points to distribute across all team members, including themselves. The more valuable a person’s contribution to the team project, the more points that person was supposed to get. People who made equivalent contributions could have been awarded the same number of points, and if a person was judged as having made no contribution at all, he or she could have been given zero points.

When the Google Form closed, I computed the average number of points each student received. I then divided this mean by twelve and multiplied it by fifty (the teammate evaluation was worth 50 of the 1,000 points in the course). I used this formula because teams were larger than in previous semesters, and I assumed a few members of each team would do the heavy lifting while the rest did little or no work. If the resulting number was fifty or higher, a student earned the full fifty points toward his or her course grade. If it was below ten, the student earned nothing. For anything in between, I rounded to the nearest ten.
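Expressed as code, the calculation is short. A sketch in R, where `pts` stands in for the points a single student received from each rater:

```r
# Convert one student's received points into the 0-50 evaluation grade,
# following the formula described above.
eval_score <- function(pts) {
  raw <- mean(pts) / 12 * 50   # scale the mean allocation to a 50-point maximum
  if (raw >= 50) return(50)    # full credit
  if (raw < 10) return(0)      # no credit
  round(raw / 10) * 10         # otherwise, round to the nearest ten
}

eval_score(c(4, 3, 5, 2))  # mean of 3.5 -> 14.6 raw -> 10 points
```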

This past semester, I had a total of thirty-seven undergraduate students in two courses. Only thirty completed the evaluation, and four of those thirty completed it incorrectly: the points they distributed across team members did not sum to twelve. I deleted their responses, as I had said I would in email and in the Google Form’s directions.

In sum, approximately thirty percent of my students did not perform a simple task that could have benefited their own course grades.

As I speculated at the end of the Fall 2020 semester, I was able to label the teammate evaluation as being worth zero points on Canvas. Maybe that partially explains why no students have (so far) complained about this portion of the course grade.

Changing a Comparative Politics Course, Part 4

My previous post explained how students will complete the template that identifies possible causes of either increased democracy or increased authoritarianism in two nation-states from 2000 to 2020. The next step in this project is for students to work in teams to produce qualitative comparative analyses. Here are my instructions for this collaborative assignment:

Continue reading “Changing a Comparative Politics Course, Part 4”

Fall 2020: Looking Backward and Forward, Part 3

One last post about successes and failures from the previous semester: last summer a colleague pointed me toward Knight Lab Timeline JS, and, inspired by Matthew Wilson’s work on crowd-sourcing and self-instruction, I decided to include a timeline project in my undergraduate course on the Middle East. Setting up the project was relatively simple:

Students were already divided into teams for breakout discussions, presentations, and note-taking; I used the same teams for the timelines. I chose five Middle Eastern countries that featured prominently in assigned readings — Egypt, Iran, Iraq, Lebanon, and Saudi Arabia — and created corresponding files in Google Sheets using the spreadsheet template provided by Knight Lab. I gave each team access to its Google Sheet.
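For anyone setting up something similar, the embed link for each timeline can be generated from its Google Sheet ID. A sketch in R (the IDs are placeholders; the URL pattern follows Knight Lab’s standard embed format, as I understand it):

```r
# Sketch: build Timeline JS embed URLs from Google Sheet IDs.
# The SHEET_ID_* values are placeholders for the real spreadsheet IDs.
sheets <- c(
  Egypt          = "SHEET_ID_EGYPT",
  Iran           = "SHEET_ID_IRAN",
  Iraq           = "SHEET_ID_IRAQ",
  Lebanon        = "SHEET_ID_LEBANON",
  "Saudi Arabia" = "SHEET_ID_SAUDI_ARABIA"
)

urls <- setNames(
  paste0("https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=", sheets),
  names(sheets)
)
urls["Lebanon"]
```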

Students completed five graded individual assignments designed to prevent free riding and to keep teams working on the project throughout the semester rather than only in a frenzied rush at the end. Here are the instructions for the assignment and its associated rubric:

Continue reading “Fall 2020: Looking Backward and Forward, Part 3”