Developing a Podcast Assignment

Today we have a guest post from John McMahon, Assistant Professor of Political Science at SUNY Plattsburgh. He can be contacted at jmcma004 [at] plattsburgh [dot] edu.

Podcast assignments make students the creators of political knowledge, allow them to actively research subjects of interest, and offer them the opportunity to improve their writing, listening, and speaking abilities. The format is more interesting and authentic to students than that of traditional assignments, in part because of the popularity of podcasts among people under the age of thirty-five.

In my experience, there are two especially salient components of podcast assignment design. First, it is necessary to be intentional and clear with oneself and one’s students about the assignment’s required elements. A podcast’s political content, length, required sound elements (clips, effects, music, etc.), type of interview subjects (if any), how its creation is scaffolded—all require careful consideration. The requirements of the assignment need to match course learning objectives.

Second, do not worry too much about the technology. Instructional technology and library staff usually can provide support and resources, from workshops to USB microphones to campus recording studios. If needed, students can simply use their phones to record audio. Audio editing tools like Audacity and GarageBand are easy for students to learn, and instructional videos on podcast creation abound online. In my experience, students have also found Spotify’s Anchor to be an easy platform to use.

Podcast assignments are adaptable to a range of courses. I have used them successfully when teaching political theory and American politics at the 100-, 200-, and 300-level. Crucially, as we enter another pandemic academic term, this kind of assignment is suitable for online, hybrid, and in-person courses, including those that change modality in the middle of the term.

Instructions for one of my podcast assignments are available on APSA Educate, and I have published an article on student podcasting in the Journal of Political Science Education.

Asynchronous Field Research Exercises

To answer Maria’s question in her comment on my last post about syllabus design, here is an example of one of my field research assignments:

To earn full credit: upload your first post by 6:00 p.m. Wednesday and respond to at least one other person’s post by 9:00 a.m. Friday; see the rubric.

  1. Read:
  2. Go to The Opportunity Atlas at https://www.opportunityatlas.org. Click “Begin Exploring.” Click “OK.” Enter Newport’s zip code in the search bar at the upper left. Make sure “Household Income” is selected for “Outcomes.” Select a street in the blue area and a street in the red area.
  3. Walk a portion of the two streets that you selected. What do you see about the built environment that you think relates to economic opportunity in these locations?
  4. Take a photo that indicates what you notice; post the photo and your observations in the discussion. Identify the location shown in the photo. Cite at least one of the readings in your first discussion post.

Here is my rubric for grading the online discussion:

Formative Assessment: Abort, Retry, Fail?

Two can play this game

Something of a response to Simon’s June 1 post on transitioning from pedagogical theory to teaching practice: he wrote, in part, “assessment is always formative and should be always linked to the feedback and adaptation process.” In theory, I agree. In practice, while I can lead students to feedback, I am still unable to make them read it.

As I’ve written before, the Canvas LMS has a “student viewed” time stamp feature that shows whether a student has looked at my feedback on an assignment — my comments and a tabular rubric with cells that I’ve highlighted — after I have graded it. Generally, though, the time stamps are absent: many students simply ignore this information.

An example, with data: my annual spring semester course on comparative politics. In 2018 and 2019, I taught this course in the physical classroom. In 2020, the latter half of the course was online because of the coronavirus pandemic. In 2021, the course was delivered online for the entire semester. For each iteration, I tallied the number of students who looked at the first three, the third-to-last, and the second-to-last reading responses after I graded them. Results are below. N is the number of students in the class; not every student in a class completed every assignment. The eyeball columns indicate how many students viewed an assignment after I had graded it; the eyeball with a slash indicates the opposite.
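
Tallying these views by hand is tedious. Below is a rough sketch of how the same bookkeeping could be automated; it assumes the view data has already been pulled into a simple CSV, and the file name and column names are hypothetical rather than anything Canvas exports directly.

```python
# A rough sketch of the tally described above. The CSV layout is an assumption:
# one row per graded submission, with columns "year", "assignment", and
# "viewed_after_grading" ("yes" or "no", based on the Canvas time stamp).

import csv
from collections import Counter

counts = Counter()
with open("feedback_views.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        key = (row["year"], row["assignment"], row["viewed_after_grading"])
        counts[key] += 1

for (year, assignment, viewed), n in sorted(counts.items()):
    label = "viewed" if viewed == "yes" else "did not view"
    print(f"{year} {assignment}: {n} students {label} the graded feedback")
```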

While I can understand students not bothering to revisit assignments that they earned full marks on, I don’t understand why students who earn less than full marks frequently ignore information that would allow them to do better in the future. Anyone have an explanation?

Possible Improvement To Team Research Projects

A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:

In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.

I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:

  1. No more than two pages per part, which would limit the final report to eight pages.
  2. Each part must include at least one data visualization — a chart or graph.
  3. No photographic images.

Comparative Politics 2021: An Additional Post-Mortem

In January and February, I wrote about changes I had made in my annual comparative politics course.

I’ve already written about the problematic results from one assignment and a potential solution, and a way of improving the final exam prompt.

Student output in the collaborative qualitative comparative analyses, an assignment discussed in the Part 4 post linked above, proved disappointing. Despite extensive scaffolding, students did not demonstrate an understanding of how to determine cause and effect. Here is a simplified version of what one team produced; the other team’s QCA was nearly identical:

|                                      | Philippines | Egypt | Venezuela | Hungary |
|--------------------------------------|-------------|-------|-----------|---------|
| Less Economic Growth                 | Yes         | Yes   | Yes       | Yes     |
| Less Citizen Political Participation | Yes         | Yes   | Yes       | Yes     |
| Higher Disease Mortality             | Yes         | Yes   | Yes       | Yes     |
| Greater Authoritarianism             | Yes         | Yes   | Yes       | Yes     |

There is no variation in the values for the independent or dependent variables. Nor are the independent variables likely to be causes, rather than effects, of the dependent variable — the direction of causality is ambiguous. The QCA provides no logical explanation for increased authoritarianism.

So next time around, I will have to specify that 1) variables must vary, and 2) causes can’t be effects.
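
The first rule, at least, is easy to check mechanically. Here is a rough sketch, purely illustrative and not part of the assignment, that rebuilds the table above as a small data structure and flags any variable whose value is identical across all four cases; run on this QCA, it flags every row.

```python
# A rough sketch of the "variables must vary" check. The case and variable
# names reproduce the student example above; the code itself is illustrative.

truth_table = {
    "Less Economic Growth":                 {"Philippines": "Yes", "Egypt": "Yes", "Venezuela": "Yes", "Hungary": "Yes"},
    "Less Citizen Political Participation": {"Philippines": "Yes", "Egypt": "Yes", "Venezuela": "Yes", "Hungary": "Yes"},
    "Higher Disease Mortality":             {"Philippines": "Yes", "Egypt": "Yes", "Venezuela": "Yes", "Hungary": "Yes"},
    "Greater Authoritarianism":             {"Philippines": "Yes", "Egypt": "Yes", "Venezuela": "Yes", "Hungary": "Yes"},
}

for variable, cases in truth_table.items():
    if len(set(cases.values())) == 1:  # same value in every case -> no variation
        print(f"No variation: {variable}")
```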

From my perspective, these kinds of assignments get more prescriptive and less analytical with each tweak I make to them. While I don’t want them to devolve into mindless fill-in-the-blank worksheet exercises, it seems that is where they are headed.

A Lesson Learned About Team Research Projects

Looking at student performance in the 2020-2021 academic year, I see evidence that team research projects due at the end of the semester can’t be scaffolded solely around individually-graded assignments completed throughout the semester. For example, in my Middle East politics course, each student shared four individually-completed assignments with their teammates for use in their team’s historical timeline. In my research methods course, there were ten individual assignments that teammates were supposed to share with each other as drafts of sections of team research reports. While this approach does decrease free riding and encourage collaboration, it apparently does not ensure high quality research in the final product. Four of the five timelines that teams created in the Middle East course lacked mention of significant events. None of the four teams in the research methods course collected information from coffee farmers, processors, or distributors in Central America, despite my instructions to do so, nor did the final reports resemble the industry exemplars I had provided.

It seems that in students’ minds, my formative assessment of their individual work is totally unconnected to the summative assessment of their collaborative work. I probably need to break the team project into discrete, graded chunks, with each chunk layered on top of some of the individual assignments. Teams can use the feedback they receive on each successive chunk of the project to improve the quality of the final product.

My Latest Iteration of Teammate Evaluations

A brief note about end-of-semester teammate evaluations:

I again used Google Forms to distribute a survey for students to evaluate each other’s contributions to team projects, but I changed how I calculated this component of the course grade. Each student had twelve points to distribute across all team members, including themselves. The more valuable a person’s contribution to the team project, the more points that person was supposed to get. People who made equivalent contributions could have been awarded the same number of points, and if a person was judged as having made no contribution at all, he or she could have been given zero points.

When the Google Form closed, I computed an average number of points received for each student. I then divided this mean score by twelve and multiplied it by fifty (the teammate evaluation was worth 50 out of 1,000 points in the course). I used this formula because teams were larger than in previous semesters, and I assumed a few members of each team would do the heavy lifting with the rest doing little or no work. If the resulting number was fifty or higher, a student earned the full fifty points toward his or her course grade. If the result was below ten, the student earned nothing. For any number in between, I rounded to the nearest ten.
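
To make the arithmetic concrete, here is a rough sketch of the calculation in Python. The function name and sample scores are made up; only the scaling and rounding rules follow what I described above.

```python
# A rough sketch of the teammate-evaluation arithmetic: average the points a
# student received, scale by 50/12, award the full 50 at fifty or above, zero
# below ten, and round everything in between to the nearest ten.

def evaluation_score(points_received: list[int]) -> int:
    """Convert the points a student received from teammates into course points."""
    mean_points = sum(points_received) / len(points_received)
    scaled = mean_points / 12 * 50  # evaluation worth 50 of 1,000 course points
    if scaled >= 50:
        return 50
    if scaled < 10:
        return 0
    return round(scaled / 10) * 10  # round to the nearest ten

# Example: a student rated by four teammates (including themselves)
print(evaluation_score([3, 4, 2, 3]))  # mean 3.0 -> scaled 12.5 -> 10 points
```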

This past semester, I had a total of thirty-seven undergraduate students in two courses. Only thirty completed the evaluation. Four of the thirty completed the survey incorrectly — the scores they distributed across team members did not sum to twelve. I deleted their responses, as I had specified in email and in the Google Form’s directions.

In sum, approximately thirty percent of my students did not perform a simple task that could have benefited their own course grades.

As I speculated at the end of the Fall 2020 semester, I was able to label the teammate evaluation as being worth zero points on Canvas. Maybe that partially explains why no students have (so far) complained about this portion of the course grade.

Identifying How Assumptions Meet Reality

Four weeks until classes end, and I’m noticing some of the same problems in my comparative politics course that I saw a year ago. First, some students are not able to consistently locate a journal article’s main thesis, even though I simplified the assignment’s format, students discuss their work among themselves when creating presentations about the articles, and I review the organization of each article after the presentations. Second, students aren’t sharing notes about assigned articles despite my adaptation of Helen Brown Coverdale’s study huddle system. Since collaborative notetaking with Google Docs didn’t work, I assumed that students would at least share their completed article analyses with their green or red teammates. Nope. While the analyses are graded as individual assignments, the “sharing” aspect is not, so probably students see no reason to do it.

Seven years ago, I wrote about mistakenly assuming that students knew the meaning of “methods” in social science research. A similar problem might be occurring with “thesis.” Although students have probably heard the term since ninth grade English, maybe they still don’t really understand it. Or, even if they do understand, they could be unwilling to make the effort required to identify what and where it is in a text. As a more direct colleague put it, the problem can originate with stupidity, laziness, or a combination of both.

A solution might be to ask students to find where in the body of an article its title has been converted into a cause and effect statement. For example, I recently assigned “Authoritarian Survival: Why Maduro Hasn’t Fallen” by Javier Corrales (Journal of Democracy 31, 3). The thesis is essentially “Maduro hasn’t fallen because . . .”

As for the unwillingness of students to share their ideas about readings via collaborative notetaking, I would not be surprised if this stems from being taught since early childhood that reading is an isolated rather than a social activity. That is, the ideal reading environment involves a room of one’s own, a blanket, a cup of tea, and possibly a cat, to ponder silently the meaning of what one has just read. This technique works fine for people like us, because academia self-selects for the highly literate. But the average undergraduate student probably doesn’t really know how to think about what they’re reading while they’re reading it. According to colleagues who know much more about this subject than I do, if reading is instead a public activity, the metacognition that occurs in the truly literate becomes visible and transferable to others. Social interaction facilitates a better understanding of the text.

Luckily we live in an era of digital tools that allow a reader to easily interact with a text and with other readers. One of these tools is Perusall, which a couple of English professors on my campus have been raving about. I have asked our IT support unit to link Perusall to my Canvas account so that I can start experimenting with it, hopefully before the semester ends. If that happens, I’ll report my observations here.

Forward, Not Back

A brief response* to Simon’s last post about not slipping back into old habits:

The Chronicle of Higher Education recently ran this article (paywalled) about how the pandemic might have permanently altered campuses. People interviewed for the article believed that the use of a hybrid delivery format that includes online, asynchronous components will persist, because even full-time, traditionally-aged students like the convenience and flexibility.

Another reason to continue the practice: hybrid design enables students to get out of a passive environment and into more active experiences. For example, last fall my previously 100% face-to-face course on economic development went on Zoom. I do not teach by lecturing in fifty-minute increments three times per week; students in my classes experience a lot of interaction with peers. However, many of these activities can’t be easily replicated in an online environment given the usual tools I have at my disposal. So as a substitute I created a series of assignments in which students documented evidence of economic inequality in the local community with photos and presented their findings in online asynchronous discussions. The assignments met my intended objectives and the students were really engaged, so I’m going to use them again in Fall 2021, when (most likely) the course will once again be delivered face-to-face on campus. But the assignments can’t be completed effectively in fifty-minute time blocks. My proposed solution? Just cut the students loose by not holding class on certain days. My hope is that the institutional hype about maximizing student learning matches reality, and my plan isn’t quashed by higher-ups.

If you’re interested in modifying any of your courses in a similar fashion, the Aga Khan University has produced an excellent step-by-step guide on the development, design, and implementation of online courses. The guide also applies to hybrid courses.

*The title of this post is also a very obscure reference to the phrase uttered by Muammar Qaddafi during the now decade-old Arab Spring — إلى الأمام (“forward”) — immortalized in Zenga Zenga.

Changing a Comparative Politics Course, Part 4

My previous post explained how students will complete the template that identifies possible causes of either increased democracy or increased authoritarianism in two nation-states from 2000 to 2020. The next step in this project is for students to work in teams to produce qualitative comparative analyses. Here are my instructions for this collaborative assignment:
