Perusall 4

Another reflection on last semester’s comparative politics course . . .

I noticed a loose association between final course grades and students’ Perusall activity, so the costs and benefits of engaging with Perusall assignments ought to be transparent to students.* Another plus: because Perusall scores student activity automatically with an AI algorithm, the assignments are basically “set and forget” on my end. This was very convenient when I didn’t have the time or inclination to read all of the students’ annotations on certain assignments.

I’m so pleased with how Perusall functions that I’m going to incorporate it into my fall semester undergraduate courses.

Previous posts on Perusall:

Perusall

Perusall 2

Perusall 3

*With only twelve students in the course by the end of the semester, I’m not going to bother calculating correlation coefficients.
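For a larger class, computing the correlation would be straightforward. Here is a minimal sketch; the grade and activity numbers are invented for illustration and do not come from the course.

```python
# Hypothetical data for twelve students: final course grades and
# Perusall activity scores. All values are invented.
grades   = [88, 92, 75, 81, 95, 70, 85, 78, 90, 83, 68, 87]
perusall = [82, 90, 70, 80, 96, 60, 84, 75, 88, 79, 65, 85]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

print(round(pearson_r(grades, perusall), 2))
```

With a sample this small, of course, any coefficient would be too noisy to mean much, which is the point of the footnote above.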

Serendipity in Research Methods

Sometimes it is easier to demonstrate real-world relevance than at other times.

Last week students in my research methods course read Charles Wheelan, Naked Statistics, Ch. 12, and Ashley A. Smith, “Students Taking More Credit Courses and Introductory Math Faring Well,” Inside Higher Ed, 7 December 2018.

They then had to answer this question: What mistakes are Nevada officials making with data about community college students?

As written, the Inside Higher Ed story describes people who should know better falling victim to omitted variable bias and confusing correlation with causation. Although I might be making similar mistakes in evaluating in-class discussion about the assignment, I think that students found it more interesting than most because the assignment was about other students.

Soon afterward, two similar items came across my radar:

Students prefer mixing and matching online with on-campus courses.

Common premises about college students are wrong.

I shared these with my students, as additional examples of analyzing (or not) data about their peers.

The interrelation between attendance and writing assignment in a PBL course

This guest post comes from Patrick Bijsmans (Maastricht University) and Arjan Schakel (University of Bergen)

In one of his recent contributions to this blog, Chad asks why students should attend class. In his experience

[C]lass attendance and academic performance are positively correlated for the undergraduate population that I teach. But I can’t say that the former causes the latter given all of the confounding variables.

The question of whether attendance matters often pops up, reflected in blog posts, such as those by Chad and by Patrick’s colleague Merijn Chamon, and in recent research articles on the appropriateness of mandatory attendance and on student drop-out. In our own research we present strong evidence that attendance in a Problem-Based Learning (PBL) environment matters, even for the best students, and that attending or not attending class also influences whether international classroom exchanges benefit student learning.

Last year we reported on an accidental experiment in one of Patrick’s courses that allowed us to compare the impact of attendance and the submission of tasks in online and on-campus groups in Maastricht University’s Bachelor in European Studies. We observed that attendance appeared to matter more for the on-campus students, whereas handing in tasks was important for the online students.

This year the same course was again taught fully on-campus, although students were allowed to join online when they displayed symptoms of or had tested positive for Covid-19 (this ad hoc online participation was, unfortunately, not tracked). We repeated the same analysis, and there are some notable conclusions to be drawn.

In the first-year BA course that we looked at, students learn how to write a research proposal (see here). The course is set up as a PBL course, so it does not come as a big surprise that attendance once again significantly impacted students’ chances of passing the course.

Continue reading “The interrelation between attendance and writing assignment in a PBL course”

Perusall 3

I decided to survey my comparative politics class on their opinions about Perusall after the first exam. Of a total of thirteen students, only eight were in class on the day of the survey, so the results are in no way statistically representative. But here they are anyway. Each survey item was on a five-point scale, with 1 equal to “strongly disagree” and 5 as “strongly agree.”

Statement: average score

Reading other people’s annotations helps me understand assigned readings: 4.1
The university should continue to offer Perusall as an option for undergraduate courses: 3.2
I find Perusall difficult to use: 2.4
I’m more likely to read assigned journal articles that are on Perusall: 3.3
Perusall helped me complete reading responses: 3.6
Perusall helped me study for the exam: 3.4

No obvious warning signs in the results. And my main objective in using Perusall — to increase students’ understanding of assigned readings — was the statement with which they most strongly agreed.

The class has averaged 80% on Perusall assignments so far. In my opinion, this is a sign that Perusall’s assessment algorithm fairly evaluates the quality of students’ interaction with assigned readings. Since the marking process involves no effort on my part, it’s a win-win situation. I’m now thinking about how I can incorporate Perusall into other courses.

Other posts in this series:

Perusall

Perusall 2

Perusall 2

As I noted in my first post about Perusall and in previous comments about teaching comparative politics, students have not demonstrated a sufficient level of engagement with or understanding of journal articles I’ve assigned. While collaboratively annotating journal articles ought to help solve this problem, I’m hoping to make the learning benefits of the process more transparent to students by connecting each Perusall assignment to one of my traditional reading responses.

Here is the prompt for all of the Perusall assignments:

Annotate the article to answer these questions:

  1. Article’s subject—what is the question, puzzle, or problem examined?
  2. What and where is the thesis?
  3. What are the independent variables (causes) and how are they examined?
  4. How are the independent variables related to the dependent variable (effect)?
  5. What is the conclusion of the author(s)?

Here is an example of a reading response — the journal article in the Perusall assignment is at the top:

Why did the Arab Spring “succeed” in Tunisia but “fail” in Egypt and Libya?

The Perusall annotations and the reading response are due an hour before the start of the class in which the material will be discussed.

In today’s class, the first of the semester, students will be doing an ungraded practice run at using Perusall. The first graded Perusall assignment, along with its associated reading response, is due Wednesday morning. We’ll see how this goes.

Perusall

When the spring semester starts, I’ll be using Perusall for the first time, in my comparative politics course. I decided to finally experiment with it for three reasons. First, my previous attempts at getting students to engage in collaborative notetaking have mostly failed. Second, as I mention in that linked post, a couple of my colleagues have raved about Perusall’s ability to turn reading into a social learning process. Third, resiliency is as important as ever when it comes to course design. Given the pandemic and associated mitigation protocols, there is the chance that some or all of my students will be absent from the physical classroom at random points during the semester. Perusall allows students to engage with course content and each other asynchronously online.

I found it easy to set up Perusall by following these basic instructions (on my campus, Perusall has been administratively connected to all Canvas course shells, so there is no need for individual faculty members to install the LTI app). This brief explanatory video was also helpful. Perusall’s user interface is very intuitive. I set up the course’s article library and associated Canvas assignments in only a few minutes. Here is the end result from the Perusall side:

Notice how the layout is exactly what is shown in the video. It is also the same as what students will see.

Perusall uses an algorithm to machine grade student interaction with each document in the course library, and the algorithm’s output can be synced back to the Canvas gradebook. This means readings can become auto-graded Canvas assignments. Details on this and more are in the instructions I linked to above.

I will report on how well all of this has worked once the semester is underway.

Developing a Podcast Assignment

Today we have a guest post from John McMahon, Assistant Professor of Political Science at SUNY Plattsburgh. He can be contacted at jmcma004 [at] plattsburgh [dot] edu.

Podcast assignments make students the creators of political knowledge, allow them to actively research subjects of interest, and offer them the opportunity to improve their writing, listening, and speaking abilities. The format is more interesting and authentic to students than that of traditional assignments, in part because of the popularity of podcasts among people under the age of thirty-five.

In my experience, there are two especially salient components of podcast assignment design. First, it is necessary to be intentional and clear with oneself and one’s students about the assignment’s required elements. A podcast’s political content, length, required sound elements (clips, effects, music, etc.), type of interview subjects (if any), how its creation is scaffolded—all require careful consideration. The requirements of the assignment need to match course learning objectives.

Second, do not worry too much about the technology. Instructional technology and library staff usually can provide support and resources, from workshops to USB microphones to campus recording studios. If needed, students can simply use their phones to record audio. Audio editing tools like Audacity and GarageBand are easy for students to learn, and instructional videos on podcast creation abound online. In my experience, students have also found Spotify’s Anchor to be an easy platform to use.

Podcast assignments are adaptable to a range of courses. I have used them successfully when teaching political theory and American politics at the 100-, 200-, and 300-level. Crucially, as we enter another pandemic academic term, this kind of assignment is suitable for online, hybrid, and in-person courses, including those that change modality in the middle of the term.

Instructions for one of my podcast assignments are available on APSA Educate, and I have published an article on student podcasting in the Journal of Political Science Education.

Asynchronous Field Research Exercises

To answer Maria’s question in her comment on my last post about syllabus design, here is an example of one of my field research assignments:

To earn full credit: upload your first post by 6:00 p.m. Wednesday and respond to at least one other person’s post by 9:00 a.m. Friday; see the rubric.

  1. Read:
  2. Go to The Opportunity Atlas at https://www.opportunityatlas.org. Click “Begin Exploring.” Click “OK.” Enter Newport’s zip code in the search bar at the upper left. Make sure “Household Income” is selected for “Outcomes.” Select a street in the blue area and a street in the red area.
  3. Walk a portion of the two streets that you selected. What do you see about the built environment that you think relates to economic opportunity in these locations?
  4. Take a photo that indicates what you notice; post the photo and your observations in the discussion. Identify the location shown in the photo. Cite at least one of the readings in your first discussion post.

Here is my rubric for grading the online discussion:

Formative Assessment: Abort, Retry, Fail?

Two can play this game

Something of a response to Simon’s June 1 post on transitioning from pedagogical theory to teaching practice: he wrote, in part, “assessment is always formative and should be always linked to the feedback and adaptation process.” In theory, I agree. In practice, while I can lead students to feedback, I am still unable to make them read it.

As I’ve written before, the Canvas LMS has a “student viewed” time stamp feature that shows whether a student has looked at my feedback on an assignment — my comments and a tabular rubric with cells that I’ve highlighted — after I have graded it. Judging by the absence of these time stamps, many students simply ignore this information. An example, with data: my annual spring semester course on comparative politics. In 2018 and 2019, I taught this course in the physical classroom. In 2020, the latter half of the course was online because of the coronavirus pandemic. In 2021, the course was delivered online for the entire semester. For each iteration, I tallied the number of students who looked at the first three, the third to last, and the second to last reading responses after I graded them. Results are below. N is the number of students in the class; not every student in a class completed every assignment. The eyeball columns indicate how many students viewed an assignment after I had graded it; the eyeball with a slash indicates the opposite.
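The tally itself is simple to reproduce. Here is a minimal sketch assuming a list of “student viewed” time stamps for one graded assignment, where None marks a student who never opened my feedback; the time stamps are invented, not exported from Canvas.

```python
# Invented "student viewed" time stamps for one graded assignment.
# None means the student never opened the graded feedback.
viewed_at = ["2021-02-03T14:05", None, "2021-02-03T16:40", None,
             None, "2021-02-04T09:12", None, "2021-02-05T11:30"]

viewed = sum(ts is not None for ts in viewed_at)      # eyeball column
not_viewed = len(viewed_at) - viewed                  # slashed-eyeball column

print(viewed, not_viewed)  # prints: 4 4
```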

While I can understand students not bothering to revisit assignments that they earned full marks on, I don’t understand why students who earn less than full marks frequently ignore information that would allow them to do better in the future. Anyone have an explanation?

Possible Improvement To Team Research Projects

A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:

In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.

I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:

No more than two pages per part, which would limit the final report to eight pages.

Each part must include at least one data visualization — a chart or graph.

No photographic images.