Syllabus Quiz In Another Form: Annotation

This idea comes from Matt Reed at Inside Higher Ed, who in turn got it from Emily M. Farris at Texas Christian University: have students annotate, in ABC fashion, your course syllabus at the end of the semester.

I’m going to go a few steps further for the upcoming semester:

First, instead of my usual quiz on the syllabus at the beginning of the semester, I’ll have students annotate it on Perusall in response to my questions. The assignment should function as a close reading exercise, but it will be machine graded by Perusall.

Second, I’ll create a quiz on the Canvas LMS that will force students to explore the contents of the course’s Canvas shell. It has become apparent that most students only pay attention to the LMS’s “To Do” list of impending assignment deadlines that pops up on their phones. They ignore everything else I put into my course shells, including the “How to Get an A” advice. As with the Perusall assignment on the syllabus, the quiz will be machine graded by Canvas.

Third, I’ll create another Perusall assignment on the syllabus for the end of the semester, to get feedback on what worked and what didn’t, and to remind students of course learning outcomes.

Assignments, Platforms, and AI – Part 2

The follow-up to my last post: a new assignment that I’m calling, not very creatively, the argument analysis. Here are the directions to students:

Choose one of the peer-reviewed journal articles listed in the syllabus. Find an editorial published in the last year in one of the sources listed below that is about the same general subject as the article. List the bibliographic information for the article and the editorial at the top. Then, in only four paragraphs, compare them according to the criteria below. Put the paragraphs in the indicated order and keep each paragraph under 200 words.

Which author: 

1. References the most comprehensive and relevant data? Why?

2. Infers the most valid relationship between cause and effect? Why?

3. Does the best job of refuting counter-arguments? Why?

4. Is the most persuasive to an audience of policymakers? Why?

I then provide a list of online news outlets that students can pull an editorial from.

Possible advantages of this over my old article analysis? First, the compare-and-contrast element forces students to engage in more complex thinking; with the article analysis, students sometimes focused too heavily on summarizing. Second, students engage with a recently published argument aimed at a general audience. Academic journal articles are written for a very narrow audience of specialists, not the people most students will be communicating with after they graduate. Also, most journals whose contents are indexed in databases have moving walls that make their most recent issues inaccessible to students. Third, I’m hoping that students will be able to connect what they write about in the argument analysis to discussion topics, reading responses, and perhaps even future dissertation topics.

Even though the argument analysis is not machine-graded like the Perusall assignments are, I decided to simplify my life with a new rubric. My rubric for the old article analysis:

The rubric for the new argument analysis:

Fewer boxes to check makes the rubric easier for me to use, but its criteria still hit my targets for the assignment.

Assignments, Platforms, and AI – Part 1

The first in a short series of posts on leveraging new technologies to alleviate boredom . . .

After fourteen years, I have decided to abandon the manually graded journal article analysis assignment in my graduate courses. I have integrated Perusall into all of the graduate courses that I teach, and the prompts for my Perusall assignments and the article analysis were the same. While repetition might be the mother of all learning, I’m not very maternal, and this seemed like overkill. Also, while student writing in Perusall assignments is, at least potentially, a conversation with other students, the article analysis was a conversation with just one other person: me. Not very authentic. So the article analysis went into the trash bin. I wanted to replace it with something new and more interesting, for both me and my students. I’ll write about what that new thing is in my next post.

For now, I want to focus on the idea of using machine-graded assignments to make teaching less burdensome for the instructor and more interesting for students. Pre-Perusall, each of my graduate courses consisted of one discussion and two reading responses per week, the article analysis, and a final exam: 23 assessments. Now my courses have one discussion and one reading response per week, two Perusall assignments per week, the new yet-to-be-described assignment, and a final exam. Because Perusall grades its own assignments, I’ve reduced my assessment burden by almost a third while increasing student-to-student interaction.

Engaging Students Through Collaborative Research Projects

Today we have a guest post from Rebecca A. Glazier at the School of Public Affairs at the University of Arkansas at Little Rock (rebecca [dot] glazier [at] gmail [dot] com) and Matthew Pietryka at Florida State University’s political science department (mpietryka [at] fsu [dot] edu).


Many professors are struggling to reach students who are disengaged and burned out. To address these issues and improve student retention, universities are increasingly turning to edtech solutions or big data: everything from predictive analytics to chatbots in discussion boards. These remedies tend to be far removed from students’ daily lives. In contrast, as professors, we are with students in the classroom every day, and that experience often puts us in the best position to know how to engage them.


In a new, open-access article we just published in Education Sciences, “Learning through Collaborative Data Projects: Engaging Students and Building Rapport,” we illustrate how faculty can engage students through collaborative data projects. Rather than relying on top-down university solutions, faculty can use the content of their own courses to involve students in collaborative projects that build rapport and make them feel included and engaged in the course. We see these collaborative data projects as another kind of active learning—getting students thinking outside of the textbook and involved in contributing to a project that is bigger than themselves.

We used data from more than 120 students over two semesters, and our results suggest that most students find these collaborative data projects more enjoyable than typical college assignments. Students also report that the projects make them feel the professor is invested in their learning.

The open-access article provides advice on implementing these projects, as well as the R code used to create individualized reports for the participating students. The individualized reports help develop rapport between the professor and each student, and this programmatic approach allows professors to scale the reports up to classes with hundreds of students. Building rapport and doing active learning are often considered possible only in smaller classes, but our approach demonstrates how they can be done in large classes as well, with significantly positive results.
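For readers who want a concrete sense of what the programmatic approach can look like, here is a minimal sketch in R. It is not the code from the article (that is available at the link below); it assumes a hypothetical class roster stored in a data frame and a parameterized R Markdown template named report_template.Rmd, and it renders one short individualized report per student.

```r
# Minimal sketch, assuming a parameterized R Markdown template
# ("report_template.Rmd") whose YAML header declares params for
# name, responses, and class_total. Not the authors' published code.
library(rmarkdown)

# Hypothetical roster: one row per student, with each student's
# contribution to the collaborative class dataset.
roster <- data.frame(
  student_id = c("s001", "s002"),
  name       = c("Alex", "Jordan"),
  responses  = c(14, 9)  # e.g., observations each student contributed
)

# Render one individualized HTML report per student from the shared template.
for (i in seq_len(nrow(roster))) {
  render(
    input       = "report_template.Rmd",
    output_file = paste0("report_", roster$student_id[i], ".html"),
    params      = list(
      name        = roster$name[i],
      responses   = roster$responses[i],
      class_total = sum(roster$responses)  # lets each report compare the student to the class
    ),
    quiet = TRUE
  )
}
```

Because the loop only touches the roster and the template, the same few lines scale from a seminar of twenty to a lecture course of several hundred students.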

At a time when many faculty members are struggling to engage students, we can take matters into our own hands by designing projects for our classes that draw students in and build rapport with them. It doesn’t take expensive edtech solutions or top-down directives. Mostly, it takes thoughtful pedagogy and prioritizing student connection.

Open Access article link: https://www.mdpi.com/2227-7102/12/12/897.

Recent episode on the Teaching in Higher Ed Podcast on this research: https://teachinginhighered.com/podcast/engaging-students-through-collaborative-research-projects/.

ChatGPT: Possible Responses Crowdsourced from ISA

At ISA a couple of weeks back, I facilitated a Teaching Cafe discussion on the impact of AI and ChatGPT in our classes. Thanks to the Innovative Pedagogy Conference Committee, which generously allocated us space, several colleagues from a variety of institutions stopped by to share their thoughts and ask questions about the ethics, practical responses, and positive aspects of this technology. I’m going to share a few of these responses in case they aid others in thinking through how AI will affect their teaching, with the caveat that AI is advancing at a rapid rate and many of the strategies we discussed will be outdated very quickly.

I’ve categorized our conversation into three themes: how to mitigate the impact of AI in our classes; ethics and academic honesty; and leveraging AI to teach.


How Do I Get An “A”?

Last summer, when building LMS sites for my fall semester undergraduate courses, I inserted a link titled “How do I get an ‘A’,” assuming it would get students’ attention. The link was to this short video about the importance of deadlines.*

I decided to expand on this idea for the spring semester and beyond, with an LMS page that contains the link to the video and this advice:

  • The due date is not the do date. Instructions and deadlines for all course assignments are available in the syllabus and on Canvas from the beginning of the semester. Plan ahead and complete assignments several days before they are due.
  • See the syllabus for the location of reading assignments. Ask librarians how to access these materials at no cost. There are computers available for this in the library and at other campus locations.
  • Revise your writing to eliminate as many unnecessary words as possible. Bad writing is an indication of sloppy thinking. If you are not familiar with the revision process, use the Writing Center.
  • Read the feedback on the quality of your work that is contained in assignment rubrics and my comments. It is not possible for me to care more about your learning than you do.
  • Sleep, eat, and exercise. Sufficient quantities of each are necessary for learning.

While the above can be construed as facilitating more learned helplessness among students, I’m finding that my syllabus quizzes just aren’t doing the job of communicating some of the most basic academic aspects of being a college student.

*Courtesy of TikTok via Reddit. Not something I created.

The Death of Curiosity? Part 2

Continuing to review my fall semester . . .

The forecasting project might have helped students learn Middle East politics and history. I’d rate it as a success on that front. As to whether their decision making skills have improved from using the CHAMP method, who knows?

At five different points in the semester, students forecasted the likelihood of these events occurring by December 9:

  • The value of the Turkish lira against the U.S. dollar decreases to less than 22:1.
  • In Iran, the Assembly of Experts names a new Supreme Leader.
  • An anti-government protest in Cairo results in at least twenty demonstrators arrested, injured, and/or killed.
  • The president or prime minister of Lebanon is assassinated.
  • Turkey ends its occupation of Syrian territory.

None of these events happened before the deadline, but that was ok given my purposes for the project. Here are the class’s predictions, with average percentage probability on the y-axis:

I need to tweak some of the project’s components. For example, the prompt for the last individual assignment — assess how your forecasts have been affected by cognitive biases — included this statement:

“People like Daniel Kahneman, Charles Wheelan, Tim Harford, Gerd Gigerenzer, and Nassim Taleb have written about cognitive biases and how to counter their effects.”

A few students did not discuss cognitive biases at all. Others clearly did a bad job of Googling “cognitive biases” and what the above individuals have written about them. In the future I’ll need to assign a specific reading on the topic. I see this as another manifestation of students’ inability or unwillingness to find information that I don’t put right in front of them.

Similarly, I either need to discard the in-class team presentations or formally assess them. Overall, they were of poor quality. Students need an explicit, rigid template for constructing presentations, and they will follow the template only if the presentations are graded. Asking students to give informal, ungraded presentations simply doesn’t work. Given that this country has raised a generation of children who frequently suffer from anxiety disorders, I might need to institute a rule that credit for presentations only goes to the students who deliver them, with the condition that each member of a team can present if they so choose. I already design my courses to provide students with “multiple paths to success,” so optional-yet-graded presentations are not much of a complication for me.

I administered my own course evaluation at the end of the semester. Here are the results, from 20 out of a class of 22 students, for questions with a scale from “strongly agree” (5) to “strongly disagree” (1):

  • The forecasting project improved my ability to analyze political events in the Middle East – 3.9
  • I am now better able to use forecasting as a decision making tool in my own life – 3.7
  • More courses should include training in decision making skills like forecasting – 3.4

I would like the average scores on the second and third items to be higher.

Final comment: the last two reading response assignments before the final exam asked students to respond to “Will Lebanon/Syria still be a single sovereign state in 2030?” I did not realize until the last week of classes that these questions dovetail perfectly with the forecasting project, and that I should somehow integrate the CHAMP method and reading responses so that students get more opportunities to hone their decision making skills.

Update on CHAMP Forecasting Project

My impressions so far of the political forecasting project (described here, here, and here) that I created for the undergraduate course on the Middle East that I’m teaching this semester:

The quality of student writing on the different elements of CHAMP has improved over the first three iterations of the assignment. Their analyses are more specific and relevant to what they are trying to predict. I’m hoping that this is a sign of students’ increasing interest in the project. Citation of sources remains problematic for some students. No, a string of URLs is not a bibliography.

The quality of no-stakes classroom presentations has similarly improved. After each forecast, one member of each team presents his or her team’s collective conclusions without using any visual aids. The first two rounds of presentations were terrible. The third round was adequate. I’m now wondering, if I use this project in the future, whether I should grade the presentations against a simple rubric and introduce a rule that a different team member presents each round.

As for the forecasts themselves, here are the average probabilities assigned to each event for the three rounds of predictions students in the class have made so far:

Perusall 7

A tip about using Perusall:

For a fall semester course assignment, I scanned a book chapter and uploaded the resulting PDF to Perusall. I discovered that I could not accurately highlight any portion of the PDF using Perusall’s Annotate Text tool. I could, however, highlight rectangular areas of text using the Annotate Figure tool, shown below with the green underline. Apparently Perusall treats the PDF of a scanned document as an image file. I created a note in the assignment telling students which annotation tool they would need to use.

Previous posts in this series:

Perusall

Perusall 2

Perusall 3

Perusall 4

Perusall 5

Perusall 6


Perusall 6

A brief note about Perusall course settings:

I put Perusall assignments into an online graduate course that I’m currently teaching. For the course’s first two weeks, students’ assignment scores were not syncing with the Canvas gradebook, nor were they visible to students in Perusall, until after the assignment deadline had passed. I had to manually release scores for each assignment. Perusall was not functioning as it had with my undergraduate courses in the spring semester, when assignment scores were always visible to students and were updated continuously in real time.

I eventually found the cause of the problem. I had not selected when to release scores to students in the settings page of the instructor’s dashboard:

Either this setting’s default had changed after the spring semester from “immediately, as students submit work” to one of the other options, or I had forgotten that I needed to change it when I was building the course on Perusall. Either way, the problem was easily solved. To this absent-minded professor, it was another demonstration of how easy Perusall is to use.

Previous posts in this series:

Perusall

Perusall 2

Perusall 3

Perusall 4

Perusall 5
