From the prompt “Complain to my husband about not washing dishes thoroughly in the style of Jane Austen”:
My dearest husband,
I hope this letter finds you well. It is with some vexation that I write to you regarding a matter that has been weighing heavily on my mind: the washing of the dishes.
As you are no doubt aware, this is a task that we both undertake in turn, and one which is essential to the smooth running of our household. And yet, I cannot help but notice that your efforts in this regard have been somewhat lacking of late.
I have observed with no small amount of dismay that you have not been washing the dishes thoroughly, leaving behind a residue of food that is most unappetizing. This, I must confess, has caused me no small amount of discomfort and concern, as it is not only unhygienic, but also highly unpleasant.
I trust that you will take this matter seriously, and make a concerted effort to improve the quality of your dishwashing in the future. I am sure that with a little more attention to detail, you will soon become as proficient in this task as I am.
At ISA a couple of weeks back, I facilitated a Teaching Cafe discussion on AI and ChatGPT’s impact on our classes. Thanks to the Innovative Pedagogy Conference Committee generously allocating us space, several colleagues from a variety of institutions stopped by to share their thoughts and ask questions about the ethics, practical responses, and positive aspects of this technology. I’m going to share a few of these responses in case they aid others in thinking through how AI will affect their teaching, with the caveat that AI is advancing at a rapid rate and many of the strategies we discussed will be outdated very quickly.
I’ve categorized our conversation into three themes: how to mitigate the impact of AI in our classes; ethics and academic honesty; and leveraging AI to teach.
For a fall semester course assignment, I scanned a book chapter and uploaded the resulting PDF to Perusall. I discovered that I could not accurately highlight any portion of the PDF using Perusall’s Annotate Text tool. I could, however, highlight rectangular areas of text using the Annotate Figure tool, shown below with the green underline. Apparently Perusall reads the PDF of the scanned document as an image file. I created a note in the assignment to inform students about which annotation tool they would need to use.
I put Perusall assignments into an online graduate course that I’m currently teaching. For the course’s first two weeks, students’ assignment scores were not syncing with the Canvas gradebook, nor were they visible to students in Perusall, until after the assignment deadline had passed. I had to manually release scores for each assignment. Perusall was not functioning as it had with my undergraduate courses in the spring semester, when assignment scores were always visible to students and were updated continuously in real time.
I eventually found the cause of the problem. I had not selected when to release scores to students in the settings page of the instructor’s dashboard:
Either this setting’s default had changed after the spring semester from “immediately, as students submit work” to one of the other options, or I had forgotten that I needed to change it when I was building the course on Perusall. Either way, the problem was easily solved. To this absent-minded professor, it was another demonstration of how easy Perusall is to use.
I’ve begun integrating Perusall into my online, asynchronous graduate international relations courses. First up is a course in our master’s degree program that starts next month. I’ve chosen to start with this one because I typically assign an analysis of a peer-reviewed journal article in lieu of a midterm exam, and the questions in my Perusall assignments for undergraduates mirror my instructions for the article analysis. Regular Perusall assignments will give students opportunities to develop skills they will need for the article analysis.
While practice improves performance generally, in this case I see it as particularly important. A growing proportion of our M.A. students are undergrads who have opted for a fifth-year master’s degree. They begin taking graduate courses in their fourth year of college. My four-person department only has about ten political science majors per year, but given the organization of the department’s curriculum, I encounter only about half of these majors in the classroom prior to their graduation. This means a wide variation in content knowledge and writing ability among the majors who enter the five-year program and first pop up in my M.A. courses. Making the situation even more complicated: the two-year M.A. students are often mid-career military officers who have first-hand international experience and are very academically talented.
These courses are seven weeks long. Previously I assigned an extensive list of readings, two writing prompts, and discussion board participation each week. I’ve replaced one of the weekly writing prompts with two Perusall assignments. I’m hoping that this change will help build a sense of community among the students, which is more difficult to achieve in an asynchronous online environment than it is in a physical classroom. At minimum the use of Perusall should cause students to notice the superior skills of some of their classmates and stimulate them to increase their own efforts.
I decided to survey my comparative politics class on their opinions about Perusall after the first exam. Of a total of thirteen students, only eight were in class on the day of the survey, so the results are in no way statistically representative. But here they are anyway. Each survey item was on a five-point scale, with 1 equal to “strongly disagree” and 5 as “strongly agree.”
Reading other people’s annotations helps me understand assigned readings.
The university should continue to offer Perusall as an option for undergraduate courses.
I find Perusall difficult to use.
I’m more likely to read assigned journal articles that are on Perusall.
Perusall helped me complete reading responses.
Perusall helped me study for the exam.
No obvious warning signs in the results. And my main objective in using Perusall — to increase students’ understanding of assigned readings — was the statement with which they most strongly agreed.
The class has scored on average 80% on Perusall assignments so far. In my opinion, this is a sign that Perusall’s assessment algorithm fairly evaluates the quality of students’ interaction with assigned readings. Since the marking process involves no effort on my part, it’s a win-win situation. I’m now thinking of how I can incorporate Perusall into other courses.
When the spring semester starts, I’ll be using Perusall for the first time, in my comparative politics course. I decided to finally experiment with it for three reasons. First, my previous attempts at getting students to engage in collaborative notetaking have mostly failed. Second, as I mention in that linked post, a couple of my colleagues have raved about Perusall’s ability to turn reading into a social learning process. Third, resiliency is as important as ever when it comes to course design. Given the pandemic and associated mitigation protocols, there is the chance that some or all of my students will be absent from the physical classroom at random points during the semester. Perusall allows students to engage with course content and each other asynchronously online.
I found it easy to set up Perusall by following these basic instructions (on my campus, Perusall has been administratively connected to all Canvas course shells, so there is no need for individual faculty members to install the LTI app). This brief explanatory video was also helpful. Perusall’s user interface is very intuitive. I set up the course’s article library and associated Canvas assignments in only a few minutes. Here is the end result from the Perusall side:
Notice how the layout is exactly what is shown in the video. It is also the same as what students will see.
Perusall uses an algorithm to machine grade student interaction with each document in the course library, and the algorithm’s output can be synced back to the Canvas gradebook. This means readings can become auto-graded Canvas assignments. Details on this and more are in the instructions I linked to above.
I will report on how well all of this has worked once the semester is underway.
A few weeks ago, I wrote about using one technological platform to circumvent the design constraints of another. Here is another, more serendipitous, example of finding a technological means for achieving an instructional objective.
For an upcoming undergraduate course, I decided to make Twine stories part of my exams. My previous posts on Twine for a team writing project are here, here, and here. (Hard to believe it’s been seven years since I last used it — how time flies.) For now, it is only important to know that Twine is freeware that enables users to create interactive texts in the form of HTML files.
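For readers who haven’t seen Twine before, here is a minimal sketch of how a story is structured, written in Twee notation (the plain-text format that Twine compiles into a playable HTML file). The passage names and scenario below are invented for illustration, not taken from my exams:

```
:: Start
The ambassador hands you a sealed cable. Do you
[[open it immediately|Open Cable]] or [[wait for instructions|Wait]]?

:: Open Cable
The cable orders an immediate withdrawal. The plot branches from here.

:: Wait
Hours pass with no word from the capital. A different branch begins.
```

Each `:: Name` line begins a passage, and `[[link text|Target]]` creates a clickable link to another passage, which is what lets students extend a plot line by adding passages of their own.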
I wanted my exams to each have two parts that students complete in sequence — first, a series of multiple-choice questions on concepts; second, an essay-type question in which students demonstrate their ability to apply the same concepts by extending a Twine’s plot line. It is fairly easy (if tedious) to create multiple-choice test questions in the Canvas LMS. One can also set a content module to require that students complete each item in the module in order. But initially I didn’t know how to include the Twine story for each exam’s second part.
Probably all of us have encountered the constraints of educational technology — in a particular situation, it doesn’t do quite what we want it to do, so we try to figure out a workaround. Here is one example:
For the coming academic year, my undergraduate students will complete multiple metacognitive exercises that will supply me with data for some pedagogical research. The exercises consist of surveys that ask students to evaluate the effectiveness of their study habits before and after exams (I’ll describe this in detail in a future post).
Initially, I tried creating these surveys in the Canvas LMS quiz tool, because I can set Canvas to automatically reward students with a certain number of points if they complete a survey. I find point rewards to be necessary because most of the undergraduates I teach won’t do anything unless it has a transparent effect on their course grade. However, I rapidly hit several obstacles — e.g., as far as I can tell, one can easily duplicate an “assignment” in Canvas, but not a “quiz.”
In contrast, it is ridiculously easy to copy, rename, and revise survey instruments in Google Forms. But Google Forms isn’t connected to the Canvas gradebook, and I did not want to have to repeatedly jump between Google Forms and Canvas to record points each time a student completed a survey. Also, I prefer putting as much of my course content as possible in Canvas, because invariably, the more I expect students to use different technological platforms, the more emails I receive about their learned helplessness.