ChatGPT: Possible Responses Crowdsourced from ISA

At ISA a couple of weeks back, I facilitated a Teaching Cafe discussion on AI and ChatGPT’s impact on our classes. Thanks to the Innovative Pedagogy Conference Committee generously allocating us space, several colleagues from a variety of institutions stopped by to share their thoughts and ask questions about the ethics, practical responses to, and positive aspects of this technology. I’m going to share a few of these responses in case they aid others in thinking through how AI will affect their teaching, with the caveat that AI is advancing at a rapid rate and many of the strategies we discussed will be outdated very quickly.

I’ve categorized our conversation into three themes: how to mitigate the impact of AI in our classes; ethics and academic honesty; and leveraging AI to teach.


How Do I Get An “A”?

Last summer, when building LMS sites for my fall semester undergraduate courses, I inserted a link titled “How do I get an ‘A’,” assuming it would get students’ attention. The link was to this short video about the importance of deadlines.*

I decided to expand on this idea for the spring semester and beyond, with an LMS page that contains the link to the video and this advice:

  • The due date is not the do date. Instructions and deadlines for all course assignments are available in the syllabus and on Canvas from the beginning of the semester. Plan ahead and complete assignments several days before they are due.
  • See the syllabus for the location of reading assignments. Ask librarians how to access these materials at no cost. There are computers available for this in the library and at other campus locations.
  • Revise your writing to eliminate as many unnecessary words as possible. Bad writing is an indication of sloppy thinking. If you are not familiar with the revision process, use the Writing Center.
  • Read the feedback on the quality of your work that is contained in assignment rubrics and my comments. It is not possible for me to care more about your learning than you do.
  • Sleep, eat, and exercise. Sufficient quantities of each are necessary for learning.

While the above can be construed as facilitating more learned helplessness among students, I’m finding that my syllabus quizzes just aren’t doing the job of communicating some of the most basic academic aspects of being a college student.

*Courtesy of TikTok via Reddit. Not something I created.

The Death of Curiosity? Part 2

Continuing to review my fall semester . . .

The forecasting project might have helped students learn Middle East politics and history. I’d rate it as a success on that front. As to whether their decision making skills have improved from using the CHAMP method, who knows?

At five different points in the semester, students forecasted the likelihood of these events occurring by December 9:

  • The value of the Turkish lira against the U.S. dollar decreases to less than 22:1.
  • In Iran, the Assembly of Experts names a new Supreme Leader.
  • An anti-government protest in Cairo results in at least twenty demonstrators arrested, injured, and/or killed.
  • The president or prime minister of Lebanon is assassinated.
  • Turkey ends its occupation of Syrian territory.

None of these events happened before the deadline, but that was OK given my purposes for the project. Here are the class’s predictions, with average percentage probability on the y-axis:

I need to tweak some of the project’s components. For example, the prompt for the last individual assignment — assess how your forecasts have been affected by cognitive biases — included this statement:

“People like Daniel Kahneman, Charles Wheelan, Tim Harford, Gerd Gigerenzer, and Nassim Taleb have written about cognitive biases and how to counter their effects.”

A few students did not discuss cognitive biases at all. Others clearly did a bad job of Googling “cognitive biases” and what the above individuals have written about them. In the future I’ll need to assign a specific reading on the topic. I see this as another manifestation of students’ inability or unwillingness to find information that I don’t put right in front of them.

Similarly, I either need to discard the in-class team presentations or formally assess them. Overall, they were of poor quality. Students need an explicit, rigid template for constructing presentations, and students will follow the template only if the presentations are graded. Asking students to give informal, ungraded presentations simply doesn’t work. Given that this country has raised a generation of children who frequently suffer from anxiety disorders, I might need to institute a rule that credit for presentations only goes to the students who deliver them, with the condition that each member of a team can present if they so choose. I already design my courses to provide students with “multiple paths to success,” so optional-yet-graded presentations are not much of a complication for me.

I administered my own course evaluation at the end of the semester. Here are the results — from 20 out of a class of 22 students — for questions with a scale from “strongly agree” (5) to “strongly disagree” (1):

  • The forecasting project improved my ability to analyze political events in the Middle East – 3.9
  • I am now better able to use forecasting as a decision making tool in my own life – 3.7
  • More courses should include training in decision making skills like forecasting – 3.4

I would like the average scores on the second and third items to be higher.

Final comment: the last two reading response assignments before the final exam asked students to respond to “Will Lebanon/Syria still be a single sovereign state in 2030?” I did not realize until the last week of classes that these questions dovetail perfectly with the forecasting project, and that I should somehow integrate the CHAMP method and reading responses so that students get more opportunities to hone their decision making skills.

Update on CHAMP Forecasting Project

My impressions so far of the political forecasting project (described here, here, and here) that I created for the undergraduate course on the Middle East that I’m teaching this semester:

The quality of student writing on the different elements of CHAMP has improved over the first three iterations of the assignment. Their analyses are more specific and relevant to what they are trying to predict. I’m hoping that this is a sign of students’ increasing interest in the project. Citation of sources remains problematic for some students. No, a string of URLs is not a bibliography.

The quality of no-stakes classroom presentations has similarly improved. After each forecast, one member of each team presents his or her team’s collective conclusions without using any visual aids. The first two rounds of presentations were terrible. The third round was adequate. I’m now wondering, if I use this project in the future, whether I should grade the presentations against a simple rubric and introduce a rule that a different team member presents each round.

As for the forecasts themselves, here are the average probabilities assigned to each event for the three rounds of predictions students in the class have made so far:

Perusall 7

A tip about using Perusall:

For a fall semester course assignment, I scanned a book chapter and uploaded the resulting pdf to Perusall. I discovered that I could not accurately highlight any portion of the pdf using Perusall’s Annotate Text tool. I could, however, highlight rectangular areas of text using the Annotate Figure tool, shown below with the green underline. Apparently Perusall reads the pdf of the scanned document as an image file. I created a note in the assignment to inform students about which annotation tool they would need to use.

Previous posts in this series:

Perusall

Perusall 2

Perusall 3

Perusall 4

Perusall 5

Perusall 6

Perusall 6

A brief note about Perusall course settings:

I put Perusall assignments into an online graduate course that I’m currently teaching. For the course’s first two weeks, students’ assignment scores were not syncing with the Canvas gradebook, nor were they visible to students in Perusall, until after the assignment deadline had passed. I had to manually release scores for each assignment. Perusall was not functioning as it had with my undergraduate courses in the spring semester, when assignment scores were always visible to students and were updated continuously in real time.

I eventually found the cause of the problem. I had not selected when to release scores to students in the settings page of the instructor’s dashboard:

Either this setting’s default had changed after the spring semester from “immediately, as students submit work” to one of the other options, or I had forgotten that I needed to change it when I was building the course on Perusall. Either way, the problem was easily solved. To this absent-minded professor, it was another demonstration of how easy Perusall is to use.

Previous posts in this series:

Perusall

Perusall 2

Perusall 3

Perusall 4

Perusall 5

Team Champ

As promised, here is the prompt for the collaborative portion of the forecasting project in my upcoming Middle East course. There are two of these team deliverables — a draft report due after students have submitted the first three of their individual CHAMP assignments, and a final version (shown below) due at the end of the semester. In terms of contribution to course grade, the draft and final versions together are worth only a third of what the five individual assignments are worth. Also, a portion of the course grade will come from teammate evaluations.

Your team is interviewing for a job with Eurasia Group. The interview process requires that your team submit a jointly-written report on your team’s Forecasting Project topic using the CHAMP framework:

  • Comparisons
  • Historical Trends
  • Average Opinion
  • Mathematical Models
  • Predictable Biases

Your team’s final report should apply all five components of the CHAMP framework in a forecast that is no more than five pages of double-spaced 11- or 12-point font text. Do not use quotations of sources in the report. Reference source material using footnotes. See the list of syllabus readings for proper bibliographic format. Footnotes can be in 10-point font. 

Only one member of your team needs to submit the document for this assignment.

Your team’s work will be assessed using the rubric below.

Now I just need to create rubrics for the project’s individual and team assignments . . .

Assign Like a CHAMP

As promised in my last post, here is an example of iterating so that students repeatedly practice the same skills.

As I’ve previously mentioned, I’m putting a forecasting project into my fall semester Middle East course. The project’s constituent assignments will be based on the CHAMP system recommended by people like Phil Tetlock. A brief description of CHAMP is at the end of this Financial Times article by the economist Tim Harford.

My prompt for the first CHAMP assignment reads:

You are interviewing for a job with Eurasia Group. The interview process requires that you submit a forecast on your team’s Forecasting Project topic. The forecast needs to use the CHAMP framework:

  • Comparisons
  • Historical Trends
  • Average Opinion
  • Mathematical Models
  • Predictable Biases

In a one-page, double-spaced, 11- or 12-point font document, answer these questions for the Comparisons portion of your forecast:

What other cases are comparable to this situation?
How do they indicate what will happen this time?

My guiding questions for the other CHAMP assignments are:

Historical Trends

What individuals, groups, and institutions played key roles in similar events in the past?
How are these “power players” likely to influence the current situation?

Average Opinion

What are the experts predicting about this situation?
What is the view that lies in the middle of their assessments? 

Mathematical Models

Are there mathematical models or empirical measures that can be used to gain insight into this situation?
What do these models or measures indicate?

Predictable Biases

How has your thinking been affected by emotion and personal preference?
How have you adjusted your analysis to account for these biases?

I’ll talk about the team-based aspects of this project in a future post.

Perusall 5

I’ve begun integrating Perusall into my online, asynchronous graduate international relations courses. First up is a course in our master’s degree program that starts next month. I’ve chosen to start with this one because I typically assign an analysis of a peer-reviewed journal article in lieu of a midterm exam, and the questions in my Perusall assignments for undergraduates mirror my instructions for the article analysis. Regular Perusall assignments will give students opportunities to develop the skills they will need for the article analysis.

While practice improves performance generally, in this case I see it as particularly important. A growing proportion of our M.A. students are undergrads who have opted for a fifth-year master’s degree. They begin taking graduate courses in their fourth year of college. My four-person department only has about ten political science majors per year, but given the organization of the department’s curriculum, I encounter only about half of these majors in the classroom prior to their graduation. This means a wide variation in content knowledge and writing ability among the majors who enter the five-year program and first pop up in my M.A. courses. Making the situation even more complicated: the two-year M.A. students are often mid-career military officers who have first-hand international experience and are very academically talented.

These courses are seven weeks long. Previously I assigned an extensive list of readings, two writing prompts, and discussion board participation each week. I’ve replaced one of the writing prompts with two Perusall assignments in each week. I’m hoping that this change will help build a sense of community among the students, which is more difficult to achieve in an asynchronous online environment than it is in a physical classroom. At minimum the use of Perusall should cause students to notice the superior skills of some of their classmates and stimulate them to increase their own efforts.

Can Students Predict the Future?

I like each of my undergraduate courses to have at least one authentic skill-based deliverable; for example, historical timelines or ArcGIS storymaps. I’m teaching the Middle East in the fall semester, and, having recently read books like Superforecasting by Tetlock and Noise by Kahneman, Sibony, and Sunstein, I hit upon the idea of having students try to predict events in the region.

I’ll have students assign percentage probabilities to a list of potential events and give them opportunities to periodically adjust these probabilities. At the end of the semester, students will earn points if the events have happened and lose points if they haven’t, with point amounts weighted against the associated percentages. My formula:

((Sum of the percentage probabilities the student chose for each correct forecast) – (Sum of the percentage probabilities the student chose for each incorrect forecast)) * 50, with percentages expressed as decimals.

A hypothetical example:

% Probability | Event | Outcome
100 | The sun rises in the eastern sky | Correct
30 | The capital of Turkey changes from Ankara to Istanbul | Incorrect
70 | Barham Salih remains president of Iraq | Correct
50 | Iran fires an ICBM with a nuclear warhead at Saudi Arabia | Incorrect

The points earned by this student would be: ((1.0 + 0.7) – (0.3 + 0.5)) * 50 = 45.
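For anyone who wants to sanity-check point totals, the formula is easy to express in a few lines of Python. This is just an illustrative sketch — the function name and the list-of-pairs input format are my own invention, not part of any actual gradebook:

```python
def forecast_score(forecasts):
    """Score a set of forecasts.

    `forecasts` is a list of (probability, happened) pairs: the percentage
    probability (0-100) assigned to an event, and whether it occurred.
    Score = (sum of probabilities on correct forecasts minus sum of
    probabilities on incorrect forecasts), as decimals, times 50.
    """
    correct = sum(p for p, happened in forecasts if happened)
    incorrect = sum(p for p, happened in forecasts if not happened)
    # Dividing by 100 converts the percentage sums to decimals.
    return (correct - incorrect) * 50 / 100

# The hypothetical student from the table above:
score = forecast_score([(100, True), (30, False), (70, True), (50, False)])
# ((1.0 + 0.7) - (0.3 + 0.5)) * 50 = 45.0
```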

The negative effect of incorrect predictions should prevent students from trying to game the process by blindly assigning 100% probability to every event on the list. But they won’t actually suffer from incorrect predictions, since the scale for the final course grade tops out at 950 and more than 1,000 points will be available from all assignments, quizzes, and exams. If an event on the list happens before the semester ends, I will simply use each student’s most recent forecast to calculate point values.

Since forecasting accuracy improves when estimates are averaged, I will show the class the aggregate results of students’ forecasts each time they are updated. I’m also going to divide the class into teams, with the members of each team taking the lead — through additional research and class presentations — on becoming more expert on one topic from the list. Potential events that I might put on the list:

  • The value of the Turkish lira against the U.S. dollar decreases to less than 22:1.
  • In Iran, the Assembly of Experts names a new Supreme Leader.
  • An anti-government protest in Cairo results in at least twenty demonstrators arrested, injured, and/or killed.
  • The president or prime minister of Lebanon is assassinated.
  • Turkey ends its occupation of Syrian territory.
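Computing the aggregate to show the class is just an average of the probabilities per event. A minimal sketch — the event labels and numbers below are invented for illustration, not real student forecasts:

```python
from statistics import mean

# One round of hypothetical forecasts: each event maps to the percentage
# probabilities submitted by individual students.
class_forecasts = {
    "Lira falls below 22:1": [60, 45, 75],
    "New Supreme Leader named": [10, 20, 15],
}

# The "wisdom of the crowd" figure shown to the class after each round.
class_average = {event: mean(probs) for event, probs in class_forecasts.items()}
```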

I’m still working out more of the details on this idea, but I think it could be interesting for me and the students.