ChatGPT: Possible Responses Crowdsourced from ISA

At ISA a couple of weeks back, I facilitated a Teaching Cafe discussion on AI and ChatGPT’s impact on our classes. Thanks to the Innovative Pedagogy Conference Committee generously allocating us space, colleagues from a variety of institutions stopped by to share their thoughts and ask questions about the ethics, practical responses, and positive aspects of this technology. I’m going to share a few of these responses in case they aid others in thinking through how AI will affect their teaching, with the caveat that AI is advancing so rapidly that many of the strategies we discussed will quickly become outdated.

I’ve categorized our conversation into three themes: how to mitigate the impact of AI in our classes; ethics and academic honesty; and leveraging AI to teach.

Perusall 5

I’ve begun integrating Perusall into my online, asynchronous graduate international relations courses. First up is a course in our master’s degree program that starts next month. I’ve chosen to start with this one because I typically assign an analysis of a peer-reviewed journal article in lieu of a midterm exam, and the questions in my Perusall assignments for undergraduates mirror my instructions for that article analysis. Regular Perusall assignments will give these students opportunities to develop the skills they will need for the article analysis.

While practice improves performance generally, in this case I see it as particularly important. A growing proportion of our M.A. students are undergrads who have opted for a fifth-year master’s degree; they begin taking graduate courses in their fourth year of college. My four-person department has only about ten political science majors per year, but given the organization of the department’s curriculum, I encounter only about half of these majors in the classroom prior to their graduation. This means wide variation in content knowledge and writing ability among the majors who enter the five-year program and first pop up in my M.A. courses. Making the situation even more complicated: the two-year M.A. students are often mid-career military officers who have first-hand international experience and are academically very talented.

These courses are seven weeks long. Previously I assigned an extensive list of readings, two writing prompts, and discussion board participation each week. I’ve replaced one of the weekly writing prompts with two Perusall assignments. I’m hoping that this change will help build a sense of community among the students, which is more difficult to achieve in an asynchronous online environment than in a physical classroom. At minimum, the use of Perusall should cause students to notice the superior skills of some of their classmates and stimulate them to increase their own efforts.

Op-Ed Writing Workshop

The Virginia USA chapter of the Scholars Strategy Network is sponsoring a free online op-ed writing workshop on Wednesday, October 20, 3:30 – 5:00 pm. The workshop is hands-on training for scholars who want to learn how to write and pitch compelling, research-based op-eds. Participants will learn how to craft a good lead, identify and incorporate timely news hooks, signal the author’s unique and relevant expertise, increase the likelihood of publication, and structure an op-ed for maximum impact. Participants are asked to come with an idea for an op-ed in mind; they will be guided through shaping their idea into a first draft.

Registration form and additional details are here.

Personal note: as the author of occasional op-eds for local and national publications, I know firsthand the benefits of being able to write for a non-academic audience.

10 Things Learned From 10 Years of Blogging

Today marks the tenth anniversary of this blog’s first post. Here’s a listicle of some of what I’ve learned from a decade of blogging, in no particular order:

1. Deadlines are useful, because work does expand to fill the time I make available for it.

2. Sloppy writing reveals sloppy thinking. Laying out my ideas as text invariably shows that there is plenty of room for improvement.

3. To communicate effectively, short and simple is better than long and complex. Unfortunately, most academics are trained for the latter. It’s one of the reasons political scientists don’t have the relevance they ought to have in public policy making.

4. I’m really, truly, a morning person. Writing is so much easier for me before lunch.

5. Being married to a colleague whose background is wildly different from my own is incredibly helpful. See the statement about sloppy writing above. That’s hers.

6. Though ALPS has existed for a decade and gets on average 7,000 page views per month, blogging has not brought me fame and fortune. Luckily I’m intrinsically motivated by how the blogging process improves my — here it is again — writing and thinking. But not everyone has the same motivations I do.

7. Blogging on a weekly basis allows me to evaluate my teaching far more effectively than I otherwise would — which helps me minimize some of the more unpleasant aspects of my job.

8. Adapting to new technology never gets easier, but learned helplessness is just not my thing. I will try to figure it out, and if I can’t, I’ll try to find someone who can. Unfortunately this seems to put me in a distinct minority in certain academic circles, given that technological adaptation rarely drives curricular design and delivery. Undergraduate programs in the humanities and social sciences will probably continue to wither as a result.

9. I do some of my best thinking when I alter my environment. On pre-dawn runs. In airport terminals. While sitting on a balcony in Belize. I should do more of this.

10. I’ve learned that one can’t be a prophet in one’s own land. Institutions can preserve the problem to which they have been the solution for only so long, and then they tend to catastrophically fail because of self-organized criticality or some other process. So often it’s easier to just watch the flood waters rise while sitting on high ground, glass of lemonade in hand.

Challenges in Using Policy Briefs for Assessment

Today we have a guest post from Simon Lightfoot, Professor of Politics and the Faculty of Social Science’s Pro-Dean for Student Education, University of Leeds. He can be contacted at S [dot] J [dot] Lightfoot [at] leeds [dot] ac [dot] uk.

In a recent article I reflected on my use of policy briefs as an assessment task in a final-year module, Politics of Aid. What surprised me as I was writing the article was how many expected and unexpected challenges arose from decisions I had made in the design of the module.

I expected there to be some student concern about a relatively novel assessment task introduced in the final year. To counter that, I initially encouraged, and then made obligatory, the submission of a draft. The unexpected challenge was that many students were unsure how to deal with drafts. There were always calls for more generic advice, despite the fact that each student was able to get tailored individual feedback on their draft.

I thought that students would welcome individual feedback but overlooked the fact that personalised feedback can be specific and very personal. Just as academics bemoan the infamous Reviewer 2 (who turns out to have a Twitter feed dedicated to their special brand of reviews), students experience feedback just as personally as we do. It became clear that we need to ensure that students know how to use feedback if drafts are to be beneficial, and that students need to understand that revision of one’s writing is a central part of the research process.

The drafting/redrafting issue has grown in significance now that the policy brief is 100% of the module assessment. Though intended to reduce the assessment burden on students, the change just raised the stakes: the brief became a one-strike assessment task, which caused some students to feel more pressured. At 1,500 words, the policy brief is very short compared to many other assessment tasks, yet it must demonstrate a high level of research, synthesis, and structure, which requires time. Convincing students to dedicate enough time to their writing proved to be another challenge. As Chagas-Bastos and Burges (2018) found, ‘it is consequently necessary to continuously emphasize the importance of revising and editing, actively encouraging students to deliberately think in terms of drafts’ if they are to produce good policy briefs.

Monday Morning Advice

A few items from elsewhere that might be of help for instructors who are new to teaching online:

First, here are five reasons not to use timed exams. In my opinion, these arguments apply whether one is teaching online or face-to-face.

Second, here are some very practical, easy-to-implement techniques for becoming a better writer — suitable for both professors and students.

Last, here are methods of identifying whether your employer’s financial viability has suddenly gotten much worse: editorials that I wrote in 2017 and 2019.

The Benefits of Rubrics

Today we have a guest post from Sarah E. James (sarah [dot] james [at] g [dot] harvard [dot] edu), Colin Brown (colin [dot] brown [at] northeastern [dot] edu), and George Soroka (soroka [at] fas [dot] harvard [dot] edu).

At the 2020 APSA Teaching and Learning Conference, we led a workshop on writing rubrics for the political science classroom. Rubrics may not be a cure-all for student writing, but we do believe that college faculty underestimate their benefits for teaching advanced writing skills. They can also be a powerful tool for developing a mindful teaching practice.

The use of rubrics is extremely common at the K-12 level (at least here in the U.S.), and there is considerable evidence to support their effectiveness at teaching beginning writing skills (Heidi Andrade’s work is a good starting point, for example, here, here, and here). There has been less evidence for their effectiveness at the university level, but the few existing studies point in the same general direction as the elementary ed studies:

  • Rubrics help students learn specific skills, and make them aware of their own learning processes.
  • Rubrics make grading more efficient, while simultaneously increasing student perceptions of how fair the grading is.
  • Rubrics help expose aspects of the “hidden curriculum” in our writing expectations and may help the most disadvantaged writers the most.

Our key takeaway for participants in our workshop: rubrics let you deal with subjective measures, but in a transparent way, and without being arbitrary. Generating a good rubric requires you to be clear about what you actually value and about what you expect students to be able to demonstrate. From the students’ side, this is a clear signal of where you want them to spend most of their time. From the instructor’s side, this is a good way to make sure that you’re following the adage of “if you didn’t teach it, don’t test it.” And when we think of the kind of genre-specific writing skills we demand of students, this sort of clarity can be extremely helpful for students who may “write well” in a general sense but have no experience with how things like evidence, counterarguments, citations, and literature reviews work in political science specifically.

Rubrics can only capture so much, and when you use one, you are limited to only a certain number of skills or aspects in your assessment. At our TLC workshop, the most common concern our participants had was, “what happens if we end up having to give a good grade to a bad paper?” This is a (small) risk, but we encouraged our participants to step back for a second and think about the rubric as a mindful teaching exercise. If a paper feels like it should get a lower grade than the rubric suggests, are there skills that should be included explicitly in the rubric? (They can be added next time!) If not, then what’s causing you to think the grade should be lower—and is it really something that should be entering into your assessment?

For those interested in designing their own rubrics, we provided examples for an introductory and an upper-level course, as well as a worksheet to help in setting one up. Our examples are designed to focus much more on discipline-specific skills (using evidence, critical thinking, professional norms) than on the quality of prose itself, and our instinct (tested to limited effect in our JPSE article) is that this is the most productive use of rubrics in the college-level classroom. But the structure of rubrics allows them to be adapted to the instructor’s aims, whatever they are, and they force the instructor to make those aims clear to themselves and to their students.

When Students Ignore Feedback

While I don’t comment on student writing nearly as much as some professors do, I expect students to at least read what I do write. A colleague recently pointed out that our Canvas LMS displays a date stamp at the top right of the SpeedGrader web page when a student has viewed a previously submitted assignment after an instructor has commented on it. I had never noticed this before, I guess because the date stamp’s font is rather small. Here it is, indicated by the red arrow:

This feature became very useful in a course in which students are required to write a series of memos that all have the same format. Last week, a student taking the course sent me this email:

I’m not sure what is expected from the assignments, my memo 3 was completely different from 2 yet your comment says see comments about memo 2. I am a second semester senior doing grad classes that had a 3.6 gpa last semester. Somehow I’m failing every single assignment in a freshman level class, while still attending every single class except one and participating in class basically every session. 

I looked at the student’s submissions for memos 1, 2, and 3 — no date stamp. My comments had been ignored. My reply to the student’s email:

The memo is a standard method of efficiently communicating information that is used in a wide variety of professional environments. I’m surprised you haven’t yet had much of an opportunity to practice this form of writing, so here is what I am willing to do: you can earn up to 10 points by emailing me by noon on Friday a memo that discusses how well you incorporated my feedback on your Memo 1, provided by my comments on your work on Canvas, into your Memo 2, and the same for Memo 3 in respect to my comments on your Memo 2.

Completion of my “extra credit opportunity” would have required the student to admit that he had not read my comments and thus ignored the feedback I had provided.

The student did not respond.

Design Fail or Attention Fail?

I recently graded a writing assignment for one of my courses and I’m wondering if it’s an example of “You can lead students to the education but you can’t make them learn.”

The instructions for the assignment:

You have been given the task of submitting a memo to the National Security Council that answers the following question:

  • Will Nigeria remain a single state or divide into separate states?

Download the memo template; use it to format your work. Turn the question above into a declarative sentence and use it as the memo’s executive summary. Write two subsections that support your argument using evidence taken from course readings. Each subsection should be a single paragraph. Reference sources with in-text parenthetical citations.

The information above was repeated in the memo template itself and in the rubric attached to the assignment. From my perspective, the assignment is totally straightforward and the assessment criteria are completely transparent. Yet . . .

Several students wrote memos on the question of “Should?” rather than the actual question of “Will?”

Many students also failed the “Turn the question above into a declarative sentence . . .” part. A few representative examples of what they submitted as an executive summary:

  • “In the current course, Nigeria will see a deterioration in multiple states because of the lack of agreement over these issues and the failure of the government to uphold a true democracy. Nigeria is a fledgling state on the verge of an inner collapse due to current trends.”
  • “The United States should help Nigeria dissolve into multiple sovereign states, by mediating the separation process. Nigeria is currently facing a slew of ethnic and social conflicts the country is made up of 36 states which legally sew divisions between regional outsiders and regional natives, this has sparked ethnic and religious conflicts.”
  • “The best path forward for Nigeria is to remain a single state. Splitting the nation up now would only be detrimental to Nigeria’s ability to control its sphere of influence across the African continent. Splitting Nigeria into multiple states would challenge the work that has gone on for years to make it politically equitable and would not account for the vast cultural differences of the nation.”

And so on.

I’m wondering how I should interpret what happened. Is this simply a case of failing to follow directions? If not, I don’t know how I can make things more obvious.

Modeling “Good” Writing

Today we have a guest post from Colin Brown, assistant teaching professor in the Department of Political Science at Northeastern University. He can be reached at colin [dot] brown [at] northeastern [dot] edu.

A few weeks ago, Shana Gadarian made a point on Twitter about explicitly teaching writing that I strongly endorsed. Getting our students to write better will, at minimum, make our job easier. If we’re lucky, it might actually promote better thinking.

For upper-level students, very open-ended prompts sometimes lead to really creative and insightful thinking—and a dissertation is of course the vaguest prompt of all. But our expectations often rest on implicit assumptions about what we think of as “good” political science. Amanda Rosen has written about this in the context of transparency. As she points out, telling students “write a five-page essay” doesn’t acknowledge that “essay” means different things in different fields, and many of our students will not yet know what it means in political science.

Clarity is critical for essay prompts, especially for introductory students. While long, detailed instructions might help point students toward what they should think about, students new to a field often don’t have the context to know what is most important in a long prompt. To them, any sentence with a question mark might appear to be equally important—causing them to focus on what we thought was a minor point and producing a disconnect between what we want to assess and what we actually assess when our implicit expectations aren’t met.

Here are what I think were a failed attempt and a relatively successful attempt to do this from my past semester’s intro comparative politics class. Students told me that while the instructions in the first were explicit, it was hard to know where to start and which parts to emphasize. With the latter prompt, they said it was clear not only what to do but also why they were doing it.

One question I’ve found to be a bit polarizing in talking with colleagues is whether to provide model papers or templates. Is it a better way to make our aims clear, or does it cause students to just parrot back the template? I’ve always found myself on the side of providing models. Graff and Birkenstein’s They Say, I Say has solidified my thinking about why. They identify common rhetorical devices that mirror the most common academic ways of thinking, which they turn into templates that, for example, force students to write down the most obvious counterarguments. Experienced writers have read enough in the field to be able to pick up on these techniques implicitly, but beginners by definition have not. Graff and Birkenstein argue, and I think rightly, that this goes beyond rhetoric to actually learning the ways of thinking. Some students may not learn how to think about the flaws in their own argument, or even that they need to, until they are forced to write them out with this kind of template.

In my own teaching, I’ve found it hard to explain in an abstract sense the need for clear writing over “beautiful” writing—and what students think is “beautiful” writing often feels cluttered and verbose to us. But when students see each other’s models and observe how much they actually understand from clear writing versus what they think is “good” writing, they start to self-diagnose their own excessive prose.

One challenge is that writing for politics requires some discipline-specific skills that might be more or less amenable to templates. Sarah James, George Soroka, and I have a forthcoming JPSE piece on adapting tools from K-12 and composition studies for political science writing. But defining what we actually want from “good” political science writing often seems to be more folk wisdom than anything clearly defined (Ian Anson’s 2017 article on meaning-making is a notable and valuable exception), so as part of showing students what we want, there’s room for us to make this clearer to ourselves.

Sarah, George, and I will be leading a workshop at APSA TLC 2020 on implementing rubrics to improve student writing: Friday, Feb. 7, at 4:15 pm.