ChatGPT and Specifications Grading

Unsophisticated use of ChatGPT tends to produce generically poor essays: repetitive structure, little analysis, and stilted prose. Whether it's identifiable as AI or not, the reality is that an essay written that way is likely to get a poor grade. When you receive a poorly written essay in which you suspect AI use, there are two typical paths:

  1. Pursue it as a case of suspected misconduct. You might run the essay through a detector to check for AI use, or ask the student to submit evidence of the work as it progressed through revisions. Detectors are notorious for producing false positives, though, and students who were acting in good faith (but simply have poor writing skills) can get caught up in this.
  2. Ignore the suspected use and just grade the essay accordingly. The essay is likely to get a C, as Devon Cantwell-Chavez pointed out in a recent tweet, so how much energy do you want to spend trying to catch users out when the results are poor?
[Embedded tweet: Devon Cantwell-Chavez, February 13, 2024, on her approach to grading assignments where ChatGPT use is suspected.]

To this I wish to add a third path: use specifications grading. 

Continue reading “ChatGPT and Specifications Grading”

The Mercy and Consequences of Assignment Extensions

I’ll admit it: I’m a softy when it comes to granting assignment extensions. Yes, my syllabus includes the standard boilerplate that papers will randomly self-combust for each day they’re late, but in reality I offer this in-class guidance: I will grant extensions if students request them in person or via a Zoom or Teams meeting, not by email. And by “request,” I mean they must explain exactly why they need an extension. I’ll even accept ridiculous reasons if they’re honest with me (think admitting they turned 21 and made poor life choices). But I offer the following advice at the beginning of my course and reiterate it during these student meetings:

  • I ask, “You realize that you’ve known about the assignment since the first day of class?” Thankfully, I’m batting 100% on “yes” responses.
  • I then show them my OneNote project tracker, further broken down into subtasks and due dates. Indeed, my tracker is my browser homepage, so I can’t escape its caress. It includes research projects, entertainment gaming projects, travel planning, a parking lot for unsorted tasks lacking fixed due dates, and every semester course broken down into topics, readings, and upcoming assignments, plus more checklists for building those assignments. The students’ eyes are saucers at this point.
  • I don’t suggest that they follow my plan exactly; rather, I suggest that they simply have a plan, one that works for them, preferably in a format that’s easy to update once created. I then tell a few true stories about times when procrastinating and failing to plan cost me big, including one event that could’ve completely changed my entire life trajectory if my own mentor at the time hadn’t offered me the same advice.
  • I then note that their extension affects my and/or my teaching assistant’s planning. One student? Eh, no biggie. Multiple? Now we’re pouring their extensions like sand into our scheduling rock jars. I want to be fair and timely in grading, but I make clear that I don’t expect my TAs to burn their schedules accommodating student extensions.
  • Finally, I advise that, although I’ll grant the extension like the squishy teddy bear I am, my other major concern is that I’m extending the assignment solely within the context of my course. I have no idea what other courses they’re taking, nor do I have insight into those courses’ assignments. What happens if I grant an extension and it interferes with their other assignments, leading to a snowball effect much larger than my assignment? I’ve seen it happen: one extension leads to a student falling badly behind in their other classes.

So far, I judge this method a worthy compromise. On balance, it isn’t that many students, and they at least comment that they appreciate the mentoring (and one student, to my knowledge, adopted my planning format).

That said, I’m curious to read your thoughts in the comments!

Podcasting in Class: Course planning for Spring 2024

As the semester winds down and final grading is in progress, I am looking ahead to the Spring 2024 semester, when I will be teaching International Relations & Popular Culture for the first time. It is a nerdy niche interest of mine, but I also think the field of popular culture is expanding, gaining ground, and offering something relatable to our students. So, I guess: be prepared to see more of that type of content in the new year.

I have decided to incorporate a semester-long podcasting project as the students’ main research project, in which they will produce a public-facing piece of research. I imagine this will be the first time most of my students have engaged with such a project. But it is also a new assignment for me, and I am both excited and wondering what hurdles I haven’t thought of yet. I am building on a previous guest post by John McMahon (2021) and his APSA Educate resource. Below, I have outlined the overall premise. What wisdom do you have to turn this project into something that can rival Joe Rogan’s podcasting dominance?

Continue reading “Podcasting in Class: Course planning for Spring 2024”

Project Citizen: Building Citizenship Skills in an Introductory American Government Course

Today’s guest post comes from Dr. Brooklyn Walker at Hutchinson Community College, Kansas!


This semester, on the first day of class, I asked my introductory American Government class to generate a list, in teams, of how they come across American government and politics. They couldn’t think of a single example. I concluded that many of my students haven’t thought about themselves in terms of politics. They lack political efficacy. They are turned off by polarization and negative affect. And they don’t notice the role government and politics play in their everyday lives. 

But years after they graduate, I want my students to be aware of their political environment and equipped to engage with it. I wasn’t convinced that exposing students to interesting information or ideas would address the problems I was seeing. Instead, I wanted to help my students learn about themselves as citizens, develop citizenship skills, and see government in action. I developed Project Citizen to advance these three goals and to create a bridge for my students between the classroom and the ‘real world.’

Project Citizen is the overarching title for a set of five assignments distributed throughout the semester. It comprises approximately a third of the semester’s total points, signaling that civic engagement is a priority in the class.

Students begin the semester by writing a brief introductory essay (400-500 words). In it, they reflect on a class reading that makes the case for civic engagement and describe what an ideal citizen does and knows. They then detail their own civic engagement and the hurdles they face in becoming the ideal citizen they described. This essay forms the foundation for the remainder of the projects and marks the baseline of each student’s civic engagement.

Students then select three projects from a menu, each of which results in a 500-word essay. Each prompt is linked to a topic we cover in class, and the prompts are intended to advance the three main goals. Giving students choices is a core feature of Project Citizen. Some of the prompts may be triggering for students: for example, a Black male student in my class asked if he had to talk to a law enforcement officer, and another student with social anxiety was worried about contacting a local civil rights group. Project Citizen encourages students to choose prompts that take them out of their comfort zone but gives them space to avoid prompts that they feel could be harmful. Finally, choice promotes equity: many students, especially those with lower socioeconomic status, do not have easy access to someone who’s running for office or reliable transportation to civic meeting spaces.

| Problem | Goal | Relevant Prompts (course topic in parentheses) |
| --- | --- | --- |
| “Don’t know who they are as citizens” | Learn about self as citizen | Take a survey to identify your party identification and ideology (Public Opinion); make your voting plan (Participation); develop a media diet plan after comparing news articles (Media) |
| “No efficacy, intimidated by polarization” | Practice political skills, including political discussions | Talk to a police officer about the role of civil liberties in their work (Civil Liberties); talk to a local civil rights group about their work (Civil Rights); interview someone who’s run for office about the role of money in politics (Elections); complete an action recommended by an interest group (Interest Groups); contact a member of Congress (Congress) |
| “Don’t know how government actually pops up in their lives” | See politics / government in action | Attend a local meeting (Constitution and Federalism); identify party linkage strategies via party communications (Political Parties); analyze presidential communications via inaugural addresses (Presidency); observe a courtroom (Judiciary); evaluate bureaucracies after speaking with a recipient of state or federal services (Bureaucracy) |

Finally, at the end of the semester, students revisit their initial essay. They annotate it with eight comments, reflecting on what they learned from class readings, lectures, discussions, and their projects. Their comments can introduce new information or examples to support or refute their initial points, reflect on how they have changed over the semester, or describe their next steps for developing their citizenship skills.

While most of the students’ Project Citizen work occurs independently, projects are woven into class periods. I mention project prompts during lectures, pointing out information that may be relevant or questions the project may help them answer. Students learn from each other during a peer review session (for which they also earn Project Citizen points). I print copies of the assignment rubric and students provide feedback on two peers’ essays, using the rubric as a framework. After projects are submitted, we dedicate class time to discussing student experiences and connecting those experiences to course material.  

Some semesters I’ve integrated badges, which are essentially extra credit opportunities, into Project Citizen. Badges encourage students to take more ownership of their learning experience. Some of the badges I’ve used include:

  • Jack-of-all-Trades Badge: complete one project for each of the three goals 
  • Design Your Own Adventure Badge: create your own project prompt (in consultation with me)
  • 10,000 Foot View Badge: complete a meta-learning worksheet about a project
  • Second Chances Badge: revise and resubmit a project
  • Collaborator Badge: complete a Project Learning prompt with a co-author

Ultimately, students have reported positive experiences with Project Citizen. One student said that Project Citizen “let me build up my ideas about who I am and about my beliefs. I got to explore what my point of view is and I learned things about myself I never knew.” Another appreciated seeing the government in action. After attending a local meeting, they commented that, “Normally people wouldn’t go out of their way, but Project Citizen gave me exposure to see how my community is run.” 

Project Citizen is constantly evolving, so I look forward to your reflections and comments.  

Is it in the assessment criteria?

I zoomed into an excellent QAA event this week on implementing racially inclusive practice in assessment, based on a project at the University of Leicester in partnership with Birmingham City University and the University of Wolverhampton. I’d very much recommend that you have a good look at their report in detail. The take-home for me was that whilst an inclusive or decolonised curriculum and role models are incredibly important for engagement and for inspiring students, particularly racially minoritised students, if you want to tackle race awarding gaps, the solution is pedagogical.

Their approach is deceptively simple: they focused on making the hidden curriculum visible for all students and the tacit explicit, ensuring that students understand exactly what they have to do to succeed, with no guessing games and clear documentation of what is required, and that all assessment criteria are clearly and transparently explained, with examples of what good or less good work against those criteria would look like. One of the staff who had implemented the intervention said, very disarmingly, that he felt a bit embarrassed that he and his colleagues hadn’t been doing this already! He also said that although there was some initial resistance because of worries about ‘spoonfeeding’, the improvement he saw in the students’ work and in the way they engaged allayed most of those fears. They found that by doing this, they could significantly reduce awarding gaps, improve student enjoyment and confidence, and also improve staff experience of teaching and assessing!

There is a lot to learn from in the report. Personally, I’ve thought a lot about assessment criteria over the years, in an attempt to be inclusive, yes, but also because I wanted to communicate to students what I wanted them to do, so they would learn better and I could read better work when assessing. As a less experienced teacher, I realised that I was marking work down for not doing things that I had never taught or told the students to do, which offended my sense of justice. But I knew I did want the students to do those things (such as make a coherent argument, evaluate evidence, use examples, write for an audience, use appropriate referencing), so it got me thinking about how I might teach them in the context of a disciplinary and substantive module. I came to the conclusion that having transparent criteria, and spending some time making sure that everyone understands them, would help me communicate what skills I wanted to see and how students might develop them. It turns out to be a practice that serves all students: not just those who have been previously disadvantaged, but also the ones who keep doing pretty well but don’t know why.

As we know, tutors are often looking for different things in their students’ work, so it usually doesn’t work in a discipline like ours to have generic or departmental criteria. It is an incredibly useful exercise for you, as a tutor, to sit down and write out what it is you are looking for in students’ work; doing so clarifies your expectations and helps you think about what and how you will teach. When team-teaching, working with other tutors to clarify not only what the assessment criteria are but also what they mean in practice is extremely useful for making sure that teaching and marking are fair and consistent. And working with students to help them understand marking criteria doesn’t so much help them ‘tick the right boxes’ in a spoon-fed way as, much more importantly, understand what skills they are learning and why.

For my current module, the assessment is a portfolio, and the assessment criteria are as follows (although I do allow students to negotiate them, which I won’t dwell on here but will come back to another day):

  • Depth of understanding of how politics and power are shaped by, and shape, the natural world
  • Ability to weave together ideas from the module into your own coherent text
  • Depth and originality of critical evaluation of your own relationship with the natural world
  • Ability to argue for your perspective on how nature should be governed or cared for, by whom and in what ways, including use of reasons and evidence
  • Appropriate selection of multimedia on the portfolio
  • Ability to write appropriately for a particular audience (please specify: e.g. visitors to an exhibition, policy-makers, everyday readers of narrative non-fiction)
  • Creativity of your work on the portfolio
  • Evidence of learning and development over time in the module
  • Depth of critical engagement with the module materials and readings
  • Extent of additional research and further reading
  • Craft of writing, including readability, spelling and grammar
  • Accuracy of bibliographic materials

I like the approach of starting each criterion with a noun phrase like ‘depth of’ or ‘ability to’, because it signals that these are skills one can be better or worse at in a qualitative sense. Thus, this is not a box-ticking exercise for students but rather an invitation to engage in deep and dialogical reflection on what, for example, the ‘ability to argue’ or ‘appropriate selection of multimedia’ really looks like in practice.

It’s very important not to stop with listing the assessment criteria, of course, but rather to make them the centre of an ongoing conversation. Here is my top tip: every time a student asks a question about the assessment, or about what ‘good work’ might look like, I bring it back to the assessment criteria. So, let’s say they ask, ‘Does my portfolio need to be consistent week by week?’ I will say, ‘Is that in the assessment criteria? No. So, I won’t be looking for that. If it’s something you want to learn, that is, how to create your own consistent style, that’s great: you can do so and add it to the assessment criteria for your self-assessment. But it’s not necessary from my point of view.’


Or let’s say they ask, ‘Can my writing be more personal?’ I will say, ‘Is it in the assessment criteria?’ This is a longer conversation – the answer is, yes, I am asking them to give an account of their relationship with the natural world, so more personal writing in the first person is clearly appropriate. However, if they are using part of their portfolio to write for policy-makers, this can lead to a deeper conversation about what sort of writing, evidence and argument a policy-maker might be interested in. Distinguishing these different crafts of writing and talking about when they are appropriate, or not, is much more useful for learning than just prohibiting one of them without explaining why.

Other ways of getting students to engage deeply with the assessment criteria might include:

  1. Guided marking exercises where students mark examples of work with reference to the assessment criteria. Your aim here is to get them to focus on the criteria and not make the sorts of vague comments (‘this was not well structured’) that they have probably experienced themselves at times.
  2. Peer feedback where the focus is on giving each other feedback according to one or more of the assessment criteria.
  3. Formative feedback from the tutor where they have to tell you which criteria they want feedback on. (I have a form and they can’t have their feedback unless they tell me which criteria they are particularly interested in.)
  4. Self-assessment where students have to tell you how well they met the criteria, and where they could have done better.
  5. Any other discussion, with examples, of the criteria and what they mean, preferably iterative, so students can improve over time.

Summative feedback should also, of course, refer constantly and closely to the assessment criteria. But by that point, this is just an exercise in demonstrating that you could be trusted to do what you said you were going to do. To return to the QAA discussion on racially inclusive criteria, the return of summative work should not be an opportunity to say: ‘Ta-DAH! This is what you should have done.’ What the students should have done should be clear right from the get-go, or else how can they learn how to do it?

The Joy of Asking for Help: Getting students to read (anything?)

I found myself in an all-too-familiar situation this week: my students didn’t read the assigned readings. In my opinion, I had set up the most fascinating set of readings to address an important issue in one of my classes. Everyone should want to gobble that knowledge up, said my hubris. But nada. Maybe a handful had read; the rest of the class became experts at looking straight ahead or down at their screens. I am not rediscovering the wheel with this not-reading problem, but the wheel certainly ran me over this week.

[Image credit: Mercy Pilkington (Good E Reader)]

I left the classroom after the lecture wondering how I had created this environment, and how I could pivot away from it midway through our semester. Frustrated with myself, but knowing that something had to change, I reached out for help on social media and in real life to people in academia. I was in awe of the number of helpful responses I received. Although I did not plan to write about it for ALPS (see Chad’s earlier work on that here), I decided to at least put on the record a collection of thoughts and ideas for others who might find themselves in a similar situation at any point in their academic lives.

  1. Clarify for yourself and the students why we are reading (anything); what the purpose of the readings is; how they aid us in our learning process; and, importantly, do not assume that a one-time explanation covers that. Frequent reminders are helpful and necessary to the learning environment.
    • Here, I also received some online and offline advice about having a session at the beginning to demonstrate how to read articles. Folks have developed different techniques on how to guide students through a sample reading, including developing key questions students should keep in mind when reading (as guiding posts).
  2. I noticed an interesting debate about the use of reading quizzes or other grading mechanisms tied to whether students do the readings (or not).
    • I am not in favor of reading quizzes. For one, selfishly I do not want to have more grading work throughout the semester, and I am not sure how effective these quizzes are in motivating the students to read for understanding. And two, I do not want to create this sort of potentially punitive environment in our shared learning space.
  3. Nonetheless, there were some interesting grading mechanisms for readings:
      • Several people recommended Perusall, which encourages a more communal reading practice in which students engage with one another and can annotate readings. I am certainly not sponsored by them, and it also depends on whether your institution has a subscription to the service, but I like the idea. (See similar thoughts on CritiqueIT by Dr. Colin Brown at ALPS.)
      • Some faculty structure their entire lesson plan around students’ reading reactions (required prior to class), focusing on things students did not understand or want to know more about. My planning anxiety stands in the way of this method.
  4. Similarly, the idea of cold calling was brought up. I have fallen prey to this temptation in the classroom, but it does not really solve the reading issue, at least for me; then I am just embarrassing folks in the classroom (if they haven’t done the readings).
    • Folks have suggested tools such as Menti, PollEv, or JamBoard (even though the last is being phased out). These are interactive online boards that you can project in the classroom: you pose a question about the reading, and students can (anonymously) respond. The collected answers, as well as the anonymity, can overcome social anxiety and the fear of saying “something wrong”.
      • Anecdotally, I already tried a PollEv exercise in one of my classes this week, and I received responses from approximately two-thirds of the class, versus the usual 2-3 hands that shoot up when I start talking about the readings. I did not ask specific questions but rather focused on what stood out to them in the reading and whether they had any questions (as the mind-hive suggested). Given my mid-semester pivot, I will stick with that for now.

The beauty of having a problem with anything in this day and age is that none of us is special or unique enough to be encountering it for the first time in all of human history. I realized relatively quickly that my problem is not just “why aren’t they reading” but, more importantly, “what can I (!) do about this to improve our shared learning space”. And I think that is a better motivator than frustration. Reaching out and asking for help was probably the best way to handle this problem. Aside from the fact that people provided advice and techniques, what stood out to me was that there are folks who care, and that they care enough to help me out. Thank you!

Syllabus Quiz In Another Form: Annotation

This idea comes from Matt Reed at Inside Higher Ed, who in turn got it from Emily M. Farris at Texas Christian University: have students annotate, in ABC fashion, your course syllabus at the end of the semester.

I’m going to go a few steps further for the upcoming semester:

First, instead of my usual quiz on the syllabus at the beginning of the semester, I’ll have students annotate it on Perusall in response to my questions. The assignment should function as a close-reading exercise, but it will be machine-graded by Perusall.

Second, I’ll create a quiz on the Canvas LMS that will force students to explore the contents of the course’s Canvas shell. It has become apparent that most students only pay attention to the LMS’s “To Do” list of impending assignment deadlines that pops up on their phones; they ignore everything else I put into my course shells, including the “How to Get an A” advice. As with the Perusall assignment on the syllabus, the quiz will be machine-graded by Canvas.

Third, I’ll create another Perusall assignment on the syllabus for the end of the semester, to get feedback on what worked and what didn’t, and to remind students of course learning outcomes.

Assignments, Platforms, and AI – Part 2

The follow-up to my last post: a new assignment that I’m calling, not very creatively, the argument analysis. Here are the directions to students:

Choose one of the peer-reviewed journal articles listed in the syllabus. Find an editorial published in the last year in one of the sources listed below that is about the same general subject as the article. List the bibliographic information for the article and the editorial at the top. Then, in only four paragraphs, compare them according to the criteria below. Put the paragraphs in the indicated order and keep each paragraph under 200 words.

Which author: 

1. References the most comprehensive and relevant data? Why?

2. Infers the most valid relationship between cause and effect? Why?

3. Does the best job of refuting counter-arguments? Why?

4. Is the most persuasive to an audience of policymakers? Why?

I then provide a list of online news outlets that students can pull an editorial from.

Possible advantages of this over my old article analysis? First, the compare-and-contrast element forces students to engage in more complex thinking; with the article analysis, students sometimes focused too heavily on summarizing. Second, students engage with a recently published argument aimed at a general audience. Academic journal articles are written for a very narrow audience of specialists, not the people most students will be communicating with after they graduate. Also, most journals whose contents are indexed in databases have moving walls that make their most recent issues inaccessible to students. Third, I’m hoping that students will be able to connect what they write about in the argument analysis to discussion topics, the reading responses, and maybe even potential future dissertation topics.

Even though the argument analysis is not machine-graded like the Perusall assignments, I decided to simplify my life with a new rubric. My rubric for the old article analysis:

[Image: rubric for the old article analysis]

The rubric for the new argument analysis:

[Image: rubric for the new argument analysis]

The new rubric has fewer boxes to check, so it’s easier for me to use, but its criteria still hit my targets for the assignment.

Assignments, Platforms, and AI – Part 1

The first in a short series of posts on leveraging new technologies to alleviate boredom . . .

After fourteen years, I have decided to abandon the manually graded journal article analysis assignment in my graduate courses. I have integrated Perusall into all of the graduate courses that I teach, and the prompts for my Perusall assignments and the article analysis were the same. While repetition might be the mother of all learning, I’m not very maternal, and this seemed like overkill. Also, while student writing in Perusall assignments is, at least potentially, a conversation with other students, the article analysis was a conversation with just one other person: me. Not very authentic. So the article analysis went into the trash bin. I wanted to replace it with something new and more interesting, for both me and my students. I’ll write about what that new thing is in my next post.

For now, I want to focus on the idea of using machine-graded assignments to make teaching less burdensome for the instructor and more interesting for students. Pre-Perusall, each of my graduate courses consisted of one discussion and two reading responses per week, the article analysis, and a final exam: 23 assessments in total. Now my courses have one discussion and one reading response per week, two Perusall assignments per week, the new yet-to-be-described assignment, and a final exam. Notice that, because the Perusall assignments are machine-graded, I’ve reduced my grading burden by almost a third while increasing student-to-student interaction.

Engaging Students Through Collaborative Research Projects

Today we have a guest post from Rebecca A. Glazier at the School of Public Affairs at the University of Arkansas at Little Rock (rebecca [dot] glazier [at] gmail [dot] com) and Matthew Pietryka at Florida State University’s political science department (mpietryka [at] fsu [dot] edu).


Many professors are struggling to engage their students, who are often burned out and checked out. To address these issues and improve student retention, universities are increasingly turning to edtech solutions and big data: everything from predictive analytics to chatbots in discussion boards. These remedies tend to be far removed from students’ daily lives. In contrast, as professors, we are with students in the classroom every day, and that experience often prepares us to know best how to engage them.


In a new, open-access article we just published in Education Sciences, “Learning through Collaborative Data Projects: Engaging Students and Building Rapport,” we illustrate how faculty can engage students through collaborative data projects. Rather than relying on top-down university solutions, faculty can use the content of their own courses to involve students in collaborative projects that build rapport and make them feel included and engaged in the course. We see these collaborative data projects as another kind of active learning: getting students thinking beyond the textbook and contributing to a project that is bigger than themselves.

We used data from more than 120 students over two semesters, and our results suggest that most students find these collaborative data projects more enjoyable than typical college assignments. Students also report that the projects make them feel the professor is invested in their learning.

The article provides advice on implementing these projects, as well as the R code we used to create individualized reports for the students participating in the collaborative data projects. The individualized reports help develop rapport between the professor and each student, and this programmatic approach allows professors to scale the reports up to classes with hundreds of students. Building rapport and doing active learning are often considered possible only in smaller classes, but our approach demonstrates how they can be done in large classes as well, with significantly positive results.
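To make the “programmatic approach” concrete, here is a minimal sketch of the general pattern, written in Python rather than the R the authors used; the file name, column names, and report wording are hypothetical illustrations, not the authors’ actual code (which accompanies the open-access article):

```python
# A rough sketch of programmatically generated individualized reports.
# NOTE: This is an illustration in Python, not the authors' R code; the
# file "class_survey.csv" and its columns are hypothetical.

import csv
from pathlib import Path
from statistics import mean

def generate_reports(csv_path: str, out_dir: str) -> None:
    """Write one plain-text report per student, comparing that student's
    numeric survey responses with the class averages."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Treat every column except "name" as a numeric survey item.
    items = [col for col in rows[0] if col != "name"]
    class_means = {item: mean(float(r[item]) for r in rows) for item in items}

    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for r in rows:
        lines = [f"Individualized report for {r['name']}", ""]
        for item in items:
            own, avg = float(r[item]), class_means[item]
            position = "above" if own > avg else "at or below"
            lines.append(f"- {item}: you answered {own:g}; the class average "
                         f"was {avg:.2f}, so you are {position} the average.")
        # One file per student, ready to distribute through the LMS.
        (out / f"{r['name']}_report.txt").write_text("\n".join(lines),
                                                     encoding="utf-8")

if __name__ == "__main__":
    generate_reports("class_survey.csv", "reports")
```

The point is less the language than the pattern: once each report is generated from the class data rather than written by hand, class size stops being the constraint on individualized feedback.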

At a time when many faculty members are struggling to engage students, we can take matters into our own hands by designing projects for our classes that draw students in and build rapport with them. It doesn’t take expensive edtech solutions or top-down directives. Mostly, it takes thoughtful pedagogy and prioritizing student connection.

Open Access article link: https://www.mdpi.com/2227-7102/12/12/897.

Recent episode on the Teaching in Higher Ed Podcast on this research: https://teachinginhighered.com/podcast/engaging-students-through-collaborative-research-projects/.