Spring 2024: The Midterm Check-in

I enjoyed Cathy’s post from last week, in which she presented UCL’s approach of engaging students in dialogue rather than collecting “consumer feedback” at the end of the semester – aka the classic student evaluations. Earlier check-ins on our teaching are more useful than reviewing the game tapes over winter break: our future students benefit from that review, but our past students just had to sit with it. As the new semester rapidly approaches (for me, Monday), and as I look over my feedback from last semester, I can’t help but add one more item to my list of new semester resolutions: a midterm check-in with my students on what is working and what is not.


Generative AI changes teaching and learning: how to protect the integrity of assessment

This academic year, the UCL Centre for the Pedagogy of Politics (CPP) is hosting a series of online panel events. Our first event, on 30 October, was on the theme of ‘Using technology to teach politics’. In this guest post, one of the panellists at that event, Simon Sweeney (University of York), offers further reflections on the challenges higher education faces in embracing generative AI, where tools such as ChatGPT raise questions of authorship and have profound implications for assessment.

A few years ago, we were worrying about students using essay mills, a form of contract cheating that plagiarism detection software struggled to identify. The Covid-19 pandemic and the shift to online delivery coincided with a reported increase in academic dishonesty (AD). The arrival in late 2022 of generative artificial intelligence (GAI) chatbots like ChatGPT poses a further challenge to the integrity of assessment.

Universities realised that banning chatbots was not feasible, as AI has become an established feature of our lives and of graduate employment. As educators, we need to respond positively to the opportunities AI presents, recognising its benefits and assimilating it into teaching and learning practice.

This means developing strategies that accommodate students’ use of GAI while protecting assessment integrity.


Tools for Discussion: An Interpersonal Growth Toolkit

Today’s guest post is from Dr. Allison Anoll at Vanderbilt University!

Classroom discussion is a common pedagogical tool, but instructors and students alike can find themselves lost when it lacks structure. Equally problematic, standard approaches to grading participation that simply count how often students speak can (re)produce racial and gender disparities.

How can we, as instructors, lead effective discussions? How can we help students grow in their interpersonal skills while also ensuring classrooms are inclusive spaces?

In my small seminar classes, I use a tool for structuring and assessing participation that I call the Interpersonal Growth Toolkit. This tool relies heavily on a framework developed by Stephen Brookfield in The Skillful Teacher, where he argues that classroom discussion skills need to be taught just like any other learning goal. The Interpersonal Growth Toolkit provides students with learning goals for classroom discussion, tools for achieving these goals, and regular feedback from the instructor about their progress.

Here’s how it works. I start by identifying skills I want students to learn and practice in the discussion setting. In my classes, those are: 1) humility; 2) confidence; and 3) social science thinking. I provide students with a detailed list of tools for practicing each of these themes. For instance, to practice humility, students can:

  • Provide credit to classmates for ideas and inspiration by using their name in comments.
  • Ask the group for a moment of silence to slow the pace of discussion and allow you (and others) time to gather your thoughts.
  • Find a way to express appreciation for what is new, interesting, or challenging in the discussion. Be specific about what has helped you understand something new.
  • Make comments or ask questions that encourage others to elaborate on their ideas.

I then ask my students to identify which of the three areas – humility, confidence, or social science thinking – they are weakest in as discussion participants. My students then develop an individualized growth plan by identifying 2-3 skills in their area of weakness that they want to try out in class. They write these skills on an index card that I hand back to them at the beginning of each class for the first two weeks. At mid-term, they write a brief reflection on how they think they are doing in their area of growth and what they want to work on for the rest of the semester. Combining these reflections with my own notes, I prepare mid-term feedback for each student about their performance in the three areas and tools to try in the second half of the semester. Students receive a final grade for their participation based on their growth and performance in each of the three areas.

This approach immensely increases the quality of classroom discussions. Not only do students and the instructor have a clear sense of what everyone is supposed to be doing during discussion time, but students themselves can become advocates for inclusivity in the classroom. Students who lean towards dominating a discussion are asked to think about how they can use their skills to draw others in; students who are nervous about speaking up are given skills for finding their space and ways into a discussion. Using this technique, I have seen students grow enormously over the course of a semester, with quieter students getting bolder and more dominant students using their skills to build bridges between others’ ideas. In my seminar courses, it is the norm to hear from every student multiple times over the course of a discussion.

This approach and its assessment are well suited to relatively small classes (fewer than 20 students). In larger classes, it is more difficult to build a community of trust that fosters vulnerability and to find enough time for all students to practice their skills. However, the tool is suitable for any level of instruction: I use it with freshmen, seniors, and even new graduate students. Instructors should feel empowered to adapt the skills they want students to learn in discussion, as long as they also provide specific tools to complement those skills. Want students to practice intellectual curiosity as a core skill instead of humility? Develop a list of tools that students can practice in a discussion to build this skill. See chapter 8 of Brookfield’s The Skillful Teacher for more tools to consider.

To implement this assessment, remember to include a learning objective in your syllabus that highlights interpersonal growth as a key goal of the course. I use: “By the end of the course, you will be able to demonstrate growth in discussion techniques, including close listening, speaking with evidence, challenging with respect, and summarizing others’ contributions.”

Keeping track of student contributions and how they fit into each assessment category can be difficult to do on your own while also leading the discussion. When I use this tool, I hire an advanced undergraduate or graduate student to sit in class with me and keep track of student contributions. I use these notes to develop feedback and grades for my students.

You can find details about the Interpersonal Growth Toolkit here. The document also includes a rubric. (In general, I’m a big fan of rubrics, but that’s a topic for another day.) Happy discussing!

On rowing and teaching

In my younger days I was a very keen rower. I caught the bug at university, and it took about another 15 years to really shake it off (and, TBH, I don’t think I really have, to this day). Towards the end of my time in the sport I lost a load of weight and switched into coxing – becoming the short shouty guy in the stern of the boat with the mic and the rudder strings – and from there got involved in coaching. And that, come to think of it, was one of my first ventures into teaching.

Now, rowing is a weird sport, in that the ‘training-to-competing’ ratio is hugely skewed towards the former. A quick comparison to rugby helps here (and may reveal some deep-seated jealousy of / animosity towards the egg-chasers). I had friends who played rugby who’d train one or two nights a week and then play a fixture every weekend – and then, true to the stereotype, spend a good few days recovering from the hangover. In contrast, we, at the rowing club, would spend nine months of the year training six days a week, then race frantically for about six weeks, and then have a month off. 

What has this to do with teaching? Well, hear me out…

Teaching is a bit like coaching sports – or rather, for the time being, coaching sport is a bit like teaching. The coach of my friends’ rugby team would be able to run them through a particular drill, or practice a particular tactic, on a Wednesday evening, and then (provided the oafs had remembered said drill / tactic) watch it in action that weekend. A review would ensue, followed by some tinkering in training the following week. The point is: the coach had a regular flow of data from his team, about how well his teaching methods were working, how well the individuals and the team were executing what he had taught them, and how to calibrate all of this in order to get the best out of the team.

I, as a rowing coach, had no such luxury. We’d spend months and months training essentially in isolation, with very little contact with the opposition (apart from the occasional accidental race against the much-derided Club From The Other Side Of The River), and it would be ages before we could see whether the crew could hold their own in an actual race.

A while back it struck me that this is quite similar to our teaching-and-assessment model. In my department, and indeed in many other political science departments around the UK, the pattern is much the same: we spend eight to ten weeks delivering material and teaching skills, and then, after a briefing and a cheery ‘Good luck!’, we send our students off to write their essays or policy briefs, or to do whatever else it is we ask of them. Along the way, we have very little opportunity to see how well they’re learning the stuff we’re teaching them; and, for that matter, they have very little opportunity to find out how well they can do whatever skill it is we’re asking them to demonstrate.

(Before I get to the point of my piece, let me clarify something. There’s a conflation of ‘material’ and ‘skills’ here – and it’s deliberate. Back to my rowing analogy: I can explain to a rower, using a whiteboard and some video, how to perform a certain part of the stroke, and why it’s important that it’s done in a certain way, and that would constitute ‘material’. The rower could understand that ‘material’, but whether or not they can actually do that thing is another matter – and that’s what I mean by ‘skill’. TBH I’m not sure we’re clear enough with our students about this distinction, and I think we tend to assume our job is just to deliver material / transfer knowledge, and it’s their job to master the skills (subconsciously? From elsewhere? Who knows)).

There are a few fixes I can think of for this pickle. The first is to build in some form of continuous summative assessment, using spaced repetition and the frequent release of grades (and feedback) to help students learn. Although this makes me shiver and reminds me of my schooldays, I see the attraction: at least it gives us a much more constant flow of data about performance (sorry, learning), and allows us to calibrate our teaching accordingly.

The second is to structure the course around a portfolio of work, which is not assessed during the course, but from which students draw material when compiling their summative submission after the course has finished. This time, the ‘data flow’ comes in the form of on-the-go formative feedback, either from the teacher or from peers (or both, for that matter).

My approach harks shamelessly back to my rowing coaching, and goes like this.

Back in the day, we’d spend weeks and weeks practicing one minuscule aspect of the rowing stroke (think of those films of sprinters performing odd prancing drills in order to perfect their take-off). Then we’d switch and practice some other skill; and then, much later, we’d put them together, combine them, and have a crack at doing it at speed and under pressure. The point is that these ‘minuscule aspects’ were the disaggregated bits and pieces of the finished article: the individual rowing stroke, to be repeated several hundred times during the race; the start, the mid-race push, and the finish; the steering of the tricky bend; and so on. When coaching – sorry, teaching – the crew how to do each element, we’d explain in great detail where it sat in the overall scheme of things, and why it was important that it was done like this and not how The Crew From The Other Club do it, and so on.

And so it is with teaching political science. I take the final thing I want the students to do well at, and I think about all the sub-tasks behind that thing: scraping the literature, presenting and justifying a position, using and critiquing evidence, even taking notes. These are my ‘minuscule aspects’, which I then blend into the design of the seminar activities. We might, then, spend a seminar discussing a particular contention in the literature about a certain topic, where my aim is both to ‘convey material’ and to give the students a chance to learn, and refine, a particular skill – which in turn will feature in their eventual race (sorry, summative assignment). I make sure to explain how that nugget of material, and the accompanying skill, will be important further down the line.

Thinking about things this way has really helped me confront two interlocking challenges: how to handle a teaching-and-assessing pattern in which the latter is compressed into one instant at the end of term; and how to foreground the assessment without making the course all about the assessment (which, I fear, would just undermine the students’ deep learning of the material). One day I might summon up the nerve to screen a rowing race in the first week (Rio 2016 men’s four?) as I explain all this to the students…

Notes From a Conference of Damp Showers and Wet Snow*

Continuing on a theme . . . some notes on today’s pedagogical discussion at ISA 2023:

Teachers want to create an environment that facilitates learning and stimulates a spirit of curiosity. Students may have different expectations. As one session participant put it, students can have the purely transactional attitude of “I’m not going into debt so I can feel emancipated.”

In a similar vein, we talk about what students should get out of a college education, but we don’t ask what they bring to it; e.g., a K-12 education where the teacher was the sole authority in the classroom.

So we are frequently faced with a situation in which students don’t want to engage unpredictably with new knowledge because it makes them feel uncomfortable – a feeling they do their best to avoid.

To resolve this dilemma, students need to become familiar with tools for giving and receiving feedback productively so that they can learn from each other. They also need to learn how to articulate why they hold certain positions, why those positions are important to them, and what they mean when they state those positions.

During the conversation, I thought of a tweak to an assignment that might help with the above. As I have written previously, many of my students are unable to identify the author’s thesis, independent variables, and dependent variable in Perusall readings. I’m thinking of adding “What is a question about this article that you want answered?” to the assignments, with the stipulation that the answer needs to come from their classmates, not me. This could also be a way of getting students to design their own quiz questions.

*An allusion to 19th-century Russian literature, of which I am mostly ignorant — a known unknown that I am, at present, mostly comfortable with.

Benefits of Student Reflection

Today we have a guest post from Colin Brown, assistant teaching professor, and Jennifer Ostojski, Ph.D. candidate, from the political science department at Northeastern University. They can be contacted at colin [dot] brown [at] northeastern [dot] edu and ostojski [dot] j [at] northeastern [dot] edu.

This year we have had to adapt to the virtual environment the short, focused simulations that we like to use in the classroom to reinforce material. This adaptation has caused us to think more about the value of independent student reflection in relation to group debriefings.

Colin had previously developed a simulation of coalition-building in Germany (available here at APSA Educate) for introductory comparative politics, which had two main learning objectives: (1) gain familiarity with German political parties as an example of multipartism, and (2) understand that big, centrist parties can still exert a lot of agenda-setting power in sometimes-chaotic multiparty systems. A key part of the exercise is the bargaining that occurs as students walk around the physical classroom.

In Spring 2020, we switched to online teaching two weeks before Colin had scheduled the simulation in his course. He made it an optional extra-credit online exercise, in which about one-third of the class participated. In lieu of a debriefing, students submitted ungraded answers to three questions:

1. What did you find hardest about reaching a coalition agreement?

2. What new perspective does this give you on the German case in particular?

3. What might be some of the strengths and weaknesses of coalition governments, and how did those play out here?

We used slightly different online versions of the simulation in Fall 2020. In Colin’s course, students stayed muted/invisible and used the private chat function to communicate during simulation sessions. Jennifer’s larger class used breakout rooms, with students communicating behind the scenes via Zoom chat, a classroom Slack channel, and social media (which more directly reproduced the intentionally chaotic in-person discussions). Colin assigned students to parties right as the simulation began, while Jennifer provided students with their party roles beforehand.

Based on the written responses and discussions, students in our courses learned the central lessons of the simulation equally well in both formats, and as well as students had in the in-person format in prior years, despite the differences in communication methods and in the timing of role assignments. However, Colin’s Spring cohort seemed to demonstrate better knowledge of both the specifics of the German system and broader concepts about multipartism, whereas the students in our Fall courses displayed more learning of broad concepts than of specific details. We found it interesting that the Spring students seemed to pick up more details from the simulation despite it being, well, March 2020. Our hunch is that writing responses to the reflection questions caused students to spend some minimal amount of time and effort checking whether they were using the relevant concepts correctly. Although it is hard to rule out selection effects, engaging in independent reflection might benefit students’ learning whether the simulation is online or in person, even if it is not the most memorable or visible part of the exercise.

When Students Ignore Feedback

While I don’t comment on student writing nearly as much as some professors do, I expect students to at least read what I do write. A colleague recently pointed out that our Canvas LMS displays a date stamp at the top right of the Speedgrader web page when a student has viewed a previously submitted assignment after an instructor has commented on it. I had never noticed this before, I guess because the date stamp’s font is rather small. Here it is, indicated by the red arrow:

This feature became very useful in a course in which students are required to write a series of memos that all have the same format. Last week, a student taking the course sent me this email:

I’m not sure what is expected from the assignments, my memo 3 was completely different from 2 yet your comment says see comments about memo 2. I am a second semester senior doing grad classes that had a 3.6 gpa last semester. Somehow I’m failing every single assignment in a freshman level class, while still attending every single class except one and participating in class basically every session. 

I looked at the student’s submissions for memos 1, 2, and 3 — no date stamp. My comments had been ignored. My reply to the student’s email:

The memo is a standard method of efficiently communicating information that is used in a wide variety of professional environments. I’m surprised you haven’t yet had much of an opportunity to practice this form of writing, so here is what I am willing to do: you can earn up to 10 points by emailing me by noon on Friday a memo that discusses how well you incorporated my feedback on your Memo 1, provided by my comments on your work on Canvas, into your Memo 2, and the same for Memo 3 in respect to my comments on your Memo 2.

Completion of my “extra credit opportunity” would have required the student to admit that he had not read my comments and thus ignored the feedback I had provided.

The student did not respond.

Continue: Fall 2017 Edition

I will continue using the somewhat tried-and-true knowledge plan and quality of failure essays, but I am going to modify them yet again. I want these assignments to push students toward the realization that they need to take responsibility for their learning by evaluating how and why it happens, instead of assuming that they can displace this task entirely onto me. However, I still see a large portion of students responding to these meta-cognitive prompts without much thought, as if they are following a recipe in a cookbook.

So I have condensed the questions that I ask in these assignments even further, making them more open-ended, in the hope that it will force students to exert more effort in examining their own attitudes and behaviors.

The knowledge plan prompt now reads as:

Plan for this course by writing a 2-3 page essay (double-spaced, equivalent to 11 or 12 point font) that answers these questions: 

  • What do I want to get out of this course?
  • What strategies will help me achieve these goals?

The quality of failure prompt now says:

Read:

Write a 2-3 page essay that analyzes the following:

  • What helped or hindered your learning during the semester?
  • Are your experiences similar to those of Robert J. Moore and Soledad O’Brien? Why or why not?

The ABCs of Mentoring

As an increasingly senior (i.e., “older”) faculty member, I find that professional development efforts – whether my own or those intended for others – occupy more of my time and attention than they used to. People here began a more formalized mentoring program for junior faculty about a year ago, and recently I was one of the people called upon to dispense wisdom about teaching to some of our recent hires.

Instead of just talking at them, I decided to demonstrate some teaching via active learning by using Simon’s ABC exercise: I asked the mentees to write down on Post-It notes what they wanted to abandon, begin, and continue about their own teaching and then stick their notes on the wall. Here is a compilation of the results: Continue reading “The ABCs of Mentoring”