Explain and justify

Students often think that there is a ‘right’ or a ‘wrong’ approach to every aspect of an assignment. This belief can lead to intense anxiety during assessment periods.

There are, of course, many areas in which this is correct. There are things that are simply ‘right’ or ‘wrong’, depending on the context and form of the assessment. If a department insists that a student submit with Harvard-style referencing, for example, it’s good practice for them to cite in Harvard, rather than in footnotes.

However, students often take these concerns to a degree that limits their potential for going out and exploring exciting and original cases or approaches, as they feel the need to double-check every detail before committing – “is it right to use China as a case study for this essay question?”, “is it suitable to use liberal intergovernmentalism as a theory in my essay?”, and so on. To students who are worrying about whether their approach, case, or method is the ‘right’ one to employ in completing an assessment, I usually reply with the simple advice to ‘explain and justify’.


Guest Post: Feminist Pedagogy within Constraints: Teaching Reflective Writing in a UK Higher Education Institution

Dr Cherry Miller
Dr Jenny Morrison

Today’s guest post is by Cherry Miller from University of Helsinki and Jenny Morrison from University of Glasgow.

Assessments are a core area for feminist teaching. Traditional assessments such as the essay or exam can reinforce gendered or other hierarchies in education by favouring ‘masculine’ forms of learning that prioritise dispassionate, objective expression. Thus, feminists have called for greater diversity of assessments in general, and for assessments that value the personal alongside the ‘objective’ political in particular. While feminists engage with a range of assessments, the reflective journal (RJ) has become commonplace on feminist courses.

Nonetheless, research cautions against the assumption that ‘alternative’ assessment automatically fosters more effective learning and teaching. Rather, all academic assessments include implicit expectations and exist within the constraints of the academy. That the reflective journal remains a less familiar – albeit growing – format in higher education means there can be greater uncertainty regarding the expectations of such an assessment. Therefore, we believe the drive to diversify assessments leads to certain tensions for feminist pedagogy.


Pondering the future of political science education


Guest post by Dr. Jeremy F. G. Moulton at the University of York!

As political scientists, we often teach students not to speculate about the future in their work and that they should, instead, champion critical analysis based on verifiable trends.

However, I have recently been doing a fair amount of speculation. Specifically, I have been wondering what the future of teaching and learning in political science will look like – a topic that greatly intrigues me and on which I’m eager to hear others’ thoughts.

This came about because I have recently been involved in the work of the Faculty of Social Sciences’ ‘Faculty Futures Group’ at York. The group seeks to get away from the short-term reactivity that we can all be guilty of in Higher Education: focusing on the next week, the next semester, maybe, at a push, the next year, but rarely the proper “long term”. The Faculty Futures Group, therefore, has the mission to ‘Scan the horizon to identify important new directions for social science education and research in the medium to long-term (next 5-15 years)’.

Whilst the project focuses on the Social Sciences as a whole, I have, of course, been thinking of the future of political science. And, as is fitting with this blog, I have been thinking about the future of political science’s approach to teaching and learning. 

There are a few themes that have struck me as particularly pressing and interesting; themes in which I can most readily imagine significant change in the coming years. Two are at the forefront of my mind:

The first is assessment. When I undertook my undergraduate degree in Politics, the exam, the coursework essay, and the dissertation were the only assessment formats. This was not an uncommon experience at all, and still today, these feel like the dominant forms of assessment in our field.

However, it seems that there is some significant change happening now and that there is more on the horizon. Experiments with pass/fail rather than graded assessment, with assessment optionality, and with inclusive assessment design are all big themes that I can see really shaping how we assess political science courses in the coming years.

Course-level innovations, tailored to the subject matter or to hoped-for employment outcomes, could be a promising path for future innovation. I have also been thinking about what innovations could be made to champion the potential for fun or enjoyment in the assessment process. It probably won’t be a shock to anyone that, as an academic, I quite like the essay format. But I also totally get that many students might not love it and that we could occasionally vary our approaches in a way that makes students more enthused when it comes round to assessment time. For example, for a while now, I have been considering the potential of using creative writing in our classrooms and expanding that to assessment.

The second is AI. To be honest, I am a bit skeptical when anyone is a bit too enthused about AI and the future of education. This piece from Smolansky et al. has solidified my thoughts that some can be too keen on adopting the use of AI in assessments, whilst students might have genuine concerns about “watering down” the education experience. However, it’s nigh on impossible to turn our backs on AI as an important educational development. As this great ALPS piece from Amanda Rosen shows, there is a clear need for academics to engage with AI in our approach to assessment. It’s already apparent that some students are making quite liberal and not exactly productive use of generative AI as a writing resource.  

But should this mean that we welcome AI into our teaching and learning environment? There are already some in Higher Education that are leading the way on this. Auburn University provides a free, self-paced online course for those interested in employing AI in their teaching, for example. But this might be outpacing what contemporary students want from their education.

Recent conversations I have had with students indicate that there is quite strong opposition to AI replacing traditional forms of student-to-academic communication. Where there is enthusiasm is where AI chatbots might be a quick go-to tool for asking the “embarrassing” questions that students might be afraid to ask but are really important to them, e.g. “I know you’ve mentioned it a dozen times but… how long is the essay meant to be?”. 

That these themes are the ones that seem the most pressing and interesting is almost certainly a result of my own pressing teaching concerns and areas of interest. And so, I’d be very interested to hear from anyone else! What do you think the future holds? What are the pressing, interesting themes that we should all actually be worrying about or embracing? And finally, I would love to know if anyone has a crystal ball I can borrow so I can finally get to the bottom of all this ‘future’ business… 

ChatGPT and Specifications Grading

Unsophisticated use of ChatGPT tends to produce generically poor essays, with repetitive structure, a lack of analysis, and pretty stilted prose. Whether it’s identifiable as AI or not, the reality is that an essay written that way is likely to get a poor grade. When you receive a poorly written essay in which you suspect AI use, there are two typical paths:

  1. Pursue it as a case of suspected misconduct. You might run it through a detector to check for AI use, or ask the student to submit evidence of the work as it progressed through revisions.  Detectors are notorious for producing false positives, though, and students who were acting in good faith (but just have poor writing skills) will be caught up in this.
  2. Ignore the suspected use and just grade it accordingly. The essay is likely to get a C, as Devon Cantwell-Chavez pointed out in a recent tweet, so how much energy do you want to spend on trying to catch users out, when the results are poor?
Devon Cantwell-Chavez tweets on February 13, 2024 about her approach to grading assignments where Chat GPT use is suspected.

To this I wish to add a third path: use specifications grading. 


The Mercy and Consequences of Assignment Extensions

I’ll admit it: I’m a softy when it comes to granting assignment extensions. Yes, my syllabus includes the standard boilerplate that their papers will randomly self-combust for each day they’re late, but in reality, I offer this in-class guidance: I will grant extensions if students request them in person or via a Zoom or Teams meeting, not email. And by “request,” I mean they must explain exactly why they need an extension. I’ll even accept ridiculous reasons if they’re honest with me (think admitting they turned 21 and made poor life choices). But I offer the following advice at the beginning of my course and reiterate it during these student meetings:

  • I ask, “You realize that you’ve known about the assignment since the first day of class?” Thankfully, I’m batting 100% “yes” responses.
  • I then show them my OneNote project tracker, further reduced to subtasks and due dates. Indeed, my tracker is my browser homepage, so I can’t escape its caress. This includes research projects, entertainment gaming projects, travel planning, a parking lot for unsorted tasks lacking fixed due dates, and every semester course is broken down into topics, readings, and upcoming assignments–and more checklists for building those assignments. The student’s eyes are saucers at this point.
  • I don’t suggest that they follow my plan exactly–rather, I suggest that they simply have a plan. One that works for them and preferably a method that’s easy to update once created. I then tell a few true stories about when procrastinating and failing to plan cost me big time, including one event that could’ve completely changed my entire life trajectory if my own mentor at the time hadn’t offered me the same advice.
  • I then note that their extension affects my and/or my teaching assistant’s planning. One student? Eh, no biggie. Multiple? Now we’re filling their extensions like sand into our scheduling rock jars. I want to be fair in timely grading, but I advise that I don’t expect my TAs to burn their schedules accommodating student extensions.
  • Finally, I advise that, although I’ll grant the extension like a squishy teddy bear, my other major concern is that I’m extending the assignment solely within the context of my course. I have no idea what other courses they’re taking, nor do I have insight on those other course assignments. What happens if I grant an extension and it interferes with their other assignments, which leads to a consequential snowball effect much larger than my assignment? I’ve seen it happen: one extension leads to students falling badly behind in their other classes.

So far, I judge this method a worthy compromise. On balance, it’s not that many students, and they at least comment that they appreciate the mentoring (and one student, to my knowledge, adopted my planning format).

That said, I’m curious to read your thoughts in the comments!

Podcasting in Class: Course planning for Spring 2024

As the semester winds down and final grading is in progress, I am looking ahead towards the Spring 2024 semester, when I will be teaching International Relations & Popular Culture for the first time. It is a nerdy niche interest of mine, but I also think the field of popular culture is expanding, gaining more ground, and operating as something relatable to our students. So, I guess: be prepared to see more of that type of content in the new year.

I have decided to incorporate a semester-long podcasting project as the students’ main research project, in which they will produce a public-facing piece of research. I imagine this is the first time most of my students will be engaging with such a project. But this is also a new assignment for me, and I am both excited and wondering what hurdles I haven’t thought of yet. I am building this on a previous guest post by John McMahon (2021) and his APSA Educate resource. Below, I have outlined the overall premise. What wisdom do you have to turn this project into something that can rival Joe Rogan’s podcasting dominance?


Project Citizen: Building Citizenship Skills in an Introductory American Government Course

Today’s guest post comes from Dr. Brooklyn Walker at Hutchinson Community College, Kansas!


This semester, on the first day of class, I asked my introductory American Government class to generate a list, in teams, of how they come across American government and politics. They couldn’t think of a single example. I concluded that many of my students haven’t thought about themselves in terms of politics. They lack political efficacy. They are turned off by polarization and negative affect. And they don’t notice the role government and politics play in their everyday lives. 

But years after they graduate, I want my students to be aware of their political environment and equipped to engage it. I wasn’t convinced that exposing students to interesting information or ideas would address the problems I was seeing. Instead, I wanted to help my students learn about themselves as citizens, develop citizenship skills, and see government in action. I developed Project Citizen to advance these three goals, and to create a bridge for my students between the classroom and the ‘real world’.

Project Citizen is the overarching title for a set of five assignments distributed throughout the semester. It comprises approximately a third of the semester’s total points, signaling that civic engagement is a priority in the class.

Students begin the semester by writing a brief introductory essay (400-500 words). In it, they reflect on a class reading making the case for civic engagement and describe what an ideal citizen does and knows. Then they detail their own civic engagement and the hurdles they face in becoming the ideal citizen they described. This essay forms the foundation for the remainder of the projects and marks the baseline of each student’s civic engagement.

Students then select three projects from a menu, each of which results in a 500-word essay. Each prompt is linked to a topic we cover in class, and the prompts are intended to advance the three main goals. Giving students choices is a core feature of Project Citizen. Some of the prompts may be triggering for students. For example, a Black male in my class asked if he had to talk to a law enforcement officer, and another student with social anxiety was worried about contacting a local civil rights group. Project Citizen encourages students to choose prompts that take them out of their comfort zone but gives students space to avoid prompts that they feel could be harmful. Finally, choice promotes equity. Many students, especially those of lower socioeconomic status, do not have easy access to someone who’s running for office or reliable transportation to civic meeting spaces.

The problems, goals, and relevant prompts (course topic in parentheses) map as follows:

  • Problem: “Don’t know who they are as citizens.” Goal: learn about self as citizen. Prompts: Take a survey to identify your party identification and ideology (Public Opinion); Make your voting plan (Participation); Develop a media diet plan after comparing news articles (Media).
  • Problem: “No efficacy, intimidated by polarization.” Goal: practice political skills, including political discussions. Prompts: Talk to a police officer about the role of civil liberties in their work (Civil Liberties); Talk to a local civil rights group about their work (Civil Rights); Interview someone who’s run for office about the role of money in politics (Elections); Complete an action recommended by an interest group (Interest Groups); Contact a member of Congress (Congress).
  • Problem: “Don’t know how government actually pops up in their lives.” Goal: see politics / government in action. Prompts: Attend a local meeting (Constitution and Federalism); Identify party linkage strategies via party communications (Political Parties); Analyze presidential communications via inaugural addresses (Presidency); Observe a courtroom (Judiciary); Evaluate bureaucracies after speaking with a recipient of state or federal services (Bureaucracy).

Finally, at the end of the semester, students revisit their initial essay. They annotate that essay with 8 comments, reflecting on what they learned from class readings, lecture, discussion, and their projects. Their comments can introduce new information or examples to support or refute their initial points, reflect on how they have changed through the semester, or describe their next steps for developing their citizenship skills.

While most of the students’ Project Citizen work occurs independently, projects are woven into class periods. I mention project prompts during lectures, pointing out information that may be relevant or questions the project may help them answer. Students learn from each other during a peer review session (for which they also earn Project Citizen points). I print copies of the assignment rubric and students provide feedback on two peers’ essays, using the rubric as a framework. After projects are submitted, we dedicate class time to discussing student experiences and connecting those experiences to course material.  

Some semesters I’ve integrated badges, which are essentially extra credit opportunities, into Project Citizen. Badges encourage students to take more ownership of their learning experience. Some of the badges I’ve used include:

  • Jack-of-all-Trades Badge: complete one project for each of the three goals 
  • Design Your Own Adventure Badge: create your own project prompt (in consultation with me)
  • 10,000 Foot View Badge: complete a meta-learning worksheet about a project
  • Second Chances Badge: revise and resubmit a project
  • Collaborator Badge: complete a Project Learning prompt with a co-author

Ultimately, students have reported positive experiences with Project Citizen. One student said that Project Citizen “let me build up my ideas about who I am and about my beliefs. I got to explore what my point of view is and I learned things about myself I never knew.” Another appreciated seeing the government in action. After attending a local meeting, they commented that, “Normally people wouldn’t go out of their way, but Project Citizen gave me exposure to see how my community is run.” 

Project Citizen is constantly evolving, so I look forward to your reflections and comments.  

Is it in the assessment criteria?

I zoomed into an excellent QAA event this week on implementing racially inclusive practice in assessment, based on a project at University of Leicester, partnering with Birmingham City University and University of Wolverhampton. I’d very much recommend that you have a good look at their report in detail. The take-home for me was that whilst an inclusive or decolonised curriculum and role models are incredibly important for engagement and for inspiring students, particularly racially minoritised students, if you want to tackle race awarding gaps, the solution is pedagogical.

Their approach is deceptively simple: they focused on making the hidden curriculum visible for all students and the tacit explicit; on ensuring that students understand exactly what they have to do to succeed, with no guessing games involved and clear documentation of what is required; and on explaining all assessment criteria clearly and transparently, with examples of what good or less good work against those criteria would look like. One of the staff who had implemented the intervention very disarmingly said that he felt a bit embarrassed that he and his colleagues hadn’t been doing this already! He also said that although there was some initial resistance because of worries about ‘spoonfeeding’, the improvement in the students’ work that he saw and the way they engaged allayed most of those fears. They found that by doing this, they could reduce awarding gaps significantly, improve student enjoyment and confidence, and also improve staff experience of teaching and assessing!

There is a lot to learn from in the report. Personally, I’ve already thought a lot about assessment criteria over the years, in an attempt to be inclusive, yes, but also because I just wanted to communicate with students what I wanted them to do, so they would learn better and I could read better work when assessing. As a less experienced teacher, I realised that I was marking work down for not doing things that I had never taught or told the students to do – which offended my sense of justice. But I knew I did want the students to do those things (such as make a coherent argument, evaluate evidence, use examples, write for an audience, use appropriate referencing), so it got me to thinking about how I might teach those things in the context of a disciplinary and substantive module. I came to the conclusion that having transparent criteria and spending some time making sure that everyone understands them would help me communicate what skills I wanted to see and how they might develop them. It turns out to be a practice that serves all students – not just those who have been previously disadvantaged, but also the ones who keep doing pretty well, but don’t know why.

As we know that tutors are often looking for different things in their students’ work, it usually doesn’t work in a discipline like ours to have generic or departmental criteria. It is an incredibly useful exercise for you, as a tutor, to sit down and write out what it is you are looking for in students’ work. This helps clarify expectations for me and helps me think about what and how I will teach. When team-teaching, working with other tutors to clarify not only what the assessment criteria are but also what they mean in practice is extremely useful for making sure that teaching and marking are fair and consistent. And working with students to help them understand marking criteria doesn’t so much help them ‘tick the right boxes’ in a spoon-feed way, but, much more importantly, understand what skills they are learning and why.

For my current module, the assessment is a portfolio, and the assessment criteria are as follows (although I do allow students to negotiate them, which I won’t dwell on here but will come back to another day):

  • Depth of understanding of how politics and power are shaped by, and shape, the natural world
  • Ability to weave together ideas from the module into your own coherent text
  • Depth and originality of critical evaluation of your own relationship with the natural world
  • Ability to argue for your perspective on how nature should be governed or cared for, by whom and in what ways, including use of reasons and evidence
  • Appropriate selection of multimedia on the portfolio
  • Ability to write appropriately for a particular audience (please specify: eg visitors to an exhibition, policy-makers, everyday readers of narrative non-fiction)
  • Creativity of your work on the portfolio
  • Evidence of learning and development over time in the module
  • Depth of critical engagement with the module materials and readings
  • Extent of additional research and further reading
  • Craft of writing, including readability, spelling and grammar
  • Accuracy of bibliographic materials

I like the approach of starting with a noun plus preposition, like ‘depth of’ or ‘ability to’, because it demonstrates that these are skills one can be better or worse at in a qualitative sense. Thus, this is not a box-ticking exercise for students but rather an invitation to engage in deep and dialogical reflection on what, for example, the ‘ability to argue’ or ‘appropriate selection of multimedia’ really looks like in practice.

It’s very important not to stop with listing the assessment criteria, of course, but rather to make them the centre of an ongoing conversation. Here is my top tip: every time a student asks a question about the assessment, or about what ‘good work’ might look like, I bring it back to the assessment criteria. So, let’s say they ask, ‘does my portfolio need to be consistent week by week?’ I will say, ‘Is that in the assessment criteria? No. So, I won’t be looking for that. If it’s something you want to learn, that is, how to create your own consistent style, that’s great – you can do so and add it to the assessment criteria for your self-assessment. But it’s not necessary from my point-of-view.’


Or let’s say they ask, ‘Can my writing be more personal?’ I will say, ‘Is it in the assessment criteria?’ This is a longer conversation – the answer is, yes, I am asking them to give an account of their relationship with the natural world, so more personal writing in the first person is clearly appropriate. However, if they are using part of their portfolio to write for policy-makers, this can lead to a deeper conversation about what sort of writing, evidence and argument a policy-maker might be interested in. Distinguishing these different crafts of writing and talking about when they are appropriate, or not, is much more useful for learning than just prohibiting one of them without explaining why.

Other ways of getting students to engage deeply with the assessment criteria might include:

  1. Guided marking exercises where students mark examples of work with reference to the assessment criteria. Your aim here is to get them to focus on the criteria and not make the sorts of vague comments (‘this was not well structured’) that they have probably experienced themselves at times.
  2. Peer feedback where the focus is on giving each other feedback according to one or more of the assessment criteria.
  3. Formative feedback from the tutor where they have to tell you which criteria they want feedback on. (I have a form and they can’t have their feedback unless they tell me which criteria they are particularly interested in.)
  4. Self-assessment where students have to tell you how well they met the criteria, and where they could have done better.
  5. Any other discussion with examples of the criteria and what they mean, preferably iteratively, so they can improve over time.

Summative feedback should also, of course, refer constantly and closely to the assessment criteria. But by that point, this is just an exercise in demonstrating that you could be trusted to do what you said you were going to do. To return to the QAA discussion on racially inclusive criteria, the return of summative work should not be an opportunity to say: ‘Ta-DAH! This is what you should have done.’ What the students should have done should be clear right from the get-go, or else how can they learn how to do it?

The Joy of Asking for Help: Getting students to read (anything?)

I found myself in an all too well-known situation this week: my students didn’t read the assigned readings. In my opinion, I had set up the most fascinating set of readings to address an important issue in one of my classes. Everyone should want to gobble that knowledge up, said my hubris. But nada. Maybe a handful had read; the rest of the class became experts at looking straight ahead or down at their screens. I am not reinventing the wheel with this not-reading problem, but the wheel certainly ran me over this week.

Credit: Mercy Pilkington (Good E Reader)

I left the classroom after the lecture wondering how I had created this environment, and how I could pivot away from it mid-way through our semester. Frustrated at myself, but knowing that somehow it had to change, I reached out for help on social media and in real life to people in academia. I was in awe of the number of helpful responses I received. Although I did not plan to write about it for ALPS (see Chad’s earlier work on that here), I decided to at least keep – for the record – a collection of thoughts and ideas for others who might find themselves in a similar situation at any point in their academic lives.

  1. Clarify for yourself and the students why we are reading (anything); what the purpose of the readings is; and how they aid us in our learning process – and, importantly, do not assume that a one-time explanation covers that. Frequent reminders are helpful and necessary to the learning environment.
    • Here, I also received some online and offline advice about having a session at the beginning to demonstrate how to read articles. Folks have developed different techniques on how to guide students through a sample reading, including developing key questions students should keep in mind when reading (as guiding posts).
  2. I noticed an interesting debate about using reading quizzes or other grading mechanisms tied to doing the readings (or not).
    • I am not in favor of reading quizzes. For one, selfishly I do not want to have more grading work throughout the semester, and I am not sure how effective these quizzes are in motivating the students to read for understanding. And two, I do not want to create this sort of potentially punitive environment in our shared learning space.
  3. Nonetheless, there were some interesting grading mechanisms for readings: 
      • This includes several recommendations for Perusall, which encourages a more communal reading practice, in which students engage with one another and can annotate readings. I am certainly not sponsored by them, and it also depends on whether your institution has a subscription to the service, but I like the idea. (See similar thoughts on CritiqueIT by Dr. Colin Brown at ALPS.)
      • Some faculty structure their entire lesson plan around students’ reading reactions (required prior to class), focusing on things students did not understand or want to know more about. My planning anxiety stands in the way of this method.
  4. Similarly, the idea of cold calling was brought up. I have fallen prey to this temptation in the classroom, but at the same time, it does not really solve – for me at least – the reading issue. Then I am just embarrassing folks in the classroom (if they haven’t done the readings).
    • Folks have suggested tools such as Menti, PollEv, or JamBoard (even though the last one is being phased out). These are interactive online boards that you can project in the classroom. You can pose a question about the reading, and students can (anonymously) respond. The collection of answers, as well as the anonymity, can overcome social anxiety and the fear of saying “something wrong”.
      • Anecdotally, I already tried a PollEv exercise in one of my classes this week, and I received responses from approximately two-thirds of the class vs. the usual 2-3 hands that shoot up when I start talking about the readings. I did not ask specific questions but rather focused on what stood out to them in the reading or any questions they had (as the mind-hive suggested). Given my mid-semester pivot, I will stick with that for now.

The beauty of having a problem with anything in this day and age is that none of us are special enough/unique enough to encounter it for the first time in all of human history. I realized relatively quickly that my problem is not just “why aren’t they reading” but, more importantly, “what can I (!) do about this to improve our shared learning space”. And I think that is a better motivator than frustration. Reaching out and asking for help was probably the best way to handle this problem for myself. Aside from the fact that people provided advice and techniques, what stood out to me was the fact that there are folks who care, and that they care enough to help me out. Thank you!

Syllabus Quiz In Another Form: Annotation

This idea comes from Matt Reed at Inside Higher Ed, who in turn got it from Emily M. Farris at Texas Christian University: have students annotate, in ABC fashion, your course syllabus at the end of the semester.

I’m going to go a few steps further for the upcoming semester:

First, instead of my usual quiz on the syllabus at the beginning of the semester, I’ll have students annotate it on Perusall in response to my questions. The assignment should function as a close reading exercise, but it will be machine graded by Perusall.

Second, I’ll create a quiz on the Canvas LMS that will force students to explore the contents of the course’s Canvas shell. It has become apparent that most students only pay attention to the LMS’s “To Do” list of impending assignment deadlines that pops up on their phones. They ignore everything else I put into my course shells, including the “How to Get an A” advice. As with the Perusall assignment on the syllabus, the quiz will be machine graded by Canvas.

Third, I’ll create another Perusall assignment on the syllabus for the end of the semester, to get feedback on what worked and what didn’t, and to remind students of course learning outcomes.