In 2015, I wrote about asking too many questions in instructions for assignments. What I, the information-craving professor, see as helpful detail, the student sees as a tangled and confusing mess.
I still notice occasions where I fall into this bad habit, most recently in an assignment in two of my online graduate courses, in which students analyze peer-reviewed journal articles. The old instructions said that analyses should answer a long list of questions.
During the second half of this summer I wrote about running a Brexit game, and off the back of that I now have two great partners – Matthew LeRiche (Memorial) and Chris Huggins (Keele) – with whom I’m doing just that in the run-up to Christmas.
As I promised, I’m going to keep some postings going as we proceed, since it’s a new thing for me (and the others, for that matter) and it’s useful to share the experience now.
Chris actually came in on the back of the previous post, so now we have three groups of about 15-20 students taking part. Matthew’s students are taking the roles of the UK and the European Parliament, while Chris and I are sharing duties, with mixed teams for Germany, France, Italy, Poland and Ireland, plus Keele-only teams for Hungary and the Netherlands, and a Surrey-only team for the European Council (who will chair). A bit complex, in short.
This complexity is a function of how we came together as a group, so a first observation would be that this is much easier to sort out if you have your partner list fixed at the start. We’ve made a virtue of necessity by having Memorial students take the roles that are most separate, so that they are less likely to come at things from the same place as the EU groups.
Yesterday, I ran my first proper session with my students, getting them to negotiate (internally) briefs for the rest of the game. This was a good opportunity to get them thinking more systematically about what they’re trying to do (and how) and ties in with my need to get them focused on negotiating skills. Both Matthew and Chris have students who are coming to this from an EU politics perspective, so I’m hoping that we’ll get some insights as to whether it’s process or content that matters more in such discussions.
In a couple of weeks, the UK will submit its notification to leave, along with any claims it wants to make, and then the other groups will respond, building up to a final session in early December here at Surrey, when we plan to get almost everyone in the same room.
At the moment, our structure seems to be working: aside from the joint sessions, we’re leaving it to each of us to do anything more we want with our own students, as per our institutional and individual requirements. Communication between the students has been limited so far, although that has started to change in the past week.
The big challenge is going to be when the groups have to interact, and we’ll see much more clearly how that works. My guess is that someone is going to find that their assumptions were well off the mark, at which point we’ll have a nice learning moment. And you’ll have another post on this.
Last time, I posted about ways to get students involved in making hypotheses, forming expectations, and testing them with instructor-provided data. Today’s activity leverages technology in a different way, by allowing students to collaborate on data generation and encouraging comparative thinking as a means of drawing conclusions.
I begin by creating and sharing a Google spreadsheet (1) with several questions across the top of columns, and several cases listed one to a row. Then, pairs or small groups of students are assigned one or two cells of the resulting table. For example, my Intro to American Politics class did comparative civil rights struggles, comparing African Americans, women, Hispanic Americans, Asian Americans, and the LGBT+ community as cases, and motivations/starting conditions, core strategies, de jure discrimination, and de facto discrimination as questions. One group of students might have motivations and strategies for African Americans, and another group had de jure and de facto discrimination for LGBT+ Americans.
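The grid behind this activity can be sketched as a simple mapping from (case, question) cells to student groups. Below is a minimal sketch in Python; the round-robin assignment and the group count are illustrative assumptions, not the author’s actual procedure, and the case and question names follow the Intro to American Politics example above.

```python
# Cases form the rows and questions the columns of the shared spreadsheet;
# each student group is assigned one or two cells of the resulting table.
from itertools import cycle

cases = ["African Americans", "Women", "Hispanic Americans",
         "Asian Americans", "LGBT+ community"]
questions = ["motivations/starting conditions", "core strategies",
             "de jure discrimination", "de facto discrimination"]

def assign_cells(num_groups):
    """Map each (case, question) cell to a group number, round-robin."""
    groups = cycle(range(1, num_groups + 1))
    return {(case, q): next(groups) for case in cases for q in questions}

assignments = assign_cells(num_groups=10)
```

With 5 cases and 4 questions there are 20 cells, so 10 groups each receive exactly two cells, matching the pairings described above.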
We don’t usually write about dissertations on this site, partly because it’s not obviously an area ripe for discussions of active learning, and partly because we’re doing so many other things.
However, a capstone dissertation is often the single most important piece of work that a student undertakes: an opportunity to explore the subject as they see best, producing something (hopefully) akin to a research output.
Put like that, it’s obviously an instance of active learning, because they drive the entire process, with us taking a ‘supervisory’ role.
Now that I’m back in the ranks after my stint as Associate Dean, I’m getting to do more of this supervision, and it’s been a good refresher of what the most useful advice I can give might be.
This guest post comes courtesy of Johan Adriaensen (Maastricht) & Carola Betzold (Antwerp).
Higher education often engenders a dual ambition. Upon graduation, we expect students to be prepared for the professional labour market. At the same time, we expect them to have developed academic qualities such as a critical mind-set, an understanding of scientific research and an inquisitive attitude. Reading between the lines, it is not hard to see that many of these ambitions revolve not around students’ acquisition of knowledge but around the mastery of particular skills and attitudes. While there is a lot of literature available on innovative teaching methods to promote the learning of skills and attitudes, we wondered whether the standard methods of evaluation (exams, written assignments) are adequate to assess a student’s mastery of these important skills and, in turn, signal their accomplishment to future employers.
But what exactly are these skills that students should learn, what are different ways of evaluating these skills, and how could we help students showcase their skills to the outside world? To address these questions, we organised a session on “Student evaluation and student portfolios” at the recent EUROTLC conference in Brussels. Using a World Café format, participants first identified generic skills students should acquire over the course of studies and then turned to different forms of evaluation of these skills. Finally, the discussion centred on student portfolios as one tool to enable graduates to present their skills to future employers.
So what are the skills we should teach and students should learn? The list is long: being able to communicate clearly via written as well as spoken word. Organising, prioritising and filtering information. Acquiring an inquisitive mind and becoming a life-long learner. Interestingly, these skills were quite generic to university education; it was much harder to identify skills unique to political science, international relations or European studies. Yet, the relative importance of the identified skills – and thus their prominence in the curricula – is likely to differ.
How can we assess these different skills? Is there more than essays and exams to evaluate students? Does our examination privilege certain skills or types of learners, and if so, how could we change this? Participants agreed that the evaluation of skills and attitudes requires a slightly different approach, and offered a range of examples of how they or their institutions provide feedback and evaluate students. One participant, for instance, described how he has a “menu” of tasks that students must or can do to obtain points in his class. Some elements are mandatory, but most are voluntary. Students can thus select a format that suits them: you may want to write an essay, but you could also do a presentation or take an oral exam. Another participant described how they use peer review to obtain feedback on group work, whereby all group members at certain points rate themselves and their peers on specific criteria such as creativity, reliability or punctuality. These open ratings are then discussed within the group: why did you give or receive this rating? What do you take away from this? This peer review system worked very well, but did not influence the final mark.
But how much does such a mark really say about skills to a potential employer? How could students provide evidence for their skills beyond a numerical mark on an abstractly named course? To this end, we proposed the use of a portfolio: a sort of repository of students’ achievements and activities. The question then was how we, as academic staff, can help students build up this evidence into a student portfolio. Ideas ranged from specific written assignments such as position papers, speeches, articles in student journals or opinion editorials to participation in simulations and student debates. Branding and badging are important to ensure recognition of students’ accomplishments; competitions and prizes are but one example of how these can feed into a portfolio. With such a repository, you have concrete examples you can refer to in cover letters or job interviews to plausibly show what you can do.
Ultimately, time was too short for our discussions to come to a conclusion. Still, we were left with the impression that our exercise is useful for many educational programmes. Clearly, each programme is likely to prioritize different skills, requiring a different evaluation practice and offering alternative opportunities to develop a student’s portfolio. As in our World Café, the choices ultimately made were contingent on the participants around the table. Identifying the required skills and tailoring one’s programme to them is a collective endeavour of all involved teaching staff.
When teaching about democratic and autocratic regimes, I like to emphasize two basic points. First, democratic governments are held accountable by (typically) a majority of citizens, while autocrats must focus on a narrower constituency. Second, unpopular democratic governments can be removed by the vote, while removing autocrats often involves violence. This classroom simulation is intended to demonstrate these differences.
Materials needed: A deck of playing cards and a “resource” to be distributed. Candy works well, but tokens may also be used. A total of 200 units are needed.
The simulation works best with a class of 30-50 students, although it may be modified to suit your needs. I run the simulation twice (once under each set of rules), but with two decks of cards you could split a larger class in two. It takes about 30-40 minutes to complete the simulation and a debriefing.
Assigning roles (authoritarian scenario): Distribute playing cards to the students, making sure to deal all of the face cards (kings, queens, and jacks). Tell the students that the king, queen, and jack of spades are the rulers (the top decision-makers); all other royal cards are the elites (nine in total; this could represent the military, a ruling party, or a royal family). All numbered cards, including aces, are the citizens.
- Have the three rulers divide up 100 units of a “resource” (e.g. candy) between the three groups: rulers, elites, and citizens. They only choose across these broad categories; within each category, resources are distributed as equally as possible. For this step they may leave the classroom to confer with each other.
- Once the distribution of resources is announced, the elites can decide to accept or reject the offer by majority vote. If they reject, the rulers are deposed in a coup; the rulers all die and get nothing. Those who voted to reject the offer can decide a new distribution of resources between elites and citizens. However, since a coup disrupts the economy, 25 of the resources have been destroyed, leaving 75 to divvy up.
- The citizens can accept the distribution, or reject it by revolting. Since the rulers and elites control the guns, citizens need a supermajority (2/3 or 3/4) to revolt. If they revolt, the rulers and elites all die, and one card among the citizens picked at random will die. Moreover, since war destroys the economy, all survivors receive only 1 unit of the resource.
Assigning roles (democratic scenario): Distribute playing cards to the students, making sure to deal all of the face cards. Tell the students that the king, queen, and jack of spades are the executives (the top decision-makers); all other royal cards are the legislature (nine in total); all numbered cards, including aces, are the citizens.
- Have the three executives divide up 100 units of a “resource” between the three groups: executives, legislature, and citizens. They only choose across these broad categories; within each category, resources are distributed as equally as possible. For this step they may leave the classroom to confer with each other.
- Once the distribution of resources is announced, the legislature can decide to accept or reject the offer by majority vote. If they reject, the executives are deposed (impeachment, vote of no confidence) and receive nothing. Those who voted to reject the offer can decide a new distribution of resources between the legislature and citizens. As this is a constitutional procedure, no resources are lost.
- The citizens can accept the distribution, or reject it through a majority vote. If they reject the distribution, the executive and legislature are deposed, receiving zero resources; each citizen receives 2 resources.
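The contrast between the two rule sets comes down to simple arithmetic: rejecting the offer is costly under autocracy (a coup destroys 25 of the 100 units) but free under democracy. A minimal sketch of that payoff arithmetic follows; the group sizes reflect the card setup (three rulers/executives, nine elites/legislators), while the citizen count of 28 and the function names are illustrative assumptions, not fixed by the post.

```python
def per_person(total, group_size):
    """Split a group's allocation as equally as possible (floor division)."""
    return total // group_size

def autocracy_after_coup(elite_share, citizen_share,
                         n_elites=9, n_citizens=28):
    """A coup destroys 25 of the 100 units, so only 75 remain to split."""
    assert elite_share + citizen_share <= 75
    return (per_person(elite_share, n_elites),
            per_person(citizen_share, n_citizens))

def democracy_after_no_confidence(leg_share, cit_share,
                                  n_legislators=9, n_citizens=28):
    """Constitutional removal destroys nothing: all 100 units remain."""
    assert leg_share + cit_share <= 100
    return (per_person(leg_share, n_legislators),
            per_person(cit_share, n_citizens))
```

For example, an even 36/39 split of the 75 post-coup units leaves each elite with 4 units and each citizen with 1, whereas a 45/55 split of the full 100 units after a no-confidence vote leaves each legislator with 5 and each citizen with 1.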
Ask the rulers or executives how they came up with their initial “offer” and who they felt accountable to. (When I ran this game the first time, the rulers in the authoritarian scenario decided to be “fair” and distribute the resources evenly, but this caused them to suffer a coup. That led to a nice discussion about incentives to be “good” or not.)
Ask the legislature in the democratic scenario why they voted the way they did; do the same of the citizens. What offer would have kept them happy? What offer would have caused a “no” vote?
- To whom are leaders accountable in democratic and autocratic settings?
- What are the advantages of voting versus violence to remove leaders?
- Which set of rules led to more “just” outcomes?
It’s time for another specifications grading update! These posts are my attempt to be highly transparent about the benefits and challenges of using this method in my research methods class this semester, with all the trials and tribulations and the reflection and changes they prompt here on display. Check out Posts 1, 2, and 3 if you haven’t already, or for a deep dive into the ins-and-outs of specifications grading as a system, take a look at our extensive discussions on the subject over the last year.
Today’s topics: requiring so much that I set up my students to fail; dealing with late work; and how all that grading is going. In other words, let’s talk about how even extensive reflection and consideration can result in basic syllabus mistakes that pose unacceptable challenges to students.
Let’s start with this basic question: why in the world did I require 21 assignments? Yes, pedagogically speaking, this made sense: collectively, these assignments added up to the basic knowledge required of an introductory methods course. They covered topics ranging from writing research questions and hypotheses to measurement, ethics, sampling, correlation versus causation, and everything else. Back in the summer, I spent a lot of time considering whether to require everything or to allow students to complete some smaller number or percentage of the total.
Here’s the problem: if a student completes twenty assignments with a satisfactory or higher score, but misses assignment #21 because they forgot or overslept or had a family emergency, then according to my syllabus they fail the course outright. Sure, the tokens help with this, letting students get a 24-hour extension per token, up to 72 hours—but what if they don’t have enough tokens in the bank, or they completely forget for a week or more? These students FAIL THE COURSE.
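The token rule itself is precise: each token buys a 24-hour extension, capped at 72 hours in total, and anything beyond that fails the assignment. A minimal sketch of that rule, with hypothetical names and dates (the actual syllabus mechanics may differ):

```python
from datetime import datetime, timedelta

HOURS_PER_TOKEN = 24
MAX_EXTENSION_HOURS = 72  # three tokens' worth, the syllabus cap

def accepted(deadline, submitted, tokens_spent):
    """Is a submission within the deadline plus purchased extensions?"""
    extension = min(tokens_spent * HOURS_PER_TOKEN, MAX_EXTENSION_HOURS)
    return submitted <= deadline + timedelta(hours=extension)

due = datetime(2017, 10, 1, 23, 59)
```

Under this rule, a submission 40 hours late passes with two tokens but not with one, and nothing more than 72 hours late can pass no matter how many tokens remain—which is exactly the failure mode described above.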