SPABC: The gladiatorial panel format

The format allows enough time for presenters to take selfies mid-flow, it seems…

So, after much talk, we finally did the novel conference panel formats.

Now that I’ve got a bit more space, and some data, I want to provide an initial write-up of each of them. Next week, I’ll go through the one that I think has more obvious potential – the flipped format – but today it’s going to be the gladiatorial model that gets the attention.

The format

The conceit – and I think that’s the right word here – is that a presenter starts off with three minutes to present, then the audience get to vote (by app/website) on whether they get another three minutes. This repeats up to a maximum of 12 minutes per paper.

The options I settled on for each round of voting were ‘Yes’, ‘No’ and ‘Yes, but…’, which I suggested to the audience would be a way of signalling a willingness to give time, but with a caution to up one’s game.
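If you fancy running something similar and want to see how the mechanics hang together, the round logic is simple enough to sketch in a few lines of Python. This is only an illustration of one plausible reading of the rules – I’m assuming a straight majority decides each round and that ‘Yes, but…’ counts as a (cautionary) yes, and collect_votes is a hypothetical stand-in for whatever polling tool you use.

```python
# Sketch of the gladiatorial round logic (assumptions: simple majority decides,
# 'Yes, but...' counts as a cautionary yes). Names here are illustrative only.

BLOCK_MINUTES = 3
MAX_MINUTES = 12

def run_presentation(collect_votes):
    """collect_votes() should return a tally like {'yes': 7, 'no': 5, 'yes_but': 3}."""
    minutes_used = BLOCK_MINUTES            # everyone gets the first three minutes
    while minutes_used < MAX_MINUTES:
        votes = collect_votes()
        in_favour = votes.get("yes", 0) + votes.get("yes_but", 0)
        if in_favour <= votes.get("no", 0):
            break                           # the audience calls time
        minutes_used += BLOCK_MINUTES       # another three-minute block granted
    return minutes_used
```

You could just as easily set a higher bar – say, two-thirds in favour – if you wanted the audience’s power to bite harder.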

We ran the panel with four papers – including mine – ordered randomly immediately beforehand, and we had about 18 people in the audience.

How it unfolded on the day

The first bit of feedback from the audience, as I explained the format to them, was ‘why?’, which required a bit of an explanation of my intention: to create a format where everyone was more engaged and where there was more interaction.

Each paper took a very different approach to the time issue:

Continue reading

Terrific & Tested Tips for First Year Seminar Teaching!

This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Nanette S. Levinson.

Alica Retiova’s chapter, focusing on her innovative teaching experiment in a first-year writing-intensive seminar, provides plenty of great advice and, most importantly, evidence of what works in her classroom. A particular bonus is that she does not just assess the innovation itself but also measures and shares student perceptions of it.

What Retiova successfully implemented is the use of written peer feedback on student papers designed to improve writing skills in the field and foster critical thinking. She also has the goal of catalyzing students’ confidence in their own independent learning.  

Showing us exactly how to foreshadow and implement the innovation of written peer-to-peer feedback, Retiova competently explains her techniques for developing and accurately assessing students’ ability to ‘feed forward’ (suggestions to their peers for future writing) as well as ‘feed backward’ (specific feedback on their current position papers). Just as importantly, she measures this over time (three different papers in a three-week period) and with the same peer evaluators.

As a long-time faculty member who herself teaches a first year seminar, I look forward to following Retiova’s tips and the techniques she tried. I also encourage future experiments that recognize the role of culture in attempts to catalyze independent learning. Retiova found that the experiment contributed only “partly” to developing students’ confidence as independent learners. Based upon my research in cross-cultural communication I note that some cultures foster a more hierarchical view of the professor and student, with the professor being viewed as the major source of learning and knowledge. Thus, culture itself may play a role in shaping students’ views and, indeed, abilities to develop confidence in their own role in assessing the work of other students in their classes.

In sum, it is inspiring to read about Retiova’s focus, as an early career faculty member, on excellence in teaching and assessment. This bodes well for the next generation of faculty leaders and their focus on fostering excellence in student learning, including critical thinking and field-specific writing skills.

Helping students do better in essays: what are the options of the seminar tutor?

This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Liudmila Mikalayeva.

Daniela Jaklová Střihavková tells the story of a highly motivated seminar tutor in a first-year course on social work. The problem she sets out to solve is a mismatch between the grader’s expectations and the students’ performance: in 2016–2017 the average grade for exam essays was between 16 and 18 points, far from the maximum of 30. What can the tutor do to close this gap between expectations and performance?

The setup of the course will be familiar to many young instructors: while a senior professor gives weekly content-focused lectures and grades students’ work at the end of the course, a junior tutor is in charge of interactive seminars where students discuss and apply knowledge. The examination consists mainly of two essays checking students’ understanding of the nature of social work and of the social worker’s role, formulated in proper academic language. The essays are challenging for students, since they receive little training in writing them in the course itself.

Jaklová Střihavková very reasonably suggests improving the course through constructive alignment: course design should guarantee the fit between teaching goals, teaching methods and assignments. So if a major teaching goal is to enable students to define the role of the social worker in a concrete case, they should receive guidance on how to do it, have an opportunity to practice it and receive feedback before the exam.

Practically, the author introduced two additional seminar sessions where students practiced a task similar to that of the exam essay. Combining individual and group work, she provided space for the students to confront the complex task and reflect on their own performance. While she cannot prove that the essays received better grades because of these changes, both the grader and the students were happier with the learning outcome.

The effort by the seminar tutor to bridge the gap between the grader’s expectations and the students’ actual work was, however, only partly successful. Even after the additional seminars, students continued to feel unsure about what the grader expected from them, and the grader was still unhappy with how they used disciplinary vocabulary. I see three issues explaining the persistence of the gap.

A relatively minor point is that oral exercises may not be effective enough to support students’ success in written tasks. A much more important drawback, underlined by the author herself, is the absence of clear and explicit grading criteria: the professor would need to make the effort to detail the requirements. And, most significantly, the course structure itself is at the core of the problem: the person grading students’ work is not directly in touch with the students and is senior enough to have forgotten how challenging it is for undergraduates to understand and use academic jargon and to navigate the often-implicit expectations placed on their work.

Jaklová Střihavková is right to point out that, to improve learning outcomes, students need space to reflect on the curriculum, but young and senior instructors should become more reflective as well. Clarifying expectations, aligning content, teaching approaches and assignments, and communicating among themselves and with the students are key, and cannot be replaced by teaching experience alone. Students as well as instructors will benefit from it.

Unprepared students? Try an Online Quiz

This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Alistair Jones.

We’ve all had the problem of students turning up to class without adequate preparation. To gain the full benefits of any classroom discussion or debate, student preparation is essential. Yet there are too many other distractions, resulting in unprepared students attending class. Natália Gachallová explains how she got her students to prepare for their classes, describes the ensuing benefits, and embeds the explanation in the existing literature.

Preparation for an introduction to Latin anatomical nomenclature, you might think, would be essential. The obvious incentive is the required mark of 75% to pass the first part of the course. Yet experience suggested this was not the case. Thus Gachallová decided to innovate her teaching. She introduced fortnightly summative online mini quizzes. These would reinforce prior learning as well as provide a useful revision tool. There was also a reward component, where students gained extra marks for the final exam based on the average score in the quizzes.
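To make the reward component a little more concrete, here is one way such a scheme might be wired up. The chapter (at least as summarised here) does not give the actual formula, so the five-mark cap and the straight proportional mapping below are purely hypothetical.

```python
# Illustrative sketch only: the cap and the proportional mapping are assumptions,
# not the scheme actually used in the course.

def bonus_marks(quiz_scores, max_bonus=5):
    """Convert an average quiz score (0-100) into extra marks for the final exam."""
    if not quiz_scores:
        return 0.0
    average = sum(quiz_scores) / len(quiz_scores)
    return round(max_bonus * average / 100, 1)

# e.g. an 80% quiz average would earn 4.0 extra marks under these assumptions
print(bonus_marks([75, 85, 80]))
```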

Such innovation can be time consuming, especially given the length and number of classes involved. In this case, there is a cohort of over 130 students! Gachallová does not mention how much time went into preparing these quizzes, especially compared with the preparation undertaken previously. Follow-up questions were used in class to stimulate debate – an example would be interesting to see.

An online student survey was utilised to measure the effectiveness of this approach, which produced remarkable findings. Around 85% of respondents claimed to find the in-class quizzes beneficial. Conversely, some respondents complained about the quizzes being too challenging, and voiced concerns over spelling mistakes leading to marks being dropped.

The benefits are visible in the grades of students. Both the average mark and the overall pass rates improved. The exception is in the tail of the class, where pass rates at the third attempt of sitting the final exam were almost unchanged.

Yet Gachallová takes into consideration the extra marks gained by students from the online quizzes. Her findings showed most students did not need the extra marks from the quizzes.  Most of those who passed the exam would have passed without the extra marks. A very small number of students failed the exam despite gaining the extra marks from the online quizzes. The reward component was meaningful for about 5% of all students.

The key message from this chapter is simple. If students can engage with the learning materials throughout the academic year, they are far more likely to pass. Online quizzes appear to motivate the students to engage with the class preparation. Formative assessments AND formative feedback can increase student success in the summative assessments.

Some of you may consider a Latin Medical Terminology course to be rather niche. It might be that online quizzes are not deemed appropriate for other subjects. Yet that is to miss the bigger picture. There is a need to engage with the students; to encourage them to take responsibility for their learning. One way is through the use of technology. The fact that a small incentive was added in relation to the marks may be inconsequential – which is something for future exploration.

If students are not going to engage with classroom preparation, why hold the class? If students merely parrot what has been said in class, has any learning actually happened?  Consider the six levels of knowledge in Bloom’s Taxonomy (1956): to have knowledge, comprehension, application, analysis, synthesis and evaluation. If using technology and incentives to innovate our teaching can help our students to a deeper level of knowledge, then it is worth the experimentation.



Wheel Reinvention Avoidance

A short one today, as I’m in the middle of the UACES conference in Lisbon, trying out my novel panel formats and worrying about the tech side of it all: more on that next week.

On Sunday, we had our regular pre-conference L&T workshop, which was structured to let us discuss in small groups what and how we teach. The initial idea had been to focus this on different substantive specialisations, but in practice we all ended up working on more general issues and questions.

One of the most useful thoughts in this was the notion that this kind of sharing is really helpful, because it lets us avoid having to do everything from scratch.

Rather than building from the ground up, the exchange of ideas can mean both the avoidance of things that don’t work and the opening-up of new vistas.

To take the latter, I got a couple of really interesting ideas from the discussion that I’m now looking to see if I can implement with my students.

The first is assessing students’ lecture notes.

This could be as simple as just checking they’ve made such notes, or – and this is where I can see real potential – you could ask for those notes to be augmented with further reading/note-making, or framed with some elaboration of how different elements hang together, or coupled to a reflective piece of writing.

The second is getting students to present each other’s work.

This forces students to think/act a lot more in making their work understandable enough for someone else to talk it through, and to appreciate how others interpret their work: ‘did they stress what I wanted them to in their presentation?’

Neither is such a radical idea, but they contain the core of some new things to explore.

Health Policy in DevelopmentLand: a practical policy simulation

This guest post comes from Cathy Elliott (UCL). A detailed instruction pack for this activity can be downloaded via the link at the end of the post.

One thing that students in my International Development class find intriguing is the fact that, in a previous career, I used to work for the British Government in Pakistan. Relatedly, one of the things I find difficult is students’ received ideas about what such a job in international development might be like. When students join the class, some fall unreflexively into a discourse of “us” “helping” “them”. Others, meanwhile, bemoan unequal power relations in the world, imagining that a supplicant Pakistan is being pushed around by overbearing British development workers coercing them with huge amounts of conditional aid on offer.

The first position smacks of White Saviour attitudes and is at best patronising to local people and movements pushing for their visions of change. The second, meanwhile, bears little resemblance to my actual experiences of trying to spend relatively small amounts of money in ways that would be acceptable to the governments of both countries. Students also come to my class in search of solutions to global poverty, and one recurrent grumble on my student evaluations is that they do not leave the ten-week module knowing how it might be “fixed”. Meanwhile, I want them to question these sorts of technocratic attitudes that encourage them to seek simplistic solutions.

In response, I have developed a 3 hour simulation game that attempts to give a glimpse of what the world of international development is actually like. It is based on my own experience of living and working in Pakistan for three years, and of course other experiences are available. However, I have found it a useful activity for getting students to understand some key ideas from the literature, including the role of power and networks in policy-making, the importance of powerful discourses including international goal-setting agendas such as the Sustainable Development Goals, and the open-ended, processual and contingent nature of political and policy-making practices.

The premise is quite simple: for the duration of the class we will be in the fictional country of DevelopmentLand, which bears a striking resemblance to Bangladesh in every way unless something different has been specified. The students (usually between 40 and 60) are split into small groups and assigned roles with detailed descriptions. The groups are broadly as follows:

  • The Minister of Health and her advisory team
  • The World Bank
  • The UK Department for International Development
  • A group representing the country’s religious leaders
  • A British consultancy firm
  • An international maternal health NGO
  • A local national maternal health NGO
  • A group representing local traditional birth attendants
  • A local NGO that saves lives by teaching children to swim
  • Local NGOs working on neglected tropical diseases

On their cards, students are given information about where they went to university, who they already know and whether or not they speak English. I try to organise the groups so that friendship groups within the class map onto networks that you might expect. Sometimes I also add a group of journalists.

The aim of the game is straightforward. The groups have to persuade the Minister to work with them to develop health policy in the country. Each group has to prepare a five-minute presentation to persuade the Minister. However – and this is the really important bit – the Minister can work with more than one group if they are able to work together in a consortium. Groups that decide to work together can also pool their time. This means that if two groups work together, for example, they jointly get ten minutes to present. A class-wide “win win” situation is therefore possible: there is no reason in principle why all the teams could not get together and put a proposal to the Minister that she and her team would find acceptable.

This has never actually happened, though, because – in classrooms as in life – the game is rigged. The Minister and international donors are preoccupied by meeting the Sustainable Development Goals. The Minister is also keen to be re-elected and the group of religious leaders may have a role to play in enabling or, crucially, wrecking her chances. Some of the groups find that their very good cause doesn’t fit in well with her aims and it is more challenging for them to find partners to work with. Some groups don’t speak English or have dinner with the right people. And not all the groups know everything about what is going on in the other groups. They can usually find out by asking, gossiping and listening in but it often doesn’t occur to them. Intriguingly, no-one yet has produced a crumpled fiver from their wallet to grease the wheels, although I sometimes spread the mischievous rumour that previous classes have tried this!

The groups spend about 90 minutes making what they will of the situation and then they do their group presentations. Afterwards, they talk among themselves about what they think will happen, while the Minister and team make a decision, which they then present back to the group. The debrief session afterwards helps students make sense of the experience and understand, with guidance from me, how their experience fits into the broader theories we have been studying, as well as my own experiences.

One thing I like about this exercise is that it gives students an insight into ordinary everyday politics, as they are most likely to encounter them. There is no dramatic threat about to overwhelm DevelopmentLand and the always unrealised possibility of a win-win ending means that there are no bitter enmities, only ordinary political rivalries and invisibilities. The international donors are powerful, but so are the government and other local players, and all of them are operating in a landscape of power relations that they did not create and cannot reshape on their own. This gives students an understanding of the crucial point explained by Maureen Mackintosh: “Deciding what should be done is relatively easy. But achieving it requires alliances with others”.[1]  The infuriating nature of having to work with other people in conditions of unequal power is a great learning experience, above all for students of politics. As one student put it: “It helped me understand the process of politics, particularly the necessity of working with people with different agendas”. Another made me happy by remarking: “It was a challenge to cynicism – maybe things can change, but there aren’t easy answers.”

Michael Burawoy once wrote that “[o]ne cannot both play the game and at the same time question the rules”.[2] This doesn’t seem right to me; surely it is only through playing the game that we can understand the rules, their consequences and possibilities for effecting change, either by working within them or by transforming them. Policy simulations like this give us some tools to help students begin to play the game and to ask critical questions about the rules that they probably could not access otherwise.


[1] Mackintosh, M. (1992) “Creating a Developmental State: Reflections on Policy as Process”, in Gregory Albo, David Langille and Leo Panitch (eds) A Different Kind of State? Popular Power and Democratic Administration. Toronto: Oxford University Press Canada, p. 44.

[2] Burawoy, M. (1979) Manufacturing Consent: Changes in the Labor Process under Monopoly Capitalism. Chicago: University of Chicago Press, p. 79.

Making the Society for the Abolition of Boring Conference Panels happen

It won’t actually come to this.

I think

So last year, I mused on trying out some different types of conference panels, to see if we couldn’t be doing better than our usual two-minutes-at-the-end-for-a-long-winded-question-that’s-more-a-statement-actually.

That sparked enough interest to get together two panels for next week’s UACES conference in Lisbon.

Long-standing readers (and those who’ve just gone back to the original post) will remember I’m trying out two formats here.

Format 1: flipped panels

For our panel (106) on Monday on the EU27 and Brexit negotiations, we’re trying to use a flipped approach.

That involves us recording presentations that run as long as we like and posting them online beforehand, then having only very brief one-sliders in the panel itself.

You can watch the efforts of myself (here), Natasza (here) and Petr (here) already now.

As you can see, we’ve all gone longer than the usual 15 minutes and I personally found it nice not to have to worry about the time-limit for this.

In Lisbon, we’ll have 5 minutes to present key points and then we go into a nice block of time for Q&A/discussion.

Format 2: gladiator time!

For our second panel (416) on Tuesday on Learning & Teaching, we’re getting the audience involved.

At the start, we’ll be getting everyone to access Poll Everywhere, via laptop or app, so they can vote.

The presenters will be given an initial 3 minutes to present, before the audience gets to vote on whether to give them another 3 minutes.

Presenters can get up to a total of 12 minutes, with each additional block conditional on the audience being willing to go along with more.

We’re going to draw lots for the order of papers too, because I’m guessing it’ll take a while for the audience to get used to using their power, and it only seems fair.

I’ve given a couple of notes to the presenters to reflect upon:

How do you balance getting your key points across with trying the patience of your audience? Logically, you build in some kind of cliff-edge every 3 minutes, but what if your paper isn’t like that, or doesn’t have that kind of content? How much do you trial content and how do you spring surprises?

How do you structure any PowerPoint you’re using? I can keep the results of the polling on my laptop, but again, what do you need to communicate to the audience, and when?

I’ll not say how I’m approaching these points, because I’m guessing there will be a variety of approaches, but I’ve found it much more thought-provoking/tricky/a pain for my content than the flipped model.

Testing, testing

To back up the panels, I’ll be asking for feedback from both audiences and presenters on the formats, and I’ll give you a quick run-down on that post-conference, with a view to publishing at some point.

With that in mind, if someone else wants to try these formats at their event, then knock yourselves out: I’m happy sharing my feedback forms with you too.

So, as I go off to finalise that second panel, I hope that you had a good summer. If you’re coming to Lisbon, then I hope to see you at these panels: please do watch the flipped presentations and/or download the app.

If you’re not, then I’m sure I’ll be telling you about it soon enough!

Why I Got Arrested (Twice) Last Semester

Today we have a guest post from Andre Audette, assistant professor of political science at Monmouth College. He can be reached at aaudette [at] monmouthcollege [dot] edu.

It’s about time that I come clean publicly: last semester I was arrested not once, but twice, at the start of class. My crime? Teaching constitutional law.

Students in my Civil Liberties course were wrapping up a unit on criminal procedure, which includes case law involving proper arrests and interrogations. To give them firsthand experience, I asked for two volunteers to arrest me and then achieve a conviction without using any unconstitutional evidence.

Before class started, I discreetly asked one student to watch over my snack-sized bag of “drugs” (oregano). At the time of my arrest I was handcuffed (using fake handcuffs that were easy to get out of) and brought over to the interrogation room, where I was placed under a portable clip lamp I had concealed in a canvas bag.

Throughout the simulation I did not make the arrest easy. I admitted to the crime before my rights were read, after which I vigorously denied the charges. I pretended not to understand my rights while accusing the officers of violating them, signed the rights waiver under a pseudonym, asked for and then rescinded my request for a lawyer, and pretended to be under the influence of mind-altering substances. Each of these represents one of the surprisingly common complications in criminal procedure.

After the simulation concluded, I asked the class to determine which evidence could be used against me in a court of law. The results were . . . murky. The “easy” constitutional interpretation of Miranda v. Arizona began to look a lot more difficult.

Students responded positively to the experience and gladly arrested me again on the last day of class. This time I played an intelligent and peaceful extraterrestrial who had been living in the United States for many years, a scenario that asked students to extend the logic of Plyler v. Doe, a case about the children of undocumented immigrants. Students acted as a jury to determine whether I, as an extraterrestrial, could be tried under a military tribunal, executed, and denied admission to law school despite being otherwise qualified. The exercise served as a review of the semester and a reminder that constitutional rights come from cases that push the boundaries of the law.

This simulation requires that the instructor cede a great deal of control to students in a way that may not be comfortable or even advisable for everyone. The professor should have a rapport with the students beforehand. The number of students in the class and its physical location are other considerations.

But my students reported that the exercise gave them a new understanding of what can otherwise be dry and unapproachable legal reading. Anecdotally students seemed more attuned to the complexities and nuances of constitutional law in their exams and hypothetical case briefs after the simulation than they were before. And in their writing they were able to wade deeper into legal reasoning by analogy rather than a strict factual application of precedent. Students also noted in their course evaluations that they learned that the law is not as straightforward as they thought.

Thus, the exercise appeared to have achieved my goal of demonstrating that the law is not as cut-and-dried as students usually assume, and that most constitutional law is advanced through these tough cases, if it is ever settled at all.

Does the Question Determine the Answer?

Regular readers of this blog know that I sometimes ponder the clarity of my assignment and exam prompts (some past posts on this subject are here, here, and here). Students sometimes don’t hit what, in my mind, the question targets, so I revise in the hopes of creating a prompt that is more transparent. But I don’t want prompts to be answerable with a Jeopardy-like regurgitation of facts. I want students to exert some cognitive effort to figure out how to apply concepts that are relevant to the question at hand.

Usually this situation occurs with my undergraduates, but I’m noticing it more frequently with master’s degree students. A recent example is an assignment from my graduate-level introduction to comparative politics course:

Continue reading

What Do Grades Mean?

What do grades actually mean? I began pondering this question while designing a course for the fall semester. Theoretically a grade indicates the amount of knowledge or skill that a student possesses. But really? Those of us working in the USA are quite familiar with grade inflation. A final grade of C today probably doesn’t indicate the same level of knowledge or skill proficiency as the C from fifty years ago. There is also the persistent problem of knowing whether our assessment tools are measuring the types of learning that we think they are/want them to. And it is probably safe to assume that, both in and out of the classroom, there is a lot of learning happening but we just aren’t interested in trying to measure it. The situation gets even more complex given that — again, in the USA — a “learning activity” often won’t function as intended if students believe that it has no discernible effect on their course grades.

I structure my syllabi so that the sum total of points available from all assessed work is greater than what is needed for any particular final grade. For example, a student might need to accumulate at least 950 points over the semester for an A, but there could be 1,040 points available. I do this to deliberately create wiggle room for students — with so many assignments, students don’t need to get perfect scores on, or complete, all of them. While this leads to higher grades in my courses than if I graded strictly on a bell curve, I want to give students plenty of opportunities to practice, fail, and improve. And I firmly believe that sloppy writing indicates sloppy thinking, while good writing indicates the converse. So in reality what I’m doing with most of my assignments is evaluating the writing abilities of my students.
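As a rough sketch of the arithmetic, the scheme looks something like the snippet below. Only the 950-point threshold for an A and the 1,040-point total come from the description above; the intermediate cut-offs and letter grades are placeholders for illustration.

```python
# Minimal sketch of the points-based scheme described above, assuming (purely for
# illustration) cut-offs at regular intervals below the 950-point A threshold.

TOTAL_AVAILABLE = 1040   # more points on offer than any grade requires
CUTOFFS = [              # (minimum points, letter grade); only the 950/A figure
    (950, "A"),          # comes from the post, the rest are hypothetical
    (900, "A-"),
    (850, "B+"),
    (800, "B"),
    (750, "B-"),
]

def final_grade(points_earned):
    for minimum, letter in CUTOFFS:
        if points_earned >= minimum:
            return letter
    return "below B-"    # handled separately in practice

# A student can skip or flub roughly 90 points' worth of work and still earn an A:
print(final_grade(960))  # "A"
```

The point of the surplus is exactly that slack: the gap between TOTAL_AVAILABLE and the top cut-off is the wiggle room.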

This system often produces a bimodal grade distribution that is skewed to the right. Expend a lot of effort and demonstrate a certain level of proficiency, and you will get a grade somewhere between an A and a B-. Choose not to expend the effort, or consistently demonstrate an inability to perform at a minimum level, and you will get a D or an F. I’m comfortable with this result, in part because I know from the cognitive science research on learning that repeated exposure and frequent testing build long-term memory.

This leads me to the reason for doubting that grades in my courses mean the same thing as they do in courses where the only assessment is done through mid-term and final exams composed of multiple-choice questions. Yes, the proportion of A’s in the latter might be lower than in the former, but I bet that on average my students are retaining more. At least I like to think that’s the case. There is no way for me to be sure.