Making assessment relevant

Insert metaphor here

Reading Martin’s post yesterday, just as I’m finishing my duties as an external examiner, makes me think about assessment formats.

Too often, we fall into the essay-and-exam approach: it’s simple and easy, and hardly anyone questions it. Of course, as the institution I external at is about to find out, I’m one of the people who does question it.

Assessment has a terrible reputation to contend with: in essence, it’s a hassle to do as a student, a hassle to set and mark as an instructor, and the source of more academic complaints than anything else. No-one has a good word to say about it, it seems.

In our hearts, we know that it matters and that there has to be some kind of means of evaluating student performance, for their sakes and ours. But surely there’s a better way of doing it.

“The internet’s broken…”

Pfft

Reading this piece of investigative journalism over the weekend, I was struck by the sub-text that if something’s not on the internet, then it doesn’t exist.

The author was investigating the use of micro-targeting on social media in the EU referendum and funding links to the US, and much of it turned on the absence of an online footprint for the various companies and entities involved.

This struck me as a marginal issue for two reasons: firstly, I’m a digital migrant, so I remember a time of card-filing and dusty archives; secondly, I work in a field where much activity remains resolutely off-line.

However, from the perspective of one of our students, things might look a lot different: we know that many of them seem to struggle to get beyond the first page of whatever Google search they have entered, so how do they cope with this kind of thing?

Three basic elements suggest themselves here.

Rules (and how to use them)

OK, so some rules don’t change…

For various reasons – some political, some professional – I’m thinking about rules.

So much of the work we do as scholars is about understanding the formal and informal rules of political interaction, and how political agents use, adapt to and shape them. The norms of political life are often purely conventional, but they can exert powerful effects, even before we get to notions of (il)legality. Take a moment to look at the leader of your country and think how much of our understanding of that individual is about their mastery of rules and conventions.

Fun, wasn’t it?

And so too in the classroom. Our institutions set up rules and regulations, codes and practices: in our classrooms, we fall into roles and habits.

One of the most useful things in the development of my practice has been to tackle those local rules in a political way: to think about how I can make those rules work for me, rather than against me.

To be (very) clear, that doesn’t mean breaking or ignoring rules, but reflecting on their intent and their definition and how they fit (or don’t) with what I’m trying to do.

Broadly speaking, there are two ways this works.

The first is when you’re doing something and then the rules change. This tends to be the more common scenario, because we’re always doing stuff and the rules are always changing (or so it feels). The conventional view would be to throw up one’s hands and demand to know why ‘we’re fixing stuff that isn’t broke’: if it was good enough then, then why isn’t it now?

But rules do change and almost always for a well-intentioned reason (even if that latter point isn’t always immediately obvious). Rather than having a strop about it, we can more usefully consider how the rule changes impact on what we do and how we can adapt. Remember that change is usually evolutionary, rather than revolutionary, so it’s not a blank-sheet exercise.

Usually.

Moreover, in an HE setting rules are mostly about process, not substance, so adaptations will tend to focus on broad frameworks rather than on much more invasive details. You might be told who can run a seminar, but you very likely aren’t told what has to happen (or not happen) in that seminar. Or even what a seminar is.

As any of you with exposure to legal training will know, rules are always incomplete, so think about what isn’t said as much as what is: it’s the gaps that offer the opportunities.

And this is the second category: doing stuff where there are no rules.

When I set up my negotiation module, many years back, there was very little guidance from the regulations, because they were blind to the format of sessions. In so far as the regulations were there, they set expectations about how I lectured and how I assessed. The former simply didn’t apply, because there were no lectures, while the latter acted as a starting point for getting creative with my assessment. In the end, I used that to anchor a sound pedagogic model of self-reflection within a ‘conventional’ assessment regime. I was happy, my institution was happy and my students got a strong incentive to work towards the learning objectives I’d written. Everyone’s happy.

Of course, at some point rules change (see above) and I’ve had to evolve my course most years to accommodate this thing or that. We’re now quite some distance from where we began, but I still get to exercise a considerable degree of freedom, while also meeting my institutional obligations.

Of course, this can all happen at a much more prosaic level: the number of students who take your class is largely out of your control, so you have to adapt (sometimes majorly so, as I’ve discovered). Likewise, the number of students who turn up for the class, or who have prepared, is a variable that you work around.

If you think of these as just variants on the rule problem, then you can start to see how you can work with the other rules in your life.

Until they change, of course.

My Students Don’t Read: Responses to a Classic Classroom Problem

All experienced instructors have had this happen to them: You assign an interesting reading that is pivotal to a topic on the syllabus. You emphasize to the students how important it is that they complete that particular reading, as it will be the basis of the next class session’s discussion. Walking into class, you smile, anticipating a smart, informed discussion on a fascinating topic, and ask a basic question to get things going. And then, the silence, and the signs: the blank stares, the eyes that won’t meet yours, the walls and shoes and notebooks that suddenly are the most interesting things in the room. Your smile drops as you realize the horrible truth: none of the students did the reading.

Quickly you realize it’s not entirely true: a small handful of students, the ones you can always rely on, tentatively raise their hands. Others may have skimmed the reading, or tried to do it just as class started. Still others pull it out as you ask the question, trying to do in 30 seconds what they need a concentrated 10 or 30 minutes to do. Despite this, the vast majority of the class simply did not do as instructed.

What’s the dedicated instructor to do?

I have been teaching for more than ten years, and this happened to me twice this semester alone. In one case, only one student in my intro to IR class had read Thucydides’ short Melian Dialogue that IR teachers the world over use as an introduction to Realism – even though they had weekly reading quizzes on the material. In my intro to American politics course, none of them had read Federalist Paper #84, which outlines the arguments regarding the inclusion of a bill of rights in the Constitution. In the moment when I realized that my students were not prepared for the reading-based discussion I had planned, I had a decision to make: how would I respond to their lack of preparation?

A few options immediately came to mind.


Making alignment work

Still not…

I’ve just been helping a young child who lives in my house with their French homework, practising sentences for a test that’s coming up. I imagine that many of you will have done the same, either in the parent role or the child role (or both, for that matter).

For me, it was a pointed demonstration of the perils of alignment in teaching. The child is going to be tested on their ability to write out a series of sentences, so is focused entirely on that. Thus, when I ask them to read out the sentences, I get something that even I know isn’t good pronunciation: ‘magasins’ is remembered as ‘being like mega, but maga, and then sins’.

In short, this child, like pretty much every learner, is learning to the incentives that are provided: if the teacher isn’t going to be bothered about the speaking, then why should the child?

Evaluating module evaluations

As for many of you, January is the time when students’ evaluations of your autumn courses and modules come in. It might also be the time when you have exciting conversations with line managers.

I think that I’ve laid out my view on such evaluations over the years – managerialist and often mis-directed questions – but perhaps it’s useful to think about how you can make the most of the information they provide.

As so often, three ideas to frame all of this.

The first is that course evaluations are useful, if properly contextualised. That means using them together with all the other feedback you get from students, plus your own reflection. I like using the ABC method for more constructive student input, but there are also all those chats you have with students, plus their assessed work: if no-one seems to understand concept X, or confuses A and B, then maybe you’re not presenting things very well to them. The key point here is triangulation: does a piece of evidence have support elsewhere?

The second idea is that you have to engage properly with the evaluations and the reflection. I, probably like you, have been known to skim through the comments, find the thing that is obviously ridiculous and use that to roll my eyes about the whole exercise. As political scientists, we should know that just because people sometimes say and do silly things doesn’t mean that they are silly, or that everyone is silly. Instead, we need to understand why they say these things and how we might respond.

Of course, this is a bit tricky, especially when evaluations are anonymous and asynchronous to the class activities. Hence the importance of running your own evaluations throughout your contact time. Often, the source of the frustration is that you feel you’ve done something and the student hasn’t recognised it: this autumn, I laid on much more support for my assessment than before, only to read one student’s comment that even more was needed. The point should be that I need to think about how I communicate what I provide more clearly next time, rather than trying to track down this year’s lot and justify myself.

And this is the third point. Course evaluations are not meant to be character assassinations and – in the very large majority of cases – are not used as such by students. Much more common, in my experience, are staff taking comments as personal attacks.

Just as evaluations are about the students’ experience of the course, rather than about the student themself, so too should you treat them as about the specific instance of the course, rather than about you.

There’s the old teacher-training trope – which is actually very useful – that says people go through three stages in their teaching practice: they start by thinking everything’s about them (as teachers), then think it’s all about the students, and finally realise that it’s about the specific instance of interaction between them and the students. And so it is here.

One of the things we keep on returning to here at ALPSBlog is the idea that there is no one right way of doing things, only a series of choices that you can explore with your students. That requires self-awareness and self-criticality, underpinned by a sense that things will never be completely ‘right’ in any lasting sense.

Course evaluations might be flawed, but that doesn’t mean they’re not useful. But it also doesn’t mean that they are the be-all and end-all.

Thoughts from the 2016 EuroTLC, Brussels

This guest post comes courtesy of Johan Adriaensen (Maastricht) & Carola Betzold (Antwerp).

Higher education often engenders a dual ambition. Upon graduation, we expect students to be prepared for the professional labour market. At the same time, we expect them to have developed academic qualities such as a critical mind-set, an understanding of scientific research and an inquisitive attitude. Reading between the lines, it is not hard to see that many of these ambitions do not revolve around students’ acquisition of knowledge but rather around the mastery of particular skills and attitudes. While there is a lot of literature available on innovative teaching methods to promote the learning of skills and attitudes, we wondered whether the standard methods of evaluation (exams, written assignments) are adequate to assess a student’s mastery of these important skills and – in turn – signal their accomplishment to any future employer.

But what exactly are these skills that students should learn, what are different ways of evaluating these skills, and how could we help students showcase their skills to the outside world? To address these questions, we organised a session on “Student evaluation and student portfolios” at the recent EUROTLC conference in Brussels. Using a World Café format, participants first identified generic skills students should acquire over the course of studies and then turned to different forms of evaluation of these skills. Finally, the discussion centred on student portfolios as one tool to enable graduates to present their skills to future employers.

So what are the skills we should teach and students should learn? The list is long: being able to communicate clearly via written as well as spoken word. Organising, prioritising and filtering information. Acquiring an inquisitive mind and becoming a life-long learner. Interestingly, these skills were quite generic to university education; it was much harder to identify skills unique to political science, international relations or European studies. Yet, the relative importance of the identified skills – and thus their prominence in the curricula – is likely to differ.

How can we assess these different skills? Is there more than essays and exams to evaluate students? Does our examination privilege certain skills or types of learners, and if so, how could we change this? Participants agreed that the evaluation of skills and attitudes requires a slightly different approach, and offered a range of examples of how they or their institutions provide feedback and evaluate students. One participant, for instance, described how he has a “menu” of tasks that students must, or can, do to obtain points in his class. Some elements are mandatory, but most are voluntary. Students can thus select a format that suits them: you may want to write an essay, but you could also do a presentation or take an oral exam. Another participant presented how they use peer review to obtain feedback on group work, whereby all group members at certain points rate themselves and their peers on specific criteria such as creativity, reliability or punctuality. These open ratings are then discussed within the group: why did you give or receive this rating? What do you take away from this? This peer review system worked very well, but did not influence the final mark.

But how much does such a mark really say about skills to a potential employer? How could students provide evidence of their skills beyond a numerical mark on an abstractly named course? To this end, we proposed the use of a portfolio. We viewed this portfolio as a sort of repository of students’ achievements and activities. The question then was: how can we, as academic staff, help students build up this evidence into a student portfolio? Ideas ranged from specific written assignments such as position papers, speeches, articles in student journals or opinion editorials to participation in simulations and student debates. Branding and badging are an important aspect of ensuring recognition of students’ accomplishments. Competitions or the award of prizes are but one example of how these can feed into a portfolio. With such a repository, students have concrete examples they can refer to in cover letters or job interviews to plausibly show what they can do.

Ultimately, time was too short for our discussions to come to a conclusion. Still, we were left with the impression that our exercise is useful for many educational programmes. Clearly, each programme is likely to prioritise different skills, requiring a different evaluation practice and offering alternative opportunities to develop a student’s portfolio. As in our World Café, the choices ultimately made were contingent on the participants around the table. Identifying the required skills and tailoring one’s programme to them is a collective endeavour of all the teaching staff involved.