Got an interesting classroom exercise, project, or experience that you’d like to share? We want to publish it. Submit a draft of a blog post in an email to email@example.com for the editors to review. Guidelines are on the About page.
This weekend I caught up with an old friend. He works for a software company, overseeing the sales team.
Recently, he’s been doing some work with occupational psychologists, to get a better handle on the team’s stress levels. He told me about all this over a cuppa, including the SCARF model, which I’d not heard of.
SCARF is a diagnostic framework for identifying sources of stress, where individuals encounter challenges to their Status, Certainty, Autonomy, Relatedness (being part of the group) and Fairness.
There’s a useful summary (and diagram) here.
Listening to my friend describe how this plays out for his team (status is the big thing, apparently), I was struck by how the model applies in an educational context.
For example, one of the reasons why assessment is so stressful is that it hits most of these areas: students might feel success brings status with teaching staff, it’s relatively uncertain, it’s out of their control, and it’s not necessarily a fair way to judge achievement. The gain of a shared experience with other students pales next to all this.
Clearly, there are general lessons about student welfare to be picked up from this model, but it’s also useful to consider how it relates to active learning.
In traditional, transmission-centred approaches, life might appear to be relatively stress-free: most of the time you sit there, soaking up material, with occasional bouts of panic at assessment time.
By contrast, active learning might be more challenging.
The biggest issue is likely to be the increased requirement for autonomy: active learning requires participation and the production of contributions on a rolling basis. This front-loads demands on students at a point where they might feel they know relatively little, raising issues of status (you want to look good in front of friends) and relatedness (you don’t want to be marginalised in the group if you fail).
Similarly, the relative absence of the instructor means students have to self-regulate more than usual, so fairness might become more of a factor than in a situation where fairness gets imposed from above.
And it’s also worth highlighting that the model points to active learning being more stressful for teaching staff too, with lower status, higher uncertainty and a big hit to autonomy: no longer is everyone doing just what you want of them.
Despite this, I think that active learning’s benefits outweigh these costs.
Firstly, precisely because students are brought actively into the process from the start, they have much more time to prepare themselves for any summative assessment, both in terms of having to consider materials and of practising producing ideas. The stress is spread out, rather than concentrated at the back end.
But equally, if stress is managed properly, it also comes with raised engagement. If we are making our active learning spaces safe (as we always should be), then we are offering students both the opportunity and the tools to manage stress better, which points them not only to thinking more about the matter in hand, but also to how to deal with other sources of stress in their lives.
We’re helping our students to learn about the world and how to engage with it. That means skills matter at least as much as substantive knowledge. And handling stress is one of those skills. Yes, active learning is more stressful for all involved, but the benefits that flow from that are ones that might serve us all well.
To illustrate the dilemma I presented in my last post: the possible devolution of the final exam for one of my courses.
My Fall 2018 exam was an attempt at an authentic writing exercise, but students had to choose one of two positions and use course readings. This meant I supplied the available arguments and evidence, and the exam was actually an assessment of rhetorical skill. Students didn’t demonstrate an ability to use concepts that I thought were crucial for the arguments they had developed.
For the final exam in Fall 2019, I ended up giving students a choice of arguments — “basis for aid policy to Egypt” and “plan for Louisiana’s future” — and I added this to the instructions for the exam:
Apply relevant concepts like discounting the future, moral hazard, etc.
Students still had to select one of two predetermined positions regardless of the argument chosen, and again I specified the pool of evidence they could draw from. And students still didn’t demonstrate knowledge of concepts listed in the exam’s instructions.
What next? I could have a final exam that asks students to, for example, “make an evidence-based determination of whether moral hazard exists in a location affected by climate change.” But this type of exam prompt might introduce even more problems.
It is the most basic of tropes about learning. It’s tricky to start with, but once you’ve learnt to ride a bike, you never forget.
Writing as someone who’s just started their fifth calendar decade of bicycle riding for pleasure, as someone who’s ridden up (and down) mountains, commuted by bike for years in various urban areas, and who’s taught his kids to ride, I’d just like to challenge this.
That’s right: I’ve bought a tandem for a very reasonable sum on eBay.
It’s a bike – it’s got only the two wheels – and it works in just the same way as all of the other bikes I’ve ever owned or ridden.
But it’s also very different. I now have to think about the other person on the machine and what they’re doing and how I will need to communicate what I’m doing.
And even getting past that, how I ride will have to change very markedly too: the brakes – like the bike – are relics of a past age, so assumptions of stopping distances will have to alter radically. As will my memory of indexed gear-shifting at my fingertips.
None of this is going to be helped by the fact that I’m going to keep on riding my other (‘normal’) bikes, so the potential for immersing myself into this is constrained, even if I can rustle up a family member to share this experience with me.
In short, I appear to have acquired a large metaphor for the learning process.
Right now, I’ve had this metaphor for a grand total of two days and I’ve got as far as the end of the road with it, so I’m still at the stage of not even being particularly sure what it is that I will need to learn.
I’m especially worried about doing that learning with a loved one right there, learning too: the various scars on my body are testament to my periodic efforts to understand the limits of what I can do on a bike.
Much as I could go back to that other great saying about bikes – when you fall off, you’ve just got to jump back on – the sense of responsibility is somewhat constraining. Indeed, much more so than I feel in a classroom when trying something new: at least there the failures don’t result in road rash or broken bones (unless something’s gone extremely wrong).
But I can also tell you that our short trip down the road also reduced us both to tears of laughter, so I already know that this can be an enjoyable process: indeed, that’s why I got the thing in the first place.
So if you find I’ve stopped posting here, then maybe it’s because I’m off having an adventure on a tandem with a loved one. Rather than because we’ve had a crash.
My last post discussed writing as a professional endeavor. Today: writing as a device for learning; i.e., why and how we as teachers assign writing to students.
Generally we present our students with some form of expository writing task. Perhaps we call it thesis-driven, discipline-oriented, argumentative, or research-based. Regardless of the label, there is an assumption of students locating relevant primary data by means of different methods that they understand how to use, evaluating the data in an appropriate manner while being aware of their own assumptions, reaching some conclusion, and effectively communicating all of this to an audience.
That’s the ideal. The reality? Students often don’t know how to find primary data, or which methods are best suited for analyzing it. They may not even know what methods are. They assume there is either one right answer, or that all possible answers are equal, because they don’t understand that some answers can be more strongly supported by data than others while even better answers await discovery in the future.
And so we default to assignments that direct students to preferred secondary or tertiary sources (a “text”), tell them to organize their explanations as competitions between two artificial, diametrically-opposed positions, or, sometimes, encourage them to dredge up arguments that arrive at positions they already favor. Students learn to hang evidence on a predetermined conclusion rather than derive a conclusion from the evidence.
This type of deductive exercise has been used by teachers since the age of the agora to build students’ rhetorical skills. Today, unfortunately, it can produce people with a facile ability to argue any position at any time without veering from a worldview that they hold to be sacrosanct.
So what’s the solution? I don’t really have one. Too few of the students I encounter are willing or able to draw reasonable conclusions from evidence they have independently located, so writing exercises that involve inductive reasoning get chucked out the window. It’s frustrating.
Apart from a dull ache from the thought that the 1990s are now, on average, a quarter-century ago, it’s been a good break.
It’s been especially good to get away from all those social media posts about how much one has changed over the past decade, complete with youthful/haggard profile pics to chart one’s maturation/decline.
The usual thing to think about at this time of year is what you’ll change.
I’ve already had one colleague inform me they’ll be focusing on writing every Friday, even as they undermined it by querying whether it would actually happen.
So I’m going to suggest you try not changing things right now.
Instead, focus on what you do that works. It’s a bit harder than picking out what’s not working, but it’s a more positive starting point.
In practice, it’s what most of us do in any case: building out from successes and chipping away at the problems around the edges. Sure, it’s probably not as likely to treat root causes of those problems, but if the latter are really cramping your style, then we should be having a different conversation.
Part of this is about perspective.
Writing personally, I’ve been thinking a lot recently about how I manage my work and my career, probably as a result of a podcast interview I’ve discussed already.
I’m aware that I’ve got various pressing deadlines right now, and that this post is another element of my avoidance strategy, and I’m aware that I could be doing a whole bunch of other things that would be Good Things To Do too.
But instead of succumbing to the winter blues, I’m trying to pull myself out of it, reminding myself that I am reasonably competent and have handled much worse situations than this before.
It doesn’t mean it’s going to be easy or particularly enjoyable, but the prospect of what is to come cheers me up.
As one trivial example, I made a simple wall-planner for the half-year, listing my various events and talks. Not only does it remind me of More Stuff To Be Done, but it also makes me think about how much I enjoy sharing my work with others.
So, stick your head out the door and notice the days getting longer*, and think on reasons to be cheerful.
* – Sorry to any Southern Hemisphere readers on this one
If you’re like me — a contractual teaching load of seven courses per academic year, plus overloads, committee work, and administrative duties — you tell yourself that you’ll work diligently on those unfinished conference papers and journal manuscripts during the winter holidays. And then life happens, time slips away, and suddenly the spring semester is about to begin.
There are simple tools — which aren’t part of the standard graduate program curriculum, but should be — that can help you become a more productive writer. I’ll mention two.
Stretch & SMART
The stretch goal is your ultimate objective or ambition; the whole project. For example, write a complete draft of a book chapter. SMART is an acronym that describes the actions that need to be taken to reach one’s objective:
- Specific — actions must be defined and discrete, such as create a literature review that will be part of the book chapter.
- Measurable — actions must be countable so that progress can be gauged. Find and read twelve peer-reviewed articles that are relevant to the book chapter. For each article, write a phrase or sentence on, respectively, its methods, findings, and quality.
- Achievable — create the conditions needed to complete the above tasks. Clear morning schedule, turn off email.
- Realistic — ensure that the tasks can actually be accomplished. Don’t go down rabbit holes; on the first day select which journals will be searched, a date range, and other limiting criteria.
- Timeline — establish a schedule with an endpoint. I am devoting one hour each morning to the literature review. If I define my search on Monday, and then subsequently locate four articles per day, then I will have a total of twelve articles by the end of the allotted time on Thursday and can begin writing the literature review on Friday morning.
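The timeline arithmetic above (define the search on Monday, then four articles a day through Thursday) can be sketched as a quick sanity check. This is a toy illustration, not part of the original post; the function name and schedule are my own:

```python
# Toy sketch of the SMART timeline: Monday is for defining the search,
# then four articles are located on each subsequent weekday.
ARTICLES_PER_DAY = 4
SCHEDULE = ["Monday", "Tuesday", "Wednesday", "Thursday"]

def articles_by_end_of(day: str) -> int:
    """Cumulative article count at the end of the given weekday."""
    reading_days = SCHEDULE.index(day)  # Monday itself contributes zero
    return reading_days * ARTICLES_PER_DAY

assert articles_by_end_of("Monday") == 0
assert articles_by_end_of("Thursday") == 12  # ready to write on Friday
```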
There are many definitions of Stretch & SMART; if the one above is unclear, others can be found with a quick internet search.
Front Forty & Back Forty
Front Forty & Back Forty maps the tasks that are part of a project and tracks which of those tasks have been completed. The technique was invented by my colleague and illustrator extraordinaire, Susannah Strong. An explanation is here. Make sure to scroll down to the sample map.
Today we have the second of two posts from Nicholas Wright, of the political science department at University College London. He can be reached at nicholas [dot] wright [at] ucl [dot] ac [dot] uk.
Some key insights from the simulation:
- Talking to colleagues matters. While all students produced negotiation briefs, very few used their briefs as a basis for preliminary discussions with colleagues. Some did seek to build alliances with like-minded partners, but few outside the formal leadership group of HR/VP, Presidency and institutions attempted to build bridges to those they thought would have opposing positions. All recognised the importance of knowing what their partners wanted, but not all acted on it. Those that did felt better prepared for the meeting.
- Respond to the draft agenda. Several students felt that the agenda did not sufficiently reflect the issues. However, even though it was circulated well in advance, none of the member states engaged directly with the HR/VP to discuss whether it could be amended, even though it was deliberately anodyne to enable flexibility.
- Time flies. Everyone felt the pressure of time, especially in the second hour. They all thought they had more time for discussion than they did and did not consider time allocated for the debrief. Despite the Chair encouraging them to move as swiftly as possible to the second question, it was neglected.
- Being heard matters . . . but so does taking part. With any group of students there will be those who are more forthright and vocal, so part of the challenge is to encourage everyone to participate as fully as possible. Ultimately, the time is theirs and this year everyone made at least some contribution. France, Germany and Hungary were all quite active, while Ireland less so. The UK representative struggled to get the attention of the chair, partly because of the table layout, but also because she felt constrained by the impact of Brexit—thereby, wittingly or not, reflecting the reality of these meetings since 2016!
I drew three lessons from the simulation that I can apply to the future:
- Picking a good leadership group matters. This is quite a challenge as roles are assigned early in the term and it is not always clear at that stage who will have the requisite skills to manage the meeting. But this year, I feel my choice was vindicated – the HR/VP was effective and was ably supported by the EEAS and Commission.
- Time management is crucial. This year I deliberately reduced the number of questions to two to allow even more time for discussion and negotiation, but did not anticipate that the discussions would become so dominated by the first question. Next year I will reduce the initial tour de table from 3 to 2 minutes and in the pre-briefing with the HR/VP really emphasise the need to be strict on time. We’ll see if that makes a difference.
- Build on the negotiation component of the module. There is an opportunity to include more training in negotiation prior to the simulation. Adding a short exercise in preceding weeks that deals with specific negotiating scenarios would improve the practical aspects of the module and probably the simulation itself.
Today we have the first of two guest posts from Nicholas Wright, of the political science department at University College London. He can be reached at nicholas [dot] wright [at] ucl [dot] ac [dot] uk.
My favourite class of the autumn term is when we simulate a crisis meeting of the EU’s Foreign Affairs Council (FAC) as part of my ‘EU in the World’ postgraduate module. It’s a great opportunity to turn from (sometimes quite dry) conceptual and theoretical discussions about EU actorness (or lack thereof) and test out in practice some of our assumptions – e.g. about how negotiation and decision-making work in the FAC, how far traditional power attributes bestow advantage, etc. It’s also a great opportunity for the students to take the lead while I remain on the sidelines and observe.
This year, our meeting focused on Turkey’s military deployment in northern Syria. The students had just two and a quarter hours to reach a consensus and draft answers to two questions:
- Should the EU respond any further to the crisis?
- Does the EU need to reset its longer-term relationship with Turkey, particularly regarding Turkey’s path to possible future EU membership?
The outcome of the meeting was interesting. Beyond rejecting any form of EU Common Security and Defence Policy (CSDP) military or civilian mission, the group could not reach consensus on anything. Caught up in discussion of the first question, which I had intended to be resolved swiftly, they had insufficient time to adequately address the second. The resulting diplomatic statement offered little in the way of concrete action while substantive discussion of the EU-Turkey relationship was postponed for a future meeting.
This outcome was initially considered a failure in the subsequent debriefing. But in dissecting what happened, the ‘failure’ highlighted to very good effect the challenges posed by this kind of policy discussion, especially when actors with clear status quo positions seek to prevent a more proactive policy response.
Using the simulation:
The simulation takes place in Week 7. In Week 3 students are informed about the topic, provided with briefing documents, and assigned their roles. The class is capped at twenty students so not all EU member states can be represented. One is assigned the role of HR/VP (High Representative of the Union for Foreign Affairs and Security Policy) who acts as chair, and one each acts as the holder of the rotating presidency, the European External Action Service (EEAS), and the relevant Commission Directorate-General. The remaining students are assigned a member state and wherever possible they represent their home state.
As preparation, students submit a 1,000-word negotiation brief in Week 5. Although the simulation itself is formative, the brief is assessed and must set out the historical role of their actor in relation to the Common Foreign and Security Policy (CFSP), their goals and red lines, and a strategy for how the meeting will be approached. Students may disclose the contents of their briefs to one another, but only after submission.
In Week 6, the HR/VP, in consultation with the rotating presidency, EEAS, and Commission, produces a preliminary agenda for the meeting. From that point, students are actively encouraged to consult with each other up until the simulation starts. To facilitate this, I created a discussion forum on the module’s Moodle page, and this year students also used WhatsApp.
The simulation starts with a brief introduction where I remind them of the ground rules including the time limit. Then the HR/VP takes over the task of facilitating the discussions, beginning with an opening tour de table.
Approximately twenty minutes before the end of the simulation, I introduce a ‘live’ update that is intended to disrupt their deliberations, test their ability to think on their feet, and get them to demonstrate their understanding of their actor’s interests and priorities. In this case it was a Turkish decision to suspend the 2016 refugee agreement with the EU, which resulted in a hastily drafted additional paragraph at the end of students’ conclusions.
We conduct a twenty-five minute debriefing after the simulation. Students consider whether they achieved the goals identified in their original briefs, the reasons why, and what they could have done differently.
The other week, I sat in on a session run by another part of the university for people to show their teaching practice.
One of the people presenting kick-started their slot by asking everyone to write down their definition of ‘education’ on a post-it, which they then used to elaborate on some key themes.
Of course, one person wrote down something that bore little relation to what everyone else had done.
The presenter chose not to get into whether education really is a system of indoctrination, and the person who wrote it down didn’t press them on it, but it did raise a more general question.
How do you cope with stuff coming in from left field?
I’m guessing you’ve had this too: you’re running a session when someone either says something so far off the mark that you worry they’ve totally misconstrued things, or offers up a very radical take.
In both cases, you really need to explore what’s happening, either to offer a corrective or to embrace the new breadth that opens up.
This has really been brought home to me this semester by the new course I’ve been teaching, which involves the use of a lot of critical theory to understand European integration.
It’s a course I inherited from a colleague, who was kind enough to let me use her materials, which I largely retained, because I wanted to challenge myself.
That’s been a really positive experience, both because I’ve had to reconsider how I talk about the material and because I’ve had to learn some new methods in order to teach them.
Fortunately, the use of critical approaches naturally invites the challenging of ideas and assumptions, so the space was very conducive to working with the broad range of ideas present, but it’s something we have to work on, whatever we’re doing.
Personally, I see such moments as opportunities to get students to articulate their thinking and to connect that to what else is happening in the room, which ultimately serves everyone’s learning: if nothing else, if one person is struggling to make sense of a point, then others will most likely be too.
But back to that teaching session.
You’d all recognise that panicked expression as the presenter read out the helpful/unhelpful contribution, and the rapid adjustment of language to note that almost everyone in the room had identified some key themes.
In that case, pursuing the outlier wouldn’t have helped hit the learning objectives, but often it will, because such moments are a chance either to bring the confused into the fold, or to bring new ideas to the group.
Both of those are Good Things to do, so do make the most of them.
And think about who you invite to your teaching sessions.
Not knowing whether one has actually helped students learn is one of the most frustrating aspects of teaching. Assuming an absence of megalomania or the Dunning-Kruger effect, indications that we’ve made a difference are typically quite vague and ambiguous. So I was pleasantly surprised — as in, “hey, maybe students really did benefit from this” — by the results of a knowledge probe that I launched at the beginning and end of the semester in my course on economic development and environmental politics.
The knowledge probe was an ungraded quiz that asked questions about a few basic economic concepts, administered through Google Forms in the first and last weeks of the semester. Results, showing percentage of respondents answering correctly, are below.
Pre-test N = 21; post-test N = 16.

| Diminishing Returns to Capital | 52 | 75 | 44 |
| Common Pool Resource Problem | 48 | 81 | 69 |
Obviously this wasn’t the perfect tool. Sample sizes are too small for statistical significance. And a slightly different proportion of students reported previously taking a university economics course on the pre-test than on the post-test. But the numbers at minimum suggest that students learned something over the semester, which gave me a sense of satisfaction that I otherwise wouldn’t have.
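The caveat about sample size can be illustrated with a quick two-proportion z-test. The counts below are reconstructed from the reported percentages (52% of 21 is roughly 11 correct on the pre-test; 75% of 16 is 12 on the post-test for the first concept), so this is a rough sketch rather than an analysis of the actual response data:

```python
import math

def two_proportion_z(correct1, n1, correct2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = correct1 / n1, correct2 / n2
    pooled = (correct1 + correct2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Diminishing returns to capital: pre-test ~11/21, post-test 12/16
z, p = two_proportion_z(11, 21, 12, 16)
print(f"z = {z:.2f}, p = {p:.2f}")
```

With these reconstructed counts, z comes out around 1.4 and the p-value well above .05, which is consistent with the post’s point: the gains point in the right direction, but the samples are too small to rule out chance.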