Got an interesting classroom exercise, project, or experience that you’d like to share? We want to publish it. Submit a draft of a blog post in an email to firstname.lastname@example.org for the editors to review. Guidelines are on the About page.
A very short one today, as I’m struggling with a pile of stuff that I’m not sure I understand.
While it’s great that I get to do things I wouldn’t otherwise have been able to, Brexit has also meant I get asked to explain things that are either at the edge of my knowledge, or so novel that no-one’s considered them before.
You might have this in your classroom sometimes – I know I still do – so a couple of thoughts on how I handle it.
Firstly, work from what you know.
Nothing is so out-there that it doesn’t touch on something that’s much more settled, so build your conceptual bridge out from that. It not only gives you something more solid to work with, but often it’s where those involved are working from too.
Secondly, consider the range of options.
Politics is great to study because of its uncertainty, but that usually works within a bounded set of pathways. The more you can work through what that set might include, the better you can evaluate how actors might choose among them.
And thirdly, don’t be afraid to say you don’t know.
No-one knows everything, and sometimes it’s either too early to tell or too uncertain to guess. Park it, and say what would be a marker of things changing in a way you could detect, so that your audience is left with some tools, even if they don’t get the answer there and then.
Right, back to the world of UK Parliamentary procedure.
A follow-up about asking students why they do what they do . . . For the second stage of this data-gathering exercise, I had students use Post-its to anonymously answer three questions at the beginning of class:
- How are you feeling right now? (the one-word check-in)
- Why are you feeling what you’re feeling?
- Why did you come to class today?
Nineteen out of twenty-three students, or more than eighty percent, reported feeling bad — the same proportion as last time. Of the nineteen, ten referenced being tired while four wrote “stressed.” Only one wrote “hungry.” The overwhelming majority of this group attributed their feelings to too little sleep and too much work.
The other four students felt “happy,” “good,” “relaxed,” and “chill.” Three of these students attributed their feelings to having had time to eat, buy coffee, or otherwise get ready before class. One of them mentioned sleeping comfortably, while another wrote “not super-stressed . . . trying to stay calm for the day ahead.”
I sorted answers to the third question into a few different categories, which are shown below, along with their frequencies. A few students’ comments fell into more than one category.
- I had to; attendance is mandatory: 7
- Get a good grade: 5
- I am paying for the course: 3
- Learn something: 3
- Participate in discussion: 1
- Collaborate with teammates on an upcoming assignment: 3
- Miscellaneous reasons — “My roommate told me I couldn’t skip,” “I was awake so I figured why not,” “Because I didn’t go to the last one,” “I try to go to all of my classes,” “Didn’t want to miss anything,” “To avoid falling behind”: 6
In sum, only seven students, or thirty percent, indicated that they had been intrinsically motivated to attend class that day; i.e., they came to learn or participate in a learning-oriented activity. More than half of the students indicated that they were extrinsically motivated by the fear that their grades would be harmed if they did not attend. What I think is interesting here: I do not penalize students for being absent from class — I regard them as legal adults, free to suffer the natural consequences of their actions. I do not grade on attendance or class participation. Only students’ written work, submitted before class, gets assessed.
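As a side note, the tallying itself is easy to script if you repeat the exercise with a bigger class. Here is a minimal sketch (the category labels are my own shorthand, not something students wrote) that reproduces the arithmetic above:

```python
from collections import Counter

# Reconstruct the tally: one entry per categorized response.
# A student's comment may fall into more than one category,
# which is why the entries sum to more than 23.
responses = (
    ["mandatory"] * 7 + ["grade"] * 5 + ["paying"] * 3 + ["learn"] * 3
    + ["participate"] * 1 + ["collaborate"] * 3 + ["misc"] * 6
)
counts = Counter(responses)

n_students = 23
# Learning-oriented (intrinsic) reasons: learn, participate, collaborate.
intrinsic = counts["learn"] + counts["participate"] + counts["collaborate"]
print(f"{intrinsic} of {n_students} students ({intrinsic / n_students:.0%}) "
      "gave a learning-oriented reason")
```

The same `Counter` can absorb multi-category comments directly if you log each student’s answer as a list of labels.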
This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Maxine David.
In her chapter Life after academia: preparing students for successful collaboration, Kovačević talks us through her 2017 experiences teaching a course on EU Enlargement at the University of Economics in Bratislava (EUBA). We are first given a little insight into teaching practices at EUBA and into Slovak Higher Education legislation, before moving on to the detail regarding the problems she felt needed remedying, the method she employed, the rationale for it and expected results. Data collection and results are then discussed, albeit the latter more comprehensively than the former. The chapter ends with Kovačević’s reflections on the application of group presentations and the challenges involved in devising a reliable research design to generate data on applied teaching methods.
Many academics who have been teaching for a good number of years will recognise their own early teaching days in the experiences Kovačević describes. At one level, this is rather depressing; I, for instance, have assessed students through group presentations for a decade and more. Even at the beginning, I did not consider it as terribly innovative given it was what I had encountered on my own Bachelor’s degree back in the 1990s. The chapter therefore raised questions for me about what might really be called innovative. In turn, that suggests the real value in Kovačević’s chapter: first, that it adds to our knowledge of other contexts; second that it highlights the wider failure of many academics to engage sufficiently in an exchange of pedagogical knowledge and practice.
Kovačević is therefore to be commended for the degree to which she has problematised the learning process, thinking about impediments to learning and how to overcome them at this early stage in her career. She grounds her thinking in the literature explaining the benefits of group work and presentations, especially in respect of developing transferable skills and enhancing employability. Based on that literature and her prior experience in teaching the course, she comes up with three hypotheses. The first of these is somewhat unclear: “The innovation—i.e. group work—takes place in a classroom environment that is supportive of learning via collaboration”. Does she mean that she is hypothesising groupwork is innovative, that groupwork is learning by collaboration, or something else? The other two hypotheses are clearer: students’ interest in the subject matter will be increased as a result of the process of creating a group presentation; and there will be a noticeable increase in subject-specific knowledge, as well as related skills.
Again, for many teaching in environments that regularly apply such methods, these will be self-evident: as Kovačević herself acknowledges, the benefits of student-centred learning are already well-recognised. Clearly, however, whether as a whole or just in pockets, methods that put the student at the centre of the learning process are not the norm for EUBA (and many other institutions).
In her course, working with seminar groups of around 13 students, Kovačević began by having students collaborate to produce a poster on Turkey in the EU’s enlargement process before moving on to the creation of a PowerPoint presentation. She is keen to point out the support that was offered throughout the process, including instructor and peer feedback. It is a shame, incidentally, that we did not hear more about this peer feedback, a notably tricky area (see, for instance: Liu and Carless).
The chapter is weakest on talking us through definitions and in the section on data collection and methods, though the latter aspects are partly addressed in the conclusions. On definitions, it is not entirely clear what is meant by “presentation”. Presumably, it is confined to a PowerPoint presentation but it could be more (role plays) or less (students acting as rapporteurs) extensively construed. The question is an important one for those thinking about adaptations to the method Kovačević applies.
Questions about measurement and comparison are also insufficiently considered. For example, the third hypothesis (“Student learning, including knowledge and skills after collaborative group work, is noticeable”) raises questions about how levels of learning can be measured, and compared to what. If we accept group presentations as innovative, we must accept also that others need to be persuaded of the relative benefits of such innovative teaching, otherwise, why change? As such, the persuasive potential of the chapter is reduced. Methodologically, it would have been useful to know how Kovačević recorded and evaluated the “student activity and behaviour” she observed.
Notwithstanding the number of unanswered questions, the chapter is an important one. It functions as a reminder that there is still much to be done to convince others of the benefits of changing ways of thinking and doing because innovation is not contagious. It is a reminder too that rigorous and reliable evidence is sometimes difficult to generate and that without that, it becomes all the more difficult to overcome resistance to change. Finally, the chapter is important because it raises implicitly the question of whose responsibility it is to bring about change. Should it be incumbent on young scholars, under pressure in so many other ways, to undertake all this work? I think we all know the answer to that question; what we are doing to address it is another matter.
This is a review of “Enhancing formative assessment as the way of boosting students’ performance and achieving learning outcomes.” Chapter 8 of Early Career Academics’ Reflections on Learning to Teach in Central Europe, by Nikita Minin, Masaryk University.
Nikita Minin of Masaryk University is motivated by a goal we can all appreciate: ensuring that his students achieve the learning outcomes of his course. In his case, the course is a graduate seminar on theories of IR and energy security and the learning outcomes include improving student skills in critical thinking and writing. He noticed that students in his class did not seem to really improve on these skills during the class, and introduced three teaching interventions in an attempt to fix this.
First, Minin provided more intense instruction on the writing assignments at the start of the course, providing a grading rubric and examples of successful student work. Second, he gave students audio rather than written feedback on their papers. Finally, using a sequential assessment system, the instructor gave formative feedback first and grades much later in the course. Minin assessed the impact of these three interventions, comparing course sections with and without them, and concluded that the first two interventions achieved the objective of improving student achievement of the learning outcomes.
The interventions described in the chapter are in line with current thinking regarding in-course assessment. While Minin does not use the language of transparent teaching, his first intervention falls exactly in line with the approach of the Transparency in Learning and Teaching (TILT) project. Transparency calls on instructors to openly communicate the purpose of an assignment, the tasks students are to complete, and the criteria for success, and Minin does exactly that in this first intervention. Given the data so far on the TILT project, it is not surprising that Minin saw some success by taking this approach. Likewise, now-ubiquitous learning management systems allow for giving feedback in multiple formats, including audio and video. For years now, advocates of audio-based feedback have claimed that it can be a more effective tool than written feedback. Minin’s observations, therefore, also fit nicely in line with existing work.
Where the chapter falls short, then, is not in the design of its interventions, but in the claims made based on the available data. The sample sizes are tiny, with just five students receiving the interventions. With final grades used as the primary dependent variable, it is difficult to tease out the independent impact of each of the three changes. Using final grades is also an issue when the experimenter is also the person who assigns grades, as it is more difficult to avoid bias than when more objective or blind items are used. Lang’s (2016) book Small Teaching: Everyday Lessons from the Science of Learning tells us that engaging in self-reflection is itself an intervention, and Minin’s use of minute-paper-style self-reflections to assess the impact of feedback, while an interesting and potentially useful idea in itself, means that a fourth intervention was used in the course. While I do not doubt Minin’s observations that his interventions had a positive impact, as they are backed by existing research, the evidence in the chapter does not strongly advance our confidence in those findings.
However, I have never been one to dismiss good teaching ideas simply because of a lack of strong evidence from a particular instructor. Minin highlights a crucial concern—that we should never assume that our courses are teaching what we intend them to teach, and that ‘time and effort’ do not necessarily achieve the desired results, even for graduate students. Reflecting on this, seeking out innovative solutions, and then assessing the impact is a process we should all be following, and Minin sets a great example.
My first-year module this semester has been a real training ground for me. Not only am I going all-in on flipping, but I’m also trialling the new assessment software that the University is thinking of using.
By extension, that also means it’s a training ground for my students, something that I’ve been very open about with them.
The flipping seems to be working and I’ll be writing up my thoughts on that later in the semester, but having come through the first use of the software I need to make some decisions now.
In part, my situation arises from wanting to push how we used the software past a conventional approach. Not only did students submit a literature review to it, but they then had to review someone else’s using the system, all in aid of a final piece of self-reflection (which we’re marking now).
Using the marking function is a bit more involved than just submitting work and a couple of people did get a bit lost on that. But the bigger problem was that not everyone submitted work.
In the good old days (i.e. last year and before) we did all this in-class, so it was much simpler to cover (the exceptionally few) missing pieces. However, because we’d pre-selected peer reviewers, we ended up with some students having nothing to review and others not getting their work reviewed.
That’s a failing on my part: next time, I’d leave allocation until after the first submission was in, so everyone who submitted got allocated and reviewed.
But that’s next time. What about now?
Already, I’ve indicated to everyone that not getting peer feedback won’t count against them in marking, but a couple of students have felt that absent such comments they’re not in a position to complete the self-reflection.
To that, I’ve had to underline that it’s self-reflection, so peer feedback was only ever one component of that: indeed, the whole purpose of the somewhat-convoluted exercise is to get students becoming more independent and critical about their learning.
All that said, peer review was added in here to help prompt everyone to think more about what they’ve done and what they could do.
As we sit down to mark, the question will be how much we can, and should, take the circumstances into account. Until we’ve seen the full range of work, that’s going to be a tricky call to make.
However, it all highlights an important point in such situations: do we have fall-backs?
Trying new things is inherently risky – that’s why many colleagues stick with what they know – but with some risk management, that need not be a barrier to moving practice forward.
Annoying though our situation here is, it’s not fatally compromising to the endeavour: we know who’s affected and how; they’re still able to submit work; and the assessment is relatively small in the overall scheme of things.
Yes, we’ll be using the system again for the final exam, but without the aspects that have proved problematic. Indeed, the exam has already been trialled elsewhere in the University, so that’s well-understood.
So, on balance, I feel comfortable that we can manage the situation and implement the necessary changes next time around to remove the problems identified.
Which is, of course, a big part of the reason for trying it out in the first place.
An important component of both statistical and information literacy is the ability to recognize the difference between correlation and causation. Teaching this skill is made even more difficult by cognitive biases that lead to errors in probabilistic thinking.* So I decided to hit my students over the head with Chapter 4 from Charles Wheelan’s Naked Statistics and, from Tyler Vigen’s Spurious Correlations website, an image of the 99.26% correlation between the divorce rate in Maine and margarine consumption.
The assignment asked students to submit a written response to this question:
Why are these two variables so highly correlated? Does divorce cause margarine consumption or does margarine consumption cause divorce? Why?
All the students who completed the assignment answered the question correctly: neither one causes the other. In class, students identified several possible intervening variables, including:
- People eat margarine and margarine-laced products as an emotional comfort food when relationships end.
- Divorce leads to a greater number of households, with each household purchasing its own tub of margarine.
Students’ ideas led in turn to a discussion of how to appropriately measure these variables and construct new hypotheses.
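For instructors who want students to see the machinery as well as the punchline, the coincidence is easy to reproduce in a few lines. The sketch below uses invented figures, not Vigen’s actual data, to show how two series that merely trend together over the same years yield a near-perfect Pearson correlation:

```python
# Two series that simply decline over the same period will correlate
# strongly even though neither causes the other.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Illustrative (made-up) annual figures, both drifting downward together.
divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1]  # per 1,000 people
margarine = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]     # lbs per capita

print(round(pearson_r(divorce_rate, margarine), 4))
```

Swapping in any two slowly declining series produces much the same r, which is the whole point of Vigen’s site: shared trend, not shared cause.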
*An excellent overview of this topic is Jack A. Hope and Ivan W. Kelly, “Common Difficulties with Probabilistic Reasoning,” The Mathematics Teacher 76, 8 (November 1983): 565-570.
One of the many fascinating aspects of my Erasmus study abroad year in Bonn was that the town was then undergoing a major change: following reunification, the capital was being moved to Berlin, necessitating a multi-million DM programme of construction, re-construction and general upheaval.
Right now, I feel a bit like I’m facing my own Umzug (relocation), not least because while the Germans were moving only the once, I’ve got two moves ahead of me in the next 9 months or so.
The reason is the usual one for a university: juggling spaces against changing needs is a constant for most colleagues, and the biggest wonder of it all is that our Department hasn’t moved in its 15-year history.
That’s great, but it means that now we are moving, there’s a problem: what to take?
Usually, this isn’t the kind of thing I’d bother you with, but because we’re doing it twice, there’s an additional constraint: I can take only three packing crates to the interim location, with the rest going into proper storage.
Three crates? Not so bad, maybe.
Welcome to my office.
I did actually get moved some years back when I was doing my Associate Dean role, and I used about 30 crates, and even then got told off for over-packing them.
So you see my dilemma: three crates to take the stuff I’ll actually use between May and January.
Of course, a lot of stuff doesn’t get used very much: I’ll admit that I cleared out a couple of shelves directly upon hearing the news, as our local book harvest is coming by next week: old textbooks might have some historiographical value, but realistically they’re not a burning priority.
Which leaves me with still 30-odd crates-worth of choices.
The priority seems to me to be around teaching: my research is mostly based around materials I can transport virtually, plus we’re next door to the library (or rather, we are now: not when we move). But teaching needs me to have access immediately to some texts and to teaching aids.
That means trying to map out what my classes might look like in the first semester of the next academic year, plus the second semester just in case we get delayed (my Brexit research on contingency planning coming in there).
Put like that, the problem suddenly becomes much more manageable: it’s now a matter of boxes of Lego, blindfolds, whiteboard markers and post-it notes rather than my extensive collection of notes on Danish euroscepticism in the late 1990s (two crates-worth, last time I looked).
As an exercise, I’m actually finding it rather cathartic: it feels like an extension of much of the rest of my work experience (I’m writing this on a crowded train heading into London, laptop balanced on my knees, for example). And if I really need something I’ve packed away, they say it can be retrieved, so the peril is relatively low.
Of course, I’m not going to suggest that you move just for the sake of it, but it is good to occasionally ask yourself whether you need all that stuff you’ve piled up. Asking yourself what you really need is a good question at any time, not least because it invites you to focus on the core of what you do.
So it’s actually all good.
Except for the plants, which are going to be a very different matter…
Another post about the methods course that I’m now teaching. Chapter 3 of Naked Statistics is about deceptive description. So here is the accompanying assignment . . .
Many high school seniors are interested in attending Southwest America State University for college. Before 2015, applicants to this university had to submit high school transcripts that included average GPA scores, SAT scores, and an essay. In 2015, the application process changed: applicants had to submit high school transcripts with average GPA scores and two essays, while submission of SAT scores became optional. In 2019, the university claimed that the academic quality of its students had increased since 2011, given this pattern in the average SAT score of each year’s incoming class:
- 2011 – 990
- 2012 – 1130
- 2013 – 1090
- 2014 – 1150
- 2015 – 1160
- 2016 – 1185
- 2017 – 1170
- 2018 – 1190
Is this claim deceptive? Why?
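One way to make the deception concrete in code (all numbers here are invented; nothing comes from the assignment itself) is to simulate how a reported average rises purely through self-selection once score submission becomes optional:

```python
import random

random.seed(1)

def cohort_scores(n=500):
    """A hypothetical applicant pool: SAT-like scores clustered around 1050.
    Underlying ability is the same every year."""
    return [min(1600, max(400, int(random.gauss(1050, 150)))) for _ in range(n)]

# Before 2015: everyone must submit, so the reported mean covers the whole class.
mandatory = cohort_scores()
mean_mandatory = sum(mandatory) / len(mandatory)

# After 2015: submission is optional, so (in this toy model) only applicants
# who scored well choose to report.
optional = [s for s in cohort_scores() if s >= 1100]
mean_optional = sum(optional) / len(optional)

print(round(mean_mandatory), round(mean_optional))
```

Both cohorts are drawn from the same distribution, yet the post-2015 average describes only the self-selected submitters, which is exactly the trap the assignment asks students to spot.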
A few years back I wrote about The Lego Movie and how it captured the operation of fascism in a form that was both accessible and about as enjoyable as fascism ever can be.
Some readers of this blog were unhappy that I’d ruined that film for them, so it’s only right I try to do that for another generation of scholars and young parents.
The Lego Movie 2 isn’t as good as the first one (confine discussion of whether this is ever possible to yourself, away from me), but obviously I watched it, because, well, Lego.
This aside, there’s still plenty of politics going on in the film, which I’ll write about now with some mild spoilers (because you’re not 9 years old and because you’re a serious scholar).
For the IR types among you, there’s a whole bunch of realism going on, with security dilemmas, anarchy and the brutishness that this engenders in actors. The collapse of the (ultimately benign) dictatorship in the first movie produces a literal and metaphorical wasteland in which actions are guarded and security is everything. The final reconciliation of the actors this time occurs when they identify a common external threat and work together to overcome it.
As a musing on power in its various forms, the movie offers a useful way to conceptualise how actors operate under uncertainty and the tension between collaborative and conflictual action, as captured in the notions of being a ‘master builder’ or a ‘master smasher’.
For the gender scholars, there’s a bunch of gendered roles, paternalism (and maternalism), as well as how children ‘become’ adults going on in all this too. It’s also a classic of the ‘absent father’ trope and all that implies.
But for my purposes, the film is all about constructivism.
The logic of appropriateness runs through the entire piece, as individuals strive and struggle to either fit into their environment or communicate their intentions.
If Emmet’s arc is one of finding a persona and an attitude that works for him (puberty alert!), then Bianca’s is one of learning to signal intentions less ambiguously.
As someone who teaches negotiation, I recognised a lot of these tensions from my classroom, where students are apt to adopt personae as ‘negotiators’ that don’t always sit comfortably with their more general sense of being.
In both the classroom and the film, individuals try out different gambits, with varying degrees of sincerity and of success. But ultimately, as the film suggests, it is when there is a more open exchange of views that progress is made, clearing up the confusions and misunderstandings and realigning how we view others’ actions.
Indeed, the whole film turns on how individuals perceive one another and themselves: Finn misunderstands Bianca; Emmet is misled by Rex; Lucy struggles (as in the first film) with her sense of identity; Batman has to learn about living with light as well as dark; and Superman has to find accommodation with the Green Lantern.
Identity here is thus not purely about being true to yourself, but also about being true to others. Whether you accept that by doing both you end up with a happy society – as the movie argues (as movies are wont to do) – is another matter.
But however you take it, this case highlights how we can use cultural products to illustrate and illuminate our teaching: the beauty of politics is that it is pervasive, so we can find it pretty much anywhere we look, if we choose to see.
Sometimes the best way to find out why students do what they do is to ask them.
During a recent lunchtime conversation with a colleague, I learned about the “one-word check-in” — asking students to each describe, with a single adjective, how they felt at that moment. I decided to incorporate this into a data collection exercise that I hoped would demonstrate one benefit of taking notes in class — a problem for which I still haven’t figured out a solution.
My hypothesis: students who took notes — a more cognitively engaging activity than just listening — would be more likely to feel better by the end of class.
I collected data in my course on globalization, which meets twice a week in seventy-five minute sessions from 9:30 a.m. to 10:45 a.m. The class, when everyone attends, has only twenty-five students, so my results are not statistically significant.
As students were entering the classroom and settling into their chairs, I gave each person three Post-it notes, along with a playing card dealt from a stacked deck (more on this further down). I told everyone to mark their Post-it notes with the suit and number of the playing card each had received. This allowed me to sort the Post-its by individual student afterward. Students should also number each Post-it with a 1, 2, or 3, to simplify keeping them in the correct sequence after class. I didn’t think of this at the time, but luckily I kept each pile of Post-it notes separate after they were collected.
- At the beginning of class, students wrote a one-word check-in on Post-it #1.
- After the discussion of that day’s reading response, students wrote on Post-it #2 answers to “Have I written any notes during today’s class?” and “Why?”
- Students then clustered into teams to discuss plans for an upcoming project assignment. Note that this introduces a methodological flaw in my research design, but it turned out to be irrelevant.
- At the end of class, students wrote a one-word check-out on Post-it #3.
A different randomly-selected student collected each set of Post-it notes after students had finished writing on them, which he or she placed face down on a table. The goal here was to make it obvious that I was trying to preserve the anonymity of students’ responses. However, I had dealt cards from a stacked deck (low value cards on the bottom) so that I could identify which responses were from men and which were from women — because I expected that women would be more likely to take notes.
Now for the results. Out of 23 students who were in class that day . . .