This week I got my first email about making arrangements for the autumn semester’s teaching. Luckily, it was for the institution I’m about to leave, rather than the one I’m about to join, so I could put it in my new and exciting email folder: “not my problem”.
But most of us aren’t that fortunate – we’re all on a hamster-wheel of some kind, running to stand still, with future commitments racing towards us alongside a bunch of deadlines.
All of which makes it hard to stop and take stock of our L&T practice.
Of the (very few) benefits of the pandemic, the ‘opportunity’ to reconsider what we do with our students was perhaps completely undermined by the associated factor of ‘you’re getting no cues on what is either possible or allowed’. Fun times.
But we have navigated that huge change, and in many cases produced learning environments that work really well. Just in time to see a possible shift once again, back into the classroom.
If you’d like an institutionalised take on this, you might try the UK’s Office for Students’ recent report Gravity Assist (plus this critique from WonkHE), that essentially argues we should be trying to retain the new good stuff, rather than just going back to the Olden Days.
That’s all nice, at a sector- or institutional-level, but what about you and me, as individuals? How do we go about that?
The issue strikes me as being primarily one of path dependency: you’ve reworked your teaching a lot during this past year, so you probably only want to tinker around the edges, rather than doing a wholesale reworking MkII.
That might be appropriate in some cases, but equally not in others: without the space to devote to some big thinking, it’s hard to tell. Changing jobs is one solution – especially if your new employer doesn’t do teaching in any way like your old one – but it’d be good if we didn’t introduce any more precarity into it all.
Instead, we have to try to keep the matter in hand as much as possible.
It has been striking how the profusion of interest in L&T during 2020 seems to have fallen back: the excellent PSA webinar series (recordings very much recommended) has – in my anecdotal opinion – returned to the ‘usual suspects’ in the audience. It’s a great bunch of people, but that moment of broad professional interest in L&T has not been sustained, most likely because most people got through their crisis and got their heads back down.
But pedagogy is just like research: it requires constant discussion and challenging of ideas and approaches. Indeed, the tempo is perhaps more pressing for the former, given the sustained rapidity with which teaching outputs have to be produced.
All of which is to say that we need to try to maintain an active culture of discussion and debate around our teaching. The more we can do that, the easier it will be to manage this transition, and the next and all the other changes that will be coming down the line.
Students and staff are experiencing challenging times but, as Winston Churchill famously said, “never let a good crisis go to waste”. Patrick recently led a new undergraduate course on academic research at Maastricht University (read more about the course here). Due to COVID-19, students could choose whether they preferred online or on-campus teaching, which resulted in 10 online groups and 11 on-campus groups. This presented us with an opportunity to compare the performance of students who took the very same course, but did so either on-campus or online. Our key lesson: focus particularly on online students and their learning.
In exploring this topic, we build on our previous research on the importance of attendance in problem-based learning, which suggests that students’ attendance may have an effect on their achievements independent of their characteristics (i.e. teaching and teachers matter, something that has also been suggested by other scholars). We created an anonymised dataset consisting of students’ attendance, the number of intermediate small research and writing tasks they had handed in, their membership of an on-campus or online group and, of course, their final course grade. The latter was based on a short research proposal graded Fail, Pass or Excellent.
316 international students took the course, of whom 169 (53%) took it online and 147 (47%) on-campus. 255 submitted a research proposal, of whom 75% passed. One reason why students did so well – normal passing rates are about 65% – might be that, this being a new course, the example final exam they were given was one written by the course coordinator. Bolkan and Goodboy suggest that students tend to copy examples, so providing them may not necessarily be a good thing. Yet students had also done well in previous courses, with the cohort seemingly very motivated to do well despite the circumstances.
But on closer inspection it’s very telling that 31% of the online students (52 out of 169) did not receive a grade, i.e. they did not submit a research proposal. The figure was 9.5% for the on-campus students (14 out of 147). Perhaps this is the result of self-selection, with motivated students having opted for on-campus teaching. Either way, it is clear that online teaching affects study progress, and enhancing participation in examinations among online students needs to be prioritised by programme directors and course leaders.
We focus on students who attended at least one meeting (out of a maximum of 6) and handed in at least one assignment (out of a maximum of 7). Of these 239 students, 109 were online (46%) and 130 on-campus (54%). Interestingly, on average these 239 students behaved quite similarly across the online and on-campus groups: they attended an average of 5 meetings (online: 4.9; on-campus: 5.3) and handed in an average of 5 to 6 tasks (online: 5.0; on-campus: 5.9).
We ran a logit model with a simple dummy variable as the dependent variable, tapping whether a student passed the course. As independent variables we included the total number of attended meetings and the total number of tasks handed in. Both variables were interacted with a dummy variable that tracked whether students followed online or on-campus teaching, and we clustered standard errors by the 21 tutor groups.
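For readers who want to see the mechanics, here is a minimal sketch of this kind of model in Python with statsmodels. The variable names and the simulated data are our own invention for illustration; only the model structure – a logit with attendance and tasks each interacted with an online dummy, and standard errors clustered by tutor group – follows the description above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame mirroring the variables described in the post:
# attended (1-6 meetings), tasks (1-7 handed in), online (0/1 dummy),
# tutor_group (21 groups), passed (0/1 outcome).
rng = np.random.default_rng(0)
n = 239
df = pd.DataFrame({
    "attended": rng.integers(1, 7, n),
    "tasks": rng.integers(1, 8, n),
    "online": rng.integers(0, 2, n),
    "tutor_group": rng.integers(0, 21, n),
})
# Simulated outcome, just so the sketch runs end to end.
logit_p = -4 + 0.5 * df["attended"] + 0.4 * df["tasks"]
df["passed"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logit of pass/fail on attendance and tasks, each interacted with the
# online dummy; standard errors clustered by tutor group.
model = smf.logit("passed ~ attended * online + tasks * online", data=df)
result = model.fit(disp=0, cov_type="cluster",
                   cov_kwds={"groups": df["tutor_group"]})
print(result.summary())
```

The `attended * online` formula term expands into the main effects plus the interaction, which is what lets the attendance slope differ between online and on-campus students.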
Unfortunately, we could not include control variables such as age, gender, nationality and country of pre-education. These would have helped to rule out alternative explanations and to gain more insight into what factors drive differences in performance between online and on-campus students. For example, international students may have been more likely to opt for online teaching and may have been confronted with time-zone differences, language issues, or other problems.
Figure 1 displays the impact of attending class on the probability of passing the final research proposal. The predicted probabilities are calculated for an average student who handed in 5 tasks. Our first main finding is that attendance did not matter for online students, but it did for on-campus students. The differences in predicted probabilities for attending 3, 4, 5, or 6 meetings are not statistically significant (at the 95% confidence level) for online students, but they are for on-campus students. Students who attended the maximum of six on-campus meetings had a 68 percentage point higher probability of passing than a student who attended 3 meetings (89% versus 21%) and a 52 percentage point higher probability than a student who attended 4 meetings (89% versus 37%).
Figure 2 displays the impact of handing in tasks on the probability of passing the final research proposal. The predicted probabilities are calculated for an average student who attended 5 online or on-campus meetings. Our second main finding is that handing in tasks did not matter for on-campus students, but it did for online students. The differences in predicted probabilities for handing in 4, 5, 6, or 7 tasks are not statistically significant (at the 95% confidence level) for on-campus students, but they are for online students. Students who handed in the maximum of seven tasks had a 51 percentage point higher probability of passing than a student who handed in four tasks (69% versus 18%) and a 16 percentage point higher probability than a student who handed in five tasks (69% versus 53%).
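To show how such predicted probabilities are read off a fitted logit, here is a small sketch using the logistic transform. The coefficients are made up purely for illustration and are not the ones from our model.

```python
import numpy as np

def predict_pass(attended, tasks, beta0, b_att, b_tasks):
    """Probability of passing from a logit's linear predictor."""
    z = beta0 + b_att * attended + b_tasks * tasks
    return 1 / (1 + np.exp(-z))

# Illustrative (not our estimated) coefficients, for a student who
# handed in 5 tasks, varying attendance from 3 to 6 meetings:
for a in (3, 4, 5, 6):
    print(a, round(predict_pass(a, 5, beta0=-9.0, b_att=1.5, b_tasks=0.5), 2))
# probabilities rise from 0.12 at 3 meetings to 0.92 at 6
```

Because the transform is non-linear, the same one-meeting difference translates into different percentage point gaps depending on where on the curve a student sits, which is why we report the predicted probabilities themselves.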
Note that we do not think attendance is irrelevant for online students, or that handing in tasks is irrelevant for on-campus students. Our dataset simply does not include enough students to detect these impacts. From our previous research we know that, in general, we can isolate the impact of various aspects of course design with data from three cohorts (around 900 students). The very fact that we find remarkably clear-cut impacts of attendance among on-campus students and of handing in tasks among online students with a relatively small number of students (fewer than 240) suggests that these effects are strong enough to surface, and become statistically significant, even in a dataset as small as ours.
This is why we feel confident in advising programme directors and course leaders to focus on online students. As Alexandra Mihai also recently wrote, it is worth investing time and energy in enhancing online students’ participation in final examinations and in offering them many small assignments to be handed in across the whole time span of the course. This is not to say that no attention should be given to on-campus students and their participation in meetings but, given limited resources and the gains to be achieved among online students, we think it would be wise to focus on online students first.
The difference of 21 percentage points in no grades between online and on-campus students is statistically significant at the 99% level (t = 4.78, p < 0.001, N = 314 students).
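The footnote’s comparison can be approximated with a standard two-sample proportions z-test; the counts below come from the post, though the exact statistic will differ slightly from the reported t depending on the variance estimator used.

```python
# Two-sample proportions z-test on the no-grade gap between online and
# on-campus students, using the counts reported in the post.
from statsmodels.stats.proportion import proportions_ztest

no_grade = [52, 14]   # online and on-campus students without a grade
totals = [169, 147]   # students in each mode

z, p = proportions_ztest(count=no_grade, nobs=totals)
print(f"z = {z:.2f}, p = {p:.6f}")
```

With these counts the pooled-variance z statistic lands in the same region as the reported test, and the p-value is comfortably below 0.001.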
As well as being the home of queuing, the UK is also a big fan of politics. Indeed, recent production rates of politics have been at all time highs, as you might have noticed.
One of the more niche bits of politics of the last week has been the recording of an online meeting of a parish council (i.e. the very lowest tier of local government). Actually, of a sub-committee of the council.
In what has been a shock to anyone who has failed to follow anything to do with local politics, this particular meeting was filled with more, um, drama than most.
I’ll leave you to watch at your leisure, and to check out the endless memes and gifs it has spawned, but it’s a good example of how we can connect Fun Things Happening On The Internet to our work as educators.
For some, the meeting was an opportunity to discuss the application of rules on council meetings, in a legal context.
For me, it was a great moment to share with my negotiating students, to consider multiple aspects of the theory and practice that we had been covering through last semester.
To take a non-exhaustive list, we might watch and consider: the issue of gender and engendering in negotiations; the practice of online negotiating spaces; the role of technologies; and, that old chestnut of dealing with difficult people.
Even in the more lurid scenarios I have created for my classes, I’ve never been close to something like this (and I’m not sure I’d like to have been, either), so having this kind of (legitimate) access is a godsend for enriching our materials.
And this isn’t an isolated case.
The more we can help make connections for students to Stuff Outside The Classroom, the more chance we have of putting them in a position to become critically engaged with their environments, applying their learning and making actual use of it themselves.
This doesn’t have to be about providing the answers, but about asking the questions that stimulate reflection and debate.
And if you don’t think this example is good for your class’ needs, then there’ll be something along before you know it.
This isn’t going to be about trolls who just won’t give up, so apologies from the start about that.
Yesterday I spent my coffee-break reading about trains across Russia. Partly that was from a random thought, partly it’s because I’m a man-child who really struggles to concentrate, and partly because the site where I was reading about this was really engaging.
For those not in the long-distance-train-journey community, you only need to know that the guy behind this started off many moons ago, putting together some pretty basic info and, well, it rather snowballed from there.
I mention it because the site is a good example of someone not only carving out a niche, but turning it into the kind of resource that is both connected-to and thought-about as a place to go by a wide audience.
Of course, this makes me think about a number of things.
Most immediately, it makes me reflect that while ALPS has built up a considerable profile and reputation within the immediate L&T community in Poli Sci/IR/European Studies, we still have a way to go with other disciplines (or with those less motivated by the prospect of teaching): so do continue your good work of recommending us, and writing for us.
More generally, it touches on a thought that’s bothered me for a while about creating online resources: how do people find stuff?
Our decision here to go for a blog format, made nearly a decade ago in Albuquerque, made sense at the time for our needs and capacities, but we now have over 1400 posts and we didn’t do a particularly assiduous job of tagging stuff. Even I struggle to track down things, especially if different posts have covered different aspects.
It’s also becoming an issue with the Brexit-related graphics I’ve been making for a couple of years now. I tweet these out, and have hosted PDFs on a Google Drive folder, but it’s not searchable and if you don’t know me, or even couldn’t remember my name, then you’d struggle to even track down the things in the first place.
This isn’t an issue of “look at all my work”, but rather of the annoying feature of political life (and L&T) to keep on circling around to the same issues. Getting quick feedback in a class is just as much a problem now as it was in 2011.
It’s also a matter of trying to save colleagues having to reinvent the wheel all the time: if you knew what was out there and could access it easily and quickly, then you might not start from first principles. Sure, in some cases you still would, but at least you’d have more sense of what those first principles might be.
So the open question from all this is how we can build resources that are persistent, visible and useful to audiences.
In the case of ALPS, we have talked at times about making non-blog format material, but keep coming back to the challenge of creating – and, crucially, maintaining – a static site. The Man in Seat 61 has been able to do that, but we’ve not been able to make it work with our other commitments.
As so often, there’s not an easy answer to this, but it is something we’ll all have to keep on working on. Digital materials are only going to become more important with time and as academics we seem to be asked ever more about demonstrating our digital footprint. That means traffic data as much as content production, so that’s another can of worms.
But we can only continue to try. If you have some ideas that have worked for you, then let us know.
Maybe even write us a post and then we can share it with everyone who can find it.
One of the key questions regarding Higher Education (HE) curricula concerns the extent to which a curriculum should be flexible. For our own undergraduate studies, the curriculum consists of a relatively fixed set of courses. This set-up seems logical; after all, why would we expect students’ assessment of the knowledge or skills required for the discipline to be more advanced than that of educational professionals active in the field for many years? Yet the use of electives, tracks, or specialisations has become a staple in many undergraduate programmes.
As part of a project on mapping the undergraduate political science curriculum, we calculated the proportion of course credits that are optional in 225 undergraduate programmes. This measure of flexibility shows significant variation as highlighted in the figure below.
Considering that the curriculum is the backbone of a programme, this variation in flexibility is likely to have pedagogical, administrative, economic and social consequences.
Consequences of a flexible curriculum
In terms of learning, theory suggests education becomes more inclusive with curriculum flexibility, as students can structure their programme in accordance with their personal needs, strengths and interests. This empowers students and can increase their intrinsic motivation for their studies. It could also stimulate a deeper understanding of learning and self-reflective cycles of planning change. In practice, however, this freedom increases student anxiety around choosing electives, minors and majors, because students experience pressure to make the right decision in a meritocratic environment without having sufficient self-knowledge. Moreover, research suggests that students choose electives based on short-term perspectives and the estimated difficulty of passing the course. In that case, education does not encompass what is best for students but rather what they perceive as best; hence students do not reach their full potential in the absence of more challenging courses.
But there are also several implications in terms of management and organisation of a flexible curriculum. Electives commonly require the completion of several prerequisite courses. Updating and enforcing these prerequisites further complicates course development, particularly if students from multiple programmes can sign up to the course. Teaching staff may find themselves confronted with a diverse set of procedures, customs and meetings for each of the respective programmes or Faculties that offer the course. For the curriculum as a whole, increased flexibility may also compromise the development of coherent and cohesive teaching as each student is likely to follow a different trajectory. The exposure to a diverse set of teaching styles and conventions can certainly help students’ adaptability; it may also render a disorienting and inconsistent learning experience.
Explanations for a flexible curriculum
Many of these possible consequences require further corroborating evidence. Yet they also raise questions about the underlying motives that push curricula towards more flexible formats.
The main argument we found in discussions with peers is the marketization of universities. The student becomes a value-seeking customer of knowledge and flexible curricula are part of the HE institution’s business strategy. As students pay for a service, universities offer a customization of their product based on the needs and desires of the customer. It assumes the customer knows best, even if they are pursuing an education.
Other explanations take an organizational perspective and look at curricular reform as a process with vested interests. Faculty members want to retain the courses they have been teaching. If student numbers in a programme drop, they may push to have their course taken up in new or other programmes (if rebranding wouldn’t work). An elective system can thus offer a solution, but it also makes student numbers highly volatile. Similarly, the creation of new programmes in response to market pressures can take place without expanding the faculty if one can repackage existing (elective) courses.
The literature on the topic is relatively scarce, and often dependent on anecdotal evidence. This is why most of this post has been written in the conditional mood. There is clearly a lot to be studied, which drove us over recent years to construct a comprehensive comparative database of Political Science programmes to facilitate these efforts. In case anyone is interested, do reach out!
If you’ve got to this point without thinking about this question, then you’re either someone who’s had no need to be online in the past year, or just a very unreflective person.
In either case, I envy you.
As we roll around to a full 12 months of All This, I find myself spending as much time wondering how an online event could be working better as I do engaging with the nominal point of said event.
This isn’t about the – scarcely believable that we still have it happen – “you’re on mute” or the – only slightly less credible – “can you see my slides?”, fun though those things are, but about the structure of events in the broader sense.
Moving online has given us a great opportunity to reimagine how we do an important part of our work as academics. Personally, I’ve loved being able to join groups that would have been essentially impossible to talk with if we’d had to be in the same place, as well as the possibility of levelling-up access to debate, rather than just having to go with the tedious “it’s more a monologue than a question” from the usual suspect in the audience.
But it might not be the most controversial position to hold that this could all be working better.
In particular, the notion of “let’s just move it online” seems too often to mean “let’s just do exactly what we’d have done in-person, online”, rather than “let’s try making the most of the opportunities that moving online offers.”
To take the most obvious example, we still find ourselves sitting through lots of transmission, rather than getting to use the space for debate and dialogue. Even as we’ve all spent ages moving our lectures into pre-recordings, just to avoid doing that to our students.
As ever, I think this comes back to the same kinds of issues that we talk about so often on this site: are we being clear about what an event is for, and are we structuring it so that we have the best chance of hitting that objective?
I’m not going to offer a solution to this one, for the simple reason that I don’t think there is a single solution, just a need for constant reflection and discussion among organisers to check if this is doing what it needs to do.
That must necessarily be an occasion-specific process, even if it does work from a standard set of principles. As with our teaching, it’s possible (likely, even) that there are multiple ways to hit our goals, and that variety is part of our response (since there’s a limit to how much engagement you’re going to get from someone who’s sitting through many hours of video calls every day).
But maybe the first step is something like the one I’m making: constructive critique.
When I’m sat in something that’s not working so well, I try to think if I am clear (as a participant) about what the objectives of the session are, and then I try to think if I could have devised something that might work better.
Importantly, that’s not always possible, so we have to consider whether it’s a matter of the least-worst option.
I also try to think about what elements work and why: most obviously, I try to think about the extent to which individuals’ personalities and actions cover a lot of the ground, as opposed to more structural elements, because the latter are going to be much more transferable.
And I try to do that while still paying attention to what’s happening in the event. Hopefully with my mic muted.
Talking about L&T on Twitter is often a rather niche pursuit; one for the specialists and enthused.
But from time to time, it’s possible to get a wider set of views, as happened to me last week.
I’d posted a pretty rhetorical question about the image of online teaching, off the back of Alex’s comment:
For reasons best-known to himself, a well-known radio presenter retweeted me, resulting in a large number of responses, which you can read by clicking on the tweet above.
The responses covered a lot of ground, and highlighted some of the different dimensions we might want to engage with.
To reiterate my starting point, there remains a strong tendency to see distance/remote/online learning as inferior to face-to-face modes, something that is not, and cannot be, ever as good as ‘the real thing’.
To take that one step further, I wonder if part of why we often see Oxbridge held up as a gold standard of education is because it’s so intensely face-to-face, with one-to-one or one-to-two instruction, in person. That face-time must be good, no?
Pulling back out to face-to-face in general, part of it comes down to the perceived disintermediation: it’s you and the instructor, there, just doing your thing. The other modes involve various kinds of technology to engage or facilitate communication: a computer screen, a workbook, etc.
Certainly, that additional layer does require close attention, but it does not necessarily preclude effective and efficient learning. Just as you’ve all seen a disaster in the classroom at some point, no method is inevitably fool-proof or ‘better’.
Likewise, the capacity for responsiveness and on-the-fly adjustment is something that comes up repeatedly in critiques of distance learning: the workbook can’t ‘see’ that you’ve not understood concept X.
But again, that is to take a workbook as the sole element of how that learning operates, when typically you are engaging in multiple streams of content and activity, precisely to ensure that content is tackled from multiple directions, maximising the chances of successful learning.
Again, as someone who tried and failed to do some trigonometry with my son this weekend, in-person instruction doesn’t always stick either [for me, more than for him, to be clear].
Ultimately, the standards of ‘good’ teaching remain the same as always: clarity of learning objectives; alignment of objectives, content and assessment; and engagement with students’ needs.
None of that is platform-dependent or only possible in person. Instead, it’s about us, as instructors, working to produce effective learning environments for our students, whatever the circumstances we find them to be.
And if this all sounds a bit self-serving, then you’d be right, since I’m moving in May this year to the Open University, one of the world’s leading distance-learning institutions, so you’ll be getting a lot more of this kind of thing from me, and less of the empty-room exercises. Although I mention it, maybe that could work…
A fringe benefit of writing this blog is that I regularly get asked to do reviews of L&T work for others.
It might sound odd to put it like that, since I guess you also have a pile of journal article review requests and the like, and you probably don’t think it’s the best part of your job.
But L&T reviewing work tends to be somewhat different.
Most obviously, it’s more varied. This week, I’ve been reviewing an article, but also sitting as an external expert on a programme validation panel and inputting to a promotion application for someone on a teaching track.
But it’s also that there’s much more scope for me to learn from all this.
In all three cases, I’ve got something useful for my own practice. Clearly, I’m not going to talk about the article or the promotion here, since that would be inappropriate, but I can tell you about the programme validation.
This is a distance learning programme, building out from some existing practice, but also making systematic use of an approach that I didn’t really know about before, namely e-tivities.
In essence, this is a methodology for creating structured and engaging online activities: as with so much L&T, it’s not particularly complex, but it is clearly presented and digestible.
And that’s why I like doing this kind of thing: I get to discover more ways of making learning work, that I can pull pretty directly into my classes.
Whether those who ask for my comments feel the same way is more debatable, but maybe we’ll get those involved here to write it all up some time in the new year.
So next time you’re asked to do something like this, do consider it, because it might be as good for you as it is helpful for them.
And on that note, I’m off on annual leave until 2021, which doubtless contains its own unique pile of Things To Deal With. Have a great break, as and when you get to it.
As I look around our offices, it’s all rather odd: not only are there no students (who’ve been sent home early), but there are stacks of packing crates.
We’re being moved out to new offices in January, so in-between Zoom sessions, it’s the now-rather-familiar ritual of winnowing and packing.
As the Departmental Gardener, I also have to think a bit about how Estates will be able to get the plants across campus without too much damage.
You know, the big questions in life.
Moving matters, because it’s an opportunity to consider afresh the things we have and the things we do. I know that we’ve all had plenty of cause to shake up our working practice, but unlike Covid, an office move is something more managed and delineated.
New spaces enable new practices and call into question Things We Just Do. As a Department that was moved out of its long-term residence about 18 months ago, we’ve had this experience already, which was good in making us look again at how and what we do as a group.
This move now is meant to be more long-term, so maybe we’ll lose all the crates that have sat in the corner all this time, especially if I use this week wisely to throw away a bunch of stuff that apparently I never actually use. Nothing like a move to make you get rid of your comfort blankets.
Of course, moving is also disruptive – which is why we’re doing it when there’s very little else on – but it’s precisely that disruption that brings opportunities.
Now I’m not going to suggest you lobby for a move, but I will ask you to consider how much of what you do is through habit rather than thoughtful choice.
That’s not simply about the stuff on your shelves – although those could do with some pruning, no doubt – but more the structures and content of the practices you undertake. Do all those meetings you go to work as well as they could? Do you have an inclusive and supportive community of colleagues? Does your working week work?
Just as we ask students to be reflective learners, so too must we be reflective instructors and facilitators, not just in the classroom or online, but also in the wider range of our professional activity.
And with that in mind, I’m off to decide what to do with a pile of 30 t-shirts.
Last week I gave a surprise collaborative quiz to one class, as a test run for possibly using this exercise in my synchronous online courses next semester. The quiz consisted of five multiple-choice questions on basic concepts, deployed in three iterations. First, students took the quiz individually on Canvas, which auto-graded students’ answers but did not reveal which were correct. The individual quiz was worth up to half a percent toward the course grade.
Second, I sent students into team breakout rooms to confer and decide which answers to submit as a group. This version of the quiz was also worth up to half a percent of the course grade. I pasted the quiz into each team’s notes on Google Docs. Because the Canvas quiz tool does not have a “groups” setting, I had already created a Canvas assignment through which each team could submit its answers. Again, students did not know which answers were correct — after class I had to read what teams had submitted and manually enter a quiz score for every student who had been present for the breakout room discussions.
Third, after breakout rooms closed, students answered the quiz’s questions yet again in the form of a Zoom poll. After closing the poll and sharing the results, I explained which answers were correct and offered to answer any questions.
Twenty-nine undergraduates are in the course. Three were completely “absent” — they never signed into Zoom during class that day. A fourth student logged out before I announced the group version of the quiz. For the remaining twenty-five students: twelve, or nearly fifty percent, scored higher on the collaborative quiz than on the individual quiz. Great! Three students, all members of the same team, scored lower on the former than on the latter. Ten students’ scores were unchanged.
Finally, the poll, which did not contribute to the course grade: One student left class by disconnecting from Zoom when breakout rooms closed. Of the remaining twenty-four students, nine got the same number of questions correct on the poll and the individual quiz. Ok. Three students did better on the former than they did on the latter. Good. Twelve scored worse on the poll. Terrible! I have no idea why this happened, given the improvement in scores on the collaborative quiz.
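For anyone wanting to run this kind of round-by-round comparison on their own class, here is a small sketch; the per-student scores below are hypothetical stand-ins, not my actual gradebook data.

```python
# Hypothetical per-student scores for the three quiz rounds, to show
# how the improved/unchanged/worse tallies can be computed.
students = [
    {"individual": 3, "group": 5, "poll": 3},
    {"individual": 4, "group": 4, "poll": 2},
    {"individual": 5, "group": 5, "poll": 5},
]

def tally(records, first, second):
    """Count students who did better, the same, or worse on the
    second round compared with the first."""
    up = sum(1 for r in records if r[second] > r[first])
    same = sum(1 for r in records if r[second] == r[first])
    down = sum(1 for r in records if r[second] < r[first])
    return up, same, down

print("individual -> group:", tally(students, "individual", "group"))
print("individual -> poll:", tally(students, "individual", "poll"))
```

Keeping the comparison in a little function like this makes it easy to re-run each semester, or to swap in any pair of rounds.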