I’m still enough of a kid to be excited to see the place I work at mentioned in the news, especially if it’s in an outlet my mum might see.
Of course, it’d be better if the context of this particular mention were different, but I guess you can’t have it all.
This all comes off the back of the ongoing debate in government about grade inflation.
I wrote about all this last summer, and I’m not sure I’ve gotten much further in my thinking about this, except to note the shift in framing to combating ‘artificial’ grade inflation.
While this might seem to start to take account of the other factors at play, what it singularly doesn’t do is set out a means of calculating this in practice.
Obviously, there are changes in student characteristics that have a direct bearing and these are relatively simple to capture: socio-economic status; entry grades; progressive performance in each year of study.
However, there are also obviously changes in the teaching environment: staffing changes; changes in pedagogic approach; changing curricula (we’ve made our final year dissertation optional this year, for example); changing provision of learning resources outside the degree programme, at the library or in welfare; changes in programme regulations.
As I mentioned in my first post in this series, my interdisciplinary methods course includes a research proposal assignment consisting of:
An introduction containing a research question, hypothesis, rationale, and context.
A one-paragraph abstract.
Two-page discussion of the design of the proposed research project, the types of data that will be collected, how the data will be analyzed, and how this process will test the hypothesis and provide an answer to the research question.
A bibliography of references.
The proposal is intended to prepare students for an actual research project that they will design, conduct, and report on before graduating. I’ve created three smaller practice assignments that scaffold different aspects of the final proposal. The first uses research on Bolivia; here are the instructions:
Read the rubric.
Read either a chapter from Jim Schultz and Melissa Crane Draper, eds, Dignity and Defiance: Stories from Bolivia’s Challenge to Globalization, UC Press, 2008, or Isabel M. Scarborough, “Two Generations of Bolivian Female Vendors,” Ethnology 49, 2 (Spring 2010): 87-104.
A very short one today, as I’m struggling with a pile of stuff that I’m not sure I understand.
While it’s great that I get to do things I wouldn’t otherwise have been able to, Brexit has also meant I get asked to explain things that are either at the edge of my knowledge, or which are so novel that no-one’s considered them before.
You might have this in your classroom sometimes – I know I still do – so a couple of thoughts on how I handle it.
Firstly, work from what you know.
Nothing is so out-there that it doesn’t touch on something that’s much more settled, so build your conceptual bridge out from that. It not only gives you something more solid to work with, but often it’s where those involved are working from too.
Secondly, consider the range of options.
Politics is great to study because of its uncertainty, but that usually works within a bounded set of pathways. The more you can work through what that set might include, the better you can evaluate how actors might choose among them.
And thirdly, don’t be afraid to say you don’t know.
No-one knows everything, and sometimes it’s either too early to tell or too uncertain to guess. Park it, and say what would mark things changing in a way you could detect, so that your audience is left with some tools, even if they don’t get the answer there and then.
Right, back to the world of UK Parliamentary procedure.
A follow-up about asking students why they do what they do . . . For the second stage of this data-gathering exercise, I had students use Post-its to anonymously answer three questions at the beginning of class:
How are you feeling right now? (the one-word check-in)
Why are you feeling what you’re feeling?
Why did you come to class today?
Nineteen out of twenty-three students, or more than eighty percent, reported feeling bad — the same proportion as last time. Of the nineteen, ten referenced being tired while four wrote “stressed.” Only one wrote “hungry.” The overwhelming majority of people in this group attributed their feelings to too little sleep and too much work.
The other four students felt “happy,” “good,” “relaxed,” and “chill.” Three of these students attributed their feelings to having had time to eat, buy coffee, or otherwise get ready before class. One of them mentioned sleeping comfortably, while another wrote “not super-stressed . . . trying to stay calm for the day ahead.”
I sorted answers to the third question into a few different categories, which are shown below, along with their frequencies. A few students’ comments fell into more than one category.
I had to; attendance is mandatory: 7
Get a good grade: 5
I am paying for the course: 3
Learn something: 3
Participate in discussion: 1
Collaborate with teammates on an upcoming assignment: 3
Miscellaneous reasons — “My roommate told me I couldn’t skip,” “I was awake so I figured why not,” “Because I didn’t go to the last one,” “I try to go to all of my classes,” “Didn’t want to miss anything,” “To avoid falling behind”: 6
In sum, only seven students, or thirty percent, indicated that they had been intrinsically motivated to attend class that day; i.e., they came to learn or participate in a learning-oriented activity. More than half of the students indicated that they were extrinsically motivated by the fear that their grades would be harmed if they did not attend. What I think is interesting here: I do not penalize students for being absent from class — I regard them as legal adults, free to suffer the natural consequences of their actions. I do not grade on attendance or class participation. Only students’ written work, submitted before class, gets assessed.
More thoughts on this subject in a future post . . .
This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Maxine David.
In her chapter Life after academia: preparing students for successful collaboration, Kovačević talks us through her 2017 experiences teaching a course on EU Enlargement at the University of Economics in Bratislava (EUBA). We are first given a little insight into teaching practices at EUBA and into Slovak Higher Education legislation, before moving on to the detail regarding the problems she felt needed remedying, the method she employed, the rationale for it and expected results. Data collection and results are then discussed, albeit the latter more comprehensively than the former. The chapter ends with Kovačević’s reflections on the application of group presentations and the challenges involved in devising a reliable research design to generate data on applied teaching methods.
Many academics who have been teaching for a good number of years will recognise
their own early teaching days in the experiences Kovačević describes. At one
level, this is rather depressing; I, for instance, have assessed students
through group presentations for a decade and more. Even at the beginning, I did
not consider it as terribly innovative given it was what I had encountered on
my own Bachelor’s degree back in the 1990s. The chapter therefore raised
questions for me about what might really be called innovative. In turn, that
suggests the real value in Kovačević’s chapter: first, that it adds to our
knowledge of other contexts; and second, that it highlights the wider failure of
many academics to engage sufficiently in an exchange of pedagogical knowledge.
Kovačević is therefore to be commended for the degree to which she has problematised the
learning process, thinking about impediments to learning and how to overcome
them at this early stage in her career. She grounds her thinking in the
literature explaining the benefits of group work and presentations, especially
in respect of developing transferable skills and enhancing employability. Based
on that literature and her prior experience in teaching the course, she comes
up with three hypotheses. The first of these is somewhat unclear: “The
innovation—i.e. group work—takes place in a classroom environment that is
supportive of learning via collaboration”. Does she mean that she is
hypothesising that groupwork is innovative, that groupwork is learning via collaboration,
or something else? The other two hypotheses are clearer: students’ interest in
the subject matter will be increased as a result of the process of creating a
group presentation; and there will be a noticeable increase in subject-specific
knowledge, as well as related skills.
For many teaching in environments that regularly apply such methods, these will
be self-evident: as Kovačević herself acknowledges, the benefits of
student-centred learning are already well-recognised. Clearly, however, whether
as a whole or just in pockets, methods that put the student at the centre of
the learning process are not the norm for EUBA (and many other institutions).
In her course, working with seminar groups of around 13 students, Kovačević began by
having students collaborate to produce a poster on Turkey in the EU’s
enlargement process before moving on to the creation of a Powerpoint
presentation. She is keen to point out the support that was offered throughout
the process, including instructor and peer feedback. It is a shame,
incidentally, that we did not hear more about this peer feedback, a notably
tricky area (see, for instance: Liu and Carless).
The chapter is weakest on talking us through definitions and in the section on data
collection and methods, though the latter aspects are partly addressed in the
conclusions. On definitions, it is not entirely clear what is meant by
“presentation”. Presumably, it is confined to a Powerpoint presentation but it
could be more (role plays) or less (students acting as rapporteurs) extensively
construed. The question is an important one for those thinking about adaptations
to the method Kovačević applies.
Questions about measurement and comparison are also insufficiently considered. For
example, the third hypothesis (“Student learning, including knowledge and
skills after collaborative group work, is noticeable”) raises questions about how
levels of learning can be measured, and compared to what. If we accept group
presentations as innovative, we must accept also that others need to be
persuaded of the relative benefits of
such innovative teaching; otherwise, why change? As such, the persuasive
potential of the chapter is reduced. Methodologically, it would have been
useful to know how Kovačević recorded and evaluated the “student activity and
behaviour” she observed.
Despite the number of unanswered questions, the chapter is an important one. It
functions as a reminder that there is still much to be done to convince others
of the benefits of changing ways of thinking and doing because innovation is
not contagious. It is a reminder too that rigorous and reliable evidence is
sometimes difficult to generate and that without that, it becomes all the more
difficult to overcome resistance to change. Finally, the chapter is important
because it raises implicitly the question of whose responsibility it is to
bring about change. Should it be contingent on young scholars, under pressure
in so many other ways, to undertake all this work? I think we all know the
answer to that question; what we are doing to address it is another matter.
Nikita Minin of Masaryk University is motivated by a goal we can all appreciate: ensuring that his students achieve the learning outcomes of his course. In his case, the course is a graduate seminar on theories of IR and energy security and the learning outcomes include improving student skills in critical thinking and writing. He noticed that students in his class did not seem to really improve on these skills during the class, and introduced three teaching interventions in an attempt to fix this.
First, Minin provided more intense instruction on the writing assignments at the start of the course, providing a grading rubric and examples of successful student work. Second, he gave students audio rather than written feedback on their papers. Finally, using a sequential assessment system, the instructor gave formative feedback first and grades much later in the course. Minin assessed the impact of these three interventions, comparing course sections with and without them, and concluded that the first two interventions achieved the objective of improving student achievement of the learning outcomes.
The interventions described in the chapter are in line with current thinking regarding in-course assessment. While Minin does not use the language of transparent teaching, his first intervention falls squarely within the approach of the Transparency in Learning and Teaching (TILT) project. Transparency calls on instructors to openly communicate the purpose of an assignment, the tasks students are to complete, and the criteria for success, and Minin does exactly that in this first intervention. Given the data so far from the TILT project, it is not surprising that Minin saw some success by taking this approach. Likewise, now-ubiquitous learning management systems allow feedback to be given in multiple formats, including audio and video. For years now, advocates of audio-based feedback have claimed that it can be a more effective tool than written feedback. Minin’s observations therefore also fit nicely with existing work.
Where the chapter falls short, then, is not in the design of its interventions, but in the claims made based on the available data. The sample sizes are tiny, with just five students receiving the interventions. With final grades used as the primary dependent variable, it is difficult to tease out the independent impact of each of the three changes. Using final grades is also an issue when the experimenter is the person who assigns them, as bias is more difficult to avoid than when more objective or blind measures are used. Lang’s (2016) book Small Teaching: Everyday Lessons from the Science of Learning tells us that engaging in self-reflection is itself an intervention, so Minin’s use of minute-paper-style self-reflections to assess the impact of feedback, while an interesting and potentially useful idea, means that a fourth intervention was at work in the course. While I do not doubt Minin’s observations that his interventions had a positive impact, as they are backed by existing research, the evidence in the chapter does not strongly advance our confidence in those findings.
However, I have never been one to dismiss good teaching ideas simply because of a lack of strong evidence from a particular instructor. Minin highlights a crucial concern—that we should never assume that our courses are teaching what we intend them to teach, and that ‘time and effort’ do not necessarily achieve the desired results, even for graduate students. Reflecting on this, seeking out innovative solutions, and then assessing the impact is a process we should all be following, and Minin sets a great example.
My first-year module this semester has been a real training ground for me. Not only am I going all-in on flipping, but I’m also trialling the new assessment software that the University is thinking of using.
By extension, that also means it’s a training ground for my students, something that I’ve been very open about with them.
The flipping seems to be working and I’ll be writing up my thoughts on that later in the semester, but having come through the first use of the software I need to make some decisions now.
In part, my situation arises from wanting to push how we used the software past a conventional approach. Not only did students submit a literature review to it, but they then had to review someone else’s using the system, all in aid of a final piece of self-reflection (which we’re marking now).
Using the marking function is a bit more involved than just submitting work and a couple of people did get a bit lost on that. But the bigger problem was that not everyone submitted work.
In the good old days (i.e. last year and before) we did all this in-class, so it was much simpler to cover (the exceptionally few) missing pieces. However, because we’d pre-selected peer reviewers, we ended up with some students having nothing to review and others not getting their work reviewed.
That’s a failing on my part: next time, I’d leave allocation until after the first submission was in, so everyone who submitted got allocated and reviewed.
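One way to implement that fix can be sketched in code. The ring-allocation idea below, and all the names in it, are my own illustration rather than the software’s actual workflow: once the submission deadline has passed, shuffle whoever actually submitted into a ring, so every submitter reviews exactly one other submission and receives exactly one review in return.

```python
import random

def allocate_reviewers(submitters):
    """Allocate peer reviewers only among students who actually submitted.

    Shuffles the submitters into a ring, where each person reviews the
    next person along; everyone gives one review and receives one review.
    Assumes at least two submitters (a lone submitter would review themselves).
    """
    pool = list(submitters)
    random.shuffle(pool)
    return {pool[i]: pool[(i + 1) % len(pool)] for i in range(len(pool))}

# Hypothetical class list, after the deadline:
pairs = allocate_reviewers(["ana", "ben", "caz", "dev"])
for reviewer, author in pairs.items():
    print(f"{reviewer} reviews {author}'s work")
```

The design choice here is simply that allocation happens after submission, so a non-submitter can never leave a peer unreviewed.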
But that’s next time. What about now?
Already, I’ve indicated to everyone that not getting peer feedback won’t count against them in marking, but a couple of students have felt that absent such comments they’re not in a position to complete the self-reflection.
To that, I’ve had to underline that it’s self-reflection, so peer feedback was only ever one component of that: indeed, the whole purpose of the somewhat-convoluted exercise is to get students becoming more independent and critical about their learning.
All that said, peer review was added in here to help prompt everyone to think more about what they’ve done and what they could do.
As we sit down to mark, the question will be how much we can, and should, take the circumstances into account. Until we’ve seen the full range of work, that’s going to be a tricky call to make.
However, it all highlights an important point in such situations: do we have fall-backs?
Trying new things is inherently risky – that’s why many colleagues stick with what they know – but with some risk management, that need not be a barrier to moving practice forward.
Annoying though our situation here is, it’s not fatal to the endeavour: we know who’s affected and how; they’re still able to submit work; and the assessment is relatively small in the overall scheme of things.
Yes, we’ll be using the system again for the final exam, but without the aspects that have proved problematic. Indeed, the exam has already been trialled elsewhere in the University, so that’s well-understood.
So, on balance, I feel comfortable that we can manage the situation and implement the necessary changes next time around to remove the problems identified.
Which is, of course, a big part of the reason for trying it out in the first place.
An important component of both statistical and information literacy is the ability to recognize the difference between correlation and causation. Teaching this skill is made even more difficult by cognitive biases that lead to errors in probabilistic thinking.* So I decided to hit my students over the head with Chapter 4 from Charles Wheelan’s Naked Statistics and, from Tyler Vigen’s Spurious Correlations website, an image of the 99.26% correlation between the divorce rate in Maine and margarine consumption.
The assignment asked students to submit a written response to this question:
Why are these two variables so highly correlated? Does divorce cause margarine consumption or does margarine consumption cause divorce? Why?
All the students who completed the assignment answered the question correctly: neither one causes the other. In class, students identified several possible intervening variables, including:
People eat margarine and margarine-laced products as an emotional comfort food when relationships end.
Divorce leads to a greater number of households, with each household purchasing its own tub of margarine.
Students’ ideas led in turn to a discussion of how to appropriately measure these variables and construct new hypotheses.
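The point can also be demonstrated numerically. Here is a minimal Python sketch using a hand-rolled Pearson’s r and invented numbers (not Vigen’s actual data): two series that merely share a downward trend come out almost perfectly correlated, despite having no causal connection.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented illustrative data: both series simply decline over six years.
divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1]   # per 1,000 people (made up)
margarine = [8.2, 7.0, 6.5, 6.0, 5.6, 5.2]      # lbs per capita (made up)

r = pearson(divorce_rate, margarine)
print(round(r, 2))  # very close to 1: a shared trend, not causation
```

Any two variables drifting in the same direction over time will correlate this strongly, which is exactly why a time trend is such a powerful confounder.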
*An excellent overview of this topic is Jack A. Hope and Ivan W. Kelly, “Common Difficulties with Probabilistic Reasoning,” The Mathematics Teacher 76, 8 (November 1983): 565-570.
One of the many fascinating aspects of my Erasmus study
abroad year in Bonn was that the town was then undergoing a major change:
following reunification, the capital was being moved to Berlin, necessitating a
multi-million DM programme of construction, re-construction and general upheaval.
Right now, I feel a bit like I’m facing my own Umzug, not least because while the
Germans were moving only the once, I’ve got two moves ahead of me in the next 9
months or so.
The reason is the usual one for a university: the juggling
of spaces against changing needs is a constant for most colleagues and the
biggest wonder of it all is that our Department’s not moved in its 15-year existence.
That’s great, but it means that now we are moving, there’s a
problem: what to take?
Usually, this isn’t the kind of thing I’d bother you with,
but because we’re doing it twice, there’s an additional constraint: I can take
only three packing crates to the interim location, with the rest going into storage.
Three crates? Not so bad, maybe.
Welcome to my office.
I did actually get moved some years back when I was doing my
Associate Dean role, and I used about 30 crates, and even then got told off for it.
So you see my dilemma: three crates to take the stuff I’ll
actually use between May and January.
Of course, a lot of stuff doesn’t get used very much: I’ll
admit that I cleared out a couple of shelves directly upon hearing the news, as
our local book harvest is coming by next week: old textbooks might have some
historiographical value, but realistically they’re not a burning priority.
Which leaves me with still 30-odd crates-worth of choices.
The priority seems to me to be around teaching: my research
is mostly based around materials I can transport virtually, plus we’re next
door to the library (or rather, we are now: not when we move). But teaching
needs me to have access immediately to some texts and to teaching aids.
That means trying to map out what my classes might look like
in the first semester of the next academic year, plus the second semester just
in case we get delayed (my Brexit research on contingency planning coming in handy here).
Put like that, the problem suddenly becomes much more
manageable: it’s now a matter of boxes of Lego, blindfolds, whiteboard markers
and post-it notes rather than my extensive collection of notes on Danish
euroscepticism in the late 1990s (two crates-worth, last time I looked).
As an exercise, I’m actually finding it rather cathartic: it
feels like an extension of much of the rest of my work experience (I’m writing
this on a crowded train heading into London, laptop balanced on my knees, for example).
And if I really need something I’ve packed away, they say it can be retrieved,
so the peril is relatively low.
Of course, I’m not going to suggest that you move just for
the sake of it, but it is good to occasionally ask yourself whether you need
all that stuff you’ve piled up. Asking yourself what you really need is
a good question at any time, not least because it invites you to focus on the
core of what you do.
So it’s actually all good.
Except for the plants, which are going to be a very different matter.
Another post about the methods course that I’m now teaching. Chapter 3 of Naked Statistics is about deceptive description. So here is the accompanying assignment . . .
Many high school seniors are interested in attending Southwest America State University for college. Before 2015, applicants to this university had to submit high school transcripts that include average GPA scores, SAT scores, and an essay. In 2015, the application process changed; applicants had to submit high school transcripts with average GPA scores and two essays, while submission of SAT scores became optional. In 2019, the university claimed that the academic quality of its students had increased since 2011 given this pattern in the average SAT score of each year’s incoming class:
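To see why the university’s claim is suspect, consider a quick simulation (all numbers here are invented for illustration): if only applicants with strong scores choose to submit once the SAT becomes optional, the reported average rises even when the applicant pool hasn’t changed at all.

```python
import random

random.seed(42)

# The same hypothetical applicant pool in both regimes:
# scores drawn from a normal distribution, mean 1050, sd 150.
applicants = [random.gauss(1050, 150) for _ in range(10_000)]

# Regime 1: submission required, so every score counts.
avg_required = sum(applicants) / len(applicants)

# Regime 2: submission optional, so only strong scorers opt in
# (here, anyone at or above a self-imposed 1100 threshold).
submitted = [s for s in applicants if s >= 1100]
avg_optional = sum(submitted) / len(submitted)

print(round(avg_required), round(avg_optional))  # optional-only average is noticeably higher
```

The rising average SAT is a selection effect, not evidence of rising academic quality, because the denominator quietly changed from “all applicants” to “applicants who chose to report.”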