Building a grid for measuring the effect of Active Learning

The great thing about colleagues is the way that they get you to move beyond yourself. Reading Peter’s summary of our Nicosia discussion is a case in point, setting out our agenda in a way that makes me want to write more about the ideas involved.

That means the dream I had last night about how to run my negotiating course will have to wait until next week, for which we might all be grateful.

At the centre of Peter’s idea is the creation of a framework that would allow colleagues to engage in a more systematic and rigorous examination of the effects of Active Learning. In so doing, it plots a middle path through the challenges I set out before.

On the one hand, a framework can be too vague, offering no real purchase on the issues involved, nor a mechanism for comparison of individual pieces of research, even if it would have the benefit of flexibility.

On the other, prescription might guide the work much better, but at the risk of missing out important elements. And that’s after the long, hard struggle to agree such a detailed model in the first place.

The compromise approach suggested by our discussions is to divide the big question of ‘what effects?’ along three discrete and meaningful dimensions.

The first is to unpack ‘Active Learning’. Our workshop alone contained simulations, creation of videos, semi-structured facilitated group discussion, problem-based learning and more: each rather different, each brought together by not much more than the placing of the student in the centre of the learning activity.

Indeed, much of my informal conversation in Nicosia was precisely about what makes Active Learning, Active Learning. Given the range, it’s difficult to come up with a definition that covers everything listed above but excludes something like a lecture. And there’s a question about whether lectures should be excluded in any case: colleagues using EVS might feel that they’re doing Active Learning.

And no, I didn’t get to an answer on this one. There’s maybe something in thinking about learning as being about stimulus-response, with active learning focused more on the response element, but by that point I was feeling that I was hopelessly out of my depth and in need of an educational scientist with some emergency theory.

Digressions aside, this dimension logically matters: the type of thing you do in your learning environment should influence what students learn from it. By differentiating across the variety, we might be able to spot commonalities and differences, especially as it doesn’t a priori exclude consideration of the effect of non-Active Learning situations too, as a benchmark.

Which leads to the second dimension of types of effect.

Here again, much discussion ensued in Nicosia about what types of effect to consider and how to group them. As I’ve discussed already, Bloom’s tripartite cognitive-affective-psychomotor domains form an obvious starting point, even if you can have a discussion about whether something like self-confidence is a skill or a disposition or something else.

However you resolve this one, there are still the three main areas of ‘facts’, skills and attitudes. Clearly one can break each of these down into more specific elements, and consider interactions between each of them – if my students enjoy it more, do they learn more facts? – but this does at least begin to structure the range of what we might consider.

The third dimension – of context – is somewhat different, since it’s not about the activity per se, but rather the environment in which Active Learning takes place. Several of our papers dealt with school children rather than university students, posing a question of whether this made any fundamental difference.

My personal experience makes me think that it is more a difference of degree than of kind: higher levels of confidence and knowledge allow university students to take simulation scenarios further than school pupils, in terms of depth, realism and reflection. However, others find rather different dynamics, which suggests that differentiation across this dimension might hold value.

Again, we come back to the impact of types of Active Learning and to the scope and magnitude of effects.
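To make the shape of the grid concrete, here is a purely illustrative sketch in Python. The values on each axis are examples drawn from the discussion above, not an agreed taxonomy, and the idea of attaching studies to cells is my own gloss on how the grid might be used, not part of Peter’s framework:

```python
from itertools import product

# Illustrative axis values only; the framework does not fix these lists.
activity_types = ["simulation", "video creation", "facilitated discussion",
                  "problem-based learning", "lecture (benchmark)"]
effect_types = ["facts", "skills", "attitudes"]
contexts = ["school", "university"]

# Each cell of the grid is one combination of the three dimensions,
# to which individual pieces of research could be attached.
grid = {cell: [] for cell in product(activity_types, effect_types, contexts)}

# A hypothetical study slots into a cell, e.g. the effect of
# simulations on skills in a university context.
grid[("simulation", "skills", "university")].append("example study")

print(len(grid))  # 5 activities x 3 effects x 2 contexts = 30 cells
```

Even at this toy scale the point stands: the grid does not answer the measurement question, but it gives comparable research a place to live, and makes the empty cells visible.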

And this might be the biggest challenge: measurement.

Peter didn’t try to specify minimum or common standards for measuring effects, in part because of the scale and scope indicated by the three dimensions. However, we have to hope that as we start to work on this, we might all develop a better sense of what works, and how: to take the obvious example, some techniques will work better than others for different effects.

So, a plan. And a grid.

On to the next step.