
For the past couple of days, I’ve been talking about simulations at an event organised by Peter Bursens and colleagues at the University of Antwerp. If we leave to one side how nice it was to get so many positive comments about this blog from people, then it was a really heartening workshop for more academic reasons.
One of the biggest challenges that users of simulations (and other active learning techniques) face is the lack of a robust evidence base that such pedagogies actually have an educational benefit for students, either at all or over and above ‘conventional approaches’. This workshop was directed precisely at discussing this gap.
Papers came from across Europe, talking about how measures of knowledge acquisition, skills development, situational interest and other facets were being collected and evaluated. In addition, we talked about how one might make assessments of student learning on these different dimensions.
I’d love to say we cracked the problem wide open and now have all the answers, but that’s not what happened. Instead, we found that the more we looked, the more we found there was to see. In the widest formulation of the dilemma, we often noted that other learning effects might be going on alongside those we were interested in, but our willingness and ability to capture, measure or assess them was very limited.
The problem is that we might well argue that the various things are bound up with each other. In particular, the knowledge-skills-disposition clustering of effects might make analytical sense, but in a simulation environment they are profoundly inter-related. Improved knowledge might be driven by the need to articulate and defend ideas with others, while the ability to communicate might require substantive knowledge, in turn giving encouragement to be more confident, which helps with presentational skills, etc., etc.
Even the most apparently simple of the effects – knowledge – is more complex when considered more closely. Not only might there be a question of knowing more facts, but also the ability to manipulate and integrate those facts is promoted: the translation from passive to active modes of learning requires the student to use their knowledge in new ways.
And behind it all was the thing that has got so many of us into sims in the first place: they are so much fun to do. As always at such events, you could see people go ‘oooh, I could use that’ as colleagues talked about what they did. I myself got to recall possibly the first simulation I ever did – Dentist Drills – which has apparently enjoyed more of an active life than might have been good for it (and the paperwork for which I hope I can finally get my hands on: trust me, it’s brilliant).
But ‘fun’ still doesn’t cut it.
The workshop was a very good moment to consider next steps and I’m glad that the group is looking to continue its interactions. For me, three main points stood out as being central in making progress.
- There needs to be a more systematic attempt to typologise the effects we might expect to find with simulations. That typology might be the same as for any other learning approach, but the relative weight of its elements will vary;
- We need some kind of set of validated measures to capture these different effects. The papers presented a number of examples, but if we want to explore the variety of simulations that exist, then those measures might need to become more accommodating of form;
- Finally, we need to use these instruments to get closer to answering the question of what (if anything) is the crucial mechanism within ‘simulations’ that generates these effects. Is it about the scenario, the student production of inputs, the public testing of ideas, something else?
This last point is perhaps the biggest unknown. Over the years (I’ve also come away from Belgium feeling that I’ve now been in this game for longer than I realise), I’ve seen a wide range of sims that claim to have a wide range of effects, and I can’t say that I can spot the common theme or mechanism. Maybe there isn’t one, but only a flexibly-constructed set of mechanisms. But then I’ve also yet to see a clear example of a sim that didn’t achieve anything: there has always been something that could be dragged from the wreckage, intentional or not.
Perhaps this is the secret: that in creating such open and flexible spaces for learning, we also create a failsafe for learning. Simulations are explicitly about creating a meta-cognition of the learning processes, in their creation of a world-within-the-world. Maybe we can never truly fail.