New frontiers for simulations: validating their value

Last week I talked about the challenge of building a community of simulation users and designers, one of the key themes that emerged from a workshop I attended recently.

This time, I want to think a bit about a more technical issue, which is nevertheless connected to this idea of community.

One of the major barriers to more widespread use of simulations is that we simply don’t know how effective they are as pedagogic tools, or even if they are effective at all.

Naturally, this is a slightly awkward message to be giving on a blog like this, but it reflects the general paucity of research in this field. There are some good efforts to consider the matter (a short bibliography by Amanda is available here), but their coverage is limited and their methods are not readily comparable with one another.

Certainly, there are some practical questions of methodology to be addressed. Do you have a control group? If so, what do they do? What do you measure – knowledge, skills, something else? Do you look for short- or long-term effects? How do you isolate the impact of the simulation itself? Each one of these questions is a major headache by itself, but together they pose a very considerable challenge to anyone trying to identify impact.

However, we need to address this matter. Otherwise, we risk remaining in a situation where simulation users have to rely on their personal enthusiasm to convince others to try it out. I – like every other simulation user I know – have a wealth of stories and anecdotes to share, but that kind of evidence carries less and less weight.

[Image: Firewalk – a university administrator considers using sims during his morning walk]

One of the workshop participants pointed out that universities are increasingly evidence-based in their approach to learning & teaching. That’s a good thing, certainly better than relying on a model of ‘doing what we’ve always done’ or of ‘let’s just try something’. Hence the proliferation both of research into pedagogy and of metrics to quantify L&T. The latter has gone further in the UK than in most other countries, but I’d be happy to bet that it’s something that will be coming your way soon enough.

Seen in this light, the weak evidence base for simulations is bad not only for encouraging new users, but also for encouraging universities to support their use. Even in those places where active learning models are the norm – Maastricht, Edinburgh – that commitment is as much an act of faith as anything else.

This all requires us to put much more effort into building a research base that lets us speak to this changing landscape.

The workshop participants suggested that pre/post testing was a particularly valuable way of identifying learning effects with a group, since it provided some data on the impact of the learning intervention. It doesn’t resolve the control group issue (‘how much would they have learnt in any case?’), but it does suggest some ways in which we can start to unpack the effect, especially if we look at both knowledge and skills development in our questions.
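To make that concrete, here is a minimal sketch in Python of how pre/post scores for a single cohort might be summarised, splitting knowledge and skills items and reporting a simple within-group effect size. Everything here – the scores, the labels, the scale – is hypothetical and purely illustrative.

```python
# Illustrative only: summarising pre/post test scores for one cohort.
# All data are hypothetical; a real instrument would have many more items.
from statistics import mean, stdev

# Paired scores per student (same test before and after the simulation),
# split into knowledge items and skills items.
pre = {"knowledge": [4, 5, 3, 6, 4], "skills": [2, 3, 2, 4, 3]}
post = {"knowledge": [6, 6, 5, 7, 6], "skills": [4, 5, 3, 6, 4]}

for domain in pre:
    gains = [after - before for before, after in zip(pre[domain], post[domain])]
    sd = stdev(gains)
    # Within-group (paired) effect size: mean gain divided by the SD of the gains.
    d = mean(gains) / sd if sd > 0 else float("nan")
    print(f"{domain}: mean gain = {mean(gains):.2f}, paired effect size = {d:.2f}")
```

None of this answers the ‘how much would they have learnt in any case?’ question either, but it does turn an anecdote into a number you can put in front of a sceptical colleague.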

One idea that got floated, if not actioned, was that of some kind of standardisation of testing, to allow for better comparability of findings across cases. The difficulty here is one of individual instructors having individual priorities to address in their teaching, but this shouldn’t stop us considering how we might build a core test, around which additional, more specific questions could be hung.
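As a thought experiment, the ‘core plus extensions’ idea might be as simple as a shared question bank that every participating instructor administers unchanged, with local items bolted on. The structure and question texts below are entirely made up, but they show the principle: only the core items get pooled for comparison across cases.

```python
# Hypothetical sketch of a shared core test plus instructor-specific items.
# Only responses to CORE_ITEMS would be pooled and compared across cases.
CORE_ITEMS = [
    {"id": "core-01", "domain": "knowledge", "text": "Name the main actors in the scenario."},
    {"id": "core-02", "domain": "skills", "text": "Rate your confidence negotiating under time pressure."},
]

def build_test(local_items):
    """Combine the unchanged core with whatever a given instructor wants to add."""
    return CORE_ITEMS + local_items

# One instructor's local priorities, hung around the common core.
eu_course_test = build_test([
    {"id": "local-eu-01", "domain": "knowledge", "text": "Explain qualified majority voting."},
])

print(f"{len(CORE_ITEMS)} core items, {len(eu_course_test)} items in this course's test")
```

The design choice is simply that the core never changes, so results on those items stay comparable even when everything around them differs.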

Ideally, building in evaluation mechanisms to simulations would be an integral part of our work. At present it’s not, but circumstances might force us to consider that once again. Regardless, it makes sense for us – as individual instructors – to know whether what we’re doing with our students actually makes any difference to them.

2 Replies to “New frontiers for simulations: validating their value”

  1. A colleague and I were thinking along the same lines, and we’re planning a meta-analysis of the simulation literature in political science. With any luck, we’ll be presenting our findings either in February at the TLC or in April at the Midwest.

