When is the right time to review your teaching provision?

I’m just about to enter that most mystical of times for an academic: the quiet time, when relatively little is urgent and the mind can turn to the long list of ‘priorities’ that have been neglected since the last quiet time.

One of the potential priorities (you see how ephemeral the notion of ‘quiet’ can be) is reviewing my teaching provision for the coming year.  Rather than just getting on with it, I’m moved to write this post, since it seems pertinent: just when should we be doing this?

Last week, I was marking a big pile of final exams, a surprisingly stimulating time for me to be thinking about what had (and hadn’t) worked in that module, as well as what questions I might ask next year.  In the end, I decided to focus on getting my marking done, since that was more urgent, but I’ve got some notes to work from.

Likewise, in preparing for a summer school next week (this time is getting less and less quiet by the minute), I found a game that I had run last November and then immediately re-written for next November.  I re-wrote it again for the summer school, then re-wrote it once more for next November, in light of the ideas I’d had in the second re-writing.

I also know that come September, I’ll be asked for module handbooks, so I’ll be spending a block of time specifying what I’m planning to do, although I also know that I often don’t know what I want to do (precisely) until I’m in the classroom on the day.

The difficulty largely seems to be one of integrating our living ideas about what we will/might do with the relatively rigid structures of HE quality assurance.  In the UK, at least, we can’t decide to chop and change assessment at the last minute (or even a long time before that last minute), so we have to take educated guesses about how things will/might work.  One answer might be to build a living module handbook for oneself, containing not only the information for students, but also the lecture notes, OHPs, activities, resources, etc. that are needed to run the module, all in a form that can be instantly updated.

The downside of this is the start-up cost, not least in the conceptualising of what is necessary to include, but it’s something for me to think about during this quiet time.  Doing it might – sadly – become one of those ‘priorities’ I mentioned.

I’ll Have the Merlot Please

A brief mention of the Multimedia Educational Resource for Learning and Online Teaching (MERLOT), which I stumbled upon while in a training workshop for teaching blended/hybrid courses:

MERLOT is an immense searchable directory of learning materials that are available online for free. These materials are suitable for both online and face-to-face teaching environments.

I was particularly interested in political science simulations on MERLOT, and among them I found Ayiti: The Cost of Life.

Ayiti is a simple but fiendishly challenging simulation of poverty in Haiti, created through a partnership between Gamelab and Global Kids, with support from UNICEF and Microsoft. It’s a great demonstration of the effects of productivity shocks in conditions of poverty. The setting is a poor Haitian family that is struggling to survive; the player must decide how to allocate the family’s limited resources, manage risk, and pursue goals. I’ll admit that so far I haven’t been able to crack the simulation’s algorithm — every time I’ve played, the game ends with family members dead of cholera and any surviving children sent off to live with relatives.

Decisions, Decisions

Here’s another idea from Thinking, Fast and Slow. Daniel Kahneman credits economist Thomas Schelling (author of Choice and Consequence) as the original creator; I’ve slightly altered the version Kahneman presents in his book. The exercise demonstrates how framing, emotion, and moral values affect how people make decisions, and how this effect flies in the face of rational choice. It can easily be modified for application in a number of contexts. Here it is:

Present students with two fictional developing countries, A and B. The governments of each country need to collect tax revenue to pay for public services. The governments also decide that taxes on family income will vary according to the number of a family’s dependent children.

The government of country A decides that for adult couples with children, the tax exemption for the wealthy should be larger than that for the poor.

Ask students if they agree or disagree with the decision of government A. They will most likely find it to be highly objectionable.

The government of country B takes the opposite approach and imposes a tax surcharge on adult couples that have no children. It decides that childless adult couples who are poor will pay a penalty just as large as the one paid by childless adult couples who are wealthy.

Ask students if they agree or disagree with the decision of government B. They will most likely also find this highly objectionable.

Then explain that the two proposals cannot logically both be rejected, because they are economically equivalent. The only difference between the scenarios is the default: A treats a childless couple as the baseline, while B treats a couple with children as the baseline.
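
To make the equivalence concrete, here is a worked illustration with numbers of my own invention (they are not from Kahneman or Schelling). Suppose a wealthy couple owes 10,000 in tax if childless and 9,000 with children, while a poor couple owes 2,000 if childless and 1,500 with children. Writing E for the child exemption in framing A and S for the childless surcharge in framing B:

$$E_{\text{wealthy}} = 10{,}000 - 9{,}000 = 1{,}000, \qquad E_{\text{poor}} = 2{,}000 - 1{,}500 = 500,$$
$$S_{\text{wealthy}} = 10{,}000 - 9{,}000 = 1{,}000, \qquad S_{\text{poor}} = 2{,}000 - 1{,}500 = 500.$$

For each income group the exemption and the surcharge are the same number: the gap between what a childless couple owes and what a couple with children owes. Objecting to scenario A means demanding that the wealthy get no larger an exemption than the poor; objecting to scenario B means demanding that the poor pay a smaller surcharge than the wealthy. Since exemption and surcharge are identical, the two demands contradict each other.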

Kahneman points out that responses to questions like these are highly dependent upon how the questions are framed. If I were to introduce some additional but functionally irrelevant detail into one of the scenarios — such as “because wealthy citizens in country A typically invest much more money in their children’s educations than poor citizens” — many people would respond differently.

Twitter in Teaching: Student Memes and Critical Humorous Thinking

A guest post from my colleague, Dr Jack Holland (University of Surrey).

 

This academic year, I promoted the use of a Twitter hashtag in my Level 2 Security Studies module (#pol2036).  This was done to increase student interaction outside of the classroom and to encourage the sharing of academic sources, news articles and other links related to the study of international security.  Many students were already familiar with Twitter, whilst for others signing up and creating an account was a new experience.  Although some students’ engagement with the social networking site was very limited, others fully embraced the platform as a useful forum for discussing ideas and sharing quotes and sources, as well as (quite unexpectedly) for developing humorous critiques of the study of security.  It is this latter use of Twitter that has interested me the most.  Building on the popular ‘meme’ phenomenon that has spawned numerous campus-specific variants, students took it upon themselves to create their own #pol2036 memes, which poke fun at some elements of the sub-discipline.  As well as being very funny (to a, perhaps somewhat geeky, Political Science audience), these memes demonstrate a considerable depth of understanding of the course material, as well as an ability to critique it.  A selection is included here:

Of course, Twitter and the #pol2036 hashtag had other, more mundane uses – announcements about marking, student queries, links to news stories and YouTube videos – but it was the memes that have made me think about what social networking can do in terms of helping students to develop key skills outside of their lectures, seminars and assessments.  It has also made me think about the role that humour can play in keeping students interested and as a mode of critique.

The benefits of not going to university

Yesterday was our exam board for finalists, where we agreed module marks for the year and made our final degree classification recommendations to the university (overseas readers might look at the QAA website for more information about English practice).  As in previous years, we noticed that those students who have spent a year on a professional or academic placement during their studies with us have performed noticeably better (and more consistently) than those who haven’t.

Surrey is very rare in offering the option of a placement to students on essentially every programme it runs, and Politics has been part of that, with students doing such varied activities as working for refugee organisations in South Africa, handling corporate social responsibility for large multinationals, working for MPs and studying politics in Denmark and the US.  The placement takes place after the second year of study and before the final year.

Students’ own feedback on this suggests these experiences have helped them to develop a stronger sense of why they are at university (i.e. rather than just because it is expected of them), to try out their academic learning in an applied setting, and to strengthen their skills in time management, presentation and interpersonal relations.  In short, it seems to help them get through the demands of modern living.

This is excellent, and one of our primary objectives as educators: helping students realise their potential. But it also has another beneficial side-effect, namely that their academic performance seems to gain from it as well.

Looking over the results from the previous four years, we see that while students who go out on placement have somewhat better academic performances in the year before placement (compared with those who don’t), they increase that gap significantly in the year after placement.  Moreover, while some students see a small drop in mean grades in their final year, this almost never happens with placement students (and didn’t happen at all this year).

This suggests that placement students bring some of their experience back into their final year of study, and that this in turn helps them to achieve better results even on the narrow metric of grading.  So to any student thinking about a placement: I would strongly recommend it.

The Premortem

I regularly teach courses on economic development and complex humanitarian emergencies, and I often employ a case study approach. When students examine past events, it is easy for them — and for me — to fall victim to hindsight bias. We assume that the prospects for failure should have been just as evident in the past when decisions were made as they are to us when we are evaluating those decisions in the present.

Even if teachers take hindsight bias into account when explaining to students why incorrect answers are incorrect, we often see students making the same mistakes over and over again.

In his book Thinking, Fast and Slow, Daniel Kahneman credits fellow psychologist Gary Klein with the idea for engaging in a premortem when making decisions. The premortem is a critique that is crowd-sourced on a very small scale. It can also be described as a quick and dirty outside review. Here’s how it works:

  • A person presents a proposed decision to a group of people who are knowledgeable about the subject matter.
  • The group is told “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster” (Kahneman, p. 264).

I can see this technique being applied to all sorts of assignments, especially proposals for thesis research or project design. Although many undergraduates don’t have field research or public policy experience, a classroom of students should be able to think of and write about a few ways that the best laid plans can go awry. Including experienced students as a panel of experts during the ensuing discussion  — for example, students who have already conducted thesis research while studying abroad  — might make the process work even better.

Algorithms and Predicting Academic Success

I’ve been reading Thinking, Fast and Slow, by Daniel Kahneman, who won the 2002 Nobel Prize in Economic Sciences for his work on the psychology of decision making.

Based on what he writes about the accuracy of clinical versus statistical predictions, I’m wondering if my university should employ an algorithm to determine which incoming students are most likely to suffer severe academic problems, and direct resources only at the students the algorithm identifies as most at risk.

Like other universities in the USA, mine is worried about student retention, academic progress, and graduation rates, and increasing amounts of staff and faculty resources are being devoted to making sure that Jeremy doesn’t fall through the institutional cracks.  The result is a combination of blanket intervention strategies (every first-year student takes a course on college-related life management skills) and individual ones (a professor or staff member has a hunch that a particular student might not return next semester and decides to warn others).

From a statistical point of view, there are serious problems with this approach. Requiring that every student take an orientation course amounts to saying “we don’t know which students are most likely to drop out, so all of them have to be treated.” Faculty and staff, even though they might be highly trained in advising, must decide in isolation whether to raise the alarm about a particular student, unaware of the information contained in the overall data set. This is an extremely unreliable method of making decisions, because people put more faith in their decision-making abilities than they should. According to Kahneman,

“Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident . . . To maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments” (pages 219 and 225).

Something similar to the Apgar test for newborns might be a more accurate and efficient means of predicting which students will run into academic problems — students who score above a certain threshold on the test would be targeted for intervention. The intervention could take the form of mandatory periodic meetings with an advisor, recommending that the student take courses taught by certain professors, etc.

Kahneman recommends that

  • this type of instrument measure no more than six characteristics or dimensions,
  • the dimensions be as independent of each other as possible, and
  • the questions be factual in nature (i.e., not an affective or associative test).
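
As a rough illustration of what such an Apgar-style instrument might look like in practice, here is a minimal sketch. Everything in it is hypothetical: the six dimensions, the 0–2 ratings, and the intervention threshold are invented for illustration, not drawn from Kahneman or from any real retention model.

```python
# Hypothetical Apgar-style risk score for incoming students.
# Each dimension is rated 0-2 (higher = lower risk), and the six
# ratings are simply summed, giving a maximum possible score of 12.

DIMENSIONS = [
    "high_school_gpa_band",      # 0 = bottom tercile, 1 = middle, 2 = top
    "first_generation_status",   # 0 = first-generation, 1 = unknown, 2 = not
    "credit_hours_attempted",    # 0 = part-time, 1 = typical load, 2 = heavy
    "financial_aid_gap",         # 0 = large unmet need, 1 = some, 2 = none
    "distance_from_home",        # 0 = far, 1 = moderate, 2 = near
    "orientation_attendance",    # 0 = absent, 1 = partial, 2 = full
]

INTERVENTION_THRESHOLD = 7  # scores at or below this trigger outreach

def risk_score(ratings: dict[str, int]) -> int:
    """Sum the 0-2 ratings across the six dimensions."""
    return sum(ratings[d] for d in DIMENSIONS)

def needs_intervention(ratings: dict[str, int]) -> bool:
    """Flag a student whose total score falls at or below the threshold."""
    return risk_score(ratings) <= INTERVENTION_THRESHOLD

# Example: one hypothetical student record.
student = {
    "high_school_gpa_band": 1,
    "first_generation_status": 0,
    "credit_hours_attempted": 2,
    "financial_aid_gap": 0,
    "distance_from_home": 1,
    "orientation_attendance": 2,
}
print(risk_score(student), needs_intervention(student))  # 6 True
```

The point of the sketch is simply that the flag comes from a fixed formula applied to a handful of factual inputs, rather than from anyone’s hunch about a particular student.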

Obviously such a procedure would require some coordination between different units of the university to gain access to data, but the sample size would be large. There would also need to be tracking of data over time to see how predictive the algorithm is; that longitudinal tracking would, in turn, enable the instrument to be refined.

Currently this kind of data-driven method of decision making is probably too radical an idea to be considered by my university. Meanwhile I’m going to think about how I can generate some kind of algorithm to use on the students that I teach, and see if I can find some non-academic dimensions that predict grades. Any suggestions are welcome.

Just what do you think you’re doing, Dave? Dave, I really think I’m entitled to an answer to that question.

Effective games

The other day, I found myself in a blind panic, spraying bullets wildly into mainly civilian populations.  For all concerned, it was just as well that this was part of the gameplay in Modern Warfare 2, but I was reminded of the presentation by Mary Flanagan that I’d heard last week.

Mary was speaking at a workshop on simulations and games in politics teaching.  Her core message was that games contain value systems, intentionally or not: the design of a game conveys a world-view that the player has to engage with at some level.  Mary’s work has been about building games that create spaces for critical reflection on a wide range of political and social objects, and she presented a range of examples, from immunisation to community-building to the consequences of job lay-offs.

The idea that games contain values is a very useful one for us all, not least because that is precisely what we are aiming for in educational games and sims.  Therefore, it’s important to think about how to harness this most productively.  Mary pointed to a number of elements in achieving this:

  • games need to allow for the use of strategy by the player, i.e. they cannot be purely linear.
  • games need to allow the player to make meaningful choices, i.e. they need to have consequences behind them.
  • games need to create situations where ‘the mechanic is the message’, i.e. the implicit structure of the game and how one plays conveys the idea you wish to communicate.
  • games need to provide a testbed for systems thinking, by creating an integrated experience for the player to visualise and internalise an approach.
  • and finally, games need to prepare players for zombie attacks (this might not be so central).

These are all excellent points for us to consider as we develop our own activities.  Certainly, the presentation helped to bring out much of the latent thinking I have done over the years and will spur me on.

To bring it back to MW2, it helped underline the very ambivalent approach to violence that the game embodies, with its situations of deep moral ambiguity (what do you do in the Moscow airport?) and its contestation of ‘good’ and ‘bad’ (by the end, who do/can you trust beyond your own team?).  I’m finally moving on to Black Ops (since I’ve now got a machine that can play it), so soon I’ll be covered on the zombie-thing too.

Asking the Right Question Before Creating a Good Answer

Like Simon, I’ve also been wondering “what are exams for?” I want my exams to function first as learning exercises and second as a means of formative assessment. Using exams as summative assessment instruments ranks a distant third for me.

My exams usually consist entirely of in-class essays.  I provide students with a list of possible questions as a study guide; they get a shortened list for the exam and must select one question apiece from two or three subgroups. Students typically have already encountered some of the questions as short writing assignments that are tied to course readings.

I stress the development of argumentative skills when teaching, and my exams are no different. But students believe that exams should be either-or exercises in the regurgitation of factual knowledge, and they frequently complain that my grading of the arguments they have presented in their exams is “unclear” despite my use of rubrics. I should add that these are not high-stakes events — each exam is usually worth only ten percent of the final grade in the course — but because they are labeled “exams” they loom large in students’ minds.

Time for a change.

In the fall semester, I’m thinking of devoting classroom time to question development. I tried this during the spring semester and it didn’t work very well, possibly because I didn’t have students carefully and deliberately identify what makes one question better than another.

Once students have created an appropriate exam question, they can then determine what is necessary to adequately answer it; for example, by using at least three pertinent examples drawn from reading assignments. This will hopefully lead them into the creation of a rubric, a greater sense of personal investment in and responsibility for the exam process, more frequent interaction with information they will be tested on over a longer period of time, and more learning.

Being a winner

In my mental image of myself, I’m not the sort of person who wins things.  So when I do win something, I’m quite chuffed. Yesterday, I reached the status of very chuffed indeed by winning the nanoteach prize at the Higher Education Academy’s Social Science conference in Liverpool.

The nanoteach competition asked people to come up with a single, short idea about Learning & Teaching, to be shared in no more than 2 minutes. The delegates then got to vote on their favourite.

It would be fair to say that I was anxious about doing something for this.  I’d never tried it before and I wasn’t quite sure what kind of thing the organisers were looking for.  So how did I manage it?

Firstly, I thought about my audience.  You and I are busy people, who haven’t got time for anything too involved or complex to remember.  So it needed to be something that spoke to that.

Secondly, I thought about the context.  It’s a conference, at the end of the day, and there’s wine on the tables, so it can’t be too solemn.

Thirdly, I thought about how I could make a connection. Most of the people weren’t political scientists, so it couldn’t be disciplinary, just as it couldn’t be university-based, since people were also coming from a wide range of sectors.

With all that in mind, I still messed up.  My original idea was to talk about the sticks as a means of randomising contributions. It is visual, it speaks to a problem that’s generic and it’s a bit quirky.  But as the day went on, I realised that it wasn’t right for this: it’s actually quite involved – certainly for a 2 minute presentation – and as I’ve noted here before, it’s not without caveats about its use.  So this is where the final element came into play: I got lucky.

The nanoteach session was ordered by lot, and I was able to listen to the first couple of presentations, then pop out of the room, find three post-it notes and dash back in. I then got lucky again, as my name didn’t come up until the very end, so I could collect myself and – importantly – gauge the audience more closely.

By the time I stood up and told them about the ABC system for feedback, I knew that I was giving them a much better idea than the sticks.  It’s super-simple, very easy to remember (“as easy as ABC…” indeed), useful for everyone who needs feedback on anything, and communicable in visual form.

I’m neither going to pretend the ABC idea is my own, nor that it’s the best idea that was presented last night (I’ve got some things I’ll be trying in class when I get back), but it was pitched right for that moment.  Just as our teaching needs to be responsive to our students and their specific needs, so the rest of our pedagogic interactions need to reflect that too.

So thanks to the HEA (for the prize), to my fellow delegates (for the votes) and to the colleagues who gave me the idea in the first place.  This winning thing is rather enjoyable!