Trust But Verify: Assessing Preparation Instead of Participation

Today’s guest contributor is Joel D. Moore of Monash University Malaysia.

Discussion sections (‘tutorials’ at my institution) are frequently my most and least favorite part of teaching. When they work well, they can be more than the sum of their parts. Informed by the week’s readings, students will drive discussions in unexpected directions, critiquing the work and applying it to new examples. Small group activities will be dynamic and student-led, leaving participants with a deeper appreciation of the material and their own preconceptions.

When they fail, they fall to the lowest common denominator. Unprepared students waste valuable time derailing conversations, trying to hide the fact that they haven’t done the readings. Well-prepared students become resentful when, class after class, their peers scramble to extract Cliff’s Notes versions of the material from them. The best outcome that can be hoped for in such a class is that students come away with a basic understanding of the essential arguments in the readings.

Other than avoiding the dreaded after-lunch time slot for sections, I’ve not been able to do much to ensure that a section runs smoothly. Some classes just work and some don’t. Some students simply aren’t motivated by the threat of a poor participation grade or the promise of a vigorous discussion.

Last Fall, I tried a new strategy: grading preparation instead of participation. I allowed students in my States and Markets class to self-report their level of preparedness and claim a commensurate grade for each class session. I would then spend the first 10-15 minutes of the session calling on three random students (assisted by an on-screen random number generator) to verify that they were prepared. After the verification portion of the class was finished, I led the section as usual — sometimes with discussion and sometimes with activities. Continue reading
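The verification step only needs a way to draw a few distinct names from the roster. As a minimal sketch of that random-selection routine (the roster names and the sample size of three are stand-ins, not the actual class list):

```python
import random

# Hypothetical roster — substitute the actual class list.
roster = ["Aisha", "Ben", "Chen", "Divya", "Emil", "Farah", "Grace"]

# Draw three distinct students whose self-reported preparation
# will be verified at the start of the session.
checked = random.sample(roster, k=3)
print(checked)
```

Using `random.sample` rather than repeated single draws guarantees the same student isn’t called on twice in one session.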

Fish out of water?

That whirl I mentioned

It’s only a couple of days until we get to run our PSA/APSA sponsored workshop here at Surrey, so it’s a whirl of organisation around here.

Since I’m going to get everyone to try and chip in some thoughts as we go, I’m not going to dwell too much on the planned activities, but instead think about the potential dynamics that might arise.

While some of us are very cosmopolitan, it’s also the case that others of us haven’t crossed the pond too often (at least to judge by someone’s queries about ATMs, sockets and mobile telephony standards).

I recall that when I first met the ALPS people, back in New Mexico in 2011, I was also largely unaware of what to expect or of how things were done. For me, it was a very liberating experience. Continue reading

Active Learning: What is it good for?

I spent this week attending a Course Design Institute held by my university’s teaching and learning center. The workshop centered on creating a learner-centered syllabus and aligning course objectives, assessments and activities. I thought I’d share a few quick take-aways related to active learning.

First, the facilitator presented evidence from STEM fields on the value of active learning over lecture-based courses. In particular, I was struck by two studies.

“Active learning increases student performance in science, engineering, and mathematics” (Freeman et al. 2014) is a meta-analysis that reviewed 225 studies comparing student performance in undergraduate STEM courses. This is the stand-out quote from that piece:

“If the experiments analyzed here had been conducted as randomized controlled trials of medical interventions, they may have been stopped for benefit—meaning that enrolling patients in the control condition might be discontinued because the treatment being tested was clearly more beneficial” (Freeman et al 2014: 8413, emphasis added).

Continue reading

New Brave New World (again)

A university, yesterday

Like the ticking of a rather slow clock, we’ve just had the publication of a White Paper on Higher Education here in England and Wales. This comes after a long consultation process that stretches back into the previous Parliament and which leaves as many questions open as it addresses.

I’m not going to run through all of it because a) it’s very long, b) not all of it relates to L&T and c) I can’t face it: there are lots of good pieces here and here. But I will update on the famous Teaching Excellence Framework (TEF), which I’ve written about before (here, here and here).

For those of you who have managed to avoid this idea so far, firstly my congratulations: essentially, it’s seen as an equivalent of the Research Excellence Framework (REF), trying to provide some judgement on the quality of teaching provision in a way that is comparable across institutions and disciplines. In so doing, it aims to increase transparency for students and to drive up standards.

As with all good ideas, the operationalisation has been the problem: how do you actually do this? Continue reading

Already Closed Minds

Taking Simon’s recent post about encouraging student feedback in a different direction:

Yes, students often perceive and understand differently than I do, and I agree that removing barriers to their acquisition of knowledge is an important part of my job. But in many cases students are as different from one another as they are from me, and some of them are simply not interested in learning.

For example, I’m still using the Quality of Failure essay in all my courses as an end-of-semester exercise in meta-cognitive reflection. Compare these quotes from essays written by two students in a course that just ended:

“When I realized that we only really went over the homework in class, I mentally decided that I didn’t really want to participate because I had already written my response and it had already been graded.”

“While I feel that I have achieved my goal of learning about new populations, I also feel that this was achieved for other reasons than what I previously mentioned. For instance, the one thing that I never really took into consideration was the fact that discussions with my peers would end up being the most influential factor in learning what I did this semester.”

The first student decided early on that she would learn nothing from hearing about the perspectives of her peers during classroom discussions, while the second student was surprised to find that this aspect of the course was by far the most valuable.

The pedagogical “experts” might say that I should meet all students where they are and adjust to all the ways in which students define their interests. But I refuse to accommodate those who are too closed-minded to try something that challenges their own view of themselves.

Beginner’s Guide to Simulations: Part 2, Benefits of Simulations

This is Part 2 of an ongoing series aimed at newcomers to using simulations and games in their classroom.  Part 1 introduced the series and focused on how to reduce the workload required in the design and use of these pedagogies.

Skepticism is a pretty standard attitude that we face when trying to convince instructors to try simulations and games in their classes.  Beyond the issues of workload and time that cause new adopters to hesitate, there is a more basic problem: convincing instructors that simulations and games have any place at all in a classroom traditionally dominated by lecture and discussion.

I won’t bury the lede: the ALPS team are all strong proponents of the value of simulations and games in the classroom.  That being said, we are also very aware of the limits of these pedagogies, and one of us publishes consistently on the failures of simulations.  So we are not die-hard true believers. I’m going to focus this post on highlighting some of the benefits of using simulations–such as increasing interaction, engagement, and skill-building opportunities– and then turn to a potential limit–the lack of solid evidence that they improve learning.

Continue reading