Making alignment work


I’ve just been helping a young child who lives in my house with their French homework, practising sentences for a test that’s coming up. I imagine that many of you will have done the same, either in the parent role or the child role (or both, for that matter).

For me, it was a pointed demonstration of the perils of alignment in teaching. The child is going to be tested on their ability to write out a series of sentences, so is focused entirely on that. Thus, when I ask them to read out the sentences, I get something that even I know isn’t good pronunciation: ‘magasins’ is remembered as ‘being like mega, but maga, and then sins’.

In short, this child, like pretty much every learner, is learning to the incentives that are provided: if the teacher isn’t going to be bothered about the speaking, then why should the child?

Evaluating module evaluations

As for many of you, January is the time when students’ evaluations of your autumn courses and modules come in. It might also be the time when you have exciting conversations with line managers.

I think that I’ve laid out my view on such evaluations over the years – managerialist and often mis-directed questions – but perhaps it’s useful to think about how you can make the most of the information they provide.

As so often, three ideas to frame all of this.

The first is that course evaluations are useful, if properly contextualised. That means using them together with all the other feedback you get from students, plus your own reflection. I like using the ABC method for more constructive student input, but there are also all those chats you have with students, plus their assessed work: if no-one seems to understand concept X, or confuses A and B, then maybe you’re not presenting things very well to them. The key point here is triangulation: does a piece of evidence have support elsewhere?

The second idea is that you have to engage properly with the evaluations and the reflection. I, probably like you, have been known to skim through the comments, find the one that is obviously ridiculous and use that to roll my eyes about the whole exercise. As political scientists, we should know that just because people sometimes say and do silly things doesn’t mean that they are silly, or that everyone is silly. Instead, we need to understand why they say these things and how we might respond.

Of course, this is a bit tricky, especially when evaluations are anonymous and asynchronous to the class activities. Hence the importance of running your own evaluations throughout your contact time. Often, the source of the frustration is that you feel you’ve done something and the student hasn’t recognised it: this autumn, I laid on much more support for my assessment than before, only to read one student’s comment that even more was needed. The point should be that I need to think about how I communicate what I provide more clearly next time, rather than trying to track down this year’s lot and justify myself.

And this is the third point. Course evaluations are not meant to be character assassinations and – in the very large majority of cases – are not used as such by students. Much more common, in my experience, are staff taking comments as personal attacks.

Just as evaluations are about the students’ experience of the course, rather than about the student themself, so too should you treat them as about the specific instance of the course, rather than about you.

There’s the old teacher-training trope – which is actually very useful – that says people go through three stages in their teaching practice: they start by thinking everything’s about them (as teachers), then think it’s all about the students, and finally realise that it’s about the specific instance of interaction between them and the students. And so it is here.

One of the things we keep on returning to here at ALPSBlog is the idea that there is no one right way of doing things, only a series of choices that you can explore with your students. That requires self-awareness and self-criticality, underpinned by a sense that things will never be completely ‘right’ in any lasting sense.

Course evaluations might be flawed, but that doesn’t mean they’re not useful. But it also doesn’t mean that they are the be-all and end-all.

Thoughts from the 2016 EuroTLC, Brussels

This guest post comes courtesy of Johan Adriaensen (Maastricht) and Carola Betzold (Antwerp).

Higher education often engenders a dual ambition. Upon graduation, we expect students to be prepared for the professional labour market. At the same time, we aspire for them to have developed academic qualities such as a critical mind-set, an understanding of scientific research and an inquisitive attitude. Reading between the lines, it is not hard to see that many of these ambitions revolve not around students’ acquisition of knowledge but around the mastery of particular skills and attitudes. While there is a lot of literature available on innovative teaching methods to promote the learning of skills and attitudes, we wondered whether the standard methods of evaluation (exams, written assignments) are adequate to assess a student’s mastery of these important skills and, in turn, signal their accomplishment to any future employer.

But what exactly are these skills that students should learn, what are different ways of evaluating these skills, and how could we help students showcase their skills to the outside world? To address these questions, we organised a session on “Student evaluation and student portfolios” at the recent EuroTLC conference in Brussels. Using a World Café format, participants first identified generic skills students should acquire over the course of their studies and then turned to different forms of evaluation of these skills. Finally, the discussion centred on student portfolios as one tool to enable graduates to present their skills to future employers.

So what are the skills we should teach and students should learn? The list is long: being able to communicate clearly in both the written and the spoken word. Organising, prioritising and filtering information. Acquiring an inquisitive mind and becoming a life-long learner. Interestingly, these skills were quite generic to university education; it was much harder to identify skills unique to political science, international relations or European studies. Yet the relative importance of the identified skills – and thus their prominence in the curricula – is likely to differ.

How can we assess these different skills? Is there more than essays and exams with which to evaluate students? Does our examination privilege certain skills or types of learners, and if so, how could we change this? Participants agreed that the evaluation of skills and attitudes requires a slightly different approach, and offered a range of examples of how they or their institutions provide feedback and evaluate students. One participant, for instance, described how he has a “menu” of tasks that students must or can complete to obtain points in his class. Some elements are mandatory, but most are voluntary. Students can thus select a format that suits them: you may want to write an essay, but you could also give a presentation or take an oral exam. Another participant presented how they use peer review to obtain feedback on group work, whereby all group members at certain points rate themselves and their peers on specific criteria such as creativity, reliability or punctuality. These open ratings are then discussed within the group: why did you give or receive this rating? What do you take away from this? This peer review system worked very well, but did not influence the final mark.

But how much does such a mark really say about skills to a potential employer? How could students provide evidence for their skills beyond a numerical mark on an abstractly named course? To this end, we proposed the use of a portfolio. We viewed this portfolio as a sort of repository of students’ achievements and activities. The question then was: how can we, as academic staff, help students build this evidence into a student portfolio? Ideas ranged from specific written assignments such as position papers, speeches, articles in student journals or opinionated editorials to participation in simulations and student debates. Branding and badging are important for ensuring recognition of students’ accomplishments; competitions or the award of prizes are but one example of how this can feed into a portfolio. With such a repository, you have concrete examples you can refer to in cover letters or job interviews to plausibly show what you can do.

Ultimately, time was too short for our discussions to come to a conclusion. Still, we were left with the impression that our exercise is useful for many educational programmes. Clearly, each programme is likely to prioritise different skills, requiring a different evaluation practice and offering alternative opportunities to develop a student’s portfolio. As in our World Café, the choices ultimately made were contingent on the participants around the table. Identifying the required skills and tailoring one’s programme to them is a collective endeavour of all the teaching staff involved.

Specs Grading Update #4: On poor syllabus design, late work, and the woes of grading

It’s time for another specifications grading update! These posts are my attempt to be highly transparent about the benefits and challenges of using this method in my research methods class this semester, with all the trials and tribulations and the reflection and changes they prompt here on display.  Check out Posts 1, 2, and 3 if you haven’t already, or for a deep dive into the ins-and-outs of specifications grading as a system, take a look at our extensive discussions on the subject over the last year.

Today’s topics: requiring so much that I set up my students to fail; dealing with late work; and how all that grading is going. In other words, let’s talk about how even extensive reflection and consideration can result in basic syllabus mistakes that pose unacceptable challenges to students.

Let’s start with this basic question: why in the world did I require 21 assignments? Yes, pedagogically speaking, this made sense: these assignments, collectively, added up to the basic knowledge required of an introductory methods course. They covered topics ranging from writing research questions and hypotheses to measurement, ethics, sampling, correlation versus causation, and everything else. Back in the summer, I spent a lot of time considering whether to require everything, or to allow students to complete some smaller number or percentage of the total.

This student just failed my class due to poor syllabus-making on my part.

Here’s the problem: if a student completes twenty assignments with a satisfactory or higher score, but misses assignment #21 because they forgot or overslept or had a family emergency, then according to my syllabus they fail the course outright. Sure, the tokens help with this, letting students get a 24-hour extension per token, up to 72 hours—but what if they don’t have enough tokens in the bank, or they completely forget for a week or more? These students FAIL THE COURSE.
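Spelled out as code, the rule really is that unforgiving. Here’s a toy sketch of the token arithmetic: the 21-assignments and 72-hour figures come from my syllabus, but the function itself is just an illustration, not anything students actually interact with.

```python
import math

# Toy encoding of the syllabus rules: all 21 assignments are required,
# and each token buys a 24-hour extension on one assignment, to a
# maximum of 72 hours (three tokens). Miss one assignment entirely and,
# because all 21 are mandatory, the course is failed outright.

def assignment_saved(hours_late, tokens_available):
    """Can a late assignment still count under the token rules?

    Returns (saved, tokens_spent).
    """
    if hours_late <= 0:
        return True, 0            # on time: nothing to spend
    if hours_late > 72:
        return False, 0           # beyond the 72-hour cap: unsalvageable
    needed = math.ceil(hours_late / 24)  # one token per started 24 hours
    if needed > tokens_available:
        return False, 0           # not enough tokens in the bank
    return True, needed

print(assignment_saved(30, 2))  # (True, 2): two tokens cover 30 hours
print(assignment_saved(26, 1))  # (False, 0): one token short -> course failed
```

The second call is exactly the nightmare scenario above: a student barely over one day late, one token short, and the all-or-nothing requirement turns that into an F for the whole course.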


Online Learning Teams

Today we have a guest post from John M. Rothgeb, Jr., professor of political science at Miami University of Ohio. He can be contacted at rothgejm[at]miamioh[dot]edu.

Do online learning teams help students recall facts and/or analyze political problems? Instructors often use online chat rooms and discussions as convenient tools for engaging teams of students in the process of writing, reading, and reacting, helping them recall information, and giving them opportunities for analyzing problems.  Despite much agreement about the benefits of online teams, there is little evidence supporting their value.

I collected data on this subject from four sections of my introductory world politics course. Two sections from 2014, with a total of 75 students, employed online learning teams; two sections from the Fall of 2015 and Spring of 2016, with a total enrollment of 86, did not. The classes met at the same time, on the same days, in the same classroom, and received the same lectures.

The learning teams were created by randomly assigning five or six students to groups, with each group given its own online chat room.  Students received careful instructions regarding the use of chat rooms.  They were required to post chat room comments at least twice each week, first with a response to the week’s assignment and second with their reaction to other students’ posts. This activity counted for ten percent of the final grade. Students in the sections that did not use online learning teams were instructed to study material on their own.

I assessed students’ learning with in-class exams on conceptual definitions and a take-home essay in which they used theory to analyze an historical event—for example, applying deterrence theory to the outbreak of the Gulf War in 1990.

A chi-square analysis showed that students’ grades on the exams and the paper assignment did not significantly differ between the sections with online learning teams and those without. The findings imply that the online learning teams had no effect on student performance.
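For readers who want to run a similar comparison on their own sections, here is a minimal pure-Python sketch of a chi-square test of independence on a contingency table of grade counts. The grade counts below are invented for illustration; they are not the actual data from this study.

```python
# Chi-square test of independence on a contingency table of letter-grade
# counts (rows: sections with/without online teams; columns: grade bands).
# The counts are hypothetical, for illustration only.

def chi_square_statistic(table):
    """Compute the chi-square statistic for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical exam-grade counts: columns are A, B, C-or-below.
with_teams = [30, 25, 20]     # 75 students (the 2014 sections)
without_teams = [35, 28, 23]  # 86 students (the 2015-16 sections)

stat = chi_square_statistic([with_teams, without_teams])
df = (2 - 1) * (3 - 1)  # (rows - 1) * (cols - 1) = 2
# The critical value for df=2 at alpha=0.05 is about 5.99; a statistic
# below it means the grade distributions do not differ significantly.
print(stat < 5.991)
```

With real data you would more likely reach for `scipy.stats.chi2_contingency`, which also returns a p-value; the hand-rolled version above just makes the expected-count arithmetic visible.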

These results invite reflection on why online teams did not produce evidence of superior factual recall or analytical ability. More than two-thirds of students across all sections earned a B or better on the exams, indicating good recall. While fewer students earned an A grade on the analytical paper, only a small number earned grades below a C+. Perhaps the lectures, problem exercises, and readings that were part of all course sections were enough to generate this result.

Further research should explore the relative efficacy of online teams and weekly study guides. Given that my research did not control for variables like a student’s prior GPA, year in college, major, or extra-curricular activities, it also might be useful to include this kind of data in future investigations.

For additional information, please refer to John M. Rothgeb, “The Efficacy of Learning Teams: A Comparative Analysis,” Journal of Political Science Education 9, 3 (2013): 336-344, or contact him at the email address listed above.

Specifications Grading in Practice, Weeks 1 and 2

Weeks 1 and 2 of my newly-transformed specs grading course, research methods, are in the history bin, and I have much to report. As you may recall, I decided to overhaul my research methods course with this new system over the summer, and am teaching it for the first time this semester. Here on ALPS I will be chronicling my experience with the course, sharing with you the ups and downs over the course of the semester.

Check out my initial post on Day 0 here, and previous posts on specs grading here, here, here, and here.

So how did my first two weeks go?


Specifications Grading, Attempt 1, Day 0

Hello, ALPS readers! I’m back after a long summer and spring sabbatical, and am eager to get back in the classroom and talk all things pedagogy here on ALPS. I’m starting a new series where I outline in excruciating detail my experiences using Specifications Grading.  I’ll be sharing my materials, talking about the ups and downs, and reflecting on this unique grading system throughout the semester.

We’ve given quite a bit of attention to specifications grading in the past few months. I did a presentation on it at the ALPS workshop at the University of Surrey in May as I started working on adapting one of my own courses to this new system. I also consulted several former students and children-of-friends about what they thought of the system in the abstract, and the general consensus ranged from “shrug” to “that might be cool.” Experts in analysis, my young consultants.

In a nutshell, Specifications Grading is a system where all assignments are clearly linked to course learning outcomes, given clear specifications on what students need to do to earn a passing mark, and graded on a pass/fail-style system, where a pass is a high bar (typically a B). Assignments are bundled together by learning outcome, and course grades are assigned based on the bundles that students complete. So, higher grades go to students who either complete more bundles (achieving more learning outcomes) or higher-level bundles that demand more complex work. The course also employs flexibility mechanisms such as tokens to let students revise or reattempt a failing assignment, forgive a course absence, or gain some other kind of benefit. This system is supposed to ensure that all students who pass the class achieve the minimum learning outcomes for the course, but it also puts their grade into their own hands by removing the mystery behind grades (no longer 170 out of 200 points, but ‘excellent’, ‘satisfactory’ or ‘unsatisfactory’) and letting them choose which grade bundle to achieve.
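If it helps to see the mechanics concretely, here is a toy sketch of the bundle-to-grade logic. The bundle names and the grade ladder are my own invention for illustration, not any particular syllabus.

```python
# Toy version of specs-grading logic: every assignment in a bundle must
# earn at least a 'satisfactory' mark for the bundle to count, and the
# course grade depends on which bundles are completed. Bundle names and
# the grade ladder are hypothetical.

def bundle_complete(marks):
    """A bundle counts only if every assignment in it passed the spec."""
    return all(mark in ("satisfactory", "excellent") for mark in marks)

def course_grade(bundles):
    """Map completed bundles to a letter grade, highest rung first."""
    done = {name for name, marks in bundles.items() if bundle_complete(marks)}
    ladder = [
        ("A", {"core", "applied", "advanced"}),
        ("B", {"core", "applied"}),
        ("C", {"core"}),
    ]
    for grade, required in ladder:
        if required <= done:  # all required bundles completed
            return grade
    return "F"

student = {
    "core": ["excellent", "satisfactory", "satisfactory"],
    "applied": ["satisfactory", "unsatisfactory"],  # one failed spec sinks it
    "advanced": ["unsatisfactory"],
}
print(course_grade(student))  # only 'core' is complete -> "C"
```

Note how this captures the key trade: there is no partial credit inside a bundle (a single unsatisfactory assignment leaves the bundle incomplete), but the student chooses up front which rung of the ladder to aim for.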

Check out our previous posts for more general information on Specs Grading, or check out this great community of scholars working with the system. For this new series, I am going to write throughout the semester about my experience in adapting my research methods course to this system and teaching with it.
