Specs Grading Update #4: On poor syllabus design, late work, and the woes of grading

It’s time for another specifications grading update! These posts are my attempt to be highly transparent about the benefits and challenges of using this method in my research methods class this semester, with all the trials and tribulations and the reflection and changes they prompt here on display.  Check out Posts 1, 2, and 3 if you haven’t already, or for a deep dive into the ins-and-outs of specifications grading as a system, take a look at our extensive discussions on the subject over the last year.

Today’s topics: requiring so much that I set up my students to fail; dealing with late work; and how all that grading is going. In other words, let’s talk about how even extensive reflection and consideration can result in basic syllabus mistakes that pose unacceptable challenges to students.

Let’s start with this basic question: Why in the world did I require 21 assignments? Yes, pedagogically speaking, this made sense: these assignments, collectively, added up to the basic knowledge required of an introductory methods course. They covered everything from writing research questions and hypotheses to measurement, ethics, sampling, correlation v. causation, and everything else. Back in the summer, I spent a lot of time considering whether to require everything, or to allow students to complete some smaller number or percentage of the total.

This student just failed my class due to poor syllabus-making on my part.

Here’s the problem: if a student completes twenty assignments with a satisfactory or higher score, but misses assignment #21 because they forgot or overslept or had a family emergency, then according to my syllabus they fail the course outright. Sure, the tokens help with this, letting students get a 24-hour extension per token, up to 72 hours—but what if they don’t have enough tokens in the bank, or they completely forget for a week or more? These students FAIL THE COURSE.
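To make that failure mode concrete, here is a minimal sketch of how the token rule plays out, assuming a strict reading of the policy (one token per 24 hours, 72-hour cap); the function and variable names are my own illustration, not the actual syllabus mechanics:

```python
# A toy sketch of the token-based late policy described above: each token
# buys a 24-hour extension, capped at 72 hours total. Names and values are
# illustrative assumptions, not the actual course rules.
import math

HOURS_PER_TOKEN = 24
MAX_EXTENSION_HOURS = 72

def submission_accepted(hours_late: float, tokens_in_bank: int) -> bool:
    """True if tokens can cover the lateness; otherwise the assignment is missed."""
    if hours_late <= 0:
        return True                                   # on time
    if hours_late > MAX_EXTENSION_HOURS:
        return False                                  # beyond the 72-hour cap
    tokens_needed = math.ceil(hours_late / HOURS_PER_TOKEN)
    return tokens_needed <= tokens_in_bank

print(submission_accepted(30, 2))    # True: two tokens cover up to 48 hours late
print(submission_accepted(30, 1))    # False: only one token in the bank
print(submission_accepted(200, 5))   # False: more than a week late exceeds the cap
```

Under a "complete all 21 assignments" rule, that last case is exactly the student who fails the course despite passing everything else.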

Continue reading

Online Learning Teams

Today we have a guest post from John M. Rothgeb, Jr., professor of political science at Miami University of Ohio. He can be contacted at rothgejm[at]miamioh[dot]edu.

Do online learning teams help students recall facts and/or analyze political problems? Instructors often use online chat rooms and discussions as convenient tools for engaging teams of students in the process of writing, reading, and reacting, helping them recall information, and giving them opportunities for analyzing problems.  Despite much agreement about the benefits of online teams, there is little evidence supporting their value.

I collected data on this subject from four sections of my introductory world politics course. Two sections from 2014, with a total of 75 students, employed online learning teams; two sections from the Fall of 2015 and Spring of 2016, with a total enrollment of 86, did not. The classes met at the same time, on the same days, in the same classroom, and received the same lectures.

The learning teams were created by randomly assigning five or six students to groups, with each group given its own online chat room.  Students received careful instructions regarding the use of chat rooms.  They were required to post chat room comments at least twice each week, first with a response to the week’s assignment and second with their reaction to other students’ posts. This activity counted for ten percent of the final grade. Students in the sections that did not use online learning teams were instructed to study material on their own.

I assessed students’ learning with in-class exams on conceptual definitions and a take-home essay in which they used theory to analyze an historical event—for example, applying deterrence theory to the outbreak of the Gulf War in 1990.

A chi-square analysis showed that students’ grades on exams and the paper assignment did not significantly differ between the sections with the online learning teams and those without. The findings imply that the online learning teams had no effect on student performance.
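For readers curious about the mechanics, a chi-square test of this kind compares the observed grade distributions of the two groups against what would be expected if section type made no difference. Below is a minimal sketch using scipy.stats.chi2_contingency; the counts are invented for illustration and are not the study’s data:

```python
# Hypothetical sketch of the kind of chi-square comparison described above;
# the counts below are made up for illustration, not the actual study data.
from scipy.stats import chi2_contingency

# Rows: sections with vs. without online learning teams
# Columns: number of students earning A/B, C-range, or below C on the exams
observed = [
    [48, 19, 8],   # with online learning teams (n = 75)
    [55, 22, 9],   # without online learning teams (n = 86)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")

# A large p-value (e.g., p > 0.05) indicates no statistically significant
# difference in grade distributions between the two groups.
```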

These results invite reflection as to why online teams did not lead to evidence of superior factual recall or analytical ability.  More than two-thirds of students across all sections earned a B or better on the exams, indicating good recall. While fewer students earned an A grade on the analytical paper, only a small number earned grades below a C+. Perhaps the lectures, problem exercises, and readings that were part of all course sections were enough to generate this result.

Further research should explore the relative efficacy of online teams and weekly study guides. Given that my research did not control for variables like a student’s prior GPA, year in college, major, or extra-curricular activities, it also might be useful to include this kind of data in future investigations.

For additional information, please refer to John M. Rothgeb, “The Efficacy of Learning Teams: A Comparative Analysis,” Journal of Political Science Education 9, 3 (2013): 336-344, or contact him at the email address listed above.

Specifications Grading in Practice, Weeks 1 and 2

The first two weeks of my newly-transformed specs grading course, research methods, are in the history bin, and I have much to report. As you may recall, I decided to overhaul my research methods course with this new system over the summer, and am teaching it for the first time this semester. Here on ALPS I will be chronicling my experience with the course, sharing the ups and downs throughout the semester.

Check out my initial post on Day 0 here, and previous posts on specs grading here, here, here, and here.

So how did my first two weeks go?

Continue reading

Specifications Grading, Attempt 1, Day 0

Hello, ALPS readers! I’m back after a spring sabbatical and a long summer, and am eager to get back in the classroom and talk all things pedagogy here on ALPS. I’m starting a new series where I outline in excruciating detail my experiences using Specifications Grading. I’ll be sharing my materials, talking about the ups and downs, and reflecting on this unique grading system throughout the semester.

We’ve given quite a bit of attention to specifications grading in the past few months. I did a presentation on it at the ALPS workshop at the University of Surrey in May as I started working on adapting one of my own courses to this new system. I also consulted several former students and children-of-friends about what they thought of the system in the abstract, and the general consensus ranged from “shrug” to “that might be cool.” Experts in analysis, my young consultants.

In a nutshell, Specifications Grading is a system where all assignments are clearly linked to course learning outcomes, given clear specifications on what students need to do to earn a passing mark, and graded on a pass/fail-style system, where a pass is a high bar (typically a B). Assignments are bundled together by learning outcome, and course grades are assigned based on the bundles that students complete. So, higher grades go to students who either complete more bundles (achieving more learning outcomes) or higher-level bundles that demand more complex work. The course also employs flexibility mechanisms such as tokens to let students revise or reattempt a failing assignment, forgive a course absence, or gain some other kind of benefit. This system is supposed to ensure that all students who pass the class are achieving the minimum learning outcomes for the course, but it also puts their grade into their own hands by removing the mystery behind grades (no longer 170 out of 200 points, but ‘excellent,’ ‘satisfactory,’ or ‘unsatisfactory’) and letting them choose which grade bundle to pursue.
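To illustrate the mechanics, here is a minimal sketch of bundle-based grade assignment; the bundle names, assignment labels, and passing marks are my own invented examples, not the actual course specifications:

```python
# A minimal sketch of bundle-based grade assignment as described above.
# Bundle contents and assignment names are illustrative assumptions only.

PASSING_MARKS = {"satisfactory", "excellent"}

# Each letter grade maps to the set of assignments a student must pass to earn it;
# higher bundles require more (and more complex) work.
BUNDLES = {
    "C": {"rq_1", "ethics_1", "sampling_1"},
    "B": {"rq_1", "ethics_1", "sampling_1", "measurement_2", "design_2"},
    "A": {"rq_1", "ethics_1", "sampling_1", "measurement_2", "design_2", "analysis_3"},
}

def course_grade(marks: dict[str, str]) -> str:
    """Return the highest letter grade whose bundle is fully completed."""
    passed = {name for name, mark in marks.items() if mark in PASSING_MARKS}
    for grade in ("A", "B", "C"):          # check the highest bundle first
        if BUNDLES[grade] <= passed:       # all required assignments passed
            return grade
    return "F"

# Example: a student who passes everything except the highest-level assignment
print(course_grade({
    "rq_1": "excellent", "ethics_1": "satisfactory", "sampling_1": "satisfactory",
    "measurement_2": "satisfactory", "design_2": "excellent", "analysis_3": "unsatisfactory",
}))  # -> "B"
```

The key design point is that there is no averaging: a grade is earned by completing a whole bundle at the passing standard, which is what puts the choice of target grade in the student’s hands.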

Check out our previous posts for more general information on Specs Grading, or take a look at this great community of scholars working with the system. For this new series, I am going to write throughout the semester about my experience adapting my research methods course to this system and teaching it.

Continue reading

Review of Learning Assessment Techniques

Some of the more widely-known practical guides for college teaching:

A recent addition to the list is Learning Assessment Techniques: A Handbook for College Faculty by Barkley and Major (2016).

The bulk of this book consists of an easy-to-use directory of assessment techniques, many of which can be found in the other books. Here, however, the emphasis is on how the data generated by each technique can be collected, analyzed, and packaged for dissemination.

The focus on the links between learning goals, learning activities, and outcomes assessment can be summarized with two questions people should be asking themselves:

Continue reading