Specifications Grading in Practice, Weeks 1 and 2

Weeks 1 and 2 of my newly transformed specs grading course, research methods, are in the history bin, and I have much to report. As you may recall, I decided over the summer to overhaul my research methods course with this new system, and I am teaching it for the first time this semester. Here on ALPS I will be chronicling my experience with the course, sharing the ups and downs as the semester unfolds.

Check out my initial post on Day 0 here, and previous posts on specs grading here, here, here, and here.

So how did my first two weeks go?


Specifications Grading, Attempt 1, Day 0

Hello, ALPS readers! I’m back after a long summer and a spring sabbatical, and I am eager to get back in the classroom and talk all things pedagogy here on ALPS. I’m starting a new series in which I outline, in excruciating detail, my experiences using Specifications Grading. I’ll be sharing my materials, talking about the ups and downs, and reflecting on this unique grading system throughout the semester.

We’ve given quite a bit of attention to specifications grading in the past few months. I gave a presentation on it at the ALPS workshop at the University of Surrey in May, as I started adapting one of my own courses to this new system. I also consulted several former students and children of friends about what they thought of the system in the abstract, and the general consensus ranged from “shrug” to “that might be cool.” Experts in analysis, my young consultants.

In a nutshell, Specifications Grading is a system in which all assignments are clearly linked to course learning outcomes, given clear specifications for what students need to do to earn a passing mark, and graded on a pass/fail-style system in which a pass is a high bar (typically a B). Assignments are bundled together by learning outcome, and course grades are assigned based on the bundles that students complete. Higher grades therefore go to students who either complete more bundles (achieving more learning outcomes) or complete higher-level bundles that demand more complex work. The course also employs flexibility mechanisms, such as tokens, that let students revise or reattempt a failed assignment, forgive a course absence, or gain some other kind of benefit. This system is supposed to ensure that all students who pass the class achieve the minimum learning outcomes for the course, but it also puts their grade into their own hands by removing the mystery behind grades (no longer 170 out of 200 points, but ‘excellent,’ ‘satisfactory,’ or ‘unsatisfactory’) and letting them choose which grade bundle to pursue.

Check out our previous posts for more general information on Specs Grading, or check out this great community of scholars working with the system. In this new series, I will write throughout the semester about my experience adapting my research methods course to this system and teaching it.


Review of Learning Assessment Techniques

There are several widely-known practical guides for college teaching.

A recent addition to the list is Learning Assessment Techniques: A Handbook for College Faculty by Barkley and Major (2016).

The bulk of this book consists of an easy-to-use directory of assessment techniques, many of which can be found in the other books. Here, however, the emphasis is on how the data generated by each technique can be collected, analyzed, and packaged for dissemination.

The focus on the links between learning goals, learning activities, and outcomes assessment can be summarized with two questions people should be asking themselves.

Teaching Methods: Reliability and Validity at the AP Reading

Last month I participated in the annual Reading of the Advanced Placement US Government exams. In May, over 300,000 students in US high schools took this exam, which consists of sixty multiple-choice questions and four free-response questions, usually in the hope of earning college credit or placement in a higher-level course. The multiple-choice items are easily dealt with, but someone has to score the free-response questions. Enter the AP Readers, a collection of several hundred college professors and high school teachers who voluntarily spend a week sitting in a convention center, scoring essays for eight hours a day, seven days straight. Sound awful? It gets worse: we each score responses to the same essay question, over and over again. On a good day, I score about 450 essays, all answering the same question.

So why have I put myself through this tedious exercise for nine of the last ten years?
