As promised in my last post about teaching risk-averse students, I am going to again apply Simon’s ABC technique to last semester’s teaching. And since I taught two sections of my first-year seminar, I’ll focus on that.
First item on the “abandon” list: in-class peer review of student-designed games. Although I think the rubric that students use to evaluate classmate-designed games is good, they simply refuse to use it to provide honest feedback. I know that the majority of the students understand at least some of the principles reflected by the rubric because of the way they have analyzed the games in the final exam. In the classroom, though, they rate the games as perfect. A potential replacement for the peer review process — and this is really more of a “begin” item — is a short writing assignment after each round of game design in which they compare the game their team designed with another team’s game that they played in class.
Second thing to abandon: my organization of memo-writing assignments. I have assumed, incorrectly, that first-semester college students can grasp the purpose and format of a memo with minimal instruction on my part. After three separate iterations of the assignment, each with an opportunity to rewrite the memo, I didn't see significant improvement in the quality of students' work, the same result as last year. A possible solution is to walk students step by step through the mechanics of writing a memo in class, so that by the end of the process they have in hand a document that they can submit for a "perfect" grade. But this would remove pretty much any opportunity for students to engage independently in creative thinking, which is another term for problem solving. More holding of students' hands to protect them from anything they might find unpleasant. I'll have to think more about how to better organize an assignment like this.
Third item on the list, which is speculative at this point: abandon the whole course. I've been teaching this first-year seminar since its inception four years ago, when it came into being through a revision to my university's general education requirements. The developmental rationale for the course is not well-defined, and the learning outcomes associated with it are mostly not measurable. Valid data on how the course may be of benefit to students simply isn't being collected, which means that it is really nothing but an empty three-credit curricular requirement. While I think the topic on which I have built the course is a good one, I am finding it less enjoyable to teach over time. And my interactions with university administrators about aspects of teaching it have been less than satisfactory. So, if I have the opportunity in future fall semesters not to teach the course, I might take it.
As promised, here is information about the final exam I have created for my first-year seminar. As I mentioned in my previous two posts in this series, my goal is to make students more aware of what is now often referred to as “design thinking”: in this course, they are not just learning how to build games, but to solve problems.
Instructions for the exam:
Write an essay that compares one of the games that you helped design and one of the games that you played in this course. Identify which game was better by analyzing how well the games:
- Incorporated the principles of the “golden rule” and “magic circle” in relation to events in the real world.
- Utilized elements of conflict and cooperation.
- Had clear written and unwritten rules.
- Facilitated meaningful decision making.
Use information from assigned course readings to support your argument. Be sure to include references to your sources in your essay to avoid plagiarism – this applies whether you are using direct quotations or just the ideas of another author. Use parenthetical in-text citations as with reading responses to save space. An essay with few or no relevant references to the course readings will suffer a reduction in grade.
- The essay should be double-spaced, in 11- or 12-point font, and the equivalent of 3-4 pages long.
- There is no need for a separate bibliography or title page; please do not include them.
- Work independently; do not discuss your essay with other students.
A screenshot of the rubric I’ll be using to grade the exams is below. As I mentioned about the game design beta test rubric, I am not really concerned with the exam scores generated by the rubric — my primary goal is getting students to become more aware of how their experiences translate into learning.
This is the second post on my process for game design by students. The first post, where I mention changing the beta test rubrics with which teams of students evaluate each other’s games, is here. The rubric originally looked like this:
Ok, but not great.
My new rubric more clearly references the principles discussed by Tomer Perry in his series on game design:
What is most important here is not the rubric’s criteria or point values, but the fact that it will be used by students rather than the instructor. The purpose of the rubric is to get students to benchmark the quality of their effort against that of their peers, and possibly make them into more autonomous learners in the process.
Today we have another post by guest contributor William R. Wilkerson, Professor of American Government and Politics at SUNY-Oneonta. He can be reached at bill [dot] wilkerson [at] oneonta [dot] edu.
This summer, for the first time, I am teaching an online version of my judicial process course, Law Courts and Politics. I adopted a specifications grading system, something that has been discussed by people like Linda Nilson at Inside Higher Ed and Amanda Rosen on this blog. With specifications grading, all assignments are graded on a satisfactory or unsatisfactory basis, and course grades are based on assignment bundles.
My course is five weeks long, with a distinct theme for each week's lesson. Each lesson includes an online quiz made up of multiple-choice and short-essay questions on the textbook (Corley, Ward, and Martinek's American Judicial Process), various discussion topics on the text, other assigned readings, video and audio, as well as a 600-750 word writing assignment. Each of these elements—quizzes, discussion, and the writing assignment, along with a summative assignment for those wishing to earn a B or an A—is tied to course learning objectives. The grade bundles are as follows:
Hello, ALPS readers! I’m back after a long summer and spring sabbatical, and am eager to get back in the classroom and talk all things pedagogy here on ALPS. I’m starting a new series where I outline in excruciating detail my experiences using Specifications Grading. I’ll be sharing my materials, talking about the ups and downs, and reflecting on this unique grading system throughout the semester.
We’ve given quite a bit of attention to specifications grading in the past few months. I did a presentation on it at the ALPS workshop at the University of Surrey in May as I started working on adapting one of my own courses to this new system. I also consulted several former students and children-of-friends about what they thought of the system in abstract, and the general consensus ranged from “shrug” to “that might be cool.” Experts in analysis, my young consultants.
In a nutshell, Specifications Grading is a system where all assignments are clearly linked to course learning outcomes, given clear specifications on what students need to do to earn a passing mark, and graded on a pass/fail style system, where a pass is a high bar (typically a B). Assignments are bundled together by learning outcome, and course grades are assigned based on the bundles that students complete. So, higher grades go to students who either complete more bundles (achieving more learning outcomes) or complete higher-level bundles that demand more complex work. The course also employs flexibility mechanisms such as tokens to let students revise or reattempt a failing assignment, forgive a course absence, or gain some other kind of benefit. This system is supposed to ensure that all students who pass the class are achieving the minimum learning outcomes for the course, but it also puts their grade into their own hands by removing the mystery behind grades (no longer 170 out of 200 points, but "excellent," "satisfactory," or "unsatisfactory") and letting them choose which grade bundle to achieve.
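For readers who think procedurally, the bundle logic above can be sketched in a few lines of Python. The bundle names and grade mapping here are invented for illustration; they are not taken from any actual course:

```python
# A minimal sketch of bundle-based grading. The bundle definitions and
# grade labels below are illustrative assumptions, not a real syllabus.

# Each grade requires completing a set of assignment bundles; higher
# grades require more (or more complex) bundles.
GRADE_REQUIREMENTS = [
    ("A", {"core", "analysis", "research"}),
    ("B", {"core", "analysis"}),
    ("C", {"core"}),
]

def course_grade(completed_bundles):
    """Return the highest grade whose required bundles were all completed."""
    completed = set(completed_bundles)
    for grade, required in GRADE_REQUIREMENTS:  # checked best-to-worst
        if required <= completed:
            return grade
    return "F"

print(course_grade(["core"]))                          # C
print(course_grade(["core", "analysis", "research"]))  # A
```

Note that no points are summed anywhere: a student's grade depends only on which bundles of satisfactory work they chose to complete, which is the transparency the system is after.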
Check out our previous posts for more general information on Specs Grading, or check out this great community of scholars working with the system. For this new series, I am going to write throughout the semester about my experience adapting my research methods course to this system and teaching it.
Last month I participated in the annual Reading of the Advanced Placement US Government exams. In May, over 300,000 students in US high schools took this exam, consisting of a series of sixty multiple choice questions and four free responses, usually in the hope of earning college credit or placement in a higher level course. The multiple-choice items are easily dealt with, but someone has to score the free response questions. Enter the AP Readers, a collection of several hundred college professors and high school teachers who voluntarily spend a week sitting in a convention center, scoring essays for eight hours a day, seven days straight. Sound awful? It gets worse: we actually score responses to the same essay question, over and over again. On a good day, I score about 450 essays, all answering the same question.
So why have I put myself through this tedious exercise for nine of the last ten years?
The first-year seminar I mentioned in my last post will have twenty-two students; each will be (or should be) submitting at least thirty-five reading responses during the semester. While the reading responses get students to read and write about the assigned material, I have to grade them within twenty-four hours of submission if my feedback is to have any effect. To make grading these writing assignments as easy as possible, I've used a form of specifications grading with a 2 x 2 rubric.
I’ve noticed that any long-form comments I make about what a student submits — besides compliments like “Good work” — almost always relate to mechanical errors in the student’s writing. I see spelling mistakes, disagreement between the singular and plural, incorrect verb tenses, and other problems that detract from the ideas students are trying to express. My rubric hasn’t included a criterion for writing mechanics, hence the perceived need for me to type something in the tiny comment box when grading.
I’ve decided to add that criterion to the rubric to further reduce the amount of time spent grading. The rubric now looks like this:
Since each reading response will be worth 30 points rather than 20, I will probably also boost my grading scale for the course from 1,000 to 2,000 points.
Previous posts on specifications grading here, here, and here.