What Do Students See?

What do you conclude about the organization of this course, specifically the quizzes, based on the image below — part of the homepage for the course website?

The document containing the course syllabus is formatted in a similar manner.

To me, the course obviously contains a series of topical units, each ending with a quiz that tests knowledge of that unit.

Given the number of students who are emailing me questions like “What will Quiz X cover?”, it’s not so obvious to many of them. Apparently students don’t know how to read a syllabus, even when they do read it.

Does A River Really Run Through It?

Do students experience the curriculum in the manner intended? Probably not. The curriculum in the department that I just finished chairing includes a sequence of courses common to all three of the department’s majors — one course each at the 200-, 300-, and 400-level. All students are supposed to take the 200-level research methods course when they begin their respective majors, in part to identify a topic of interest before a semester of study abroad, an internship, or some other field experience in the junior year, during which they research that topic. The 400-level course is the senior capstone, where they are supposed to compile the findings of the work they’ve already done and generate a finished report.

The reality? A fair number of seniors start the capstone not having previously selected a topic. They then have a single semester to identify their topic, design and complete the necessary research, and write about it — a situation that promotes a less than ideal final product.

The obvious solution to the problem would be to have each student create a research design in the 200-level course, begin data collection and analysis in the 300-level course, and tie everything up in a nice little bow in the capstone.

But there are complications. Each of these three courses is taught by a different instructor. Sometimes students don’t take the 300-level course until the senior year. And the university’s general education requirements — a distribution model — reinforce students’ preconception that knowledge exists in discrete boxes and that courses have little or no connection to each other. While some of this could be prevented by imposing hard prerequisites, such a system would exclude many of the students who enter the department because they’ve changed majors or added a double major. Given the small size of these academic programs, we want to encourage, not discourage, enrollment.

Storytelling with GIS

Another example of why it’s good to consult with librarians:

For the last several years in my globalization course, I’ve had student teams create and deliver presentations on their commodity chain analyses and ethnographies of consumption. Generally students build PowerPoint files for these assignments; occasionally someone uses Prezi. Simple rubrics make grading this work very easy. But the end products aren’t going to make recent graduates stand out from the competition when interviewing with prospective employers. It’s also difficult to convey the content of the entire project in a single presentation without showing a mind-numbing number of slides. Enter the storymap . . .

One of our librarians, a specialist in digital scholarship whom I’ll be working with next semester, introduced me to the digital storytelling tool from Esri,* a.k.a. the Environmental Systems Research Institute, which allows a person to create a multimedia presentation with ArcGIS. Rather than describe what this looks like, I’ll show you:

My task now is to reconfigure the project’s different assignments so that students complete their own storymaps by the end of the course, and figure out how to evaluate them.

*I have no financial interest in this company.

SIFTing for the Facts

The Winnowers, 1855, Gustave Courbet

Anyone interested in teaching students how to fact-check claims should visit the Check, Please! starter course. Check, Please! is a free training course in how to use the SIFT method to evaluate the accuracy of online information:

  • Stop.
  • Investigate the source.
  • Find trusted coverage.
  • Trace claims, quotes, and media back to the original context.

Check, Please! was created with Notion (a company in which I have no financial interest), but as stated on the website’s homepage, the content is intended for reuse and revision.

Call for Proposals: 2020 TLC

A reminder that the 16th APSA Teaching and Learning Conference will be held February 7-9 in Albuquerque, New Mexico. Proposals are due September 23. Full details are at the APSA’s TLC 2020 webpage. As I’ve mentioned previously, this conference is not the standard sequence of tedious, badly attended panel sessions. Participants can facilitate interactive workshops or engage in full-weekend working groups on particular topics.

If I remember correctly, when the TLC was last held in Albuquerque, a small group of attendees began talking about the need to better communicate what we do and what we are passionate about. This blog was the result.

Why I Got Arrested (Twice) Last Semester

Today we have a guest post from Andre Audette, assistant professor of political science at Monmouth College. He can be reached at aaudette [at] monmouthcollege [dot] edu.

It’s about time that I come clean publicly: last semester I was arrested not once, but twice, at the start of class. My crime? Teaching constitutional law.

Students in my Civil Liberties course were wrapping up a unit on criminal procedure, which includes case law involving proper arrests and interrogations. To give them firsthand experience, I asked for two volunteers to arrest me and then achieve a conviction without using any unconstitutionally obtained evidence.

Before class started, I discreetly asked one student to watch over my snack-sized bag of “drugs” (oregano). At the time of my arrest I was handcuffed (using fake handcuffs that were easy to get out of) and brought over to the interrogation room, where I was placed under a portable clip lamp I had concealed in a canvas bag.

Throughout the simulation I did not make the arrest easy. I admitted to the crime before my rights were read, after which I vigorously denied the charges. I pretended not to understand my rights while accusing the officers of violating them, signed the rights waiver under a pseudonym, asked for and then rescinded my request for a lawyer, and pretended to be under the influence of mind-altering substances. Each of these represents one of the surprisingly common complications in criminal procedure.

After the simulation concluded, I asked the class to determine which evidence could be used against me in a court of law. The results were . . . murky. The “easy” constitutional interpretation of Miranda v. Arizona began to look a lot more difficult.

Students responded positively to the experience and gladly arrested me again on the last day of class. This time I played an intelligent and peaceful extraterrestrial who had been living in the United States for many years, a scenario that asked students to extend the logic of Plyler v. Doe, a case about the children of undocumented immigrants. Students acted as a jury to determine whether I, as an extraterrestrial, could be tried under a military tribunal, executed, and denied admission to law school despite being otherwise qualified. The exercise served as a review of the semester and a reminder that constitutional rights come from cases that push the boundaries of the law.

This simulation requires that the instructor cede a great deal of control to students in a way that may not be comfortable or even advisable for everyone. The professor should have a rapport with the students beforehand. The size of the class and its physical location are other considerations.

But my students reported that the exercise gave them a new understanding of what can otherwise be dry and unapproachable legal reading. Anecdotally, students seemed more attuned to the complexities and nuances of constitutional law in their exams and hypothetical case briefs after the simulation than they were before. And in their writing they were able to wade deeper into legal reasoning by analogy rather than sticking to a strict factual application of precedent. Students also noted in their course evaluations that they learned that the law is not as straightforward as they thought.

Thus, the exercise appeared to have achieved my goal of demonstrating that the law is not as cut-and-dried as students usually assume, and that most constitutional law is advanced through these tough cases, if it is ever settled at all.

Does the Question Determine the Answer?

Regular readers of this blog know that I sometimes ponder the clarity of my assignment and exam prompts (some past posts on this subject are here, here, and here). Students’ answers sometimes miss what, in my mind, the question targets, so I revise in the hopes of creating a prompt that is more transparent. But I don’t want prompts to be answerable with a Jeopardy-like regurgitation of facts. I want students to exert some cognitive effort to figure out how to apply the concepts that are relevant to the question at hand.

Usually this situation occurs with my undergraduates, but I’m noticing it more frequently with master’s degree students. A recent example is an assignment from my graduate-level introduction to comparative politics course.


What Do Grades Mean?

What do grades actually mean? I began pondering this question while designing a course for the fall semester. Theoretically a grade indicates the amount of knowledge or skill that a student possesses. But really? Those of us working in the USA are quite familiar with grade inflation. A final grade of C today probably doesn’t indicate the same level of knowledge or skill proficiency as a C from fifty years ago. There is also the persistent problem of knowing whether our assessment tools are measuring the types of learning that we think, and want, them to measure. And it is probably safe to assume that, both in and out of the classroom, there is a lot of learning happening that we just aren’t interested in trying to measure. The situation gets even more complex given that — again, in the USA — a “learning activity” often won’t function as intended if students believe that it has no discernible effect on their course grades.

I structure my syllabi so that the sum total of points available from all assessed work is greater than what is needed for any particular final grade. For example, a student might need to accumulate at least 950 points over the semester for an A, but there could be 1,040 points available. I do this to deliberately create wiggle room for students — with so many assignments, students don’t need to get perfect scores on, or complete, all of them. While this leads to higher grades in my courses than if I graded strictly on a bell curve, I want to give students plenty of opportunities to practice, fail, and improve. And I firmly believe that sloppy writing indicates sloppy thinking, while good writing indicates the converse. So in reality what I’m doing with most of my assignments is evaluating the writing abilities of my students.
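
For the arithmetically inclined, a minimal Python sketch of this points-with-slack bookkeeping is below. Only the 950-point A threshold and the 1,040 available points come from my example; the other cutoffs are hypothetical placeholders, not my actual syllabus values.

```python
# Sketch of a points-based grading scheme with built-in slack.
# Only the A cutoff (950) and the points available (1,040) come from
# the example above; the remaining cutoffs are hypothetical.

POINTS_AVAILABLE = 1040  # sum of all assessed work on the syllabus

# Minimum semester totals for each letter grade, highest first.
CUTOFFS = [
    (950, "A"),
    (870, "B"),
    (790, "C"),
    (710, "D"),
]

def final_grade(points_earned: int) -> str:
    """Return the letter grade earned by a semester point total."""
    for minimum, letter in CUTOFFS:
        if points_earned >= minimum:
            return letter
    return "F"

# A student can skip or fumble 90 points' worth of work and still earn an A:
print(POINTS_AVAILABLE - 950)  # 90 points of wiggle room
print(final_grade(955))        # "A" without a perfect record
```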

This system often produces a bimodal grade distribution with most of its weight at the high end. Expend a lot of effort and demonstrate a certain level of proficiency, and you will get a grade somewhere between an A and a B-. Choose not to expend the effort, or consistently demonstrate an inability to perform at a minimum level, and you will get a D or an F. I’m comfortable with this result, in part because I know from the cognitive science research on learning that repeated exposure and frequent testing build long-term memory.

This leads me to the reason for doubting that grades in my courses mean the same thing as they do in courses where the only assessment is done through mid-term and final exams composed of multiple-choice questions. Yes, the proportion of A’s in the latter might be lower than in the former, but I bet that on average my students are retaining more. At least I like to think that’s the case. There is no way for me to be sure.

Call for Proposals

The next New England Faculty Development Conference will be held on November 8 at the College of the Holy Cross in Worcester, Massachusetts. The deadline for proposals is August 17. Full details are here. The NEFDC is totally teaching-oriented, and interactive workshops are encouraged.

I am now the Director of Faculty Development at my university, in addition to being managing editor of this blog, so please get in touch if you would like to publicize a teaching-related conference or event.

What Sticks?

Inside Higher Ed recently published a column written by a community college dean on the most important subjects one took or could take in high school — part of a larger conversation that originated on Twitter. Responses to the column mentioned:

  • Theater productions, to learn how to work with other people who have different perspectives and objectives.
  • A foreign language, to learn principles of grammar that allow one to become a better communicator in English.
  • Typing, to learn how to communicate more quickly with less effort.
  • Bookkeeping, to learn how to manage one’s personal finances.

Comments also referenced the processes through which the learning occurred. For example, one person mentioned that he gained a better understanding of the here and now when a history teacher worked backward from the present instead of using the traditional method of moving from the distant past toward today (which in high school is almost never reached).

The column and the comments got me thinking about the same question as applied to college. What undergraduate course was the most useful to you, and why?

As I wrote a few years ago, I generally don’t remember anything about the content of my college courses. Sorry, James Clerk Maxwell, I’ve forgotten how to use your equations. But I do have memories of what actions I performed when I originally learned the content and how I felt when that happened. The general process stuck. The specific outcome did not.

While I have tried in my own teaching career to better emphasize process over content, I still don’t get the kind of feedback contained in the Inside Higher Ed piece. My university doesn’t collect data on this level from alumni. So maybe it’s time I started doing it myself with a survey.