SIFTing for the Facts

The Winnowers, 1855, Gustave Courbet

Anyone interested in teaching students how to fact-check claims should visit the Check, Please! starter course, a free training course on using the SIFT method to evaluate the accuracy of online information:

  • Stop.
  • Investigate the source.
  • Find trusted coverage.
  • Trace claims, quotes, and media back to the original context.

Check, Please! was built on the platform of the company Notion (in which I have no financial interest), but as stated on the website’s homepage, the content is intended for reuse and revision.

Call for Proposals: 2020 TLC

A reminder that the 16th APSA Teaching and Learning Conference is going to be held February 7-9 in Albuquerque, New Mexico. Proposals are due September 23. Full details are at the APSA’s TLC 2020 webpage. As I’ve mentioned previously, this conference is not the standard sequence of tedious, badly-attended panel sessions. Participants can facilitate interactive workshops or engage in full-weekend working groups on particular topics.

If I remember correctly, when the TLC was last held in Albuquerque, a small group of attendees began talking about the need to better communicate what we do and what we are passionate about. This blog was the result.

Why I Got Arrested (Twice) Last Semester

Today we have a guest post from Andre Audette, assistant professor of political science at Monmouth College. He can be reached at aaudette [at] monmouthcollege [dot] edu.

It’s about time that I come clean publicly: last semester I was arrested not once, but twice, at the start of class. My crime? Teaching constitutional law.

Students in my Civil Liberties course were wrapping up a unit on criminal procedure, which includes case law involving proper arrests and interrogations. To give them firsthand experience, I asked for two volunteers to arrest me and then achieve a conviction without using any unconstitutional evidence.

Before class started, I discreetly asked one student to watch over my snack-sized bag of “drugs” (oregano). At the time of my arrest I was handcuffed (using fake handcuffs that were easy to get out of) and brought over to the interrogation room, where I was placed under a portable clip lamp I had concealed in a canvas bag.

Throughout the simulation I did not make the arrest easy. I admitted to the crime before my rights were read, after which I vigorously denied the charges. I pretended not to understand my rights while accusing the officers of violating them, signed the rights waiver under a pseudonym, asked for and then rescinded my request for a lawyer, and pretended to be under the influence of mind-altering substances. Each of these represents one of the surprisingly common complications in criminal procedure.

After the simulation concluded, I asked the class to determine which evidence could be used against me in a court of law. The results were . . . murky. The “easy” constitutional interpretation of Miranda v. Arizona began to look a lot more difficult.

Students responded positively to the experience and gladly arrested me again on the last day of class. This time I played an intelligent and peaceful extraterrestrial who had been living in the United States for many years, a scenario that asked students to extend the logic of Plyler v. Doe, a case about the children of undocumented immigrants. Students acted as a jury to determine whether I, as an extraterrestrial, could be tried under a military tribunal, executed, and denied admission to law school despite being otherwise qualified. The exercise served as a review of the semester and a reminder that constitutional rights come from cases that push the boundaries of the law.

This simulation requires that the instructor cede a great deal of control to students in a way that may not be comfortable or even advisable for everyone. The professor should have a rapport with the students beforehand. The number of students in the class and the physical layout of the classroom are other considerations.

But my students reported that the exercise gave them a new understanding of what can otherwise be dry and unapproachable legal reading. Anecdotally, students seemed more attuned to the complexities and nuances of constitutional law in their exams and hypothetical case briefs after the simulation than they were before. And in their writing they were able to wade deeper into legal reasoning by analogy, rather than a strict factual application of precedent. Students also noted in their course evaluations that they learned that the law is not as straightforward as they thought.

Thus, the exercise appeared to have achieved my goal of demonstrating that the law is not as cut-and-dried as students usually assume, and that most constitutional law is advanced through these tough cases, if it is ever settled at all.

Does the Question Determine the Answer?

Regular readers of this blog know that I sometimes ponder the clarity of my assignment and exam prompts (some past posts on this subject are here, here, and here). Students sometimes don’t hit what, in my mind, the question targets, so I revise in the hopes of creating a prompt that is more transparent. But I don’t want prompts to be answerable with a Jeopardy-like regurgitation of facts. I want students to exert some cognitive effort to figure out how to apply concepts that are relevant to the question at hand.

Usually this situation occurs with my undergraduates, but I’m noticing it more frequently with master’s degree students. A recent example is an assignment from my graduate-level introduction to comparative politics course:

Continue reading

What Do Grades Mean?

What do grades actually mean? I began pondering this question while designing a course for the fall semester. Theoretically a grade indicates the amount of knowledge or skill that a student possesses. But really? Those of us working in the USA are quite familiar with grade inflation. A final grade of C today probably doesn’t indicate the same level of knowledge or skill proficiency as the C from fifty years ago. There is also the persistent problem of knowing whether our assessment tools are measuring the types of learning that we think, or want, them to measure. And it is probably safe to assume that, both in and out of the classroom, there is a lot of learning happening that we just aren’t interested in trying to measure. The situation gets even more complex given that — again, in the USA — a “learning activity” often won’t function as intended if students believe that it has no discernible effect on their course grades.

I structure my syllabi so that the sum total of points available from all assessed work is greater than what is needed for any particular final grade. For example, a student might need to accumulate at least 950 points over the semester for an A, but there could be 1,040 points available. I do this to deliberately create wiggle room for students — with so many assignments, students don’t need to get perfect scores on, or complete, all of them. While this leads to higher grades in my courses than if I graded strictly on a bell curve, I want to give students plenty of opportunities to practice, fail, and improve. And I firmly believe that sloppy writing indicates sloppy thinking, while good writing indicates the converse. So in reality what I’m doing with most of my assignments is evaluating the writing abilities of my students.
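The points arithmetic above can be sketched as a simple threshold lookup. Only the 950-point A cutoff comes from the description; the other cutoffs below are hypothetical, chosen just to illustrate the mechanism:

```python
# Hypothetical grade cutoffs in descending order; only the 950-point
# A threshold is from the post -- the rest are illustrative.
CUTOFFS = [(950, "A"), (900, "A-"), (850, "B+"), (800, "B"),
           (750, "B-"), (650, "C"), (550, "D")]

def final_grade(points_earned, cutoffs=CUTOFFS):
    """Map accumulated points to a letter grade; below all cutoffs is F."""
    for threshold, letter in cutoffs:
        if points_earned >= threshold:
            return letter
    return "F"
```

The wiggle room falls out of the arithmetic: with 1,040 points available, a student can skip or flub 90 points’ worth of work and still reach the 950-point A threshold.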

This system often produces a bimodal grade distribution shifted toward the high end. Expend a lot of effort and demonstrate a certain level of proficiency, and you will get a grade somewhere between an A and a B-. Choose not to expend the effort, or consistently demonstrate an inability to perform at a minimum level, and you will get a D or an F. I’m comfortable with this result, in part because I know from the cognitive science research on learning that repeated exposure and frequent testing build long-term memory.

This leads me to the reason for doubting that grades in my courses mean the same thing as they do in courses where the only assessment is done through mid-term and final exams composed of multiple-choice questions. Yes, the proportion of A’s in the latter might be lower than in the former, but I bet on average my students are retaining more. At least I like to think that’s the case. There is no way for me to be sure.

Call for Proposals

The next New England Faculty Development Conference will be held on November 8 at the College of the Holy Cross in Worcester, Massachusetts. The deadline for proposals is August 17. Full details are here. The NEFDC is totally teaching-oriented and interactive workshops are encouraged.

As the new Director of Faculty Development at my university, and managing editor of this blog, I invite you to get in touch if you would like to publicize a teaching-related conference or event.

A Classroom Competition in Risk Taking

Today we have a guest post from Kyle Haynes, assistant professor of political science at Purdue University. He can be reached at kylehaynes [at] purdue [dot] edu.

Thomas Schelling’s (1966) groundbreaking work on “brinkmanship” explains how deterrent threats are made credible between nuclear-armed opponents. Schelling argued that although rational leaders would never consciously step off the ledge into nuclear Armageddon, they might rationally initiate a policy that incurs some risk of events spiraling into an inadvertent nuclear exchange. Whichever state can tolerate a greater risk of accidental disaster could then escalate the crisis until the adversary, unwilling to incur any additional risk, concedes. For Schelling, this type of crisis bargaining is a competition in risk taking. I use the following simulation to teach this concept:

The simulation begins by randomly splitting the entire class into pairs of students. One student in each pair is designated as Player 1 (P1), the other as Player 2 (P2). At the beginning of each game the instructor places nine white table tennis balls and a single orange table tennis ball into an empty bowl or small bucket. In Round 1 of the game, P1 must decide whether to concede the first extra credit point to P2, or to “stand firm” and refuse to concede. If P1 concedes, P2 receives one point and P1 receives zero points. If P1 stands firm, the instructor will blindly draw a single ball from the ten in the bowl. If the instructor draws a white ball, both players survive, and the game continues to the next round. If the instructor draws an orange ball, then “disaster” occurs and both players lose two points.

If the game continues to the second round, the instructor removes a white ball from the pot and replaces it with another orange ball—there are now eight white balls and two orange balls. It is P2’s turn to decide whether to stand firm or concede. If P2 concedes, P1 receives one point. If P2 stands firm and the instructor draws a white ball, both players survive, and the game continues to Round 3. If, however, the instructor draws an orange ball, both players lose two points.
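The escalating draw above lends itself to a quick simulation. Here is a minimal Python sketch of one game; the cutoff-strategy representation (each player commits in advance to a last round in which they will stand firm) and the assumption that a disaster ends the game are my simplifications, not part of the classroom exercise, where students decide round by round:

```python
import random

def simulate_game(strategies, rng=None):
    """
    One play of the classroom brinkmanship game.

    strategies: (p1_cutoff, p2_cutoff) -- the last round in which each
    player is still willing to stand firm (my own simplification; in
    class, students choose freely each round).
    Round k is a draw from 10 balls of which k are orange, so the
    probability of disaster in round k is k/10; I assume a disaster
    ends the game.  Returns (p1_points, p2_points).
    """
    rng = rng or random.Random()
    scores = [0, 0]
    round_num = 1
    while True:
        mover = (round_num - 1) % 2        # P1 decides in odd rounds, P2 in even
        other = 1 - mover
        if round_num > strategies[mover]:  # mover concedes: opponent gets 1 point
            scores[other] += 1
            return tuple(scores)
        if rng.random() < round_num / 10:  # orange ball drawn: both lose 2 points
            return (scores[0] - 2, scores[1] - 2)
        round_num += 1                     # white ball: swap another orange in, escalate
```

Running many games with different cutoffs shows Schelling’s logic numerically: the player willing to tolerate one more round of risk wins the point whenever disaster doesn’t strike first, but both players’ expected payoffs fall as the mutual risk tolerance rises.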

Continue reading

What Sticks?

Inside Higher Ed recently published a column written by a community college dean on the most important subjects one took or could take in high school — part of a larger conversation that originated on Twitter. Responses to the column mentioned:

  • Theater productions, to learn how to work with other people who have different perspectives and objectives.
  • A foreign language, to learn principles of grammar that allow one to become a better communicator in English.
  • Typing, to learn how to communicate more quickly with less effort.
  • Bookkeeping, to learn how to manage one’s personal finances.

Comments also referenced the processes through which the learning occurred. For example, one person mentioned that he gained a better understanding of the here and now when a history teacher worked backward from the present instead of using the traditional method of moving from the distant past toward today (which in high school is almost never reached).

The column and the comments got me thinking about the same question as applied to college. What undergraduate course was the most useful to you, and why?

As I wrote a few years ago, I generally don’t remember anything about the content of my college courses. Sorry, James Clerk Maxwell, I’ve forgotten how to use your equations. But I do have memories of what actions I performed when I originally learned the content and how I felt when that happened. The general process stuck. The specific outcome did not.

While I have tried in my own teaching career to better emphasize process over content, I still don’t get the kind of feedback contained in the Inside Higher Ed piece. My university doesn’t collect data on this level from alumni. So maybe it’s time I started doing it myself with a survey.

Research the Write Way

Credit for the subject of today’s post goes to Lindsay LaChapelle, writing center acting director, and Alicia Vaandering, education and instructional design librarian, at my university. They can be contacted, respectively, at lindsay [dot] lachapelle [at] salve [dot] edu and alicia [dot] vaandering [at] salve [dot] edu.

I attended a presentation by the above-mentioned individuals on leveraging the expertise of academic support staff when using research-driven writing assignments. Undergraduates trying to complete these assignments typically lack proficiency in critical skills like:

  • Identifying peer-reviewed journals for literature reviews.
  • Knowing when and how to cite sources.
  • Decoding the rhetorical techniques of scholarly literature.
  • Applying the authentic writing principles of role, audience, and format.

For example, students often locate a pile of sources that they believe are pertinent, and only then begin writing. Or they do the writing first and then seek out what they think are a sufficient number of not-really-that-relevant sources to cite in the text.

Incorporating instruction from your university’s writing and research specialists can help solve these problems. The first step in the process is to consult with writing center and library staff, to share assignment information, identify your goals and expectations for the assignment, plan classroom workshop content, and schedule the workshops in relation to assignment due dates.

Workshop topics should reflect the four phases of the research writing process: exploration, reflection, revision, and becoming a creator of information. Each workshop can target one or more of the specific objectives for each phase, as shown in this table. “L” indicates a practice that normally falls within a librarian’s area of expertise, while “W” indicates a focus on writing. Think of the table as a menu from which to choose the skills in which your students most need training.

The classroom workshops allow both academic support staff and the course instructor to interact with students about their work as they proceed through the research and writing process. They can include probes of students’ pre-existing knowledge of college-level research and writing principles and the review of students’ work as they complete different components of the assignment.

Collaborating with librarians and writing specialists benefits both me and my students because it:

  • Reduces redundancy in instruction.
  • Shares the workload for lesson planning, delivering instruction, and assessment.
  • Reduces the instructor-to-student ratio in the classroom.
  • Models the collaborative nature of academic research.
  • Allows writing center staff to identify research-related problems that require librarian support, and vice versa — instead of leaving students in the dark about the kind of specialized help they might need and who can provide that help.

Emotional Labor in the Digital Age

Katherine Wright’s recent post on Twitter as a hostile environment caught my attention, especially her point about employer expectations that faculty “engage” with social media.

If you’re like me, you never received any formal training in using social media for professional purposes. Whatever you see from me on this blog is self-taught, a learn-as-you-go process in which mistakes, whether technical or editorial, get made. But the consequences of these mistakes are quite small, given that the format allows me to retain most of the privileges (male, Caucasian, overly educated, etc.) that I enjoy in so-called meatspace. Comments are few, come from like-minded colleagues, and are, in the end, moderated. The content doesn’t attract hackers, doxxers, or nuisance lawsuits.

That is not the digital world that many of us live in. We know that some people, because of their identities, are regularly subjected to negative biases in the work environment. Expecting them to communicate in media where such biases can be infinitely amplified, while not preparing them for what they are likely to encounter, is simply disastrous.

Making matters worse: employers that want their employees to act as semi-public figures for marketing purposes often expect those employees to endure the harassment and abuse that they subsequently receive, or, as if this were somehow possible, prevent the harassment and abuse from occurring in the first place by tailoring their communication to the needs of an anonymous and dysfunctional audience.