A short note on a column in Inside Higher Ed about student engagement in flipped courses. Chandralekha Singh, physicist and director of the Science Education Research Center at the University of Pittsburgh, reports that she and her colleagues interviewed thirty-seven science majors about remote instruction. These students said that they simply did not do ungraded in-class and out-of-class components of flipped courses. For example, they did not watch videos or complete self-assessment exercises on a weekly basis; instead, these tasks were completed, if at all, right before exams. Synchronous class meetings, in which the students were expected to discuss this work, became useless.
As an example of one of my points in my last post — making the connection between assignments and course learning objectives explicit to students — I have created an ungraded, anonymous survey for the first day of class. One of the survey items is “I want to learn how to . . . ” Students can choose from the following responses:
(1) decode scholarly literature about comparative politics through frequent practice.
(2) improve my argumentative writing skills through frequent practice.
(3) improve my reading comprehension skills through frequent practice.
(4) None of the above.
Once students complete the survey, I will explain to them how the question corresponds to course assignments. On Mondays, students will submit journal article analyses (options 1 and 3). Wednesdays will be dedicated to reading responses (options 2 and 3). This will be the weekly routine for most of the semester. I will encourage anyone who chooses option 4 to drop the course.
I will post details about my Monday and Wednesday assignments in the coming weeks.
I had something else scheduled to appear today, but since readers might be making last-minute changes to their syllabi to reflect recent events in the USA, here are some potentially useful resources for undergraduate instruction:
Zara Abrams, “What do we know about conspiracy theories?” American Psychological Association, 18 November 2020.
Reza Aslan, “Is the Trump presidency a religious cult?” Big Think, 15 April 2018.
Benedict Carey, “A Theory About Conspiracy Theories,” The New York Times, 28 September 2020.
Nicky Case, “To Build a Better Ballot: An Interactive Guide to Alternative Voting Systems,” December 2016.
Tyler Cowen, “The Theory of the Median Voter,” Marginal Revolution University.
Anna Kusmer and Carol Hills, “‘Even if a coup fails, it still damages your government’: What the US can learn from Sri Lanka’s coup attempt,” The World, 8 January 2021.
Richard Moulding et al., “Better the devil you know than a world you don’t? Intolerance of uncertainty and worldview explanations for belief in conspiracy theories,” Personality and Individual Differences 98 (2016): 345-354.
Radio Lab, “Tweak the Vote,” 5 November 2018.
James Purtill, “This model forecast the US’s current unrest a decade ago. It now says ‘civil war’,” ABC Triple J Hack, 17 June 2020.
Steve Saideman, “Why Do We Care About Ethnic Outbidding?” Saideman’s Semi-Spew, 9 December 2015.
Milan W. Svolik, “Polarization versus Democracy,” Journal of Democracy 30, 3 (July 2019): 20-32.
Amanda Taub, “The Rise of American Authoritarianism,” Vox.com, 1 March 2016.
Brian Winter, “System Failure: Behind the Rise of Jair Bolsonaro,” Americas Quarterly, 24 January 2018.
One last post about successes and failures from the previous semester: last summer a colleague pointed me toward Knight Lab Timeline JS, and, inspired by Matthew Wilson’s work on crowd-sourcing and self-instruction, I decided to include a timeline project in my undergraduate course on the Middle East. Setting up the project was relatively simple:
Students were already divided into teams for breakout discussions, presentations, and note-taking; I used the same teams for the timelines. I chose five Middle Eastern countries that featured prominently in assigned readings — Egypt, Iran, Iraq, Lebanon, and Saudi Arabia — and created corresponding files in Google Sheets using the spreadsheet template provided by Knight Lab. I gave each team access to its Google Sheet.
Students completed five graded individual assignments that were designed to prevent free riders and guarantee that teams were working on the project throughout the semester rather than only in a frenzied rush at the end. Here are the instructions for the assignment and its associated rubric:
To continue evaluating my successes and failures from last semester, this post looks at my attempt to create community in synchronous online undergraduate courses by dividing students into teams for breakout discussions, note-taking, and memo-writing.*
Zoom breakout discussions for reading responses worked fairly well. Before the semester started, I created a Google Slide file for each team to use for building presentations, and I randomly selected one team to present its conclusions once Zoom breakout rooms closed. I screen shared the presentation from my computer, since I had access to all the files. Students who did not participate in breakout discussions or in creating presentations were held accountable by their teammates in the evaluations completed at the end of the semester. The one aspect of breakout discussions that needs to change for next semester is also true for synchronous classes in general: students need to turn on their webcams. Video of faces is much better at facilitating community than black boxes.
Teams were allowed only one slide per presentation, but often the slides were badly designed — too much text, font too small, etc. In the future, I should require that students follow a specific format.
The Google Slide files ended up being a written record of breakout room discussions for each team; however, I don’t know if students used them as notes. Students definitely didn’t collaboratively write notes in the Google Doc files I had created. Teams either left these files blank, or just pasted screen captures from my PowerPoint presentations into them. Yet another example of students’ lack of note-taking skills.
The memo exercises were also a failure. In an individual graded assignment, students were supposed to make a recommendation in response to a prompt, and provide two different reasons in support of that recommendation. In teams, they were supposed to write a draft of a complete memo, guided by a template I had provided. I then chose one team’s memo at random to discuss as an example with the whole class. There were five iterations of this process. In the individual assignments, students sometimes submitted one reason, just stated in two different ways, in support of their recommendation. The drafts of complete memos produced by teams were usually disorganized and unpersuasive, and the quality of the writing did not improve with successive iterations. Most undergraduates simply lack the writing skills necessary for collaborating effectively on a task like this. Students should instead each write a single memo over the entire semester, in a step-by-step process requiring multiple revisions.
Time once again to evaluate my teaching successes and failures. As usual, I will focus on the failures, but to start, a happy accident: discovering the settings options in Google Forms. I still use Google Forms for students’ teammate evaluations. I gave students these instructions for this past semester’s evaluations:
You have 10 points to distribute across members of your team according to each person’s contribution to team projects. These projects include course notes on Google Docs, reading response breakout discussions, and presentations. For example: If one person did all the work, award that person 10 points and the other members of the team 0 points. If one person did 70 percent of the work, a second person did 30 percent of the work, and the rest of the team did 0 percent of the work, award the first person 7 points, the second person 3 points, and everyone else 0 points. Total points awarded across all members of your team must equal 10 or your response will be discarded. I will use people’s responses to calculate an average ranking for each member of your team. This ranking determines the teammate evaluation portion of your course grade [as many as 50 points out of more than 1,000, or approximately five percent].
So, in my mind, this is a very low-stakes assessment, without forced ranking. The five members of one team, in fact, noticed that they would all earn the full 50 points if they gave each other scores of 2. To me it was yet one more sign of their ability to collaborate productively.
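For anyone curious how the arithmetic works out, here is a minimal sketch of the scoring described above. It assumes the average of the points a student receives from teammates is scaled against an equal share (10 divided by team size) and capped at 50; the scaling rule is my own guess, since the instructions only say that average rankings determine the evaluation grade.

```python
def evaluation_grade(points_received, team_size, max_points=50):
    """Convert teammate-awarded points into an evaluation grade.

    points_received: points this student got from each teammate.
    Assumption: an average equal to the equal share (10 / team_size)
    earns the full max_points; the ratio is capped at max_points.
    """
    avg = sum(points_received) / len(points_received)
    equal_share = 10 / team_size  # e.g., 2 on a five-person team
    return min(max_points, round(max_points * avg / equal_share))

# A five-person team whose members all award each other 2s
# yields the full 50 points, as the team in the post realized:
print(evaluation_grade([2, 2, 2, 2], team_size=5))  # → 50
```

This matches the team's observation that uniform scores of 2 produce the maximum 50 points, but the exact mapping from averages to points is an assumption on my part.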
But as usual, some other students submitted a completed Google Form multiple times or ranked their team members with, for example, scores of 10, 9, 9, 9, and 9. However, when designing the Google Form for another class, I serendipitously clicked on settings (gear icon) when trying to do something else. I was greeted by this dialog box:
As shown, users can capture respondents’ email addresses and limit people to one response. Checking these boxes makes it easy to prevent and decipher students’ errors. One can even convert a Google Form into an auto-graded quiz by going to the Quizzes tab. Maybe this is quite familiar to you, but it was new for me. I’ll be making use of the settings options from this point forward.
Eight students (out of fifty-four) emailed me complaints about the teammate evaluation portion of the grade once I had entered the information in the Canvas LMS. They perceived earning 20 or 30 points out of 50 as “failing,” even though I explicitly inform students in multiple ways that the final course grade is based solely on total points accumulated over the semester, not on the results of individual assessment instruments. I think students’ mistaking perception for reality is in part due to my listing the teammate evaluation as a 50-point assignment on Canvas. When students don’t earn the maximum possible points shown, they react as if I’m at fault for their performance. Next semester, I will see if Canvas allows me to label this item as worth 0 points, to make it look like the teammate rankings are “bonus” points.
Last week I gave a surprise collaborative quiz to one class, as a test run for possibly using this exercise in my synchronous online courses next semester. The quiz consisted of five multiple-choice questions on basic concepts, deployed in three iterations. First, students took the quiz individually on Canvas, which auto-graded students’ answers but did not reveal which were correct. The individual quiz was worth up to half a percent toward the course grade.
Second, I sent students into team breakout rooms to confer and decide which answers to submit as a group. This version of the quiz was also worth up to half a percent of the course grade. I pasted the quiz into each team’s notes on Google Docs. Because the Canvas quiz tool does not have a “groups” setting, I had already created a Canvas assignment through which each team could submit its answers. Again students did not know which answers were correct — after class I had to read what teams had submitted and manually enter a quiz score for every student who had been present for the breakout room discussions.
Third, after breakout rooms closed, students answered the quiz’s questions yet again in the form of a Zoom poll. After closing the poll and sharing the results, I explained which answers were correct and offered to answer any questions.
Twenty-nine undergraduates are in the course. Three were completely “absent” — they never signed into Zoom during class that day. A fourth student logged out before I announced the group version of the quiz. For the remaining twenty-five students: twelve, or nearly fifty percent, scored higher on the collaborative quiz than on the individual quiz. Great! Three students, all members of the same team, scored lower on the former than on the latter. Ten students’ scores were unchanged.
Finally, the poll, which did not contribute to the course grade: One student left class by disconnecting from Zoom when breakout rooms closed. Of the remaining twenty-four students, nine got the same number of questions correct on the poll and the individual quiz. Ok. Three students did better on the former than they did on the latter. Good. Twelve scored worse on the poll. Terrible! I have no idea why this happened, given the improvement in scores on the collaborative quiz.
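To keep the bookkeeping straight if I run this exercise again, the three-way comparison above can be tallied with a short script. The scores below are hypothetical placeholders, not the actual class data:

```python
# Hypothetical quiz results (number correct out of 5) keyed by student.
individual = {"s1": 3, "s2": 4, "s3": 2}
group      = {"s1": 5, "s2": 4, "s3": 1}

def compare(before, after):
    """Count students who improved, stayed the same, or did worse."""
    up   = sum(after[s] > before[s] for s in before)
    same = sum(after[s] == before[s] for s in before)
    down = sum(after[s] < before[s] for s in before)
    return up, same, down

print(compare(individual, group))  # → (1, 1, 1)
```

Running the same comparison for individual versus poll scores would reproduce the nine-three-twelve breakdown reported above, given the real data.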
As the end of the semester approaches, I’m noticing fewer students signing into my synchronous online classes. I’m also noticing that some students sign in but never turn on their webcams and never respond, verbally or in the text chat, when asked questions. They log into Zoom and then ignore whatever is happening in class.
How to increase student “presence” in a course? The usual solution — whether face-to-face or online — is to make attendance obligatory and penalize students when they are absent. Early in my teaching career I abandoned this type of policy because I got tired of deciphering students’ claims about “excused” absences. I have no interest in learning about students’ medical or other problems, and I don’t want sick students attending class only to avoid exceeding an allowed number of absences. I believe that legal adults get to set their own priorities and suffer the consequences of their decisions. And students who don’t regularly attend and participate in my classes invariably do poorly grade-wise anyway. That’s their choice.
But that was the pre-Covid era. Given the difficulty students had with the transition to online instruction last spring, there is a chance that the student with mediocre academic performance in the physical classroom is doing terribly as an online student, simply because their time management skills, motivation, and willingness to exert effort weren’t great to begin with.
So I’m starting to experiment with a few techniques that I’m hoping will increase student participation in my synchronous online courses next semester. I believe they will operate as positive reinforcement rather than as a punitive attendance policy.
Some readers of this blog work at universities that have now shifted to online instruction after starting the semester with face-to-face classes — a repeat of what happened in March. You’re now faced with a very awkward transition. But as Simon, Amanda, and I wrote over the spring and summer, don’t try to wedge a square peg into a round hole. What works in the physical classroom often doesn’t function nearly as well online. And now is your opportunity to experiment.
Here is one simple suggestion: replace one synchronous class meeting each week with a week-long asynchronous online discussion. Here is one rubric for designing and grading these discussions. Here is another. Drop from the syllabus upcoming assignments that are worth an equivalent amount toward the course grade. Inform students about your reasoning for doing this — whether it’s to reinforce their understanding of previously studied concepts, to maintain a sense of community in the class, to lessen student stress at the end of the semester, or something else.
There are many other relatively simple adjustments that can be made that will simplify your life when teaching a course that has suddenly gone online.
I ran a quick anonymous survey in my undergraduate courses last week, as a way to find out what’s on students’ minds and how it might be affecting their academic performance. The survey’s four questions and an analysis of the responses are below. Results might be skewed because only 70% of the 54 students on my course rosters connected to class on Zoom when I administered the survey. In my experience, absenteeism correlates strongly with lousy academic performance, but this semester it could in part be a maladaptive response to greater than usual levels of stress and anxiety. I just don’t know.
Five years from now, what outcome do you want to have achieved in your life? Nearly all the students who responded wrote that financial stability from a career was an important objective. Less than one-quarter listed happiness or enjoyment. Seven students wrote that travel was a goal. Only one specifically referenced being healthy.
What can you do in this course to make achieving your desired outcome more likely? About 40% of respondents commented in some fashion about practicing the application of conceptual knowledge. 30% mentioned getting a good or passing grade for the course. Only five students said anything related to understanding different cultures or perspectives. There were six comments about improving writing, time management, or note-taking skills.
What are you worried about? Here there were about a dozen comments each about employability after college, long term economic effects of the pandemic, and the election/condition of the country. Four students said they were concerned about not having a satisfying career or a fulfilling life. Two said that they were worried about the environment and the future of the planet.
What are some small, practical actions that you can take to respond to your worries and help you achieve your desired outcome? This is where I was surprised, though perhaps I shouldn’t have been. Few students listed any simple behavioral changes that might help them better manage stress. There were twenty comments about working harder, spending more time on coursework, or leveraging internships to create future employment or graduate study opportunities. Slightly more than 25% of respondents said they could focus more on the present or not try to control what can’t be controlled. Only two students discussed seeking emotional support from family or peers. No students mentioned getting sufficient sleep, eating healthily, or exercising.