Some of you might have read last week’s Inside Higher Ed editorial on combating internet-era cheating with authentic assessment. On this blog, “authentic” has occasionally been applied to writing assignments, presentations, and marshmallows. Before reading the editorial, I had never carefully thought about authenticity in relation to exams — something I can explore in future posts (and others’ thoughts on the subject are very welcome). Today, though, I’d like to provide an example of making class discussions more authentic.
In line with Simon’s musings on whether or not he matters, I’ve been wrestling with whether all of my ideas about how to structure classes to get particular results actually work. Do they matter?
In one of my classes, only a handful of students were able to answer a pretty basic question: what is the main claim in this reading? I’m sure many of us have experienced this before, but in line with my strategies on ensuring students do the reading, I thought I was well inoculated against the steady silence of puzzlement, for two reasons:
- Students have to write on all the readings each week. Those papers include an annotated bibliographic entry for each reading, where in 1-2 sentences they must state the main claim of each reading. Since about half the students wrote last week, they should know this.
- In this particular case I was asking about last week’s readings, which we had already discussed. This was review! I had mentioned these main points at least once before during our previous classes.
And yet, silence.
That’s not strictly fair. A handful of students were able to answer my question for each of the readings. But the bulk of the students just sat there, staring at me. First, let’s review Simon’s thoughts on getting students to talk, and then let’s consider the possible reasons for this, and how to solve them:
- They had not done the reading. Since they only have to write 8 of the 12 papers, these students may have chosen not to write last week–and therefore didn’t bother to read.
- Solutions: require more papers. 8/12 was probably too generous, and it is clear that when students do not have to write, they don’t always do the readings very closely (often due to other legitimate commitments, such as work).
- They did the reading, but couldn’t remember it. Students may not take good notes when they read, and therefore can struggle with details. They may also need training in how to identify key points so that they don’t miss the forest for the trees.
- Solutions: teach students how to take notes on the reading. It’s an important skill, and we should not assume they already have it.
- Continue to require the annotated bibliographies of the weekly readings so they build this skill over the course of the term.
- They did the readings, but were confused. The readings I am assigning are a mix, but many of them are scholarly in nature. Not all students in the class are majoring in the social sciences, and therefore may struggle with key terms. It was also a lot of reading–about 100 pages–and some of the chapters could have been split in two because they covered two widely different topics.
- Solutions: Review the syllabus to make sure that the mix of readings is appropriate in terms of amount and difficulty.
- Take note of key terms and review them in class so that non-majors don’t feel lost.
- Continue to review the key point of each reading in class prior to discussion so that everyone is on the same page.
- They may or may not have done the reading, but they did not make the connection between our discussions last week and the question of this week. While I had mentioned the key points of the readings last week in passing, I didn’t make a point of it–I did not write them on the board, or encourage students to take notes of what I had said. Often students don’t know how to recognize a key point that is made solely verbally.
- Solution: anytime I mention a key point, make sure I put it on the whiteboard to signal to students that it is important.
- They may have known the answer, but chose not to speak up. Even though I’ve encouraged my students to ‘fail’ in line with previous discussions on ALPS, many of them are afraid to say something wrong.
- Solutions: whenever possible, use small groups to discuss the question first. This allows students to check their answers with a small group of peers first, and then share them with the rest of the class if encouraged.
- Minute papers–where students take a minute to write down their thoughts–might also give them the time they need to choose the right wording for their responses.
- Note which students answered correctly in their papers, and then cold call on those students to read their responses.
- Using encouraging language and thanking students for offering their response may also encourage quieter students to share their ideas in the future.
- Track and increase wait time. What feels like an eternity to us in the silence is often mere moments, which might not be enough time to process the question and generate a response. There are plenty of strategies out there to do this effectively.
My takeaway: the students’ failure to answer my basic question is as much my failing as theirs. We need to recognize the reasons WHY students can’t identify the key point of a reading, and exhaust all the structural and instructional tools and methods we have to get them to a point where they do the readings and are willing and able to talk about them. Our job is to provide the tools and training they need to succeed, and we should always make sure that any issues on the part of our students aren’t caused by a failing of ours.
This is a follow-up to a previous guest post by Joel Moore on the benefits of assessing students’ preparation for class discussion rather than their participation in it. Students are asked to rate themselves on the amount of assigned reading they have completed prior to class, and this forms the basis of their preparation grade.
He has created a web app that simplifies the process. The app is available at https://joeldmoore.com/apps/preparation/.
A video that discusses how to use the app is at https://www.youtube.com/watch?v=SQdK1FwycKw.
Model Diplomacy is a series of free National Security Council simulations put out by the Council on Foreign Relations. Michelle used it in her class last year, and I decided based on her positive experience to try it in my intro to IR class this year. In this post I’m going to explain the basics of Model Diplomacy, discuss my experience using it, and give my recommendation. Spoiler Alert: I loved it.
In a post last year, I talked about the potential of using annotation software like CritiqueIt to make the reading process more collaborative. In short, by creating a single copy of the reading that students can mark up together online, there’s the potential for creating discussion prior to and during class, and also for getting students to see course readings as statements in a dialogue.
My first use of CritiqueIt was promising, but I’m less satisfied after using it further in two undergraduate seminars and a graduate-level continuing education course.
Two things have continued to work, probably still making the tool a net positive. First, as a diagnostic tool CritiqueIt makes class prep easier, because it gives me a window into what students find interesting or are struggling with. Students indicate their interest implicitly or explicitly, and they also seem relatively fine with using their comments to signal that something doesn’t make sense—especially useful when they’re having difficulty with something I didn’t expect. Second, they seem to like it. Students seem to perceive it as a cool new gimmick, and I seem to get credit for trying it.
However, while CritiqueIt lets me know what students want the conversation in class to be about, it hasn’t generated a conversation among students on its own. Students have posted a few responses to other students’ annotations, but the kind of exchange I mentioned in the original post hasn’t happened consistently. Students seem to be completing the assignment because it signals to me that they have, in fact, engaged with the reading. This provides feedback for me, as mentioned above, but it was not my ultimate reason for using the tool.
Since I want students to see political science writings as part of an ongoing exchange of ideas, there are three changes that I’ll be implementing next semester, thanks to insights from my colleague Daniel Smail, who has been experimenting with the same tool in his history courses:
- Build CritiqueIt into the entire semester. Students need time to get used to the tool, and the expectation that it’s an integral part of their work.
- Assign early readers. If everyone reads the night or morning before class, there’s less incentive to start a dialogue that none of their peers will respond to. By dividing up the collaborative readings and having one or two students make their annotations three or four days before class, there will be more time for students to jump into the conversation.
- Work CritiqueIt into summative assessment. This also normalizes the use of the tool, and gives students the incentive to develop better commenting skills. Students will need several days to virtually hand the document back and forth so this has to be accounted for in scheduling other assignments. But giving them a longer piece of journalism on the broad course theme and having them react to it, and then to each other, knowing that their comments will be graded on some explicit rubric, might be a better way to tease out their ability to respond critically to arguments—and actually use something they learned from class.
I’ve been telling students in my first-year seminar that the design of a good game often simultaneously combines chance, strategy, competition, and cooperation. About a month ago I invented a simple game to demonstrate how this could be accomplished.
I took the class outside, defined starting and finishing lines — about twenty-five meters apart — and divided the class into teams. The game had only one rule: every person on a team had to keep his or her left hand on the right foot of another teammate. First team to reach the finish line “won” the game.
Earlier this summer I wrote about two changes that I made to my five-week online summer course, Law, Courts and Politics: using Slack for class communication and specifications grading. Both experiments were a success.
Slack was a great addition. I found it easy to set up and to use. Students liked it. Thanks to the resources I noted in my earlier post, I created a simple structure: a channel for each week was the home of announcements, files, links, and discussion — the center of the course. The introduction channel gave students a place to practice, and the questions forum got some use, especially early in the term.
Because Slack has excellent apps for all mobile and computer platforms, I hoped that it would encourage regular communication, which it did. Total posts in the weekly channels ranged from 62 in week 2 to 90 in week 4. I posted reminders and introduced topics, but most posts were from the students. Nine or ten students were active each week; one student never posted in the weekly discussion forums. I was pleased that a group of students began posting mid-week and continued through the end of the week. Students picked up quickly on hashtags for topics and connecting to their fellow students via the @ symbol, which facilitated interaction. Posts were fairly long too, especially when you consider they were writing on their phones. I had expected phone use to result in short responses to comments, but that didn’t happen.
Do online learning teams help students recall facts and/or analyze political problems? Instructors often use online chat rooms and discussions as convenient tools for engaging teams of students in the process of writing, reading, and reacting, helping them recall information, and giving them opportunities for analyzing problems. Despite much agreement about the benefits of online teams, there is little evidence supporting their value.
I collected data on this subject from four sections of my introductory world politics course. Two sections from 2014 with a total of 75 students employed online learning teams, and two sections from the Fall of 2015 and Spring of 2016 with a total enrollment of 86 did not. The classes met at the same time, the same days, in the same classroom, and received the same lectures.
The learning teams were created by randomly assigning five or six students to groups, with each group given its own online chat room. Students received careful instructions regarding the use of chat rooms. They were required to post chat room comments at least twice each week, first with a response to the week’s assignment and second with their reaction to other students’ posts. This activity counted for ten percent of the final grade. Students in the sections that did not use online learning teams were instructed to study material on their own.
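The random assignment described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual procedure used in the course; the function name and the round-robin dealing approach are my own assumptions.

```python
import random

def assign_learning_teams(students, team_size=5):
    """Randomly assign students to teams of roughly `team_size` members."""
    roster = students[:]                         # copy so the original list is untouched
    random.shuffle(roster)                       # randomize the order
    n_teams = max(1, len(roster) // team_size)   # extra students spread across teams
    # Deal shuffled students to teams round-robin, yielding teams of 5-6
    return [roster[i::n_teams] for i in range(n_teams)]

# Example: a 32-student section yields 6 teams of 5 or 6 students each
teams = assign_learning_teams([f"Student {i}" for i in range(1, 33)])
```

Dealing round-robin from a shuffled roster keeps team sizes within one of each other, so no group ends up with a leftover pair.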
I assessed students’ learning with in-class exams on conceptual definitions and a take-home essay in which they used theory to analyze an historical event—for example, applying deterrence theory to the outbreak of the Gulf War in 1990.
A chi square analysis showed that students’ grades on exams and the paper assignment did not significantly differ between the sections with the online learning teams and those without. The findings imply that the online learning teams had no effect on student performance.
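The comparison described above can be sketched as a Pearson chi-square test on a contingency table of grade counts. The counts below are invented placeholders for illustration only; the post does not report the underlying table.

```python
def chi_square(observed):
    """Pearson chi-square statistic and degrees of freedom for a 2D contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (obs - expected) ** 2 / expected
    dof = (len(observed) - 1) * (len(observed[0]) - 1)
    return chi2, dof

# Rows: sections with / without online learning teams.
# Columns: hypothetical counts of exam grades (A, B, C or below).
observed = [
    [20, 35, 20],  # with online teams (n = 75)
    [22, 40, 24],  # without online teams (n = 86)
]
chi2, dof = chi_square(observed)
# With dof = 2, the 0.05 critical value is about 5.99; a statistic below it
# means we cannot reject the hypothesis that grades are independent of team use.
```

In practice this is a one-liner with `scipy.stats.chi2_contingency`; the hand-rolled version above just makes the expected-count arithmetic visible.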
These results invite reflection as to why online teams did not lead to evidence of superior factual recall or analytical ability. More than two-thirds of students across all sections earned a B or better on the exams, indicating good recall. While fewer students earned an A grade on the analytical paper, only a small number earned grades below a C+. Perhaps the lectures, problem exercises, and readings that were part of all course sections were enough to generate this result.
Further research should explore the relative efficacy of online teams and weekly study guides. Given that my research did not control for variables like a student’s prior GPA, year in college, major, or extra-curricular activities, it also might be useful to include this kind of data in future investigations.
For additional information, please refer to John M. Rothgeb, “The Efficacy of Learning Teams: A Comparative Analysis,” Journal of Political Science Education 9, 3 (2013): 336-344, or contact him at the email address listed above.
I’m taking a break from specs grading this week–not because I don’t have anything to write about, but actually because I’m too busy writing specs and grading homework modules to write up everything that’s going on. Plus we are in the midst of a search, and I’m buried in applications. I’ll be back on topic next week with my thoughts about grading, and some micro adjustments I had to make to the course as a result of my reflections.
When I’m not talking about specs grading, I try to share some quick and easy ideas for teaching that can make a big difference. These often fall into the vein of James Lang’s Small Changes in Teaching, both his book and his series over at the Chronicle of Higher Ed (definitely worth checking out!).
Today’s idea is about using audio and oral assignments in the classroom.
I spent the majority of my first class of the semester – or I should say, the students spent the majority of their first class – creating community norms, or classroom guidelines. This exercise was valuable for a number of reasons.