Got an interesting classroom exercise, project, or experience that you’d like to share? We want to publish it. Submit a draft of a blog post in an email to email@example.com for the editors to review. Guidelines are on the About page.
One of the joys of being department chair is creating a curriculum map for information literacy learning outcomes — as part of a five-year program review for a department that is only two years old. Since I’m teaching research methods, a requirement for students in all three of the department’s interdisciplinary majors, I decided to make information literacy a focus of the course. I designed several brief assignments based on chapters in Charles Wheelan’s Naked Statistics that pertain to evaluating information sources for authority, reliability, and relevance. These tasks in turn complement, in my mind at least, two larger assignments: Amanda’s Best Breakfast in Town project and writing a research proposal.
I thought I’d post some of those assignments here on the blog along with an assessment of how well students did on them. First topic on the list is hypothesis construction:
- Naked Statistics, Ch. 1.
- https://www.mobilecoveragemaps.com/. Click on Ghana, Nigeria, and Tanzania to see mobile phone coverage in each country.
Given the availability of mobile phone coverage in Ghana, Nigeria, and Tanzania, how can we infer which country is the most violent? Why? (Generate a hypothesis about a relationship between mobile phone coverage and violence.)
Students did a good job thinking of possible causal relationships between mobile phone use and violence. Class discussion included ways to operationalize the concepts of violence, wealth, and happiness, which we did with some quick internet research. Students did not find an association between homicide rate and the amount of mobile phone coverage in Ghana, Nigeria, and Tanzania, which then led to the topic of sample size. The assignment seemed to work as I had intended.
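The sample-size problem can be made concrete with a quick sketch. The numbers below are hypothetical placeholders, not the figures students actually found; the point is that a correlation computed from three countries is uninformative no matter what it comes out to be.

```python
import math

# Hypothetical illustration only: three countries, percent of population
# with mobile coverage vs. homicides per 100,000 (made-up numbers).
coverage = [85.0, 78.0, 90.0]
homicide = [2.1, 34.5, 7.0]

def pearson(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(coverage, homicide)
# With n = 3 there is only one degree of freedom: almost any value of r
# can arise by chance, so the estimate tells us nothing reliable.
print(round(r, 2))
```

With so few observations, students can see that even a large |r| is no evidence of a real association — which is exactly the opening into the sample-size discussion.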
Something of an update to my last post on the slow-motion tsunami in U.S. higher education: Green Mountain College and Oregon College of Art and Craft will close at the end of this semester. Essentially the same fate will befall Hampshire College, whose board of trustees has limited the Fall 2019 incoming class to only about seventy deferred and early-decision admits. Few of them will enroll, and current students will transfer out, hastening the college's impending insolvency.
Applying my measurement of change in annual total expenses per FTE undergraduate from fiscal years 2011 to 2016 to these schools, I get the following percentages:
- Green Mountain College: 27
- Oregon College of Art and Craft: 24
- Hampshire College: 25
Note that these figures are far lower than those for several of the colleges and universities listed in my last post. Does an increase of 25 percent or more over a six-year period in the average cost per full-time undergraduate indicate that a private, tuition-dependent, small-enrollment institution is at high risk of closure? I’ll say, “Yes.”
What’s the figure for the college or university at which you work?
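If you want to run the numbers for your own institution, the calculation is straightforward. Here is a minimal sketch; the dollar and enrollment figures are hypothetical inputs for illustration, not any real school's data.

```python
def pct_change_per_fte(expenses_start, fte_start, expenses_end, fte_end):
    """Percent change in total expenses per FTE undergraduate
    between a start year (e.g. FY2011) and an end year (e.g. FY2016)."""
    per_fte_start = expenses_start / fte_start
    per_fte_end = expenses_end / fte_end
    return 100 * (per_fte_end - per_fte_start) / per_fte_start

# Hypothetical figures: expenses rise while enrollment shrinks.
print(round(pct_change_per_fte(30_000_000, 600, 33_000_000, 528), 1))  # → 25.0
```

Note that the per-FTE cost can climb even when total spending is flat, simply because the denominator — enrollment — is falling.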
I’m back in Antwerp for my regular visit to discuss L&T (and to get pushed on my thinking in this field).
One of the discussions yesterday was about trying to teach European integration through cultural products.
We see it often in IR, where there are plenty of films and TV shows that can be used to illustrate theory and other aspects of (not) bashing each other about.
But it’s not so easy for European studies, because there’s much less available.
A quick tour of the table (and some thought after) produced the following:
Robert Menasse’s The Capital, a novel set in Brussels and based on extensive embedding among the civil servants.
Yes Minister’s episode about the Eurosausage, dated (shown in 1984), but funny and shorter than a film.
The John Hurt movie The Commissioner (1998), which you’ve not heard of, because it’s rubbish and not really about the EU.
Series 4 of The New Statesman (1992), where Alan B’stard becomes an MEP after pushing his German challenger down a mineshaft. Silly.
The episode of Danish series Borgen on selecting a new Commissioner, which is interesting for the interface of European and national politics.
There’s the recent Uncivil War, a dramatisation about the UK’s 2016 referendum, but it’s not really about the EU at all.
And that was about it: the recent (and excellent) BBC series 10 Years of Turmoil isn’t fiction, and isn’t available outside the UK.
In short, a dearth of materials, which partly reflects the position of the EU in popular life (i.e. it doesn’t really have one).
Any more suggestions? Post them below and if we get enough we can talk about making a module for delivering them somewhere.
It seems there are some problems with posting comments (apologies), so I’m adding some more suggestions here. Email me s.usherwoodATsurrey.ac.uk if you want to join the carnival of fun!
From Patrick Bijsmans:
Middle England, by Jonathan Coe (https://www.theguardian.com/books/2018/nov/16/middle-england-by-jonathan-coe-review). Reading it now.
Did David Hasselhoff end the Cold War?, by Emma Hartley (https://www.theguardian.com/commentisfree/2013/mar/19/david-hasselhoff-berlin-wall-fall). Read this some years ago and it’s really funny, but with a serious twist.
Some comments on a recent study of active learning published in the journal PLOS ONE — “Knowing is half the battle” by Shaw et al. The study reports on data gathered in an introductory biology course that was taught with active learning techniques from 2013 to 2016. Post-course scores on a concept and skill inventory were significantly higher than pre-course scores, which the authors take as an indication that students learned. Inventory scores from traditionally taught iterations of the course are not reported. Without a control group, we have no idea whether the new pedagogy generates the desired learning outcomes more effectively than the old one did. This is the typical flaw in research on active learning.
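The design flaw is easy to see in miniature. With hypothetical scores (made up for illustration, not taken from the study), a pre/post gain can look impressive, but only a comparison against a control cohort — a difference-in-differences — isolates the pedagogy's contribution:

```python
# Hypothetical inventory scores (percent correct), illustration only.
active_pre, active_post = 45.0, 62.0      # active-learning cohort
control_pre, control_post = 44.0, 58.0    # traditionally taught cohort

active_gain = active_post - active_pre    # 17.0: looks like strong learning
control_gain = control_post - control_pre # 14.0: but students gain from any course

# Difference-in-differences: the part of the gain attributable to the
# new pedagogy rather than to simply taking the course at all.
did = active_gain - control_gain
print(did)  # → 3.0
```

A pre/post design alone reports the 17-point gain; only the control comparison reveals that most of it would likely have happened anyway.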
But there is a silver lining to this study. The researchers also measured student perceptions. Over time, students increasingly attributed their learning to course structure and pedagogy. Student course evaluations usually correlate with grades, but in this case, grades did not significantly change from year to year. So it appears that students’ expectations about the course eventually aligned more closely with how the course was taught.
This points to a phenomenon that I have noticed anecdotally: if you suddenly adopt an active learning pedagogy, prepare to be criticized initially by students, especially if all the other instructors that students encounter continue to teach in a traditional way.
Alternative title for this post:
Why Many Teacher-Training Programs Should Be Abolished
Perhaps some of you — at least in the USA — have noticed the phenomenon of college students using multi-syllabic words out of context. The student clearly does not know what the word he or she inserted into the sentence actually means.
I used to think this was an attempt to impress me in the hopes of getting a higher grade on the writing assignment — pull a complicated-sounding but inaccurate word from an online thesaurus instead of using something simpler. But perhaps the behavior is really a sign that the student is deficient in some basic literacy skills.
As pointed out in this National Public Radio story, millions of children in the USA do not learn how to read well at an early age because of the unscientific and ineffective methods used by their teachers. If children fall behind in the first few years of primary school, it’s probably difficult for them to become proficient readers later on. I’m now wondering if these deficits in literacy persist all the way into college.
There’s not much that separates PoliSci academics from others in most aspects of pedagogy, but one that is quite notable is the question of “what’s your politics?”
The reasons for this should be pretty clear, so I’ll not get into that, but instead will offer some thoughts, because we get this kind of thing on our side of the Atlantic too.
As the various respondents to Carolyn’s tweet suggest, the very question speaks to a set of assumptions, which can be usefully exposed and explored.
However, that can be a deflection, rather than an answer, so it still behoves us to consider what answers we can give.
It’s something I’ve had to chew on a lot in recent years, given my work on Brexit: “how did you vote?” is now getting overtaken by “what do you think we should do?”
The fact that I genuinely don’t know what we should do is neither here nor there, because the rest of what I’m offering people is what I claim to be impartial and fair insight into assorted issues, so if I’m seen as speaking for any one party then my whole work is compromised.
This is, of course, the problem we all face: politics gets seen as a clash of interests with no objective truth to be defended, thus meaning we must all be on one side or another.
Without wishing to get lost down an ontological or epistemological hole on this one, I think it’s possible to mark out a more segmented view of politics: we have our own views, but the consequence of those is limited, especially if we are reflective about these.
Thus I can acknowledge how I voted in the referendum, while also stressing that my interest now is in helping others to reach an informed and considered set of decisions about what comes next. It helps that this is my heartfelt belief – process matters much more than outcome to me right now.
But we can also communicate such messages in different ways in our classroom.
Promoting and defending a range of perspectives on contentious issues; fostering a space in which different views can be discussed with respect and tolerance; acknowledging the limits of what evidence (and anecdote, for that matter) can tell us.
These elements often prove to be much more meaningful in conveying the values of academic inquiry and debate and the interplay between facts and opinions than any “what’s your politics?” discussion.
Still doesn’t make it that much easier when you get asked, though.
Interested in designing a classroom game, but have no idea where to start? Being a fan of classroom games, I developed this checklist to help me think through my own designs. The only checklist items that I think are absolutely necessary are the objective and win conditions, as both are crucial for identifying the concepts you are measuring and providing students with clear and achievable goals. Other checklist items are dependent on your design. For example, if your game is not map-based, then a map and scale are not required, but a game with many pieces likely needs a detailed inventory. Game on!
- Win Conditions: how the game ends. Can be competitive (zero-sum) or cooperative (non-zero-sum). Games in which all teams can win are still challenging.
- Objective: what is the specific goal of your game?
- Number of Players: helps the designer conceptualize the game size and boundaries.
- Level of Detail: abstract to elaborate setting. Increased detail improves conceptual accuracy, but requires significantly more time to develop and play. Not that abstract games are necessarily easier to design!
- Inventory: all required pieces and parts to play the game. Be exhaustive, even down to number of spare rulebooks and pencils.
- Map or Board: visual display of the gameplay area.
- Scale: if the game requires length and volume measurement. Example: each hex or square equals 1/6 of a mile.
- Course of Play: every step for running a game from start to finish. This will be the most detailed portion of the game.
- Combat Resolution: determining the outcome when players cooperate or conflict during the course of play.
- Rewards and Punishments: mechanisms for players to advance or regress based on performance.
- Measurement: scoring the game. Can be qualitative (e.g. area of controlled space) or quantitative (number of points).
- Arbitration: handling rule and player disputes.
- Feedback: discussing game outcomes and recommended game improvements.
- Glossary: define key terms.
Asal, Victor. “Playing Games with International Relations,” International Studies Perspectives, Vol. 6, No. 3 (2006): 359-373.
Dunnigan, James. Wargames Handbook, Third Edition: How to Play and Design Commercial and Professional Wargames. Lincoln, NE: iUniverse, 2000.
Macklin, Colleen, and John Sharp. Games, Design and Play: A Detailed Approach to Iterative Game Design. Boston, MA: Addison-Wesley Professional, 2016.
Sabin, Philip. Simulating War: Studying Conflict through Simulation Games. New York: Continuum, 2012.
This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Heidrun Maurer.
Innovation and active learning have become easily welcomed catchphrases in higher education, and their successful implementation is too often taken for granted. Stanislava Kováčová from Masaryk University set out to test the added value of active learning for herself.
To her own surprise, her experiment showed no significant indication that her students had learnt more through active learning than through traditional lectures. In her contribution “Does active learning work? The experiences of Brno and Tehran psychology students” she presents the data she collected and reflects on how her students experienced passive, lecture-focused and active, student-focused learning. She tests three hypotheses: whether students in an active learning environment participate more, gain a higher level of content knowledge, or engage more. While the results are not statistically significant, they counterintuitively suggest a tendency for lecturing to be the more effective practice.
Stanislava’s contribution encourages all of us to think more carefully about how to measure the success of the teaching methods that we employ. Her attempt tellingly showcases the complexity of measuring learning, and also how important it is to think about the methods of data collection. In Stanislava’s case, institutional policies made it difficult to gather reliable, comparable data. In addition, one needs to consider the right moment to test the effect of learning tools, especially when it is not only about content but also skills: is it right after the class, at the end of term, or years later?
Furthermore, measuring the effect of learning must depend on the objective(s) that we set for our teaching innovation. Stanislava had decided to assess participation (“students asking questions”), knowledge (“students being able to answer questions”), and engagement (“students taking notes”), but those criteria will vary depending on the expected outcomes of the innovation. It is a good general reminder that we should not innovate for innovation’s sake: all attempts to improve the learning experience have to start from a concise definition of what is meant to change and why.
Stanislava’s project also reminds us that it is not just a question of whether we use active learning tools but of how we apply and integrate them into our students’ learning. One plausible explanation is that students in Brno and Tehran were overwhelmed by the task and would have needed more attempts to get used to switching from a lecture-based system to actively engaging with the exercises. Another is that the exercises we sometimes use do not achieve what they are meant to achieve and would need a different design altogether. Especially among colleagues unfamiliar with active learning, there is a tendency to design exercises that are too prescriptive and too narrow, leaving students no room to research and ask their own questions.
Last but not least, the reflections on Stanislava’s project emphasise what we must not ignore when employing active learning pedagogy: students’ skills such as active listening, processing information, and taking notes must not be taken for granted, and should be actively – indeed deliberately – encouraged and trained in an active learning environment.
Active learning pedagogy can help us a great deal to design tools that engage students, facilitate their learning, and train them as researchers. But applying active learning effectively requires a different mindset, and its successful application is harder in practice than it often looks. Adding a few exercises to a traditional curriculum is often not enough to harness its full potential.
As with learning more generally, it can only work with practice, critical reflection, and sometimes, trial and error.
This guest post is part of a series linked to the publication of G. Pleschova & A. Simon (eds.) Learning to teach in central Europe: Reflections from early career researchers. This post comes from Alexandra Mihai.
Active learning refers to a large range of teaching strategies and methods that put the student at the centre of the learning process. From debates and group discussions, to simulations and problem-based learning, this versatile approach has the students’ active engagement at the forefront, while teachers play the role of a coach or facilitator. Fujdiak captures in her chapter a very interesting instance of active learning being employed in an International Security Policy course. Her two goals are to find out what students think about the methods used and to assess whether they contributed to a deeper understanding of the topic. In order to do that, Fujdiak analyses student feedback from “minute papers”, enhanced by her own classroom observations.
The active learning approach was introduced in the second part of the course. The main aim was to complement the series of lectures from the first part and to get students to engage with the topics, which would in turn lead to a more effective learning experience. For this part of the course the large class was split into three seminar groups of 26 students each and the classes were conducted by seminar leaders, one of whom was Fujdiak. Throughout the six seminars she used various learning activities such as group discussions in various formats, brainstorming, mind-mapping and role play. The chapter contains annexes detailing the activities and their perceived impact, as well as a visual representation of the findings.
By analysing students’ qualitative feedback via content analysis and through her own observations, Fujdiak could draw a few conclusions concerning the impact of her active learning activities.
First of all, students found the student-student interaction very useful, and their overall engagement in class increased. Two seminars received mixed ratings: one in which the guest lecturer did not employ active learning at all, and another in which the activity was not well planned in terms of timing. This shows that students are keen observers of activity design. Moreover, the more familiar they become with active learning, the higher their expectations, which is mirrored in their degree of engagement with the respective tasks.
In her chapter, Fujdiak emphasizes some of the most important aspects of active learning. In order for this teaching approach to fulfil its main goals, it is crucial to put a considerable amount of effort into class design, with a focus on providing students with a clear structure and instructions. Moreover, effective learning activities need to be meaningfully integrated into the overall course design.
It would be interesting to see whether some of the activities would have a bigger impact if they were to take place in alternation with the related lectures, rather than in a separate part of the course, somewhat in isolation. As Fujdiak herself explains, this is not always a choice one has; she, like many other early career academics, had to operate within a pre-defined course structure. Her varied active learning activities and her reflective study are proof that teaching innovation can occur even under rather rigid external conditions. The important thing is to establish clear learning objectives, be receptive to students’ needs and feedback, and be bold enough to try out new ways of engaging students in their learning.
For reasons best known to others, it’s the end of our first semester here, so that means coursework grades are going back to students.
I was even more interested than usual in this event this time around because something unusual happened with my class: they came to talk with me about their assessment.
I know that might seem mundane, but despite my best efforts my office hours have often resembled one of the remoter oases in a desert: potentially of use, but rarely visited by anyone.
I’d love to tell you what was different this semester, but I genuinely have no idea: I did the things I usually did, so maybe it was a cohort effect. Or not.
In any case, I reckon I sat down for discussions with most of the students and emailed with several others. In those exchanges we typically covered both generic guidance on what was required and specific discussion on students’ plans.
Of course, the big question is whether that helped the students to do better.
At this point, I’ll note that my class had about 35 students and it’s a one-off event so far, so I’m alive to not over-reading the outcomes. Against that, the marking has been confirmed by the second marker.
That said, the main positive outcome was that the bottom half of the class moved up quite markedly. In previous years, I’ve always had a cluster of students who simply didn’t ‘get’ the assessment – a reflective essay – and thus came out with poor marks. This time, I had only a couple of students in that situation, and they appeared (from my records) to have not attended most of the classes, and hadn’t come to talk.
Put differently, the tail was severely trimmed and the large bulk of students secured a decent grade.
What didn’t appear to happen, though, was an overall shift upwards: the top end remained where it had been previously.
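To illustrate the shape of that change (with made-up marks, not my students' actual results), compare summary statistics for two hypothetical cohorts:

```python
import statistics

# Hypothetical marks, illustration only: a previous cohort with a long
# tail of weak reflective essays, and a current cohort without one.
previous = [38, 42, 45, 48, 50, 55, 58, 60, 62, 65, 68, 72]
current = [48, 52, 55, 56, 58, 60, 60, 62, 63, 65, 68, 72]

# The tail is trimmed: the weakest marks rise markedly...
print(min(previous), min(current))  # → 38 48
print(statistics.median(previous), statistics.median(current))  # → 56.5 60.0
# ...but the top end stays where it was:
print(max(previous), max(current))  # → 72 72
```

The minimum and median move up while the maximum doesn't budge — exactly the pattern of a trimmed tail without an overall upward shift.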
Again, I’m not sure why this might be. Without another cohort, I’m not even sure whether my guidance actually did anything at all.
Quite aside from the specific instance, it does underline for me how little we know about the ways in which our teaching practice does and doesn’t impact on student learning.
In this case, I don’t really know how one could ethically test the impact of formative feedback and support, given the multiple variables at play. If you have an idea, I’d love to hear it.