Serendipity in Research Methods

Sometimes it is easier to demonstrate real-world relevance than at other times.

Last week students in my research methods course read Charles Wheelan, Naked Statistics, Ch. 12, and Ashley A. Smith, “Students Taking More Credit Courses and Introductory Math Faring Well,” Inside Higher Ed, 7 December 2018.

They then had to answer this question: What mistakes are Nevada officials making with data about community college students?

As written, the Inside Higher Ed story describes people who should know better falling victim to omitted variable bias and confusing correlation with causation. Although I might be making similar mistakes in evaluating the in-class discussion about the assignment, I think students found it more interesting than most because the assignment was about other students.
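For instructors who want to make the omitted variable point concrete, a quick simulation works well. The sketch below is my own illustration, not the Nevada data: the variable names and numbers are invented, but they show how an unobserved confounder (here, prior preparation) can generate a credits-to-completion correlation with no causal link.

```python
# Hypothetical illustration (invented data, not the Nevada figures): an
# omitted variable -- prior preparation -- drives both credit load and
# completion, producing a correlation between them with no causal effect.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

preparation = rng.normal(size=n)                      # unobserved confounder
credits = 12 + 3 * preparation + rng.normal(size=n)   # better-prepared students take more credits
completion = 0.5 * preparation + rng.normal(size=n)   # completion depends only on preparation

# The naive correlation makes credit load look like it drives completion...
print(np.corrcoef(credits, completion)[0, 1])

# ...but within a narrow band of similar preparation, the association vanishes.
band = np.abs(preparation) < 0.1
print(np.corrcoef(credits[band], completion[band])[0, 1])
```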

Soon afterward, two similar items came across my radar:

Students prefer mixing and matching online with on-campus courses.

Common premises about college students are wrong.

I shared these with my students, as additional examples of analyzing (or not) data about their peers.

Critical thinking and the Ukraine invasion

I’m not an IR person, and I know it.

Unfortunately, a lot of the people I follow on social media do think they are now specialists in warfare, diplomacy or the operations of civil nuclear facilities. These people were also once ‘experts’ in epidemiology, Brexit, macroeconomics, US presidential politics, populism, immigration and many other things besides.

I have my doubts.

This is probably also a problem you face as you try to make sense of the world around you: yes, you know some people who do actually really know stuff, but they get buried in a big pile of hot takes, motivated reasoning and even propaganda.

So what to do?

I’m guessing that Ukraine is an easier case for the readership of ALPS blog to handle, since it’s closer to many of our research interests: even if we don’t work on relevant topics ourselves, we know the people who do and can tap into their expertise.

Of course, as the whole Mearsheimer thing has shown in the past week, even very competent people come up with dubious positions, although you at least get lots of material for your next IR theory class.

(For my part, I’ve limited myself to working up the one element I do feel competent to speak on).

However, for your students this might still be at the edge of their knowledge, abilities and confidence, so how can we help them parse the situation?

For me, task number one has to be a strong refresher on how to evaluate information (and it’ll be a refresher, because of course you teach this as a matter of course, right?).

That means making sure they understand the importance of verification, of triangulation, of expertise and of all the other things that we have probably internalised over the years. If I were running a class that needed to engage with this, I’d be asking students to locate good guides on how to do this, then pulling them together into a master document that they can all use for their subsequent research.

In a case as fluid as an active conflict, information is incomplete and often contradictory, so giving students the tools to determine what they know and what it means is essential. The growing OSINT community is a really good starting point for looking at the operational end of things, while the more strategic reasoning requires engagement with those working in a number of different domains, including Russian politics, military doctrine and sanctions.

As we’ve seen in recent years with whatever crisis you care to imagine, there is a huge potential to access properly informed and well-evidenced specialists on any given topic. But that means cutting through the guff and being able to contextualise what we read.

And that’s a great life-skill to be developing in our students, regardless.

What You Think Depends On Where You Stand

Our superb librarians survey students and faculty annually. Results from this year’s survey are in. Student responses to one of the questions:

Faculty responses:

Notice that the frequencies of responses from these two groups are essentially mirror images of each other. Students are extrinsically motivated by grades, so they think in instrumental terms: I need correctly formatted citations and the specified minimum number of sources. Otherwise my grade will be negatively affected. Knowing whether a source is reputable is far less important. Faculty think the reverse: the ability to locate scholarly source material and analyze information for bias matters most.

I have tried to solve this problem in the past, and could not find a satisfactory solution. Consequently, I have focused more on curating quality content for students to consume than on marking down students because of their reliance on websites that are top-listed in Google searches. In fact, it’s one of the reasons I decided to stop assigning traditional research papers.

Given the survey results though, the problem extends far beyond my small corner of the curriculum. I’m not going to solve it independently.

Readers might find these other posts on information literacy skills to be of interest:

The Methods Silo Effect and Fixing Poor Research Skills

Googling

Write Your Own Headlines Activity

Write Your Own Headlines Activity

This post comes from Chelsea Kaufman, assistant professor of political science at Wingate University. She can be contacted at c[dot]kaufman[at]wingate[dot]edu.

In teaching undergraduate research methods, I often find that the students are intimidated by the subject matter and don’t see its relevance to their lives. I have increasingly emphasized to students that it prepares them to be savvy consumers of political information wherever they might encounter it. This approach introduces an additional challenge, however: students often lack the information literacy skills to evaluate the sources that they access. If I want students to have the skills to evaluate the political information they encounter, I obviously need to teach them these skills. How exactly can this be accomplished? 

It is not enough to tell students which sources are acceptable, because people tend to trust information that aligns with their political predispositions. Simply lecturing to students about the dangers of misinformation can reinforce false beliefs and increase their distrust of reliable sources. 

To avoid this conundrum, I have students write their own headlines based on public opinion poll data. I first find a poll with results covered in several media outlets. I then send students a link to (or print out) the results of the poll, without providing them any context as to how it was covered in the media. After writing their headlines, students share and compare them with those of their classmates and with published headlines about the data. Students learn to interpret data and evaluate whether the data received accurate coverage in the media. As the final part of the lesson, I then ask them to evaluate the polling methods used to obtain the data by, for example, considering how a question’s wording might have impacted the responses.
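One polling-methods check that translates neatly into class is the sampling margin of error. The snippet below is my own minimal sketch, not part of the published activity; it assumes a simple random sample and uses invented numbers.

```python
# Minimal sketch (my addition, hypothetical numbers): is a reported gap
# bigger than the poll's sampling margin of error?
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(0.52, 800)  # e.g., 52% support among 800 respondents
print(f"+/- {moe:.1%}")           # roughly +/- 3.5 points, so 52-48 may not be a real gap
```

Pairing a check like this with the question-wording discussion helps students see that a headline can mislead even when every number in it is technically accurate.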

You can view detailed instructions for the activity on APSA Educate. You can also read more about this topic and find examples of additional activities in my article Civic Education in a Fake News Era: Lessons for the Methods Classroom or in my chapter in The Palgrave Handbook of Political Research Pedagogy.

Possible Improvement To Team Research Projects

A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:

In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.

I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:

No more than two pages per part, which would limit the final report to eight pages.

Each part must include at least one data visualization — a chart or graph.

No photographic images.

How to measure whether your teaching’s working

As long as a cat…

As we hurtle towards the summer ‘break’ and everyone remembers the deadline they cut you some slack on, it’s also a time when we’re often thinking about next semester.

For those of you with interests in making L&T a bigger part of your work, one obvious route is researching and publishing on what you do in the classroom.

Often that might be about trying out something different with students, which you think generates benefits for their learning, and might be of use to others in the same situation: we’ve published lots of such pieces from our guest authors here at ALPS.

While the thing you’re doing is the obvious centre of attention, the second element – whether it works – sometimes gets a bit lost (speaking as someone who reviews a good number of journal submissions in this field), so I thought it would be useful to think a bit more about this.

Measuring learning turns out to be a less-than-simple task: if it were simple, we’d all already know how to do it. The problem turns in part on the multiplicity of things we might consider, and in part on the difficulty of making any accurate/meaningful measure of these things.

Learning is not simply about knowledge, but also skills, social capital and much more. Each of those itself has many sub-elements, not all of which might be immediately obvious to anyone, nor equally important to everyone. Likewise, learning happens at lots of different speeds, so do you focus on the immediate gains, or something more long-term?

The (faint) silver lining to this particular cloud is that everyone’s in the same boat. I’m yet to see a comprehensive evaluation tool that I could recommend to you, even though there are a number of really good ideas out there (for example, this or this (which makes the good point that students’ perception of what they learn isn’t the same as teachers’ measure of what they learn)).

The important thing here is to be mindful of this from the start of any pedagogic research, embedding your measurement protocol into the design from the start, rather than hoping it’ll come to you later: a short post-course questionnaire about whether your students liked the thing they did isn’t likely to suffice.

That means thinking about what elements you focus on measuring (and why), then on how you’ll measure them. In particular, think about whether and how you can have a control for your teaching intervention: if it’s not practical to have another group of students not doing it, then will pre/post testing cover things robustly enough? Just like your other research, try to control your variables as much as you can, so you can be more confident about isolating effects.
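To make that concrete, here is a minimal sketch of what a pre/post comparison might look like once the data are in. The scores are invented, and a paired t-test is only a first-pass check; with a comparison group you would compare gains across groups instead.

```python
# Hypothetical pre/post scores for the same ten students (invented data).
import numpy as np
from scipy import stats

pre = np.array([54, 61, 48, 72, 65, 58, 63, 70, 51, 67], dtype=float)
post = np.array([60, 66, 55, 75, 71, 60, 70, 74, 58, 73], dtype=float)

gain = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)  # paired test: same students before and after
print(f"mean gain = {gain.mean():.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```

Writing even this much down before the semester starts forces you to decide exactly what you will measure and when.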

And it also means asking for help if you’re unsure. Your institution probably has a great bunch of people centrally who work on just these kinds of projects and who can give you excellent advice and support. Likewise, you can ask us here or online about specific ideas: it’s worth looking back at our posts for suggestions and colleagues who’ve worked on similar things.

Do all that and your pedagogic research will be off to a flying start (which might be the only flying you get to do).

A Lesson Learned About Team Research Projects

Looking at student performance in the 2020-2021 academic year, I see evidence that team research projects due at the end of the semester can’t be scaffolded solely around individually-graded assignments completed throughout the semester. For example, in my Middle East politics course, each student shared four individually-completed assignments with their teammates for use in their team’s historical timeline. In my research methods course, there were ten individual assignments that teammates were supposed to share with each other as drafts of sections of team research reports. While this approach does decrease free riding and encourage collaboration, it apparently does not ensure high quality research in the final product. Four of the five timelines that teams created in the Middle East course lacked mention of significant events. None of the four teams in the research methods course collected information from coffee farmers, processors, or distributors in Central America, despite my instructions to do so, nor did the final reports resemble the industry exemplars I had provided.

It seems that in students’ minds, my formative assessment of their individual work is totally unconnected to the summative assessment of their collaborative work. I probably need to break the team project into discrete, graded chunks, with each chunk layered on top of some of the individual assignments. Teams can use the feedback they receive on each successive chunk of the project to improve the quality of the final product.

Exam Essays that Develop Research Skills: A Second Look at Zotero

Today we have a guest post from Adam Irish, an assistant professor of political science at California State University, Chico.

Like many professors, I change my teaching to fit the class or, in the past year, the Zoom discussion I am leading. My lower division, survey courses focus on building a scholarly vocabulary and an understanding of concepts; upper division courses dive deeper into issues so that students can wade into the intellectual fray. However, this past year of online teaching revealed a potential overlap across this dichotomy: the development of research citation skills through the incorporation of Zotero.

Continue reading “Exam Essays that Develop Research Skills: A Second Look at Zotero”

The Online Field Research Project

To pick up the gauntlet metaphorically thrown down by Amanda last week, here is the first of what will probably be a series of posts on my experience teaching an introduction to research methods course online this semester. When I last taught this course two years ago, I used Amanda’s Best Breakfast in Town project. Given the constraints imposed by the coronavirus pandemic, sending students into restaurants simply wasn’t an option this time around. Yet I still wanted students to experience the trials and tribulations of real-world field research. I decided to create a new research project on specialty coffees from Central America, with teams investigating coffee from Costa Rica, El Salvador, Guatemala, and Honduras, respectively. To increase the authenticity of the project, students are responsible for designing a survey (replete with a pilot test and my coaching to try to avoid problems like sampling bias), conducting remote interviews with the people who produce and sell these coffees, analyzing the resulting primary source quantitative and qualitative data, and communicating their conclusions in an industry-style report.

Continue reading “The Online Field Research Project”

Redesigning methods teaching: parallel workshops for interdisciplinary learning

This guest post comes from Dr Viviane Gravey and Dr Heather Johnson, both of Queen’s University Belfast

Research methods are crucial, particularly in graduate learning, but methods modules are often the most unpopular with students and staff alike.

This makes methods modules prime candidates for either offloading onto temporary staff, or confining to designated ‘methods heavy’ positions for often isolated staff. This shunting of methods teaching onto precarious staff communicates unspoken but negative messages to students about the importance of this training, while consistently lower-than-average student evaluations (regardless of actual teaching excellence) negatively impact the profiles of vulnerable colleagues. 

At a time when we see silly op-eds calling for a Deliveroo approach to higher education (students deciding what they want to learn at MA level, and taught by temporary providers hired ‘on demand’), methods modules would be first on the chopping block. Yet these unloved offerings provide, or at least should provide, the building blocks for that much-loved rite of passage: independent research and the MA dissertation. Beyond the dissertation, a deep engagement with methods is needed to better understand where we position ourselves in our respective fields, and so provide critical insights into both the mainstream and its critics.

Redesigning how we teach methods is far from a new topic on ALPS, with examples from using games to make students’ introduction to methods less frightening, to a series of posts on flipping the methods classroom.

This post draws on our own experiences, alongside reflections from EUROTLC discussions on curriculum design. Usual caveats apply: this is not a silver bullet. It depends on our local conditions and is still very much a work in progress. But at a time when the pandemic is forcing a rethink in how, what, and even where we teach, our stranded, workshop-based module can offer a useful starting point.

Context and problem

Following an administrative merger in 2016 we are a bigger school, with a growing number of MA students across 11 programs in Politics/International Relations, Anthropology, and History. Many students have backgrounds in other disciplines, and a growing proportion come from overseas. Some programmes are interdisciplinary, some more discipline-specific, with significant variation in student numbers from 6 up to 80+.    

Teaching different methods modules for each pathway is impractical, and while the merger offers opportunities for interdisciplinarity, combining methods teaching raises three dilemmas. First, should we aim for depth and specialization, or breadth and variety? Second, could we agree core teaching across the disciplinary boundaries?  Finally, how might we achieve student-led learning that encourages exploration and recognizes diverse backgrounds?

An innovative stranded, workshop-based module

Core or optional, breadth or depth? Instead of choosing we opted for both, via two simple design choices (a) ditching the one week/one topic model in favour of parallel workshops and (b) designing ‘strands’ to organise these workshops. Instead of covering 10 to 12 topics in as many weeks, we offer a wide range of parallel workshops, limited only by staff and room availability (and our collective imagination). Last year we offered 40 workshops, delivered in 8 weeks, taught by a team of over 25 colleagues according to their expertise, for close to 200 students. This also served to engage staff at all levels and in all areas of the School, centralizing rather than isolating methods teaching in the curriculum.

Workshops are organized across 6 strands (see examples in Figure 1) – from epistemology to case studies, in which colleagues walk students through their own research design in a recent project. These strands are populated according to the demands of our different MA programs, and also reflect the best practices of RCUK graduate training by exposing students to philosophy of science, and to both quantitative and qualitative methods. They seek to enable flexibility for students according to their prior experience, with workshops that build upon one another in complexity and with different entry points. A good example is the quantitative methods strand, which offers basic training for primarily qualitative-focused researchers, alongside both beginner and advanced workshops for students who wish to specialize.

Figure 1

Students can, in effect, design their own path through the module: guided by their own interests and goals, they must take at least 9 workshops, including at least one from each strand.  Each individual program has designated compulsory workshops that students must include in their schedule in order to meet any specialization requirements. Thus, students have the opportunity to specialize, for example, by comparing different approaches to research interviews (5 workshops), or to explore new methods or move beyond their disciplinary boundaries.

Students are assessed on an applied methods portfolio of two items – such as a short essay on epistemology, a data analysis exercise, or a practice interview or observation – and a research design proposal bringing together content from the entire module (literature review, research questions, methods choices, ethical considerations). This proposal can be linked to the MA dissertation, and students are encouraged to treat it as preparation for their own independent research, working with their dissertation supervisors where possible.

Where next?

Reflecting on the first two years of this module, the welcome increase in student choice came at three costs, which we are working to offset.

First, we need to ensure we do not ask students to run before they can walk: some students have no background in either methods or epistemological debates, and the kind of writing required in research design is often different than in a traditional essay.  As general training in writing skills is offered elsewhere in the university, this is difficult to address.  Nevertheless, we can both develop more ‘nuts and bolts’ workshops, and also sign-post students early on to outside support.

Second, the workshop model plays havoc with student timetables and our room-booking. Students can have different teaching loads week on week, and our commitment to (relatively) small class sizes means that we often need to add duplicate sessions to accommodate workshop popularity. This lack of certainty does not impact our student population equally – students working alongside their studies, those with caring responsibilities, or those living far from campus, will see their choices limited in practice. Providing more sessions online via asynchronous means will solve some, although not all, of these difficulties. We can also commit to publishing the timetable of workshops before term begins to facilitate student planning. 

Third, while the teaching load is shared, such a large and complex module comes with a commensurate administrative load for the course convenor. While some of that burden can be front-loaded in preparing the online learning environment (e.g. online workshop registration), the administrative load will remain large and often invisible.

Methods in a time of coronavirus

How teaching will happen in September remains uncertain. Nevertheless, we can focus on a number of ‘no regrets’ options.

First, we can ‘flip’ lectures, providing pre-recorded, asynchronous introductions to different methods and focusing any in-person class time on application. This would also allow students to discover a wider range of methods, and provide long-term resources for their dissertation.

Second, it will be important to provide some dedicated training in online research methods and in ways to adapt traditional methods to social distancing.

Finally, we can draw on external sources to broaden workshop options and resources. There is a wealth of methods teaching resources online – for example podcast series such as the UK National Centre for Research Methods podcast, or the Give Methods a Chance series.

In these trying times, it is time for universities to collaborate – where better than on methods teaching?