What You Think Depends On Where You Stand

Our superb librarians survey students and faculty annually. Results from this year’s survey are in. Student responses to one of the questions:

Faculty responses:

Notice that the frequencies of responses from these two groups are essentially mirror images of each other. Students, extrinsically motivated by grades, think in instrumental terms: "I need correctly formatted citations and the specified minimum number of sources; otherwise my grade will suffer." Knowing whether a source is reputable matters far less to them. Faculty think the reverse: the ability to locate scholarly source material and analyze information for bias matters most.

I have tried to solve this problem in the past without finding a satisfactory solution. Consequently, I have focused more on curating quality content for students to consume than on marking down students for relying on websites that are top-listed in Google searches. In fact, it’s one of the reasons I decided to stop assigning traditional research papers.

Given the survey results though, the problem extends far beyond my small corner of the curriculum. I’m not going to solve it independently.

Readers might find these other posts on information literacy skills to be of interest:

The Methods Silo Effect and Fixing Poor Research Skills

Googling

Write Your Own Headlines Activity

Write Your Own Headlines Activity

This post comes from Chelsea Kaufman, assistant professor of political science at Wingate University. She can be contacted at c[dot]kaufman[at]wingate[dot]edu.

In teaching undergraduate research methods, I often find that students are intimidated by the subject matter and don’t see its relevance to their lives. I have increasingly emphasized to students that the course prepares them to be savvy consumers of political information wherever they might encounter it. This approach introduces an additional challenge, however: students often lack the information literacy skills to evaluate the sources that they access. If I want students to be able to evaluate the political information they encounter, I obviously need to teach them these skills. How exactly can this be accomplished?

It is not enough to tell students which sources are acceptable, because people tend to trust information that aligns with their political predispositions. Simply lecturing to students about the dangers of misinformation can reinforce false beliefs and increase their distrust of reliable sources. 

To avoid this conundrum, I have students write their own headlines based on public opinion poll data. I first find a poll whose results were covered in several media outlets. I then send students a link to (or a printout of) the poll’s results, without providing any context about how it was covered in the media. After writing their headlines, students share them, comparing their versions with their classmates’ and with published headlines about the same data. Students learn to interpret data and evaluate whether the media covered it accurately. As the final part of the lesson, I ask them to evaluate the polling methods used to obtain the data, considering, for example, how a question’s wording might have affected the responses.
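A small numeric check can anchor that final step. The sketch below (in Python, with invented numbers: the 54% approval figure and the sample of 800 respondents are hypothetical) shows how students might compute a poll’s margin of error and judge whether a headline’s claim even clears it:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    # Approximate 95% margin of error for a reported proportion p
    # from a simple random sample of size n.
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 54% approval among 800 respondents.
moe = margin_of_error(0.54, 800)
print(f"54% +/- {moe * 100:.1f} percentage points")  # about +/- 3.5
```

A headline trumpeting a "surge" from 52% to 54% in such a poll would be over-reading a difference that sits entirely inside the margin of error.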

You can view detailed instructions for the activity on APSA Educate. You can also read more about this topic and find examples of additional activities in my article "Civic Education in a Fake News Era: Lessons for the Methods Classroom" or in my chapter in The Palgrave Handbook of Political Research Pedagogy.

Possible Improvement To Team Research Projects

A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:

In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.

I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:

No more than two pages per part, which would limit the final report to eight pages.

Each part must include at least one data visualization — a chart or graph.

No photographic images.

How to measure whether your teaching’s working


As we hurtle towards the summer ‘break’ and everyone remembers the deadline they cut you some slack on, it’s also a time when we’re often thinking about next semester.

For those of you with an interest in making learning and teaching (L&T) a bigger part of your work, one obvious route is researching and publishing on what you do in the classroom.

Often that might be about trying out something different with students, which you think generates benefits for their learning, and might be of use to others in the same situation: we’ve published lots of such pieces from our guest authors here at ALPS.

While the thing you’re doing is the obvious centre of attention, the second element – whether it works – sometimes gets a bit lost (speaking as someone who reviews a good number of journal submissions in this field), so I thought it would be useful to think a bit more about this.

Measuring learning turns out to be a less-than-simple task: if it weren’t, we’d all know how to do it. The problem turns in part on the multiplicity of things we might consider, and in part on the difficulty of making any accurate, meaningful measure of those things.

Learning is not simply about knowledge, but also skills, social capital and much more. Each of those itself has many sub-elements, not all of which might be immediately obvious to anyone, nor equally important to everyone. Likewise, learning happens at lots of different speeds, so do you focus on the immediate gains, or something more long-term?

The (faint) silver lining to this particular cloud is that everyone’s in the same boat. I’ve yet to see a comprehensive evaluation tool that I could recommend to you, even though there are a number of really good ideas out there (for example, this or this; the latter makes the good point that students’ perception of what they learn isn’t the same as teachers’ measure of what they learn).

The important thing here is to be mindful of this from the start of any pedagogic research, embedding your measurement protocol into the design rather than hoping it’ll come to you later: a short post-course questionnaire about whether your students liked the thing they did isn’t likely to suffice.

That means thinking about which elements you focus on measuring (and why), then about how you’ll measure them. In particular, think about whether and how you can have a control for your teaching intervention: if it’s not practical to have a comparison group of students who don’t do the activity, will pre/post testing cover things robustly enough? Just like your other research, try to control your variables as much as you can, so you can be more confident about isolating effects.
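To make the pre/post option concrete, here is a minimal sketch, assuming you have matched pre- and post-test scores for each student (the scores below are invented, and the choice of scipy and a paired t-test is just one illustration, not a full evaluation protocol):

```python
from scipy import stats

# Hypothetical matched scores for the same ten students,
# before and after the teaching intervention.
pre = [52, 61, 48, 70, 65, 55, 59, 62, 50, 68]
post = [58, 64, 55, 72, 70, 60, 63, 61, 57, 74]

# Paired t-test: is the average within-student change
# distinguishable from zero?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Even a significant result here only tells you that scores rose; without a comparison group you can’t rule out practice effects or everything else that happened that semester, which is exactly why controlling your variables matters.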

And it also means asking for help if you’re unsure. Your institution probably has a great bunch of people centrally who work on just these kinds of projects and who can give you excellent advice and support. Likewise, you can ask us here or online about specific ideas: it’s worth looking back at our posts for suggestions and colleagues who’ve worked on similar things.

Do all that and your pedagogic research will be off to a flying start (which might be the only flying you get to do).

A Lesson Learned About Team Research Projects

Looking at student performance in the 2020-2021 academic year, I see evidence that team research projects due at the end of the semester can’t be scaffolded solely around individually-graded assignments completed throughout the semester. For example, in my Middle East politics course, each student shared four individually-completed assignments with their teammates for use in their team’s historical timeline. In my research methods course, there were ten individual assignments that teammates were supposed to share with each other as drafts of sections of team research reports. While this approach does decrease free riding and encourage collaboration, it apparently does not ensure high quality research in the final product. Four of the five timelines that teams created in the Middle East course lacked mention of significant events. None of the four teams in the research methods course collected information from coffee farmers, processors, or distributors in Central America, despite my instructions to do so, nor did the final reports resemble the industry exemplars I had provided.

It seems that in students’ minds, my formative assessment of their individual work is totally unconnected to the summative assessment of their collaborative work. I probably need to break the team project into discrete, graded chunks, with each chunk layered on top of some of the individual assignments. Teams can use the feedback they receive on each successive chunk of the project to improve the quality of the final product.

Exam Essays that Develop Research Skills: A Second Look at Zotero

Today we have a guest post from Adam Irish, an assistant professor of political science at California State University, Chico.

Like many professors, I adapt my teaching to fit the class or, in the past year, the Zoom discussion I am leading. My lower-division survey courses focus on building a scholarly vocabulary and an understanding of concepts; upper-division courses dive deeper into issues so that students can wade into the intellectual fray. However, this past year of online teaching revealed a way to bridge this dichotomy: developing research citation skills through the incorporation of Zotero.


The Online Field Research Project

To pick up the gauntlet metaphorically thrown down by Amanda last week, here is the first of what will probably be a series of posts on my experience teaching an introduction to research methods course online this semester. When I last taught this course two years ago, I used Amanda’s Best Breakfast in Town project. Given the constraints imposed by the coronavirus pandemic, sending students into restaurants simply wasn’t an option this time around. Yet I still wanted students to experience the trials and tribulations of real-world field research. I decided to create a new research project on specialty coffees from Central America, with teams investigating coffee from Costa Rica, El Salvador, Guatemala, and Honduras, respectively. To increase the authenticity of the project, students are responsible for designing a survey (replete with a pilot test and my coaching to help avoid problems like sampling bias), conducting remote interviews with the people who produce and sell these coffees, analyzing the resulting primary-source quantitative and qualitative data, and communicating their conclusions in an industry-style report.


Redesigning methods teaching: parallel workshops for interdisciplinary learning

This guest post comes from Dr Viviane Gravey and Dr Heather Johnson, both of Queen’s University Belfast

Research methods are crucial, particularly in graduate learning, but methods modules are often the most unpopular with students and staff alike.

This makes methods modules prime candidates for either offloading onto temporary staff, or confining to designated ‘methods heavy’ positions for often isolated staff. This shunting of methods teaching onto precarious staff communicates unspoken but negative messages to students about the importance of this training, while consistently lower-than-average student evaluations (regardless of actual teaching excellence) negatively impact the profiles of vulnerable colleagues. 

At a time when we see silly op-eds calling for a Deliveroo approach to higher education (students deciding what they want to learn at MA level, taught by temporary providers hired ‘on demand’), methods modules would be first on the chopping block. Yet these unloved offerings provide, or at least should provide, the building blocks for that much-loved rite of passage: independent research and the MA dissertation. Beyond the dissertation, a deep engagement with methods is needed to better understand where we position ourselves in our respective fields, and so to provide critical insights into both the mainstream and its critics.

Redesigning how we teach methods is far from a new topic on ALPS, with examples ranging from using games to make students’ introduction to methods less frightening to a series of posts on flipping the methods classroom.

This post draws on our own experiences, alongside reflections from EUROTLC discussions on curriculum design. Usual caveats apply: this is not a silver bullet. It depends on our local conditions and is still very much a work in progress. But at a time when the pandemic is forcing a rethink of how, what, and even where we teach, our stranded, workshop-based module can offer a useful starting point.

Context and problem

Following an administrative merger in 2016, we are a bigger school, with a growing number of MA students across 11 programmes in Politics/International Relations, Anthropology, and History. Many students have backgrounds in other disciplines, and a growing proportion come from overseas. Some programmes are interdisciplinary, some more discipline-specific, with significant variation in student numbers from 6 up to 80+.

Teaching different methods modules for each pathway is impractical, and while the merger offers opportunities for interdisciplinarity, combining methods teaching raises three dilemmas. First, should we aim for depth and specialization, or breadth and variety? Second, could we agree core teaching across the disciplinary boundaries?  Finally, how might we achieve student-led learning that encourages exploration and recognizes diverse backgrounds?

An innovative stranded, workshop-based module

Core or optional, breadth or depth? Instead of choosing, we opted for both, via two simple design choices: (a) ditching the one-week/one-topic model in favour of parallel workshops, and (b) designing ‘strands’ to organise these workshops. Instead of covering 10 to 12 topics in as many weeks, we offer a wide range of parallel workshops, limited only by staff and room availability (and our collective imagination). Last year we offered 40 workshops, delivered over 8 weeks, taught by a team of over 25 colleagues according to their expertise, for close to 200 students. This also served to engage staff at all levels and in all areas of the School, centralising rather than isolating methods teaching in the curriculum.

Workshops are organized across 6 strands (see examples in Figure 1), ranging from epistemology to case studies, a strand in which colleagues walk students through their own research design in a recent project. These strands are populated according to the demands of our different MA programmes, and they reflect the best practices of RCUK graduate training by exposing students to the philosophy of science and to both quantitative and qualitative methods. They seek to enable flexibility for students according to their prior experience, with workshops that build upon one another in complexity and offer different entry points. A good example is the quantitative methods strand, which offers basic training for primarily qualitative-focused researchers alongside both beginner and advanced workshops for students who wish to specialize.

Figure 1

Students can, in effect, design their own path through the module: guided by their own interests and goals, they must take at least 9 workshops, including at least one from each strand.  Each individual program has designated compulsory workshops that students must include in their schedule in order to meet any specialization requirements. Thus, students have the opportunity to specialize, for example, by comparing different approaches to research interviews (5 workshops), or to explore new methods or move beyond their disciplinary boundaries.

Students are assessed on an applied methods portfolio of two items – such as a short essay on epistemology, a data analysis exercise, or a practice interview or observation – and a research design proposal that brings together content from the entire module (literature review, research questions, methods choices, ethical considerations). This proposal can be linked to the MA dissertation, and students are encouraged to treat it as preparation for their own independent research, working with their dissertation supervisors where possible.

Where next?

Reflecting on the first two years of this module, the welcome increase in student choice came at three costs, which we are working to offset.

First, we need to ensure we do not ask students to run before they can walk: some students have no background in either methods or epistemological debates, and the kind of writing required in a research design is often different from that of a traditional essay. As general training in writing skills is offered elsewhere in the university, this is difficult to address. Nevertheless, we can both develop more ‘nuts and bolts’ workshops and signpost students early on to outside support.

Second, the workshop model plays havoc with student timetables and our room-booking. Students can have different teaching loads week on week, and our commitment to (relatively) small class sizes means that we often need to add duplicate sessions to accommodate workshop popularity. This lack of certainty does not impact our student population equally – students working alongside their studies, those with caring responsibilities, or those living far from campus, will see their choices limited in practice. Providing more sessions online via asynchronous means will solve some, although not all, of these difficulties. We can also commit to publishing the timetable of workshops before term begins to facilitate student planning. 

Third, while the teaching load is shared, such a large and complex module comes with a commensurate administrative load for the course convenor. While some of that burden can be front-loaded in preparing the online learning environment (e.g. online workshop registration), the administrative load will remain large and often invisible.

Methods in a time of coronavirus

How teaching will happen in September remains uncertain. Nevertheless, we can focus on a number of ‘no regrets’ options.

First, we can ‘flip’ lectures, with pre-recorded, asynchronous introductions to different methods, focusing any in-person class time on application. This would also allow students to discover a wider range of methods and provide long-term resources for their dissertations.

Second, it will be important to provide some dedicated training in online research methods and in ways of adapting traditional methods to social distancing.

Finally, we can draw on external sources to broaden workshop options and resources. There is a wealth of methods teaching resources online – for example podcast series such as the UK National Centre for Research Methods podcast, or the Give Methods a Chance series.

In these trying times, universities ought to collaborate – and where better than on methods teaching?

Looking Backward and Forward

Expanding on my last post on failures from this semester:

From where I stand, information literacy skills are important, because they help one identify and demolish specious claims made by authority figures. An assignment that, for example, forces students to locate three peer-reviewed journal articles is practice in finding credible information. It also allows students to determine whether a topic is suitable for a semester-long research project.

To me, these outcomes are both beneficial and rather obvious. But from the students’ perspective, the assignment could simply be yet another meaningless hoop to jump through on the way to getting another A+ on a transcript. Given the sources many students cited in the different stages of their storymap projects, it looks like too many of them customarily take the latter approach to research.

Therefore, in future courses that involve research projects, I should create assignments that are limited to the task of locating scholarly sources and place those assignments at the beginning of the semester. I should demonstrate why this skill is useful outside of the classroom.

I’ve noticed a similar problem with student writing — really basic errors that indicate a lack of proofreading. I don’t expend more effort evaluating a student’s work than the student did creating it. But I do know that sloppy writing indicates sloppy thinking and that the former advertises one’s propensity for the latter to the rest of the world. Again, I should demonstrate early in the semester why it’s important to proofread one’s work before it reaches an audience. My favorite example? The missing Oxford comma that cost a dairy company US$5 million.

I’m also seeing, from the last few journal article worksheets students are submitting, that many still do not have a clear understanding of how evidence-based arguments are constructed in academic literature. An author typically poses a research hypothesis or question at the beginning of a journal article and concludes with the same hypothesis or question reworded as a declarative statement. For example, “Why is the sky blue?” in the introduction becomes “The sky is blue because . . . ” in the conclusion. Yet on worksheets some students write that the hypothesis is about one thing while the conclusion is about something else. So again, students need practice in understanding the components of a written argument in scholarly literature, and that practice needs to happen early in the semester.

In principle I’m talking about scaffolding. But many of my assignments are attempts at getting students to build several different skills simultaneously. I think I need to disentangle my goals for these assignments so that they target only one skill at a time.

The Article Summary


Today we have a guest post about teaching the research process by Anne Baker, assistant professor of political science at Santa Clara University. She can be reached at aebaker [at] scu [dot] edu.

Getting students to use academic articles for research papers can be a challenge. In my experience, many students, even those in upper-level courses, are not familiar with search engines such as JSTOR, LexisNexis, or Political Science Complete. And if students do happen to use Google Scholar, they frequently rely on excerpts from sources instead of the full articles, which they might not have access to. So, what can be done to replace these habits with better practices?

In my advanced writing course on the presidency, I have developed a class activity that gives students the skills they need to successfully locate and use academic references in their research papers. First, I want them to be able to use the library’s website to access search engines. Second, I want them to understand that research is an iterative process. Sometimes you don’t find what you need, for a variety of reasons, and you should be able to determine what those reasons are: whether it’s human error, the need for a wider search net, or the fact that no one has written on the topic (this last possibility always surprises the Google generation). Third, students need to become acquainted with the literature on the presidency, including the subfield’s primary journal, by discovering how research practices in political science have changed over time, even in a subfield that remains largely qualitative.

I have students work in pairs and provide them with two search terms related to the institution of the presidency (e.g., signing statements, executive orders, oath of office). I pick the search terms carefully, knowing that some topics have no scholarship and represent dead ends, while others have later but not earlier scholarship, or vice versa. The first step of the activity provides instructions on how to locate JSTOR on the library’s website and then how to access Presidential Studies Quarterly using JSTOR’s advanced search options. Helpfully, for the purposes of this activity, JSTOR only has copies of the journal up to 2000. To access later issues, students have to use the Wiley database, which they have to figure out how to find.

For each search term, I have students locate one article published in the last few years and another from 1995–2000, for a total of four articles. Next, students identify the research question and the method the authors used, noting whether it is qualitative or quantitative, the sources of data regardless of method, the type of analysis (e.g., text, interviews, statistical), and the date of publication. After they have their four articles and perform this analysis, I ask them to compare the results of both searches. Finally, we have a class discussion in which we explore the roadblocks and challenges encountered and review how the field has changed over time.

I have found that this activity makes students more likely to cite academic articles in their final research papers and use them more effectively to support their arguments. Students also exhibit a much better understanding of the subfield and are more likely to use the other search engines that they encountered while on the library’s website. And they learn that research takes time and requires shifting your strategies to find the information you need.