A follow-up to my recent post about increasing the quality of students’ final products from collaborative research projects:
In my Spring 2021 research methods course, I gave students this outline to follow when writing their team’s research reports. I’ve revised the outline for Spring 2022. Each part in the new outline will get graded separately, with a summative grade for the entire report at the end of the semester.
I’m also thinking of being much more specific about the report’s layout, and grading the reports accordingly — similar to what has worked well with student presentations. I can envision the following criteria:
No more than two pages per part, which would limit the final report to eight pages.
Each part must include at least one data visualization — a chart or graph.
As we hurtle towards the summer ‘break’ and everyone remembers the deadline they cut you some slack on, it’s also a time when we’re often thinking about next semester.
For those of you with interests in making L&T a bigger part of your work, one obvious route is researching and publishing on what you do in the classroom.
Often that might be about trying out something different with students, which you think generates benefits for their learning, and might be of use to others in the same situation: we’ve published lots of such pieces from our guest authors here at ALPS.
While the thing you’re doing is the obvious centre of attention, the second element – whether it works – sometimes gets a bit lost (speaking as someone who reviews a good number of journal submissions in this field), so I thought it would be useful to think a bit more about this.
Measuring learning turns out to be a less-than-simple task: if it weren’t, then we’d all know how to do it. The problem turns in part on the multiplicity of things we might consider, and in part on the difficulty of making any accurate or meaningful measure of those things.
Learning is not simply about knowledge, but also skills, social capital and much more. Each of those itself has many sub-elements, not all of which might be immediately obvious to anyone, nor equally important to everyone. Likewise, learning happens at lots of different speeds, so do you focus on the immediate gains, or something more long-term?
The (faint) silver lining to this particular cloud is that everyone’s in the same boat. I’ve yet to see a comprehensive evaluation tool that I could recommend to you, even though there are a number of really good ideas out there (for example, this or this (which makes the good point that students’ perception of what they learn isn’t the same as teachers’ measure of what they learn)).
The important thing here is to be mindful of this from the start of any pedagogic research, embedding your measurement protocol into the design from the start, rather than hoping it’ll come to you later: a short post-course questionnaire about whether your students liked the thing they did isn’t likely to suffice.
That means thinking about what elements you focus on measuring (and why), then on how you’ll measure them. In particular, think about whether and how you can have a control for your teaching intervention: if it’s not practical to have another group of students not doing it, then will pre/post testing cover things robustly enough? Just like your other research, try to control your variables as much as you can, so you can be more confident about isolating effects.
And it also means asking for help if you’re unsure. Your institution probably has a great bunch of people centrally who work on just these kinds of projects and who can give you excellent advice and support. Likewise, you can ask us here or online about specific ideas: it’s worth looking back at our posts for suggestions and colleagues who’ve worked on similar things.
Do all that and your pedagogic research will be off to a flying start (which might be the only flying you get to do).
Looking at student performance in the 2020-2021 academic year, I see evidence that team research projects due at the end of the semester can’t be scaffolded solely around individually-graded assignments completed throughout the semester. For example, in my Middle East politics course, each student shared four individually-completed assignments with their teammates for use in their team’s historical timeline. In my research methods course, there were ten individual assignments that teammates were supposed to share with each other as drafts of sections of team research reports. While this approach does decrease free riding and encourage collaboration, it apparently does not ensure high quality research in the final product. Four of the five timelines that teams created in the Middle East course lacked mention of significant events. None of the four teams in the research methods course collected information from coffee farmers, processors, or distributors in Central America, despite my instructions to do so, nor did the final reports resemble the industry exemplars I had provided.
It seems that in students’ minds, my formative assessment of their individual work is totally unconnected to the summative assessment of their collaborative work. I probably need to break the team project into discrete, graded chunks, with each chunk layered on top of some of the individual assignments. Teams can use the feedback they receive on each successive chunk of the project to improve the quality of the final product.
Today we have a guest post from Adam Irish, an assistant professor of political science at California State University, Chico.
Like many professors, I change my teaching to fit the class or, in the past year, the Zoom discussion I am leading. My lower division, survey courses focus on building a scholarly vocabulary and an understanding of concepts; upper division courses dive deeper into issues so that students can wade into the intellectual fray. However, this past year of online teaching revealed a potential overlap in this dichotomy: the development of research citation skills through the incorporation of Zotero.
To pick up the gauntlet metaphorically thrown down by Amanda last week, here is the first of what will probably be a series of posts on my experience teaching an introduction to research methods course online this semester. When I last taught this course two years ago, I used Amanda’s Best Breakfast in Town project. Given the constraints imposed by the coronavirus pandemic, sending students into restaurants simply wasn’t an option this time around. Yet I still wanted students to experience the trials and tribulations of real-world field research. I decided to create a new research project on specialty coffees from Central America, with teams investigating coffee from Costa Rica, El Salvador, Guatemala, and Honduras, respectively. To increase the authenticity of the project, students are responsible for designing a survey (replete with a pilot test and my coaching to try to avoid problems like sampling bias), conducting remote interviews with the people who produce and sell these coffees, analyzing the resulting primary source quantitative and qualitative data, and communicating their conclusions in an industry-style report.
Research methods are crucial, particularly in graduate learning, but methods modules are often the most unpopular with students and staff alike.
This makes methods modules prime candidates for either offloading onto temporary staff, or confining to designated ‘methods heavy’ positions for often isolated staff. This shunting of methods teaching onto precarious staff communicates unspoken but negative messages to students about the importance of this training, while consistently lower-than-average student evaluations (regardless of actual teaching excellence) negatively impact the profiles of vulnerable colleagues.
At a time when we see silly op-eds calling for a Deliveroo approach to higher education (students deciding what they want to learn at MA level, taught by temporary providers hired ‘on demand’), methods modules would be first on the chopping block. Yet these unloved offerings provide, or at least should provide, the building blocks for that much-loved rite of passage: independent research and the MA dissertation. Beyond the dissertation, a deep engagement with methods is needed to better understand where we position ourselves in our respective fields, and so provide critical insights into both the mainstream and its critics.
Redesigning how we teach methods is far from a new topic on ALPS, with examples from using games to make students’ introduction to methods less frightening, to a series of posts on flipping the methods classroom.
This post draws on our own experiences, alongside reflections from EUROTLC discussions on curriculum design. Usual caveats apply: this is not a silver bullet. It depends on our local conditions and is still very much a work in progress. But at a time where the pandemic is forcing a rethink in how, what, and even where we teach, our stranded, workshop-based module can offer a useful starting point.
Context and problem
Following an administrative merger in 2016 we are a bigger school, with a growing number of MA students across 11 programmes in Politics/International Relations, Anthropology, and History. Many students have backgrounds in other disciplines, and a growing proportion come from overseas. Some programmes are interdisciplinary, some more discipline-specific, with significant variation in student numbers from 6 up to 80+.
Teaching different methods modules for each pathway is impractical, and while the merger offers opportunities for interdisciplinarity, combining methods teaching raises three dilemmas. First, should we aim for depth and specialization, or breadth and variety? Second, could we agree core teaching across the disciplinary boundaries? Finally, how might we achieve student-led learning that encourages exploration and recognizes diverse backgrounds?
An innovative stranded, workshop-based module
Core or optional, breadth or depth? Instead of choosing, we opted for both, via two simple design choices: (a) ditching the one week/one topic model in favour of parallel workshops and (b) designing ‘strands’ to organise these workshops. Instead of covering 10 to 12 topics in as many weeks, we offer a wide range of parallel workshops, limited only by staff and room availability (and our collective imagination). Last year we offered 40 workshops, delivered in 8 weeks, taught by a team of over 25 colleagues according to their expertise, for close to 200 students. This also served to engage staff at all levels and in all areas of the School, centralizing rather than isolating methods teaching in the curriculum.
Workshops are organized across 6 strands (see examples in Figure 1) – from epistemology to case studies, whereby colleagues walk students through their own research design in a recent project. These strands are populated according to the demands of our different MA programs, and also reflect the best practices of RCUK graduate training by exposing students to philosophy of science, and to both quantitative and qualitative methods. They seek to enable flexibility for students according to their prior experience, with workshops that build upon one another in complexity and with different entry points. A good example is the quantitative methods strand, which offers both basic training for primarily qualitative-focused researchers, alongside both beginner and advanced workshops for students who wish to specialize.
Students can, in effect, design their own path through the module: guided by their own interests and goals, they must take at least 9 workshops, including at least one from each strand. Each individual program has designated compulsory workshops that students must include in their schedule in order to meet any specialization requirements. Thus, students have the opportunity to specialize, for example, by comparing different approaches to research interviews (5 workshops), or to explore new methods or move beyond their disciplinary boundaries.
Students are assessed on an applied methods portfolio of two items – such as a short essay on epistemology, a data analysis exercise, or a practice interview or observation – and a research design proposal, bringing together content from the entire module (literature review, research questions, methods choices, ethical considerations). This proposal can be linked to the MA dissertation, and students are encouraged to treat it as preparation for their own independent research, working with their dissertation supervisors where possible.
Reflecting on the first two years of this module, the welcome increase in student choice came at three costs – which we are working to offset.
First, we need to ensure we do not ask students to run before they can walk: some students have no background in either methods or epistemological debates, and the kind of writing required in research design is often different from that in a traditional essay. As general training in writing skills is offered elsewhere in the university, this is difficult to address. Nevertheless, we can both develop more ‘nuts and bolts’ workshops, and also signpost students early on to outside support.
Second, the workshop model plays havoc with student timetables and our room-booking. Students can have different teaching loads week on week, and our commitment to (relatively) small class sizes means that we often need to add duplicate sessions to accommodate workshop popularity. This lack of certainty does not impact our student population equally – students working alongside their studies, those with caring responsibilities, or those living far from campus, will see their choices limited in practice. Providing more sessions online via asynchronous means will solve some, although not all, of these difficulties. We can also commit to publishing the timetable of workshops before term begins to facilitate student planning.
Third, while the teaching load is shared, such a large and complex module comes with a commensurate administrative load for the course convenor. While some of that burden can be front-loaded in preparing the online learning environment (e.g. online workshop registration), the administrative load will remain large and often invisible.
Methods in a time of coronavirus
How teaching will happen in September remains uncertain. Nevertheless, we can focus on a number of ‘no regrets’ options.
First, we can ‘flip’ lectures, with pre-recorded, asynchronous introductions to different methods, and focus any in-person class time on application. This would also allow students to discover a wider range of methods, and provide long-term resources for their dissertations.
Second, it will be important to provide some dedicated training in online research methods and in ways to adapt traditional methods to social distancing.
From where I stand, information literacy skills are important, because they help one identify and demolish specious claims made by authority figures. An assignment that, for example, forces students to locate three peer-reviewed journal articles is practice in finding credible information. It also allows students to determine whether a topic is suitable for a semester-long research project.
To me, these outcomes are both beneficial and rather obvious. But from the students’ perspective, the assignment could simply be yet another meaningless hoop to jump through on the way to getting another A+ on a transcript. Given the sources many students cited in the different stages of their storymap projects, it looks like too many of them customarily take the latter approach to research.
Therefore, in future courses that involve research projects, I should create assignments that are limited to the task of locating scholarly sources and place those assignments at the beginning of the semester. I should demonstrate why this skill is useful outside of the classroom.
I’ve noticed a similar problem with student writing — really basic errors that indicate a lack of proofreading. I don’t expend more effort evaluating a student’s work than the student did creating it. But I do know that sloppy writing indicates sloppy thinking and that the former advertises one’s propensity for the latter to the rest of the world. Again, I should demonstrate early in the semester why it’s important to proofread one’s work before it reaches an audience. My favorite example? The missing Oxford comma that cost a dairy company US$5 million.
I’m also seeing, from the last few journal article worksheets students are submitting, that many still do not have a clear understanding of how evidence-based arguments are constructed in academic literature. An author typically poses a research hypothesis or question at the beginning of a journal article and concludes with the same hypothesis or question reworded as a declarative statement. For example, “Why is the sky blue?” in the introduction becomes “The sky is blue because . . . ” in the conclusion. Yet on worksheets some students are writing that the hypothesis is about one thing while the conclusion is about some other thing. So again, students need practice in understanding the components of a written argument in scholarly literature, and that practice needs to happen early in the semester.
In principle I’m talking about scaffolding. But many of my assignments are attempts at getting students to build several different skills simultaneously. I think I need to disentangle my goals for these assignments so that they target only one skill at a time.
Today we have a guest post about teaching the research process by Anne Baker, assistant professor of political science at Santa Clara University. She can be reached at aebaker [at] scu [dot] edu.
Getting students to use academic articles for research papers can be a challenge. In my experience, many students, even those in upper-level courses, are not familiar with search engines such as JSTOR, LexisNexis, or Political Science Complete. And if students do happen to use Google Scholar, they frequently rely on excerpts rather than the entire articles, which they might not have access to. So, what can be done to replace these habits with better practices?
In my advanced writing course on the presidency, I have developed a class activity which provides students with skills they will need if they are going to successfully locate and utilize academic references for their research papers. First, I want them to be able to use the library’s website to access search engines. Second, I want them to understand that research is an iterative process. Sometimes you don’t find what you need, for a variety of reasons, and you should be able to determine what those reasons are — whether it’s human error, the need for a wider search net, or that no one has written on the topic (this last possibility always surprises the Google generation). Third, students need to become acquainted with the literature on the presidency, including the subfield’s primary journal, by discovering how research practices in political science have changed over time, even in a subfield which remains largely qualitative.
I have students work in pairs and provide them with two search terms related to the institution of the presidency (e.g., signing statements, executive orders, oath of office). I pick the search terms carefully, knowing that some topics have no scholarship and represent dead ends, while others have later but not earlier scholarship, or vice versa. The first step of the activity provides instructions on how to locate JSTOR on the library’s website and then how to access Presidential Studies Quarterly using JSTOR’s advanced search options. Helpfully, for the purposes of this activity, JSTOR only has copies of the journal up to 2000. To access later issues, students have to use the Wiley database, which they have to figure out how to find.
For each search term, I have students locate one article published in the last few years and then another for 1995-2000—a total of four articles. Next, students identify the research question and method the authors used, noting whether it is qualitative or quantitative, the sources of data regardless of method, the type of analysis (e.g. text, interviews, statistical), and the date of publication. After they have their four articles and perform this analysis, I ask them to compare the results of both searches. Finally, we have a class discussion in which we explore road blocks and challenges encountered and review how the field has changed over time.
I have found that this activity makes students more likely to cite academic articles in their final research papers and use them more effectively to support their arguments. Students also exhibit a much better understanding of the subfield and are more likely to use the other search engines that they encountered while on the library’s website. And they learn that research takes time and requires shifting your strategies to find the information you need.
Today we have a call for proposals from Jeffrey Bernstein at Eastern Michigan University.
I am working with Edward Elgar Publishing to produce an edited volume, tentatively entitled “Teaching Political Methodology,” that will focus on teaching this subject at the undergraduate level. Such a collection, I believe, will fill a hole in the literature. Most of our departments offer such a class; however, it usually proves to be a hard course to teach. I’m excited about the possibility of a book that articulates rationales for what this course should look like, and for how it can be done well.
The publishers are looking for a fairly thin (200-250 page) book, most likely with around twelve contributors. The volume will likely consist of two parts. Section One will focus more on the larger, theoretical questions involved in teaching research methods to political science undergraduates. Why do we see this as an important topic for students to learn? Do we want to approach the course as teaching mostly research design, statistical analysis, or programming and using Big Data? How much should we focus on qualitative versus quantitative tools? While quantitative methods have traditionally dominated, scholars have noted the limitations and biases in both the questions asked and the tools used to answer these questions. To what extent should our courses reflect this?
Section Two will focus less on the theoretical and more on the applied. Once we have determined the sort of methods course we want to teach, how do we do it effectively? What are the best means to get across the central lessons from methods classes? What does it look like when students achieve our learning goals? Papers for this section should move beyond assertions of what we should be doing, or what we believe will work, and present evidence of student learning drawn from students’ work. They should include things such as sample assignments to help other instructors build on successful approaches to the course.
If you are interested in contributing to this collection, please email me as soon as possible at firstname.lastname@example.org with a summary of the idea you are proposing, as well as a CV. The proposal deadline is May 1. Completed chapters will be due to me by May 31, 2020; this extended time frame will allow people to develop ideas for teaching these classes and test these approaches against data during the 2019-2020 academic year.