Where Did Your Stuff Come From?

Most American students struggle to grasp the extent to which international trade affects their lives and the ways in which the US trades with the rest of the world. I can show (and have shown) statistics about trade and economics in very graphic and immediate form, but numbers on the scale of trillions are hard to conceptualize.

To combat that, I asked students in an introductory international politics class to go on a scavenger hunt. They were tasked with finding one item from each of five world regions – Europe, Latin America, Sub-Saharan Africa, Middle East & North Africa, and Asia & the Pacific. They had to take a picture of the ‘made in’ label (and part of their student ID, to ensure that they didn’t just grab stock photography or Instagram images) and post it to the class learning management system’s discussion board. To sweeten the pot, I offered two bonus points for unique entries – items from a country no one else had posted. Specialty foods and beverages were excluded (no taking a picture of a bottle of Stella for Belgium).

Students went crazy hunting for stuff. The two bonus points were apparently a huge incentive, with students finding and posting additional items when someone else duplicated “their” country.

Continue reading

The Power of Unpublished Research: You Be the Reviewer

We want our students to learn to read critically and to interrogate and evaluate what they read. Does the author have the right data? Do the conclusions actually follow from the data? Are other explanations missing from the argument? That’s what we want them to ask themselves. A quick look at students’ notes from reading – if they even took any – reveals a totally different set of information, usually focused on the literature review and sometimes the theory. After all, these sections are the main text-based body of an empirical paper, so they are the easiest parts for students to read.

Beyond steps we can take to teach students to read articles effectively (see my previous post on R&U and the Article Sort activity), I like to engage my intermediate and upper-level courses in an activity we call “You Be the Reviewer.” Students in all of my classes have already done the R&U activity and read (briefly) about the process papers go through to get published. So at some point in the term, I assign an unpublished article manuscript – often from a colleague, or a conference paper pulled from the conference archives with author permission – and ask students to write a journal-style review, including a recommendation on whether the manuscript should be published.

As support for this assignment, I distribute a handout like the one available here. It suggests some questions for students to consider, reminds them to check R&U for more guidance, and gives them a framework for writing a review. Typically, they are asked to post their reviews to the course learning management site and to bring a hard copy to class for reference. The resulting conversations have been far more in-depth and wide-ranging than anything else I’ve tried. At the end of the discussion, we collectively decide on the disposition of the article. Several classes – including a freshman-level intro course – have voted to reject manuscripts, though, as in the real world, R&Rs are the most common response.

While this activity obviously works better with upper-division classes, even lower-level students have enjoyed it and given incisive feedback. For lower-level classes, qualitative research or very simple quantitative analysis works best. I normally compile the students’ feedback (copying particularly relevant bits from the LMS into a document) and send it to the author as thanks for sharing the manuscript. In an undergraduate methods class, I was once able to have the author come and give a (previously prepared) conference-style presentation on the manuscript the class had reviewed. The author also took questions, so the class got a model presentation to use in preparing their own, a chance to ask the author about research design decisions, and practice at giving useful feedback on research-in-progress before their own peer review process.

I’ve found that using a manuscript – an honest-to-goodness, pre-publication, looks-like-it-was-written-in-Word-then-PDF’d manuscript – gets a far better reaction than published research. Students are reluctant to question or challenge work by ‘experts’ that has already been vetted and published, but unpublished papers are a different matter.

Have you used unpublished research (other than your own) with your students? What was their reaction?

Binary Failure

An example of the kind of assignment design that I mentioned in my last post about feature creep:

I have assigned three policy memos in this year’s iteration of my first-year seminar. Directions for the first memo are here. The memos are intended to function as authentic writing exercises — each has a specified purpose, audience, and format. The authenticity is supposed to serve as a vehicle for stimulating students’ interest in the topic.

The memos require a small amount of creative problem-solving. First, each student chooses a policy recommendation that he or she prefers. Limited choice is always good because it generates mental investment in the outcome. Second, each student selects from information that I’ve provided to create a rationale for the policy recommendation, but this has to be done within the constraints of the memo’s format. There is a puzzle to solve. Continue reading

Teammate Evaluations, Revisited

Students in my courses do a lot of team-based projects. In an attempt to prevent free riders, I have teammates evaluate each other. This semester I have again modified the evaluation system a bit, with an unexpected result.

For the past three semesters I’ve been using Google Forms to distribute and tabulate anonymous surveys in which students rank themselves and their teammates. The instructions on this survey:

Evaluate the performance of everyone on your team, including yourself, by ranking all members of your team with a different number. Each number can be used only once, otherwise your responses will be discarded. Assign the number 1 to the person who made the most valuable contribution to the project, the number 2 to the person who made the second most valuable contribution, etc. Only enter information for your team. Leave questions for other teams blank.
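For anyone curious how such rankings could be tabulated automatically rather than by hand, here is a minimal sketch in Python. The CSV layout, column names, and helper functions are illustrative assumptions, not the actual survey export; the sketch simply enforces the rules above (discarding responses that repeat a number or that come from another team) and averages the surviving ranks.

```python
# A rough sketch, not the actual tabulation: it assumes the Google Forms
# responses are exported to a CSV with one column per teammate, and that the
# column headers match the names passed in as `members`.
import csv
from collections import defaultdict

def is_valid(ranks, team_size):
    # Keep a response only if each number 1..team_size is used exactly once.
    return sorted(ranks) == list(range(1, team_size + 1))

def average_ranks(csv_path, members):
    # Average each member's rank across all valid responses for one team.
    totals, kept = defaultdict(float), 0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            cells = [row.get(m, "").strip() for m in members]
            if not all(cells):
                continue  # respondent is on another team; these columns were left blank
            ranks = [int(c) for c in cells]
            if not is_valid(ranks, len(members)):
                continue  # a number was repeated or skipped, so the response is discarded
            kept += 1
            for member, rank in zip(members, ranks):
                totals[member] += rank
    return {m: totals[m] / kept for m in members} if kept else {}
```

Averaged ranks still have to be converted into points, which is the step described next.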

My explanation to students of how rankings translated into points — in this case, up to 40: Continue reading

When Less is More, More or Less

In 2015, I wrote about asking too many questions in instructions for assignments. What I, the information-craving professor, see as helpful detail, the student sees as a tangled and confusing mess.

I still notice occasions where I fall into this bad habit, most recently in an assignment in two of my online graduate courses, in which students analyze peer-reviewed journal articles. The old instructions said that analyses should answer the following questions: Continue reading

What we tell our dissertation students


We don’t usually write about dissertations on this site, partly because it’s not obviously an area ripe for discussions of active learning, and partly because we’re doing so many other things.

However, a capstone dissertation is often the single most important piece of work that a student undertakes: an opportunity to explore the subject as they see fit, producing something (hopefully) akin to a research output.

Put like that, it’s obviously an instance of active learning, because they drive the entire process, with us taking a ‘supervisory’ role.

Now that I’m back in the ranks after my stint as Associate Dean, I’m getting to do more of this supervision, and it’s been a good refresher on what the most useful advice I can give might be.

Continue reading

Avoiding Feature Creep, Part 2

Finally revisiting the subject of feature creep:

I revise my syllabi every semester. Typically I change twenty to thirty percent of a course’s content and assignments each time I teach it. After nearly two decades (eek!) of teaching undergraduates, I now find myself stripping things out. My philosophical approach to syllabus-building has changed from “what do I think students should know about X?” to “what might make students want to learn more about X?”

The long and jargon-laden peer-reviewed journal article by a famous theorist in the field? Gone. Students will regard the time and effort spent trying to decipher the terrible writing as wasted because the article is irrelevant to why they enrolled in the course.

Similarly, I no longer think about assignments only as tools for finding out whether people have learned something. Instead I try to craft them as opportunities for students to become interested in solving unfamiliar problems in creative ways. Some students seize these opportunities and run with them. Others don’t. But they decide this; I can’t make the decision for them. I suppose my next post will need to explain this a bit more.