Interactive Resources for Teaching Stats

The internet has allowed the creation and dissemination of a wide range of tools useful to those of us who teach statistics in our research methods courses. I found two to be particularly helpful.

  1. Guessthecorrelation.com – As its name implies, the site gives students a scatterplot of points and asks them to guess the correlation. My students were asked to play three games and upload screenshots of their final scores as evidence that they had completed the assignment. Many went on to play more than three games; the sound effects and points make it a very addictive game. What it brought home to them very effectively is that correlation is about how tightly the points cluster around the (imaginary) best-fit line, not about the slope of that line (the first sketch after this list makes the same point in code). Students enjoyed playing a game as homework; it was certainly less onerous than practicing calculating correlations by hand.
  2. The Rice Virtual Lab in Statistics: Sampling Distributions – This Java-based simulation lets you draw a distribution of any shape you want, select repeated samples of any size, and then plot the sampling distribution of the means (or of several other statistics). In just a few minutes I was able to demonstrate to the class that the shape of the parent distribution doesn't matter: the sample means will be distributed approximately normally. I even let students draw some of the distributions and run some of the simulations themselves, so that they were convinced the result wasn't an artifact of the values I was picking. Watching their faces, I could tell this really blew their minds; they probably would have blindly accepted it if I had just told them this is how it is, but having seen it, they had a much easier time accepting that the same property holds for regression coefficients. (The chance to visit the central limit theorem was a bonus for undergrads; the second sketch below reproduces the demonstration in code.) The whole activity took less time than a lecture on the same material. (A similar lab simulation exists for confidence intervals as well.)
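
For the first site's lesson, here is a minimal sketch of my own (in Python with numpy; nothing from the site itself) showing that correlation reflects how tightly points hug the line, not how steep the line is:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=500)
noise = rng.normal(size=500)

# Three relationships with very different slopes
steep = 10 * x + noise             # slope 10, little scatter
shallow = 0.1 * x + 0.01 * noise   # slope 0.1, proportionally little scatter
noisy = x + 5 * noise              # slope 1, lots of scatter

for name, y in [("steep", steep), ("shallow", shallow), ("noisy", noisy)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name:8s} r = {r:.2f}")
# "steep" and "shallow" both print r close to 1.00, despite their very
# different slopes; "noisy" prints roughly 0.20, even though its slope
# sits between the other two.
```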

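And for the second site, a minimal stand-in for the Rice applet (my own sketch, assuming numpy and matplotlib): draw thousands of samples from a deliberately skewed parent distribution and plot the distribution of their means.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# A heavily right-skewed parent distribution (exponential)
parent = rng.exponential(scale=2.0, size=100_000)

# 5,000 samples of n = 30, each reduced to its mean
sample_means = rng.exponential(scale=2.0, size=(5_000, 30)).mean(axis=1)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(parent, bins=60)
ax1.set_title("Parent distribution (skewed)")
ax2.hist(sample_means, bins=60)
ax2.set_title("Sampling distribution of the mean (n = 30)")
plt.show()
# The left panel is sharply skewed; the right panel is a familiar bell curve.
```
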
What about you? What are your favorite interactive sites for teaching research methods or statistics?

To Tell the Truth

Reading this week’s Economist article on new algorithms for generating audio and video content, I was struck by the speed at which the assumptions we teach our students have to be questioned.

As the techy types interviewed in the piece argue, it’s only a matter of time before anyone will be able to generate any content they like and make anybody appear to say anything they want.

While that might have some benefits – the technology will allow us to identify such fake content more easily too – it’s also clear that our traditional reliance on content as a repository of ‘truth’ is under attack.

More prosaically, we all have enough trouble as it is with our students’ (and sometimes colleagues’) inability to make critical judgements about the veracity of sources: if you doubt me, come and spend an hour or two on Twitter.

How Much Does the Layperson Know?

Today we have another guest post by Gigi Gokcek of the Dominican University of California.

Students are often surprised to learn how little the average person knows about politics, or even current events. In response, I encourage my students to ask their friends and neighbors how much they know about government in the United States or elsewhere. Occasionally a student reports back to me about his or her conversation in the dining hall with a few friends. I decided to create an assignment to demonstrate to students how much they knew about world events relative to their peers.

The Intel Community and the Theory of Knowledge

Today we have a guest post from David Young, Head of Theory of Knowledge and Ideas, The English College in Prague. He can be reached at david [dot] young [at] englishcollege [dot] cz.

A while ago I was asked to develop a critical thinking course for an International Baccalaureate (IB) school as preparation for its Theory of Knowledge course. As someone who teaches global politics, I was drawn to two books: David T. Moore’s Critical Thinking and Intelligence Analysis (2nd ed., 2007), and the invaluable The Art of Intelligence (2014) by William J. Lahneman and Ruben Arcos. Both have had a significant impact on my teaching and on my position as the school’s co-coordinator for Theory of Knowledge (ToK), a core element in the IB programme.

In ToK, students are supposed to formulate and evaluate knowledge claims and ask questions about the acquisition of knowledge, making it one of the most challenging elements in a congested pre-university curriculum. I’ve found the analysis of intelligence and the ethical issues surrounding its collection and dissemination to be an exciting way for students to learn about ToK concepts such as reason, imagination, intuition, and sense perception. From my perspective, using principles of intelligence analysis has both enhanced my understanding of ToK and improved the course for students.

Using Stats in the Regular Classroom: The 3 S’s Approach to Interpretation

One of the obstacles to using articles with statistical tests in the regular classroom is that most students don’t know (or don’t remember) how to interpret the results. I developed a very simple scheme, known as the 3 S’s, to help them understand results tables and quantitative articles more generally. While the basic framework was designed with linear regression in mind, after a little practice you should be able to introduce the framework in the context of more complicated models (e.g., in probit/logit the size of the coefficients isn’t really meaningful by itself).

The first S: Sign. Hypothesis testing is generally about whether the relationship we find goes in the direction we think it should. This is predicted by the sign on the coefficient: whether the relationship is positive (upward slope) or negative (downward slope). So the first thing we’re interested in, when we’re testing a hypothesis, is whether we’ve gotten the sign right. Does what we found match what we expected?

The second S: Sureness. Now that we’ve found the sign or direction of the relationship, how sure are we that the sign is right? This is the concept of statistical significance, simplified down to its core element. Sureness asks whether the value we found is “far enough” away from 0 for us to be confident that the sign is right. If the value we found is very close to zero and we’re very uncertain (statistically speaking) about that value, we can’t trust that the sign is right: the true value could actually lie on the other side of 0, and thus our sign would be incorrect. If the value is “far enough” from 0, then we can be reasonably sure that the sign is correct. I usually gloss over the concept of “far enough” and explain that they’ll cover standard errors in their statistics or methods course. For now it’s enough to know that we can be sure about most large coefficients, and even about some small ones when we’re highly certain about the values we estimated for them.

The third S: Size. Only after we’re sure the sign is right can we meaningfully talk about the size of the relationship. Size isn’t the only thing that matters; in fact, it’s usually the least important element in interpreting statistical results. The size of the relationship tells us how much the value of the dependent (outcome) variable changes for each one-unit change in the independent (input) variable. I have sometimes found it helpful to write out the middle-school equation for a line, y = mx + b, and explain the effect of coefficients by asking: if x changed from 4 to 5, how much would y change? (By m, the coefficient.) What if it went from 58 to 59? (By the same amount; in a linear model the effect of a one-unit change is constant.)
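
For instructors who want to show the 3 S’s on actual output, here is a hedged sketch in Python (fabricated data, statsmodels assumed, and the variable names are my own invention) that walks through sign, sureness, and size on a simple bivariate regression:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Fabricated data: years of education predicting income (in $1,000s)
education = rng.uniform(8, 20, size=200)
income = 5 + 2.5 * education + rng.normal(0, 10, size=200)

model = sm.OLS(income, sm.add_constant(education)).fit()
coef, se = model.params[1], model.bse[1]

# Sign: is the direction what we hypothesized (positive)?
print("Sign:", "+" if coef > 0 else "-")
# Sureness: is the estimate "far enough" from 0? (roughly, |t| > 2)
print(f"Sureness: t = {coef / se:.1f}")
# Size: how much does income change per one-unit change in education?
print(f"Size: {coef:.2f} thousand dollars per extra year of schooling")
```

The |t| > 2 shorthand is the same “far enough from zero” idea described above; a statistics course would replace it with exact standard errors and p-values.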

You can find a helpful PowerPoint that walks through this logic – the Crash Course Statistics package – on my website.

Introducing Computer Programming in Political Science Classes

Today we have a guest post from Jane Lawrence Sumner of the University of Minnesota. Jane’s research and teaching interests are in international political economy and research methods.

****************

Undergraduate methods classes frequently use statistical software, even though students typically have little background in programming. While we seldom ask students to do anything that would require that background, the idea of having to program can be a roadblock for many of them. In an attempt to alleviate this, I designed an activity to introduce my students to the fundamentals of programming in a non-technical way. The objective of the activity was for students to understand that programming primarily involves giving the computer very good, very specific instructions for carrying out the task at hand.

Students were split into small groups, and each group was given a piece of paper with a hand-drawn cartoon on it. They were told to write instructions for reproducing the drawing that they could then give to another student. After twenty minutes, they stopped. Each group stood with their backs to the whiteboard while they read their instructions to a student from another group, who stood at the whiteboard with a marker. Students were not allowed to modify their instructions and were deliberately unable to see the student with the marker so that they could not course-correct. The student with the marker was allowed to ask only one question: “Can you please repeat that?”

Only one of the drawings actually resembled the original. The students enjoyed drawing and seeing the monstrosities emerge on the board, and loved the great reveal at the end of what the drawing should have looked like, but the key pedagogical point came in the discussion afterward, when I asked, “What made this difficult? What would have made it easier?” The points that emerged organically were the same objectives I’d set out to convey: many short, precise steps were easier to follow than fewer long, detailed ones. Students also concluded that a clear and constant reference point (like a coordinate system) would have made direction and scale easier to convey; that naming parts of the drawing for later reference was easier than repeating “the curvy line that ended up at the first straight line” over and over; and that when each step depended on the one before it, a single error could quickly compound.
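
The lessons the students discovered map directly onto real code. Here is a minimal sketch of my own (not part of the original activity) of a “drawing program” that uses a coordinate system, named reference points, and many short, explicit steps that the computer follows literally:

```python
import matplotlib.pyplot as plt

# Named reference points play the role of "naming parts of the drawing"
points = {
    "left_ear":  (0.2, 0.9),
    "right_ear": (0.8, 0.9),
    "nose":      (0.5, 0.5),
    "chin":      (0.5, 0.1),
}

# Many short, precise steps: each instruction is one segment between named points
instructions = [
    ("left_ear", "nose"),
    ("right_ear", "nose"),
    ("nose", "chin"),
]

# The computer executes each instruction exactly as written;
# it cannot ask clarifying questions or course-correct
for start, end in instructions:
    (x0, y0), (x1, y1) = points[start], points[end]
    plt.plot([x0, x1], [y0, y1], color="black")

plt.xlim(0, 1)
plt.ylim(0, 1)
plt.show()
```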

Doing “Measurement”

I like to introduce measurement by doing it. My class on measurement starts with a wide collection of fruit lined up on the chalkboard ledge. I ask students to select three items from the collection and compare them on three dimensions. (I often provide a matrix of appropriate size on a handout, or project one on the screen, to give students a framework for their work.) We usually discuss briefly some of the dimensions on which they could compare: color, taste, size, weight, water content or density, sweetness, etc. After giving students about five minutes to complete their measurement matrix, we compare some of the measurements they made and discuss what measurement is: the systematic comparison, evaluation, and assignment of values to objects or phenomena. They have just engaged in ‘measuring’ the fruit, even though no tape measures or scales were involved.
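
For concreteness, a filled-in matrix might look like this (a hypothetical example of my own, not one from an actual class):

    Item      Color        Taste        Size
    Apple     Red/green    Sweet-tart   Medium
    Banana    Yellow       Sweet        Medium
    Lime      Green        Sour         Small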

We then usually move on to a discussion of precision and accuracy in measurement, often by examining the ways in which they measured weight and color. Replicability looms large in this part of the discussion, with a focus on ways to reduce the subjectivity of measurement so that other students or researchers would obtain the same values that they did. How would we obtain reliable measurements of color? Color comparison charts (an external reference) are one option, but how would they handle the bicolored apple? Are there uses of the variable ‘color’ for which the coarse measurement of red, yellow, green, etc., is sufficient? In some terms, I’ve numbered the items; we then discuss how the simple act of naming (classifying) an object as an apple or mango or whatever constitutes a form of measurement.

Just to complicate the comparison and discussion, I usually include at least one bicolored apple (red and green), one can of juice (often pineapple), some type of dried fruit, a tomato, a bell pepper, and an eggplant. (The latter three are technically fruits, since they have seeds on the inside, even though most Americans think of them as vegetables.) A can of fruit cocktail is also a good ‘wrench’ to throw into the mix. Whether the dried fruit or the juice ‘counts’ as fruit is always a good discussion. If I can get unusual fruit like starfruit, kiwi, persimmon, or plantain, I like to include those alongside the usual suspects of apples, oranges, bananas, grapes, plums, and the like. I typically spend no more than about $15-20 on fruit for the activity, and frequently far less in the summer.

When we’re done with the class, I usually invite them to take whatever fruit they want from the collection for snacking. I take home the rest and use it myself, or leave it in the department lounge for others.