One of the obstacles to using statistically tested articles in the regular classroom is that most students don’t know (or don’t remember) how to interpret the results. I developed a very simple scheme, which I call the 3 S’s, to help them understand results tables and quantitative articles more generally. While the basic framework was designed with linear regression in mind, after a few practice runs you should be able to introduce it in the context of more complicated models (e.g., in probit/logit the size of the coefficients isn’t really meaningful by itself).

**The first S: Sign.** Hypothesis testing is generally about whether the relationship we find goes in the direction we think it should. This is indicated by the sign of the coefficient: whether the relationship is positive (upward slope) or negative (downward slope). So the first thing we’re interested in, when we’re testing a hypothesis, is whether we’ve gotten the sign right. Does what we found match what we expected?
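To make the idea concrete, here is a minimal sketch with hypothetical data (study hours and exam scores are invented for illustration): fit a line and check only the sign of the slope, which is the first of the 3 S’s.

```python
import numpy as np

# Hypothetical classroom example: hours studied (x) and exam scores (y).
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
scores = np.array([52, 55, 61, 64, 70, 73, 78, 84], dtype=float)

# Fit the line y = mx + b; the first question is just: what is the sign of m?
slope, intercept = np.polyfit(hours, scores, 1)
print(slope > 0)  # True: a positive sign, i.e., an upward slope
```

A positive slope here matches the expectation that more study time goes with higher scores; a negative slope would mean the relationship runs opposite to the hypothesis.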

**The second S: Sureness.** Now that we’ve found the sign or direction of the relationship, how sure are we that the sign is right? This is the concept of statistical significance, simplified down to its core element. Sureness asks whether the value we found is “far enough” away from 0 to let us trust that the sign is right. If the value we found is very close to zero and we’re very uncertain (statistically speaking) about that value, we can’t trust the sign: the true value could actually lie on the other side of 0, and our sign would be incorrect. If the value is “far enough” from 0, then we can be reasonably sure the sign is correct. I usually gloss over the concept of “far enough” and explain that they’ll cover standard errors in their statistics or methods course. For now it’s enough to know that we can usually be sure of rather large values, and even of some small ones if we estimated them very precisely.
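For instructors who do want to show what “far enough” means, here is a hedged sketch (again using invented hours/scores data): it computes the slope’s standard error by hand and compares the slope to it, which is the usual t-ratio behind significance stars.

```python
import numpy as np

# Same hypothetical data as before: hours studied (x) and exam scores (y).
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
scores = np.array([52, 55, 61, 64, 70, 73, 78, 84], dtype=float)

n = len(hours)
slope, intercept = np.polyfit(hours, scores, 1)
residuals = scores - (slope * hours + intercept)

# Standard error of the slope: residual variance divided by the spread in x.
s2 = (residuals ** 2).sum() / (n - 2)
se_slope = np.sqrt(s2 / ((hours - hours.mean()) ** 2).sum())

# "Far enough" from 0, by the common rule of thumb, is roughly
# a slope more than about two standard errors away from zero.
t_stat = slope / se_slope
print(abs(t_stat) > 2)  # True: we can be quite sure the sign is right
```

The point for students is only the ratio: a big slope with a small standard error means we can be sure of the sign; a slope near zero with a large standard error means we cannot.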

**The third S: Size.** Only after we’re sure the sign is right can we meaningfully talk about the size of the relationship. Size isn’t the only thing that matters; in fact, it’s usually the least important part of interpreting statistical results. The size of the relationship tells us how much the value of the dependent or outcome variable changes for each one-unit change in the independent or input variable. I have sometimes found it helpful to write out the middle-school equation for a line, y = mx + b, and explain the effect of coefficients by asking: if x changed from 4 to 5, how much would y change? What if it went from 58 to 59? And so on.
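The x-changes-by-one question above can be sketched in a few lines (the values of m and b here are illustrative, not from any real model): a one-unit change in x always moves y by exactly m, the coefficient.

```python
def y(x, m=3.0, b=10.0):
    # The middle-school line: y = mx + b, with made-up m and b.
    return m * x + b

# The change in y for a one-unit change in x is always m,
# no matter where on the line you start.
print(y(5) - y(4))    # 3.0
print(y(59) - y(58))  # 3.0
```

That constancy is exactly what makes linear-regression coefficients easy to read, and exactly what breaks down in probit/logit, where the effect of a one-unit change depends on where you start.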

You can find a helpful PowerPoint that walks through this logic – the Crash Course Statistics package – on my website.