One of the obstacles to using statistically-tested articles in the regular classroom is that most students don’t know (or don’t remember) how to interpret the results. I developed a very simple scheme, known as the 3 S’s, to help them understand results tables and quantitative articles more generally. While the basic framework was designed with linear regression in mind, after a few practice runs you should be able to introduce the framework in the context of more complicated models (e.g., in probit/logit the size of the coefficients isn’t really meaningful by itself).
The first S: Sign. Hypothesis testing is generally about whether the relationship we find goes in the direction we think it should. This is indicated by the sign of the coefficient: whether the relationship is positive (upward slope) or negative (downward slope). So the first thing we’re interested in, when we’re testing a hypothesis, is whether we’ve gotten the sign right. Does what we found match what we expected?
The second S: Sureness. Now that we’ve found the sign or direction of the relationship, how sure are we that the sign is right? This is the concept of statistical significance, simplified down to its core element. Sureness asks whether the value we found is “far enough” away from 0 to allow us to be sure that the sign is right. If the value we found is very close to zero and we’re very uncertain (statistically speaking) about that value, we can’t trust that the sign is right. The true value could actually lie on the other side of 0, and thus our sign would be incorrect. If the value is “far enough” from 0, then we can be reasonably sure that the sign is correct. I usually gloss over the concept of “far enough” and explain that they’ll cover standard errors in their statistics or methods course. For now it’s enough to know that we can be sure of most rather large numbers, and even of some small numbers if we’re highly certain about the values we estimated for them.
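For instructors who do want to peek behind the “far enough” curtain, here is a minimal sketch of the idea in Python. The data, numbers, and variable names are all illustrative (not from any real article): we simulate a simple regression, estimate the slope and its standard error by ordinary least squares, and check whether the estimate is far enough from 0, relative to its standard error, to trust the sign.

```python
import numpy as np

# Illustrative data: x drives y with a true positive slope of 2.0
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)

# Ordinary least squares slope, intercept, and the slope's standard error
n = len(x)
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
a = y.mean() - b * x.mean()
residuals = y - (a + b * x)
sxx = np.var(x, ddof=1) * (n - 1)          # sum of squared deviations of x
se_b = np.sqrt((residuals @ residuals) / (n - 2)) / np.sqrt(sxx)

# "Sureness" boils down to: how many standard errors is b away from 0?
t = b / se_b
print(f"sign: {'+' if b > 0 else '-'}, estimate: {b:.2f}, t-statistic: {t:.1f}")
```

A |t| comfortably above roughly 2 is the usual shorthand for “far enough”: a large estimate with a modest standard error, or a small one measured very precisely, both clear the bar.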
The third S: Size. Only after we’re sure the sign is right can we meaningfully talk about the size of the relationship. Size isn’t the only thing that matters; in fact, it’s usually the least important part of interpreting statistical results. The size of the relationship tells us how much the value of the dependent or outcome variable changes for each one-unit change in the independent or input variable. I have sometimes found it helpful to write out the middle-school equation for a line, y = mx + b, and explain the effect of coefficients by asking: if x changed from 4 to 5, how much would y change? What if it went from 58 to 59? Etc.
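The y = mx + b exercise can be made concrete in a few lines of code. The slope and intercept below are made-up numbers for illustration; the point is that a one-unit change in x always changes y by exactly m, no matter where you start.

```python
# Illustrative slope and intercept for the line y = m*x + b
m, b = 3.0, 7.0
line = lambda x: m * x + b

# A one-unit step in x changes y by m, whether x goes 4 -> 5 or 58 -> 59
print(line(5) - line(4))    # 3.0
print(line(59) - line(58))  # 3.0
```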
You can find a helpful PowerPoint that walks through this logic – the Crash Course Statistics package – on my website.
In general, I teach it this way too, but the order and emphasis are different. I’m with Ed Tufte: “To understand is to know what cause provokes what effect, by what means, by what rate.”
That means that Size comes first, in combination with Sign. Further, all of this is aimed at ferreting out what relationship between the variables actually exists. That brings in the thing you didn’t emphasize: graphics. Pictures come first and foremost. Teaching students how to read a picture is perhaps the most important thing we can do.
Then and only then do I introduce Sureness. But I never use that word. When we do inference, I don’t think we are looking at whether something is correct. We are making a bet that the relationship we are examining is supported; if the test results would be unlikely otherwise, then we can say that the results are less uncertain. I don’t think significance tests do anything else. I usually delay introducing inference, and then I emphasize sampling and randomization, not testing.
But, hey, that’s just me.