I proceed with this little essay with some trepidation due to the topic I’ve chosen: the idea of *margins of error* in survey research. By *survey research* I mean such things as political polls, public opinion surveys, market research, and so on.

Right up front I can share my conclusion with you. The common understanding of *margins of error* is incorrect. If you decide not to read this mini-exploration in its entirety, just remember this: Whatever you thought or heard that *margins of error* in surveys mean, simply abandon those ideas. Replace them with a mental question mark and you’ll be fine.

I say this because the precise meaning of margins of error is *stranger than fiction.* After all, the concept does come from statistical theory, which can definitely be on the fantastical side. Read on and you will see.

First, to correct those fictitious ideas about *margins of error.* In survey findings, the true answers don’t necessarily fall within the “plus or minus X%” range that the reports announce to us. In other words, “true values” from the larger population surveyed *may or may not* fall within this range. So, the *margin of error*, itself, can be *wrong*. Or, more precisely, it will be somewhat imprecise.

Let’s see how this works by choosing an example survey research topic. How about, “What percentage of all monkeys living in North America like banana nut ice cream?”1 In this example, then, the *true value* is the percentage of the entire population of North American monkeys who like banana nut ice cream. We can only discover this percentage by conducting a *complete census* of the population of North American monkeys.

In statistics, we are pretty much in the dark when it comes to knowing things about an entire population—the whole group’s dietetic preferences, grooming habits, levels of flea infestation, average number of uncles, and so forth. Reality—meaning cost and practicality—stands between us and the *true values* from the population as a whole. So, we investigate the banana nut ice cream issue by conducting a survey whereby we select a fair and unbiased (i.e. randomly drawn) sample from the North American monkey population. (I’ll leave the topic of fair and unbiased sampling for a different discussion.)

Say that we have selected a sample of 250 monkeys and have learned that 84% of them “love” banana nut ice cream. If we want, we can add a clarification about how precise we believe this 84% figure is. This is the *margin of error* for our survey results. Basically, it tells us what level of inaccuracy in the results might come from the sampling process alone. (Inaccuracy in survey results, which statisticians call “error,” can also be due to other factors such as respondents not understanding questions, not telling the truth, mistakes in recording or tallying data, and so forth. Margins of error do not detect or estimate these kinds of errors.)

Plugging the count of monkeys surveyed and some other numbers into a statistical formula, we can qualify the 84% figure by declaring it to have “a ±3% margin of error.” Contrary to popular belief, this does not mean that the *true value* is known to lie somewhere between 81% and 87%. The only thing we know for certain about this *true value* is that it must be between 0% and 100%. Otherwise, we know nothing about it, including whether our survey’s margin of error range happens to include it or not.
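As an aside for the curious: the usual textbook version of that “statistical formula” is the normal-approximation (Wald) formula for a proportion. A minimal sketch in Python, using our assumed 84% and 250 monkeys:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Normal-approximation (Wald) margin of error for a sample proportion.

    z = 1.96 is the standard-normal critical value for a 95% confidence level.
    """
    standard_error = math.sqrt(p_hat * (1 - p_hat) / n)
    return z * standard_error

print(f"±{margin_of_error(0.84, 250):.1%}")  # prints ±4.5% for these inputs
```

(For these particular numbers the textbook formula actually yields about ±4.5%, so treat the ±3% above as a round illustrative figure rather than the output of this exact calculation.)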

But we can make a (you guessed it!) calculated guess about what range the true value might fall within. In fact, we are going to guess that the true value (percentage of North American monkeys loving banana nut ice cream) *does* lie somewhere between 81% and 87%. And we’re going to devise an argument saying that we are “pretty darn sure” that this is true. Since we’re going to resort to statistical tricks… er… methods, we can of course replace “pretty darn sure” with a number—in this case 95%.

A margin of error is actually one-half of its cousin calculation—a calculation known as a “confidence interval.” In our example, 3% is the margin of error, and the 6%-wide range it defines (84% ± 3%, i.e. 81% to 87%) is the *confidence interval*. This is the range we are going to be “pretty darn sure” about. The 95% (our “pretty darn sureness”) we call our “confidence level.” We plug the *confidence level* into a statistical formula which generates our *confidence interval*; the margin of error is then half the interval’s width.
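To make that level-to-interval relationship concrete, here is a sketch (again in Python, again assuming our 84% and 250 monkeys) showing how demanding a different confidence level changes the critical value and therefore the width of the resulting interval:

```python
import math

# Standard-normal critical values for some common confidence levels
Z_BY_LEVEL = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def confidence_interval(p_hat, n, level=0.95):
    """Interval = estimate ± margin of error; the margin is half the width."""
    z = Z_BY_LEVEL[level]
    moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - moe, p_hat + moe)

for level in (0.90, 0.95, 0.99):
    low, high = confidence_interval(0.84, 250, level)
    print(f"{level:.0%} confidence: {low:.1%} to {high:.1%}")
```

Demanding more “sureness” (99%) widens the interval; settling for less (90%) narrows it. Either way, the margin of error is half the width of whatever interval comes out.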

The next step is the weirdest damned part, though. It’s a sort of thought experiment: We imagine that we iteratively select fair and unbiased samples of 250 North American monkeys, repeating this sampling thousands and thousands and thousands and more times—an infinite number of times, actually. Each time our thought-experiment sample provides us with a percentage of our surveyed monkeys who love banana nut ice cream. As we (really) have done with our (real) sample finding (the 84%), we can also imagine that we calculate a confidence interval for each imaginary sample.

It turns out that, due to the calculations involved (including our 95% confidence level) and theorems statisticians have about infinitely repetitive fair and unbiased sampling, 95% of the infinite number of samples drawn produce a confidence interval that includes the true value—the inscrutable real percentage of North American monkeys who love banana nut ice cream. In the other 5% of this infinite number of imagined samples, the confidence intervals end up being off-base, so that they do not encompass the true value.
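We can’t run the thought experiment an infinite number of times, but a computer can run a finite version of it. Here is a small simulation sketch, assuming (purely for the sake of the experiment—in real life we could never know this) that the true value happens to be 84%:

```python
import random

def simulate_coverage(true_p=0.84, n=250, trials=10_000, z=1.96, seed=42):
    """Draw many fair, unbiased samples, build a 95% interval from each,
    and count how often the interval contains the true population value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # one sample of n monkeys; each loves the ice cream with probability true_p
        lovers = sum(rng.random() < true_p for _ in range(n))
        p_hat = lovers / n
        moe = z * (p_hat * (1 - p_hat) / n) ** 0.5
        if p_hat - moe <= true_p <= p_hat + moe:
            hits += 1
    return hits / trials

print(simulate_coverage())  # lands near 0.95, as the theory promises
```

Run it and the proportion of intervals that capture the true value comes out close to the 95% confidence level—the finite echo of the statisticians’ infinite theorem.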

Statisticians consider the process of imaginary sampling to be logically linked to the one real-world sample we drew. The “pretty darn sureness” (the 95% confidence level) pertains to the whole thought-experiment. And our real sample would obviously be a single instance from that infinite set. From this statisticians say (Abracadabra!) “We are 95% confident that the interval from our sample (81% to 87%) contains the true value from the population.” Quantitative psychologists use somewhat different wording, saying that our survey confidence interval is “a range of plausible values for” the true value. (Together, they have great confidence in the plausibility of this whole idea!)

Be careful, though. This does not mean that there is a 95% *probability* that our sample’s confidence interval (81% to 87%) *actually contains the true value.* Either our single interval does or does not contain the true value.2 Nor does it mean that statisticians are convinced by the *specific confidence interval* calculated from a single, real-life survey (like the 81% to 87% in our example). Rather, they are convinced by the general process of drawing fair and unbiased samples in the long-run and then applying confidence interval formulas to these. Any sample-interval combination arrived at this way will win the confidence of statisticians. If our sample got a completely different answer, say 51% to 56%, statisticians will be “95% confident” about that result instead!

I know what you’re thinking. This line of reasoning fell flat for me, too. I’m not so sure—maybe only 21% to 37% so—I can honestly make the logical leap they want us to make. But, I’ll go with the flow. That leaves us, then, to hope that the one confidence interval we did get (81% to 87%) *is* among the 95% that capture the true value. There is a 1-in-20 chance we’ll be wrong about this. But I think we are supposed to say that we are “really darned sure” about using this “pretty darn sure” approach. I can handle that. Monkey see, monkey do.

—————————

1 Since monkeys are not indigenous to the USA and Canada, we would need to decide which population interests us—only indigenous monkeys (meaning only those living in Mexico) or all monkeys currently residing on the continent, including in zoos.

2 Probability is about what might happen in situations where we don’t yet have a result. Before we interview a given monkey about ice cream preferences, we realize his answers can be *loves, is indifferent to, or hates.* So, we might say that each response has a 33 1/3% probability. But, once the monkey answers, probability is irrelevant because we have a definite result. (Or we could say the actual result now has the probability of 100%.) The *true value* in the population is the value we get after all monkeys have answered the banana nut ice cream question. There is no need to bet on this outcome because the value is fixed and certain. And the probability that this fixed value falls within any particular range, like our survey confidence interval, is either 0% or 100%.