Objects in Mirror Are Closer Than They Appear

In January my brother and I were laying laminate flooring in his house. Each time we needed to trim a plank, we stood reverently by his table saw and incanted the familiar carpenter’s adage, “Measure twice, cut once. (Amen.)” My brother said, “It’s the damnedest thing. You can repeat and repeat a measurement, and then find out it is still wrong.” An electrical engineer (he’s working on the 3rd edition of his book on digital signal processing), he speaks from dozens of real-life technical projects.

In the behavioral sciences, as well as in program evaluation and performance assessment, we attempt to measure fairly abstract things—like social class, anxiety, customer loyalty, community need, awareness of services, and so on. Measuring these is difficult. But even in the “hard” sciences measurement is a continuous challenge.

So, I want to write about what statisticians call measurement error. And I might as well start right off with a rather advanced idea: measurement is about reducing error. We try to be systematic in our measures to increase accuracy and minimize error in the final measurements. The thing is, we are never 100% successful. And, truthfully, we hardly ever know how successful we have been. Our only hope is to keep refining our methods and measures to eliminate the sources of error we know about.

Given this, we in librarianship really need to discard the naive idea that we can obtain “hard facts and figures,” an idea bandied about most often in the field of business management. I suggest that we not look to MBAs and business consultants for advice on this topic (and we should especially avoid accountants and efficiency experts). Instead, I believe we will find the measurement approaches used in the physical, natural, behavioral, and statistical sciences more fruitful.

Rather than saying measurement produces facts, it is perhaps better to say it produces impressions. And impressions vary on dimensions like accuracy (precision), breadth (scope), and validity (relevance).

Let’s look at the last of these—validity: how faithfully a given measure or indicator reflects something we are interested in understanding. Say we want to determine how satisfied customers are. Let’s assume that satisfaction is both an attitude and a feeling customers have. But we cannot actually tap these things directly. We can only get hints—indications, we say—of these. Usually we do this by interviewing customers or having them complete questionnaires. Yet there will always be a disconnect between how customers answer questions and their real, internal level of satisfaction with our products and services. This disconnect is a form of error.
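This disconnect can be sketched in a small simulation. Everything here is hypothetical (the “true” satisfaction levels, the size of the response error are assumptions, not data), but it shows why questionnaire answers only hint at the internal state they are meant to reflect:

```python
import random

random.seed(0)

# Hypothetical: each customer carries a true, unobservable satisfaction level.
true_sat = [random.gauss(7, 1.5) for _ in range(500)]

# Their questionnaire answer is only a hint: true level plus response error.
answers = [s + random.gauss(0, 1.0) for s in true_sat]

def corr(xs, ys):
    """Pearson correlation, computed from scratch to stay self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

c = corr(true_sat, answers)
print(f"correlation between answers and true satisfaction: {c:.2f}")
```

The correlation comes out well short of a perfect 1.0, and that gap is the validity problem in miniature: the indicator tracks the thing we care about, but imperfectly.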

Our instruments just are not refined enough to get at the real-life phenomenon we are interested in. So we get only a taste of one aspect of the phenomenon. For instance, the field of business uses reported customer intent to recommend products or services to friends as an indicator of satisfaction. (Actually, the field is more attuned to the evil twin satisfaction indicator known as “negative word-of-mouth behavior!”)

But, suppose that—miraculously—we do develop an advanced instrument that perfectly detects the entire range of customer attitudes and feelings that form “satisfaction.” Say we are able to create a mind probe! (Or a mind meld?) Even with this perfect instrument, other factors can make our measurements inaccurate. In other words, each time we measure something—even with the most proven measurement instruments—extraneous things interfere. A subject we are measuring may be distracted due to a high caffeine level in his blood. Or an electric voltage spike overnight may have thrown off the sensitivity of the probe. Silly examples, I know, but the point is we don’t know what this myriad of interfering factors might be.

Statisticians view any given measurement as the sum of two numbers. The first is a true, valid number (in units we understand) reflecting what we are interested in. The second reflects how odd circumstances and other factors have “spun” the final measure to make it slightly, moderately, or even grossly out of whack. This second number is “error.” In the notation of classical test theory: observed score = true score + error. You can see this idea illustrated here and described further here.
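A minimal sketch of this two-number view, under assumed values (a true value of 80 and random error with a spread of 5, both invented for illustration): no single reading equals the truth, but because random errors tend to cancel, the mean of many readings lands close to it.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 80.0  # the hypothetical "true" number we can never observe directly

def measure():
    """One measurement = true value + error (the statisticians' two-number view)."""
    error = random.gauss(0, 5)  # assumed: random error with mean 0, spread 5
    return TRUE_VALUE + error

readings = [measure() for _ in range(1000)]
print(f"first reading: {readings[0]:.1f}")
print(f"mean of 1,000 readings: {statistics.mean(readings):.1f}")
```

Note the averaging trick helps only with random error. A systematic bias (that voltage spike miscalibrating the probe, say) would shift every reading, and the mean, by the same amount.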

Obviously, there is much more to this topic. But the idea is that we strive to produce accurate measurements so that our final numbers are mostly true. So repeat after me, “There are no such things as hard facts and figures. There are no such things as hard facts and figures. There are no such things…”

4 thoughts on “Objects in Mirror Are Closer Than They Appear”

  1. “There are no such things as hard facts and figures. There are no such things as hard facts and figures. There are no such things…”

    Are you not saying this as a hard fact? Repeating it thrice to emphasize the hardness.
    🙂

    1. Of course you are correct, Dheeraj! On one level, my message is that people shouldn’t be mystified by figures, attributing more precision to them than they really have. (Especially business profit and loss statements!) On another level my message is that “everything is relative–including the idea that everything is relative!” Like Gödel’s theorem. As for the 3 repetitions, I offer them more as incantations than emphases–though I suppose those boil down to the same thing, huh?
