Fickle Users of Figures

The field of program evaluation has grappled with the political context of institutional performance measurement for decades. For libraries and universities, though, the politics of accountability is newer terrain. In some instances, these organizations have unwittingly enrolled in a crash course on the subject, learning in real time how volatile the process can be.

A prime example is the recent controversy about faculty productivity within the University of Texas System (UT). At the request of its Board of Regents, UT released an 821-page spreadsheet disclosing detailed records on faculty compensation, course enrollment, class sections taught, research time allocations, and other related data. Each page of the document contains this curious disclaimer in red:

The data in its current draft form is incomplete and has not yet been fully verified or cross referenced. In its present raw form it cannot yield accurate analysis, interpretations or conclusions.

Essentially, they’re saying, “Well, here are our data, but they can’t be trusted.” How odd that UT administrators were so willing to imply that their institutional research capabilities are impaired rather than attribute any level of accuracy to their data. One would hope that the salary data are fairly accurate since UT’s financial records have to meet acceptable accounting standards. So maybe it’s the other data that are so shaky and unreliable.

Whatever the case, the professed squishiness of the data reminded me of the popular quotation by Sir Josiah Stamp, the early 20th-century British Inland Revenue secretary, economist, banker, and jack-of-several-statistical-trades:

The Government are very keen on amassing statistics—they collect them, add them, raise them to the nth power, take the cube root, and prepare wonderful diagrams. But what you must never forget is that every one of those figures comes in the first instance from the chowty dar (village watchman) who just puts down what he damn pleases.1

Aside from the vagaries of self-reported data—a complex topic on its own—the point is that we must be cognizant of the quality of any data we are considering. And this assessment is always relative to our purposes. For some purposes we need highly accurate and precise data, for others less so. And the same data might be relevant for one investigation but irrelevant for another.

In his 1919 book, Sir Josiah Stamp also had something to say about perceptions of the worthiness of data:

We are all familiar with the class of persons who despise and distrust statistics. They are the first to rush to statistics when they are in trouble, and use them without investigation or discrimination…  At another moment the fickle user of figures seeks to prove that statistics…have no real meaning; and because estimates made upon one particular principle are not really serviceable for every possible use, they are condemned as being useful for none.2

By dissing their own data, the UT officials are trying to inoculate their institutions against unpleasant findings that might lurk in the data. (There are probably other motives for this tactic as well. I would have loved being a fly on the wall in the meetings where that statement was crafted!)

But do the UT administrators seriously believe that sleuthing the data in their present form is a complete waste of time? I can’t imagine that they do. By claiming that the spreadsheet is basically 0% accurate, the administrators imply that the final audited and cross-checked version will be 100% accurate. Neither of these estimates is very likely to be true.

“Accurate3 analysis, interpretations, and conclusions” can be derived from UT’s data as they are, as long as these are qualified by a fair estimate of the accuracy of the data. And you can bet that UT will be receiving requests for this very estimate. (“You mean you released garbage data now, intending to replace it with non-garbage data later?”)

Besides, of all people, academics realize that sound arguments depend upon the quality of the evidence and the logical consistency of the arguments themselves. People can draw really wrong conclusions from the most accurate of data.

The thing that bothers me, though, is that UT’s alarmist disclaimer is the mirror image of the sort of exaggeration I complain about in this blog. Eventually, libraries and universities are going to have to abandon their fickle, knee-jerk reactions of rushing to statistics that support their cases and condemning those that don’t.


1  Stamp, J. (1929). Some economic factors in modern life. London: P. S. King & Son, pp. 258-259.
2  Stamp, J. (1919). The wealth and income of the chief powers. London: Royal Statistical Society, p. 2.
3  Better wording of the disclaimer would use the term valid rather than accurate. The validity of a proposition is its relative weightiness and its logical consistency, including how well patterns in the evidence support the arguments made. Justifiable—or we might also say warranted—conclusions and interpretations are said to be valid, whereas trustworthy data are said to be accurate.
   Accuracy, a concept distinct from precision, pertains to the trueness of the data themselves and to their faithful preservation when quantitative techniques are applied. Data analysis typically means using quantitative techniques to examine and identify trends in data. However, when the term analysis refers to more general implications drawn from data, then the adjective valid would apply. (I think.)
