Conducing To Error

Statistics are of value according only to their exactness. Without this essential quality they become useless, and even dangerous, because they conduce to error.

Adolphe Quetelet, Letters on the Theory of Probabilities as Applied to the Moral and Political Sciences, London, 1849.

You may have heard of the 19th-century Belgian astronomer and statistician Adolphe Quetelet in connection with his novel ideas of the average man and of quantifying the human body using the body mass index.

But did you know that Quetelet was a pioneer of information literacy? He advocated tirelessly for the accuracy of published statistical data. And he believed it was the duty of discerning audiences to keep information sources honest. One of his biographers described Quetelet’s approach this way:

He insisted that every statistical work should give both the sources of the data and the manner of their collection. The checking of statistical documents, he held, should be both moral and material. By moral examination he meant an inquiry into the influences under which the data are collected and the nature of their sources. The material examination consists of whether the numbers are sufficiently large to assure the predominance of constant causes, and sufficiently continuous or uniform to make sure that accidental causes have not unduly affected some of them, and whether they have been combined with mathematical accuracy.1

Quetelet’s message was about not conducing to error. Published data needed to be accurate, consistent, and trustworthy. These are standards that should apply to assessment and research projects undertaken by or on behalf of libraries. Yet too often national library organizations and projects consider their public relations mission exempt from the mundane requirements of information quality. For these organizations and projects, and for their professional marketing colleagues, conducing to error is no big deal.

The most common way that library advocates and marketeers avoid the bother of assembling accurate and trustworthy information is to present their assertions as self-evident. They simply make the boldest of pronouncements like these from The State of American Libraries 2015 report recently released by the American Library Association (ALA):

Libraries touch people’s lives in many ways and stand as protectorates of the tenets of a democratic government.

Public libraries serve as community anchors that address economic, educational, and health disparities in the community.2

No questions asked nor justifications offered. Nor indications of how widespread or how effective and efficient these various people-touching, democracy-protecting, community-anchoring, and disparity-addressing activities might be.

A variant on this tactic is to present other information as independent verification of advocacy pronouncements regardless of the quality or relevance of the information. The ALA report provides a good example of this:

The impact of academic librarians on student learning can be seen in the 2014 National Survey of Student Engagement [NSSE], which reports that 33% of first-year students agreed that their experience at their institution contributed “very much” to their knowledge, skills, and personal development in using information effectively. More impressively, 47% of college seniors agreed with the same statement.3

The latter percentage was notable enough to re-quote in large print as seen here:


Highlighted quote (a pull quote) from The State of American Libraries 2015.

Curiously, the highlighted quote never mentions why the 47% figure is significant. The significance, seen in the ALA statement above, is the claim that the NSSE survey findings prove that U.S. academic libraries facilitate student learning nationwide.

This claim will definitely be news to the field of academic library assessment, which has struggled for more than a decade with the libraries-and-student-learning conundrum. Who would have thought that the solution to this puzzle could be found in a single questionnaire item from the NSSE survey! What fantastic luck!

Except, of course, this is too good to be true. The NSSE survey doesn’t actually solve the libraries-and-student-learning puzzle because the ALA misconstrued what the survey says. To explain this, let me begin with the wording of the questionnaire item:

How much has your experience at this institution contributed to your knowledge, skills, and personal development in using information effectively?
 Very Little · Some · Quite a Bit · Very Much 4

Note that, rather than asking students to assess their own learning about information use, the question asks about contributions arising from institutional experiences. Thus the construct the researchers are measuring is broader than student learning, perhaps something akin to ownership or appreciation of information use principles. Nothing in the item indicates what level of (let’s call it) information use competency students possessed, nor what that level was before they entered their colleges/universities. Students only needed to estimate the institutions’ contributions to whatever competency levels they happened to possess at the time.

Therefore, it’s possible for students with low information use competency levels to report high institutional contributions to these levels. Conversely, students with high competency levels could still have low perceptions of institutional contributions. (These might be students well-versed in information use prior to attending the colleges/universities where they took the survey.) So, for students in either of these categories there’s a mismatch between competency levels and contribution levels.
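To make the two mismatch cases concrete, here is a minimal simulation sketch in Python. Every quantity in it is an invented assumption for illustration, not NSSE data: it posits students who arrive with some prior competency, receive some institutional gain, and rate only that gain, then counts how often ratings and final competency levels diverge.

```python
import random

# Hypothetical illustration only -- all numbers are invented, not NSSE data.
random.seed(1)

students = []
for _ in range(10_000):
    prior = random.random()          # assumed competency on entry (0 to 1)
    gain = random.random() * 0.3     # assumed institutional contribution
    final = min(prior + gain, 1.0)   # competency level at survey time
    # The survey item captures the perceived gain, not the final level
    reports_very_much = gain > 0.2
    students.append((final, reports_very_much))

# Mismatch case 1: high competency but only a modest reported contribution
high_comp_modest = sum(1 for f, r in students if f > 0.8 and not r)
# Mismatch case 2: low competency but a "very much" reported contribution
low_comp_very_much = sum(1 for f, r in students if f < 0.4 and r)

print("high competency, modest contribution:", high_comp_modest)
print("low competency, 'very much' contribution:", low_comp_very_much)
```

Both counts come out well above zero, which is the point: ratings of an institution’s contribution can diverge from students’ actual competency levels in either direction.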

For most students, reported contribution levels may correspond fairly well with competency levels. But because of the exceptions just noted, data from the NSSE survey item cited by the ALA are not particularly strong indicators of student learning. In fact, the NSSE researchers didn’t put much stock in them, relying instead on behaviorally oriented survey items. Unfortunately, what the researchers found was discouraging:

Less positive were students’ uses of information sources. While most students used information sources outside of course readings to complete an assignment, many appeared to use information uncritically. Only 37% of first-year students and 36% of seniors frequently decided not to use an information source due to questionable quality…. Only about half of first-year and senior students frequently looked for a reference cited in something they had read.5

The researchers were disappointed that only about 35% to 50% of students reported applying good information use practices, meaning that roughly 50% to 65% did not. Clearly, they had higher expectations for students than the ALA did, since the ALA report portrayed percentages in this lower range (33% and 47%) as favorable.
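For readers who want to check that arithmetic, it is just a matter of complements. Here is a quick sketch; the percentages are the ones quoted from the NSSE passage above, with “about half” treated as an approximate 50%:

```python
# Complements of the NSSE figures quoted above ("about half" taken as 50%)
practices = {
    "first-years rejecting a questionable source": 0.37,
    "seniors rejecting a questionable source": 0.36,
    "students looking up a cited reference (approx.)": 0.50,
}
for practice, did in practices.items():
    print(f"{practice}: {did:.0%} did, {1 - did:.0%} did not")
```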

Even if we consider the NSSE item to be an accurate measure of student learning, the ALA claim has two other significant flaws. The first is that nothing in the item, or in any other item in the survey, links library services to this measure. Apparently, the ALA hopes readers won’t notice this missing link and will accept the logic of its argument, which works like this:

1. Because some students reported that their institutional experiences contributed to their information use knowledge/skills; and
2. Because these contributions indicate that student learning did occur; and
3. Because university libraries are a part of institutional experiences;
4. Therefore, university libraries had an impact on student learning.

Credible evidence of a libraries-and-student-learning connection, which should replace items (2) through (4) above, has two aspects: students need to have been exposed to library teaching activities, and those activities then need to be shown to have been effective. Elsewhere I’ve written about the research protocols required to conclude that a given program intervention actually caused changes observed in research subjects. But we don’t have to worry about these protocols here because common sense can guide us just as well. It only makes sense that one questionnaire item could not possibly prove that libraries produce student learning about information use. Surely, more in-depth measurements would be required.

The second flaw in the ALA claim is something I just discussed in my prior post as the cherry-picking fallacy. The ALA report cherry-picked the smaller number of (apparent) student learning successes and discarded the larger number of failures. This also necessitated the ALA’s low expectations for student learning, as I mentioned.

Then, the ALA added a second level of cherry-picking by also ignoring other findings from the NSSE report indicating that student learning was disappointing. As a result, this doubly selective reporting led the ALA to credit libraries with the small amount of learning that occurred along with the larger amount that did not!

In the end, the supposed independent verification of the libraries-produce-student-learning claim doesn’t verify anything. I guess about all I can say is that the inexactness of the ALA claim conduces to error. It provides readers with wrong information, which is what Quetelet was trying to prevent in the 19th century.

Incidentally, Quetelet also happened to live during an information revolution, that is, in an era when large amounts of statistical data were just becoming publicly available. The audience he was trying to reach consisted of practitioners in the developing arenas of state statistics and political arithmetic. So Quetelet was appealing to producers of data.

In the 21st century, statisticians and educators have come to recognize the importance of statistical literacy on the part of the general public, that is, consumers of data. This form of literacy challenges consumers to proactively assess the quality of the quantitative information they encounter. As literacy/numeracy assessment specialist Iddo Gal advises, members of the general public should realize that:

…it is legitimate to be critical about statistical messages or arguments, whether they come from official or other sources, respectable as they may be…. [It] is legitimate to have concerns about any aspect of a reported study or a proposed interpretation of its results, and to raise [pertinent questions] even if [citizens] have not learned much formal statistics or mathematics, or do not have access to all the background details needed.6

Somebody has to keep the information producers honest.

 
—————————

1   Hankins, F.H. 1908. Adolphe Quetelet as Statistician, New York: Columbia University, pp. 42-43.
2   American Library Association, 2015. State of American Libraries 2015, Chicago: ALA, p. 2.
3   American Library Association, 2015. State of American Libraries 2015, p. 6. The data cited in the statement are from tables appearing in the separate NSSE document referenced in the next footnote.
4   Center for Postsecondary Research, 2014. National Survey of Student Engagement: 2014 Topical Module: Experiences with Information Literacy, Bloomington, IN: Indiana University.
5   Center for Postsecondary Research, 2014. National Survey of Student Engagement: Selected Results, p. 14.
6   Gal, I. 2002. Adults’ Statistical Literacy: Meanings, Components, Responsibilities, International Statistical Review, 70:1, p. 19.
