Research Literacy

In 2008 the Online Computer Library Center (OCLC) published a marketing research report addressing the need to increase public support for libraries. The study, From Awareness to Funding: A Study of Library Support in America, was funded by a $1.2 million grant awarded by the Bill & Melinda Gates Foundation.

Respondents in the study were divided into four groups according to how supportive of public library funding they were. Somehow (the OCLC researchers don’t say how) certain questionnaire responses qualified respondents for assignment to each group. But the results don’t always make sense. For instance, 20% of the “super-supporter” group said they were not definitely committed to voting in favor of libraries. And 6% of this group were either unsure how they’d vote or said they would vote “No.” Wondering how such uncommitted respondents ended up assigned to the group that is “super” supportive of libraries, I contacted the OCLC researchers. Unfortunately, I never got a response to this question.

There are other perplexing aspects of the study, such as the details of the study sample: What was the sampling frame? Were subjects randomly selected? Is the sample geographically representative? Apparently, OCLC’s research is proprietary and not subject to the kind of outside verification and review that is the mainstay of academic and scientific research.

So, on to a couple of things that are clearly evident from the 212-page report. The researchers devised two statistical indexes to describe key attributes of their study respondents: one based on respondents’ library visit frequency, the other on responses to specific questionnaire items. Imaginative charts comparing the two indexes for different respondent groups are spread throughout the first half of the report. An article I wrote examining these indexes appears in the current issue of Public Library Quarterly (PLQ), vol. 28, no. 3 (July-September 2009).

In the PLQ article I point out that the indexes happen to have no units of measure. In the language of mathematicians, they are “dimensionless.” This makes it difficult to interpret the repeated comparisons that appear in the study. The annotated graphic below, from the PLQ article, attempts to explain a sample chart from the OCLC report:

[Figure 3 from the PLQ article, annotating a sample chart from the OCLC report. © 2009 Taylor & Francis Group]

I won’t elaborate on this graphic because I want you to read the complete PLQ article. Suffice it to say that my annotations are the rectangular bubbles, arrows, and parenthetical axis labels. I conclude that, while the indexes appear to be accurate,* they are more complicated than they need to be, given that they are based on such simple data. As a result, they aren’t particularly useful.
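To make the “dimensionless” point concrete, here is a minimal sketch in Python of how an index of this general kind might be constructed. The construction is entirely hypothetical (OCLC has not disclosed its formulas); it simply shows why a number whose units have been divided away is hard to interpret on its own.

    # Hypothetical "visit frequency index" for two respondent groups.
    # All numbers here are invented for illustration; OCLC has not
    # disclosed how its actual indexes are computed.

    # Suppose each group's mean annual library visits is divided by the
    # overall sample mean. The division cancels the units (visits/year),
    # leaving a pure, dimensionless number.

    group_mean_visits = {"super_supporters": 18.0, "detractors": 6.0}  # visits/year
    sample_mean_visits = 10.0                                          # visits/year

    index = {
        group: mean / sample_mean_visits  # (visits/year) / (visits/year) = unitless
        for group, mean in group_mean_visits.items()
    }

    print(index)  # {'super_supporters': 1.8, 'detractors': 0.6}

A chart reader now sees 1.8 versus 0.6 with no units attached. Is one group three times as “supportive” as the other? Three times what, exactly? Without the underlying measure, the comparison has no concrete meaning.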

The main message of the article, though, is that as consumers of advocacy research we need to be research literate, meaning that we have the knowledge necessary to examine research findings and determine whether the study data and methodology really do justify the study’s conclusions. Certainly, we need our leading library organizations to pursue advocacy research projects like this one. At the same time, we have to look beyond deluxe graphics and polished text and focus on the actual substance of library studies. As information specialists and brokers, we have a special obligation to verify that research done on behalf of libraries is understandable, accurate, objective, transparent, and relevant.


*  OCLC declined to respond to my repeated requests for the information necessary to audit the data in their report. They did give permission to reproduce their charts in PLQ.
