Data Detour

Nowadays libraries aspire to be data-driven. Almost everyone agrees that collecting and using data to improve organizational performance is a good thing. Implied in the various regimens promoting this idea (library assessment, managing-for-results, evidence-based practice, quality management, etc.) is the need to practice two virtues: patience and determination. These virtues also happen to be part and parcel of information literacy. We repeatedly advise users to avoid settling for the most convenient and quickly accessible information that shows up, and we urge them to put ample time and energy into thinking critically about their question, its context, and the complete range of potentially relevant information.

Being data-driven requires this same discipline. I mention discipline because this blog entry concerns a challenging but quite educational quantitative topic. If you follow it to its conclusion, I guarantee your numeracy muscles will be invigoratingly exercised!

I begin with this caveat: When advocacy or public relations professionals fail to think critically about their data, they risk communicating the wrong message to their audiences. Last month I ran across an interesting case of this in a radio public service announcement by the Ohio Highway Patrol . . .    [Read more]

Posted in Advocacy, Measurement, Numeracy, Probability, Statistics

Oh, The Weather Outside Is Frightful!

The graphic below is a variant of one I blogged about in my prior entry. Its designers added a storm to create what might be called an inclement tug-of-war. This version of the graphic is from the Libraries Connect Communities: Public Library Funding & Technology Access Study 2011-2012 (PLFTAS) press kit. Both versions communicate this same basic message: The Great Recession led to severe and cumulative cuts in library funding and, at the same time, to an unrelenting surge in demand for traditional and technological services. The supporting data are the same in both versions. As I explained last time, since the data are not measures of actual library funding or usage, they don’t really confirm the graphic’s claims.

Source: ALA, Libraries Connect Communities, 2012.

Actually, there are funding and usage data that can shed light on these claims, namely the Institute of Museum and Library Services (IMLS) survey data collected annually from 9,000+ U.S. public libraries.1 I decided to examine data for 2005 through 2010 from this dataset.2 These years cover the official span of the Great Recession (December 2007 to June 2009), with some leeway to identify trends that were ongoing prior to the recession.

First, I’ll summarize what the data show and then follow with details. For U.S. public libraries as a whole, funding cuts during the Great Recession were not nearly the calamity that library advocacy stories described, nor were the cuts cumulative over multiple years. Overall funding grew moderately through the recession and then began falling in 2010 . . . [Read more]
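Here is a minimal sketch, in Python with pandas, of the kind of nationwide trend check described above. It is an illustration only, not the actual analysis: the file names are hypothetical stand-ins for the annual IMLS Public Libraries Survey datafiles, and the TOTINCM column name (total operating revenue) is an assumption to verify against each year's file documentation, since layouts change between survey years.

```python
# A minimal sketch of a nationwide funding-trend check (illustrative only).
# File names are hypothetical; verify the TOTINCM variable (total operating
# revenue) against each year's IMLS PLS documentation. Remember footnote 1:
# each survey year has a different count of reporting libraries, which
# complicates direct cross-year comparisons of totals.
import pandas as pd

totals = {}
for year in range(2005, 2011):  # FY2005 through FY2010
    df = pd.read_csv(f"pls_fy{year}.csv", usecols=["TOTINCM"])
    # In PLS files, negative codes flag missing or suppressed values.
    totals[year] = df.loc[df["TOTINCM"] >= 0, "TOTINCM"].sum()

trend = pd.Series(totals, name="total_operating_revenue")
print(trend)                        # nationwide totals by year
print(trend.pct_change().round(3))  # year-over-year proportional change
```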

 
—————————

1  Each survey year has different counts of libraries reporting specific measures.
2  The 2010 public library survey data, issued last June, are the most recent available from IMLS.

Posted in Advocacy, Measurement, Statistics

Statistical Hearsay

I admit it. I’ve been suffering from a case of statistician’s block. No inspiring ideas for this blog have presented themselves since July. Well, actually, a couple did surface but I resisted them. Very recently, though, the irresistible “infographic” shown here came to my attention. I am therefore pleased to return to my keyboard to discuss this captivating image with you!

Source: ALA, Libraries Connect Communities, 2012.

The infographic appears in the executive summary of the American Library Association’s (ALA) report, Libraries Connect Communities: Public Library Funding & Technology Access Study 2011-2012, published in June. The graphic’s basic message is that of an ongoing struggle between two sides. On the left, the blue silhouetted figures represent public demand for technology services at libraries, with four percentages quantifying levels of use. The lone silhouette on the right personifies library funding (is he a municipal budget official?), with a single percentage quantifying that. Apparently, in the tug-of-war metaphor, the quantities on the left are overpowering the right side.

Let’s look a bit closer at the quantitative evidence in this infographic . . . [Read more]

Posted in Advocacy, Data visualization, Measurement

Honest-to-Goodness Transformation

A while back, in his 21st Century Library Blog, Steve Matthews commented on some data appearing in The Library in the City, a report published by The Pew Charitable Trusts’ Philadelphia Research Initiative. Dr. Matthews was puzzled by an inconsistency between statistical trends highlighted in the report and standard per capita circulation, visits, and Internet computer measures. He noted, for example, that among the libraries studied, Columbus Metropolitan Library had the greatest cumulative decline in visits (-17%) over the seven-year study period. Yet in 2011 Columbus ranked 2nd in the group on visits per capita. The opposite was true for the Enoch Pratt Free Library in Baltimore. Although that library showed the second-highest cumulative increase in visits (25%), its 2011 per capita visit rate was the lowest in the group. Curious patterns, indeed.
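To see how both things can be true at once, consider a toy computation with made-up numbers (not the actual Columbus or Baltimore figures). The level of use and the change in use are different quantities: a library with a very high baseline can lose 17% of its visits and still post a top per capita rate, while a low-baseline library can gain 25% and still rank last.

```python
# Toy illustration with made-up numbers: a steep cumulative decline can
# coexist with a high per capita rank, because a rate's level and its
# change over time are different quantities.
libraries = {
    # name: (visits at start, visits at end, service population)
    "High-use library": (9_000_000, 7_470_000, 800_000),  # big but declining
    "Low-use library":  (2_400_000, 3_000_000, 640_000),  # small but growing
}

for name, (v_start, v_end, pop) in libraries.items():
    change = (v_end - v_start) / v_start   # cumulative proportional change
    per_capita = v_end / pop               # end-of-period visits per capita
    print(f"{name}: change {change:+.0%}, visits per capita {per_capita:.1f}")

# High-use library: change -17%, visits per capita 9.3
# Low-use library: change +25%, visits per capita 4.7
```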

There are a couple of statistical dynamics at play here . . . [Read more]

Posted in Data visualization, Measurement, Statistics

Assessment’s Top Models

I recently attended a library webinar where the question of the difference between outputs and outcomes came up. The main idea was that outputs are programs and services an organization delivers, whereas outcomes are changes that occur in recipients, or their life situations, as a result of having received program services. Another was that outputs are distinguished by their more specific focus compared with outcomes, which are more general in scope. When I heard this second idea, it seemed correct in a way but incorrect in another. Mulling this over later, I began to wonder whether the first idea is not quite right, either.

To explain these new definitional doubts I’m having, I’ll need to review a couple of evaluation models with you. But first I’d like to clear something up. Just because some expert somewhere has drawn a diagram with rectangles and arrows and concise labels and called it a “model” doesn’t mean her/his creation is true, or even remotely so. Models are only true if . . . [Read more]

Posted in Outcome assessment, Process evaluation, Program evaluation, Program implementation

Fun With Numbers

After so much stuff about evaluation theory and practice in this blog, it’s time for some fun! And what better fun is there than fun with numbers?1

Let’s begin our diversion with the graph shown here, from my prior post. Looking closely, notice how some of the gold circles lie in neat, parallel bands.

Data Source: IMLS 2009 Public Libraries Datafiles.

These bands are more obvious in the next two charts, which ‘zoom in’ on the data by decreasing the vertical axes’ value ranges. When I first saw this pattern, I suspected that something had corrupted the data. Double-checking, I found the data were fine, or at least true to the values in the original IMLS datafile. So I decided to resort to that popular and trusty problem-solving technique . . . [Read more]
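If you would like to try the ‘zoom in’ move yourself, here is a minimal matplotlib sketch using simulated stand-in data, not the IMLS 2009 datafile. The toy data, small integer counts divided by population, is an assumption made purely so the simulation produces visible bands; it is not offered as the explanation of the pattern in the real data.

```python
# A minimal 'zoom in' sketch with simulated data (not the IMLS datafile).
# The banding here comes from small integer counts divided by population,
# an assumption made only so the simulation shows bands at all.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
population = rng.integers(1_000, 500_000, size=2_000)  # service populations
counts = rng.integers(0, 50, size=2_000)               # small integer counts
per_capita = counts / population                       # banded ratios

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, ylim, title in (
    (axes[0], None, "Full y-axis range"),
    (axes[1], (0.0, 0.0002), "Decreased y-axis range"),
):
    ax.scatter(population, per_capita, s=8, alpha=0.4, color="goldenrod")
    ax.set_xlabel("Service population")
    ax.set_ylabel("Count per capita")
    ax.set_title(title)
    if ylim is not None:
        ax.set_ylim(*ylim)  # the narrower range makes the bands visible
plt.tight_layout()
plt.show()
```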

 
—————————
1  No, this is not an April Fool’s joke. I propose this fun in all seriousness!

Posted in Data visualization, Measurement, Statistics

Indentured Certitude

I want to share some information with you from a resource I mentioned last month. The resource is Edward Suchman’s 1967 book, Evaluative Research, and the information is this diagram, which presents a basic model of evaluation:1

I share the diagram because it presents two ideas that don’t always percolate to the top of discussions of library outcome assessment. The first idea is the need for programmatic values to be made explicit beforehand. Suchman, who worked in the public health field, gave this example:

Suppose we begin with the value that it is better for people to have their own teeth rather than false teeth. We may then set our goal that people shall retain their teeth as long as possible.2

Of course, it’s quite possible to hold different values. For instance, one might prefer false teeth over natural ones . . .     [Read more]

 
—————————

1  Suchman, E. A. (1967). Evaluative research: Principles and practice in public service and social action programs. New York: Russell Sage, p. 34.
2  Suchman, E. A., p. 35.

Posted in Library assessment, Outcome assessment, Program evaluation