I admit it. I’ve been suffering from a case of statistician’s block. No inspiring ideas for this blog have presented themselves since July. Well, actually, a couple did surface but I resisted them. Very recently, though, the irresistible “infographic” shown here came to my attention. I am therefore pleased to return to my keyboard to discuss this captivating image with you!
Source: ALA, Libraries Connect Communities, 2012.
The infographic appears in the executive summary of the American Library Association’s (ALA) report, Libraries Connect Communities: Public Library Funding & Technology Access Study 2011-2012, published in June. The graphic’s basic message is an ongoing struggle between two sides. On the left the blue silhouetted figures represent public demand for technology services at libraries, with four percentages quantifying levels of use. The lone silhouette on the right side personifies library funding (is he a municipal budget official?), with a single percentage quantifying that. Apparently, the quantities on the left are, using the tug-of-war metaphor, overpowering the right side.
Let’s look a bit closer at the quantitative evidence in this infographic . . . [Read more]
A while back, in his 21st Century Library Blog, Steve Matthews commented on some data appearing in a report entitled The Library in the City, published by the Pew Charitable Trusts Philadelphia Research Initiative. Dr. Matthews was puzzled by an inconsistency between statistical trends highlighted in the report and standard per capita measures of circulation, visits, and Internet computer use. He noted, for example, that among the libraries studied, Columbus Metropolitan Library had the greatest cumulative decline in visits (-17%) over the seven-year study period. Yet, in 2011 Columbus ranked 2nd in the group on visits per capita. The opposite was true for the Enoch Pratt Library in Baltimore. Although the library showed the second highest cumulative increase in visits (at 25%), its 2011 per capita visit rate was the lowest in the group. Curious patterns, indeed.
There are a couple of statistical dynamics at play here . . . [Read more]
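To see how these two patterns can coexist, here is a minimal sketch with entirely hypothetical visit counts and populations (only the percentage changes echo the report's figures). A percent change describes a trend, while a per capita rate describes a level, and the two are free to move independently:

```python
# Hypothetical figures (not from the Pew report) showing how a
# cumulative percent change (a trend) and a per capita rate (a level)
# can point in opposite directions.
libraries = {
    # name: (visits at start of period, visits in 2011, population served)
    "High level, declining": (10_000_000, 8_300_000, 800_000),
    "Low level, growing":    (1_500_000, 1_875_000, 650_000),
}

for name, (start, end, population) in libraries.items():
    pct_change = 100 * (end - start) / start   # cumulative trend over the period
    per_capita = end / population              # current level of use
    print(f"{name}: {pct_change:+.0f}% cumulative change, "
          f"{per_capita:.1f} visits per capita")
```

The first library loses 17% of its visits yet still delivers more than 10 visits per capita because it started from a much larger base; the second grows 25% and still sits below 3. Which library looks stronger depends entirely on which statistic you ask.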
I recently attended a library webinar where the question of the difference between outputs and outcomes came up. The main idea was that outputs are programs and services an organization delivers, whereas outcomes are changes that occur in recipients, or their life situations, as a result of having received program services. Another was that outputs are distinguished by their more specific focus compared with outcomes, which are more general in scope. When I heard this second idea, it seemed correct in a way but incorrect in another. Mulling this over later, I began to wonder whether the first idea is not quite right, either.
To explain these new definitional doubts I’m having, I’ll need to review a couple of evaluation models with you. But first I’d like to clear something up. Just because some expert somewhere has drawn a diagram with rectangles and arrows and concise labels and called it a “model” doesn’t mean her/his creation is true, or even remotely so. Models are only true if . . . [Read more]
After so much stuff about evaluation theory and practice in this blog, it’s time for some fun! And what better fun is there than fun with numbers?1
Let’s begin our diversion with the graph from my prior post, shown here (data source: IMLS 2009 Public Libraries Datafiles). Looking closely, notice how some of the gold circles lie in neat, parallel bands. These bands are more obvious in the next two charts, which ‘zoom in’ on the data by narrowing the vertical axis value ranges. When I first saw this pattern, I suspected that something had corrupted the data. Double-checking, I found the data were fine, or at least they were true to the values in the original IMLS datafile. So, I decided to resort to that popular and trusty problem-solving technique . . . [Read more]
1 No, this is not an April Fool’s joke. I propose this fun in all seriousness!
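As a teaser, here is one mechanism, offered purely as a hypothesis and demonstrated with synthetic data rather than the IMLS file, that produces bands like these: when a per capita measure is built from counts that take only a few small integer values, each integer count k traces its own curve k/population, and on logarithmic axes those curves appear as neat parallel lines.

```python
# Synthetic illustration (not the IMLS data): per capita ratios with
# small integer numerators fall into parallel bands on log-log axes,
# one band for each integer count k.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
population = rng.integers(1_000, 50_000, size=2_000)  # hypothetical service populations
counts = rng.integers(1, 6, size=2_000)               # small integer counts, 1 through 5
per_capita = counts / population

plt.scatter(population, per_capita, s=8, alpha=0.4)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("Population served")
plt.ylabel("Count per capita")
plt.title("Banding from integer numerators (synthetic data)")
plt.show()
```

If the charted measure really is a ratio of small integer counts, bands like these are expected arithmetic, not a sign of corrupted data.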
I want to share some information with you from a resource I mentioned last month. The resource is Edward Suchman’s 1967 book, Evaluative Research, and the information is this diagram, which presents a basic model of evaluation:1
I share the diagram because it presents two ideas that don’t always percolate to the top of discussions of library outcome assessment. The first idea is the need for programmatic values to be made explicit beforehand. Suchman, who worked in the public health field, gave this example:
Suppose we begin with the value that it is better for people to have their own teeth rather than false teeth. We may then set our goal that people shall retain their teeth as long as possible.2
Of course, it’s quite possible to hold different values. For instance, one might prefer false teeth over natural ones . . . [Read more]
1 Suchman, E. A. (1967). Evaluative research: Principles and practice in public service and social action programs. New York: Russell Sage, p. 34.
2 Suchman, E. A., p. 35.
The campaign to assess public library outcomes got a tremendous boost from Library Journal’s Director Summit held last month in Columbus, Ohio. It’s heartening to see library leaders getting serious about making outcome assessment integral to the management of U.S. public libraries! The excitement and determination are necessary for making progress on this front. And it sounds like the summit was designed to let folks absorb relevant ideas in ways that make them their own.
The onset of this newfound energy is the perfect time to commit ourselves to gaining a firm grasp on the core concepts and methods of outcome assessment. Although measurement of outcomes is a new undertaking for libraries, it has been around for a long time in other contexts. In fact, outcome evaluation approaches have been studied, debated, refined, and chronicled over the past forty-five years . . . [Read more]
It’s great to see other librarians advocating for the same causes I harp on in this blog. I’m referring to Sarah Robbins, Debra Engel, and Christina Kulp of the University of Oklahoma, whose article appears in the current issue of College & Research Libraries. The article, entitled “How Unique Are Our Users?”,1 warns against the folly of using convenience samples. It implores library researchers to honestly explain the limitations of their studies. And the authors are resolute about the importance of understanding the generalizability of survey findings, a topic which also happens to be the main focus of their study.
I bring up their article for a different reason, however. It is an example of how difficult and nuanced certain aspects of research and statistics can be. Despite the best of intentions, it’s amazingly easy to get tripped up by one or another detail. Robbins and her colleagues got caught in the briar patch that is statistics and research methods. I say so because the main conclusions reached in their study are not actually borne out by their survey results . . . [Read more]
1 Robbins, S., Engel, D., & Kulp, C. (2011). How unique are our users? Comparing responses regarding the information-seeking habits of engineering faculty. College & Research Libraries, 72(6), pp. 515-532.
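Since generalizability is the crux here, a small simulation may help make the worry concrete. The data below are entirely invented (they have nothing to do with the authors’ survey); the sketch only shows the general mechanism by which a convenience sample, where heavy users are the easiest to reach, drifts away from the population value a random sample recovers:

```python
# Invented illustration of the convenience-sample problem: heavy
# library users are easier to reach, so sampling "whoever shows up"
# overstates the population mean that a random sample recovers.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical population: annual library visits per faculty member.
population = rng.poisson(lam=6, size=100_000)

# Simple random sample: every member equally likely to be chosen.
random_sample = rng.choice(population, size=300, replace=False)

# Convenience sample: probability of inclusion proportional to use,
# i.e., frequent visitors are the ones a researcher happens to meet.
weights = population + 1
weights = weights / weights.sum()
convenience_sample = rng.choice(population, size=300, replace=False, p=weights)

print(f"Population mean:         {population.mean():.2f}")
print(f"Random-sample mean:      {random_sample.mean():.2f}")
print(f"Convenience-sample mean: {convenience_sample.mean():.2f}")
```

The random sample lands near the true mean while the convenience sample overshoots it, which is precisely why the authors’ insistence on honest statements of limitations matters.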