A basic tenet of public librarianship is the idea that each library and its communities are unique. While libraries share certain characteristics, their products, services, and operations are (in theory) highly customized to fit local conditions. I didn’t realize how strong a tenet this was until I heard this declaration at an Ohio Library Council conference: “All library excellence is local.” Wow, pretty unequivocal! Granted, public libraries do acknowledge that they have certain things in common with other libraries, but it sure sounds like unique characteristics trump everything else.
This contrast between things standard and things tailored (or customized) turns out to be a central theme in evaluation research as well. The idea has been noted, for instance, by Mark Lipsey, co-author of the leading textbook on program evaluation:
One of the difficulties in evaluating a specific program is that [there is] little basis for knowing which aspects of the program work in relatively predictable ways and which are very distinctive to that particular program situation. A given intervention…may be known to have positive effects when used with some client populations but [not for others]. Similarly, one variation of a service may be effective, but that may not be true of another variation, especially when applied in a different program situation.1
In Lipsey’s quote, just substitute “standard” for “relatively predictable” and “custom” or “tailored” for “distinctive.”
Here’s the same idea from the Kellogg Foundation’s evaluation handbook:
All too often, conventional approaches to evaluation focus on examining only the outcomes or the impact of a project without examining the environment in which it operates or the processes involved in the project’s development. Although we agree that assessing short- and long-term outcomes is important and necessary, such an exclusive focus on impacts leads us to overlook equally important aspects of evaluation–including more sophisticated understandings of how and why programs and services work, for whom they work, and in what circumstances.2
Suppose that our profession produces a rigorously conducted outcome evaluation of, say, summer reading programs and the study affirms the effectiveness of these programs. Then, what claims can be made about library summer reading programs nationwide? Can we boast that this effectiveness applies to any and every public library summer reading program and attendee group? Experts from the field of program evaluation tell us otherwise.
A public library can point to the outcome study as evidence of its local program’s effectiveness only to the extent that its summer reading program matches the content and delivery approach of the programs in the study, and its clientele matches the study participants. Public libraries view their attunement to the nature and needs of unique communities as the foundation for their excellence and effectiveness. This puts the onus on libraries to demonstrate how well their custom practices work for their local clientele. Pretty tall order.
1 Lipsey, M. W. (2000). Meta-analysis and the learning curve in evaluation practice. American Journal of Evaluation, 21(2), p. 209.
2 W. K. Kellogg Foundation (1998). W. K. Kellogg Foundation Evaluation Handbook, p. 20.