A basic tenet of public librarianship is the idea that each library and its community are unique. While libraries share certain characteristics, their collections, services, and operations are ostensibly customized to fit local conditions. I didn’t realize how strong a tenet this was until I heard this declaration at an Ohio Library Council conference: “All library excellence is local.” Wow, pretty unequivocal! Granted, public libraries do acknowledge that they have certain things in common with other libraries. And libraries are known for collaborating and cooperating with each other. But otherwise, this sure sounds like unique characteristics trump nearly everything else.
This distinction between things standard and things tailored (customized) turns out to be a central theme in program evaluation also. The distinction has been discussed, for instance, by Mark Lipsey, co-author of the leading textbook on program evaluation:
One of the difficulties in evaluating a specific program is that [there is] little basis for knowing which aspects of the program work in relatively predictable ways and which are very distinctive to that particular program situation. A given intervention…may be known to have positive effects when used with some client populations but [not for others]. Similarly, one variation of a service may be effective, but that may not be true of another variation, especially when applied in a different program situation.1
As an exercise, I ask the reader to replace in Lipsey’s quote the phrase “relatively predictable” with the term “standard,” and the term “distinctive” with “tailored” or “customized.” So the question becomes: to what extent do libraries understand which aspects of their operations work better when customized and which should follow standard library practices?
Here’s a similar statement from the Kellogg Foundation’s evaluation handbook:
All too often, conventional approaches to evaluation focus on examining only the outcomes or the impact of a project without examining the environment in which it operates or the processes involved in the project’s development. Although we agree that assessing short- and long-term outcomes is important and necessary, such an exclusive focus on impacts leads us to overlook equally important aspects of evaluation—including more sophisticated understandings of how and why programs and services work, for whom they work, and in what circumstances.2
Suppose that in our profession someone conducts a rigorous outcome evaluation study of, say, summer reading programs. And the study affirms the effectiveness of these programs. What claims can be made from this study about library summer reading programs nationwide? Can we boast that this effectiveness applies to any and every public library summer reading program and attendee group? Experts from the field of program evaluation advise otherwise.
Only to the extent that a library’s summer reading program matches the content and delivery methods of programs examined in the outcome study, and its clientele is a fair match for those in the study, can a public library point to this outcome study as evidence of its local program’s effectiveness. Public libraries view their attunement to the nature and needs of unique communities as the foundation for their excellence and effectiveness. Because of this, they also bear the burden of demonstrating how well their local practices work for their specific clientele and in their unique communities. Pretty tall order.
1 Lipsey, M. W. 2000. “Meta-analysis and the Learning Curve in Evaluation Practice,” American Journal of Evaluation 21(2): 209.
2 W. K. Kellogg Foundation. 1998. W. K. Kellogg Foundation Evaluation Handbook, 20.