These past several months the LJ Index of Public Library Service has been in the news in all regions of the USA, showcasing libraries that excel in providing services to their communities. So, it was troubling to see the Index become newsworthy in a conflict between a library director and library board. A Library Journal article reports that the library in question is a 2009 LJ Index 5-Star winner and was runner-up for LJ's 2008 Best Small Library award.
I can’t vouch for the evaluation process used in the Best Small Library awards. But I do know that LJ Index ratings are no basis for drawing specific conclusions about the performance of the director or the library. Perhaps their performance has been exemplary; perhaps it has been unsatisfactory. It seems that LJ Index Star status would support the director’s side of this painful debate. Nevertheless, at the risk of sounding like a broken mp3 file, I have to repeat the caveat that my colleague Keith Curry Lance and I stress in our articles: Library ratings do not measure organizational effectiveness, performance excellence, service quality, or the extent to which a library meets community needs. This troubled library and board will need to look well beyond library ratings if performance is one of the issues they are grappling with.
And, since I’m preaching about how library ratings should be interpreted, I might as well share a more subtle and surprising fact with you: Ratings cannot provide much detail at all about an individual library. Sounds like a contradiction, doesn’t it? Assigning a library an exact rating score, differentiating between low- and high-scoring libraries, and then saying ratings lack specificity? Because library ratings are based on aggregating data that are quite general, they are only useful for making (let’s call these) low-resolution statements about libraries. They give the general outline and shape of things from a given perspective, not a complete, filled-in, 360-degree virtual picture. In other words, as a general rule ratings are very general rulers.
Here’s a different way to think about this (if you will forgive the unpleasant connotations of this example): Consider psychological or psychiatric assessments of criminals. These assessments do not actually measure criminals on an individual basis. That is, the results of these assessments do not predict whether a given individual will or will not be a repeat offender. Instead, the assessments assign an individual to a certain classification or profile, and then describe how criminals matching this profile typically behave. Forensic psychiatrists and psychologists are careful to declare that their assessments do not absolutely describe the individual being examined. Rather, the results are primarily statements of probability based on group profiles.
Roughly speaking, a similar but much less rigorous process applies to the happier world of library ratings. Although the three- and four-digit scores appear precise, they don’t really “zero in on” a given library. Their only purpose is to sort libraries into broad groupings. Ratings make very general statements about how libraries compare to their peers. But they do not provide details about how specific libraries under a given management team perform in their unique communities.
So, why consider ratings at all if they are such non-specific measures? Because they can fulfill an important role in promoting both libraries and library evaluation. Most importantly, they remind us of the need for libraries to be accountable to community stakeholders and funders. In this respect ratings represent a unique evaluation method that William Gormley and David Weimer examine in their 1999 book Organizational Report Cards. These researchers acknowledge that ratings and report cards are imperfect and “face all the major methodological problems encountered in designing performance monitoring systems and program evaluation” (p. 6).
Yet, these problems do not outweigh the importance of having publicly available data about libraries as a catalyst for ongoing local evaluation. By measuring one dimension of performance—service provision—the LJ Index recognizes exemplary libraries while also encouraging all libraries to identify those assessment questions worth pursuing in high-resolution and full living color.