The Path of Most Resistance

The campaign to assess public library outcomes got a tremendous boost from Library Journal’s Directors’ Summit held last month in Columbus, Ohio. It’s heartening to see library leaders getting serious about making outcome assessment integral to the management of U.S. public libraries! The excitement and determination are necessary for making progress on this front. And it sounds like the summit was designed to let folks absorb relevant ideas in ways that make them their own.

This newfound energy makes now the perfect time to commit ourselves to gaining a firm grasp on the core concepts and methods of outcome assessment. Although measurement of outcomes is a new undertaking for libraries, it has been around for a long time in other contexts. In fact, outcome evaluation approaches have been studied, debated, refined, and chronicled over the past forty-five years by theorists and practitioners in the field of program evaluation.1

For example, the Library Journal article mentions logic models, a framework (a structured exercise, actually) that organizations use to spell out a rationale for how program activities will, theoretically, produce short-term, intermediate, and long-term outcomes. The term surfaced in the mid-1990s, when it began to be applied to a framework developed in the 1960s and 1970s by Edward Suchman2 and Carol Weiss,3 and enhanced by Joseph Wholey4 and Peter Rossi and Howard Freeman in the 1980s.5
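
To make the idea concrete, here is a minimal sketch of the chain of reasoning a logic model lays out, using a made-up summer reading program. The program, activities, and outcomes below are invented for illustration only; they come from no actual model or summit materials.

    # A bare-bones logic model for a hypothetical summer reading program.
    # Real models also document assumptions and the external factors that
    # could help or hinder each link in the chain.
    logic_model = {
        "inputs":       ["staff time", "book budget", "meeting space"],
        "activities":   ["weekly story hours", "reading-log challenge"],
        "outputs":      ["sessions held", "children enrolled", "books logged"],
        "short_term":   ["children report more interest in reading"],
        "intermediate": ["children read more often over the summer"],
        "long_term":    ["reading skills carry over into the school year"],
    }

    # The model reads top to bottom: if the inputs support the activities,
    # the activities should yield the outputs, which in theory lead to the
    # short-term, intermediate, and long-term outcomes.
    for stage, examples in logic_model.items():
        print(f"{stage:>12}: {', '.join(examples)}")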

Incidentally, our profession can boast of having independently developed the same essential framework in Richard Orr’s groundbreaking 1973 article (cited in my April 2009 post). In his comprehensive book on library evaluation, Joe Matthews uses Orr’s framework as a springboard from which he progresses to descriptions of evaluation and measurement topics, including library outcomes.6 And there are other resources on library outcome evaluation in our own literature, like the book by Peter Hernon and Robert Dugan.7

The Wheel Doesn’t Need Re-Inventing

Following the example set by the LJ Directors’ Summit, we also need to venture beyond our own profession and learn from other fields. There is a wealth of program evaluation and performance measurement knowledge that we can take advantage of. For instance, information about logic models8 can be found in a definitive guide made available by the W. K. Kellogg Foundation,9 in the program evaluation textbook by McDavid and Hawthorn,10 and in the authoritative handbook edited by Wholey, Hatry, and Newcomer.11  To get a sense of the range of issues involved in outcome assessment and evaluation in general, take a look at the tables of contents in the McDavid and Hawthorn book and in the latest edition of the leading evaluation textbook by Rossi, Lipsey, and Freeman.12

With the passage of the Government Performance and Results Act of 1993, performance measurement and program evaluation have intensified at the federal level. An example is this model developed by the Centers for Disease Control and Prevention. Outcomes have also gotten more attention in state and local governments, for instance in the second edition of Hatry’s definitive guide to performance measurement.13 And the Urban Institute recently developed a compendium of outcome indicators for nonprofit organizations.

Tapping knowledge from the fields of program evaluation and performance measurement will help us master evaluation concepts and methods, which, in turn, will prepare us to confront roadblocks and challenges such as those the summit attendees foresaw. I don’t know what specific roadblocks and challenges they identified, but I suspect two likely candidates: (1) producing timely and relevant evaluation results and (2) integrating evaluation results into management decision-making. These two problems have been perennial themes in the field of program evaluation.14

A Terminology Tip

As newcomers to outcome assessment we would be wise to learn the relevant terminology in order to assimilate the basic ideas, rather than giving in to the temptation to just parrot the new jargon. Good examples are the terms outcomes and impacts. Though usage varies, the terms have these traditional meanings: Outcomes are changes that can be confirmed to have occurred in a target population or situation that programs were designed to change. Impacts are outcomes that can be shown to have been produced, that is, caused, by the programs, services, or interventions that were applied. (Impacts are also referred to as program effects.)

Take, for example, a highway construction effort intended to decrease rush hour congestion in a given city. Say a project is approved that will double the lanes of a main highway. Somehow the project team measures and compares before- and after-project travel times, along with traffic jam frequency and duration. When the project is completed, they announce that their measurements show traffic congestion decreased by 30%, citing this percentage as the impact of the project. However, because the construction took two years to complete, a period that happened to include the onset of the Great Recession, traffic volumes were decreasing anyway. Plus, more commuters began working at home than before and ride-sharing increased, both due to gasoline prices. This means the total measured outcome is not fully attributable to the highway expansion. Therefore, the 30% impact claim is too high (by how much, we can’t be sure). The true impact is the portion of improved traffic flow not attributable to other causes like those I listed. (A thorough account of impact evaluation methods can be found in Lawrence Mohr’s classic book.15 Incidentally, since the term impact is synonymous with program effect, impact evaluation is also called program effectiveness evaluation.)
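
To see why the 30% claim overstates the project’s impact, here is a back-of-the-envelope sketch with invented numbers. Nothing below comes from a real traffic study; it only illustrates the arithmetic of separating a program effect from changes that would have happened anyway.

    # Hypothetical average rush-hour delay, in minutes per commute.
    # All figures are invented for illustration.
    delay_before = 40.0        # before the highway expansion
    delay_after = 28.0         # after the expansion (30% lower than before)

    # The naive "impact" claim simply compares before and after and credits
    # the whole change to the project.
    naive_change = (delay_before - delay_after) / delay_before
    print(f"Measured change: {naive_change:.0%}")    # 30%

    # Suppose recession-driven traffic declines, working at home, and
    # ride-sharing would have cut the delay to 34 minutes even without the
    # new lanes. That estimate is the counterfactual: what would have
    # happened anyway.
    delay_counterfactual = 34.0

    # The impact (program effect) is the improvement beyond the counterfactual,
    # expressed here as a share of the original baseline so it can be compared
    # with the 30% claim.
    impact = (delay_counterfactual - delay_after) / delay_before
    print(f"Estimated impact of the expansion itself: {impact:.0%}")    # 15%

The hard part in practice, of course, is estimating that counterfactual, which is exactly what the impact evaluation methods Mohr describes are designed to address.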

Again, definitions of outcomes and impacts are not carved in stone. Sometimes the terms are used interchangeably. Joe Matthews’s book defines outcomes, impacts, and effects as essentially the same, all referring to results demonstrated to have been directly produced by library services. In academic library assessment, causes and effects tend to be downplayed, as they are for the most part by Hernon and Dugan.16 In their book, the term outcomes means any and all relevant results, regardless of what combination of factors may have produced or inhibited them.

Another terminology puzzle is the difference between evaluation and assessment, if indeed there is any difference! Some other time I may delve into this question.

Pre-Ordained Conclusions Are Not Data

Right now I want to offer two caveats that I hope will contribute to the success of the recent campaign for outcome evaluation. First, outcome evaluation is quite a sophisticated form of evaluation. For public libraries inexperienced with evaluation and assessment, attempting an outcome study as a first project is extremely ambitious. Libraries will do best if they approach this process in deliberately small and incremental steps. (Another topic to elaborate on at a later date.)

Second, the purpose of outcome evaluation is not to “share success stories” as the LJ article suggests. The purpose is to look impartially at successes and failures, and anything in between. Learning that programs, or portions of programs, have been ineffective or worked sub-optimally is itself a success! This helps organizations adjust program designs or replace them with something better. By the same token, reporting only wonderful program successes is a disservice to the community. (Just think about how people view spin in the political arena.)

Public libraries should not pursue outcome assessment merely to communicate glowing reports to their stakeholders. Being a data-driven organization does not mean collecting all the data you can for the purpose of reaching pre-ordained conclusions. It means the exact opposite, namely, that until you’ve measured and studied a situation systematically, your knowledge of it is mostly speculation and guesses.

It is important that the profession be as methodical as possible as we venture down this new outcomes path. Or I should say up, as it is surely an incline with plenty of resistance for our professional leg muscles. Fortunately, the directors’ summit shows that hilly terrain can be invigorating!

—————————
1  Works from the literature of program evaluation and evaluation research are rarely cited in library assessment and evaluation literature, suggesting that our profession is unaware of the literature from this other field. The only exception I’ve encountered is Powell, R. R. 2006. Evaluation Research: An Overview, Library Trends, 51:1, 102-120.
2  Suchman, E. A. 1967. Evaluative Research: Principles and Practice in Public Service and Social Action Programs, New York: Russell Sage.
3  Weiss, C. 1972. Evaluation Research: Methods for Assessing Program Effectiveness, Englewood Cliffs, NJ: Prentice-Hall.
4  Wholey, J. S. 1983. Evaluation and Effective Public Management, Boston: Little-Brown.
5  Rossi, P. H. and Freeman, H. E. 1987. Evaluation: A Systematic Approach, 3rd ed., Beverly Hills, CA: Sage Publications.
6   Matthews, J. R. 2007. The Evaluation and Measurement of Library Services, Westport, CT: Libraries Unlimited.
7  Hernon, P. and Dugan, R. E. 2002. Outcomes Assessment in Your Library, Chicago: American Library Association.
8  I wish the field of program evaluation had chosen a less esoteric-sounding label. The underlying concepts are not particularly complex. Incidentally, a concept nearly identical to logic models resurfaced in the mid-1990s in the business field in the balanced scorecard movement. That movement labeled the concept strategy maps.
9  W. K. Kellogg Foundation. 2004. W. K. Kellogg Foundation Logic Model Development Guide. The foundation also provides an excellent primer on evaluation, The W. K. Kellogg Foundation Evaluation Handbook.
10  McDavid, J. C. and Hawthorn, L. R. 2006. Program Evaluation & Performance Measurement: An Introduction to Practice, Thousand Oaks, CA: Sage Publications.
11  Wholey, J. S., Hatry, H. P., and Newcomer, K. E. 1994. Handbook of Practical Program Evaluation, San Francisco: Jossey-Bass. These editors are legends in the field of program evaluation.
12  Rossi, P. H., Lipsey, M. W., and Freeman, H. E. 2007. Evaluation: A Systematic Approach, 7th ed., Thousand Oaks, CA: Sage Publications. These authors are legends in the field of program evaluation.
13  Hatry, H. 2006. Performance Measurement: Getting Results, 2nd ed., Washington, DC: Urban Institute Press.
14  See Rutman, L. 1980. Planning Useful Evaluations: Evaluability Assessment, Beverly Hills, CA: Sage Publications; Smith, M. F. 1989. Evaluability Assessment: A Practical Approach, Boston: Kluwer Academic; Patton, M. Q. 1978. Utilization-Focused Evaluation, Beverly Hills, CA: Sage Publications; Patton, M. Q. 2012. Essentials of Utilization-Focused Evaluation, Thousand Oaks, CA: Sage Publications.
15  Mohr, L. B. 1995. Impact Analysis for Program Evaluation, Thousand Oaks, CA: Sage Publications.
16  Hernon, P. and Dugan, R. E. 2002.
