Assessment’s Top Models

I recently attended a library webinar where the question of the difference between outputs and outcomes came up. The main idea was that outputs are the programs and services an organization delivers, whereas outcomes are changes that occur in recipients, or in their life situations, as a result of having received program services. Another idea was that outputs are distinguished by their more specific focus, compared with outcomes, which are more general in scope. When I heard this second idea, it seemed correct in one way but incorrect in another. Mulling this over later, I began to wonder whether the first idea is not quite right, either.

To explain these new definitional doubts I’m having, I’ll need to review a couple of evaluation models with you. But first I’d like to clear something up. Just because some expert somewhere has drawn a diagram with rectangles and arrows and concise labels and called it a “model” doesn’t mean his or her creation is true, or even remotely so. Models are only true if . . . [Read more]

Posted in Outcome assessment, Process evaluation, Program evaluation, Program implementation

Fun With Numbers

After so much stuff about evaluation theory and practice in this blog, it’s time for some fun! And what better fun is there than fun with numbers?1

Let’s begin our diversion with the graph from my prior post, shown here. Looking closely, notice how some of the gold circles lie in neat, parallel bands.

[Figure: scatter plot of individual library data. Data source: IMLS 2009 Public Libraries Datafiles.]

These bands are more obvious in the next two charts, which ‘zoom in’ on the data by decreasing the vertical axes’ value ranges. When I first saw this pattern, I suspected that something had corrupted the data. Double-checking, I found the data were fine, or at least they were true to the values in the original IMLS datafile. So, I decided to resort to that popular and trusty problem-solving technique . . .    [Read more]
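The post breaks off here, but banding like this has one classic mechanism worth sketching: when an integer-valued count is divided by a continuous denominator such as service population, every count k lies on its own curve y = k/x. The simulation below is my own illustration of that effect, offered as a hedged guess rather than the post's eventual explanation; all of the data in it are invented.

```python
# Hypothetical illustration: banding that appears when an integer-valued
# count is divided by a continuous denominator (e.g., service population).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

population = rng.uniform(1_000, 20_000, size=2_000)  # simulated service populations
counts = rng.integers(1, 8, size=2_000)              # small integer counts
per_capita = counts / population                     # the plotted per-capita measure

plt.scatter(population, per_capita, s=8, alpha=0.5, color="goldenrod")
plt.xlabel("Service population (simulated)")
plt.ylabel("Count per capita (simulated)")
plt.title("Each integer count k traces its own band: y = k / population")
plt.show()
```

Shrinking the vertical axis range, as the post describes, pulls the separate k-bands apart visually and makes the pattern hard to miss.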

 
—————————
1  No, this is not an April Fool’s joke. I propose this fun in all seriousness!

Posted in Data visualization, Measurement, Statistics

Indentured Certitude

I want to share some information with you from a resource I mentioned last month. The resource is Edward Suchman’s 1967 book, Evaluative Research, and the information is this diagram, which presents a basic model of evaluation:1

[Diagram: Suchman’s basic model of evaluative research, reproduced from p. 34.]
I share the diagram because it presents two ideas that don’t always percolate to the top of discussions of library outcome assessment. The first idea is the need for programmatic values to be made explicit beforehand. Suchman, who worked in the public health field, gave this example:

Suppose we begin with the value that it is better for people to have their own teeth rather than false teeth. We may then set our goal that people shall retain their teeth as long as possible.2

Of course, it’s quite possible to hold different values. For instance, one might prefer false teeth over natural ones . . .     [Read more]

 
—————————

1  Suchman, E. A. (1967). Evaluative research: Principles and practice in public service and social action programs. New York: Russell Sage, p. 34.
2  Suchman, E. A., p. 35.

Posted in Library assessment, Outcome assessment, Program evaluation

The Path of Most Resistance

The campaign to assess public library outcomes got a tremendous boost from Library Journal’s Director Summit held last month in Columbus, Ohio. It’s heartening to see library leaders getting serious about making outcome assessment integral to the management of U.S. public libraries! The excitement and determination are necessary for making progress on this front. And it sounds like the summit was designed to let folks absorb relevant ideas in ways that make them their own.

The onset of this newfound energy is the perfect time to commit ourselves to gaining a firm grasp on the core concepts and methods of outcome assessment. Although measurement of outcomes is a new undertaking for libraries, it has been around for a long time in other contexts. In fact, outcome evaluation approaches have been studied, debated, refined, and chronicled over the past forty-five years . . .     [Read more]

Posted in Advocacy, Outcome assessment, Reporting Evaluation/Assessment Results

Data Are Not Psychic


It’s great to see other librarians advocating for the same causes I harp on in this blog. I’m referring to Sarah Robbins, Debra Engel, and Christina Kulp of the University of Oklahoma, whose article appears in the current issue of College & Research Libraries. The article, titled “How Unique Are Our Users?”1 warns against the folly of using convenience samples. It implores library researchers to honestly explain the limitations of their studies. And the authors are resolute about the importance of understanding the generalizability of survey findings, a topic that also happens to be the main focus of their study.

I bring up their article for a different reason, however. It is an example of how difficult and nuanced certain aspects of research and statistics can be. Despite the best of intentions, it’s amazingly easy to get tripped up by one or another detail. Robbins and her colleagues got caught in the briar patch that is statistics and research methods. I say so because the main conclusions reached in their study are not actually borne out by their survey results . . .     [Read more]
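To make the convenience-sample hazard concrete, here is a small simulation of my own (not from the article or the post): a population in which willingness to respond correlates with the very behavior being measured, so a convenience sample overstates the true rate while a comparable random sample does not. All numbers are invented.

```python
# Hypothetical sketch: how a convenience sample can bias a survey estimate.
import numpy as np

rng = np.random.default_rng(0)

# Simulated population of 10,000 faculty; "uses_library" depends on an
# engagement score, so more-engaged faculty are also heavier library users.
engagement = rng.normal(0, 1, 10_000)
uses_library = rng.random(10_000) < 1 / (1 + np.exp(-engagement))

# Random sample: every member of the population is equally likely to be drawn.
random_idx = rng.choice(10_000, size=500, replace=False)

# Convenience sample: response probability rises with engagement,
# so the respondents over-represent heavy library users.
response_prob = 1 / (1 + np.exp(-2 * engagement))
convenience_idx = np.nonzero(rng.random(10_000) < response_prob)[0][:500]

print(f"True rate:              {uses_library.mean():.2%}")
print(f"Random-sample estimate: {uses_library[random_idx].mean():.2%}")
print(f"Convenience estimate:   {uses_library[convenience_idx].mean():.2%}")
```

The random-sample estimate lands near the true rate, while the convenience estimate runs several points high, and no amount of increasing the sample size will fix it.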

  
—————————

1  Robbins, S., Engel, D., & Kulp, C. (2011). How unique are our users? Comparing responses regarding the information-seeking habits of engineering faculty. College & Research Libraries, 72(6), 515-532.

Posted in Measurement, Research, Statistics

Beauty Is As Beauty Does


Infographics is one of two new fashionable terms used nowadays to refer to statistical charts and graphs. The other term is visualizations, which replaces such archaic words as graphs, charts, pictures, diagrams, and illustrations. Sometimes the term is affectionately shortened to data viz by its really cool practitioners.

In the infographics/visualization/data viz movement there are two basic schools of thought. One school emphasizes principles of artistic design and the other emphasizes information clarity. The first prizes graphics that are beautiful and appealing, while the other judges visualizations based on how informative they are.1 Many adherents of the first approach to graphics are marketing and advertising professionals. Lest you presume that they subscribe to the motto ars gratia artis . . .     [Read more]
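For readers curious what the clarity school’s advice looks like in practice, here is a small sketch of my own, not drawn from the post: a bar chart with decorative elements stripped away and values labeled directly. The data and labels are invented for illustration.

```python
# Hypothetical sketch of the "information clarity" school's advice:
# strip decoration, label data directly, let the numbers carry the chart.
import matplotlib.pyplot as plt

categories = ["Visits", "Circulation", "Programs", "Reference"]  # invented data
values = [42, 35, 12, 8]

fig, ax = plt.subplots()
ax.barh(categories, values, color="0.6")

# Remove chartjunk: no frame, no tick marks, no x-axis the reader must consult.
for spine in ax.spines.values():
    spine.set_visible(False)
ax.tick_params(length=0)
ax.set_xticks([])

# Direct labeling puts each value where the eye already is.
for y, v in enumerate(values):
    ax.text(v + 0.5, y, str(v), va="center")

ax.set_title("Share of service transactions (%), illustrative data")
plt.show()
```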

 
—————————

1  Of course, it is possible for graphics to be simultaneously beautiful and informative. Well-designed graphics can be elegant in their clarity and visual appeal. See Edward Tufte’s book Beautiful Evidence.

Posted in Uncategorized

Library Science


Evaluation, assessment, and performance measurement are not what you’d call sciences. But these activities do have certain things in common with science and the scientific method.1 One is the requirement that theories be tested against objective evidence. Another is the idea of replication: carefully repeating a measurement or experiment in order to verify that the initial findings were not an accident or a mistake of some sort.

Then there’s the more philosophical concept known as falsifiability. A scientific theory must be framed so that there is some way it can be examined and possibly disproved. A credible scientific theory is one that holds up under repeated attempts to prove it wrong.

In everyday terms, there is a lot of transparency and double-checking in science. I bring these ideas up because, as it happens, there is a claim in my prior blog entry that needs to be rechecked. The claim is:

On the basis of per capita statistics, smaller U.S. public libraries out-perform the largest U.S. public libraries . . .  [Read more]
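In the spirit of replication, a recheck of that claim can start from the raw datafile. Below is a minimal sketch of how one might recompute a per-capita figure by library size with pandas. The file name and the column names (POPU_LSA, VISITS) are my assumptions here and should be verified against the actual IMLS 2009 documentation before use.

```python
# Hypothetical replication sketch: recompute visits per capita by library
# size class from the IMLS datafile. File and column names are assumptions;
# check them against the 2009 Public Libraries Survey codebook.
import pandas as pd

df = pd.read_csv("imls_2009_public_libraries.csv")  # hypothetical file name

df = df[(df["POPU_LSA"] > 0) & (df["VISITS"] >= 0)]  # drop unusable rows
df["visits_per_capita"] = df["VISITS"] / df["POPU_LSA"]

# Bin libraries by service-population size, then compare medians, which
# resist the skew that a handful of very large systems introduces.
bins = [0, 10_000, 100_000, 500_000, float("inf")]
labels = ["small", "medium", "large", "largest"]
df["size_class"] = pd.cut(df["POPU_LSA"], bins=bins, labels=labels)

print(df.groupby("size_class", observed=True)["visits_per_capita"].median())
```

Whether the medians actually favor the smaller libraries is exactly the question the recheck is meant to answer.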

—————————

1  Some of the foundational ideas in evaluation, assessment, and especially performance measurement have also been borrowed from the field of financial auditing. See Beryl Radin’s 2006 book, Challenging the Performance Movement: Accountability, Complexity, and Democratic Values, and Michael Power’s 1997 book, The Audit Society: Rituals of Verification.

Posted in Measurement, Reporting Evaluation/Assessment Results, Research