Indentured Certitude

I want to share some information with you from a resource I mentioned last month. The resource is Edward Suchman’s 1967 book, Evaluative Research, and the information is this diagram, which presents a basic model of evaluation:1

[Diagram: Suchman’s basic model of evaluation]

I share the diagram because it presents two ideas that don’t always percolate to the top of discussions of library outcome assessment. The first idea is the need for programmatic values to be made explicit beforehand. Suchman, who worked in the public health field, gave this example:

Suppose we begin with the value that it is better for people to have their own teeth rather than false teeth. We may then set our goal that people shall retain their teeth as long as possible.2

Of course, it’s quite possible to hold different values. For instance, one might prefer false teeth over natural ones . . .     [Read more]

 
—————————

1  Suchman, E. A. (1967). Evaluative research: Principles and practice in public service and social action programs. New York: Russell Sage Foundation, p. 34.
2  Suchman, E. A. (1967), p. 35.

Posted in Library assessment, Outcome assessment, Program evaluation

The Path of Most Resistance

The campaign to assess public library outcomes got a tremendous boost from Library Journal’s Directors’ Summit held last month in Columbus, Ohio. It’s heartening to see library leaders getting serious about making outcome assessment integral to the management of U.S. public libraries! That excitement and determination are necessary for making progress on this front. And it sounds like the summit was designed to let folks absorb relevant ideas in ways that make them their own.

This surge of newfound energy makes now the perfect time to commit ourselves to gaining a firm grasp on the core concepts and methods of outcome assessment. Although the measurement of outcomes is a new undertaking for libraries, it has been around for a long time in other contexts. In fact, outcome evaluation approaches have been studied, debated, refined, and chronicled over the past forty-five years . . .     [Read more]

Posted in Advocacy, Outcome assessment, Reporting Evaluation/Assessment Results

Data Are Not Psychic


It’s great to see other librarians advocating for the same causes I harp on in this blog. I’m referring to Sarah Robbins, Debra Engel, and Christina Kulp of the University of Oklahoma, whose article appears in the current issue of College & Research Libraries. The article, entitled “How Unique Are Our Users?”,1 warns against the folly of using convenience samples. It implores library researchers to explain the limitations of their studies honestly. And the authors are resolute about the importance of understanding the generalizability of survey findings, a topic that also happens to be the main focus of their study.

I bring up their article for a different reason, however. It is an example of how difficult and nuanced certain aspects of research and statistics can be. Despite the best of intentions, it’s amazingly easy to get tripped up by one or another detail. Robbins and her colleagues got caught in the briar patch that is statistics and research methods. I say so because the main conclusions reached in their study are not actually borne out by their survey results . . .     [Read more]
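To see why convenience samples are so treacherous, here is a minimal simulation sketch in Python. The population size, the true usage rate, and the self-selection weighting are all invented for illustration; nothing here comes from the Robbins study.

import random

random.seed(42)

# Hypothetical population of 10,000 faculty; 30% are weekly library users.
population = [1] * 3_000 + [0] * 7_000
random.shuffle(population)

# A proper random sample of 200 respondents.
random_sample = random.sample(population, 200)

# A convenience sample: suppose weekly users are three times as likely
# to respond to a survey posted on the library's website.
weights = [3 if uses_library else 1 for uses_library in population]
convenience_sample = random.choices(population, weights=weights, k=200)

print("True proportion:        0.30")
print("Random sample estimate:", sum(random_sample) / len(random_sample))
print("Convenience estimate:  ", sum(convenience_sample) / len(convenience_sample))

Because the self-selection bias is baked into who responds, the convenience estimate lands well above the true 30 percent, and enlarging the sample only makes the wrong answer more precise. That is exactly why the limitations of such samples deserve honest reporting.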

  
—————————

1  Robbins, S., Engel, D., & Kulp, C. (2011). How unique are our users? Comparing responses regarding the information-seeking habits of engineering faculty. College & Research Libraries, 72(6), 515-532.

Posted in Measurement, Research, Statistics

Beauty Is As Beauty Does


Infographics is one of two fashionable new terms used nowadays to refer to statistical charts and graphs. The other term is visualizations, which replaces such archaic words as graphs, charts, pictures, diagrams, and illustrations. Sometimes the term is affectionately shortened to data viz by its really cool practitioners.

In the infographics/visualization/data viz movement there are two basic schools of thought. One school emphasizes principles of artistic design and the other emphasizes information clarity. The first prizes graphics that are beautiful and appealing, while the other judges visualizations based on how informative they are.1  Many adherents of the first approach to graphics are marketing and advertising professionals. Lest you presume that they subscribe to the motto ars gratia artis . . .     [Read more]

 
—————————

1  Of course, it is possible for graphics to be simultaneously beautiful and informative. Well-designed graphics can be elegant in their clarity and visual appeal. See Edward Tufte’s book Beautiful Evidence.
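
For readers curious what the clarity-first school preaches in practice, here is a small Python sketch using matplotlib. The library systems and circulation figures are invented placeholders; the point is the removal of non-data ink, in the spirit of Tufte.

import matplotlib.pyplot as plt

# Hypothetical annual circulation per capita for five library systems.
systems = ["System A", "System B", "System C", "System D", "System E"]
circ_per_capita = [11.2, 9.8, 8.5, 7.1, 5.4]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(systems, circ_per_capita, color="gray")

# Strip non-data ink: no frame, no tick marks, direct labels instead of a legend.
for spine in ax.spines.values():
    spine.set_visible(False)
ax.tick_params(left=False, bottom=False)
ax.set_xticks([])
for y, value in enumerate(circ_per_capita):
    ax.text(value + 0.1, y, str(value), va="center")

ax.set_title("Circulation per capita (hypothetical data)")
plt.tight_layout()
plt.show()

Every element that survives carries information; everything decorative is gone. Whether that makes the chart beautiful is, of course, the other school’s question.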

Posted in Uncategorized

Library Science


Evaluation, assessment, and performance measurement are not what you’d call sciences. But these activities do have certain things in common with science and the scientific method.1  One is the requirement that theories be tested against objective evidence. Another is the idea of replication: carefully repeating a measurement or experiment in order to verify that the initial findings were not an accident or mistake of some sort.

Then there’s the more philosophical concept known as falsifiability. A scientific theory must be framed so that there is some way it can be examined and possibly disproved. A credible scientific theory is one that holds up under repeated attempts to prove it wrong.

In everyday terms, there is a lot of transparency and double-checking in science. I bring these ideas up because, as it happens, there is a claim made in my prior blog entry that needs to be rechecked. The claim is:

On the basis of per capita statistics, smaller U.S. public libraries outperform the largest U.S. public libraries . . .  [Read more]
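
Rechecking a claim like this is mostly a matter of recomputing the figures from the raw data. Here is a sketch of what that looks like in Python with pandas; the file name, column names, and population cutoffs are placeholders I made up, not the actual layout of the IMLS Public Libraries Survey data.

import pandas as pd

# Placeholder file with columns: name, population, circulation.
df = pd.read_csv("public_libraries.csv")

# Recompute per capita figures directly rather than trusting
# pre-calculated fields; small denominators can flatter small libraries.
df["circ_per_capita"] = df["circulation"] / df["population"]

# Compare small libraries with the largest ones (cutoffs are arbitrary here).
small = df[df["population"] < 10_000]
large = df[df["population"] >= 500_000]

print("Median circ/capita, small libraries:", small["circ_per_capita"].median())
print("Median circ/capita, large libraries:", large["circ_per_capita"].median())

Using medians instead of means keeps a handful of extreme outliers from driving the comparison, which is one common way an initial finding turns out to be an artifact.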

—————————

1  Some of the foundational ideas in evaluation, assessment, and especially performance measurement have also been borrowed from the field of financial auditing. See Beryl Radin’s 2006 book, Challenging the Performance Movement: Accountability, Complexity, and Democratic Values, and Michael Power’s 1997 book, The Audit Society: Rituals of Verification.

Posted in Measurement, Reporting Evaluation/Assessment Results, Research

How Do You Know That?

I borrowed the title for this entry from a 2009 study of student research practices by Randall McClure and Kellian Clink. Their study is cited in an article in the current issue of College & Research Libraries that Joe Matthews brought to my attention. That article, “Students Use More Books after Library Instruction,” is by Rachel Cooke and Danielle Rosenthal. Both articles explore the research sources and citations that undergraduate students use in writing assignments. Though it’s the second article I want to discuss, McClure and Clink’s well-chosen title is too good to pass up. In fact, I’m thinking of making it the motto of this blog!

Anyway, in their article Cooke and Rosenthal report that university English composition students “used more books, more types of sources, and more overall sources when a librarian provided instruction.”1  Their statement contains two separate claims . . .    [Read more]
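It helps to keep the two claims apart in code as well as in prose. The sketch below (Python with scipy; the citation counts are invented for illustration, not Cooke and Rosenthal’s data) separates the descriptive claim, that the sampled papers differ, from the inferential claim, that the difference would generalize beyond these papers.

from scipy import stats

# Hypothetical book citations per paper, with and without librarian instruction.
with_instruction = [4, 2, 5, 3, 6, 2, 4, 5, 3, 4]
without_instruction = [3, 1, 4, 2, 5, 2, 3, 4, 2, 3]

# Claim 1 (descriptive): the sample means differ.
mean_with = sum(with_instruction) / len(with_instruction)
mean_without = sum(without_instruction) / len(without_instruction)
print(f"Sample means: {mean_with:.1f} vs. {mean_without:.1f}")

# Claim 2 (inferential): the difference is unlikely to be chance alone.
t_stat, p_value = stats.ttest_ind(with_instruction, without_instruction)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

And even a convincingly small p-value only rules out chance; it cannot tell us whether instruction, rather than some other difference between the groups of papers, produced the gap.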

 
—————————

1  Cooke, R., & Rosenthal, D. (2011). Students use more books after library instruction: An analysis of undergraduate paper citations. College & Research Libraries, 72(4), 332.

Posted in Measurement, Research, Statistics

Beware of Vengeful Prayer

I recently ran across a series of studies suggesting that prayer tends to lessen anger and aggression. Researchers concluded that prayer helps people adopt a more positive view of adverse or irritating circumstances. There also happens to be a sideline to their findings that illustrates something you don’t hear much about from proponents of outcomes assessment in libraries. It involves this statement by the researchers:

These results would only apply to the typical benevolent prayers that are advocated by most religions… Vengeful or hateful prayers, rather than changing how people view a negative situation, may actually fuel anger and aggression.

Though the aims of the prayer studies differ from those of library outcomes studies, the two research approaches are similar in this respect: When studying effects of a program, treatment, or intervention, if we’re not sure about the exact content of that program, treatment, or intervention, then we have a problem. In the field of program evaluation this problem falls under the rubric of program fidelity . . .   [Read more]
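
A bare-bones way to make fidelity measurable is a component checklist scored for each delivery of the program. Here is a minimal Python sketch; the component names and session records are hypothetical, not drawn from any actual study.

# Components the program, as designed, is supposed to deliver.
INTENDED = {"orientation", "database_demo", "hands_on_search", "citation_review"}

# What observers recorded in each delivered session (hypothetical).
sessions = [
    {"orientation", "database_demo", "hands_on_search", "citation_review"},
    {"orientation", "database_demo"},
    {"orientation", "database_demo", "hands_on_search"},
]

for i, delivered in enumerate(sessions, start=1):
    fidelity = len(delivered & INTENDED) / len(INTENDED)
    print(f"Session {i}: fidelity = {fidelity:.0%}")

# Outcomes measured after low-fidelity sessions tell us little about the
# program as designed; they reflect some other intervention entirely.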

Posted in Accountability, Outcome assessment, Program implementation