I borrowed the title for this entry from a 2009 study of student research practices by Randall McClure and Kellian Clink. Their study is cited in an article in the current issue of College & Research Libraries that Joe Matthews brought to my attention. This article is Students Use More Books After Library Instruction by Rachel Cooke and Danielle Rosenthal. Both articles explore the research sources and citations that undergraduate students use in writing assignments. Though it’s the second article I want to discuss, McClure and Clink’s well-chosen title is too good to pass up. In fact, I’m thinking of making it the motto of this blog!
Anyway, in their article Cooke and Rosenthal report that university English composition students “used more books, more types of sources, and more overall sources when a librarian provided instruction.”1 Their statement contains two separate claims . . . [Read more]
1 Cooke, R. and Rosenthal, D., 2011, Students Use More Books after Library Instruction: An Analysis of Undergraduate Paper Citations, College & Research Libraries, 72:4, p. 332.
I recently ran across a series of studies suggesting that prayer tends to lessen anger and aggression. Researchers concluded that prayer helps people adopt a more positive view of adverse or irritating circumstances. There also happens to be a sideline to their findings that illustrates something you don’t hear much about from proponents of outcomes assessment in libraries. It involves this statement by the researchers:
These results would only apply to the typical benevolent prayers that are advocated by most religions… Vengeful or hateful prayers, rather than changing how people view a negative situation, may actually fuel anger and aggression.
Though the aims of the prayer studies differ from those of library outcomes studies, the two research approaches are similar in this respect: When studying effects of a program, treatment, or intervention, if we’re not sure about the exact content of that program, treatment, or intervention, then we have a problem. In the field of program evaluation this problem falls under the rubric of program fidelity . . . [Read more]
A recent article in AL Direct entitled The Smartest Readers presents some simple library rankings based on that stalwart library measure, circulation per capita. Rankings like these are, at least to me, a reminder of a perennial conundrum concerning the meaning of per capita library measures. For more than a century librarianship has puzzled over how to evaluate these statistics. Do per capita data tell us whether or not libraries are doing a good job? What amounts of materials made available or levels of services delivered are sufficient for libraries with specific missions and serving communities of a particular size and makeup?
Mainly, libraries have to rely on their own ingenuity to interpret per capita or per constituent data (like per student, faculty, employee, subscriber, stakeholder, and such). About the only official guidance they have gotten over the decades is advice about comparing (benchmarking) their data with appropriate peer libraries. Lacking some more objective gauge of statistical performance, libraries end up applying what might be called the more-is-better rule . . . [Read more]
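To make the more-is-better rule concrete, here is a minimal sketch in Python. The library names and figures are entirely hypothetical (mine, not from any survey): per-capita division normalizes raw counts so libraries of different sizes can be ranked, but the ranking itself says nothing about what level of service is actually sufficient for a given community.

```python
# Hypothetical libraries and figures, for illustration only.
libraries = {
    "Library A": {"circulation": 250_000, "population": 25_000},
    "Library B": {"circulation": 400_000, "population": 80_000},
}

# Circulation per capita: raw circulation divided by population served.
per_capita = {name: d["circulation"] / d["population"]
              for name, d in libraries.items()}

# The "more-is-better" rule reduces evaluation to a sort:
# highest per-capita circulation ranks first.
ranking = sorted(per_capita, key=per_capita.get, reverse=True)
```

Library A ranks first at 10 circulations per capita versus Library B’s 5, even though B circulates more items in absolute terms. Whether 10 (or 5) is "enough" for either community is exactly the question the ranking leaves unanswered.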
The field of program evaluation has grappled with the political context of institutional performance measurement for decades. For libraries and universities, though, the politics of accountability is newer terrain. In some instances these organizations have unwittingly enrolled in a crash course on the subject, learning in real time how volatile the process can be.
A prime example is the recent controversy about faculty productivity within the University of Texas System (UT). At the request of its Board of Regents, UT released an 821-page spreadsheet disclosing detailed records on faculty compensation, course enrollment, class sections taught, research time allocations, and other related data. Each page of the document contains this curious disclaimer in red:
The data in its current draft form is incomplete and has not yet been fully verified or cross referenced. In its present raw form it cannot yield accurate analysis, interpretations or conclusions.
Essentially, they’re saying, “Well, here are our data, but they can’t be trusted . . .”
This week Chase Bank sent an email to its customers saying that one of its vendors’ computer systems was hacked. The bank stated that they:
…are confident that the information that was retrieved [i.e., stolen] included some Chase customer e-mail addresses, but did not include any customer account or financial information. Based on everything we know, your accounts and financial information remain secure.
Confidence based on whatever they happen to know, eh? Because Chase could easily be mistaken, customers would be foolish to put their full trust in the bank’s assurances. I definitely plan to keep an eye on my Chase account for the next several months.
This same caution also applies to the most recent OCLC membership report, Perceptions of Libraries, 2010: Context and Community. The report’s energetic graphics and narrative give the information an air of credibility. But, as my prior posts1 explain, surveys are always incomplete and imperfect. Findings from a single survey like OCLC’s are just not weighty enough to deserve our unconditional trust . . . [Read more]
1 See Discussing Accuracy, Checking It Twice, Stranger Than Fiction, and Objects In Mirror Are Closer Than They Appear.
In the book John Adams, author David McCullough writes about Adams’ legal defense of British soldiers on trial for murder in 1770. In his argument to the Massachusetts jury Adams said:
Facts are stubborn things. And whatever our wishes, our inclinations, or the dictates of our passions, they cannot alter the state of facts and evidence.1
Indisputable facts are difficult to ignore, indeed. Yet, facts are not always clear and unambiguous. Getting to the plain facts and drawing valid conclusions from them can be stubborn matters in their own right. To quote science teacher and YouTube lecturer, wonderingmind42, “Interpreting evidence well requires skill, training, and experience. . .”2 [Read more]
1 In McCullough, D., 2001, John Adams, Simon & Schuster, p. 68. Emphasis added.
2 Quote appears in the video at the 5:18 time mark. Also watch the segment from 2:40 to 4:20 about facts versus the interpretation of facts.
A new OCLC membership report, Perceptions of Libraries, 2010: Context and Community, is hot off the…er…PDF-Maker! The report is formatted more like a magazine than a study, with key findings summarized in a myriad of graphical illustrations. So, I must confess that I have rather neglected the narrative so far. But from browsing mostly through the pictures, I have come up with a few suggestions that might enhance the report’s message, quantitatively speaking.
First, it would be better if the OCLC market researchers avoided citing large percentages, like the 1,544% growth in e-Book sales (p. 11) and 1,050% growth in smart phone ownership (p. 15). As Derrick Niederman and David Boyum explain in their book, percentages like these tend to be overstatements due to the baseline figures used. And the percentages just aren’t that informative . . . [Read more]
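Niederman and Boyum’s point about baselines can be illustrated with a quick calculation. These figures are hypothetical (mine, not OCLC’s): the same absolute increase yields a spectacular percentage against a small baseline and a trivial one against a large baseline.

```python
def pct_growth(old, new):
    """Percentage growth from an old value to a new value."""
    return (new - old) / old * 100.0

# Same absolute gain of 1,544 units, two different baselines
# (hypothetical numbers, chosen only to echo the 1,544% figure):
small_base = pct_growth(100, 1_644)        # growth from 100 to 1,644
large_base = pct_growth(100_000, 101_544)  # growth from 100,000 to 101,544
```

Here `small_base` comes out to 1,544% while `large_base` is about 1.5%, even though the underlying absolute change is identical. That’s why a four-digit growth percentage, by itself, tells readers very little.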