Infographics is one of two fashionable new terms for statistical charts and graphs. The other is visualizations, which replaces such archaic words as graphs, charts, pictures, diagrams, and illustrations. Sometimes the term is affectionately shortened to data viz by its really cool practitioners.
In the infographics/visualization/data viz movement there are two basic schools of thought: one emphasizes principles of artistic design, the other information clarity. The first prizes graphics that are beautiful and appealing, while the second judges visualizations by how informative they are.1 Many adherents of the first approach to graphics are marketing and advertising professionals. Lest you presume that they subscribe to the motto
ars gratia artis . . . [Read more]
1 Of course, it is possible for graphics to be simultaneously beautiful and informative. Well-designed graphics can be elegant in their clarity and visual appeal. See Edward Tufte’s book Beautiful Evidence.
I borrowed the title for this entry from a 2009 study of student research practices by Randall McClure and Kellian Clink. Their study is cited in Students Use More Books After Library Instruction, an article by Rachel Cooke and Danielle Rosenthal in the current issue of College & Research Libraries that Joe Matthews brought to my attention. Both articles explore the research sources and citations that undergraduate students use in writing assignments. Though it’s the second article I want to discuss, McClure and Clink’s well-chosen title is too good to pass up. In fact, I’m thinking of making it the motto of this blog!
Anyway, in their article Cooke and Rosenthal report that university English composition students “used more books, more types of sources, and more overall sources when a librarian provided instruction.”1 Their statement contains two separate claims . . . [Read more]
1 Cooke, R. and Rosenthal, D., 2011, Students Use More Books after Library Instruction: An Analysis of Undergraduate Paper Citations, College & Research Libraries, 72:4, p. 332.
I recently ran across a series of studies suggesting that prayer tends to lessen anger and aggression. Researchers concluded that prayer helps people adopt a more positive view of adverse or irritating circumstances. There also happens to be a sideline to their findings that illustrates something you don’t hear much about from proponents of outcomes assessment in libraries. It involves this statement by the researchers:
These results would only apply to the typical benevolent prayers that are advocated by most religions… Vengeful or hateful prayers, rather than changing how people view a negative situation, may actually fuel anger and aggression.
Though the aims of the prayer studies differ from those of library outcomes studies, the two research approaches are similar in this respect: when studying the effects of a program, treatment, or intervention, if we’re not sure about the exact content of that program, treatment, or intervention, then we have a problem. In the field of program evaluation this problem falls under the rubric of program fidelity . . . [Read more]
A recent article in AL Direct entitled The Smartest Readers presents some simple library rankings based on that stalwart library measure, circulation per capita. Rankings like these are, at least to me, a reminder of a perennial conundrum concerning the meaning of per capita library measures. For more than a century librarianship has puzzled over how to evaluate these statistics. Do per capita data tell us whether or not libraries are doing a good job? What amounts of materials made available or levels of services delivered are sufficient for libraries with specific missions and serving communities of a particular size and makeup?
Mainly, libraries have to rely on their own ingenuity to interpret per capita or per constituent data (like per student, faculty, employee, subscriber, stakeholder, and such). About the only official guidance they have gotten over the decades is advice about comparing (benchmarking) their data with appropriate peer libraries. Lacking some more objective gauge of statistical performance, libraries end up applying what might be called the more-is-better rule . . . [Read more]
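To make the arithmetic concrete, here is a minimal sketch of the circulation per capita calculation and the more-is-better rule in action. The library names and figures are hypothetical, purely for illustration:

```python
# Minimal sketch: circulation per capita and the "more-is-better" rule.
# All library names and figures below are hypothetical.

libraries = {
    "Library A": {"annual_circulation": 1_250_000, "population_served": 150_000},
    "Library B": {"annual_circulation": 480_000, "population_served": 45_000},
    "Library C": {"annual_circulation": 2_100_000, "population_served": 400_000},
}

# Circulation per capita = annual circulation / population served.
per_capita = {
    name: stats["annual_circulation"] / stats["population_served"]
    for name, stats in libraries.items()
}

# The more-is-better rule: rank peer libraries by the raw per capita
# figure alone, with no objective standard for what counts as "enough."
ranked = sorted(per_capita.items(), key=lambda item: item[1], reverse=True)
for rank, (name, value) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {value:.1f} circulations per capita")
```

Note what the ranking cannot tell us: whether Library B’s 10.7 circulations per capita is sufficient for its particular mission and community, or merely higher than its neighbors’.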
The field of program evaluation has grappled with the political context of institutional performance measurement for decades. For libraries and universities, though, the politics of accountability is newer terrain. In some instances these organizations have unwittingly enrolled in a crash course on the subject, learning in real time how volatile the process can be.
A prime example is the recent controversy about faculty productivity within the University of Texas System (UT). At the request of its Board of Regents, UT released an 821-page spreadsheet disclosing detailed records on faculty compensation, course enrollment, class sections taught, research time allocations, and other related data. Each page of the document contains this curious disclaimer in red:
The data in its current draft form is incomplete and has not yet been fully verified or cross referenced. In its present raw form it cannot yield accurate analysis, interpretations or conclusions.
Essentially, they’re saying, “Well, here are our data, but they can’t be trusted . . .”
This week Chase Bank sent an email to its customers saying that one of its vendors’ computer systems had been hacked. The bank stated that they:
…are confident that the information that was retrieved [i.e., stolen] included some Chase customer e-mail addresses, but did not include any customer account or financial information. Based on everything we know, your accounts and financial information remain secure.
Confidence based on whatever they happen to know, eh? Because Chase could easily be mistaken, customers would be foolish to put their full trust in the bank’s assurances. I definitely plan to keep an eye on my Chase account for the next several months.
This same caution also applies to the most recent OCLC membership report, Perceptions of Libraries, 2010: Context and Community. The report’s energetic graphics and narrative make the information seem indisputably true. But, as my prior posts1 explain, surveys are always incomplete and imperfect. Findings from a single survey like OCLC’s are just not weighty enough to deserve our unconditional trust . . . [Read more]
1 See Discussing Accuracy, Checking It Twice, Stranger Than Fiction, and Objects In Mirror Are Closer Than They Appear.
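To illustrate just one source of that imperfection, here is a back-of-the-envelope sketch of the sampling error attached to any single survey percentage. The sample size and percentage are hypothetical, not figures from the OCLC report:

```python
import math

# Hypothetical survey: 2,000 respondents, 70% give a particular answer.
n = 2000   # sample size (hypothetical)
p = 0.70   # observed proportion (hypothetical)

# Standard error of a sample proportion: sqrt(p * (1 - p) / n)
se = math.sqrt(p * (1 - p) / n)

# Approximate 95% confidence interval: p +/- 1.96 standard errors.
margin = 1.96 * se
print(f"95% confidence interval: {p - margin:.3f} to {p + margin:.3f}")
# -> roughly 0.680 to 0.720

# Even this spread covers only random sampling error. Nonresponse,
# question wording, and coverage gaps -- the ways surveys are
# "incomplete and imperfect" -- add uncertainty no formula captures.
```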