In recent years, the Cuyahoga County Public Library in suburban Cleveland, Ohio, embarked on an ambitious building program that ended up alienating some community members. In a public forum last year, one citizen asked how the building campaign could be justified when the library’s own “statistically valid” survey indicated that 90% of patrons were satisfied with the library facilities as they were.1 The library director began her response by saying that “some of it is the way the [survey] questions were asked.” She then explained that the library board’s decision rested on a range of other relevant information beyond patron satisfaction percentages, including systematically gathered community input on building designs.
I cannot say whether the library’s decisions were in the best interests of the local community. I can, however, comment on the data aspect of this story. So let me restate that part of the director’s argument more forcefully:
A statistically valid survey finding applied to the wrong research question is logically invalid. The community’s level of satisfaction with current library facilities is not a reliable indicator of its feelings about the sufficiency of those facilities over the longer term. Nor does it indicate whether the community believes it is better to incur large costs maintaining older facilities or to invest in new ones that allow the library to adapt to changing community needs. In other words, satisfaction with current facilities is the wrong question.
Contrary to popular belief, on their own data don’t necessarily mean anything. Their meaning comes from how the data are interpreted and what questions they are used to address. Interpreting data with respect to pressing questions is the crux of data analysis. This is why Johns Hopkins biostatistician Jeff Leek begins his new book, Elements of Data Analytic Style, with a chapter about how the type of research question predetermines what analysis is needed.
1 The citizen with the question used the phrase “statistically valid.”