Thoroughly Modern Museums and Libraries

I think I get it now.  I had thought the term assessment meant a systematic and appropriately rigorous measurement of a construct or phenomenon of interest, like program outcomes, community needs, service quality, and so on.  Only now have I come to understand that a self-assessment is a different animal altogether. Who would have thought that the purpose of a self-assessment is not really to assess anything?  The purpose, I now realize, is to inform and educate. All this time I have been applying research methodology standards to tools that are intended to advocate and indoctrinate. No wonder my observations have been so off-base!

When I critiqued WebJunction’s online competencies assessment questionnaire (see my April 22, 2009 entry), the WebJunction staff explained to me that the true objective for their surveys was to increase awareness of these competencies. I immediately wondered, “Well, how then will WebJunction measure awareness?”  But that is quite an irrelevant question when these questionnaires are actually teaching tools, not measurement instruments. Since the instruments don’t really have to measure anything, we don’t have to obsess about how reliable or valid they are. They can be evaluated (I guess) according to how well they apply proven methods for facilitating adult learning.

The irony of using a research instrument like a survey questionnaire this way will probably escape the majority of librarians (i.e., those who disliked their library school research methods class). But here’s the story: One of the giant problems in designing behavioral science measures is making sure the measures don’t alter the thing you’re trying to measure. Measures are supposed to be unobtrusive. You would never trust a thermometer if you found that, while measuring the temperature of water, the thermometer also happened to heat the water! The same goes for questionnaires and tests in behavioral science and education.

Worries like this are old hat nowadays. Because they are so easy and cheap to post online, the new questionnaires are designed to induce change by informing, educating, and motivating respondents. I ran across another one of these in connection with a new initiative on “21st century skills” launched last week by the Institute of Museum and Library Services (IMLS). This campaign presents a thoroughly modern take on the mission of libraries and museums. You can read the details and access the “self-assessment tool” here.

Still stuck in my 20th century research methodology paradigm, I found the IMLS questionnaire technically interesting. It is what I call a “Goldilocks instrument” since it uses a 3-point ordinal scale that amounts to a little, a medium amount, and a lot. The response options are something like this:


  1. The institution rarely practices such-and-such 21st century skills enhancement task or technique,
  2. The institution practices the task or technique fairly often, or
  3. The institution almost always practices the task or technique.

In several questions in the survey, this tripartite scale appears as “less than 25% of the time,” “25% to 75% of the time,” and “over 75% of the time.” But you get the idea: small, medium, large.

Specific questionnaire items address a series of general institutional dimensions like accountability, leadership, partnerships, and so on.  (See the self-assessment tool matrix.)  Then, in each area, the institution is rated as being in one of three developmental stages:  Early, Transitional, or 21st Century. An institution’s Goldilocks responses fall conveniently into these stages (surprise!!).  If you perform a 21st century skill enhancement task less than 25% of the time, you are in the Early (Neolithic?) stage on that one.  If you perform it more than 75% of the time, you are thoroughly modern!
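For the methodologically curious, here is a minimal sketch, in Python, of the scoring logic as I read it. IMLS publishes no actual code, so the function name and cut points below are simply my rendering of the thresholds described in the tool’s own materials:

```python
# A minimal sketch of the Goldilocks-to-stage mapping, as I read it.
# IMLS publishes no scoring code; these thresholds are just the cut
# points stated in the self-assessment tool's materials.

def goldilocks_stage(percent_of_time: float) -> str:
    """Map a 'how often do you do X' answer to a developmental stage."""
    if percent_of_time < 25:
        return "Early"            # small
    elif percent_of_time <= 75:
        return "Transitional"     # medium
    else:
        return "21st Century"     # large

print(goldilocks_stage(80))  # -> "21st Century": thoroughly modern!
```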

At the completion of the questionnaire, the self-assessment tool simply parrots back an institution’s responses in graphical form. There are “Recommendations” buttons users can click on, but the advice offered is pretty much the same, regardless of an institution’s rating: Use the results “to initiate a dialogue with your institution’s leaders, board, colleagues, and other stakeholders” so you can improve your rating. In Goldilocks measurement terms, having the most 21st century skills possible is always just right!
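To make the parroting-back concrete: the tool essentially tallies your answers by stage within each dimension and charts the counts. A toy version, again purely my own illustration with made-up numbers, might look like this:

```python
from collections import Counter

def stage(p: float) -> str:  # same cut points as the sketch above
    return "Early" if p < 25 else "Transitional" if p <= 75 else "21st Century"

# Hypothetical percent-of-time answers for one dimension, say "Partnerships"
answers = [10, 30, 80, 50, 90]
tally = Counter(stage(p) for p in answers)

for s in ("Early", "Transitional", "21st Century"):
    print(f"{s:13} {'#' * tally[s]}")
# Early         #
# Transitional  ##
# 21st Century  ##
```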

Obviously, the survey is a teaching tool, not an assessment. That’s why there is no need for the instrument to gauge how libraries and museums compare to any independently derived standards, like some “minimum recommended daily allowance” of a particular 21st century practice. This makes things much simpler for IMLS, because the very idea of library or museum standards is notoriously tricky. Several of the approaches endorsed in their model don’t apply to many institutions. (How can a small rural library or a historic police museum collaborate with community partners on its new educational programs “over 75% of the time”?) When the goals are education and persuasion, these sticky measurement issues are immaterial.

Using questionnaires for non-research purposes is an accepted practice, of course. But I just want to make the observation that this use blurs the GIANT difference between questionnaires designed to disseminate information and those meant to gather information. And I am not sure the general audiences for these questionnaires appreciate this distinction. Traditional questionnaires must be carefully constructed and painstakingly tested and validated. You might say their goal is to detect trends rather than establish them!
