Poor WebJunction Survey Design Makes Findings Pretty Much Useless

This week I noticed that WebJunction is conducting a survey titled “Technology Competencies Evaluation.” It appears to be a sequel to a survey I saw there last month about “management core competencies.” While the surveys are probably marketing research for WebJunction’s e-learning product line, the researchers say they want to use the data to “establish a baseline for the library field.” Thus, they do profess an interest in identifying larger and, we might conclude, non-commercial trends within the library profession.

Whatever their intentions, the surveys won’t produce much reliable information, due to poor designs. First, neither questionnaire actually assesses competencies, that is, knowledge or skill levels. Instead, they measure respondents’ opinions about their own knowledge and skills in a dozen or so training topics. So any baselines WebJunction comes up with will merely describe current opinions, which would later be compared to some subsequent set of opinions.

So, what will they learn? At most, they can determine whether library staff believe they are more (or less) knowledgeable over time. That type of information, while mildly interesting, seems beside the point. Wouldn’t it make more sense to measure competencies against some minimum acceptable levels, the way IT certification or professional licensure exams do? Later, perhaps, it might be useful to compare those measures over time, but that comparison would be less significant than a comparison of skills and knowledge to well-thought-out minimum standards.
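To make the distinction concrete, here is a minimal sketch of criterion-referenced scoring in Python. The skill names and cutoff scores are hypothetical, invented purely for illustration; real standards would have to come from the profession.

```python
# A minimal sketch of criterion-referenced assessment: tested scores
# are compared to fixed minimum standards, not to self-ratings.
# Skill names and cutoffs below are hypothetical.
MINIMUM_STANDARDS = {
    "troubleshooting": 70,    # assumed passing score (percent correct)
    "online_searching": 80,
    "office_software": 75,
}

def meets_standard(scores):
    """Return pass/fail for each skill against its minimum standard."""
    return {skill: scores.get(skill, 0) >= cutoff
            for skill, cutoff in MINIMUM_STANDARDS.items()}

print(meets_standard({"troubleshooting": 85,
                      "online_searching": 62,
                      "office_software": 90}))
# {'troubleshooting': True, 'online_searching': False, 'office_software': True}
```

Scores like these can still be compared over time, but against a fixed yardstick rather than against shifting self-perceptions.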

Second, information from these surveys is compromised by the sampling method the WebJunction researchers have chosen: what is called convenience sampling.

[Image: Soil sampling on Mars. NASA/JPL/Univ. Ariz.]
Rather than using a more systematic method (e.g., random sampling), they take respondents wherever and whenever it is convenient. This severely limits the usefulness of the study’s results. Because the respondents are self-selected, their responses will, in all likelihood, differ from those of the larger population of library staff the researchers are interested in. That is, the findings will be biased.

Suppose that mostly tech-savvy librarians tend to take the surveys. Then levels of self-reported competency will be artificially higher than those of librarians and library staff overall. Or perhaps the opposite is true, and respondents tend to be mostly tech-unsavvy non-librarian staff. Either way, allowing respondents to self-select introduces a troublesome and typically unknown slant, making results biased and misleading. Statisticians describe this situation by saying that “results cannot be generalized to the larger population of interest.” This research validity issue, known as external validity, is a central concern in behavioral and marketing research methods.
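A quick simulation makes the slant concrete. Everything here is invented for illustration: a fictional population of 10,000 staff with normally distributed competency scores, and a response probability that rises with competency, as it might if mainly tech-savvy staff notice and take an online survey.

```python
import random

random.seed(1)

# Hypothetical population: 10,000 staff with "true" competency
# scores drawn from a normal distribution (mean 50, sd 15).
population = [random.gauss(50, 15) for _ in range(10_000)]

def avg(xs):
    return sum(xs) / len(xs)

# Random sample: every staff member is equally likely to respond.
random_sample = random.sample(population, 500)

# Convenience sample: response probability rises with competency,
# modeling self-selection by tech-savvy staff.
convenience_sample = [x for x in population if random.random() < x / 100]

print(f"population mean:         {avg(population):5.1f}")
print(f"random-sample mean:      {avg(random_sample):5.1f}")       # near truth
print(f"convenience-sample mean: {avg(convenience_sample):5.1f}")  # inflated
```

The inflated mean is not a fluke of sample size; because the selection mechanism itself is slanted, collecting more self-selected responses just reproduces the same bias more precisely.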

Convenience sampling also hampers the baseline comparisons WebJunction talks about. Without representative samples of the larger population of interest, there is no way to know whether differences between this month’s baseline measures and those from later surveys are real or merely artifacts of who happened to respond. Perhaps the original (baseline) respondents were very tech-savvy, while the future (comparison) respondents are not at all. In that case the researchers would be comparing two non-equivalent groups’ opinions, leading to incorrect conclusions about apparent changes in the opinions of the larger population of library staff over time. It may be that overall library staff opinions have remained unchanged even though the two samples, baseline and later comparison, differ quite a bit.
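The same kind of sketch shows how two convenience samples can manufacture a trend from a population that never changed. The assumptions are again invented: an identical population at both survey waves (so the true change is exactly zero), with a different self-selection mechanism at each wave.

```python
import random

random.seed(2)

# The same hypothetical population at both survey waves, so the
# true change in competency is exactly zero.
population = [random.gauss(50, 15) for _ in range(10_000)]

def avg(xs):
    return sum(xs) / len(xs)

# Baseline wave: suppose a tech-oriented channel recruits respondents,
# over-representing savvy staff.
baseline = [x for x in population if random.random() < (x / 100) ** 2]

# Comparison wave: suppose the link circulates generally, drawing a
# less savvy self-selected group.
comparison = [x for x in population if random.random() < 1 - x / 100]

print(f"baseline mean:   {avg(baseline):5.1f}")    # runs high
print(f"comparison mean: {avg(comparison):5.1f}")  # runs low
# An apparent "decline" of roughly a dozen points, entirely an
# artifact of who chose to respond at each wave.
```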

Producing assessment data represents a big investment of time, effort, and expense. Data collection methods need to be designed to produce maximally reliable and valid information in order to justify these costs. Spending researcher and respondent time on surveys that can only produce questionable results is a poor use of library resources. Also, researchers should never portray findings from studies that use poor designs as if they were fair and balanced depictions of the subjects being studied. That would be misinformation, indeed!
