I recently attended a library webinar where the question of the difference between outputs and outcomes came up. The main idea was that outputs are the programs and services an organization delivers, whereas outcomes are changes that occur in recipients, or their life situations, as a result of having received program services. A second idea was that outputs are distinguished by their more specific focus compared with outcomes, which are more general in scope. When I heard this second idea, it seemed correct in one way but incorrect in another. Mulling this over later, I began to wonder whether the first idea is not quite right, either.
To explain these new definitional doubts I’m having, I’ll need to review a couple of evaluation models with you. But first I’d like to clear something up. Just because some expert somewhere has drawn a diagram with rectangles and arrows and concise labels and called it a “model” doesn’t mean their creation is true, or even remotely so. Models are only true if they are confirmed by empirical evidence. In many cases, models—especially in fields like management and economics—are so grand that they can never be reality-tested. The best thing to do, then, is to judge a model based on whether it helps clarify our conversations about the topic we are concerned with.
Three Models in a Pod
With this caveat in mind, let’s begin with the work of library evaluation pioneer Richard Orr, whom I’ve discussed before in this blog.1 This time I’m using Orr’s evaluation model as it appears in Joe Matthews’s definitive book on library evaluation, shown here:
Orr’s Evaluation Model.2
The diagram differs slightly from the one appearing in my Feb. 2012 post in that it contains no reference to the concepts of quality and value, and indicates the types of measures that correspond with the model stages. For example, note under the capability stage that the term process measures appears, referring to measures that tap specific program materials and procedures that an organization uses to convert inputs into outputs.3
Next, let’s look at logic models. Logic models are analytic tools developed in the late 1980s in the field of program evaluation that have only recently come to the attention of the library world (see my prior post). There’s an authoritative article on logic models in the 3rd edition of a classic handbook on program evaluation. The article, written by John McLaughlin and Gretchen Jordan, presents a basic logic model seen in the next diagram. I include the complete diagram here but have grayed out the bottom to draw your attention to the top portion.4
McLaughlin & Jordan’s basic logic model.5
You can see from the diagram that logic models are structured pretty much the same as Orr’s model. Both approaches use the same initial steps, except that logic models apply the term activities instead of capabilities, and resources rather than inputs.
If you view the diagram (and Orr’s diagram, too) as a kind of timeline, things on the left occur before things on the right. The logic model separates outcomes into short-term, intermediate, and long-term, which obviously reflect shorter versus longer timelines. I suspect the second idea from the webinar mentioned above, the one about the specificity of outputs versus the generality of outcomes, was referring to the sequential aspects of models like these. Earlier stages are typically more concrete than later stages. But, as you can see, earlier outcomes are more specific than later ones, too. So, specificity alone doesn’t necessarily differentiate outputs from outcomes.
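The left-to-right ordering of these models can be sketched as a simple ordered list. Here is a minimal sketch in Python: the stage names follow McLaughlin and Jordan’s basic logic model as described above, but the data structure and the helper function are purely illustrative, not part of either model.

```python
# A hypothetical sketch of the left-to-right timeline shared by Orr's
# model and the basic logic model. Stage names follow the logic-model
# terminology; the descriptions and code structure are illustrative only.

LOGIC_MODEL = [
    ("resources",             "what goes into the program"),
    ("activities",            "what the program does"),
    ("outputs",               "what the program delivers"),
    ("short-term outcomes",   "earliest, most specific changes in recipients"),
    ("intermediate outcomes", "later changes"),
    ("long-term outcomes",    "most general, most distant changes"),
]

def comes_before(stage_a, stage_b, model=LOGIC_MODEL):
    """True if stage_a precedes stage_b on the left-to-right timeline."""
    names = [name for name, _ in model]
    return names.index(stage_a) < names.index(stage_b)

print(comes_before("outputs", "short-term outcomes"))  # True
```

Note that the timeline encodes only sequence: nothing in the ordering itself tells you where outputs end and outcomes begin, which is exactly the ambiguity discussed below.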
An example will help demonstrate how the left-to-right sequencing in these models represents program service delivery and effectiveness. The diagram below, from an outcomes guidebook from the Urban Institute, describes a school-sponsored parenting program:
Urban Institute model.6
Though the Urban Institute authors call the diagram an outcome-sequence chart, it’s essentially a logic model. (Or could it be that logic models are really outcome-sequence charts? Personally, I prefer the latter term.)
If you want to talk about generalities, take a look at the contents of box (7) in the diagram. We could imagine a lot of stuff happening between box (6)—children not dropping out of school—and box (7)—long-term economic well-being. Things like classroom learning, high school graduation, college enrollment, college graduation, qualifying for employment, employment advancement, and so forth.
Also, note that the line just below the diagram title is a continuum of a sort. There are no clear lines dividing the stages, although the Urban Institute authors did put a minor separation between intermediate outcomes and end outcomes. Do you agree with their designating box (3)—parents complete program—as an intermediate outcome? I’d expect in traditional school statistical record-keeping that program attendance would be classified as an output.
Caution: Ambiguous Intersection Ahead
The location of box (3) illustrates an important fact that you might already realize. A model is a theoretical and, in a sense, artificial mindset applied to real-life situations. The models never fit the situations perfectly. There can easily be gray areas and real-life ambiguities they don’t take into account. Box (3) is just such a gray area because it lies at what could be called the intersection of services/programs and clients/recipients. Libraries circulate materials and patrons borrow them. Libraries conduct programs and citizens attend them. Patrons present information requests and librarians deliver information. Where does the line between the library’s output activity and the behaviors of the clients/recipients fall?
An interesting question, indeed. And it can get even more involved. Changes in the status or behavior of clients—which, we might agree, qualify as outcomes—could entail clients increasing or decreasing their usage of programs or services. In the diagram above, it’s possible that a parent might decide to attend another school-sponsored program related to parenting, say one addressing child obesity. In school statistics, program attendance will be an output. But the parent’s attendance can justifiably be considered an outcome from the earlier parenting program.
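This perspective-dependence can be put in concrete terms. The sketch below is hypothetical: the function name, the perspective labels, and the event string are all invented for illustration, but it shows how one and the same event gets tallied differently depending on whose evaluation frame is applied.

```python
# Hypothetical sketch: the same event classified differently
# depending on the evaluation perspective. All names are illustrative.

def classify(event, perspective):
    """Classify an attendance event as an output or an outcome.

    From the school's record-keeping perspective, attendance at any
    program is an output, i.e., a service delivered. From the
    perspective of an earlier program whose effect the attendance
    represents (a parent returning for a related program), it is
    an outcome of that earlier program.
    """
    if perspective == "school statistics":
        return "output"
    if perspective == "earlier parenting program":
        return "outcome"
    raise ValueError(f"unknown perspective: {perspective}")

event = "parent attends child-obesity program"
print(classify(event, "school statistics"))          # output
print(classify(event, "earlier parenting program"))  # outcome
```

The point of the sketch is simply that the classification is a function of the perspective, not of the event alone.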
Sometimes the differentiation of outputs from outcomes is in the eyes of the beholders. (Or would that be stakeholders?) Here’s an example: Say a library has developed an advocacy campaign to increase community financial support. The main goal is passage of a referendum (levy) for library funding. The library outlines a project plan, including objectives such as establishing a campaign committee, getting petitions signed and filed, creating a public relations campaign, and so on. All of these objectives are essentially activities the library intends to perform. Would you agree, then, that when the library actually accomplishes these milestone objectives, their accomplishments would be classified as outputs (or as results in the language of managing-for-results approaches)?
Now let’s say the library succeeds at getting the referendum on the ballot, but the referendum is rejected by the majority of voters. What would you say the outcome(s) was (were)? The desired change in the client/recipient—the community in this case—was increased financial support effected by passage of the referendum. Unfortunately, that change failed.
But what about the minority of community residents who voted for the referendum? And the residents who may have signed petitions and joined work committees? And the community organizations and schools that collaborated with the library during the campaign? And even the fact that the community placed the issue on the ballot in the first place? Isn’t each of these occurrences a case of behavioral and/or status changes in the clients/recipients/community, rather than merely the library’s efforts and output? And aren’t these results therefore successful outcomes?
Well, practically speaking, this would be up to the local stakeholders in the campaign project evaluation to decide. As for a resolution to my definitional quandary described earlier, I propose this: Changes in clients/recipients are not the only requirement for classifying program results as outcomes. In addition, the program results must be directly relevant to the initial problem(s) that program efforts are supposed to alleviate, such as inadequate financial support for the library, or box (7) in the Urban Institute’s outcome-sequence diagram above.7
So I have now effectively ducked this question: Is it reasonable for an organization to take credit for achieving short-term outcomes regardless of its success in achieving longer-term ones? Hardly a new question, by the way. Edward Suchman raised it in his 1967 book (which I introduced in a prior post):
The extent to which immediate and intermediate goals can be divorced from ultimate goals as valid in themselves poses a difficult question. Certainly there is a tremendous amount of activity, perhaps the largest portion of all public service work, devoted to the successful attainment of immediate and intermediate goals which appear to have only very indirect bearing upon ultimate goals.8
You’ll have to read his book to see how he answered the question.
1 You can read more about Richard Orr’s evaluation model in my Feb. 22, 2012, Jan. 31, 2012, and Apr. 25, 2009 posts.
2 Matthews, J. R. 2007. The Evaluation and Measurement of Library Services, Westport, CT: Libraries Unlimited, 19.
3 In the field of program evaluation, measures of program process can include both activity and output measures, such as counts and types of services delivered, demographic characteristics of recipients, and so forth. Thus, the term process evaluation is broader than Orr’s process measures. See Rossi, P. H., Lipsey, M. W., and Freeman, H. E. 2004. Evaluation: A Systematic Approach, 7th ed., Thousand Oaks, CA: Sage Publications, 171-179.
4 The 2nd edition of the Handbook of Practical Program Evaluation includes an earlier version of the authors’ diagram. I used the most recent diagram so as not to misrepresent their current thoughts on this subject. See next footnote for the full citation.
5 McLaughlin, J. A. & Jordan, G. B. 2010. Using logic models. In Wholey, J. S., Hatry, H. P., & Newcomer, K. E., Handbook of Practical Program Evaluation, 3rd ed., San Francisco: Jossey-Bass, 57. As mentioned above, I added the gray shading to de-emphasize the lower portion of the diagram.
6 Hatry, H. P. & Lampkin, L. 2003. Key Steps in Outcome Management. Washington, DC: Urban Institute, 12.
7 There’s another type of outcomes that I’m overlooking here. These are unintended consequences that programs and services may produce.
8 Suchman, E. A. 1967. Evaluative Research: Principles and Practice in Public Service and Social Action Programs, New York: Russell Sage, 55.