I recently ran across a series of studies suggesting that prayer tends to lessen anger and aggression. Researchers concluded that prayer helps people adopt a more positive view of adverse or irritating circumstances. There also happens to be a sideline to their findings that illustrates something you don’t hear much about from proponents of outcome assessment in libraries. It involves this statement by the researchers:
These results would only apply to the typical benevolent prayers that are advocated by most religions… Vengeful or hateful prayers, rather than changing how people view a negative situation, may actually fuel anger and aggression.
Though the aims of the prayer studies differ from those of outcome studies, the two research approaches are similar in this respect: When studying effects of a program, treatment, or intervention, if we’re not sure about the exact content of that program, treatment, or intervention, then we have a problem. In the field of program evaluation this problem falls under the rubric of program fidelity. Here’s a brief explanation:
In outcome research, an intervention can be said to satisfy fidelity requirements if it can be shown that each of its components is delivered in a comparable manner to all participants and is true to the theory and goals underlying the research.1
In the prayer studies, subjects were permitted to use whatever type of prayer they wanted. They might have recited traditional prayers, made up extemporaneous prayers, or chosen silent or contemplative forms of prayer. Or they may have just pretended to pray. The researchers considered any and all styles of prayer to be equivalent, except, as we learn later, vengeful and hateful prayer. (Which makes me wonder how they could be sure that no subjects chose this form!)
However, in the arena of publicly funded programs, insufficient information about the specific content of program interventions impedes good evaluation and decision-making. An example will help make this clearer. Say a rural literacy program includes specific educational materials for parents along with a specially prepared video. But suppose only a portion of the parents actually receive the materials. And suppose others don’t have a way to play the video, and that others don’t have the time to watch it. Ignorance of these facts can lead program managers to the wrong conclusions when they look at outcomes. They might decide, for instance, that the parental education component should be discontinued because it was not cost-effective.
It’s also possible that a program might be implemented uniformly but incorrectly. Maybe all participants in the literacy program received an outdated version of the video. Or there may have been errors in the eligibility information communicated to the school district. Or enthusiastic staff may have decided to improvise by awarding prizes to children who completed program milestones early. Obviously, there are all kinds of ways that program implementation can deviate slightly or substantially from the original plan.
Of course, deviations from the official program or intervention design can lead to undesirable results. The problem of incomplete distribution of materials to parents mentioned above is an instance where a program variation interferes with desired outcomes. And so, apparently, is the case of vengeful prayer. But the opposite is also possible. What if staff deliver a poorly conceived program so creatively that the outcomes are positive?
In either instance we’d have to concede that the delivered program is not the one that had been intended. So, the basic message is:
Without evidence that a program has been implemented properly, it is difficult to determine whether a program ‘works’ or meets its intended goals.2
Notice that evidence about (that is, careful measurement of) program implementation is essential. We can’t just assume that programs consist of the right things and are delivered in the right ways. Nor should we rely on hopes and prayers that programs and services are delivered uniformly and correctly.
1 Dumas, J. E., Lynch, A. M., Laughlin, J. E., Smith, E. P., and Prinz, R. J. (2001). Promoting intervention fidelity: Conceptual issues, methods, and preliminary results from the Early Alliance Prevention Program. American Journal of Preventive Medicine, 20, 38-47.
2 Esbensen, F., Matsuda, K. N., Taylor, T. J., and Peterson, D. (2011). Multi-method strategy for assessing program fidelity: National evaluation of the revised G.R.E.A.T. program. Evaluation Review, 35(1), 14-39.