In the extreme powerball approach to evaluation, driven monolithically by the question "Is anyone healthier?", no formal evaluation study would be done until the project was extremely mature. At that point, the study ultimately conducted would ideally be a randomized trial that compared the health of individuals in the target community who used the intervention of interest with that of those who did not. A statistical comparison of appropriate health indexes of the two groups would reveal the probability that differences between the groups, if any, were due to chance alone. The statistical controls built into the design of the experiment would have eliminated any other causal explanation of the differences. From this powerball view of evaluation, any differences favoring the intervention group, with less than a 5% probability of having occurred by chance alone, would then lead to rejoicing throughout the land.
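To make the "less than 5% probability" criterion concrete, the sketch below shows one way such a two-group comparison might be computed. The health-index scores, group sizes, and the choice of an independent-samples t-test are illustrative assumptions, not drawn from any particular study.

```python
# A minimal sketch, assuming hypothetical health-index scores for two study arms;
# a real trial would use validated measures and a prespecified analysis plan.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intervention = rng.normal(loc=72.0, scale=10.0, size=200)  # hypothetical scores
comparison = rng.normal(loc=70.0, scale=10.0, size=200)    # hypothetical scores

# Independent-samples t-test: how likely is a difference this large by chance alone?
t_stat, p_value = stats.ttest_ind(intervention, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference unlikely to be due to chance alone (p < 0.05)")
else:
    print("Difference could plausibly be due to chance")
```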
In contrast, a smallball approach to evaluation would consist of a series of more focused studies conducted across the life cycle of the informational intervention. As illustrated in the figure, formal study of the intervention would begin even before deployment, and perhaps even before design of the proposed information resource begins, to verify whether a need for the resource exists and to understand the character of that need. Smallball studies conducted at this stage would address the need for the intervention from several perspectives, including the perceptions of the conceivers of the intervention, the perceptions of the end users, and the information that can be gleaned from statistical indicators relevant to the end-user group and their environment. The potential magnitude of differences among these perceptions was strikingly demonstrated by Forsythe and colleagues in studies contrasting what migraine patients really wanted to know about their disease with what their care providers thought they would want to know [1]. Other smallball studies conducted before deployment of the intervention might explore whether the proposed resource design is in fact in line with the validated needs in the population.
Figure: Stages of information resource development and evaluative study
During deployment, the figure suggests a chain of events that must occur if the intervention is to realize, farther down the road, the desired positive effects on health outcomes. From the outset, the intervention must be properly deployed in a technical sense, meaning that it must function in the field as intended and at least as well as it functioned in more controlled laboratory testing. In the next step, the intended users must actually use the resource, and use it appropriately. Even if a resource is used, and used in the manner intended, it must then engender health behavior change, invoking the next link in the chain. Depending upon the nature of the informational intervention (for example, when the end users are health professionals), behavior change must first occur in these professionals before the behavior of health care consumers can be affected. Other kinds of interventions are directed at health care consumers as end users; in these cases, as shown by the dotted arrow in the figure, appropriate use of the resource can lead more directly to health behavior changes in consumers.
As this chain relates to evaluation, the desired effect represented by each link, or each arrow depicted in the figure, cannot be assumed. Smallball evaluation studies are required to determine whether the resource was appropriately deployed, whether and how it was used, and whether the anticipated behavior changes occurred in consumers or health care providers. If, or more often when, the answers to some of these questions turn out to be negative, the smallball evaluation studies can direct the developers and managers of the intervention toward alternative strategies that can lead to more favorable results.
Finally, and with attention to the bottom of the figure, the posited beneficial effects on health can be seen only some time after the deployment and use of the intervention. Sometimes the lag between a consumer's use of an intervention and the realization of the desired health outcomes is considerable. Consider, for example, the time that must elapse between implementation of a successful smoking cessation intervention and any reduction in the incidence of lung cancer in the target population. While everyone would agree that reduced rates of smoking-related illness are the desired end point of the intervention, as a practical matter few audiences for evaluations of such an intervention would be willing to wait until such reductions could be demonstrated (or not). Some might argue that studies directed at this stage of a project are not the province of informatics at all, but rather fall into the domain of health services research. In this light, informatics evaluation might end with the investigation of health behavior change. If this change is along lines that have been shown, in health services research studies, to engender desirable health outcomes, the project can be termed a success from the perspective of informatics.