Our results showed that participants valued all 12 features studied. However, they regarded six features as most valuable both for advancing improvement efforts overall and for acquiring general and implementation knowledge in particular. These features were collaborative faculty, solicitation of staff ideas, the change package, PDSA cycles, Learning Session interactions, and the collaborative extranet. These features also helped participants maintain their motivation, access social support, and improve their project management skills.
Most notably, our results revealed that features that enable interorganizational learning were viewed as significantly more helpful by participants from organizations that experienced significant improvement than by participants from organizations that did not. This finding suggests a potential explanation for the mixed results of research on collaborative effectiveness: the use of interorganizational features, and interorganizational learning in turn, may mediate the impact of collaborative membership on organizational performance. This potential mediating effect suggests the importance of assessing not only “Do collaboratives work?” but also “What [about collaboratives] works for whom in what circumstances?” (Pawson and Tilley 1997). Our results suggest that collaboratives may work best for participants who capitalize on their interorganizational features in addition to their intraorganizational features. This hypothesis is consistent with research in other industries showing performance benefits of combining inter- and intraorganizational activities (e.g., Ancona and Caldwell 1992).
Our findings raise the question: why did teams from organizations that experienced less improvement view features that enable interorganizational learning as less helpful? One possibility is that less successful teams were unable to capitalize on the benefits of these features because of internal or external constraints such as lack of management support, an unsupportive organizational culture, or poor team functioning, factors that other studies have shown to influence performance improvement (Mills and Weeks 2004; Shortell et al. 2004; Bradley et al. 2006). A second possibility is that these teams made a misattribution error. Attribution theorists have shown that individuals tend to attribute their poor performance to situational factors outside of their own domain (Jones and Nisbett 1971; Gilbert and Malone 1995). Thus, poorer-performing teams may have attributed their poor performance to features that involved others. A third possibility is that these features are inherently less helpful than better performers claim. Better performers, overly enthusiastic about the collaborative experience, may rate all features highly and overestimate the effect of some. Our data do not allow us to distinguish among these explanations.
We studied participants in four collaboratives that varied in clinical focus and found no significant differences in helpfulness ratings across collaboratives. However, the helpfulness of features may vary for a different set of collaborative topics. For instance, features that facilitate interorganizational learning may be less helpful than features that facilitate intraorganizational learning when the practices recommended by the collaborative require substantial adaptation to fit the organizational context. The possibility that feature helpfulness is contingent on the characteristics of new practices is consistent with research showing greater implementation success in hospital units whose improvement teams used learning activities that facilitated the adaptation of context-dependent practices (Tucker, Nembhard, and Edmondson 2007).
Our survey results suggest that teams do not find site visits extremely valuable. However, we caution against concluding that site visits are not a great help to participants. The small number of users who rated this feature (N = 12) may have limited our power to detect significant differences. Moreover, our qualitative data suggest that site visits can be a great help for two reasons. First, they afford the visitor an opportunity to observe recommended practices in operation. Research on best practice transfer suggests that such observation is beneficial for practice implementation because many new practices in health care have a large tacit component that is not easily described or codified (Berta and Baker 2004). Second, site visits grant the visitor and host greater opportunity to interact and share ideas and experiences related to the topic area. The two organizations that participated in site visits in Fremont et al.'s (2006) qualitative study echoed this view, supporting the high value of this feature for collaborative participants.
A central question for designers and implementers of collaboratives is whether and how to modify the model to increase its effectiveness for improving quality of care. Our findings imply that modifications that reduce the emphasis on the six most valuable features are ill-advised because they would diminish the value of collaboratives from the participant perspective. Reducing the number of Learning Sessions or replacing them with virtual sessions, for example, would curtail opportunities for participant interactions, one of the most helpful features of collaboratives. Whether increasing the emphasis on these six features would yield additional benefits, or whether there are diminishing returns due to resource constraints or information saturation, is a question that requires further study.
Although our study's findings are informative, they should be considered in light of its methodological limitations. First, our effective response rate of 68 percent is less than ideal; however, it is comparable to the response rates of other studies of collaboratives (Landon et al. 2004; Pearson et al. 2005). Furthermore, differences between study teams and nonstudy teams in performance improvement and other characteristics were generally modest and not statistically significant. It is likely, however, that study teams were more engaged in the collaborative and its features than nonstudy teams, and were therefore not representative of them. Nevertheless, our study teams are representative of our population of interest: users of collaborative features. Study teams' familiarity with the features makes them rich sources of information for a study assessing the helpfulness of features for those who use them. Other collaborative participants may have different views; nonusers, for example, may view features as unhelpful for their situation. Thus, we caution against generalizing our findings to all collaborative participants. More work is needed to understand how nonusers view these features.
Second, while our qualitative measure of performance improvement enabled us to combine data from collaboratives with different outcome measures, we lack objective, longitudinal measures of change in clinical practice or outcomes. Third, although we found no evidence of evaluator bias in tests comparing collaborative-level improvement rates, the possibility of evaluator bias cannot be eliminated given our reliance on a single evaluator of performance per collaborative (i.e., the IHI Director). Finally, we examined only the BTS collaborative model. While BTS models are common, there are also several variations on this model (Solberg 2005). Additional studies are needed to evaluate those models in depth.