The literature review and the survey showed implementation rates of 18% and 44%, respectively, suggesting that actual implementation occurred more often than reported in the literature. The quality of the research methods in these cases was also higher in practice (17%) than reported in the literature (3%). Likewise, the survey showed that more models proved to be correct in practice than in the literature. However, the survey reported that 14 models proved to be at least partially correct, while only 7 projects were evaluated with a before-and-after design. This casts some doubt on the reliability of the survey; it seems that some authors reported their model to be correct based on (subjective) reactions of the hospital.
An explanation for the differences between literature and practice is that the majority of papers focused on the technical simulation design rather than on the contribution to hospital improvements. This might be related to the authors' affiliations: the majority of the papers (66 out of 89) included at least one author from a mathematics, operations research, industrial engineering, or economics research group. These researchers may prefer to publish technical modeling details. Consequently, most conditions for success were related to the technical quality of the model.
Fone et al. [9] provided another explanation: due to the time pressure to publish, "it is likely that many modeling studies are published before validation is complete and before implementation has been carried out (and assessed)."
In addition, scientific publications on improvements achieved with simulation may be hampered by the difficulty of drafting a "ceteris paribus" design, finding control sites, and dealing with many unpredictable interfering variables.
Our study found that the majority of factors contributing to actual implementation concerned the technical quality of the simulation. Data availability was the most frequently mentioned factor (57 times). At least 43% of the involved hospitals had to generate new data to gain sufficient insight into their problem; this percentage could be even higher, as 27% of the papers did not report on data collection. Data availability is important because the reliability of a simulation model is affected by the quality of the data used to calculate input distributions. Validation and verification of the models are essential to check the quality of the model (see Table ).
Our study found that the process management factors are related to managing the expectations of the hospital and the modeler (see Table ). This finding is consistent with Robinson & Pidd [24]. The use of animation was mentioned 19 times as a means to simplify communication between modeler and hospital. Animation, however, should be used carefully, as it can distract staff from the model details [27]. Many of the process management factors are in agreement with Forsberg et al. [28], who also identified cooperation, careful planning of the project, and stakeholder and customer involvement as success factors for the implementation of simulation models in healthcare.
Our finding that only 9 of the 89 papers reported the use of their model in more than one setting is in line with Proudlove et al. [29]. An explanation is the emphasis placed on working closely with the client, which means that the best model for one hospital may be inappropriate for other hospitals [30]. Researchers often build detailed models to increase the statistical descriptive power, but this can hinder the demonstration of general principles [29]. Others [31] plead for more generic models after identifying differences between specific and general models in healthcare. In emergency care, initiatives have been undertaken to develop generic models [32].
There may be a selection bias in the paper selection process, as papers may be found in the medical, health services, and operations management and research domains. We feel, however, that the 89 included papers are a good reflection of the available literature in this field. As this study focused on the implementation of recommendations and found a limited number of papers, we did not exclude papers because of the journals' impact factor or the quality of the models.
Although non-scientific literature contains many examples of simulation models in healthcare [6], we did not include these, since non-peer-reviewed articles are not held to the same rigorous quality standards. Additionally, it is difficult to systematically identify these publications [6]. Another limitation is possible bias among survey respondents. We were only able to contact authors of 67 of the 89 papers, and of these, only 41 responded. Moreover, staff still present at the institution are more likely to have been involved in implementation.
The relative advantage that an innovation (here, simulation) has over other methods affects its uptake [33]. This paper found that implementation took place in 44% of the studies; however, actual evidence that simulation leads to improved hospital performance is limited. To increase the uptake of simulation, researchers should provide high-quality evidence of improvements. To get these results published, (scientific) journals could ask their authors to state whether or not the findings were accepted and implemented, and whether there is evidence of an improvement. A step further would be for journals to encourage authors to submit follow-up papers describing the implementation of the recommendations. Furthermore, examining popular literature (e.g. trade journals, news media) on this subject remains an item for further research [6].
More research into perceived success factors seems necessary. Given the differences encountered between literature and practice, it would be interesting to examine whether the technical, process, and outcome quality of implemented recommendations are higher than those of studies that were not implemented. Furthermore, surveys among multiple hospital respondents, such as management and medical professionals, examining organizational characteristics that contribute to the implementation of model findings seem worthy of further research.
This research was limited to simulation studies on operations management in hospitals. It would be interesting to extend the scope to other techniques, to enable researchers to select the most appropriate OR techniques for specific settings. The overview of OR techniques and their advantages and disadvantages recently published by the RIGHT project is an important contribution [4], because it also discusses when to apply a specific technique and the required resources. In addition, generalization of the methods and results needs further attention. It is relevant to identify a pool of generic approaches and to design a decision schedule for their use, involving the contingent factors relevant to the decision to embark on a specific simulation approach.