Our systematic review of systematic reviews on the impact of eHealth has demonstrated that many of the clinical claims made about the most commonly deployed eHealth technologies cannot be substantiated by the empirical evidence. Overall, the evidence base in support of these technologies is weak and inconsistent, which highlights the need for more considered claims, particularly in relation to the patient-level benefits associated with these technologies. Also of note is that we found virtually no evidence in support of the cost-effectiveness claims that are frequently made by policy makers when constructing business cases to raise funding for the large-scale eHealth deployments now taking place in many parts of the world.
This work is characterised by a number of strengths and limitations, which need to be considered when interpreting our findings. Strengths include the multifaceted approach to the identification of systematic reviews and the synthesis of this body of evidence. Juxtaposing the conceptual maps of the fields of quality, safety, and eHealth permitted us to produce a comprehensive framework for assessing the impact of these technologies in an otherwise poorly ordered discipline. In addition, reflecting on methodological considerations and socio-technical factors enabled us to produce an overview that is sensitive to the intricacies of the discipline.
Given the poor indexing of this literature and the fact that our searches were centred on English-language databases, it is possible that we missed some systematic reviews. Our use of a novel, multimethod approach may be criticised as less rigorous than a conventional systematic review, in that we were not in a position to appraise individual primary studies. Such novel methods of synthesis are less well developed and less widely employed, and have therefore been less thoroughly evaluated. The fact that we needed to adapt the instrument used for critical appraisal is another potential limitation. Further, our assumptions about the expected theoretical benefits presume that the eHealth technologies considered are capable of delivering these benefits and are used in a manner that allows them to do so. Likewise, it could be argued that some of the expected benefits outlined in this overview are assured and perhaps do not therefore require formal evaluation. It is our view, given the prevailing climate surrounding EHRs and the large-scale implementations underway globally, that the claims made about these technologies should be subjected to critical review in the light of the empirical evidence. The overlap between reviews and the inconsistent use of terminology required us to make judgment calls regarding which reviews, and indeed which included primary studies, pertained to which interventions. Our focus on clinician-orientated information systems used predominantly in economically developed country settings is a further limitation; more patient-oriented technologies, such as telehealth care, are no less important than those oriented towards professionals, and we are currently engaged in follow-on work that broadens our field of enquiry along these lines. Finally, our synthesis was limited by critical deficits within the literature, which undermined our efforts to generate a fully reproducible quantitative summary of findings.
At the most elementary level, the literature that constitutes the evidence base is poorly indexed within bibliographic databases, reflecting the nonstandard usage of terminology and the lack of consensus on a taxonomy of eHealth technologies. There were, furthermore, varying degrees of overlap between individual reviews, and contradictory findings even amongst reviews of the same primary studies. In addition, we found considerable heterogeneity in how the fundamental features of reviews (motivation, objectives, methods, presentation of findings, etc.) were reported across individual papers. This imprecision and nonstandard usage of terminology, together with the poor quality of many reviews, posed additional challenges, both in interpreting findings from individual reviews and in synthesising the overall body of evidence.
Our greatest cause for concern was the weakness of the evidence base itself. A strong evidence base is characterised by quantity, quality, and consistency; unfortunately, we found that the eHealth evidence base falls short in all three respects. In addition, relative to the number of eHealth implementations that have taken place, the number of evaluations is comparatively small. Apart from the several barriers and challenges that impede the evaluation of eHealth interventions per se, a number of factors might contribute to evaluative findings going unpublished. Conflicts of interest can, in particular, make it difficult to publish negative findings, which means that the potential for publication bias should not be underestimated in this discipline. Moreover, published primary research has repeatedly been found to be of poor quality, particularly with regard to outcome measurement and analysis. The highly heterogeneous and complex nature of these interventions makes consistency of findings, even across very similar scenarios, difficult to detect. Our critical appraisal exercise found the same to be true of secondary research. How the included reviews fared in our critical appraisal merits further comment and will be the subject of a separate publication.
Another commonly criticised aspect of the existing evidence base is its limited utility. Evaluations have to date largely favoured simplistic approaches, which provide little insight into why a particular outcome occurred. Understanding the underlying mechanisms, typically by studying the particular context of an evaluation, is critical for drawing conclusions about causal pathways and the effectiveness of eHealth interventions. In addition, evaluations have tended to focus on benefits, paying little attention to risks and costs, which are rarely assessed or rigorously appraised. Consequently, the existing evidence base is often of little utility to decision making about the strategic direction of implementation efforts.
A handful of high-profile primary studies demonstrating the greatest evidence of benefit often serve as exemplars of the transformative power of clinical information systems. These typically involve advanced multifunctional clinical information systems incorporating storage, retrieval, management, decision support, order and results communication, and viewing functionality. Evidence of the beneficial impact of such systems is, however, limited to a few academic clinical centres of excellence where the systems were developed in house, underwent extensive evaluation with continual improvement, and were supported by a strong sense of local ownership amongst their clinical users. The contrast between the success of these systems and the relative failure reported across much of the wider body of evidence is striking. Clearly, there are important lessons to be learned from these centres of excellence, but the extent to which the results of these primary studies can be generalised beyond their local environment to institutions procuring “off-the-shelf” systems is questionable. It is encouraging, however, to see evaluations of commercial systems increasingly taking place. A range of factors tends to contribute to the lack of successful implementations of these off-the-shelf systems. In particular, commercial systems typically have assumptions about work practices embedded within them, which are often not easily transferable to different contexts of use. Additionally, it is not unusual for insufficient time and effort to be devoted to the all-important customisation process. NHS Connecting for Health's difficulties with the implementation of EHRs in hospitals in England are a prime example of the challenges that can ensue if such socio-technical factors are given insufficient attention.
Keeping the above in mind, the maturation of evaluation is vital to the success of eHealth. There is some indication that the quality of evaluations is beginning to improve with regard to methodological rigour, but there is clearly still considerable scope for improvement. Most of the reviews included in our work called for more rigorous research to establish impact, with some calling for more randomised controlled trials (RCTs) in particular. A growing number of authors have, however, argued that trials of eHealth interventions should employ guidance developed specifically for complex interventions. There are, nonetheless, a number of challenges to conducting RCTs of eHealth, and many calls have also been made for the use of other, complementary methodologies. Strategies for improving the quality of research should include building the capacity and competency of researchers. In the shorter term, the development of resources, tool-kits, frameworks, and the like for researchers and consumers of research should be prioritised. Such developments are pivotal to furthering the science of evaluation in eHealth and the use of evidence-based principles in health informatics. Another important development that is needed is collaboration between different disciplines in evaluation.
We found an important literature pertaining to the design and deployment aspects of eHealth technologies. This literature is central to understanding why some interventions succeed and others fail (or are judged as such). At the individual level, “human factors” play an important role in the design of an intervention, determining its usability and ultimately its adoption. At the aggregate level, “organisational issues” are critical in strategising deployment, which ultimately influences adoption. Although enablers of and barriers to success in design, development, and deployment are being elicited retrospectively from the literature, the findings for both of these inter-related concepts have largely gone untested prospectively. And although greater attention is now being paid to socio-technical aspects in formal evaluations than ever before, there is still much that needs to be understood.
It is clear that there is now a large volume of work studying the impact of eHealth on the quality and safety of health care. This might be seen as setting a firm foundation for realising the potential benefits of eHealth. However, although seminal reports on the quality and safety of health care invariably point to eHealth as one of the main vehicles for driving forward sweeping improvements, our work indicates that realising these benefits is not guaranteed and, if it is to be achieved, will require substantial research resources and effort.
Our major finding from reviewing the literature is that empirical evidence for the beneficial impact of most eHealth technologies is often absent or, at best, only modest. While absence of evidence does not equate with evidence of ineffectiveness, reports of negative consequences indicate that evaluation of risks – anticipated or otherwise – is essential. Clinical informatics should be no less concerned with safety and efficacy than the pharmaceutical industry. Given this, there is a pressing need for further evaluations before substantial sums of money are committed to large-scale national deployments under the auspices of improving health care quality and/or safety.
Promising technologies, unless properly evaluated with the results fed back into development, might not “mature” to the extent needed to realise their potential when deployed in everyday clinical settings. The paradox is that while the number of eHealth technologies in health care is growing, we still have insufficient understanding of how and why such interventions do or do not work. To resolve this, it is essential not only to devote more effort to evaluation, but also to ensure that the methodology adopted is multidisciplinary and thus capable of untangling the often complex web of factors that may influence the results. Moreover, a fuller description of the rationale for the choice of methodological approach employed to evaluate eHealth technologies would facilitate synthesis and comparison.
Finally, it is equally important that deployments already commissioned are subjected to rigorous, multidisciplinary, and independent evaluation. In particular, we should take every opportunity to learn from the largest eHealth commissioning and deployment project in health care in the world – the £12.8 billion NPfIT – and from the at least equally ambitious national programme that has recently begun in the US. These and similar initiatives being pursued in other parts of the world offer an unparalleled opportunity not just for improving health care systems, but also for learning how to (or how not to) implement eHealth systems and for refining them further once introduced.