Clin Biochem Rev. 2013 August; 34(2): 47–60.
PMCID: PMC3799219

From Evidence to Best Practice in Laboratory Medicine


Laboratory tests offer value if they provide benefit to patients at acceptable costs. Laboratory testing is one of the most widely used diagnostic interventions supporting medical decisions, yet evidence demonstrating its value and impact on health outcomes is limited. This contributes to wide variations in test utilisation including underdiagnosis, overdiagnosis and misdiagnosis, which may impact the quality and the clinical- and cost-effectiveness of care and patient safety. Therefore implementing evidence into the care of patients is a moral and social imperative for laboratory professionals and all health care staff.

This review investigates the reasons research does not get into practice, or does so only after a very long delay. Apart from reviewing the common barriers to implementation, it also discusses the drivers of inappropriate test utilisation. By reviewing the theoretical and practical aspects of implementation science, recommendations are made for approaches that are thought to be most effective and that can be adopted to close the gap between evidence and practice, and to facilitate evidence-based laboratory medicine. Passive dissemination of the evidence and educational interventions are insufficient and do not offer sustainable solutions. A multifaceted and individualised implementation strategy, including individually tailored academic detailing, reminder systems, clinical decision support systems, feedback on performance, and participation of doctors and laboratory professionals in quality improvement activities addressing test selection and interpretation and in clinical audits, has greater potential for success. Examples of these initiatives at the laboratory and clinical interface are provided with links to valuable resources.

‘Knowing is not enough; we must apply.

Willing is not enough; we must do.’

JW von Goethe


Medical tests impact patient health by influencing clinical management decisions on what treatment options are selected, when these are administered and how such interventions are followed up to assess response and adequacy of treatment. Medical tests may modify patient perceptions and behaviour, and apart from their intended contribution to better management of conditions, they may also put patients at risk of harm.1,2 Evidence-based laboratory medicine (EBLM) assists clinical management of patients by integrating into clinical decision making the best available research evidence for the use of laboratory investigations with the clinical expertise of the physician and the needs, expectations and concerns of the patients, in order to improve the care and outcomes of individual patients and the effective use of healthcare resources. In other words, the aims of EBLM are to improve the value and impact of laboratory testing on health and health care delivery. Laboratory tests offer value only if they are: clinically valid i.e. they provide highly accurate diagnostic or prognostic information for clinical decision making; clinically effective i.e. they contribute to improved patient-centred outcomes; and cost effective i.e. they contribute to reduced health care costs. In brief, laboratory tests have value if they provide benefit to patients at acceptable costs.

How can these aims be achieved and what is the role of laboratory professionals in getting evidence into practice? Our responsibilities are best summed up by Muir Gray, author of the highly acclaimed book Evidence-Based Health Care: (i) eliminate poor or useless tests before they become widely available i.e. stop starting; (ii) remove old tests with no proven benefit or, in fact, harm from the laboratory’s repertoire i.e. start stopping; (iii) introduce new tests if evidence proves their efficacy and effectiveness i.e. start starting or stop stopping.3 Whilst the aims are clear and laboratory professionals, clinicians, patients, policymakers and industry generally endorse such principles, practice data often show the opposite or, at best, wide variations in test utilisation. The widening gap between evidence and practice leads to various undesirable scenarios such as underutilisation, overutilisation or inappropriate utilisation of medical tests, which may contribute, respectively, to underdiagnosis, overdiagnosis or misdiagnosis of patients, with potentially harmful health and other consequences for patients and society.

With the rapid growth of proteomic and genetic research, many new potential biomarkers are being developed and may reach the market before their value is appropriately investigated and systematically proven in clinical research and practice. As research trial methods for medical test evaluation are still being debated and are under development, a number of laboratory tests gained widespread use in recent decades which were subsequently shown to have no clinical value, or even to cause harm, and which would not gain approval if they were submitted to the market today.4 It is also common sense that once a test becomes available or easily accessible, and especially if it is heavily promoted, it will be used. Patients’ access to health information via the world-wide web also increases demand for testing. Electronic order systems allowing ‘one-click’ requesting of test panels enforce certain testing patterns which then become habits that are very hard to change. Inappropriate test utilisation, due to wrong indication for or timing of the test, is also a well-known phenomenon and may provide misleading information for selection of further therapeutic interventions or for drug dosing decisions.5

Examples of overutilisation include:

  • unnecessary routine pre-operative laboratory testing of low risk patients before elective surgery;6
  • routine health checks of asymptomatic individuals7 in spite of observations that the prevalence of detecting a significant health condition is only 0.5–3% in such low pre-test probability situations;8
  • serum and red cell folate testing in the general population in spite of the fact that mandatory folate fortification of flour has been in place since 2009 in Australia. A study two years after the introduction of mandatory folate fortification showed that the prevalence of low serum and red cell folate concentrations reduced by 77% and 85%, respectively.9 In spite of this evidence, in the financial year of 2011–2012 over 2.3 million services were claimed for the two relevant Medicare items and testing increased by 26% since the preceding financial year.10
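The low diagnostic yield in such low pre-test probability settings follows directly from Bayes’ theorem. The sketch below illustrates the point with a hypothetical test of 95% sensitivity and 95% specificity (the figures are invented for illustration, not drawn from the cited studies):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive result reflects true disease (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 95% sensitivity, 95% specificity.
# At a 1% pre-test probability, most positive results are false positives.
ppv_low = positive_predictive_value(0.01, 0.95, 0.95)   # ~0.16
# The same test at a 30% pre-test probability performs far better.
ppv_high = positive_predictive_value(0.30, 0.95, 0.95)  # ~0.89

print(f"PPV at 1% prevalence:  {ppv_low:.0%}")
print(f"PPV at 30% prevalence: {ppv_high:.0%}")
```

Even a very good test, applied to an asymptomatic low-prevalence population, returns mostly false positives; the same test becomes informative once clinical assessment has raised the pre-test probability.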

Advances in imaging and laboratory techniques allow the detection of early pathophysiological changes, many of which are incidental findings of no clinical significance that do not warrant any medical intervention.11 Overdiagnosis of such ‘pseudo-diseases’ may lead to an array of further unnecessary investigations and thus costs, not to mention the psychological and psychosomatic consequences to the patients and their families. For example, an estimated glomerular filtration rate (eGFR) below 60 mL/min/1.73m2 is common in elderly people. According to a recent study, this criterion may potentially lead to overdiagnosis in up to one-third of this population, whilst the incidence of end stage renal disease is less than 1 in 1000 per annum.12 Overdiagnosis of prostate cancer due to prostate-specific antigen screening of the general population is responsible for an annual incidence to mortality ratio for prostate cancer of approximately 6 to 1.13 More examples of, and commentaries on, overdiagnosis from experts in evidence-based medicine are available online.

In contrast, it has also been shown that useful tests are sometimes underutilised, and it may take up to two decades for such tests to be put into routine clinical practice. This delay in the translation of credible research findings into clinical practice remains a substantial obstacle to improving the quality and effectiveness of health care interventions. For example, a survey of over 10,000 patients in the US has shown that they received only 55% of recommended care for both acute and chronic conditions. Diagnostic indicators of care were also met in only 56% of the cases.14 An earlier study has also shown that low-density lipoprotein cholesterol targets set by the National Cholesterol Education Program recommendations were achieved in only one-third of patients.15

Further examples for underutilisation and underdiagnosis include:

  • coeliac disease with an estimated prevalence of nearly 1% in the US and Europe, whilst the prevalence of diagnosed cases is only about 0.27% or even less;16,17
  • familial hypercholesterolaemia;18 and
  • a number of chronic conditions, particularly hypogonadism and osteoporosis, in the elderly.19

The consequences of under- and overdiagnosis have become central in recent discussions.11 Whilst numerous examples have been published about the health and other harms of inappropriate test utilisation, it is alarming to see how inappropriate diagnostic labels change the perceptions and choices of patients and their families. A recent study on the overdiagnosis of gastroesophageal reflux in children has shown that parents whose children received this diagnosis were more willing to accept medical interventions than parents whose children had the same symptoms but were not given a disease label, in spite of both groups being told that the medications are likely ineffective.20 These findings confirm that inappropriate disease labels are associated with unnecessary overtreatment and cause anxiety and other potential harms. Cost implications of inappropriate testing are also well known. For example, a recent British study estimated that eliminating inappropriate testing could save the National Health Service up to £1 billion in test costs alone.21 These problems and perceptions call for a more systematic approach and a concerted effort by all stakeholders, targeting both health care professionals and patients, in order to ensure that good research evidence is implemented into practice without much delay (stop stopping – start starting) and that inappropriate testing practices are discontinued (start stopping) or never get implemented (stop starting).

Before embarking on a review of potential solutions, the reasons research does not get into practice, or does so only after a very long delay, need to be investigated. The approaches that have been shown to be effective and can be adopted to close the gap between evidence and practice, and to facilitate better and faster dissemination of evidence-based laboratory test utilisation, will then be discussed. Finally, we will discuss how the aims and objectives of EBLM can be achieved and what the laboratory profession and other key stakeholders need to do to improve the clinical value and impact of laboratory testing.

Why is Test Utilisation Often Inappropriate and Not Driven by Evidence?

Joseph Schumpeter, the Austrian economist, concluded that ‘It was not enough to produce satisfactory soap, it was also necessary to induce people to wash’. So why do people not ‘wash’? What makes it so hard to act on something for which we already have some research data (i.e. the ‘soap’)?

Is it because the ‘soap’ itself may not be satisfactory? In other words, could it be that the evidence itself is:

  • wrongly produced and of poor quality e.g. due to faulty methodology and design,22 and therefore unsafe. For example, a recent study investigating reporting bias in meta-analyses of cardiovascular biomarkers revealed that selective reporting of the associations of emerging biomarkers with cardiovascular disease has led to inflated estimates of effect.23
  • not working in practice, not applicable? For example, research evidence is not transferable to daily operations due to (a) differences between populations studied and those to be managed in real practice; or (b) differing test performance in research and clinical settings; or (c) biomarkers being used as surrogate outcome measures in trials of approved drugs which in real life subsequently translated to no benefits or even harm to health outcomes.24
  • not affordable or too costly for the benefit it provides.

Alternatively, is it because even though the ‘soap’ is fine, there is something wrong with its user? Perhaps the user:

  • does not know about the product e.g. due to inappropriate dissemination of information for lack of specialty society membership, or lack of easy access to library or online resources;
  • is ‘lazy to wash’ e.g. due to inappropriate education or training or to inertia to current practice and reluctance to change practice;
  • finds it complicated or cannot afford to wash with it in daily practice e.g. due to organisational, economical or other barriers to application, such as lack of experience with patient group, lack of knowledge about alternatives, or anticipated practical difficulties and high costs or doubts about patient compliance and acceptance;
  • finds it unacceptable to use the product e.g. for cultural, religious, ethical, legal, social or other societal reasons, or due to personal beliefs or differential beliefs about applicability and utility of recommendation to the patient or population;
  • does not trust or believe in the product e.g. due to poor reporting, a lack of clear and consistent guideline recommendations, conflicting or ambiguous information from various guideline sources, or fear of litigation.25

Apart from the above mentioned examples of common barriers to implementation, a number of drivers also fuel inappropriate test utilisation. The list of such drivers is extensive and the summary below may not be exhaustive:

  • the availability of and curiosity and excitement about often heavily promoted new technologies and tests;
  • ‘herd mentality’ whereby overly optimistic initial reports result in overuse of incompletely studied diagnostic methods or therapies;2
  • cultural or behavioural drivers and beliefs that screening individuals for low risk conditions, for example by general health checks, strengthens the physician-patient relationship. However, a recent study has shown that such health checks had no significant impact on all cause and cardiovascular mortality;7
  • demands and perceptions of patients. It has been shown that patients willing to take more responsibility for and control over health issues overrate the real value of testing and screening, and are hostile to recommendations or medical advice that less testing is often more beneficial;26
  • testing to save time until clinical decisions can be made;
  • reassurance of doctor or patient, even though a recent meta-analysis has shown that diagnostic tests in low risk and low prevalence conditions do little to reassure patients, decrease their anxiety, or resolve their symptoms, and at best the tests only reduce further primary care visits;27,28
  • re-definition of diseases due to more sensitive tests or testing protocols and the self-perpetuating cycle in which overdiagnosing conditions earlier leads to increased prevalence of disease, and then to overestimation of the benefit of treatment;29
  • fear of litigation, and legal incentives rewarding defensive attitudes of ‘better to overtest and over-manage 100 patients than miss or mismanage one patient’;
  • career ambitions, with the extreme examples of some opinion leaders earning professional recognition by extensively producing and releasing premature research data; or occasionally even publishing fake research data to inflate publication lists and curricula vitae, which then strongly influence clinical guideline recommendations;30
  • financial drivers2 and fee-for-service reimbursement models that reward more testing without investigating the need for such testing practices.

Closing the Gap Between Evidence and Practice

Field and Lohr have pointed out that ‘guidelines do not implement themselves’.31 Or, in Goethe’s more literary words, ‘Knowing is not enough; we must apply. Willing is not enough; we must do.’ Even though no one would possibly argue with these statements, it has been widely acknowledged that major gaps exist between what is known as effective practice (i.e. theory and scientific evidence) and what is actually done (i.e. policy and practice).32 Even a good ‘soap’ can be utilised wrongly if implementation of appropriate use fails and, vice versa, a faulty ‘soap’ can just as well be implemented very efficiently. Only the right introduction of the right ‘soap’ to the right customer will achieve the desired positive outcomes of service.3 Indisputably, this calls for objective measurement of both intervention and implementation outcomes.

Theoretical Considerations

Implementation science experts hold the view that a wide array of multi-level variables need to be considered for the successful implementation of evidence-based health innovations. A recent systematic review groups these into causal factors, such as structural, organisational, provider, patient, and innovation level measures, and into implementation outcomes that the analysis of causal factors is able to predict.33 The structural measures could represent aspects of the local health care environment, including physical infrastructure and human resources, political, social and health policy aspects, and economic climate. The organisational level encompasses aspects of the organisation’s management culture and strategic vision and values including commitment to high quality and evidence-based service, employee morale and customer satisfaction. The provider level constructs consider aspects of clinical staff’s and patients’ attitudes and incorporate aspects of behavioural change for those who are ultimately responsible for implementing the innovation. The innovation level includes analysis of the benefits of introducing a new health technology into existing clinical pathways and reviewing the efficacy of such changes. The patient-related characteristics include considerations of the patients’ perspectives, health-relevant beliefs, motivation, and personality traits that can impact implementation outcomes. It is important to highlight that these factors are interrelated. For example, changes at the organisational level can facilitate changes at provider level and affect individual behaviours both at health care staff and patient level.34

According to the model of Chaudoir and colleagues, implementation outcomes are classified as adoption, fidelity/adherence, implementation cost, penetration, and sustainability.33 Before conceptualising and discussing the measurement of implementation outcomes we need to investigate the process of translating the evidence into practice which is largely affected by the above causal factors. The process is best described as behavioural change management and starts with acknowledgement that there is a problem and recognition of the need for a change. The next steps are exploration and search for a solution and proceed through awareness and acceptance to adoption and adherence. The latter two stages of the implementation process involve program installation, initial implementation, full operation, sustainability and innovation linked to continuous monitoring of impact and improvement of the quality and effectiveness of care.32 The above highlight the need for a system approach and the collaboration of all stakeholders if successful implementation of evidence-based best practice is to be achieved.

Practical Considerations

There is good evidence in the literature for which approaches do not work, and reasonable evidence for which ones do. Educational methods, such as disseminating practice guidelines and continuing medical education, are useful tools for stimulating awareness (Table 1).35 However, most of the implementation science literature reiterates, and a systematic review by Mickan et al. has convincingly demonstrated, that the production, active dissemination and even acceptance of the evidence do not guarantee that evidence-based recommendations for best practice are adopted and adhered to.25,36 This study showed that both adoption and adherence were affected by provider and organisational level constructs.25 For example, specialists with higher skills and experience, or working in large hospitals with better equipment and staff resources, were more likely to adopt and adhere to recommendations than general practitioners working single-handed or in small group practices with less advanced technologies or support staff. Laboratories may therefore need to develop different implementation strategies for their hospital and general practitioner clients. The same study also found that national or regional recommendations developed by credible specialty organisations were more likely to be accepted and adopted than global or other international guidelines that are more remote from, or culturally and organisationally different to, the actual stakeholder professions or place of implementation; this argues for the joint development of laboratory medicine-specific clinical recommendations.25 Finally, the study showed that informed patients can influence adherence to best practice, which again calls for guideline implementation strategies that provide and apply patient information and empowerment tools.

Table 1
General approaches for implementing evidence into practice.

Understanding better the causes for leakage along this awareness–acceptance–adoption–adherence pipeline is essential for designing the best strategies for implementation of evidence-based recommendations. Targeted dissemination of guidelines to less experienced clinicians and to clinicians working in small centres using expert outreach visits, presentations and recommendations by influential peers or professional bodies in prestigious conferences or journals may facilitate acceptance of the value of change by the targeted individuals and their organisations (Table 1). Clear and consistent laboratory testing-related guidelines, conceived in collaboration with clinical specialists and which are pilot tested and adapted to local settings and equipped with tools and resources for monitoring, achieve higher success with adoption and adherence (Table 1).

Implementing evidence-based recommendations is a complex process which involves behavioural, organisational and policy approaches and a combination of various tools that facilitate best practice and change in existing behaviour. Guidelines, policies, educational information or training alone are not effective and may have a very short lifespan in terms of adherence. Sustainability of change presents the biggest challenge. Long-term multifaceted implementation strategies that address all stakeholders simultaneously and quality systems that employ regular reminders and feedback on performance compared to peers are more effective. Whilst there are no standard recipes for success, the strategies and approaches presented in Table 1, and especially their various combinations, have been shown to be effective for getting evidence into practice in some but not all circumstances.37

Evidence-based practice requires the whole organisation, not just the laboratory personnel and individual physicians, to be ready for the change. Policy changes that require top-level commitment and change at organisational or management level may help to institutionalise the change and ensure that evidence-based practice becomes routine and part of the culture of organisational operations. This could be achieved by (Table 1):33,34,38

  • commitment to evidence-based best practice in strategic plans and job descriptions;
  • allocation of sufficient time and tools for staff training and training assessment to achieve the competence required for the change;
  • re-engineering care processes (e.g. acceptable caseload, appropriate skill-mix of colleagues, availability of necessary resources including staff, time, equipment, health care facilities);
  • information systems aligned with implementation of evidence-based practice;
  • investing in patient information and patient education;
  • devising and using quality measures and systems for monitoring key performance indicators and rewarding positive outcomes (health outcomes, patient and customer satisfaction, cost savings, etc.);
  • continuous feedback of information to demonstrate the impacts achieved by the change; and
  • linking budget to outcomes and performance.

Performance payment and financial incentives are becoming popular at regulatory level and seem to work in some settings.39 However, a recent Cochrane review concluded that the currently available body of evidence is insufficient to demonstrate either that they improve the quality of primary health care or that they are cost effective relative to other means of quality improvement. Current evidence therefore suggests that financial incentive schemes should be carefully designed and evaluated.40

How to Achieve the Aims and Objectives of EBLM

Improving Awareness and Acceptance of EBLM

Informed health care professionals as well as patients influence the uptake of evidence-based best practice recommendations. A good example of patient information and empowerment tools is the LabTests Online portal, which is available in locally adapted versions and in many different languages (Table 2). In Australia, patient information factsheets advising the public on the risks and benefits of pathology testing, including information on direct-to-consumer genetic testing, are also available from the Royal College of Pathologists of Australasia website and have been disseminated to general practitioners (Table 2). The College, in collaboration with relevant clinical colleges, also provides evidence- and consensus-based guidance on pathology testing. For example, a recent addition to this resource makes recommendations for appropriate laboratory test selection and for reducing pre-analytical errors in emergency care, and defines key elements of service level agreements including the management of point of care testing.41 Common Sense Pathology, published as a supplement of the Australian Doctor magazine, primarily aims at the quality use of pathology by general practitioners across Australia. It presents typical case scenarios with clear and concise recommendations for deciding which pathology tests are most appropriate for the diagnosis and management of different illnesses (Table 2). The National Prescribing Service’s MedicineWise program, delivered by health professionals to primary care physicians, uses the method of academic detailing. This is a service-oriented face-to-face outreach education visit for physicians and provides an accurate, up-to-date synthesis of the evidence about relevant medical interventions.
The goal of academic detailing is to bring the prescribing of medications and the requesting of medical tests into line with medical evidence, to support patient safety and the cost-effective use of health care resources, and to improve the clinical effectiveness of patient care. The National Prescribing Service provides evidence-based guidance for both the public and health care professionals, for example on the diagnosis and monitoring of diabetes and on vitamin-D testing (Table 2). The Quality Use of Pathology Program (QUPP), established in 1999 by pathology stakeholders under the umbrella of the Australian Government, runs projects that promote evidence-based patient choice and information, rational test requesting and professional practice standards (Table 2). Their recently released report on quality test ordering provides a matrix that can assist in assessing whether a request for a pathology test is appropriate.42

Table 2
Tools and resources supporting the implementation of evidence-based laboratory medicine.

In the US, the Choosing Wisely campaign, sponsored by the American Board of Internal Medicine Foundation, provides key clinical recommendations for physicians and patients that help avoid unnecessary medical interventions. In this campaign a number of clinical disciplines advise against the unnecessary use of certain diagnostic tests (Table 2). For example, haematologists advise not to perform a workup for clotting disorders in patients who develop a first episode of deep vein thrombosis in the setting of a known cause, and not to perform repetitive full blood count and chemistry testing on stable hospitalised patients, so as to avoid diagnostic phlebotomy-related anaemia. The American Society for Clinical Pathology suggests that the following tests be stopped:

  • routine pre-operative testing (typically a full blood count, prothrombin time and activated partial thromboplastin time, basic ‘metabolic panel’ and urinalysis) for low risk surgeries without a clinical indication;
  • bleeding time test;
  • population based screening for 25-OH-Vitamin D deficiency.

The Laboratory Medicine Best Practices initiative from the Division of Laboratory Science and Standards of the US Centers for Disease Control and Prevention summarises the best evidence in systematic reviews and meta-analyses on common laboratory medicine related topics. These include, for example, evidence reports on the effectiveness of barcoding in avoiding sample identification errors, effectiveness of practices for reducing blood culture contamination, effectiveness of various forms of electronic or verbal communications for reporting critical results, and effective practices to reduce haemolysis rates in emergency departments. The National Academy of Clinical Biochemistry (NACB) issues evidence- and consensus-based guidelines for the laboratory evaluation and monitoring of patients with specified disorders. The NACB involves clinicians and relevant professional and patient organisations in the process of guideline development and its recommendations are often jointly published with these organisations both in laboratory and clinical journals for wider dissemination (Table 2).

In the UK, the National Institute for Health and Care Excellence (NICE) issues guidelines and brief extracts for patients on recommended best practice for a wide range of conditions. It also operates a Diagnostics Assessment Programme which focuses on the evaluation of new diagnostic interventions in order to adopt technologies with proven clinical and cost-effectiveness rapidly and consistently (Table 2).

Improving Adoption and Adherence of EBLM

The above mentioned examples of professional and patient portals support awareness and contribute to better dissemination and acceptance of evidence-based practice recommendations. They may also assist in improving patient outcomes.43 The UK NICE Pathways offer an interactive online clinical algorithm that is linked to detailed evidence-based information from diagnosis to treatment options and provide implementation tools such as self-assessment questionnaires, costing models with templates, educational materials and slide sets for the dissemination of NICE guidance, as well as quality standards and clinical audit criteria and other related materials for monitoring the impact of implementation (Table 2). Guidelines supplemented with such practical tools are more likely to be adopted and adhered to. This is demonstrated by a postal survey carried out nearly 10 y after the implementation of the NICE guideline on preoperative testing.44 This post-implementation survey reported that low risk patients aged <40 y with no co-morbidities undergoing minor surgery did not have routine tests for full blood count, electrolytes and renal and pulmonary function.45

Clinical decision support interventions in Computerised Physician Order Entry (CPOE) systems can identify inappropriate requesting or inappropriate frequency of requesting a laboratory test, and generate electronic alerts to prompt physicians to better utilisation of laboratory services. Regulatory and reimbursement agencies also can, and in some settings do, penalise providers if a laboratory investigation is performed more frequently than the predefined retesting intervals. Laboratory professionals have long recognised the need for a clear definition of retesting intervals. Several projects in Australia, such as the RIO project in New South Wales and the AUSLAB Retest Interval Project in Queensland have addressed these issues. At the time of the study the latter projected savings of $10,000 per month for the pathology network by reducing unnecessary repeats of laboratory tests.46 The RIO project introduced a ‘traffic light’ system indicating the level of authority required for pathology test ordering (‘green’ – no restrictions, ‘amber’ – authorised by senior medical officer, and ‘red’ only orderable by consultant medical staff). This system, coupled with reminders and other implementation facilitators, helped reduce the number of inappropriate tests by over 25% and resulted in significant cost savings (Table 2). A recent study investigating retesting frequency of HbA1c monitoring in over 100,000 patients in the UK has found that only 49% of requests conformed to guidance; 21% were too frequent and 30% were too infrequent. They also found that under-requesting was more prevalent with potential consequences on longer-term health outcomes, and that publication of international and national guidance on diabetes management had no significant impact on under- or over-requesting rates.47
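A minimum retest interval check of the kind described above can be reduced to a simple rule lookup at order entry. The sketch below illustrates the idea; the test names, intervals and ‘traffic light’ authority levels are invented for illustration and do not reproduce the RIO or AUSLAB rules:

```python
from datetime import date

# Hypothetical minimum retest intervals and ordering-authority levels,
# loosely modelled on the 'traffic light' approach described above.
# Real rules would come from local clinical guidance, not from this sketch.
RULES = {
    "HbA1c":        {"min_interval_days": 90,   "authority": "green"},
    "Vitamin D":    {"min_interval_days": 365,  "authority": "amber"},
    "HFE genotype": {"min_interval_days": None, "authority": "red"},  # once per lifetime
}

def check_order(test, last_result_date, order_date):
    """Return an alert string if the order breaches the retest interval, else None."""
    rule = RULES.get(test)
    if rule is None or last_result_date is None:
        return None  # no rule, or no previous result: allow the order
    interval = rule["min_interval_days"]
    if interval is None:
        return f"{test}: repeat testing not indicated (previous result on file)"
    elapsed = (order_date - last_result_date).days
    if elapsed < interval:
        return (f"{test}: last tested {elapsed} days ago; "
                f"minimum retest interval is {interval} days")
    return None

# HbA1c ordered 22 days after the last result -> the alert fires.
alert = check_order("HbA1c", date(2013, 1, 10), date(2013, 2, 1))
print(alert)
```

In a real CPOE system the alert would typically be advisory for ‘green’ tests and escalate to a senior medical officer or consultant sign-off for ‘amber’ and ‘red’ tests, mirroring the graded authority described above.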

Computerised decision support tools in CPOE systems have been reviewed in a recent technology assessment report and were found to have a statistically significant benefit on both process- and practitioner performance-related outcomes in nearly two-thirds of the studies. However, only a small positive impact was observed on test ordering, and the evidence was insufficient to judge the cost-effectiveness of the reviewed systems or their impact on patient outcomes.48 Recent Cochrane reviews investigating the impact of electronic point-of-care reminders and paper-based reminders showed small to modest improvements in provider behaviour.49,50 Automated alerts or reminders for redundant test orders do not always improve adherence to best practice; they are often perceived as more of a nuisance than a help, and are overridden because of electronic alert fatigue or are simply ignored.51 Nevertheless, carefully designed prompts have been shown to reduce unnecessary testing.39,52 Physicians often resist changing or stopping certain test-requesting habits, even when a test almost never identifies anything useful, unless alternative and easily actionable options are provided at the same time. Electronic alerts coupled with an immediate link to more rational test-ordering options or user-friendly diagnostic algorithms may facilitate acceptance and can be quickly adopted by clinicians.

Simple measures, such as moving a test whose use we wish to discourage to the back of paper request forms or to the bottom of a scroll-down menu in CPOE platforms, can achieve surprisingly large impacts.53–56 Harmonisation of test profiles for common conditions57 and declining or vetting inappropriate requests before analysing the samples,58 coupled with regulated retesting intervals,46 are tools commonly employed by laboratories. However, their application depends on local regulation or legislation, and they can succeed only if they are adopted in consultation with clinical users.

Individually targeted postanalytical quality assurance surveys of general practitioners, coordinated by the Norwegian Quality Improvement of Laboratory Services in Primary Care (NOKLUS), use case-based scenarios to investigate appropriate test selection, test interpretation and medical decisions based on laboratory results in common medical conditions. Such interventions are also very efficient educational tools for disseminating and enforcing the use of evidence-based recommendations and for changing the practice and behaviour of physicians, especially when they are coupled with individually tailored feedback and benchmarking of responses against peers.59,60 Clinical audit of current test-requesting patterns and testing-related outcomes (e.g. how many patients on warfarin have the recommended frequency of testing, and how many have International Normalised Ratios (INRs) within therapeutic targets), coupled with individually tailored presentation of the findings, is a powerful tool for raising local awareness of inappropriate requesting patterns. Audits of this type, linked with educational interventions, posters on proposed changes in testing protocols widely disseminated to health care staff, and limits on the requesting authority of nurses and junior doctors to agreed test panels, produced sustainable improvements in test utilisation when practice was re-audited within 12 months in a UK hospital setting.61
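An audit metric such as the proportion of INRs within the therapeutic target is straightforward to compute once results have been extracted. A minimal sketch, assuming a hypothetical target range of 2.0–3.0 and invented data (targets vary by indication and local protocol):

```python
def proportion_in_range(inr_results, low=2.0, high=3.0):
    """Fraction of INR results within the therapeutic target range.

    The default range of 2.0-3.0 is an assumption for illustration;
    real targets vary by indication and local protocol.
    """
    if not inr_results:
        return 0.0
    in_range = sum(1 for r in inr_results if low <= r <= high)
    return in_range / len(inr_results)

# Hypothetical audit extract for one practice
results = [1.8, 2.4, 2.9, 3.6, 2.2, 2.7, 4.1, 2.5]
pct = proportion_in_range(results)  # 5 of 8 results in range, i.e. 0.625
```

Per-practice figures computed this way can then be benchmarked against peers when feeding results back, in the spirit of the NOKLUS-style surveys described above.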

A recent editorial by Fryer and Hanna emphasises the need for stronger liaison between laboratory professionals and clinical teams in order to review the real impact of testing on the patient pathway and to rationalise testing on the basis of measurable outcomes. Whilst acknowledging the difficulty of collecting such data, and that the link between testing and clinical outcomes is often indirect, they call for regular assessment of results that were: (a) not reviewed by the requestor or clinical team; (b) reviewed but without impact on clinical management; and (c) reviewed and followed by a change in management, but without effect on patient outcome. Such exchanges are often eye-opening to clinicians and may stimulate efforts to join forces in quality improvement projects.21 Crosstalk of this kind at the laboratory-clinical interface often results in the development of joint local protocols with which all stakeholders feel more closely associated, making adherence more likely and sustainable.

Towards Evidence-Based Laboratory Testing

As discussed above, a number of tools and approaches may facilitate evidence-based laboratory testing. Which to choose, and how the various options can best be combined, depends mostly on local settings and resources. A number of conclusions can be drawn from the vast literature on implementation science that can help the profession set priorities for getting research into practice.

Clearly our profession needs to be engaged in better diagnostic research and to contribute to more reliable studies that will provide the right ‘soap’ which users are willing to ‘buy’ and ‘wash’ with. Better evidence on the clinical effectiveness of laboratory tests will ultimately lead to better clinical guidelines on the rational utilisation of laboratory tests and new emerging biomarkers. Paradoxically, the proliferation of guidelines has created a new gap, between their development and their use, which presents new challenges to both guideline developers and users of evidence-based recommendations.

Since the quality and reliability of guidelines have been challenged, it is recommended that, before any evidence or recommendations are put into local practice, laboratory teams, together with clinicians, critically assess the quality, content validity, transferability and implementability of the evidence and of evidence-based guideline recommendations. The Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument is a useful critical appraisal tool for assessing the methodological rigour and transparency of guideline development,62 although it is not suitable for assessing the validity of the clinical content of a guideline or the quality of the evidence that underpins its recommendations. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach has been designed to assess and rate the quality of the evidence in individual studies and the strength of the body of evidence across studies, as well as to grade the overall strength of guideline recommendations.63 Grading the quality and strength of evidence and recommendations increases transparency and validity, and facilitates the acceptability and adoption of guidelines. Implementability also needs to be considered, for example with the help of the GuideLine Implementability Appraisal (GLIA) instrument, which explores each recommendation’s executability, decidability, validity, flexibility, effect on process of care, measurability, novelty/innovation and computability.64,65

At the user level, variability in the uptake and implementation of guideline recommendations is a well-known phenomenon, leading to confusion and unnecessary practice variation. This is partly because the same evidence is sometimes interpreted differently, leading to inconsistent recommendations from different guideline agencies. For example, about 10 years ago various organisations proposed different troponin cut-offs for diagnosing acute myocardial infarction (AMI). These included a cut-off point established by Receiver Operating Characteristic (ROC) analysis, the 97.5th or the 99th percentile of a healthy population, and the arbitrarily chosen concentration at which a 10% or 20% CV could be achieved with the troponin assay in the laboratory. A survey of Australian laboratories in 2003 revealed that 67% of laboratories reporting troponin T determined a cut-off for AMI by ROC analysis; 7% and 1% based the cut-off value on a 10% CV or 20% CV, respectively; and only 24% used the recommended 99th percentile concentration for a reference population.66 Such heterogeneity in practice is not just confusing to clinicians but may lead to misdiagnosis and harm to patients. The universal definition of AMI and authoritative guidelines refer to a rising and/or falling pattern of cardiac troponin with at least one result above the 99th percentile of a reference population. This definition has been adopted in many international and national guidelines, irrespective of the generation of the troponin assay and the type of molecule measured, which presents a significant challenge to laboratories and clinicians alike.
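The contrast between these cut-off strategies can be made concrete. The sketch below derives a 99th-percentile upper reference limit from a synthetic ‘healthy’ population; the data are invented, and establishing a real reference limit requires a carefully characterised reference cohort and assay-specific validation.

```python
import random
import statistics

# Synthetic 'healthy' reference values (arbitrary lognormal numbers
# standing in for troponin concentrations; purely illustrative).
random.seed(1)
reference = [random.lognormvariate(1.0, 0.5) for _ in range(500)]

# statistics.quantiles(..., n=100) returns the 99 cut points at the
# 1st-99th percentiles; index 98 is the 99th percentile, i.e. the upper
# reference limit used in the universal definition of AMI.
p99 = statistics.quantiles(reference, n=100)[98]

# By contrast, a ROC-derived cut-off is chosen to maximise diagnostic
# accuracy against a clinical outcome, and a 10% or 20% CV cut-off is a
# property of the assay's imprecision profile, not of the population.
```

The point of the sketch is that the three strategies answer different questions, so they can yield very different cut-offs from the same assay and population, which is exactly the heterogeneity the 2003 survey documented.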
When a laboratory assay becomes part of the universal definition of a disease, much stronger evidence, appropriately designed diagnostic accuracy and clinical outcome studies, and the harmonisation or standardisation of the assays should be achieved before recommendations can be safely introduced into practice.67 In the absence of these, laboratory professionals have a responsibility to monitor closely the clinical performance of sometimes prematurely released biomarkers and to educate their clinical users on the characteristics and limitations of the assays they provide for critical diagnoses. This information should also be fed back to manufacturers and guideline developers so that recommendations can be further refined as more evidence is gathered on the real-life clinical effectiveness of test-treatment pathways and their impact on patient outcomes.

Some positive examples, widely adopted in laboratory and clinical practice, demonstrate that evidence-based laboratory practice may contribute to improved patient outcomes. For example, HbA1c monitoring of diabetic patients with standardised assays that provide comparable results, aimed at safe treatment targets for various subsets of patients established with data from large randomised controlled trials, has been associated with improved clinical outcomes.68 The wide adoption of colorectal cancer screening programs has been shown in a Cochrane review to achieve a 16% reduction in the relative risk of colorectal cancer mortality, with no difference in all-cause mortality. After adjustment for screening attendance, the relative risk reduction increased to 25%, highlighting important psycho-social barriers to the uptake of widely promoted national screening programs already backed by strong evidence of effectiveness and cost-effectiveness.69 A recent systematic review and meta-analysis of individual patient data with 5 years of follow-up found that self-monitoring and self-adjustment of warfarin therapy using point-of-care INR testing by patients was more effective in reducing thromboembolic events than usual care with laboratory-based monitoring and office-based dose adjustment, and no worse in terms of major haemorrhagic events or death. Participants aged <55 y and those with mechanical heart valves showed greater reductions in thromboembolic events than patients aged >55 y, those assigned to self-testing but with clinical dosing, and those with atrial fibrillation.70
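A relative risk reduction such as the 16% quoted above is simple event-rate arithmetic. The sketch below uses invented counts chosen only to reproduce that figure; they are not data from the review.

```python
def relative_risk_reduction(events_screened, n_screened,
                            events_control, n_control):
    """RRR = 1 - (event rate in the screened arm / event rate in controls)."""
    risk_screened = events_screened / n_screened
    risk_control = events_control / n_control
    return 1 - risk_screened / risk_control

# Hypothetical counts: 84 colorectal cancer deaths per 10,000 screened
# versus 100 per 10,000 controls gives a relative risk of 0.84,
# i.e. a 16% relative risk reduction.
rrr = relative_risk_reduction(84, 10_000, 100, 10_000)
```

The adjustment for attendance works on the same quantities: restricting the screened-arm rate to those who actually attended lowers it further, which is why the adjusted relative risk reduction rises towards 25%.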

Point-of-care INR monitoring and the colorectal cancer screening program are good examples highlighting that even strong evidence is not always generalisable or implementable. Apart from assessing the validity of evidence and guideline recommendations (i.e. is it the right ‘soap’?), implementation teams should always analyse the local setting and patient context (i.e. is it the right ‘soap’ for the right patient?) and identify the barriers to and drivers of implementation (i.e. can the right ‘soap’ for the right patient be used in my local health care environment?). Teams should design a strategy that is feasible and has the potential to achieve the greatest and longest-lasting impact in improving the quality, effectiveness and cost-effectiveness of current practice. During this planning and design phase, due consideration should be given to local characteristics of critical importance, including legislation, regulatory and policy affairs, the specific local health care setting, availability of and access to human and other resources, organisational culture, information technology capability, patient preferences, and behavioural, legal or social issues. While designing an implementation plan it should be borne in mind that there are no ‘magic bullets’; simple but multifaceted interventions using electronic or other decision support tools, linked to education and regular audit and feedback on outcomes, and developed and agreed by all stakeholders, have the greatest potential for success. It is also important to recognise that, at the level of the local health care organisation, quality management system changes need to be put in place to ensure that behavioural and organisational changes remain sustainable. Financial incentives and disincentives need to be planned carefully, keeping in mind that neither the quality nor the health outcomes and impacts of pathology services on patients can be compromised.
It is a moral and social imperative for laboratory professionals and all health care staff to improve the effectiveness and efficiency of laboratory service utilisation. Otherwise, quick political decisions and simple policy measures such as cost cutting may be imposed upon laboratories, diverting precious resources that, through the savings achieved by rationalisation, could otherwise support the better management of undiagnosed, under-diagnosed or misdiagnosed conditions.

Laboratory professionals, like all health care professionals, face major challenges in the coming decades, with ageing populations living longer with multiple chronic conditions, and need to keep pace with rapid advances in medical technology. To prepare future generations of laboratory medicine specialists, education and training should also incorporate the practice of state-of-the-art evidence-based laboratory medicine and clinical research. Training should equip laboratory staff with evidence-based knowledge and tools so that laboratory professionals can expand their consultative role at the laboratory-clinical interface for the benefit of better patient care and outcomes.


Some aspects of this review have been discussed in and influenced by the work of the Test Evaluation Working Group of the European Federation of Clinical Chemistry and Laboratory Medicine. Roche Diagnostics is thanked for supporting the Working Group with an independent educational grant to the European Federation of Clinical Chemistry and Laboratory Medicine.


Competing Interests: None declared.


1. Ferrante di Ruffano L, Hyde CJ, McCaffery KJ, Bossuyt PM, Deeks JJ. Assessing the value of diagnostic tests: a framework for designing and evaluating trials. BMJ. 2012;344:e686. [PubMed]
2. Stillman AE, Woodard PK, RESCUE Investigators Consequence of overuse of invasive coronary angiography. Arch Intern Med. 2011;171:709. author reply 709–10. [PubMed]
3. Gray JAM. Evidence-Based Health Care: How to Make Health Policy and Management Decisions. London: Churchill Livingstone; 1997.
4. Ioannidis JP. Biomarker failures. Clin Chem. 2013;59:202–4. [PubMed]
5. Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003;10:523–30. [PMC free article] [PubMed]
6. Munro J, Booth A, Nicholl J. Routine preoperative testing: a systematic review of the evidence. Health Technol Assess. 1997;1:i–iv. 1–62. [PubMed]
7. Prochazka AV, Caverly T. General health checks in adults for reducing morbidity and mortality from disease: summary review of primary findings and conclusions. JAMA Intern Med. 2013;173:371–2. [PubMed]
8. Kroenke K. Diagnostic testing and the illusory reassurance of normal results: comment on “Reassurance after diagnostic testing with a low pretest probability of serious disease” JAMA Intern Med. 2013;173:416–7. [PubMed]
9. Brown RD, Langshaw MR, Uhr EJ, Gibson JN, Joshua DE. The impact of mandatory fortification of flour with folic acid on the blood folate levels of an Australian population. Med J Aust. 2011;194:65–7. [PubMed]
10. Australian Government Department of Health and Ageing. MBS Review: Vitamin B12/Folate Testing Protocol. Jan, 2013.
11. Moynihan R, Doust J, Henry D. Preventing overdiagnosis: how to stop harming the healthy. BMJ. 2012;344:e3502. [PubMed]
12. Winearls CG, Glassock RJ. Classification of chronic kidney disease in the elderly: pitfalls and errors. Nephron Clin Pract. 2011;119(Suppl 1):c2–4. [PubMed]
13. Sandhu GS, Andriole GL. Overdiagnosis of prostate cancer. J Natl Cancer Inst Monogr. 2012;2012:146–51. [PMC free article] [PubMed]
14. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635–45. [PubMed]
15. Marcelino JJ, Feingold KR. Inadequate treatment with HMG-CoA reductase inhibitors by health care providers. Am J Med. 1996;100:605–10. [PubMed]
16. Hin H, Bird G, Fisher P, Mahy N, Jewell D. Coeliac disease in primary care: case finding study. BMJ. 1999;318:164–7. [PMC free article] [PubMed]
17. Vilppula A, Collin P, Mäki M, Valve R, Luostarinen M, Krekelä I, et al. Undetected coeliac disease in the elderly: a biopsy-proven population-based study. Dig Liver Dis. 2008;40:809–13. [PubMed]
18. Neil HA, Hammond T, Huxley R, Matthews DR, Humphries SE. Extent of underdiagnosis of familial hypercholesterolaemia in routine practice: prospective registry study. BMJ. 2000;321:148. [PMC free article] [PubMed]
19. Frost M, Wraae K, Gudex C, Nielsen T, Brixen K, Hagen C, et al. Chronic diseases in elderly men: underreporting and underdiagnosis. Age Ageing. 2012;41:177–83. [PubMed]
20. Scherer LD, Zikmund-Fisher BJ, Fagerlin A, Tarini BA. Influence of “GERD” label on parents’ decision to medicate infants. Pediatrics. 2013;131:839–45. [PMC free article] [PubMed]
21. Fryer AA, Hanna FW. Managing demand for pathology tests: financial imperative or duty of care? Ann Clin Biochem. 2009;46:435–7. [PubMed]
22. Lijmer JG, Mol BW, Heisterkamp S, Bonsel GJ, Prins MH, van der Meulen JH, et al. Empirical evidence of design-related bias in studies of diagnostic tests. JAMA. 1999;282:1061–6. [PubMed]
23. Tzoulaki I, Siontis KC, Evangelou E, Ioannidis JP. Bias in associations of emerging biomarkers with cardiovascular disease. JAMA Intern Med. 2013;173:664–71. [PubMed]
24. Svensson S, Menkes DB, Lexchin J. Surrogate outcomes in clinical trials: a cautionary tale. JAMA Intern Med. 2013;173:611–2. [PubMed]
25. Mickan S, Burls A, Glasziou P. Patterns of ‘leakage’ in the utilisation of clinical guidelines: a systematic review. Postgrad Med J. 2011;87:670–9. [PMC free article] [PubMed]
26. Schleifer D, Rothman DJ. “The ultimate decision is yours”: exploring patients’ attitudes about the overuse of medical interventions. PLoS One. 2012;7:e52552. [PMC free article] [PubMed]
27. Rolfe A, Burton C. Reassurance after diagnostic testing with a low pretest probability of serious disease: systematic review and meta-analysis. JAMA Intern Med. 2013;173:407–16. [PubMed]
28. Redberg R, Katz M, Grady D. Diagnostic tests: another frontier for less is more: or why talking to your patient is a safe and effective method of reassurance. Arch Intern Med. 2011;171:619. [PubMed]
29. Power M, Fell G, Wright M. Principles for high-quality, high-value testing. Evid Based Med. 2013;18:5–10. [PMC free article] [PubMed]
30. Wise J. Boldt: the great pretender. BMJ. 2013;346:f1738. [PubMed]
31. Field MJ, Lohr KN, editors. Guidelines for Clinical Practice: From Development to Use. Institute of Medicine. Washington DC: National Academy Press; 1992.
32. Fixsen DL, Naoom SF, Blasé KA, Friedman RM, Wallace F. Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005. FMHI Publication #231.
33. Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implement Sci. 2013;8:22. [PMC free article] [PubMed]
34. Cockburn J. Adoption of evidence into practice: can change be sustainable? Med J Aust. 2004;180(Suppl):S66–7. [PubMed]
35. Green LA, Seifert CM. Translation of research into practice: why we can’t “just do it” J Am Board Fam Pract. 2005;18:541–5. [PubMed]
36. National Health and Medical Research Council. A guide to the development, implementation and evaluation of clinical practice guidelines. Canberra: NHMRC, Commonwealth of Australia; 1999.
37. National Health and Medical Research Council. How to put the evidence into practice: implementation and dissemination strategies. Handbook series on preparing clinical practice guidelines. Canberra: NHMRC, Commonwealth of Australia; 2000.
38. Kiefe CI, Sales A. A state-of-the-art conference on implementing evidence in health care. Reasons and recommendations (Editorial) J Gen Intern Med. 2006;21(Suppl 2):S67–70. [PMC free article] [PubMed]
39. Janssens PM. Managing the demand for laboratory testing: options and opportunities. Clin Chim Acta. 2010;411:1596–602. [PubMed]
40. Scott A, Sivey P, Ait Ouakrim D, Willenberg L, Naccarella L, Furler J, et al. The effect of financial incentives on the quality of health care provided by primary care physicians. Cochrane Database Syst Rev. 2011;(9):CD008451. [PubMed]
41. Guideline on Pathology Testing in the Emergency Department. Australasian College for Emergency Medicine Guideline G125. Royal College of Pathologists of Australasia Document 1/2013. (Accessed 14 May 2013)
42. National Coalition of Public Pathology. Encouraging Quality Pathology Ordering in Australia’s Public Hospitals. Final Report, February 2012. (Accessed 14 May 2013)
43. Osborn CY, Mayberry LS, Mulvaney SA, Hess R. Patient web portals to improve diabetes outcomes: a systematic review. Curr Diab Rep. 2010;10:422–35. [PMC free article] [PubMed]
44. National Collaborating Centre for Acute Care. The use of routine preoperative tests for elective surgery. Clinical Guideline 3. National Institute for Health and Clinical Excellence. 2003. (Accessed 14 May 2013)
45. Czoski-Murray C, Lloyd Jones M, McCabe C, Claxton K, Oluboyede Y, Roberts J, et al. What is the value of routinely testing full blood count, electrolytes and urea, and pulmonary function tests before elective surgery in patients with no apparent clinical indication and in subgroups of patients with common comorbidities: a systematic review of the clinical and cost-effective literature. Health Technol Assess. 2012;16:i–xvi. 1–159. [PubMed]
46. Queensland Health Pathology and Scientific Services. AUSLAB Retest Interval Project. Final Project Report, December 2004.
47. Driskell OJ, Holland D, Hanna FW, Jones PW, Pemberton RJ, Tran M, et al. Inappropriate requesting of glycated hemoglobin (Hb A1c) is widespread: assessment of prevalence, impact of national guidance, and practice-to-practice variability. Clin Chem. 2012;58:906–15. [PubMed]
48. Main C, Moxham T, Wyatt JC, Kay J, Anderson R, Stein K. Computerised decision support systems in order communication for diagnostic, screening or monitoring test ordering: systematic reviews of the effects and cost-effectiveness of systems. Health Technol Assess. 2010;14:1–227. [PubMed]
49. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009;3:CD001096. [PubMed]
50. Arditi C, Rège-Walther M, Wyatt JC, Durieux P, Burnand B. Computer-generated reminders delivered on paper to healthcare professionals: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2012;12:CD001175. [PubMed]
51. Matheny ME, Sequist TD, Seger AC, Fiskio JM, Sperling M, Bugbee D, et al. A randomized trial of electronic clinical reminders to improve medication laboratory monitoring. J Am Med Inform Assoc. 2008;15:424–9. [PMC free article] [PubMed]
52. Levick DL, Stern G, Meyerhoefer CD, Levick A, Pucklavage D. “Reducing unnecessary testing in a CPOE system through implementation of a targeted CDS intervention” BMC Med Inform Decis Mak. 2013;13:43. [PMC free article] [PubMed]
53. Zaat JO, van Eijk JT, Bonte HA. Laboratory test form design influences test ordering by general practitioners in The Netherlands. Med Care. 1992;30:189–98. [PubMed]
54. van Walraven C, Goel V, Chan B. Effect of population-based interventions on laboratory utilization: a time-series analysis. JAMA. 1998;280:2028–33. [PubMed]
55. Barth JH, Balen AH, Jennings A. Appropriate design of biochemistry request cards can promote the use of protocols and reduce unnecessary investigations. Ann Clin Biochem. 2001;38:714–6. [PubMed]
56. Bailey J, Jennings A, Parapia L. Change of pathology request forms can reduce unwanted requests and tests. J Clin Pathol. 2005;58:853–5. [PMC free article] [PubMed]
57. Smellie WS; Association for Clinical Biochemistry’s Clinical Practice Section. Time to harmonise common laboratory test profiles. BMJ. 2012;344:e1169. [PubMed]
58. Fryer AA, Smellie WS. Managing demand for laboratory tests: a laboratory toolkit. J Clin Pathol. 2013;66:62–72. [PubMed]
59. Skeie S, Perich C, Ricos C, Araczki A, Horvath AR, Oosterhuis WP, et al. Postanalytical external quality assessment of blood glucose and hemoglobin A1c: an international survey. Clin Chem. 2005;51:1145–53. [PubMed]
60. Aakre KM, Thue G, Subramaniam-Haavik S, Bukve T, Morris H, Müller M, et al. Postanalytical external quality assessment of urine albumin in primary health care: an international survey. Clin Chem. 2008;54:1630–6. [PubMed]
61. Willis EA, Datta BN. Effect of an educational intervention on requesting behaviour by a medical admission unit. Ann Clin Biochem. 2013;50:166–8. [PubMed]
62. The AGREE Next Steps Consortium. Appraisal of Guidelines for Research & Evaluation II. AGREE II instrument. May, 2009. (Accessed 14 May 2013)
63. Hsu J, Brożek JL, Terracciano L, Kreis J, Compalati E, Stein AT, et al. Application of GRADE: making evidence-based recommendations about diagnostic tests in clinical practice guidelines. Implement Sci. 2011;6:62. [PMC free article] [PubMed]
64. National Institute of Clinical Studies. Assessing the implementability of guidelines. Melbourne: NICS; 2006.
65. Shiffman RN, Dixon J, Brandt C, Essaihi A, Hsiao A, Michel G, et al. The GuideLine Implementability Appraisal (GLIA): development of an instrument to identify obstacles to guideline implementation. BMC Med Inform Decis Mak. 2005;5:23. [PMC free article] [PubMed]
66. Davey RX. Troponin reporting in Australia in 2003. Ann Clin Biochem. 2005;42:61–3. [PubMed]
67. Ungerer JP, Marquart L, O’Rourke PK, Wilgen U, Pretorius CJ. Concordance, variance, and outliers in 4 contemporary cardiac troponin assays: implications for harmonization. Clin Chem. 2012;58:274–83. [PubMed]
68. Cheung NW, Conn JJ, d’Emden MC, Gunton JE, Jenkins AJ, Ross GP, et al. Australian Diabetes Society Position Statement: Individualization of HbA1c targets for adults with diabetes mellitus. Sep, 2009. (Accessed 9 July 2013)
69. Hewitson P, Glasziou P, Watson E, Towler B, Irwig L. Cochrane systematic review of colorectal cancer screening using the fecal occult blood test (hemoccult): an update. Am J Gastroenterol. 2008;103:1541–9. [PubMed]
70. Heneghan C, Ward A, Perera R, Bankhead C, Fuller A, Stevens R, et al.; Self-Monitoring Trialist Collaboration. Self-monitoring of oral anticoagulation: systematic review and meta-analysis of individual patient data. Lancet. 2012;379:322–34. [PubMed]

Articles from The Clinical Biochemist Reviews are provided here courtesy of The Australian Association of Clinical Biochemists