Laboratory tests offer value if they provide benefit to patients at acceptable costs. Laboratory testing is one of the most widely used diagnostic interventions supporting medical decisions, yet evidence demonstrating its value and impact on health outcomes is limited. This contributes to wide variations in test utilisation, including underdiagnosis, overdiagnosis and misdiagnosis, which may compromise the quality, clinical effectiveness and cost-effectiveness of care, and patient safety. Implementing evidence into the care of patients is therefore a moral and social imperative for laboratory professionals and all health care staff.
This review investigates the reasons research does not get into practice, or does so only after a very long delay. Apart from reviewing the common barriers to implementation, it also discusses the drivers of inappropriate test utilisation. By reviewing the theoretical and practical aspects of implementation science, recommendations are made for approaches that are thought to be most effective and that can be adopted to close the gap between evidence and practice, and to facilitate evidence-based laboratory medicine. Passive dissemination of the evidence and educational interventions are insufficient and do not offer sustainable solutions. A multifaceted and individualised implementation strategy, including individually tailored academic detailing, reminder systems, clinical decision support systems, feedback on performance, and participation of doctors and laboratory professionals in quality improvement activities addressing test selection and interpretation and in clinical audits, has greater potential for success. Examples of these initiatives at the laboratory and clinical interface are provided with links to valuable resources.
‘Knowing is not enough; we must apply.
Willing is not enough; we must do.’ JW von Goethe
Medical tests impact patient health by influencing clinical management decisions on what treatment options are selected, when these are administered and how such interventions are followed up to assess response and adequacy of treatment. Medical tests may modify patient perceptions and behaviour, and apart from their intended contribution to better management of conditions, they may also put patients at risk of harm.1,2 Evidence-based laboratory medicine (EBLM) assists clinical management of patients by integrating into clinical decision making the best available research evidence for the use of laboratory investigations with the clinical expertise of the physician and the needs, expectations and concerns of the patients, in order to improve the care and outcomes of individual patients and the effective use of healthcare resources. In other words, the aims of EBLM are to improve the value and impact of laboratory testing on health and health care delivery. Laboratory tests offer value only if they are: clinically valid i.e. they provide highly accurate diagnostic or prognostic information for clinical decision making; clinically effective i.e. they contribute to improved patient-centred outcomes; and cost effective i.e. they contribute to reduced health care costs. In brief, laboratory tests have value if they provide benefit to patients at acceptable costs.
How can these aims be achieved and what is the role of laboratory professionals in getting evidence into practice? Our responsibilities are best summed up by Muir Gray, author of the highly acclaimed book Evidence-Based Health Care: (i) eliminate poor or useless tests before they become widely available i.e. stop starting; (ii) remove old tests with no proven benefit or, in fact, harm from the laboratory’s repertoire i.e. start stopping; (iii) introduce new tests if evidence proves their efficacy and effectiveness i.e. start starting or stop stopping.3 Whilst the aims are clear and laboratory professionals, clinicians, patients, policymakers and industry generally endorse such principles, practice data often show the opposite or, at best, wide variations in test utilisation. The widening gap between evidence and practice leads to various undesirable scenarios such as underutilisation, overutilisation or inappropriate utilisation of medical tests, which may contribute to underdiagnosis, overdiagnosis or misdiagnosis of patients, with potentially harmful health and other consequences for both patients and society.
With the rapid growth of proteomic and genetic research, many new potential biomarkers are being developed and may reach the market before their value is appropriately investigated and systematically proven in clinical research and practice. As research trial methods for medical test evaluation are still being debated and are under development, a number of laboratory tests have gained widespread use in the last decades which were subsequently shown not to have any clinical value, or even to cause harm, and which would not gain approval if they were submitted to the market today.4 It is also common sense that once a test becomes available or easily accessible, and especially if it is heavily promoted, it will be used. Patients’ access to health information via the world-wide web also increases demand for testing. Electronic order systems allowing ‘one-click’ requesting of test panels enforce certain testing patterns which then become habits that are very hard to change. Inappropriate test utilisation, due to wrong indication for or timing of the test, is also a well-known phenomenon and may provide misleading information for selection of further therapeutic interventions or for drug dosing decisions.5
Examples of overutilisation include:
Advances in imaging and laboratory techniques allow the detection of early pathophysiological changes, many of which are incidental findings of no clinical significance and warranting no medical intervention.11 Overdiagnosis of such ‘pseudo-diseases’ may lead to an array of further unnecessary investigations and thus costs, not to mention the psychological and psychosomatic consequences for the patients and their families. For example, estimated glomerular filtration rate (eGFR) below 60 mL/min/1.73m2 is common in elderly people. According to a recent study, this criterion may potentially lead to overdiagnosis in up to one-third of this population, whilst the incidence of end stage renal disease is less than 1 in 1000 per annum.12 Overdiagnosis of prostate cancer due to prostate-specific antigen screening of the general population is responsible for the annual incidence to mortality ratio for prostate cancer of approximately 6 to 1.13 More examples and commentaries for overdiagnosis from experts in evidence-based medicine can be found at http://theconversation.com/ending-over-diagnosis-how-to-help-without-harming-9633.
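To make the eGFR example concrete, the commonly applied threshold can be sketched with the 2009 CKD-EPI creatinine equation (shown here without the race coefficient; an illustrative simplification only, not for clinical use):

```python
def egfr_ckd_epi(creatinine_mg_dl: float, age: float, female: bool) -> float:
    """Estimated GFR (mL/min/1.73 m2) by the 2009 CKD-EPI creatinine
    equation, omitting the race coefficient. Illustrative only."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = creatinine_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

def meets_ckd_criterion(egfr: float) -> bool:
    """The widely used eGFR < 60 threshold which, applied alone in
    elderly people, risks labelling age-related decline as disease."""
    return egfr < 60.0
```

For instance, an 80-year-old man with a serum creatinine of 1.4 mg/dL falls below the 60 mL/min/1.73m2 threshold, although such values are common at that age; this is precisely the overdiagnosis concern raised above.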
In contrast, it has also been shown that useful tests are sometimes underutilised, and it may take up to two decades for such tests to be put into routine clinical practice. This delay in the translation of credible research findings into clinical practice remains a substantial obstacle to improving the quality and effectiveness of health care interventions. For example, a survey of over 10,000 patients in the US has shown that they received only 55% of recommended care for both acute and chronic conditions. Diagnostic indicators of care were also met in only 56% of the cases.14 An earlier study has also shown that low-density lipoprotein cholesterol targets set by the National Cholesterol Education Program recommendations were achieved in only one-third of patients.15
Further examples for underutilisation and underdiagnosis include:
The consequences of under- and overdiagnosis have become central in recent discussions.11 Whilst numerous examples have been published about the health and other harms of inappropriate test utilisation, it is alarming to see how inappropriate diagnostic labels change the perceptions and choices of patients and their families. A recent study on the overdiagnosis of gastroesophageal reflux in children has shown that parents whose children received this diagnosis were more willing to accept medical interventions than parents whose children had the same symptoms but were not given a disease label, even though both groups were told that the medications were likely ineffective.20 These findings confirm that inappropriate disease labels are associated with unnecessary overtreatment and cause anxiety and potential other harms. Cost implications of inappropriate testing are also well known. For example, a recent British study has estimated that eliminating inappropriate testing could save the National Health Service up to £1 billion in test costs alone.21 These problems and perceptions call for a more systematic approach and a concerted effort by all stakeholders targeting both health care professionals and patients in order to ensure that good research evidence is implemented without much delay into practice (stop stopping – start starting) and inappropriate testing practices are discontinued (start stopping) or never get implemented (stop starting).
Before embarking on the review of potential solutions, the reasons research does not get into practice, or does so only after a very long delay, need to be investigated. The approaches that have been shown to be effective and can be adopted to close the gap between evidence and practice, and facilitate a better and faster dissemination of evidence-based laboratory test utilisation, will be discussed. Finally we will also discuss how the aims and objectives of EBLM can be achieved and what the laboratory profession and other key stakeholders need to do to improve the clinical value and impact of laboratory testing.
Joseph Schumpeter, the Austrian economist, concluded that ‘It was not enough to produce satisfactory soap, it was also necessary to induce people to wash’. So why do people ‘not wash’? What makes it so hard to do something for which we already have some research data (i.e. the ‘soap’)?
Is it because the ‘soap’ itself may not be satisfactory? In other words, could it be that the evidence itself is:
Alternatively, is it because even though the ‘soap’ is fine, there is something wrong with its user? Perhaps the user:
Apart from the above mentioned examples of common barriers to implementation, a number of drivers also fuel inappropriate test utilisation. The list of such drivers is extensive and the summary below may not be exhaustive:
Field and Lohr have pointed out that ‘guidelines do not implement themselves’.31 Or, in Goethe’s words, ‘Knowing is not enough; we must apply. Willing is not enough; we must do.’ Even though no one would possibly argue with these statements, it has been widely acknowledged that major gaps exist between what is known as effective practice (i.e. theory and scientific evidence) and what is actually done (i.e. policy and practice).32 Even a good ‘soap’ can be utilised wrongly if implementation of appropriate use fails and, vice versa, a faulty ‘soap’ can just as well be implemented very efficiently. Only the right introduction of the right ‘soap’ to the right customer will achieve the desired positive outcomes of service.3 Indisputably, this calls for objective measurement of both intervention and implementation outcomes.
Implementation science experts hold the view that a wide array of multi-level variables need to be considered for the successful implementation of evidence-based health innovations. A recent systematic review groups these into causal factors, such as structural, organisational, provider, patient, and innovation level measures, and into implementation outcomes that the analysis of causal factors is able to predict.33 The structural measures could represent aspects of the local health care environment, including physical infrastructure and human resources, political, social and health policy aspects, and economic climate. The organisational level encompasses aspects of the organisation’s management culture and strategic vision and values including commitment to high quality and evidence-based service, employee morale and customer satisfaction. The provider level constructs consider aspects of clinical staff’s and patients’ attitudes and incorporate aspects of behavioural change for those who are ultimately responsible for implementing the innovation. The innovation level includes analysis of the benefits of introducing a new health technology into existing clinical pathways and reviewing the efficacy of such changes. The patient-related characteristics include considerations of the patients’ perspectives, health-relevant beliefs, motivation, and personality traits that can impact implementation outcomes. It is important to highlight that these factors are interrelated. For example, changes at the organisational level can facilitate changes at provider level and affect individual behaviours both at health care staff and patient level.34
According to the model of Chaudoir and colleagues, implementation outcomes are classified as adoption, fidelity/adherence, implementation cost, penetration, and sustainability.33 Before conceptualising and discussing the measurement of implementation outcomes we need to investigate the process of translating the evidence into practice which is largely affected by the above causal factors. The process is best described as behavioural change management and starts with acknowledgement that there is a problem and recognition of the need for a change. The next steps are exploration and search for a solution and proceed through awareness and acceptance to adoption and adherence. The latter two stages of the implementation process involve program installation, initial implementation, full operation, sustainability and innovation linked to continuous monitoring of impact and improvement of the quality and effectiveness of care.32 The above highlight the need for a system approach and the collaboration of all stakeholders if successful implementation of evidence-based best practice is to be achieved.
There is good evidence in the literature about which approaches do not work, and reasonable evidence about which do. Educational methods, such as disseminating practice guidelines and continuing medical education, are useful tools for stimulating awareness (Table 1).35 However, most literature on implementation science reiterates, and a systematic review by Mickan et al. has convincingly demonstrated, that the production, active dissemination and even acceptance of the evidence do not guarantee that evidence-based recommendations for best practice are adopted and adhered to.25,36 This study showed that both adoption and adherence were affected by provider and organisational level constructs.25 For example, specialists with higher skills and experience or working in large hospitals with better equipment and staff resources were more likely to adopt and adhere to recommendations than general practitioners working single-handed or in small group practices and with less advanced technologies or support staff. Laboratories therefore may need to develop different implementation strategies for their hospital and general practitioner clients. The study also found that national or regional recommendations developed by credible specialty organisations were more likely to be accepted and adopted than global or other international guidelines, which are more remote or culturally and organisationally different from the stakeholder professions or place of implementation; this is an argument for the joint development of laboratory medicine-specific clinical recommendations.25 This study also showed that informed patients can influence adherence to best practice, which again calls for guideline implementation strategies that provide and apply patient information and empowerment tools.
Understanding better the causes for leakage along this awareness–acceptance–adoption–adherence pipeline is essential for designing the best strategies for implementation of evidence-based recommendations. Targeted dissemination of guidelines to less experienced clinicians and to clinicians working in small centres using expert outreach visits, presentations and recommendations by influential peers or professional bodies in prestigious conferences or journals may facilitate acceptance of the value of change by the targeted individuals and their organisations (Table 1). Clear and consistent laboratory testing-related guidelines, conceived in collaboration with clinical specialists and which are pilot tested and adapted to local settings and equipped with tools and resources for monitoring, achieve higher success with adoption and adherence (Table 1).
Implementing evidence-based recommendations is a complex process which involves behavioural, organisational and policy approaches and a combination of various tools that facilitate best practice and change in existing behaviour. Guidelines, policies, educational information or training alone are not effective and may have a very short lifespan in terms of adherence. Sustainability of change presents the biggest challenge. Long-term multifaceted implementation strategies that address all stakeholders simultaneously and quality systems that employ regular reminders and feedback on performance compared to peers are more effective. Whilst there are no standard recipes for success, the strategies and approaches presented in Table 1, and especially their various combinations, have been shown to be effective for getting evidence into practice in some but not all circumstances.37
Evidence-based practice requires the whole organisation, not just the laboratory personnel and individual physicians, to be ready for the change. Policy changes that require top-level commitment and change at organisational or management level may help to institutionalise the change and ensure that evidence-based practice becomes routine and part of the culture of organisational operations. This could be achieved by (Table 1):33,34,38
Performance payment and financial incentives are becoming popular at regulatory level and seem to work in some settings.39 However, a recent Cochrane review concluded that the currently available body of evidence is insufficient to show that they improve the quality of primary health care, or that such approaches are cost effective relative to other means of quality improvement. Current evidence therefore suggests only that financial incentive schemes should be carefully designed and evaluated.40
Informed health care professionals as well as patients influence the uptake of evidence-based best practice recommendations. A good example for patient information and empowerment tools is the LabTests Online portal which is available in locally adapted versions and in many different languages (Table 2). In Australia, patient information factsheets advising the public on the risks and benefits of pathology testing, including information on direct-to-consumer genetic testing, are also available from the Royal College of Pathologists of Australasia website and have been disseminated to general practitioners (Table 2). The College, in collaboration with relevant clinical colleges, also provides evidence- and consensus-based guidance on pathology testing. For example, a recent addition to this resource makes recommendations for appropriate laboratory test selection and for reducing pre-analytical errors in emergency care, and defines key elements of service level agreements including the management of point of care testing.41 Common Sense Pathology, published as a supplement of the Australian Doctor magazine, primarily aims at the quality use of pathology by general practitioners across Australia. It presents typical case scenarios with clear and concise recommendations for deciding which pathology tests are most appropriate for the diagnosis and management of different illnesses (Table 2). The National Prescribing Service’s MedicineWise program, delivered by health professionals to primary care physicians, uses the method of academic detailing. This is a service-oriented face-to-face outreach education visit for physicians and provides an accurate, up-to-date synthesis of the evidence about relevant medical interventions.
The goal of academic detailing is to change prescribing of medications or requesting of medical tests to be consistent with medical evidence, support patient safety, cost-effective use of health care resources, and to improve the clinical effectiveness of patient care. The National Prescribing Service provides evidence-based guidance for both the public and health care professionals, for example on the diagnosis and monitoring of diabetes and on vitamin-D testing (Table 2). The Quality Use of Pathology Program (QUPP) established in 1999 by pathology stakeholders under the umbrella of the Australian Government runs projects that promote evidence-based patient choice and information, rational test requesting and professional practice standards (Table 2). Their report on quality test ordering has been released recently and provides a matrix that can assist in the assessment of whether a request for a pathology test is appropriate.42
In the US, the Choosing Wisely campaign, sponsored by the American Board of Internal Medicine Foundation, provides key clinical recommendations for physicians and patients that help avoid unnecessary medical interventions. In this campaign a number of clinical disciplines advise against the unnecessary use of certain diagnostic tests (Table 2). For example, haematologists advise not to do workup for clotting disorders for patients who develop a first episode of deep vein thrombosis in the setting of a known cause; or not to perform repetitive full blood count and chemistry testing in stable conditions to avoid diagnostic phlebotomy-related anaemia in hospitalised patients. The American College of Clinical Pathologists suggests that the following tests be stopped:
The Laboratory Medicine Best Practices initiative from the Division of Laboratory Science and Standards of the US Centers for Disease Control and Prevention summarises the best evidence in systematic reviews and meta-analyses on common laboratory medicine related topics. These include, for example, evidence reports on the effectiveness of barcoding in avoiding sample identification errors, effectiveness of practices for reducing blood culture contamination, effectiveness of various forms of electronic or verbal communications for reporting critical results, and effective practices to reduce haemolysis rates in emergency departments. The National Academy of Clinical Biochemistry (NACB) issues evidence- and consensus-based guidelines for the laboratory evaluation and monitoring of patients with specified disorders. The NACB involves clinicians and relevant professional and patient organisations in the process of guideline development and its recommendations are often jointly published with these organisations both in laboratory and clinical journals for wider dissemination (Table 2).
In the UK, the National Institute for Health and Care Excellence (NICE) issues guidelines and brief extracts for patients on recommended best practice for a wide range of conditions. It also operates a Diagnostics Assessment Programme which focuses on the evaluation of new diagnostic interventions in order to adopt technologies with proven clinical and cost-effectiveness rapidly and consistently (Table 2).
The above mentioned examples of professional and patient portals support awareness and contribute to better dissemination and acceptance of evidence-based practice recommendations. They may also assist in improving patient outcomes.43 The UK NICE Pathways offer an interactive online clinical algorithm that is linked to detailed evidence-based information from diagnosis to treatment options and provide implementation tools such as self-assessment questionnaires, costing models with templates, educational materials and slide sets for the dissemination of NICE guidance, as well as quality standards and clinical audit criteria and other related materials for monitoring the impact of implementation (Table 2). Guidelines supplemented with such practical tools are more likely to be adopted and adhered to. This is demonstrated by a postal survey carried out nearly 10 y after the implementation of the NICE guideline on preoperative testing.44 This post-implementation survey reported that low risk patients aged <40 y with no co-morbidities undergoing minor surgery did not have routine tests for full blood count, electrolytes and renal and pulmonary function.45
Clinical decision support interventions in Computerised Physician Order Entry (CPOE) systems can identify inappropriate requesting or inappropriate frequency of requesting a laboratory test, and generate electronic alerts to prompt physicians to better utilisation of laboratory services. Regulatory and reimbursement agencies also can, and in some settings do, penalise providers if a laboratory investigation is performed more frequently than the predefined retesting intervals. Laboratory professionals have long recognised the need for a clear definition of retesting intervals. Several projects in Australia, such as the RIO project in New South Wales and the AUSLAB Retest Interval Project in Queensland have addressed these issues. At the time of the study the latter projected savings of $10,000 per month for the pathology network by reducing unnecessary repeats of laboratory tests.46 The RIO project introduced a ‘traffic light’ system indicating the level of authority required for pathology test ordering (‘green’ – no restrictions, ‘amber’ – authorised by senior medical officer, and ‘red’ only orderable by consultant medical staff). This system, coupled with reminders and other implementation facilitators, helped reduce the number of inappropriate tests by over 25% and resulted in significant cost savings (Table 2). A recent study investigating retesting frequency of HbA1c monitoring in over 100,000 patients in the UK has found that only 49% of requests conformed to guidance; 21% were too frequent and 30% were too infrequent. They also found that under-requesting was more prevalent with potential consequences on longer-term health outcomes, and that publication of international and national guidance on diabetes management had no significant impact on under- or over-requesting rates.47
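A minimal sketch of how a CPOE rule might enforce minimum retesting intervals of the kind described above; the test names and interval values below are illustrative assumptions, not guideline-derived:

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative minimum retesting intervals in days. These are assumptions
# for the sketch; real intervals must come from local, clinically agreed
# protocols developed in consultation with clinical users.
MIN_RETEST_INTERVAL_DAYS = {
    "HbA1c": 90,
    "Lipid profile": 28,
    "TSH": 42,
}

def order_allowed(test: str, last_performed: Optional[date], today: date) -> bool:
    """Return True if the order may proceed; a False result would trigger
    an electronic alert suggesting the earliest appropriate retest date."""
    interval = MIN_RETEST_INTERVAL_DAYS.get(test)
    if interval is None or last_performed is None:
        return True  # no rule for this test, or first request for the patient
    return today - last_performed >= timedelta(days=interval)
```

In this sketch an HbA1c repeated 30 days after the previous result would be flagged, whereas a first-ever request, or a test without an agreed interval, passes through; coupling the alert with a link to an alternative, appropriate order helps avoid the alert-fatigue problem discussed below.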
Computerised decision support tools in CPOE systems have been reviewed in a recent technology assessment report and were found to have a statistically significant benefit on both process and practitioner performance-related outcomes in nearly two-thirds of the studies. However, only a small positive impact was observed on test ordering, and evidence was insufficient to judge the cost-effectiveness and the impact of the reviewed systems on patient outcomes.48 Recent Cochrane reviews investigating the impact of electronic point-of-care reminders and paper-based reminders showed small to modest improvements in provider behaviour.49,50 Automated alerts or reminders for redundant test orders do not always improve adherence to best practice; they are often seen as more of a nuisance than a help, and are overridden or ignored owing to electronic alert fatigue.51 Nevertheless, carefully designed prompts have been shown to reduce unnecessary testing.39,52 Physicians often resist change and stopping certain test requesting habits, even when the test almost never identifies anything useful, unless alternative and quickly or easily actionable options are provided simultaneously. Electronic alerts coupled with an immediate link to more rational test ordering or user-friendly diagnostic algorithms may facilitate acceptance and can be quickly adopted by clinicians.
Simple measures, such as moving a test whose use we wish to discourage to the back of paper request forms or to the bottom of a scroll-down menu in CPOE platforms, can achieve surprisingly large impacts.53–56 Harmonisation of test profiles of common conditions57 and declining or vetting inappropriate requests before analysing the samples,58 coupled with regulating retesting intervals,46 are tools commonly employed by laboratories. However, their application depends on local regulation or legislation and can succeed only if they are adopted in consultation with clinical users.
Individually targeted postanalytical quality assurance surveys of general practitioners, coordinated by the Norwegian Quality Improvement of Laboratory Services in Primary Care (NOKLUS), circulate case-based scenarios to investigate appropriate test selection, test interpretation and medical decisions based on laboratory results in common medical conditions. Such interventions are also very efficient educational tools for disseminating and enforcing the use of evidence-based recommendations and for changing the practice and behaviour of physicians, especially when they are coupled with individually tailored feedback and benchmarking of responses against peers.59,60 Clinical audit of current test requesting patterns and testing-related outcomes (e.g. how many patients on warfarin have the recommended frequency of testing and how many have International Normalised Ratios (INRs) that are within therapeutic targets), coupled with individually tailored presentation of findings, is a powerful tool to raise local awareness of inappropriate requesting patterns. These types of audits, linked with educational interventions, widely disseminated posters on proposed changes in testing protocols for health care staff, and limits on the requesting authority of nurses or junior doctors to agreed test panels, provided sustainable results in rationalising test utilisation when practice was re-audited within 12 months in a UK hospital setting.61
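The warfarin audit metric mentioned above reduces to a simple proportion-in-range calculation. A minimal sketch, assuming the common 2.0–3.0 INR target (local therapeutic targets vary by indication):

```python
def proportion_in_range(inrs, low=2.0, high=3.0):
    """Fraction of INR results within the therapeutic target range.
    Low proportions flag patients or practices for tailored feedback."""
    if not inrs:
        raise ValueError("no INR results to audit")
    in_range = sum(1 for inr in inrs if low <= inr <= high)
    return in_range / len(inrs)
```

The same function can be applied per patient (serial INRs) or aggregated per requesting practice, and the results benchmarked against peers as part of the individually tailored feedback described above.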
A recent editorial by Fryer and Hanna emphasises the need for stronger liaison of laboratory professionals with clinical teams in order to review the real impact of testing on the patient pathway and to rationalise testing based on such measurable outcomes. Whilst acknowledging the difficulties of collecting such data and that the link between testing and clinical outcomes is often indirect, they call for regular assessment of results: (a) not reviewed by the requestor or clinical team; (b) reviewed but which had no impact on clinical management; and (c) reviewed and changed management, but which did not affect patient outcome. Such exchanges are often eye-opening to clinicians and may stimulate efforts to join forces in quality improvement projects.21 Such crosstalk at the laboratory-clinical interface often results in the development of joint local protocols which all stakeholders feel more closely associated with, and therefore adherence is more likely and sustainable.
As discussed above, there are a number of tools and approaches that may facilitate evidence-based laboratory testing. Which to choose, and how the various options can be combined to achieve the greatest success, depends mostly on local settings and resources. A number of conclusions can be drawn from the vast literature on implementation science that help the profession set priorities for achieving better progress in getting research into practice.
Clearly our profession needs to be engaged in better diagnostic research and contribute to the performance of more reliable studies that will provide the right ‘soap’ which users are willing to ‘buy’ and ‘wash’ with. Better evidence on the clinical effectiveness of laboratory tests will ultimately lead to better clinical guidelines on the rational utilisation of laboratory tests and new emerging biomarkers. The appearance of so many guidelines paradoxically has created a new gap between their development and use. This presents new challenges to both guideline developers and users of evidence-based recommendations.
Since the quality and reliability of guidelines have been challenged, it is recommended that, before any evidence or recommendations are put into local practice, laboratory teams, together with clinicians, critically assess the quality and content validity, the transferability and implementability of the evidence and evidence-based guideline recommendations. The Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument is a useful critical appraisal tool to assess the methodological rigour and transparency of guideline development.62 It is not suitable for assessing the validity of the clinical content of the guideline or the quality of evidence that underpins the recommendations. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach has been designed to assess and rate the quality of the evidence in individual studies and the strength of the body of evidence across various studies, as well as to grade the overall strength of guideline recommendations.63 Grading the quality and strength of evidence and recommendations increases the transparency and validity and facilitates the acceptability and adoption of guidelines. Implementability also needs to be considered, for example with the help of the GuideLine Implementability Appraisal (GLIA) instrument which explores individual recommendations’ executability, decidability, validity, flexibility, effect on process of care, measurability, novelty/innovation and computability.64,65
At the user level, variability in the uptake and implementation of guideline recommendations is a well-known phenomenon, leading to confusion and unnecessary practice variation. This is partly because the same evidence is sometimes interpreted differently, producing inconsistent recommendations from different guideline agencies. For example, about 10 years ago various organisations proposed different troponin cut-offs for diagnosing acute myocardial infarction (AMI). These included a cut-off point established by Receiver Operating Characteristic (ROC) analysis, the 97.5th and the 99th percentile of healthy populations, and the arbitrarily chosen value at which the troponin assay achieved a 10% or 20% coefficient of variation (CV) in the laboratory. A survey of Australian laboratories in 2003 revealed that 67% of laboratories reporting troponin T determined a cut-off for AMI by ROC analysis; 7% and 1% based the cut-off value on a 10% or 20% CV, respectively; and only 24% used the recommended 99th percentile concentration for a reference population.66 Such heterogeneity in practice is not just confusing to clinicians but may lead to misdiagnosis and harm to patients. The universal definition of AMI and authoritative guidelines refer to a rising and/or falling pattern of cardiac troponin with at least one result above the 99th percentile of a reference population. This definition has been adopted in many international and national guidelines, irrespective of the generation of the troponin assay and the type of molecule measured, which presents a significant challenge to laboratories and clinicians alike.
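The divergence between the cut-off strategies described above can be illustrated with a minimal sketch: a 99th-percentile cut-off depends only on the reference (healthy) distribution, whereas a ROC-derived cut-off (here, maximising Youden's J) depends on both the healthy and the diseased distributions, so the two approaches will generally yield different values. The data below are simulated for illustration only and do not represent any real troponin assay.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical troponin concentrations (ng/L) -- simulated, illustrative only
reference = rng.lognormal(mean=1.0, sigma=0.5, size=1000)   # healthy cohort
diseased = rng.lognormal(mean=3.0, sigma=0.7, size=200)     # AMI cohort

# Guideline-recommended cut-off: 99th percentile of the reference population
p99 = np.percentile(reference, 99)

# ROC-derived cut-off: threshold maximising Youden's J = sensitivity + specificity - 1
thresholds = np.sort(np.concatenate([reference, diseased]))
sensitivity = np.array([(diseased >= t).mean() for t in thresholds])
specificity = np.array([(reference < t).mean() for t in thresholds])
roc_cut = thresholds[np.argmax(sensitivity + specificity - 1)]

print(f"99th percentile cut-off: {p99:.1f} ng/L")
print(f"ROC (Youden) cut-off:    {roc_cut:.1f} ng/L")
```

The two printed cut-offs differ because they optimise different things: population-based specificity versus discrimination between two particular cohorts, the latter being sensitive to how the diseased cohort was recruited.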
When a laboratory assay becomes part of the universal definition of a disease, much stronger evidence from appropriately designed diagnostic accuracy and clinical outcome studies, together with harmonisation or standardisation of the assays concerned, should be in place before recommendations can be safely introduced into practice.67 In the absence of these, laboratory professionals have a responsibility to monitor closely the clinical performance of sometimes prematurely released biomarkers and to educate their clinical users on the characteristics and limitations of the assays they provide for critical diagnoses. This information should also be fed back to manufacturers and guideline developers so that recommendations can be further refined as more evidence is gathered on the real-life clinical effectiveness of test-treatment pathways and their impact on patient outcomes.
Some positive examples, widely adopted in laboratory and clinical practice, demonstrate that evidence-based laboratory practice may contribute to improved patient outcomes. For example, HbA1c monitoring of diabetic patients with standardised assays providing comparable results, together with the achievement of safe treatment targets in various subsets of patients established from large randomised controlled trials, has been shown to be associated with improved clinical outcomes.68 A Cochrane review showed that widely adopted colorectal cancer screening programs achieve a 16% reduction in the relative risk of colorectal cancer mortality, with no difference in all-cause mortality. After adjustment for screening attendance, the relative risk reduction increased to 25%, highlighting important psycho-social barriers to the uptake of widely promoted national screening programs already backed by strong evidence of effectiveness and cost-effectiveness.69 A recent systematic review and meta-analysis of individual patient data with 5 years of follow-up found that self-monitoring and self-adjustment of warfarin therapy using point-of-care INR testing by patients was more effective than usual care (laboratory-based monitoring with office-based dose adjustment) in reducing thromboembolic events, and no worse in terms of major haemorrhagic events or death. Participants aged <55 y and those with mechanical heart valves showed greater reductions in thromboembolic events than patients aged >55 y, those assigned to self-testing with clinical dosing, and those with atrial fibrillation.70
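The jump from a 16% to a 25% relative risk reduction after adjusting for attendance can be sketched with simple arithmetic. Under the simplest dilution model, in which non-attenders receive no benefit, the intention-to-screen effect equals the attenders' effect scaled by the attendance rate; the 64% attendance figure used here is a hypothetical illustration chosen to make the arithmetic consistent, not a value reported in the review.

```python
# Illustrative arithmetic only: relating intention-to-screen (ITT) relative
# risk reduction (RRR) to the RRR among actual attenders, assuming the
# simplest dilution model (non-attenders receive no benefit).

rrr_itt = 0.16      # ITT relative risk reduction reported in the Cochrane review
attendance = 0.64   # hypothetical proportion of invitees actually screened

# Dilution model: RRR_ITT = attendance * RRR_attenders
rrr_attenders = rrr_itt / attendance

print(f"RRR among attenders: {rrr_attenders:.0%}")  # prints "RRR among attenders: 25%"
```

The sketch shows why low uptake can mask a substantially larger benefit among those actually screened, which is the psycho-social barrier the review highlights.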
Point-of-care INR monitoring and the colorectal cancer screening program are good examples of how even strong evidence is not always generalisable or implementable. Apart from assessing the validity of evidence and guideline recommendations (i.e. is it the right ‘soap’?), implementation teams should always analyse the local setting and patient context (i.e. is it the right ‘soap’ for the right patient?) and identify barriers and drivers of implementation (i.e. can the right ‘soap’ for the right patient be used in my local health care environment?). Teams should design a strategy that is feasible and has the potential to achieve the greatest and longest-lasting impact on the quality, effectiveness and cost-effectiveness of current practice. During this planning and design phase, due consideration should be given to local characteristics of critical importance, including legislation, regulatory and policy affairs, the specific local health care setting, availability of and access to human and other resources, organisational culture, information technology capability, patient preferences, and behavioural, legal or social issues. While designing an implementation plan it should be borne in mind that there are no ‘magic bullets’: simple but multifaceted interventions using electronic or other decision support tools, linked to education and regular audit and feedback on outcomes, and developed and agreed by all stakeholders, have the greatest potential for success. It is also important to recognise that, at the level of the local health care organisation, quality management system changes need to be put in place to ensure that behavioural and organisational changes remain sustainable. Financial incentives and disincentives need to be planned carefully, keeping in mind that neither the quality nor the health outcomes and impacts of pathology services on patients can be compromised.
It is a moral and social imperative for laboratory professionals and all health care staff to improve the effectiveness and efficiency of laboratory service utilisation. Otherwise, quick political decisions and blunt policy measures such as cost cutting may be imposed upon many laboratories, channelling away precious resources that, through the savings achieved by rationalisation, could otherwise support the better management of undiagnosed, under-diagnosed or misdiagnosed conditions.
Laboratory professionals, like all health care professions, face major challenges in the coming decades as aging populations live longer with multiple chronic conditions, and they need to keep pace with rapid advances in medical technology. To prepare future generations of laboratory medicine specialists, education and training should also incorporate the practice of state-of-the-art evidence-based laboratory medicine and clinical research. Training should equip laboratory staff with evidence-based knowledge and tools so that laboratory professionals can expand their consultative role at the laboratory-clinical interface for the benefit of better patient care and outcomes.
Some aspects of this review have been discussed in and influenced by the work of the Test Evaluation Working Group of the European Federation of Clinical Chemistry and Laboratory Medicine. Roche Diagnostics is thanked for supporting the Working Group with an independent educational grant to the European Federation of Clinical Chemistry and Laboratory Medicine.
Competing Interests: None declared.