Complete and informative reporting and interpretation can only lead to better decisions in healthcare
Clinicians depend on reliable laboratory results as they are central to diagnosis, monitoring and risk assessment of patients. These results are then judged against the current evidence base, which arises from previously conducted studies, to assist decision making on future patient care. However, no single study is guaranteed to be both feasible and able to provide valid, informative and relevant answers with optimal precision to all study questions.
The reliability of laboratory results depends heavily on accuracy, understood as a joint index of precision and trueness.1 Trueness is of particular importance for comparability of results as it allows the use of common reference intervals, treatment strategies and risk assessment tools. The true value is a value consistent with the definition of a given particular quantity that would be obtained by a perfect measurement.2 Quantities are measured using procedures with different degrees of complexity. The measurement procedure needed to measure a particular quantity such as the “number of fingers on the hand of a given person” is very simple: counting by direct visual inspection is enough. In contrast, the measurement procedure needed to quantify a particular analyte, for example the “concentration of glucose in plasma” or the “amylase concentration in serum”, involves chemical processes with a relatively high degree of complexity. Apart from the difference in complexity of the measurement procedure, there is another important difference between the two examples. In the first example, the measurement procedure enables one to determine the true value of the particular quantity without any doubt: the measurement procedure, when applied to the aforementioned quantity “five fingers”, has neither random nor systematic error. In the second example, however, the true value of the particular quantity is, at the present state of science and technology, unknown, whatever the measurement procedure. Because of the complexity and diversity of measurement procedures, all measurements made in the routine clinical laboratory are subject to both systematic and random errors.
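The decomposition of measurement error into a systematic component (affecting trueness) and a random component (affecting precision) can be sketched with a small simulation. The analyte, true value, bias and scatter below are illustrative assumptions for the sketch, not data from the text:

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 5.0  # mmol/l; assumed true glucose concentration (illustrative)

# Simulate replicate measurements carrying a fixed systematic error (bias)
# plus normally distributed random error (scatter); both values are assumptions.
bias, scatter = 0.3, 0.15
results = [random.gauss(TRUE_VALUE + bias, scatter) for _ in range(200)]

# Trueness relates to the systematic component: mean result minus true value.
trueness_error = statistics.mean(results) - TRUE_VALUE
# Precision relates to the random component: scatter of replicate results.
imprecision = statistics.stdev(results)

print(f"estimated bias (trueness error): {trueness_error:.2f} mmol/l")
print(f"imprecision (SD): {imprecision:.2f} mmol/l")
```

With enough replicates, the estimated bias converges towards the simulated systematic error and the standard deviation towards the simulated scatter, mirroring the distinction between trueness and precision drawn above.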
The metrological problems may arise for a variety of reasons: with analytes such as human chorionic gonadotropin, which lack an unequivocally recognised entity and consequently do not fit into the SI system (Système International d'Unités; http://www.bipm.org/en/si/); with different SI analytes that are measured and reported as a “family” or “group”, such as triacylglycerides; and finally with enzymes, such as alanine aminotransferase, which are defined in terms of an agreed‐upon substrate that they convert under predefined conditions.
Instruments and measurement procedures differ between clinical laboratories. For example, more than 200 methods of activity determination have been described for amylase.3 Each of these methods has its own inherent inaccuracy (ie, the difference between a result of a measurement and the true concentration of an analyte).1 Consequently, different and often clinically relevant results may be generated for the same patient sample, depending on which laboratory performed the measurement. This means that absolute values, such as an alanine aminotransferase level of 250 IU/l, a cut‐point for Ranson's score for acute pancreatitis,4 may not be directly transferable between laboratories. Clinicians faced with making a clinical decision based on a laboratory result often overlook this fact, and the scientific community is sometimes also oblivious to it.5
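The clinical consequence of method-dependent bias at a fixed cut-point can be sketched as follows. The 250 IU/l threshold comes from the text; the true concentration, the biases of the two hypothetical laboratories and the imprecision are assumptions chosen for illustration:

```python
import random

random.seed(0)

CUTOFF = 250.0    # IU/l; alanine aminotransferase cut-point from Ranson's score
TRUE_ALT = 245.0  # assumed true concentration, just below the cut-point

def measure(true_value, bias, sd, n=1000):
    """Simulate n replicate results with a fixed systematic error (bias)
    and normally distributed random error (sd). All parameters are
    illustrative assumptions."""
    return [random.gauss(true_value + bias, sd) for _ in range(n)]

# Two hypothetical laboratories whose methods carry different biases
lab_a = measure(TRUE_ALT, bias=-10.0, sd=8.0)  # method reading low
lab_b = measure(TRUE_ALT, bias=+15.0, sd=8.0)  # method reading high

frac_a = sum(x > CUTOFF for x in lab_a) / len(lab_a)
frac_b = sum(x > CUTOFF for x in lab_b) / len(lab_b)

print(f"Lab A flags {frac_a:.0%} of replicates above the cut-point")
print(f"Lab B flags {frac_b:.0%} of replicates above the cut-point")
```

For the same patient sample, the low-reading method rarely crosses the threshold while the high-reading method almost always does, which is why an absolute value such as 250 IU/l may not transfer directly between laboratories.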
To improve the completeness, accuracy and comparability of reports and to avoid possible confusion, the Standards for Reporting of Diagnostic Accuracy (STARD) statement is a helpful resource, which recommends describing the full technical specifications of a test, including the instruments used.6,7,8,9 Complete and accurate reporting allows the reader to detect the potential for bias and to judge the generalisability and applicability of results. Measures of test accuracy may vary from study to study. Variability may reflect differences in patient groups, in setting, in the definition of the target condition and in test protocols or criteria for test positivity.10 The STARD project group has developed a single‐page checklist, which can be used to verify that all essential elements are included in the report of a study, helping the reader judge its relevance, potential for bias and limitations to applicability.6,7,8,9
To improve the equivalence of measurements in laboratory medicine and traceability to appropriate measurement standards, the International Committee of Weights and Measures, the International Federation for Clinical Chemistry and Laboratory Medicine and the International Laboratory Accreditation Cooperation have agreed to cooperate to establish the Joint Committee for Traceability in Laboratory Medicine (http://www.bipm.org/en/committees/jc/jctlm/). Clinicians should be made aware of the inaccuracies of measurement, and clinical laboratories must communicate with the users of their services, supplying information about the uncertainty of their measurement results when applicable; this information may be attached to each patient's result, contained in the user's laboratory handbook or available on request.11
The issue of accuracy is more relevant today than ever before. The new General Medical Services contract for general practitioners in the UK awards points based on achieving target levels using biochemical markers such as haemoglobin A1c and creatinine.12,13 However, despite alignment, differences exist depending on the analytical method used14; such differences may affect patient care as well as the number of points earned by general practitioners. Exaggerated results from poorly designed studies can trigger premature adoption of diagnostic tests and can mislead doctors into making incorrect decisions about the care of individual patients. Reviewers and other readers of diagnostic studies must therefore be aware of the potential for bias and a possible lack of applicability. Complete and informative reporting can only lead to better decisions in healthcare.
Competing interests: None declared.