Clin Biochem Rev. 2007 August; 28(3): 93–96.
PMCID: PMC1994108

Standardisation - The Theory and the Practice

In 1926, when Sir Henry Dale prepared the first International Standard for insulin and it was adopted by the League of Nations Health Organization (the forerunner of the World Health Organization), biological standardisation was aimed at harmonising the calibration of therapeutic substances.1 Ever since then, the word “standardisation” has meant different things to different people: the use of a calibration standard in an analytical method, for example, or the process of adopting the same procedures or methods so that comparable (harmonised) patient results are achieved. Nowadays the term “standardisation” encompasses the reference measurement procedures and reference materials (i.e. the reference measurement system) required to achieve greater comparability of patient test values between different clinical assays.2 The establishment of reference systems also requires a reliable transfer of the analytical accuracy base by means of a network of reference laboratories performing the reference methods under well-standardised operating conditions.

There are theoretical and practical advantages to adopting this “standardisation” approach. Clinical laboratory testing is now a “global business” and laboratories no longer work in isolation but are linked through information technology networks. It is therefore only logical, and in the best interests of patients, to achieve close agreement of test results used for disease management. In turn, comparable results will generate a larger database of clinical information, enabling the definitive determination of the diagnostic specificity and sensitivity of a test and the establishment of common reference intervals or decision cut-points for medical intervention.

Both the standardisation and harmonisation processes aim to improve the comparability of test results between laboratories. Whereas standardised results approach more closely the “true” value, harmonised results may be biased in terms of trueness. The use of the standardisation process ensures the traceability of results to an accepted reference measurement system and greater certainty that a result is close to the “true value”.3,4

The key components of the process for establishing traceability include (see the sketch after this list):

  1. Development and characterisation of suitable reference materials and their value assignment in meaningful units using reference measurement procedures;
  2. Establishment of commercial routine (field) assays yielding results traceable to higher order reference materials and methods; and
  3. Availability of appropriate reference intervals and decision limits.
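To make the calibration chain behind points 1 and 2 more concrete, the sketch below is a hypothetical illustration (not taken from this editorial): the step names follow the ISO 17511-style hierarchy used in the Glossary, and the uncertainty values are invented. Each step transfers a value from the level above and contributes its own relative standard uncertainty; per the GUM, independent components combine as the root sum of squares.

```python
import math

# Hypothetical ISO 17511-style calibration hierarchy: step names follow the
# Glossary below; the relative standard uncertainties (%) are invented for
# illustration only.
chain = [
    ("primary reference material (value in SI units)",  0.3),
    ("secondary reference measurement procedure",       0.5),
    ("manufacturer's working calibrator",               0.8),
    ("manufacturer's standing measurement procedure",   1.0),
    ("manufacturer's product calibrator",               1.2),
    ("routine (field) assay result",                    2.0),
]

combined_u = 0.0  # cumulative relative standard uncertainty (%)
for step, u in chain:
    # Independent uncertainty components combine as the root sum of squares (GUM).
    combined_u = math.sqrt(combined_u ** 2 + u ** 2)
    print(f"{step:<50s} cumulative u = {combined_u:4.1f}%")
```

The cumulative value at the bottom of the chain is the uncertainty a routine result inherits from the whole hierarchy, which is why the reference materials and procedures at the top must carry the smallest possible uncertainties.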

The formalisation of these standardisation components has accelerated in recent years. Governmental agencies, metrology institutes, standards organisations, clinical laboratory societies, the diagnostics industry and others have worked more closely together to improve, rapidly and effectively, the standardisation of tests used in clinical laboratories. International bodies such as the International Organization for Standardization (ISO), the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), the International Union of Pure and Applied Chemistry (IUPAC), the International Council for Standardization in Haematology (ICSH) and the Clinical and Laboratory Standards Institute (CLSI) have agreed that metrologically-based reference measurement systems should be implemented in laboratory medicine. In 2002 the Joint Committee for Traceability in Laboratory Medicine (JCTLM) was established by the Bureau International des Poids et Mesures (BIPM), together with the IFCC and the International Laboratory Accreditation Cooperation (ILAC).

A major driving force for the establishment of the JCTLM has been the implementation of the European In Vitro Diagnostics (IVD) Directive, which requires that values assigned to calibrators and control materials for IVDs be traceable to available reference measurement procedures and/or reference materials of a higher order.

In this issue of The Clinical Biochemist Reviews, some of the above-mentioned aspects of standardisation that are having practical effects in the diagnostics industry and in clinical laboratories are discussed. Mauro Panteghini, chair of the Scientific Division of the IFCC and director of the Centre for Metrological Traceability in Laboratory Medicine (CIRME) at the University of Milan, introduces the concepts of the reference measurement system and metrological traceability. He gives examples showing how a hierarchy of reference materials and reference measurement procedures has resulted in the standardisation of analytes such as serum creatinine and HbA1c. He also comments on the need for validation of assays against clinical specifications if standardisation activities are to be at all useful in practice.

In the second article, on the activities of the JCTLM, Dave Armbruster and Rick Miller describe how the JCTLM Working Groups identify acceptable reference materials, reference measurement procedures and reference laboratories that conform to the appropriate international documentary standards, i.e. ISO 15193, 15194 and 15195. As a result of these activities, databases of available higher-order reference materials, higher-order reference measurement procedures and reference laboratories are now accessible on the BIPM website for use by the IVD industry and other users. By supporting this metrologically-correct approach, the JCTLM aims (at least in principle) to achieve standardisation, even if in some complicated situations harmonisation can be a suitable intermediate step. While global standardisation efforts can contribute to improved patient care and foster closer laboratory-clinician interactions, as exemplified by the estimated glomerular filtration rate (eGFR) and the standardisation of serum creatinine measurement, Armbruster and Miller comment: “Laboratories must also continuously assess their routine methods to ensure that they consistently produce accurate, medically useful (‘fit for purpose’) results”.
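As a hedged numerical illustration of why creatinine standardisation matters to eGFR reporting (the equation is not given in this editorial, and the helper function below is hypothetical): the 4-variable MDRD study equation was re-expressed for creatinine results traceable to the isotope dilution mass spectrometry (IDMS) reference measurement procedure, with the commonly cited coefficient changing from 186 to 175.

```python
def egfr_mdrd(scr_mg_dl: float, age_years: float, female: bool, black: bool,
              idms_traceable: bool = True) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from the 4-variable MDRD study equation.

    Illustrative sketch only: 175 is the coefficient commonly cited for serum
    creatinine results traceable to the IDMS reference measurement procedure;
    186 was used with non-standardised creatinine methods.
    """
    coefficient = 175.0 if idms_traceable else 186.0
    egfr = coefficient * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Same patient, creatinine 1.1 mg/dL at age 60: the ~6% difference between the
# two coefficients shows how an unstandardised creatinine calibration shifts eGFR.
print(round(egfr_mdrd(1.1, 60, female=True, black=False, idms_traceable=True), 1))
print(round(egfr_mdrd(1.1, 60, female=True, black=False, idms_traceable=False), 1))
```

A shift of this size can be enough to move a result across a clinical decision limit, which is the kind of consequence that traceable creatinine calibration is intended to avoid.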

In the final article, by Ferruccio Ceriotti, chair of the IFCC Committee on Reference Intervals and Decision Limits (C-RIDL), we read about the theory behind reference intervals and the prerequisites for the use of common reference intervals. Ceriotti describes the practical difficulties encountered in establishing these reference intervals and how multicentre collaborative studies are being used to produce them. He goes on to describe the processes whereby we in the clinical laboratory can adopt common reference intervals, provided there is verification of similar pre-analytical conditions, of the analytical method used and its traceability, and of the characteristics of the local population.

The authors have presented here the theory that forms the basis of standardisation. While it is desirable to produce a true value that is traceable to a reference measurement system, the reality is that, in practice, standardisation can be difficult to achieve. Nevertheless, the example of serum creatinine and eGFR highlights the benefits that patients and clinical laboratories gain from global standardisation efforts.5 In the next issue of this journal we will read more about some of the practical issues that beset the clinical laboratory’s search for standardisation and the efforts to overcome them.

Abbreviations

BIPM
Bureau International des Poids et Mesures (International Bureau of Weights and Measures)
CCQM
Consultative Committee on the Quantity of Material
CIPM MRA
International Committee for Weights and Measures Mutual Recognition Arrangement
CLSI
Clinical and Laboratory Standards Institute (formerly NCCLS)
DGKL
German Society for Clinical Chemistry and Laboratory Medicine
EU
European Union
GUM
Guide to the Expression of Uncertainty in Measurement
IDMS
Isotope Dilution Mass Spectrometry
IFCC
International Federation of Clinical Chemistry and Laboratory Medicine
ILAC
International Laboratory Accreditation Cooperation
IMEP
International Measurement Evaluation Program
IRMM
Institute for Reference Materials and Measurements of the European Union
ISO
International Organization for Standardization
IU
International Unit
IVDD
In Vitro Diagnostics Directive of the European Union
JCTLM
Joint Committee for Traceability in Laboratory Medicine
KCDB
Key Comparison Database
LMPG
Laboratory Medicine Practice Guidelines
NIST
National Institute of Standards and Technology
NMI
National Metrology Institute
PT/EQAS
Proficiency Testing/External Quality Assurance Schemes
SRM
Standard Reference Material
VIM
International Vocabulary of Metrology
WHO
World Health Organization

Glossary of Terms

Accuracy of measurement
closeness of agreement between the result of a measurement and the true value of the measurand. Accuracy is related to both the trueness and the precision of the measurement.
Analyte
the chemical component or substance that is intended to be measured.
Analytical specificity
the property of a method to measure only the analyte.
Certified reference material (CRM)
a material that is used as a standard or reference and whose assigned value is traceable to a reference measurement system. An accompanying certificate states the analyte value and its measurement uncertainty.
Commutability of a material
indicates the similarity between a patient sample and manufactured material, e.g. a quality control, in terms of analytical reactivity. Non-commutable materials that are used to calibrate or monitor the trueness of a method may lead to inaccurate values.
Manufacturer’s product calibrator
calibration material provided to the customer.
Manufacturer’s selected measurement procedure
the highest-level measurement procedure within the manufacturer’s operation, unless the manufacturer maintains its own reference laboratory. Generally used to transfer a value to the “manufacturer’s working calibrator”. The calibration may make use of a primary calibrator or a secondary calibrator.
Manufacturer’s standing measurement procedure
testing procedure used to assess the product calibrator, calibrated with a reference material or with the “manufacturer’s working calibrator”.
Manufacturer’s working calibrator
material used to calibrate the “manufacturer’s standing measurement procedure”.
Matrix
constitutes all other components of the analytical system, except for the analyte.
Matrix effect
the effect of all other components of the analytical system, except for the analyte, on the value of the measurand.
Measurand
the analyte that is measured with respect to a specified condition, e.g. creatinine in plasma.
Metrological traceability
the property of a measurement result whereby its value and measurement uncertainty can be related, through the manufacturer’s calibrators and an unbroken chain of calibrations, to a higher-order, metrologically-based reference measurement system.
Primary calibrator/primary reference material
a reference material having the highest metrological qualities, whose value is assigned either directly in SI units by means of a primary reference measurement procedure or indirectly by determining the impurities of the material using appropriate analytical methods.
Primary reference measurement procedure
a reference measurement procedure of the highest metrological level, whose results, expressed in SI units, are obtained without reference to a measurement standard of the same quantity.
Reference measurement laboratory
laboratory that performs a reference measurement procedure and provides results with stated uncertainties.
Reference measurement procedure
measurement procedure that has been validated for its fitness for purpose and is used to assess lower-order methods for trueness and to value-assign reference materials.
Secondary calibrator
a reference material whose value is assigned using a reference (secondary or primary) procedure calibrated with a primary calibrator.
Secondary reference measurement procedure
a procedure usually calibrated with a primary calibrator. Often these procedures are suitable for measuring the analyte in patient samples.
Trueness
closeness of agreement between the average value obtained from a large series of results of measurements and the true value of the measurand. Bias is used to express numerically the degree of trueness.
Uncertainty of measurement
parameter, associated with the result of a measurement, which characterises the dispersion of the values that could reasonably be attributed to the measurand. Inaccuracy is expressed by the uncertainty of measurement.

Footnotes

Rick Miller passed away after acceptance of the article he coauthored with Dave Armbruster published in this issue (pages 105–113). Rick contributed significantly to clinical chemistry and will be greatly missed.

Competing Interests: None declared.

The terms in the Glossary are based on the ISO 17511 and CLSI X5-R documents.6,7

References

1. Jeffcoate SL. Role of reference materials in immuno-assay standardization. Scand J Clin Lab Invest Suppl. 1991;205:131–3.
2. Panteghini M, Forest JC. Standardization in laboratory medicine: new challenges. Clin Chim Acta. 2005;355:1–12.
3. Tietz N. A model for a comprehensive measurement system in clinical chemistry. Clin Chem. 1979;25:833–9.
4. Thienpont LM, Van Uytfanghe K, Rodriguez Cabaleiro D. Metrological traceability of calibration in the estimation and use of common medical decision-making criteria. Clin Chem Lab Med. 2004;42:842–50.
5. Peake M, Whiting M. Measurement of serum creatinine – current status and future goals. Clin Biochem Rev. 2006;27:173–84.
6. ISO 17511:2003. In vitro diagnostic medical devices – Measurement of quantities in biological samples – Metrological traceability of values assigned to calibrators and control materials. Geneva, Switzerland: ISO; 2003.
7. Clinical and Laboratory Standards Institute. Metrological traceability and its implementation. CLSI document X5-R. Wayne, PA: CLSI; 2006.
