COPYRIGHT LICENSE STATEMENT The Corresponding Author has the right to grant on behalf of all authors and does grant on behalf of all authors, an exclusive licence (or non-exclusive for government employees) on a worldwide basis to the BMJ Publishing Group Ltd and its Licensees to permit this article (if accepted) to be published in Archives of Disease in Childhood and any other BMJPGL products to exploit all subsidiary rights, as set out in our licence (http://adc.bmj.com/iforalicence.pdf).
Quality indicators are systematically developed statements that can be used to assess the appropriateness of specific healthcare decisions, services and outcomes. In this review, we highlight the range and type of indicators that have been developed for children in the UK and US by prominent governmental agencies and private organizations. We also classify these indicators in an effort to identify areas of child health that may lack quality measurement activity. We review the current state of health information technology in both countries since these systems are vital to quality efforts. Finally, we propose several recommendations to advance the quality indicator development agenda for children. The convergence of quality measurement and indicator development, a growing scientific evidence base and integrated information systems in healthcare may lead to substantial improvements for child health in the 21st century.
Health in childhood contributes to adult health,1-3 and improving the quality of healthcare for UK and US children is now viewed as critical.4,5 An essential component of any improvement effort is the identification of specific, measurable indicators of quality. Quality indicators, also known as performance indicators and review criteria, are systematically developed statements that can be used to assess the appropriateness of specific healthcare decisions, services and outcomes.6 These indicators are developed as a first step in a quality improvement effort and are ideally drawn from the available scientific evidence.
There is evidence that the healthcare system is underperforming for children, with large variation in care. A study conducted by the RAND Corporation showed that US children received less than 50% of overall indicated care in the outpatient setting.7 A national survey also found that pediatricians used more than 100 different practice guidelines, but no single guideline except for asthma was used by more than 27% of pediatricians.8 Recent reports have also found widespread variation in the hospital services available to children9 and in the availability of pediatric cardiology services10 in the UK.
In this review, we present examples of widely available quality indicators and their use in quality measurement efforts in the UK and US. It is not our intent to comprehensively review all indicators; rather, we will use examples from prominent agencies to illustrate the current state of quality indicators and quality measurement. We will also discuss the role of health information technology (HIT) in the measurement of quality indicators. Finally, we will outline the areas requiring additional work in indicator development in order to improve healthcare provided to all children.
The quality movement in the 20th century stemmed from efforts to produce high quality products from increasingly complex industrial processes. Three theorists are credited with the early development and dissemination of quality assurance and improvement methodologies in manufacturing: Walter A. Shewhart, W. Edwards Deming and Joseph M. Juran. In the 1920s and 1930s, Shewhart and Deming developed the concept of statistical control of processes (i.e., common cause and special cause variations), the statistical process control chart to manage and improve processes, and the Plan-Do-Study-Act (or Plan-Do-Check-Act) cycle, a simple method to test information before making a major decision or change.11,12 Juran overlapped briefly with Shewhart and Deming at Western Electric in the 1920s and is credited with expanding the Pareto principle through its application to quality (i.e., roughly 80% of a problem arises from 20% of the causes) and with developing the quality improvement method known as “Total Quality Management.”13 Many have applied these concepts to the medical field, including Donald Berwick and the Institute for Healthcare Improvement.14,15
The Institute of Medicine defines healthcare quality as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”16 To achieve high quality healthcare, there must be a framework to structure quality improvement efforts, a scientifically based system to define best practices, demonstration of variation in the quality of care provided, and the means to monitor the effectiveness of interventions - all of which exist today. In 1966, Donabedian published his seminal work outlining the “structure/process/outcome” quality improvement paradigm which is used routinely today.17 Evidence-based medicine (EBM) provides the scientific basis for defining high quality care and is produced through efforts from groups such as the Cochrane Collaboration and the National Institute for Health and Clinical Excellence (NICE) in the UK, and the Agency for Healthcare Research and Quality (AHRQ) and specialty societies in the US. Improved health services research methodologies have revealed wide variations in the care provided to children in the UK9,10 and the US.18 Finally, advances in HIT have given researchers the ability to measure and monitor quality.
Quality indicators are explicitly defined and measurable items referring to the structures (e.g., the environment in which care was provided), processes (e.g., whether the patient received indicated care) or outcomes of care (e.g., mortality).19,20 Desirable characteristics of quality indicators include: unambiguous descriptions and clear definitions of the variables to be measured; explicit definition of the population to be included and the setting to which they apply; and links between process measures and the health outcomes of the care being reviewed.21 Targets monitored by these indicators should be specific, measurable, achievable, relevant and time-specific.22
Whenever possible, indicators should be based on the strongest scientific evidence available (e.g., randomized controlled trials).21,23 However, many areas of healthcare have a limited evidence base; therefore, systematic methods have been developed that combine evidence and expert opinion. These methods include the consensus development conference (e.g., use of hydroxyurea in sickle cell disease24), guidelines developed by an iterated consensus rating procedure (e.g., NICE guidelines for diabetes25), the Delphi technique (e.g., UK prescribing practices in general practice26) and the RAND appropriateness method (e.g., very low birth weight follow-up care27).
Clinical practice guidelines differ from quality indicators in that they are systematically developed statements designed to assist practitioner and patient decisions about appropriate healthcare for specific clinical circumstances.28 Guidelines are often less specific than quality indicators and may not provide sufficient detail to support the actual measurement of recommendations. They are also used prospectively whereas quality indicators often assess care retrospectively.29 For example, the National Heart, Lung and Blood Institute and National Asthma Education and Prevention Program released guidelines in 2007 for the diagnosis and management of asthma, which recommended the identification of asthma triggers to educate patients to avoid unnecessary exposures or alert them to exposures that might require increased treatment.30 In contrast, a quality indicator for asthma developed by RAND recommended that all patients diagnosed with asthma should have an evaluation of possible triggers within 6 months of diagnosis.31 The quality indicator clearly defines the population, time frame and the specific goal to be measured.
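The precision of a well-specified indicator means it can, in principle, be evaluated mechanically. As a minimal sketch (the function and its date fields are illustrative assumptions, not drawn from any real EHR schema), the RAND asthma indicator above might be operationalised like this:

```python
from datetime import date, timedelta

def meets_trigger_indicator(asthma_diagnosis_date, trigger_eval_date):
    """Return True if an evaluation of possible asthma triggers was
    documented within 6 months of the asthma diagnosis.

    Hypothetical sketch: "within 6 months" is operationalised here as
    183 days, and a missing evaluation date means the indicator was not met.
    """
    if trigger_eval_date is None:
        return False  # no trigger evaluation documented
    window = timedelta(days=183)  # assumed operational definition of 6 months
    return asthma_diagnosis_date <= trigger_eval_date <= asthma_diagnosis_date + window

# Evaluation documented ~2 months after diagnosis: indicator met
print(meets_trigger_indicator(date(2008, 1, 10), date(2008, 3, 1)))  # True
```

Note how the indicator's explicit population (patients with diagnosed asthma), time frame (6 months) and goal (documented trigger evaluation) map directly onto testable conditions, whereas the guideline's recommendation to "identify triggers" does not.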
Process data often provide the basis of quality indicators because they are more easily measured. Processes are also more sensitive measures of quality than outcome data because outcomes are only partially produced by health services and are frequently influenced more by other factors (e.g., natural history of the disease, patient physiologic reserve, or patient age).32 Processes are likely to be under providers' control and thus can be improved through individual training or systems change. Finally, a poor outcome does not occur every time there is an error in the provision of care.20
To date, studies attempting to demonstrate that improved processes lead to better outcomes have produced mixed results. Better performance on process quality measures was strongly associated with better survival among community-dwelling vulnerable adults,33 and 100% adherence to a set of quality indicators was significantly associated with better overall survival with breast cancer.34 However, acute care processes for stroke were not associated with functional outcome at 12 months,35 and mixed results have been reported on the relationship between HbA1c levels and mortality.36-38 These studies highlight the difficulty and complexity in selecting appropriate processes to measure and the need for additional studies to examine this relationship further, especially in children.
For the purposes of this commentary, we searched governmental healthcare and accreditation agencies in both the UK and US (e.g., National Health Service, Agency for Healthcare Research and Quality) as well as private foundations (e.g., RAND, Cystic Fibrosis Foundation) to identify examples of quality indicators developed for children.
In the UK, the government introduced the Quality and Outcomes Framework (QOF) as part of the new General Medical Services (GMS) contract in 2004.39 Currently, the QOF is organized into 4 domains: clinical, organizational, patient experience and other services. The clinical domain consists of 80 indicators covering 19 different clinical areas, and less than 20% of these indicators apply to children. The National Service Framework for Children was released in 2005 and provides standards of care rather than quality indicators.4
In the US, AHRQ developed 18 indicators for inpatient care provided to children, including asthma, nosocomial infections, and selected postoperative complications.40,41 In addition, the Joint Commission on Accreditation of Healthcare Organizations has worked with the Centers for Medicare and Medicaid Services (CMS) and the National Quality Forum (NQF) to develop quality indicators that provide data about the best treatments or practices for hospital-based care, and mandates annual reporting by healthcare organizations for accreditation. The Joint Commission, in conjunction with the Pediatric Data Quality System Collaborative Measure Workgroup (Pedi-QS), developed indicators to review the delivery of inpatient asthma care42 and care provided in the pediatric intensive care unit.43 Finally, CMS established the Physician Quality Reporting Initiative (PQRI) in 2007.44 This voluntary quality reporting program provides financial incentives to eligible professionals who report data on over 100 different indicators and delivers electronic feedback reports to participants. Few of the indicators in this program are specific to children.
In 1993, the National Committee for Quality Assurance (NCQA) introduced the Health Plan Employer Data and Information Set (HEDIS), which contained performance measures that were used in the assessment and licensure of managed care plans. Currently, NCQA publishes annual “report cards” on over 90% of US health plans for outpatient care using performance measures included in HEDIS. The 2008 dataset consists of 71 measures in 8 domains of care, with 25% of indicators specific to children.45
The RAND Corporation has been instrumental in the development of specific methodologies to develop quality indicators, namely the RAND/UCLA Appropriateness Method (or the Modified Delphi Method).46 In 2000, the RAND Corporation published a comprehensive set of over 400 outpatient indicators for children and adolescents in the US,31 and has used them to analyze the state of healthcare delivered to children.7 In addition, it developed a set of 70 indicators for the follow-up care of very low birth weight infants (<1500 grams) after discharge from the neonatal intensive care unit.27
The Institute for Healthcare Improvement (IHI) is a leader in the development and implementation of quality indicators in collaborative quality improvement efforts at institutions across the US and Europe. In 1995, IHI developed a series of collaborative projects called the Breakthrough Series (BTS) to address issues such as reducing Cesarean section rates while maintaining or improving maternal and fetal outcomes; reducing costs and improving outcomes in adult cardiac surgery; and improving prescribing practices.15 Presently, IHI has pediatric indicators for HIV and asthma care.47,48
Finally, private foundations have developed quality indicators to improve care for specific diseases or conditions. For example, the Cystic Fibrosis Foundation has seven goals to improve cystic fibrosis care in its sponsored care centers, including 14 quality indicators.49 The CF Foundation's efforts are unique in that they provide information systems and financial support to operationalize the collection of data and indicator measurement. There is interest in developing quality indicators for other common chronic conditions in pediatrics such as type 1 diabetes and inflammatory bowel disease.
Table 1 provides a listing of quality indicators described previously, organized by setting. From this list, less than 5% of indicators are devoted to inpatient care. Even though some are applicable to children, most of the indicators developed for the inpatient setting are not unique to children. For example, “foreign body left in during procedure” and “accidental puncture or laceration during surgery” are relevant to quality healthcare for adults as well as children.
Quality indicators can be further organized by function, such as screening, diagnosis and treatment (see Table 2). Less than one-third of indicators assembled for this paper focus on preventative care (i.e., screening and prevention), while nearly two-thirds are devoted to diagnosis and treatment of medical conditions. However, most of the care provided to children in clinical practice has a preventative focus.
Finally, we organized the indicators by Donabedian classification (see Table 3). None of the pediatric indicators included in this paper measure structural data, while more than 95% focus on care processes. In addition, none of these indicators measure school achievement or consider the family context4,50, both of which are important contributors to child health.
The implementation and monitoring of quality through the use of indicators depends on the availability of discrete and accurate data. There are three primary sources from which data may be extracted for indicator measurement: paper charts, claims or administrative data, and electronic health record (EHR) data. While many have hoped for a seamless, automated approach to quality measurement through electronic systems, each of these sources has advantages and disadvantages.
Paper medical charts are still in widespread use in the US and are capable of providing detailed information about the care and timing of treatment that a patient received. While it is possible to extract meaningful data from paper charts, extracting these data is costly and often requires specially-trained individuals. The time required to gather data often limits the feasibility of performing large-scale projects. Missing data are also a concern, since the chart primarily reflects the documentation of care an individual receives from a certain provider or practice and may not contain a comprehensive set of data nor data about care provided elsewhere.
Electronic claims data are widely available, particularly in the US, and can be useful because they contain data on large numbers of patients and provide data on both the utilization51 and cost52 of care provided. There is clearly an incentive to submit comprehensive claims data to maximize reimbursement. However, claims data are usually limited to diagnoses and procedures; do not provide longitudinal information as enrollees change insurance policies or employers; and capture only those services covered by the health plan or system, and therefore may not truly reflect all care received by an individual. Claims databases can also be expensive for researchers due to fees charged for their use.
The EHR can be a very useful tool for monitoring quality indicators. Data from EHRs have the potential to most closely describe the actual care delivered to patients, and the use of automated systems has the potential to deliver much more information at lower cost than that available from manual review of paper charts. However, EHR systems are not yet optimized for quality improvement, and there are conflicting reports about the effectiveness of EHRs for quality measurement. A recent review of HIT and EHRs demonstrated improved quality and efficiency among four healthcare institutions in the US who developed their own systems,53 and use of EHRs was associated with improved quality of care for children in an urban ambulatory practice.54 However, a large retrospective cross-sectional analysis of National Ambulatory Medical Care Survey data in 2003 and 2004 showed that use of EHRs was not associated with better quality ambulatory care.55
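When EHR data are available in structured form, indicator measurement reduces to the numerator/denominator arithmetic of a process measure: count the eligible patients, then count those whose records satisfy the indicator. A minimal sketch, with an entirely hypothetical and far cleaner data layout than any real EHR extract:

```python
# Hypothetical structured EHR extract: one dict per patient.
# Field names and values are illustrative assumptions only.
records = [
    {"id": 1, "asthma": True,  "trigger_eval_documented": True},
    {"id": 2, "asthma": True,  "trigger_eval_documented": False},
    {"id": 3, "asthma": False, "trigger_eval_documented": False},
    {"id": 4, "asthma": True,  "trigger_eval_documented": True},
]

def adherence_rate(records):
    """Proportion of eligible patients (the denominator) whose care met
    the indicator (the numerator) -- the usual structure of a process
    quality measure."""
    eligible = [r for r in records if r["asthma"]]
    if not eligible:
        return None  # indicator not applicable to this population
    met = sum(1 for r in eligible if r["trigger_eval_documented"])
    return met / len(eligible)

print(adherence_rate(records))  # 2 of 3 eligible patients, ~0.67
```

In practice the hard part is not this arithmetic but extracting reliable eligibility and process data from heterogeneous records, which is precisely where the studies above diverge.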
The National Programme for Information Technology (NPfIT) is a £12 billion (US$21 billion) component of a ten-year investment in HIT commissioned by the NHS.56 A key element of the NPfIT is the NHS Care Records Service, an integrated system that will provide important elements of a patient's clinical record electronically throughout England.57 The programme's infrastructure and broadband networks are scheduled to be fully operational by 2014.58 However, a 2007 report on the NPfIT found a two-year delay in the piloting and deployment of the electronic patient record; that IT suppliers to the programme were struggling to deliver; that clinicians were uncertain about the NHS' ability to fulfill its promises; and that, for the local NHS, the total cost of the programme and the value of its benefits remained uncertain.59 There is also concern that those responsible for designing the new system are unaware of the potential research uses of routinely collected healthcare data.60
In the US, information systems for healthcare are fragmented across insurers, healthcare organizations and providers. The US lags as much as a dozen years behind other industrialized countries in HIT adoption,58 and a recent study revealed that only 13% of US physicians in ambulatory care reported using a basic system and 4% reported having an extensive, fully functional electronic-records system.61 In 2004, President Bush established the Office of the National Coordinator for Health Information Technology (ONCHIT) to promote HIT, but funds were not appropriated for it by Congress at that time.58 In 2009, Congress passed the American Recovery and Reinvestment Act which appropriated $19.2 billion to support widespread deployment and utilization of HIT and the availability of an EHR for all US citizens by 2014.62 Included in this appropriation was $2 billion in funding for the ONCHIT.
Patient privacy is a key concern in any quality measurement effort. In the UK, legislation protecting patient privacy is complex. In a recent report, the Council for Science and Technology recommended a detailed legislative review and possibly new legislation to allow the use of personal data by researchers and statisticians, especially as the NHS Care Records Service becomes a reality.63 Interestingly, the British public did not seem to consider confidential use of personal, identifiable data an invasion of privacy in a recent survey.64
In the US, the Health Insurance Portability and Accountability Act (HIPAA) of 1996 created the Privacy Rule, which permits healthcare provider organizations (“covered entities”) to disclose individually identifiable health information (called protected health information) for research purposes only if the researcher has obtained written authorization from each patient or has obtained a waiver of the authorization requirement from an institutional review board.65 HIPAA rules implicitly require that the amount of protected information released is the “minimum necessary” for a specific project.66 Since implementation, many researchers have expressed concerns that HIPAA has a negative influence on research and quality improvement efforts.65,67,68 The balance between patient privacy and research will continue to be an issue in the US as systems evolve.
We are in the midst of a transformation in healthcare stemming from the convergence of the quality movement and use of quality indicators, use of EBM in routine practice, and the potential (US) or implementation (UK) of interconnected health information systems. Advances in quality healthcare for children are evidenced by the growing library of quality indicators developed by prestigious institutions in the UK and US.
However, there are gaps in quality measurement and quality indicator development in areas important for child health. First, the majority of indicators are devoted to routine care provided in the outpatient setting, yet nearly 40% of US healthcare spending for children in 2004 was on inpatient care compared to 28% for physician/clinic visits.69 Second, few indicators have been developed for children with special healthcare needs. In 2000, 15.6% of US children younger than 18 years had a special healthcare need yet they accounted for 34% of total healthcare costs.52 Quality indicators are needed for this population of children. Third, there are few quality indicators focused on educating parents about fundamental child-rearing topics such as safety and child development. Education of caregivers in these areas may prevent accidental injury and allow early identification of potential learning or behavioral problems. Finally, the indicators surveyed for this paper do not address the greater social context of childhood, including family functioning and school performance, which may be amenable to intervention and increase the likelihood that a child will be a productive member of society as an adult.
The likelihood that quality indicators will improve care delivery and health outcomes is dependent on many factors. Implementation of quality indicators often leads to increased adherence to those measures (i.e., Hawthorne effect: what gets measured gets improved). If adherence to a quality indicator is also highly correlated with an increase in desired health outcome, then increased adherence to this indicator may result in improved health. However, the degree of improvement may vary across different patient populations due to issues such as general health, co-morbidities, and genetic and environmental factors. Therefore, proving the process-outcome relationship can be difficult.
At this juncture, we propose several recommendations to advance the quality indicator development agenda for children. First, the library of quality indicators for children needs to be expanded, particularly in inpatient care and in chronic care, and made available to the child health community in an integrated, easy-to-use format. Second, continued support of integrated, comprehensive HIT efforts from government and healthcare agencies is necessary to support quality measurement and ultimately provide evidence for the process-outcome relationship and the development of better quality indicators. Finally, the science and tools necessary to measure quality and develop quality indicators should be taught to more healthcare professionals to allow widespread integration of quality efforts into routine clinical practice.
The 20th century saw significant decreases in child mortality following the introduction of immunizations and the use of antibiotics for common childhood illnesses. The science of quality measurement and indicator development combined with a growing scientific evidence base and integrated information systems in healthcare may prove to be the next leap forward in improving child health in the 21st century.
We are grateful to Howard Bauchner, MD for his thoughtful review of the manuscript.
FUNDING Dr. Kavanagh is supported by the T32 HP10263 research training grant from the National Institutes of Health (USA). Dr. Adams is supported by the D55 HP00006 Faculty Development Award from the Health Resources and Services Administration, Department of Health and Human Services (USA). Dr. Wang is supported by the National Eye Institute K23 Career Development Award (USA) and the Robert Wood Johnson Physician Faculty Scholars Program (USA).
COMPETING INTERESTS None.