Arch Dis Child. Author manuscript; available in PMC 2009 November 9.
Published in final edited form as:
PMCID: PMC2774840

Quality indicators and quality assessment in child health


Quality indicators are systematically developed statements that can be used to assess the appropriateness of specific healthcare decisions, services and outcomes. In this review, we highlight the range and type of indicators that have been developed for children in the UK and US by prominent governmental agencies and private organizations. We also classify these indicators in an effort to identify areas of child health that may lack quality measurement activity. We review the current state of health information technology in both countries since these systems are vital to quality efforts. Finally, we propose several recommendations to advance the quality indicator development agenda for children. The convergence of quality measurement and indicator development, a growing scientific evidence base and integrated information systems in healthcare may lead to substantial improvements for child health in the 21st century.

Keywords: Quality, Quality indicators, General pediatrics, Health information technology

Health in childhood contributes to adult health 1-3 and improving the quality of healthcare for UK and US children is now viewed as critical.4,5 An essential component of any improvement effort is the identification of specific, measurable indicators of quality. Quality indicators, also known as performance indicators and review criteria, are systematically developed statements that can be used to assess the appropriateness of specific healthcare decisions, services and outcomes.6 These indicators are developed as a first step in a quality improvement effort and are ideally drawn from the available scientific evidence.

There is evidence that the healthcare system is underperforming for children and that there is large variation in care. A study conducted by the RAND Corporation showed that US children received less than 50% of overall indicated care in the outpatient setting.7 A national survey also found that pediatricians used more than 100 different practice guidelines, but no single guideline except for asthma was used by more than 27% of pediatricians.8 Recent reports have also found widespread variation in the hospital services available to children9 and in the availability of pediatric cardiology services10 in the UK.

In this review, we present examples of widely available quality indicators and their use in quality measurement efforts in the UK and US. It is not our intent to comprehensively review all indicators; rather, we will use examples from prominent agencies to illustrate the current state of quality indicators and quality measurement. We will also discuss the role of health information technology (HIT) in the measurement of quality indicators. Finally, we will outline the areas requiring additional work in indicator development in order to improve healthcare provided to all children.


The quality movement in the 20th century stemmed from efforts to produce high quality products from increasingly complex industrial processes. Three theorists are credited with the early development and dissemination of quality assurance and improvement methodologies in manufacturing: Walter A. Shewhart, W. Edwards Deming and Joseph M. Juran. In the 1920s and 1930s, Shewhart and Deming developed the concept of statistical control of processes (i.e., distinguishing common cause from special cause variation), the statistical process control chart to manage and improve processes, and the Plan-Do-Study-Act (or Plan-Do-Check-Act) cycle, a simple method to test a change on a small scale before making a major decision or change.11,12 Juran overlapped briefly with Shewhart and Deming at Western Electric in the 1920s and is credited with applying the Pareto principle to quality (i.e., 80% of a problem arises from 20% of the causes) and with developing the quality improvement method known as “Total Quality Management.”13 Many have applied these concepts to the medical field, including Donald Berwick and the Institute for Healthcare Improvement.14,15
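The statistical process control concepts above can be made concrete with a small sketch. The following illustration is not drawn from the paper; the infection rates and the baseline period are hypothetical. It computes individuals-chart (XmR) control limits from a stable baseline, estimating sigma from the mean moving range, and flags later points outside mean ± 3 sigma as special-cause variation:

```python
def xmr_limits(baseline):
    """Individuals (XmR) control chart limits from a stable baseline period.
    Sigma is estimated from the mean moving range (MR-bar / 1.128)."""
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean + 3 * sigma

# Hypothetical monthly infection rates per 1000 patient-days
baseline = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3]   # stable period used to set limits
lcl, ucl = xmr_limits(baseline)

# Subsequent months are judged against the baseline limits
new_points = [2.2, 5.8, 2.1]
flags = [not (lcl <= p <= ucl) for p in new_points]
print(flags)  # prints [False, True, False]: only the 5.8 reading signals special cause
```

In practice, control charts also apply additional run rules (e.g., shifts and trends), but the 3-sigma limit is the core of Shewhart's method for separating common cause from special cause variation.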

The Institute of Medicine defines healthcare quality as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”16 To achieve high quality healthcare, there must be a framework to structure quality improvement efforts, a scientifically based system to define best practices, demonstration of variation in the quality of care provided, and the means to monitor the effectiveness of interventions - all of which exist today. In 1966, Donabedian published his seminal work outlining the “structure/process/outcome” quality improvement paradigm, which is still used routinely today.17 Evidence-based medicine (EBM) provides the scientific basis for defining high quality care and is produced through efforts from groups such as the Cochrane Collaboration and the National Institute for Health and Clinical Excellence (NICE) in the UK, and the Agency for Healthcare Research and Quality and specialty societies in the US. Improved health services research methodologies have revealed wide variations in the care provided to children in the UK9,10 and the US18. Finally, advances in health information technology (HIT) have given researchers the ability to measure and monitor quality.


Quality indicators are explicitly defined and measurable items referring to the structures (e.g., the environment in which care was provided), processes (e.g., whether the patient received indicated care) or outcomes of care (e.g., mortality).19,20 Desirable characteristics of quality indicators include: unambiguous descriptions and clear definitions of the variables to be measured; explicit definition of the population to be included and the setting to which they apply; and links between process measures and health outcomes for the care being reviewed.21 Targets monitored by these indicators should be specific, measurable, achievable, relevant and time-specific.22

Development of quality indicators

Whenever possible, indicators should be based on the strongest scientific evidence available (e.g., randomized controlled trials).21,23 However, many areas of healthcare have a limited evidence base; therefore, systematic methods have been developed that combine evidence with expert opinion. These methods include the consensus development conference (e.g., use of hydroxyurea in sickle cell disease24), guidelines developed by an iterated consensus rating procedure (e.g., NICE guidelines for diabetes25), the Delphi technique (e.g., UK prescribing practices in general practice26) and the RAND appropriateness method (e.g., very low birth weight follow-up care27).

Quality indicators versus clinical practice guidelines

Clinical practice guidelines differ from quality indicators in that they are systematically developed statements designed to assist practitioner and patient decisions about appropriate healthcare for specific clinical circumstances.28 Guidelines are often less specific than quality indicators and may not provide sufficient detail to support the actual measurement of recommendations. They are also used prospectively whereas quality indicators often assess care retrospectively.29 For example, the National Heart, Lung and Blood Institute and National Asthma Education and Prevention Program released guidelines in 2007 for the diagnosis and management of asthma, which recommended the identification of asthma triggers to educate patients to avoid unnecessary exposures or alert them to exposures that might require increased treatment.30 In contrast, a quality indicator for asthma developed by RAND recommended that all patients diagnosed with asthma should have an evaluation of possible triggers within 6 months of diagnosis.31 The quality indicator clearly defines the population, time frame and the specific goal to be measured.
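Because the indicator specifies an explicit population, time window and goal, adherence to it can be computed directly from clinical data. A minimal sketch of such a computation follows; the patient records and the 183-day window are hypothetical illustrations, not taken from the cited indicator set:

```python
from datetime import date

# Hypothetical records: (asthma diagnosis date, trigger-evaluation date or None)
patients = [
    (date(2007, 1, 10), date(2007, 3, 2)),    # evaluated within 6 months -> pass
    (date(2007, 2, 5), date(2007, 11, 20)),   # evaluated too late -> fail
    (date(2007, 4, 1), None),                 # never evaluated -> fail
]

WINDOW_DAYS = 183  # roughly 6 months; the exact operational definition is assumed

def meets_indicator(diagnosed, evaluated):
    """Pass if a trigger evaluation occurred within the window after diagnosis."""
    return evaluated is not None and (evaluated - diagnosed).days <= WINDOW_DAYS

passed = sum(meets_indicator(d, e) for d, e in patients)
adherence = passed / len(patients)
print(f"Adherence: {adherence:.0%}")  # prints Adherence: 33% (1 of 3 patients)
```

The denominator (all children diagnosed with asthma) and numerator (those evaluated in time) are exactly the elements a guideline typically leaves unspecified, which is why indicators, not guidelines, support measurement.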

Process-outcome relationship

Process data often provide the basis of quality indicators because they are more easily measured. Processes are also more sensitive measures of quality than outcome data because outcomes are only partially produced by health services and are frequently influenced more by other factors (e.g., natural history of the disease, patient physiologic reserve, or patient age).32 Processes are likely to be under providers' control and thus can be improved through individual training or systems change. Finally, a poor outcome does not occur every time there is an error in the provision of care.20

To date, studies attempting to demonstrate that improved processes lead to better outcomes have produced mixed results. Better performance on process quality measures was strongly associated with better survival among community-dwelling vulnerable adults,33 and 100% adherence to a set of quality indicators was significantly associated with better overall survival with breast cancer.34 However, acute care processes for stroke were not associated with functional outcome at 12 months,35 and mixed results have been reported on the relationship between HbA1c levels and mortality.36-38 These studies highlight the difficulty and complexity in selecting appropriate processes to measure and the need for additional studies to examine this relationship further, especially in children.


For the purposes of this review, we searched governmental healthcare and accreditation agencies in both the UK and US (e.g., the National Health Service, the Agency for Healthcare Research and Quality) as well as private foundations (e.g., RAND, the Cystic Fibrosis Foundation) to identify examples of quality indicators developed for children.

Quality indicators developed by UK and US government agencies

In the UK, the government introduced the Quality and Outcomes Framework (QOF) as part of the new General Medical Services (GMS) contract in 2004.39 Currently, the QOF is organized into 4 domains: clinical, organizational, patient experience and additional services. The clinical domain consists of 80 indicators covering 19 different clinical areas, and less than 20% of these indicators apply to children. The National Service Framework for Children was released in 2004 and provides standards of care rather than quality indicators.4

In the US, AHRQ developed 18 indicators for inpatient care provided to children, including asthma, nosocomial infections, and selected postoperative complications.40,41 In addition, the Joint Commission on Accreditation of Healthcare Organizations has worked with the Centers for Medicare & Medicaid Services (CMS) and the National Quality Forum (NQF) to develop quality indicators that provide data about the best treatments or practices for hospital-based care, and it mandates annual reporting by healthcare organizations for accreditation. The Joint Commission, in conjunction with the Pediatric Data Quality System Collaborative Measure Workgroup (Pedi-QS), developed indicators to review the delivery of inpatient asthma care42 and care provided in the pediatric intensive care unit.43 Finally, CMS established the Physician Quality Reporting Initiative (PQRI) in 2007.44 This voluntary quality reporting program provides financial incentives to eligible professionals who report data on over 100 different indicators and delivers electronic feedback reports to those who participate. Few of the indicators in this program are specific to children.

Quality indicators developed by private foundations/organizations

In 1993, the National Committee for Quality Assurance (NCQA) introduced the Health Plan Employer Data and Information Set (HEDIS), which contained performance measures that were used in the assessment and licensure of managed care plans. Currently, NCQA publishes annual “report cards” on over 90% of US health plans for outpatient care using performance measures included in HEDIS. The 2008 dataset consists of 71 measures in 8 domains of care, with 25% of indicators specific to children.45

The RAND Corporation has been instrumental in the development of specific methodologies to develop quality indicators, namely the RAND/UCLA Appropriateness Method (or the Modified Delphi Method).46 In 2000, the RAND Corporation published a comprehensive set of over 400 outpatient indicators for children and adolescents in the US31 and has used them to analyze the state of healthcare delivered to children.7 In addition, RAND developed a set of 70 indicators for the follow-up care of very low birth weight infants (<1500 grams) after discharge from the neonatal intensive care unit.27

The Institute for Healthcare Improvement (IHI) is a leader in the development and implementation of quality indicators in collaborative quality improvement efforts at institutions across the US and Europe. In 1995, IHI developed a series of collaborative projects called the Breakthrough Series (BTS) to address issues such as reducing Cesarean section rates while maintaining or improving maternal and fetal outcomes; reducing costs and improving outcomes in adult cardiac surgery; and improving prescribing practices.15 Presently, IHI has pediatric indicators for HIV and asthma care.47,48

Finally, private foundations have developed quality indicators to improve care for specific diseases or conditions. For example, the Cystic Fibrosis Foundation has seven goals to improve cystic fibrosis care in its sponsored care centers, including 14 quality indicators.49 The CF Foundation's efforts are unique in that the Foundation provides information systems and financial support to operationalize data collection and indicator measurement. There is interest in developing quality indicators for other common chronic conditions in pediatrics such as type 1 diabetes and inflammatory bowel disease.


Table 1 provides a listing of the quality indicators described previously, organized by setting. From this list, fewer than 5% of indicators are devoted to inpatient care. Even though some are applicable to children, most of the indicators developed for the inpatient setting are not unique to children. For example, “foreign body left in during procedure” and “accidental puncture or laceration during surgery” are relevant to quality healthcare for adults as well as children.

Table 1
Commonly referenced quality indicators, sorted by topic

Quality indicators can be further organized by function, such as screening, diagnosis and treatment (see Table 2). Less than one-third of the indicators assembled for this paper focus on preventive care (i.e., screening and prevention), while nearly two-thirds are devoted to the diagnosis and treatment of medical conditions. However, most of the care provided to children in clinical practice has a preventive focus.

Table 2
Selected quality indicators categorized by function

Finally, we organized the indicators by Donabedian classification (see Table 3). None of the pediatric indicators included in this paper measure structural data, while more than 95% focus on care processes. In addition, none of these indicators measure school achievement or consider the family context4,50, both of which are important contributors to child health.

Table 3
Selected quality indicators categorized by Donabedian classification


Data sources for quality indicators

The implementation and monitoring of quality through the use of indicators depend on the availability of discrete and accurate data. There are three primary sources from which data may be extracted for indicator measurement: paper charts, claims or administrative data, and electronic health record (EHR) data. While many have hoped for a seamless, automated approach to quality measurement through electronic systems, each of these sources has advantages and disadvantages.

Paper medical charts are still in widespread use in the US and are capable of providing detailed information about the care and timing of treatment that a patient received. While it is possible to extract meaningful data from paper charts, extracting these data is costly and often requires specially-trained individuals. The time required to gather data often limits the feasibility of performing large-scale projects. Missing data are also a concern, since the chart primarily reflects the documentation of care an individual receives from a certain provider or practice and may not contain a comprehensive set of data nor data about care provided elsewhere.

Electronic claims data are widely available, particularly in the US, and can be useful because they contain data on large numbers of patients and provide information on both the utilization51 and cost52 of care provided. There is also a clear incentive to submit comprehensive claims data to maximize reimbursement. However, claims data are usually limited to diagnoses and procedures; do not provide longitudinal information when enrollees change insurance policies or employers; and capture only those services covered by the health plan or system, and therefore may not truly reflect all care an individual received. Claims databases can also be expensive for researchers because of the fees charged for their use.

The EHR can be a very useful tool for monitoring quality indicators. Data from EHRs have the potential to describe most closely the actual care delivered to patients, and automated systems have the potential to deliver much more information at lower cost than manual review of paper charts. However, EHR systems are not yet optimized for quality improvement, and there are conflicting reports about the effectiveness of EHRs for quality measurement. A recent review of health information technology (HIT) and EHRs demonstrated improved quality and efficiency among four healthcare institutions in the US that developed their own systems,53 and use of EHRs was associated with improved quality of care for children in an urban ambulatory practice.54 However, a large retrospective cross-sectional analysis of National Ambulatory Medical Care Survey data from 2003 and 2004 showed that use of an EHR was not associated with better quality ambulatory care.55


The National Programme for Information Technology (NPfIT) is a £12 billion (US$21 billion) component of a ten-year investment in HIT commissioned by the NHS.56 A key element of the NPfIT is the NHS Care Records Service, an integrated system that will provide important elements of a patient's clinical record electronically throughout England.57 The programme's infrastructure and broadband networks are scheduled to be fully operational by 2014.58 However, a 2007 report on the NPfIT found a two-year delay in the piloting and deployment of the electronic patient record; that IT suppliers to the programme were struggling to deliver; that clinicians were uncertain about the NHS's ability to fulfill its promises; and that, for the local NHS, the total cost of the programme and the value of its benefits remained uncertain.59 There is also concern that those responsible for designing the new system are unaware of the potential research uses of routinely collected healthcare data.60

In the US, information systems for healthcare are fragmented across insurers, healthcare organizations and providers. The US lags as much as a dozen years behind other industrialized countries in HIT adoption,58 and a recent study revealed that only 13% of US physicians in ambulatory care reported using a basic system and 4% reported having an extensive, fully functional electronic-records system.61 In 2004, President Bush established the Office of the National Coordinator for Health Information Technology (ONCHIT) to promote HIT, but funds were not appropriated for it by Congress at that time.58 In 2009, Congress passed the American Recovery and Reinvestment Act which appropriated $19.2 billion to support widespread deployment and utilization of HIT and the availability of an EHR for all US citizens by 2014.62 Included in this appropriation was $2 billion in funding for the ONCHIT.


Patient privacy is a key concern in any quality measurement effort. In the UK, legislation protecting patient privacy is complex. In a recent report, the Council for Science and Technology recommended a detailed legislative review and possibly new legislation to allow the use of personal data by researchers and statisticians, especially as the NHS Care Records Service becomes a reality.63 Interestingly, in a recent survey the British public did not seem to consider the confidential use of personal, identifiable data an invasion of privacy.64

In the US, the Health Insurance Portability and Accountability Act (HIPAA) of 1996 created the Privacy Rule, which permits healthcare provider organizations (“covered entities”) to disclose individually identifiable health information (“protected health information”) for research purposes only if the researcher has obtained written authorization from each patient or has obtained a waiver of the authorization requirement from an institutional review board.65 HIPAA rules implicitly require that the amount of protected health information released be the “minimum necessary” for a specific project.66 Since implementation, many researchers have expressed concerns that HIPAA has a negative influence on research and quality improvement efforts.65,67,68 The balance between patient privacy and research will continue to be an issue in the US as systems evolve.


We are in the midst of a transformation in healthcare stemming from the convergence of the quality movement and use of quality indicators, use of EBM in routine practice, and the potential (US) or implementation (UK) of interconnected health information systems. Advances in quality healthcare for children are evidenced by the growing library of quality indicators developed by prestigious institutions in the UK and US.

However, there are gaps in quality measurement and quality indicator development in areas important for child health. First, the majority of indicators are devoted to routine care provided in the outpatient setting, yet nearly 40% of US healthcare spending for children in 2004 was on inpatient care compared to 28% for physician/clinic visits.69 Second, few indicators have been developed for children with special healthcare needs. In 2000, 15.6% of US children younger than 18 years had a special healthcare need yet they accounted for 34% of total healthcare costs.52 Quality indicators are needed for this population of children. Third, there are few quality indicators focused on educating parents about fundamental child-rearing topics such as safety and child development. Education of caregivers in these areas may prevent accidental injury and allow early identification of potential learning or behavioral problems. Finally, the indicators surveyed for this paper do not address the greater social context of childhood, including family functioning and school performance, which may be amenable to intervention and increase the likelihood that a child will be a productive member of society as an adult.

The likelihood that quality indicators will improve care delivery and health outcomes depends on many factors. Implementation of quality indicators often leads to increased adherence to those measures (i.e., the Hawthorne effect: what gets measured gets improved). If adherence to a quality indicator is also highly correlated with a desired health outcome, then increased adherence to that indicator may result in improved health. However, the degree of improvement may vary across different patient populations because of issues such as general health, co-morbidities, and genetic and environmental factors. Therefore, proving the process-outcome relationship can be difficult.

At this juncture, we propose several recommendations to advance the quality indicator development agenda for children. First, the library of quality indicators for children needs to be expanded, particularly in inpatient care and in chronic care, and made available to the child health community in an integrated, easy-to-use format. Second, continued support of integrated, comprehensive HIT efforts from government and healthcare agencies is necessary to support quality measurement and ultimately provide evidence for the process-outcome relationship and the development of better quality indicators. Finally, the science and tools necessary to measure quality and develop quality indicators should be taught to more healthcare professionals to allow widespread integration of quality efforts into routine clinical practice.

The 20th century saw significant decreases in child mortality following the introduction of immunizations and the use of antibiotics for common childhood illnesses. The science of quality measurement and indicator development combined with a growing scientific evidence base and integrated information systems in healthcare may prove to be the next leap forward in improving child health in the 21st century.


We are grateful to Howard Bauchner, MD for his thoughtful review of the manuscript.

FUNDING Dr. Kavanagh is supported by the T32 HP10263 research training grant from the National Institutes of Health (USA). Dr. Adams is supported by the D55 HP00006 Faculty Development Award from the Health Resources and Services Administration, Department of Health and Human Services (USA). Dr. Wang is supported by the National Eye Institute K23 Career Development Award (USA) and the Robert Wood Johnson Physician Faculty Scholars Program (USA).




1. Case A, Fertig A, Paxson C. The lasting impact of childhood health and circumstance. Journal of Health Economics. 2005;24(2):365–389. [PubMed]
2. Dietz WH. Childhood weight affects adult morbidity and mortality. J Nutr. 1998;128(2):411S–414S. [PubMed]
3. Gunnell DJ, Frankel SJ, Nanchahal K, Peters TJ, Davey Smith G. Childhood obesity and adult cardiovascular mortality: A 57-y follow-up study based on the Boyd Orr cohort. Am J Clin Nutr. 1998;67(6):1111–1118. [PubMed]
4. National service framework for children, young people and maternity services. Department of Health; London: 2004. [PubMed]
5. National Research Council and Institute of Medicine . Committee on Evaluation of Children's Health, Board on Children, Youth, and Families, Division of Behavioral and Social Sciences and Education. The National Academy Press; Washington, DC: 2004. Children's health, the nation's wealth: Assessing and improving child health.
6. Field MJ, Lohr KN, editors. Guidelines for clinical practice: From development to use. Institute of Medicine; Washington, DC: 1992.
7. Mangione-Smith R, DeCristofaro AH, Setodji CM, et al. The quality of ambulatory care delivered to children in the United States. N Engl J Med. 2007;357(15):1515–1523. [PubMed]
8. Flores G, Lee M, Bauchner H, Kastner B. Pediatricians' attitudes, beliefs, and practices regarding clinical practice guidelines: A national survey. Pediatrics. 2000;105(3):496–501. [PubMed]
9. Improving services for children in hospital. Commission for Healthcare Audit and Inspection; London: 2007. Available at Accessed September 9, 2008.
10. Karuppaswamy V, Kelsall W. Review of paediatric cardiology services in district general hospitals in United Kingdom. Arch Dis Child. 2008:141481. adc.2008. [PubMed]
11. Shewhart WA. Economic control of quality of manufactured product. D. Van Nostrand Co; New York: 1931.
12. Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide. Jossey-Bass Publishers; San Francisco, CA: 1996.
13. Juran JM. Architect of quality. McGraw-Hill; New York, NY: 2004.
14. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med. 1989;320(1):53–56. [PubMed]
15. Kilo CM. Improving care through collaboration. Pediatrics. 1999;103(1):384–393. [PubMed]
16. Lohr KN, Schroeder SA. A strategy for quality assurance in Medicare. N Engl J Med. 1990;322:707–712. [PubMed]
17. Donabedian A. Evaluating the quality of medical care. The Milbank Memorial Fund Quarterly. 1966;44(3):166–203. Reprinted in The Milbank Quarterly in 2005. [PubMed]
18. Gawande A. The bell curve. The New Yorker. 2004 December 6;:82–91.
19. Campbell SM, Braspenning J, Hutchinson A, Marshall MN. Improving the quality of health care: Research methods used in developing and applying quality indicators in primary care. BMJ. 2003;326(7393):816–819. [PMC free article] [PubMed]
20. Brook RH, McGlynn EA, Cleary PD. Measuring quality of care- part two of six. N Engl J Med. 1996;335(13):966–970. [PubMed]
21. Hearnshaw HM, Harker RM, Cheater FM, Baker RH, Grimshaw GM. Expert consensus on the desirable characteristics of review criteria for improvement of health care quality. Qual Saf Health Care. 2001;10(3):173–178. [PMC free article] [PubMed]
22. Arah OA, Klazinga NS, Delnoij DMJ, Asbroek AHAT, Custers T. Conceptual frameworks for health systems performance: A quest for effectiveness, quality, and improvement. Int J Qual Health Care. 2003;15(5):377–398. [PubMed]
23. McGlynn EA, Asch SM. Developing a clinical performance measure. Am J Prev Med. 1998;14(3 Supplement 1):14–21. [PubMed]
24. Brawley OW, Cornelius LJ, Edwards LR, et al. National Institutes of Health consensus development conference statement: Hydroxyurea treatment for sickle cell disease. Ann Intern Med. 2008;148(12):1–9. [PubMed]
25. National Diabetes Support Team . NICE and diabetes: A summary of relevant guidelines. NHS; London: Jul, 2006.
26. Campbell SM, Cantrill JA, Roberts D. Prescribing indicators for UK general practice: Delphi consultation study. BMJ. 2000;321(7258):425–428. [PMC free article] [PubMed]
27. Wang CJ, McGlynn EA, Brook RH, et al. Quality-of-care indicators for the neurodevelopmental follow-up of very low birth weight children: Results of an expert process. Pediatrics. 2006;117:2080–2092. [PubMed]
28. Institute of Medicine . In: Clinical practice guidelines: Directions for a new program. Field MJ, Lohr KN, editors. The National Academy Press; Washington, DC: 1990.
29. Campbell SM, Roland MO, Shekelle PG, Cantrill JA, Buetow SA, Cragg DK. Development of review criteria for assessing the quality of management of stable angina, adult asthma, and non-insulin dependent diabetes mellitus in general practice. Qual Health Care. 1999;8(1):6–15. [PMC free article] [PubMed]
30. National Heart Lung and Blood Institute, National Asthma Education and Prevention Program . Expert panel report 3: Guidelines for the diagnosis and management of asthma. Bethesda, MD: 2007.
31. McGlynn EA, Damberg CL, Kerr EA, Schuster MA. Quality of care for children and adolescents: A review of the literature and quality indicators. RAND; Santa Monica, CA: 2000.
32. Brook RH, McGlynn EA, Shekelle PG. Defining and measuring quality of care: A perspective from US researchers. Int J Qual Health Care. 2000;12(4):281–295. [PubMed]
33. Higashi T, Shekelle PG, Adams JL, et al. Quality of care is associated with survival in vulnerable older patients. Ann Intern Med. 2005;143(4):274–281. [PubMed]
34. Cheng SH, Wang CJ, Lin J-L, et al. Adherence to quality indicators and survival in breast cancer patients. Medical Care. 2008;47(2):217–225. [PubMed]
35. McNaughton H, McPherson K, Taylor W, Weatherall M. Relationship between process and outcome in stroke care. Stroke. 2003;34(3):713–717. [PubMed]
36. Eshaghian S, Horwich TB, Fonarow GC. An unexpected inverse relationship between HbA1c levels and mortality in patients with diabetes and advanced systolic heart failure. American Heart Journal. 2006;151(1):91.e1–91.e6. [PubMed]
37. Gaede P, Lund-Andersen H, Parving H-H, Pedersen O. Effect of a multifactorial intervention on mortality in type 2 diabetes. N Engl J Med. 2008;358(6):580–591. [PubMed]
38. Action to Control Cardiovascular Risk in Diabetes. ACCORD study announcement. Accessed on September 15, 2008.
39. NHS Employers and British Medical Association . Quality outcomes framework guidance for gms contract, 2007/08. National Health Service; London: Apr, 2008. Available at:$File/qof06.pdf. Accessed on August 20, 2008.
40. Miller MR, Elixhauser A, Zhan C. Patient safety events during pediatric hospitalizations. Pediatrics. 2003;111(6):1358–1366.
41. McDonald K, Romano P, Davies S, et al. Measures of pediatric health care quality based on hospital administrative data: The pediatric quality indicators. Prepared for the Agency for Healthcare Research and Quality. Feb 2006. Available at: Accessed July 25, 2008.
42. Pediatric Data Quality System Collaborative Measure Workgroup. Children's inpatient asthma care measures. National Association of Children's Hospitals and Related Institutions; Alexandria, VA. Available at: D=22925&TEMPLATE=/CM/HTMLDisplay.cfm.
43. Scanlon MC, Mistry KP, Jeffries HE. Determining pediatric intensive care unit quality indicators for measuring pediatric intensive care unit safety. Pediatr Crit Care Med. 2007;8(Suppl):S3–S10.
44. Centers for Medicare & Medicaid Services. New PQRI EHR measure specifications. Washington, DC. Available at:
45. NCQA. HEDIS 2008 measures. Washington, DC. Available at:
46. Brook RH. The RAND/UCLA Appropriateness method. In: McCormick KA, Moore SR, Siegel RA, editors. Clinical Practice Guideline Development: Methodology Perspectives. Agency for Health Care Policy and Research; Rockville, MD: 1994.
47. Martin JA, Hamilton BE, Sutton PD, et al. Births: Final data for 2004. National vital statistics reports; vol 55, no 1. National Center for Health Statistics; Hyattsville, MD: 2006.
48. Therrell BL, Johnson A, Williams D. Status of newborn screening programs in the United States. Pediatrics. 2006;117(5):S212–S252.
49. Cystic Fibrosis Foundation . Cystic fibrosis foundation patient registry: Annual data report 2006. Bethesda, MD: 2008.
50. Palmer RH, Miller MR. Methodologic challenges in developing and implementing measures of quality for child health care. Ambul Pediatr. 2001;1(1):39–52.
51. Wang CJ, Elliott MN, McGlynn EA, Brook RH, Schuster MA. Population-based assessments of ophthalmologic and audiologic follow-up in children with very low birth weight enrolled in Medicaid: A quality-of-care study. Pediatrics. 2008;121(2):e278–e285.
52. Newacheck PW, Kim SE. A national profile of health care utilization and expenditures for children with special health care needs. Arch Pediatr Adolesc Med. 2005;159(1):10–17.
53. Chaudhry B, Wang J, Wu S, et al. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144:E12–E22.
54. Adams WG, Mann AM, Bauchner H. Use of an electronic medical record improves the quality of urban pediatric primary care. Pediatrics. 2003;111(3):626–632.
55. Linder JA, Ma J, Bates DW, Middleton B, Stafford RS. Electronic health record use and the quality of ambulatory care in the United States. Arch Intern Med. 2007;167(13):1400–1405.
56. Hendy J, Fulop N, Reeves BC, Hutchings A, Collin S. Implementing the NHS information technology programme: Qualitative study of progress in acute trusts. BMJ. 2007;334(7608):1360.
57. Chantler C, Clarke T, Granger R. Information technology in the English National Health Service. JAMA. 2006;296(18):2255–2258.
58. Anderson GF, Frogner BK, Johns RA, Reinhardt UE. Health care spending and use of information technology in OECD countries. Health Aff. 2006;25(3):819–831.
59. House of Commons, Committee of Public Accounts, Department of Health. The National Programme for IT in the NHS. Twentieth report of the session 2006–2007. Mar 2007. Available at: Accessed August 15, 2008.
60. Black N. Maximising research opportunities of new NHS information systems. BMJ. 2008;336(7636):106–107.
61. DesRoches CM, Campbell EG, Rao SR, et al. Electronic health records in ambulatory care -- a national survey of physicians. N Engl J Med. 2008;359(1):50–60.
62. American Recovery and Reinvestment Act of 2009. 111th Congress of the United States; 2009.
63. Council for Science and Technology. Better use of personal information: Opportunities and risks. 2005. Available at: Accessed September 1, 2008.
64. Barrett G, Cassell JA, Peacock JL, Coleman MP. National survey of British public's views on use of identifiable medical data by the National Cancer Registry. BMJ. 2006;332(7549):1068–1072.
65. Ness RB; Joint Policy Committee, Societies of Epidemiology. Influence of the HIPAA privacy rule on health research. JAMA. 2007;298(18):2164–2170.
66. Kulynych J, Korn D. The effect of the new federal medical-privacy rule on research. N Engl J Med. 2002;346(3):201–204.
67. Armstrong D, Kline-Rogers E, Jani SM, et al. Potential impact of the HIPAA privacy rule on data collection in a registry of patients with acute coronary syndrome. Arch Intern Med. 2005;165(10):1125–1129.
68. Marsh JL, McMaster W, Parvizi J, Katz SI, Spindler K. AOA symposium. Barriers (threats) to clinical research. J Bone Joint Surg Am. 2008;90(8):1769–1776.
69. Hartman M, Catlin A, Lassman D, Cylus J, Heffler S. US health spending by age, selected years through 2004. Health Aff. 2008;27(1):w1–w12.