J Am Board Fam Med. Author manuscript; available in PMC Feb 6, 2013.
Published in final edited form as:
PMCID: PMC3565596
NIHMSID: NIHMS419646
Are pediatric quality care measures too stringent?
Allison Casciato, BA (MSII),1 Heather Angier, MPH,1 Christina Milano, MD,1 Nicholas Gideonse, MD,1 Rachel Gold, PhD, MPH,2 and Jennifer DeVoe, MD, DPhil1,3
1Oregon Health & Science University Department of Family Medicine, 3181 SW Sam Jackson Park Road, Portland, Oregon 97239; 503-494-8311
2Kaiser Permanente Northwest, Center for Health Research, Portland, Oregon
3OCHIN Community Health Information Network Research Director, Portland, Oregon
Introduction
We aimed (1) to demonstrate the application of national pediatric quality measures derived from claims-based data, for use with Electronic Medical Record (EMR) data, and (2) to determine the extent to which rates differ if specifications were modified to allow for flexibility in measuring receipt of care.
Methods
We reviewed EMR data for all patients up to 15 years of age with ≥1 office visit to a safety net family medicine clinic in 2010 (n=1,544). We assessed rates of appropriate well-child visits (WCVs), immunizations, and body mass index (BMI) documentation, defined strictly by national guidelines versus by guidelines with clinically relevant modifications.
Results
Among children <3 years, 52.4% attended ≥6 WCVs by 15 months; 60.8% had ≥6 visits by 2 years. Less than 10% completed all 10 vaccination series before their 2nd birthday; with modifications, 36% were up-to-date. Among children aged 3-15 years, 63% had a BMI percentile recorded; 91% had BMI recorded within 36 months of the measurement year.
Discussion
Applying relevant modifications to national quality measure definitions captured a substantial number of additional services. Strict adherence to measure definitions might miss the true quality of care provided, especially in populations who may have sporadic patterns of care utilization.
Keywords: quality of health care, child health, Electronic Medical Record, low-income population
Practice-based research capabilities have been enhanced by the increasing availability of data from electronic medical records.1,2 In many cases, it is necessary to adapt definitions from previous data sources, such as health insurance claims data, for use in Electronic Medical Record (EMR) data. In this study, we aimed (1) to demonstrate the application of national pediatric quality measures (primarily designed for claims-based data analyses) for use with EMR data, and (2) to determine the extent to which rates might differ if specifications were modified to allow for some flexibility in measuring receipt of care based on how care is often provided in the “real world.” We applied these measures at a specific Oregon safety net family medicine clinic. This clinic was preparing to begin quality improvement efforts around child health but had not yet started; our work was designed to inform their future efforts.
We identified pediatric quality measures from those developed to meet the Children’s Health Insurance Program Reauthorization Act (CHIPRA) of 2009 mandate. The CHIPRA included provisions for developing a set of universal measures to facilitate standardized reporting and measurement of pediatric care quality.3-5 This set of measures was developed by the Agency for Healthcare Research and Quality in collaboration with the Centers for Medicare and Medicaid Services using a transparent, evidence-based process, with input from multiple stakeholders.2,5 Twenty-four measures were selected as benchmarks of the quality of disease prevention, surveillance, and treatment for conditions commonly seen in primary care; most are measured as annual rates.3,4,6 Measures were chosen based on their validity, feasibility, and significance to improving health outcomes.
Analyses of the CHIPRA quality measures will ideally identify gaps in the provision of pediatric health care within and across populations.5,7 Information and specifics about the CHIPRA measures are reported elsewhere.3,5,8 Most of the CHIPRA measures were designed for querying health insurance claims data. Claims data, however, do not include the uninsured,9,10 and may miss care that is delivered but not submitted to the insurance plan for billing.11,12 Additionally, while many of the CHIPRA measures specify strict timelines for receipt of preventive services, there is a general lack of evidence for much of this specificity.13,14 Our study contributes to the literature by demonstrating the use of these measures in EMR data and by highlighting adaptations that reflect how pediatric care is often delivered in clinical practice.
Study Population
We conducted a retrospective cohort study of all children ages 6 months to 15 years as of July 1, 2011 who had at least one primary care visit to the clinic in 2010 (n=1,544). All clinicians at this study clinic use the same Epic© Systems EMR for all clinical encounters; their EMR has been in place for over 5 years.
Measures
Our practice-based research team of clinicians and researchers selected CHIPRA measures that were relevant to the clinic’s current quality improvement efforts. We assessed each measure for age-appropriate patients from our cohort.
In team discussions regarding the measure specifications, clinicians highlighted concerns about whether the measures were reflective of, and relevant to, how care is actually delivered. This led to an iterative process with researchers, clinicians, and policy-makers to establish clinically relevant modified versions of the original CHIPRA specifications, as noted below. The purpose of creating these modifications was to incorporate some of the valid reasons for not meeting a strict measure requirement, while staying true to the overall intent of the measure (e.g., a child received a service slightly late due to a lapse in insurance coverage, or a service was offered but the patient/parent refused). This process augmented our evaluation by allowing us to analyze both the feasibility of utilizing EMR data and the extent to which quality measure rates changed when accounting for “real world” considerations. Our measure specifications were as follows:
Well-child visits (WCV) for children by 15 months
We first assessed the percentage of patients with 6 or more WCVs with a primary care provider by 15 months of age. WCVs were counted if the visit was labeled as such in the EMR; the content of the visit was not assessed. This is congruent with the CHIPRA measure which uses CPT and ICD-9 codes to identify WCVs.4 The clinicians at the study clinic reported visits are sometimes missed or delayed and that it would be useful to try to capture visits at or near 15 months and/or 18 months. We modified the specifications to calculate the percentage of participants with 6 WCVs by 24 months of age to capture these potential visits. Controversy over the actual number of well child visits needed and over what time period remains.15
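The strict and modified WCV measures differ only in the age cutoff applied when counting qualifying visits. The study's actual abstraction algorithms are not published here; the following is a minimal illustrative sketch in Python, with hypothetical inputs, of that counting logic:

```python
from datetime import date

def months_between(dob: date, visit: date) -> int:
    """Whole months of age at the time of a visit."""
    months = (visit.year - dob.year) * 12 + (visit.month - dob.month)
    if visit.day < dob.day:
        months -= 1
    return months

def met_wcv_measure(dob: date, wcv_dates: list, cutoff_months: int,
                    required: int = 6) -> bool:
    """True if the child had >= `required` well-child visits before the cutoff.

    cutoff_months=15 approximates the strict CHIPRA-style measure;
    cutoff_months=24 corresponds to the modification used in this study.
    """
    on_time = [d for d in wcv_dates if months_between(dob, d) < cutoff_months]
    return len(on_time) >= required

# Hypothetical child: six WCVs, the last two delayed past 15 months of age.
dob = date(2009, 1, 10)
visits = [date(2009, 2, 12), date(2009, 3, 15), date(2009, 5, 20),
          date(2009, 7, 22), date(2010, 5, 3), date(2010, 8, 14)]
print(met_wcv_measure(dob, visits, 15))  # False under the strict cutoff
print(met_wcv_measure(dob, visits, 24))  # True under the modified cutoff
```

The same child fails the strict measure but meets the modified one, which is exactly the pattern the modification was designed to capture.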
Early childhood immunization receipt
We assessed the percentage of patients with the 10 CHIPRA-identified vaccine series completed by 2 years of age. The 10-vaccine series is as follows (numbers in parentheses indicate the number of doses required per vaccine): Diphtheria, Tetanus, acellular Pertussis (4); Polio (3); Measles, Mumps, Rubella (1); Haemophilus influenzae type b (3 or 4); Hepatitis B (3); Varicella (1); Pneumococcal (4); Hepatitis A (2); Influenza (2); Rotavirus (3). All children for whom this 10-vaccine series was incomplete were screened for exclusion criteria specific to individual vaccines, as indicated by the CHIPRA core set of technical standards.4 In our modified assessment, we calculated the percent completing each of the 10 vaccine series independently, recorded documentation of a vaccine having been offered but refused, and included data on vaccinations received up to 1 year after the CHIPRA deadline. This additional year was thought relevant due to vaccine shortages (e.g., Haemophilus influenzae type b), manufacturer recalls (e.g., Rotavirus), and changes in age recommendations (e.g., Hepatitis A). Immunization schedules change often as new information emerges, and though it is known that early vaccinations administered at intervals that are too close together can lead to an inadequate immune response, dosing after the recommended timeframe will still produce adequate antibodies.14
Adolescent immunization receipt
We assessed the percentage of patients with one Meningococcal vaccine received between ages 11 and 13, and one Tetanus, Diphtheria acellular Pertussis (Tdap), or Tetanus, Diphtheria (Td) booster vaccine between ages 10 and 13.4 In this assessment, we also accounted for CHIPRA-specified exclusion criteria. Our modifications included information on declined vaccines and vaccines received by 15 years of age. This additional timeframe was thought relevant due to variation in the age at which children might have received initial childhood Tdap immunizations, and the age at which they might be entering an environment with higher likelihood of exposure to meningococcal infections.
Body mass index (BMI) percentile documentation
We assessed the percentage of patients between ages 3 and 15 years who had a BMI percentile recorded during the measurement year as indicated by CHIPRA measure specifications.4 A systematic review of the literature uncovered a lack of evidence that screening for BMI improves health outcomes over any time period.16 Thus, our modification relaxed the timeframe for this measure: we assessed whether patients had documentation of a previous BMI percentile recorded in the EMR’s growth chart data within 36 months of the measurement year. This additional timeframe was thought relevant since patients/parents will sometimes decline having a height and/or weight measured at every visit, and a 36-month timeframe was considered reasonable by clinicians to assess for obesity or significant changes in weight.
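The modified BMI measure reduces to a window check against the end of the measurement year. A minimal sketch (Python; function and field names are hypothetical, and the 12-month window only approximates the strict "during the measurement year" specification):

```python
from datetime import date

def bmi_documented(bmi_dates: list, measurement_year: int,
                   window_months: int = 12) -> bool:
    """True if any BMI percentile was recorded within `window_months`
    before the end of the measurement year.

    window_months=12 approximates the strict CHIPRA specification
    (recorded during the measurement year); window_months=36 is the
    relaxed modification described above.
    """
    year_end = date(measurement_year, 12, 31)
    for d in bmi_dates:
        months_back = (year_end.year - d.year) * 12 + (year_end.month - d.month)
        if 0 <= months_back < window_months:
            return True
    return False

# Hypothetical child last measured in mid-2008, measurement year 2010:
records = [date(2008, 6, 4)]
print(bmi_documented(records, 2010, 12))  # False: not in the measurement year
print(bmi_documented(records, 2010, 36))  # True: within 36 months
```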
Data Collection and Analyses
We conducted a manual review of the full EMR chart for each study subject (n = 1,544) between July 1 and August 31, 2011. To achieve this, we created standardized data collection algorithms for each measure. We then abstracted clinical data from the EMR following these standardized data collection algorithms and organized the data into a secure, electronic data management system formatted for this study using Research Electronic Data Capture (REDCap) software.17 The Epic© Systems Summer 2009 IU1 EMR (Epic Systems Corporation: Verona, Wisconsin) platform utilized by the safety net clinic contains discrete fields for medical history, surgical history, social history, problem list, current and past medications, immunizations (including those given elsewhere if recorded in the Oregon Immunization Registry and added to the EMR, which is a common documentation practice for this clinic), allergies, vital signs, and encounter diagnoses. Free text is included in progress notes that contain both history and physical examination data. Our data collection algorithm utilized both discrete and free text fields, allowing us to include data that are found solely in physician encounter notes and would be typically inaccessible in electronically abstracted data. Each chart took between 15 and 30 minutes to review. All recorded data were transferred into SPSS software, version 18.0.0 (Chicago, Illinois) for statistical analyses. This study was approved by the Oregon Health & Science University Institutional Review Board.
Demographics
Thirty-two percent of the 1,544 study subjects were aged 6 months to 3 years, 47% aged 4 to 11 years, and 21% aged 12 to 15 years. There were a similar number of males and females. On the date of chart review, 68% of the subjects had public insurance, 19% private insurance, 12% uninsured/self-pay, and 1% were covered by Medicaid plans managed by private insurers.
Well Child Visits
Among children <3 years of age, 52% attended ≥6 well-child visits by the age of 15 months, and 61% had ≥6 visits by age 2.
Immunizations
When considering each vaccine series independently, 8 of the 10 recommended vaccine series had been received by 65% of children by age 2 (rates were lower for rotavirus and hepatitis A). When including vaccination receipt by age 3, and excluding cases of parental vaccination refusal from the denominator of the analysis, this number rose to over 70%. When assessing the composite rate of immunization receipt, as suggested by the CHIPRA measure, only 10% of children (n=14) completed 9 of the 10 vaccination series before their 2nd birthday (we excluded rotavirus, which was not available at the clinic during the entire study time period due to a manufacturer’s recall). The composite rate rose to 36% when counting all vaccines received by age 3 and excluding patients with documented vaccine refusals from the denominator. Fifteen percent of children’s parents had refused vaccines for their child (n=21); the distribution of refusals ranged from a single vaccine (n=10) to documentation of all vaccines refused (n=2).
Among adolescents, 43% met the CHIPRA measure of receiving both a Tdap (or Td) and meningococcal vaccination by 13 years of age. The Tdap or Td vaccination was received by 69% of adolescents by 13 years of age, and the meningococcal vaccination was received by 46%. When we included vaccinations through age 15, these values increased to 83% and 57%, respectively. The meningococcal vaccine was refused by 1% (n=2), and there were no refusals for the Tdap vaccine recorded in patients’ charts.
Body Mass Index (BMI) Documentation
Among children 3-15 years of age (N=1,181), 63% had their BMI recorded in the measurement year. As for the measure modification, 91% had a BMI percentile recorded in their growth chart at any single time point within 36 months of the measurement year. When assessed by age, BMI percentile was recorded consistently for over 50% of children within the measurement year with some variation but no consistent pattern. The lowest overall percentage recorded was for children 7 years of age. See figure 3.
Figure 3
Percentage of BMI Percentile Recorded within 12 Months and within 36 Months of the Measurement Year, by Age
Discussion

We successfully calculated several pediatric care quality measures using outpatient EMR data from a safety net clinic, and confirmed the feasibility of using EMR data to conduct such evaluations. Using EMR data likely allowed us to capture care delivered during periods of uninsurance, which would not have been possible with insurance claims data.11 We also found that modest adjustments to measurement parameters enabled a more realistic view of the care delivered. To the best of our knowledge, this is one of the first studies to utilize CHIPRA measures for practice-based research.
Practice Implications
As in previously reported analyses, the most significant adaptation required to assess performance of the CHIPRA measures using EMR data was the method used to determine a population denominator.18 Many of the CHIPRA measures were designed for assessing quality of care provided to patients enrolled in an insurance plan. Instead of the CHIPRA measures’ enrollment-based approach, we used a visit-based approach (i.e. ≥1 visit in the measurement year) to identify an ‘established’ patient population.
Although most of the child health care services that we identified were delivered on time and at the recommended frequency in our study population, our modified assessments captured a substantial number of additional services. For example, 61% of children had 6 well child visits by age 2, compared with 52% by age 15 months. Immunization rates were higher when assessed by age 3 than by age 2. Similarly, 91% of children had a BMI percentile documented within 36 months of the measurement year, but only 63% within the measurement year itself. Of note, the percentage with BMI recorded in the chart was even higher when the absolute BMI value was included in the measurement.
Another notable finding was the observed rate of immunization “refusals.” Our manual chart reviews allowed us to capture information about immunizations that had been offered but refused, which would have been invisible in either billing data or in automated chart abstraction. There are differing opinions about whether or not refusals should be counted in the denominator. On one hand, addressing parental refusal is part of ensuring high quality care; on the other hand, it could be argued that the practice is responsible for offering recommended care, but should not be penalized for low rates resulting from parental refusal. Standardized EMR documentation would help to improve quality assessments by capturing important explanations for not immunizing or for not delivering other evidence-based services (e.g. the service was offered but refused). Further, there is a need for more uniformity in documentation of services usually provided during a well child visit (e.g. developmental screening, preventive counseling) that are also being delivered at acute care visits. Currently, this care is not recorded in a standardized way outside of the well child visit, and it is inadequately captured in other types of visit notes.
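The denominator question above is simple arithmetic, but the choice changes reported rates substantially. A hedged illustration (Python; the extended-window numerator of 50 is hypothetical, while the other counts loosely follow the figures reported above):

```python
def completion_rate(completed: int, population: int, refused: int = 0,
                    exclude_refusals: bool = False) -> float:
    """Percent of the eligible population completing a service.

    If exclude_refusals is True, children whose parents declined the
    service are removed from the denominator (one side of the debate
    discussed above); otherwise they count as incomplete.
    """
    denom = population - refused if exclude_refusals else population
    return 100.0 * completed / denom

# Illustrative clinic: 140 eligible children, 14 complete on time,
# 21 with a documented refusal, 50 complete once the window is extended.
print(round(completion_rate(14, 140), 1))  # strict: 10.0
print(round(completion_rate(50, 140, refused=21, exclude_refusals=True), 1))
```

Both the extended timeframe (a larger numerator) and the refusal exclusion (a smaller denominator) push the rate upward, which is why seemingly small specification choices matter under pay-for-performance schemes.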
Policy Implications
This paper suggests that expanding requirements beyond strict timeframes may yield a real world view of care received, compared to the results obtained when following the CHIPRA specifications. Allowing such “wiggle room” is especially important when measuring care provided to publicly-insured populations, as they sometimes have sporadic patterns of care utilization and often experience insurance coverage gaps.9,19-24 These publicly-insured families seek care more often when they can afford it and/or when they are insured.25-27 Further, without strong evidence supporting strict timeframes, it is better to allow for some flexibility in measure specifications to reflect clinical practice.
This study also demonstrates the use of a visit-based approach to identify clinic populations when using EMR data. The visit-based model contrasts with the traditional use of enrollment-based denominators derived from claims data, as described by the CHIPRA definitions. Creating visit-based definitions is not well standardized (e.g., should a minimum number of visits be required to ensure continuity of care? Should at least one designated preventive care visit be required, versus any type of visit?). However, overall this adapted approach is relevant to current policies, such as the establishment of Patient-Centered Medical Homes and Accountable Care Organizations, in which providers will be responsible for measuring the quality of care delivered to their population of patients by using EMR data.28,29 In practice models utilizing a pay-for-performance financial scheme, seemingly small alterations to the requirements of these quality measures could result in very different completion rates and, consequently, have a profound impact on provider payment.10,30-32
Next Steps
This pilot study fits within a larger body of research related to the use of EMR data for conducting quality assessments.18 To better quantify the extent to which the CHIPRA measures’ capture rates differ when applied in EMR versus claims data, we suggest possible next steps for consideration. Estimates obtained using the “gold standard” of manual chart review should be compared to rates obtained when abstracting EMR data from the same population using electronic methods. This information should be further compared to rates obtained from administrative claims data only, as specified for the original CHIPRA measures, to allow for triangulation. We are actively working with our state policy makers to conduct these comparisons.
Limitations
This study has some important limitations. First, it was conducted in one practice; thus, our findings may not be generalizable to other sites. The methods used, however, could be replicated in other settings though we acknowledge our chart review methods were labor intensive. Second, besides information in the clinic EMR obtained from our state immunization registry, we did not have access to information about health care services utilized by our study population at other clinic sites (unless these were clearly documented in provider notes or elsewhere in the study clinic’s EMR). Third, we identified a cohort of children who visited the study clinic during one calendar year only (2010), which may have over- or under-estimated the clinics’ true panel of ‘active’ pediatric patients. Finally, although our modified measure specifications may provide a more complete picture of care receipt, we acknowledge that this approach would make rates from one site less comparable with another site unless both used the same timeframe and other specifications.
Conclusion
It is possible to measure quality through manual chart audit of an EMR; however, without more generous timeframes and standardized documentation practices, quality of care assessments may present an inaccurate picture of the quality of children’s health care being delivered in primary care settings.
Figure 1
Percentage of Children who Received the Recommended Number of Well Child Visits Received by 15 Months of Age Versus 2 Years of Age
Figure 2
Percentage of Children Up to Date on Each Vaccine Series by Age 2 versus Those Completed by Age 3, Percentage of Adolescents Up to Date by age 13 versus Those Completed by Age 15, and those with Documentation of Parent Refusal
Table 1
Table 1
Patient Characteristics at initial data collection as reported in the Electronic Medical Record (N=1,544)
Acknowledgments
We would like to acknowledge the clinic for helping us with this study, the patients whose medical records we used to conduct this analysis, and LeNeva Spires.
Funding Statement: This project was supported by grant 1 R01 HS018569 from the Agency for Healthcare Research and Quality (AHRQ), the Department of Family Medicine, and the Oregon Clinical and Translational Research Institute (OCTRI), grant number UL1 RR024140 from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH), and NIH Roadmap for Medical Research. The funding agencies had no involvement in the design and conduct of the study; analysis and interpretation of the data; nor preparation, review, or approval of the manuscript.
Footnotes
Conflicting and Competing Interests: The authors have no conflicts of interest to report.
1. Davis K, Stremikis K. Family Medicine: Preparing for a High-Performance Health Care System. J Am Board Fam Med. 2010;23(S):11–16. [PubMed]
2. Mangione-Smith R, Schiff J, Dougherty D. Identifying children’s health care quality measures for Medicaid and CHIP: an evidence-informed publicly transparent expert process. Acad Pediatr. 2011;11(3 Suppl):S11–21. [PubMed]
3. Kaiser Commission on Medicaid and the Uninsured. Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA). 2009. [Accessed January 14, 2011]; http://www.kff.org/medicaid/upload/7863.pdf.
4. Centers for Medicare and Medicaid Services. CHIPRA Initial Core Set Technical Specifications Manual 2011. Feb, 2011.
5. U.S. Department of Health and Human Services. Medicaid and CHIP Programs; Initial Core Set of Children’s Healthcare Quality Measures for Voluntary Use by Medicaid and CHIP Programs. Federal Register. 2009;74(248):68846–68849. Published December 28, 2009. [Accessed March 8, 2011]; http://www.insurekidsnow.gov/professionals/CHIPRA/federalregisternotice.pdf.
6. 111th Congress. Children’s Health Insurance Program Reauthorization Act of 2009. 2009. [Accessed January 14, 2011]; http://frwebgate.access.gpo.gov/cgibin/getdoc.cgi?dbname=111_cong_public_laws&docid=f:publ003.111.
7. Shone LP, Dick AW, Klein JD, Swanziger J, Szilagyi PG. Reduction in racial and ethnic disparities after enrollment in the State Children’s Health Insurance Program. Pediatrics. 2005;115(6):e697–e705. [PubMed]
8. Centers for Medicare and Medicaid Services CHIPRA Initial Core Set Technical Specifications Manual 2011. Feb, 2011.
9. Kenney GM, Pelletier JE. Monitoring duration of coverage in Medicaid and CHIP to assess program performance and quality. Acad Pediatr. 2011;11(3 Suppl):S34–41. [PubMed]
10. Fairbrother G, Simpson LA. Measuring and reporting quality of health care for children: CHIPRA and beyond. Acad Pediatr. 2011;11(3 Suppl):S77–84. [PubMed]
11. Iezzoni LI. Assessing Quality Using Administrative Data. Annals of Internal Medicine. 1997;127(8 Part 2):666–674. [PubMed]
12. Lawthers AG, McCarthy EP, Davis RB, Peterson LE, Palmer RH, Iezzoni LI. Identification of In-Hospital Complications From Claims Data: Is It Valid? Medical Care. 2000;38(8):785–795. [PubMed]
13. Dougherty D, Clancy C. Transforming Children’s Health Care Quality and Outcomes–A Not-So-Random Non-linear Walk Across the Translational Continuum. Acad Pediatr. 2011;11(3):S91–S94. [PubMed]
14. Centers for Disease Control and Prevention General Recommendations on Immunization: Recommendations of the Advisory Committee on Immunization Practices (ACIP) and the American Academy of Family Physicians (AAFP) MMWR. 2002;51(RR02):1–36. [PubMed]
15. Moyer VA, Butler M. Gaps in the Evidence for Well-Child Care: A Challenge to Our Profession. Pediatrics. 2004;114(6):1511–1521. [PubMed]
16. Whitlock EP, Williams SB, Gold R, Smith PR, Shipman SA. Screening and Interventions for Childhood Overweight: A Summary of Evidence for the US Preventive Services Task Force. Pediatrics. 2005;116(1):e125–e144. [PubMed]
17. Harris PA, Taylor R, Thielke R, et al. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381. [PMC free article] [PubMed]
18. Gold R, Angier H, Mangione-Smith R, et al. Feasibility of Evaluating the CHIPRA Care Quality Measures in Electronic Health Record Data. Pediatrics. (In Press) [PMC free article] [PubMed]
19. Sommers A, Dubay L, Blumberg L, Blavin F, Czajka J. Dynamics in Medicaid and SCHIP eligibility among children in SCHIP’s early years: implications for reauthorization. Health Affairs. 2007;26(5):w598–w607. [PubMed]
20. Sommers B. Why millions of children eligible for Medicaid and SCHIP are uninsured: poor retention versus poor take-up. Health Affairs. 2007;26(5):w560–w567. [PubMed]
21. Fairbrother GL, Emerson HP, Partridge L. How Stable Is Medicaid Coverage For Children? Health Aff. 2007;26(2):520–528. [PubMed]
22. Sommers BD, Rosenbaum S. Issues In Health Reform: How Changes In Eligibility May Move Millions Back And Forth Between Medicaid And Insurance Exchanges. Health Aff. 2011;30(2):228–236. [PubMed]
23. DeVoe JE, Graham A, Krois L, Smith J, Fairbrother GL. “Mind the Gap” in Children’s Health Insurance Coverage: Does the Length of a Child’s Coverage Gap Matter? Ambulatory Pediatrics. 2008;8(2):129–134. [PubMed]
24. Delone SE, Hess CA. Medicaid and CHIP children’s healthcare quality measures: what states use and what they want. Acad Pediatr. 2011;11(3 Suppl):S68–76. [PubMed]
25. Olson LM, Tang S-fS, Newacheck PW. Children in the United States with Discontinuous Health Insurance Coverage. N Engl J Med. 2005 Jul 28;353(4):382–391. [PubMed]
26. Cassedy A, Fairbrother G, Newacheck PW. The impact of insurance instability on children’s access, utilization, and satisfaction with health care. Ambulatory Pediatrics. 2008 Sep-Oct;8(5):321–328. [PubMed]
27. DeVoe JE, Ray M, Krois L, Carlson MJ. Uncertain Health Insurance Coverage and Unmet Children’s Health Care Needs. Family Medicine. 2010;42(2):121–132. [PubMed]
28. Rosenthal T. The Medical Home: Growing Evidence to Support a New Approach to Primary Care. J Am Board Fam Med. 2008;21:427–440. [PubMed]
29. Gavagan T, Du H, Saver B, et al. Effect of Financial Incentives on Improvement in Medical Quality Indicators for Primary Care. J Am Board Fam Med. 2010;23:622–631. [PubMed]
30. U.S. Department of Health and Human Services Medicare Program; Medicare Shared Savings Program: Accountable Care Organizations. Federal Register. 2011;76(212) [PubMed]
31. Simpson L, Fairbrother G, Touschner J, Guyer J. Implementation Choices for the Children’s Health Insurance Reauthorization Act of 2009. The Commonwealth Fund. Sep, 2009.
32. Kaye N, Takach M. Building Medical Homes in State Medicaid and CHIP Programs: National Academy for State Health Policy. 2009.