We successfully calculated several pediatric care quality measures using outpatient EMR data from a safety-net clinic, confirming the feasibility of using EMR data to conduct such evaluations. Using EMR data likely allowed us to capture care delivered during periods of uninsurance, which would not have been possible with insurance claims data.11
We also discovered that modest adjustments to measurement parameters enabled a real-world view of the care delivered. To the best of our knowledge, this is one of the first studies to utilize CHIPRA measures for practice-based research.
As in previously reported analyses, the most significant adaptation required to assess performance of the CHIPRA measures using EMR data was the method used to determine a population denominator.18
Many of the CHIPRA measures were designed to assess the quality of care provided to patients enrolled in an insurance plan. Instead of the CHIPRA measures’ enrollment-based approach, we used a visit-based approach (i.e., ≥1 visit in the measurement year) to identify an ‘established’ patient population.
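The visit-based denominator described above can be sketched in a few lines of code. The patient identifiers, visit records, and data layout below are hypothetical and purely illustrative; an actual implementation would query the clinic EMR.

```python
from datetime import date

# Hypothetical visit records: (patient_id, visit_date). The identifiers
# and dates are invented for illustration and do not reflect study data.
visits = [
    ("A", date(2010, 3, 2)),
    ("A", date(2010, 9, 14)),
    ("B", date(2009, 11, 30)),  # falls outside the measurement year
    ("C", date(2010, 6, 1)),
]

def visit_based_denominator(visits, year):
    """Identify the 'established' population: patients with >=1 visit
    of any type during the measurement year."""
    return {patient_id for patient_id, visit_date in visits
            if visit_date.year == year}

print(sorted(visit_based_denominator(visits, 2010)))  # prints ['A', 'C']
```

Note that this minimal definition counts any visit type; as discussed below, a stricter variant might require a minimum number of visits or at least one preventive care visit.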
Although most of the child health care services that we identified were delivered on time and at the recommended frequency in our study population, our modified assessments captured a substantial number of additional services. For example, 61% of children had 6 well child visits by age 2, compared with 52% by age 15 months. Immunization rates were higher when assessed by age 3 as compared with age 2. Similarly, 91% of children had a BMI percentile documented within 36 months of the measurement year, but only 63% when examining the study year alone. Of note, the percentage of children with BMI recorded in the chart was even higher when the absolute BMI value was included in the measurement.
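To illustrate how strict versus relaxed timeframes change measured completion rates, the sketch below recomputes a well child visit rate at two cutoff ages. The children and visit ages are invented for the example and do not reflect the study data reported above.

```python
# Hypothetical data: child_id -> ages (in months) at which a well child
# visit occurred. Values are illustrative only.
well_child_visits = {
    "A": [1, 2, 4, 6, 9, 12],    # 6 visits by 15 months: meets the strict spec
    "B": [2, 4, 9, 12, 18, 22],  # 6th visit after 15 months: meets only the relaxed spec
    "C": [2, 6, 12],             # never reaches 6 visits
}

def completion_rate(visits_by_child, n_required, cutoff_months):
    """Fraction of children with >= n_required visits by the cutoff age."""
    met = sum(
        1 for ages in visits_by_child.values()
        if sum(age <= cutoff_months for age in ages) >= n_required
    )
    return met / len(visits_by_child)

strict = completion_rate(well_child_visits, 6, 15)   # CHIPRA-style: 6 visits by 15 months
relaxed = completion_rate(well_child_visits, 6, 24)  # modified: 6 visits by age 2
print(f"{strict:.0%} vs {relaxed:.0%}")              # prints 33% vs 67%
```

In this toy cohort, relaxing the cutoff from 15 to 24 months doubles the apparent completion rate without any change in the underlying care, which mirrors the direction of the differences observed in our data.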
Another notable finding was the observed rate of immunization “refusals.” Our manual chart reviews allowed us to capture information about immunizations that had been offered but refused, which would have been invisible in either billing data or automated chart abstraction. There are differing opinions about whether refusals should be counted in the denominator. On one hand, addressing parental refusal is part of ensuring high quality care; on the other hand, it could be argued that the practice is responsible for offering recommended care but should not be penalized for low rates resulting from parental refusal. Standardized EMR documentation would improve quality assessments by capturing important explanations for not immunizing or for not delivering other evidence-based services (e.g., the service was offered but refused). Further, there is a need for more uniformity in documenting services usually provided during a well child visit (e.g., developmental screening, preventive counseling) that are also delivered at acute care visits. Currently, this care is not recorded in a standardized way outside of the well child visit, and it is inadequately captured in other types of visit notes.
This paper suggests that expanding requirements beyond strict timeframes may yield a more real-world view of care received, compared with the results obtained when following the CHIPRA specifications. Allowing such “wiggle room” is especially important when measuring care provided to publicly insured populations, as they sometimes have sporadic patterns of care utilization and often experience insurance coverage gaps.9,19-24
These publicly insured families seek care more often when they can afford it and/or when they are insured.25-27
Further, without strong evidence supporting strict timeframes, it is better to allow for some flexibility in measure specifications to reflect clinical practice.
This study also demonstrates the use of a visit-based approach to identify clinic populations when using EMR data. The visit-based model contrasts with the traditional use of enrollment-based denominators derived from claims data, as described by the CHIPRA definitions. Creating visit-based definitions is not well standardized (e.g., should a minimum number of visits be required to ensure continuity of care? Should at least one designated preventive care visit be required, versus any type of visit?). Overall, however, this adapted approach is relevant to current policies, such as the establishment of Patient-Centered Medical Homes and Accountable Care Organizations, in which providers will be responsible for measuring the quality of care delivered to their population of patients by using EMR data.28,29
In practice models utilizing a pay-for-performance financial scheme, seemingly small alterations to the requirements of these quality measures could result in very different completion rates and, consequently, have a profound impact on provider payment.10,30-32
This pilot study fits within a larger body of research related to the use of EMR data for conducting quality assessments.18
To better quantify the extent to which the CHIPRA measures’ capture rates differ when applied in EMR versus claims data, we suggest possible next steps for consideration. Estimates obtained using the “gold standard” of manual chart review should be compared with rates obtained when abstracting EMR data from the same population using electronic methods. This information should be further compared with rates obtained from administrative claims data only, as specified for the original CHIPRA measures, to allow for triangulation. We are actively working with our state policymakers to conduct these comparisons.
This study has some important limitations. First, it was conducted in one practice; thus, our findings may not be generalizable to other sites. The methods used, however, could be replicated in other settings, though we acknowledge that our chart review methods were labor intensive. Second, aside from information in the clinic EMR obtained from our state immunization registry, we did not have access to information about health care services utilized by our study population at other clinic sites (unless these were clearly documented in provider notes or elsewhere in the study clinic’s EMR). Third, we identified a cohort of children who visited the study clinic during one calendar year only (2010), which may have over- or under-estimated the clinic’s true panel of ‘active’ pediatric patients. Finally, although our modified measure specifications may provide a more complete picture of care receipt, we acknowledge that this approach would make rates from one site less comparable with those from another site unless both used the same timeframe and other specifications.
It is possible to measure quality through manual chart audit of an EMR; however, without more generous timeframes and standardized documentation practices, quality of care assessments may present an inaccurate picture of the quality of children’s health care being delivered in primary care settings.