The CHIPRA measures are an important step toward standardizing the assessment of children’s health care quality in the United States6; however, they are designed to be evaluated by using health plan claims data, limiting their ability to assess population health care quality. Importantly, insurance claims do not include uninsured and discontinuously insured people, potentially making them “invisible” in care quality evaluations. Further, clinic-level quality evaluations based on claims alone will misrepresent the performance of clinics serving discontinuously insured populations, as many of their patients receive services during periods without coverage.13,22,23
To measure population health care quality, including the uninsured and discontinuously insured, we assessed the feasibility of adapting the CHIPRA measures for application in EHR data, concluding that most of the measures could be adapted with modifications. The most notable adaptation pertains to determining population denominators. Our approach to identifying an “established” patient population was to include patients with ≥1 visit in a measurement year and, in some cases, a visit in the year prior. This visit-based approach differed from the original enrollment-based measure specifications. In addition, our suggested methods for adapting some of the measures’ denominators allow for either well visits or urgent outpatient visits. We made this choice on the premise that any visit can be an opportunity to deliver needed care; however, another approach would be to define denominators as persons with ≥1 “well” visit.
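The visit-based denominator logic described above can be sketched as follows. This is a minimal illustration, not the study’s actual implementation: the visit records, field layout, and option names are hypothetical stand-ins for EHR visit data.

```python
from datetime import date

# Hypothetical visit records: (patient_id, visit_date, visit_type).
visits = [
    (1, date(2010, 3, 4), "well"),
    (1, date(2009, 6, 1), "urgent"),
    (2, date(2010, 8, 15), "urgent"),
    (3, date(2009, 11, 2), "well"),   # no visit in the measurement year
]

def denominator(visits, year, require_prior_year=False, well_visits_only=False):
    """Patients with >=1 qualifying visit in the measurement year,
    optionally also requiring >=1 visit in the prior year."""
    def qualifies(visit_type):
        # Either count only "well" visits, or count any outpatient visit
        # on the premise that any visit is an opportunity to deliver care.
        return visit_type == "well" if well_visits_only else True

    in_year = {pid for pid, d, t in visits if d.year == year and qualifies(t)}
    if require_prior_year:
        prior = {pid for pid, d, _ in visits if d.year == year - 1}
        in_year &= prior
    return in_year

print(sorted(denominator(visits, 2010)))                           # [1, 2]
print(sorted(denominator(visits, 2010, require_prior_year=True)))  # [1]
print(sorted(denominator(visits, 2010, well_visits_only=True)))    # [1]
```

Note how the denominator shrinks as the definition tightens; this is the sensitivity to specification choices that the examples below illustrate.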
We used data from a network of CHCs because these clinics serve many uninsured and discontinuously insured children, who would be missed in insurance claims data. For example, 25% of OCHIN clinics’ pediatric visits in 2009, and 18% in 2010, were self-paid, suggesting that the child was uninsured. However, our suggested adaptations are relevant to any practice wanting to use EHR data to measure care quality. This approach is relevant to current policies, which establish Accountable Care Organizations and similar infrastructure that require providers to measure the quality of care delivered to populations.24
If Accountable Care Organizations are fully realized with defined populations, there may be less need for visit-based denominators; however, the current standard requires this approach for identifying a population denominator in many practices.
Claims data document billed services and their associated diagnoses, which do not include all care received, as in the BMI example.17,25–27
Thus, even apparently complete claims data may yield inaccurate rates when used alone. In addition, such data are often not accessible to clinicians in a usable form. In this article, we took the next step and outlined how to operationalize many of the CHIPRA measures for use in EHR data.
Limitations to the Chlamydia Testing and BMI Documentation Examples
The examples presented in this article (Chlamydia testing and BMI percentile documentation) illustrate how quality of care measurements may vary depending on the content of the measure, the specifications, and the availability of appropriate data. For example, <1% of patients were identified as having annual BMI documentation by using claims-based specifications, versus 71% to 73% in the EHR-adapted measurement. The Chlamydia testing example demonstrated substantial variability in rates depending on how the denominator population was identified, similar to findings of Mangione-Smith et al.28
There are limitations to the “adapted” methods we used. When measuring Chlamydia testing, we were unable to access the inpatient codes that are part of the original claims-based algorithm used to identify sexually active women. Had we been able to include inpatient data, the original measure might have identified even more women as sexually active. It may also have been useful to add pregnancy to our definition of sexually active women, as we found 93 women who were known to be pregnant but were not identified as sexually active in the social history section of the EHR. In addition, in OCHIN’s network of CHCs, the sexual activity field is not populated for about 15% of patients; thus, including additional EHR fields to identify sexual activity would likely have yielded a larger denominator. Different health care organizations collect and store these data differently in the EHR. Finally, the data for these analyses came from up to 47 Oregon clinics, which may have different coding practices; some of the CPT codes used in the original specifications were not found in all of the clinics’ data, likely because those codes are not regularly used at those sites. For example, a CPT code from the original specifications to identify Chlamydia screening (87810) was rarely used by several of the clinics.
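The idea of combining multiple EHR signals, so that patients with an unpopulated social-history field are not silently excluded from the denominator, can be sketched as follows. The records and field names here are purely hypothetical illustrations, not OCHIN’s actual data model.

```python
# Hypothetical patient records; field names are illustrative only.
# None models the ~15% of patients whose sexual activity field is unpopulated.
patients = [
    {"id": 1, "sexually_active_field": True,  "pregnant": False},
    {"id": 2, "sexually_active_field": None,  "pregnant": True},   # unpopulated field
    {"id": 3, "sexually_active_field": None,  "pregnant": False},  # unpopulated field
    {"id": 4, "sexually_active_field": False, "pregnant": False},
]

def in_denominator(patient):
    """Treat either an affirmative social-history field or documented
    pregnancy as evidence of sexual activity, broadening the denominator
    beyond the single social-history field."""
    return bool(patient["sexually_active_field"]) or patient["pregnant"]

print([p["id"] for p in patients if in_denominator(p)])  # [1, 2]
```

Relying on the social-history field alone would capture only patient 1; adding pregnancy as a second signal also captures patient 2, mirroring the 93 pregnant women the field alone missed.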
Limitations to Using Outpatient EHR Data
Although there are clear limitations to using claims data only, there are also important limitations to EHR data when adapting the CHIPRA measures, as discussed here. First, we could adapt only a subset of the measures because the data needed for some measures (such as data on hospital and dental care) were not available in outpatient EHR data. Claims data linked to EHR data might address this limitation for some populations, but would still exclude people receiving inpatient care during periods of uninsurance. A linked inpatient-outpatient EHR dataset may also help to address this limitation. Second, the lack of medication dispense data required the adaptation of several measures to use prescribing data; however, this adaptation may be beneficial, as prescribing data are commonly used in care quality measures because they reflect providers’ actions.27
Third, visit-based population denominator definitions may introduce bias toward care “users,” who receive care at higher rates than the overall patient population. Similarly, this approach is limited to people who are seen by the clinic. When uninsured people are not seen at the clinic, it is difficult to determine whether they received care elsewhere. Fourth, similar to claims data, the completeness of EHR data relies on individual clinicians’ documentation and may not reflect all of the care provided. Standardization of documentation practices would help ensure complete EHR data. Finally, we conducted the 2 example analyses by using data from a single EHR that is linked across multiple clinics. This kind of data resource is not yet common, but will likely become more widespread with the expansion of EHRs. The importance of applying standardized measures of care quality in EHR data will increase with the prevalence of such data systems.