OME is an important pediatric condition for performance measurement because it is both prevalent and expensive. A 2003 review estimated that ~2.2 million episodes occur annually in the United States, with estimated costs of $4.0 billion.8
Reducing inappropriate antibiotic use is a national health priority for the US Department of Health and Human Services,27 the Institute of Medicine,28 and the Centers for Disease Control and Prevention.29
A measure of the inappropriate use of antibiotics in the care of children with OME is being considered by the Children's Health Insurance Program Reauthorization Act state demonstration grants.30
The specific topics of OME measurement (eg, diagnostic evaluation, appropriate antibiotic use) also have high face validity, because they are based on clinical practice guideline recommendations of level A or B evidence.5,7
However, our test revealed several challenges that must be addressed to ensure accurate measurement of OME care in general pediatric practices.
Inadequate Sample Sizes
Despite the high prevalence of OME, most practices were unable to identify a sufficient number of eligible cases. Only 4 of the 23 practices had sufficient visits coded with the specified OME diagnosis codes to reach the recommended 30 charts during a 12-month measurement period. Our results suggest that primary care clinicians may not be coding visits for OME correctly. Among QuIIN practices participating in the subsequent OME quality improvement project, some reported that their clinicians use otitis media codes (ICD-9 382.xx) rather than OME codes (ICD-9 381.xx), primarily out of habit. Others reported differential reimbursement rates according to type of otitis media code, particularly from Medicaid. One practice noted that the list of diagnosis codes posted in their examination rooms (and, therefore, available for physicians to code on visit summary sheets) did not include any codes for OME. Another clinician noted that his practice had gotten away from the use of the designation OME because it confuses discussions with parents for whom the suffix “-itis” seems to imply an infectious process.
Reliability and Validity
Measures should produce consistent (reliable) and credible (valid) results.31
The 3 treatment measures (antimicrobial agents, antihistamines, and corticosteroids) demonstrated moderate to acceptable IRR and almost no variability in existing performance, suggesting either that clinical practice is in accordance with recommendations or that the measures did not allow discrimination between appropriate and inappropriate medication use.
Two measures, hearing evaluation and the avoidance of inappropriate use of antimicrobial agents, illustrated additional concerns. The small number of children eligible for hearing evaluation, even among practices with sufficient overall sample sizes, suggests that this measure may be particularly difficult to monitor at the individual practice level without additional case-finding modifications. This was true despite the initial modifications we made to the measure specification. Reviewers found it difficult to determine from the medical record whether the child was at risk, as well as the duration of the OME diagnosis. Participants also reported that hearing evaluation scores were artificially low because the measure gave no credit when a patient had been referred for hearing testing but the testing had not yet been completed or the results had not yet been transmitted from the audiologist to the pediatric practice.
In the antimicrobial measure, the numerator was the number of patients not prescribed antimicrobial agents, and the denominator was the number of patients with a diagnosis of OME minus those who had a documented medical reason for an antimicrobial prescription. Using this definition, the performance measure score was 87% (244 of 281 patients). However, this score did not account for cases in which antimicrobial agents were correctly prescribed or for whether the visit was appropriately coded. Patients with a documented medical reason for antimicrobial use were excluded for 2 reasons: (1) to account for concurrent antibiotic prescription for a comorbid condition; and (2) because the OME guideline, which recommends against routine antimicrobial therapy, notes that antibiotics can be considered as an option, although their efficacy is limited to short-term benefit in randomized trials.7
However, our analysis of the specific reasons documented in patients who received antimicrobial agents suggests that this definition may produce artificially high performance rates. In our study, one third of the documented reasons for antimicrobial use were for OME. Although the use of an antibiotic may have been considered appropriate in some of these cases, we expect that the majority were not in compliance with the intent of the guideline to avoid routine use.
Exception methodology can be useful in explaining variations in care. The PCPI designed exception methodology for performance measures to ensure that its metrics reflect appropriate clinical action.32
This method assesses the frequency, classification, and appropriateness of exception data (eg, the reasons an antibiotic was prescribed).33
As an example, if the denominator were recalculated by excluding only the 15 patients who were probably prescribed antibiotics appropriately, the performance measure score would be much lower: 68% (244 of 360 patients). This recalculation includes in the denominator both the 79 cases in which an antibiotic was likely inappropriately prescribed and the 37 cases in which no reason for prescribing was documented.
Of note, this adjustment does not address the 30 cases that were included in the denominator but were subsequently determined, on the basis of the documented reason for prescribing antimicrobial agents, to have been wrongly coded as OME.
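The denominator adjustment described above reduces to a few lines of arithmetic. The following sketch reproduces the two score calculations using the counts reported in the text; the variable names are ours, for illustration only:

```python
# Illustrative recalculation of the antimicrobial performance measure
# using the counts reported in the text (variable names are ours).

not_prescribed = 244          # patients not prescribed an antimicrobial agent
original_denominator = 281    # OME patients minus all with a documented reason
original_score = not_prescribed / original_denominator
print(f"original score: {original_score:.0%}")   # 87%

# Exception analysis: return the questionable exclusions to the denominator.
likely_inappropriate = 79     # documented reason judged likely inappropriate
no_reason_documented = 37     # antimicrobial prescribed, no reason documented
revised_denominator = not_prescribed + likely_inappropriate + no_reason_documented
revised_score = not_prescribed / revised_denominator
print(f"revised score: {revised_score:.0%}")     # 68%
```

The contrast between the two scores (87% vs 68%) is driven entirely by which excluded cases are returned to the denominator, which is why the classification of documented exceptions matters so much to the measure's validity.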
Reports based on widespread use of exception reporting in the United Kingdom as part of a pay-for-performance demonstration found that it was useful and did not seem to promote “gaming” of the system.34
The exception categories we have outlined may provide an initial framework for the American Medical Association's performance metric methods research. This specificity about what constitutes an appropriate reason for prescribing an antimicrobial for a child with OME could also help clinicians reflect on their decision making when it may be difficult to disappoint a parent seeking an antibiotic prescription.
Children with OME are cared for by a variety of clinicians: pediatricians, family physicians, and otolaryngologists. Because we included only primary care pediatricians in our evaluation, we cannot comment on whether the problems we observed with insufficient case finding and documentation of reasons for antimicrobial use extend to other provider categories.
Results were obtained on the basis of patients included in a convenience sample of practices, and no attempt at randomization was made. Certain biases may exist in a convenience sample; for example, clinicians may be more likely to volunteer for participation if they are more interested in a topic and perhaps more adherent to evidence-based care.
Medical records may lack the necessary documentation to offer a reliable summary of the clinical care provided.35,36
In addition, bias may have been introduced because abstractors for the QuIIN practice group were practice staff (and possibly physicians) who may have had greater familiarity with their own practices' medical records, including where to look for specific documentation and how to interpret abbreviations.
Despite these limitations, we believe this study highlights some important issues that will need to be considered in the use of performance measures for OME.