Current national efforts provide an opportunity to integrate performance measures into clinical practice and improve outcomes for children.
The goal of this study was to explore issues in developing and testing measures of care for children with otitis media with effusion (OME).
We assessed compliance with diagnostic, evaluation, and treatment measures for OME adapted from preliminary work of the Physician Consortium for Performance Improvement, using chart data in a convenience sample of practices from 2 primary care networks (Cincinnati Pediatric Research Group and the American Academy of Pediatrics Quality Improvement Innovation Network). Children aged 2 months to 12 years with at least 1 visit with a specified OME code during a 1-year period were included.
Of 23 practices, 4 could not locate eligible visits. Nineteen practices submitted 378 abstractions (range: 3–37 per practice), with 15 identifying <30 eligible visits. Performance was low on the diagnosis (33%) and hearing evaluation (29%) measures but high on the measures of avoiding inappropriate medication use (97% decongestants/antihistamines, 87% antibiotics, and 95% corticosteroids). Thirty-five percent of records documented antibiotic use concurrent with OME; only 16% of the 94 cases that cited a reason for prescribing were appropriate. Using methods that consider appropriate clinical action, a more accurate rate for appropriate use of antibiotics was 68%.
Coding, case finding, and evaluating appropriateness of treatment are some of the issues that will need to be considered to assess the care of children with OME. This study emphasizes the importance of testing proposed quality of care measures in “real-world” settings.
Many performance measures are under development for use in quality improvement, program monitoring, public reporting, and value-based purchasing. A measure of the care of children with otitis media with effusion is included in an initial set of national pediatric core measures.
This study emphasizes the importance of testing proposed performance measures in “real-world” settings. Coding, case finding, and evaluating appropriateness of treatment are some of the issues that will need to be considered to assess the care of children with otitis media with effusion.
Numerous national measure initiatives are under way for use in quality improvement, program monitoring, public reporting, and value-based purchasing.1–3 These efforts provide a significant opportunity to integrate measures into clinical practice, support clinicians' efforts to improve care and outcomes, and achieve better results for patients and families.
Recently, interest has arisen in the development and use of measures to assess the care of pediatric patients with otitis media with effusion (OME),4–6 fluid in the middle ear without symptoms of acute ear infection.7,8 Many episodes resolve spontaneously, but ~30% to 40% of children have recurrent OME and 5% to 10% of episodes last ≥1 year.9–12 Delays in speech, language, and learning may result from undetected hearing loss. Studies document variation in care and deviation from recommendations.13,14 A particular concern is the inappropriate use of antibiotics that do not have long-term efficacy and are not recommended for routine OME management7 and may lead to antibiotic resistance.15–19
The goal of this article was to highlight issues in developing and testing measures of quality of care for children with OME, based on our experience with a set of measures for OME in a sample of general pediatric practices.
The project was undertaken by the pediatric Center for Education and Research on Therapeutics at Cincinnati Children's Hospital Medical Center in collaboration with the American Academy of Pediatrics. Approval was obtained for this Health Insurance Portability and Accountability Act–compliant study from the institutional review boards of both organizations. We used a cross-sectional design in a convenience sample of pediatric practices to assess the care of children with OME.
We adapted specifications for 5 measures for OME based on preliminary work of the American Medical Association Physician Consortium for Performance Improvement (PCPI).5 PCPI developed these measures with an expert work group that used a 2004 clinical practice guideline.7 The measures included: diagnostic evaluation of tympanic membrane mobility with pneumatic otoscopy or tympanometry; hearing testing; and 3 measures of avoidance of the inappropriate use of medications (antihistamines or decongestants, systemic antimicrobial agents [antibiotics], and systemic corticosteroids). Definitions for each measure are shown in Table 1. Because of concern that the number of children in a general pediatric practice receiving tympanostomy tubes over a 12-month measurement period would be low, the denominator of the measure assessing whether children receiving tympanostomy tubes had a hearing evaluation before tube insertion was revised to comprise the number of children at risk (ie, OME >3 months' duration or at risk for speech, language, and learning problems) who should, therefore, receive hearing testing or a referral for hearing testing.
A medical record review form was drafted by the principal investigator, then subsequently revised on the basis of feedback from 2 practicing pediatricians (participants in practice-based research networks and advisors to this study) and results of a pilot test on 5 charts in each of their practices. The finalized form was translated into an online data collection instrument (SurveyMonkey.com). An instruction manual was developed for the form.
Volunteer pediatric practices were recruited from 2 pediatric practice networks representing a range of practice settings, sizes, locations, and medical record systems. Because this was a test of the measurement methodology, rather than an attempt to accurately quantify current national OME practice, no attempt was made to make the proportions of the various practice attributes representative. The American Academy of Pediatrics Quality Improvement Innovation Network (QuIIN) collaborates to improve care and outcomes by testing tools, measures, and strategies for use in everyday pediatric practice. The network has 151 pediatrician members from 41 states in diverse practice settings. QuIIN has completed 1 project and toolkit20,21 and is undertaking several others. The Cincinnati Pediatric Research Group is a practice-based research network of 65 community child health care providers in 30 practices in the greater Cincinnati area. This network has completed multiple research projects.22 Several of the QuIIN practices subsequently participated in a quality improvement project to improve adherence to the OME guidelines.23
Practices were instructed in using claims data to identify medical records for 30 patients with OME using International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes outlined by the PCPI. Inclusion criteria were: (1) at least 1 visit with an ICD-9 code of 381.10 (chronic serous otitis media), 381.19 (other chronic serous otitis media), 381.20 (chronic mucoid otitis media), 381.29 (other chronic mucoid otitis media), 381.30 (other and unspecified chronic nonsuppurative otitis media), or 381.40 (nonsuppurative otitis media not specified as acute or chronic); (2) a visit date between February 1, 2008, and January 31, 2009; and (3) patient age between 2 months and 12 years at the time of the visit.
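The inclusion criteria above amount to a simple filter over claims records. A minimal sketch of that logic follows; the function and field names are hypothetical (chosen for illustration, not drawn from the study's tooling), and the age bounds are interpreted inclusively as 2 through 144 months:

```python
from datetime import date

# ICD-9 diagnosis codes specified by the PCPI for OME.
OME_CODES = {"381.10", "381.19", "381.20", "381.29", "381.30", "381.40"}

# The study's 12-month reporting window.
PERIOD_START = date(2008, 2, 1)
PERIOD_END = date(2009, 1, 31)

def age_in_months(birth_date: date, visit_date: date) -> int:
    """Whole months elapsed between birth and the visit."""
    months = (visit_date.year - birth_date.year) * 12 + (
        visit_date.month - birth_date.month
    )
    if visit_date.day < birth_date.day:
        months -= 1  # the current month is not yet complete
    return months

def visit_is_eligible(icd9_code: str, visit_date: date, birth_date: date) -> bool:
    """Apply the study's 3 inclusion criteria to a single claims record."""
    return (
        icd9_code in OME_CODES
        and PERIOD_START <= visit_date <= PERIOD_END
        and 2 <= age_in_months(birth_date, visit_date) <= 12 * 12
    )
```

In practice such a filter would run against a claims extract; as the Results note, the step that failed most often was not the filter itself but the upstream coding of visits with 381.xx rather than 382.xx codes.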
Physicians and/or their office staff performed the chart abstractions at QuIIN practices. Each abstractor attended a 1-hour, conference call training session during which the medical record review form and manual were reviewed. A first reviewer then abstracted 30 eligible medical records and a second reviewer independently reabstracted 10 of the records for tests of interrater reliability (IRR). Data were entered directly into the online database. Abstractors could send questions via e-mail as they conducted reviews.
Project staff performed the medical record abstractions onsite at Cincinnati Pediatric Research Group practices after receiving an orientation to the medical record system of the practice and location of key data elements. Results of abstractions were entered into a spreadsheet and transferred to the online database on return to the project office.
Four of 23 practices recruited were unable to identify any eligible visits using the specified OME diagnosis codes. A total of 19 practices contributed abstraction data, representing 10 states (FL, GA, IL, IN, KY, NC, NY, OH, PA, and VA) and urban, suburban, and rural locations. Nine practices had a fully integrated electronic medical record and 10 used a paper system. All practices had Internet access. Sixteen of the pediatric practices encompassed a total of 92 pediatricians (range: 1–15); the remaining 3 practices were nongovernmental hospital clinic sites staffed by clinical faculty and residents.
Of the 19 practices submitting data, 15 (79%) could not identify a sufficient quantity of eligible medical records using the specified codes and, therefore, completed <30 abstractions. Three practices abstracted >30 charts, completing all eligible records for the 12-month reporting period.
A total of 378 chart abstractions (range: 3–37) and 117 reabstractions (range: 1–10) were performed. The numbers, percentages, and specific diagnoses for eligible charts are summarized in Table 2. Based on 368 abstractions for which time data were available, the median time to complete a review was 4 minutes. This did not include the time for selecting or replacing charts.
Table 3 summarizes performance on the 5 OME measures. Use of pneumatic otoscopy or tympanometry for diagnostic evaluation was 33% (range: 0%–100%). The hearing evaluation measure applied to only 25% (93 of 378) of cases. Among these children, hearing evaluation was completed only 29% of the time. Performance on each of the 3 measures of avoidance of inappropriate use of medications was at least 87%.
The diagnostic and hearing evaluation measures demonstrated considerable variation (range: 0%–100% across participating practices), as well as overall poor performance (33% and 29%, respectively). Performance on the 3 measures of avoidance of inappropriate use of medications was uniformly high at the overall and practice group level. Most of the between-practice variability on the systemic antimicrobial and systemic corticosteroid measures was due to low performance by 1 practice in each of the 2 practice groups. Little between-practice variability was seen on the avoidance of antihistamines or decongestants measure.
Among 375 charts reviewed for use of antimicrobial agents, 131 (35%) documented concurrent use of antibiotics in a child with a diagnosis of OME. In >70% of these patients (n = 94), a reason for use of the antimicrobial agent was documented (Fig 1). For 15 patients (16%), the documented reason was appropriate (eg, presence of a comorbid condition requiring an antibiotic). In 31 of these patients' records (33%), the reason given for the use of antibiotics was related to chronic OME. In another 30 patients (32%), the reason for the visit may have been wrongly coded as OME. In 22 of these 30 patients, the reason listed in the medical record for antimicrobial use was acute otitis media, despite the fact that the case had qualified for the study with a diagnosis code of OME associated with the visit. An additional 7 patients were given an antimicrobial prescription to be used if symptoms worsened, which is considered use of a “safety net prescription” for acute otitis media.26 In another patient, the antibiotic prescribed was an antibiotic suspension used for otitis externa, suggesting this visit was also assigned the wrong code. Finally, the reason documented for antimicrobial use seemed inappropriate for 18 (19%) of the cases (eg, viral illnesses, ear pain).
Results of IRR testing are displayed in Table 4. IRR scores for whether antimicrobial agents (κ = 0.70) or corticosteroids (κ = 0.81) were prescribed reached acceptable levels.24,25 The IRR for being able to ascertain whether there was documentation of medical reason(s) for prescribing antimicrobial agents (κ = −0.10) was very low. There were too few interrater results for the reasons for prescribing antihistamines/decongestants and corticosteroids to draw conclusions.
OME is an important pediatric condition for performance measurement because it is both prevalent and expensive. A 2003 review estimated that ~2.2 million episodes occur annually in the United States, with estimated costs of $4.0 billion.8 Reducing inappropriate antibiotic use is a national health priority for the US Department of Health and Human Services,27 the Institute of Medicine,28 and the Centers for Disease Control and Prevention.29 A measure of the inappropriate use of antibiotics in the care of children with OME is being considered by the Children's Health Insurance Program Reauthorization Act state demonstration grants.30 The specific topics of OME measurement (eg, diagnostic evaluation, appropriate antibiotic use) also have high face validity, because they are based on clinical practice guideline recommendations of level A or B evidence.5,7 However, in our test, several challenges existed that will need to be addressed to ensure accurate measurement of OME care in general pediatric practices.
Despite the high prevalence of OME, most practices were unable to identify a sufficient number of eligible cases. Only 4 of the 23 practices had sufficient visits coded with the specified OME diagnosis codes to reach the recommended 30 charts during a 12-month measurement period. Our results suggest that primary care clinicians may not be coding visits for OME correctly. Among QuIIN practices participating in the subsequent OME quality improvement project, some reported that their clinicians use otitis media codes (ICD-9 382.xx) rather than OME codes (ICD-9 381.xx), primarily out of habit. Others reported differential reimbursement rates according to type of otitis media code, particularly from Medicaid. One practice noted that the list of diagnosis codes posted in their examination rooms (and, therefore, available for physicians to code on visit summary sheets) did not include any codes for OME. Another clinician noted that his practice had gotten away from the use of the designation OME because it confuses discussions with parents for whom the suffix “-itis” seems to imply an infectious process.
Measures should produce consistent (reliable) and credible (valid) results.31 The 3 treatment measures (antimicrobial agents, antihistamines, and corticosteroids) demonstrated moderate to acceptable IRR and almost no variability in existing performance, suggesting either that clinical practice is in accordance with recommendations or that the measures did not allow discrimination between appropriate and inappropriate medication use.
Two measures—hearing evaluation and the avoidance of inappropriate use of antimicrobial agents—illustrated additional concerns. The small numbers of children eligible for hearing evaluation, even among practices with sufficient overall sample sizes, suggest this measure may be particularly difficult to monitor at an individual practice level without additional case-finding modifications. This was true despite the initial modifications we made to the measure specification. Reviewers found it difficult to determine from the medical record whether the child was at risk, as well as the duration of the OME diagnosis. Participants also reported that hearing evaluation measure scores were artificially low because the measure gave no credit when a patient had been referred for hearing testing but the testing had not yet been completed or the results had not yet been transmitted from the audiologist to the pediatric practice.
In the antimicrobial measure, the numerator was the number of patients not prescribed antimicrobial agents and the denominator was the number of patients with a diagnosis of OME minus those patients who had a documented medical reason for being prescribed an antimicrobial. Using this definition, the performance measure score was 87% (244 of 281 patients) (Table 3 and Fig 1). However, this score did not take into account those cases in which antimicrobial agents were correctly prescribed or whether the visit was appropriately coded. Patients were excluded when the clinician documented a medical reason for antimicrobial use for 2 reasons: (1) to account for concurrent antibiotic prescription for a comorbid condition; and (2) because the OME guideline, which recommends against the routine use of antimicrobial therapy, notes that antibiotics can be considered as an option, although their demonstrated efficacy is limited to short-term benefit in randomized trials.7 However, our analysis of the specific reasons documented in patients who received antimicrobial agents suggests that this definition may produce artificially high performance rates. In our study, one third of the documented reasons for antimicrobial use were for OME. Although the use of an antibiotic may have been considered appropriate in some of these cases, we expect that the majority were not in compliance with the intent of the guideline to avoid routine use.
Exception methodology is an approach that can be useful to help explain variations in care. The use of exception methodology for performance measures was designed by the PCPI to ensure that its metrics are reflective of appropriate clinical action.32 This method assesses the frequency, classification, and appropriateness of exception data (eg, the reasons an antibiotic was prescribed).33 As an example, if the denominator could be recalculated by excluding only the 15 patients who were probably prescribed antibiotics appropriately, the performance measure score would be much lower: 68% (244 of 360 patients). This measure takes into account both the 79 cases for which an antibiotic was likely inappropriately prescribed and the 37 cases for which no reason for prescribing was documented by including them in the denominator.
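The two scoring rules differ only in which cases are excluded from the denominator. A short sketch of the arithmetic, using the counts reported in Fig 1 and Table 3 (variable names are ours):

```python
# Counts reported in the study.
total_with_ome = 375       # charts reviewed for antimicrobial use
prescribed = 131           # antibiotics prescribed concurrently with OME
documented_reason = 94     # any reason for prescribing documented
appropriate_reason = 15    # documented reason judged appropriate

not_prescribed = total_with_ome - prescribed  # numerator: 244

# Original specification: exclude every case with a documented reason.
original_score = not_prescribed / (total_with_ome - documented_reason)
print(f"{original_score:.0%}")   # 87% (244 of 281)

# Exception methodology: exclude only appropriately prescribed cases, so the
# 79 likely inappropriate prescriptions and the 37 prescriptions with no
# documented reason remain in the denominator.
exception_score = not_prescribed / (total_with_ome - appropriate_reason)
print(f"{exception_score:.0%}")  # 68% (244 of 360)
```

The 19-point gap between the two scores is driven entirely by the denominator: the original rule treats any documented reason as a valid exclusion, whereas the exception approach keeps unjustified prescriptions in the population being measured.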
Of note, a challenge not addressed by this adjustment is the 30 cases that were included in the denominator but subsequently determined, based on the reason documented for prescribing antimicrobial agents, to have been wrongly coded as OME.
Reports based on widespread use of exception reporting in the United Kingdom as part of a pay-for-performance demonstration found that it was useful and did not seem to promote “gaming” of the system.34 The exception categories we have outlined may provide an initial framework for the American Medical Association performance metric methods research. This specificity about what constitutes an appropriate reason for prescribing an antimicrobial for a child with OME could also help clinicians reflect on their decision making when it may be difficult to disappoint a parent seeking an antibiotic prescription.
Children with OME are cared for by a variety of clinicians—pediatricians, family physicians, and otolaryngologists. Because we included only primary care pediatricians in our evaluation, we cannot comment on whether the problems we observed with insufficient case finding and documentation of the reasons for antimicrobial use might extend to other provider categories.
Results were obtained on the basis of patients included in a convenience sample of practices, and no attempt at randomization was made. Certain biases may exist in a convenience sample; for example, clinicians may be more likely to volunteer for participation if they are more interested in a topic and perhaps more adherent to evidence-based care.
Medical records may lack the necessary documentation to offer a reliable summary of the clinical care provided.35,36 In addition, bias may have been introduced because abstractors for the QuIIN practice group were practice staff (and possibly physicians) and may have had an increased familiarity with their own practices' medical records and where to look for specific documentation or how to interpret abbreviations.
Despite these limitations, we believe this study highlights some important issues that will need to be considered in the use of performance measures for OME.
OME is an important pediatric and public health issue. However, a number of issues will need to be resolved before OME measures can be used in national reporting efforts, including case finding and, potentially, reimbursement incentives for appropriate coding. In particular, the measure of the appropriateness of antibiotic use should be modified using exception methodology to clarify specific definitions and ICD-9 codes for conditions under which the use of antibiotics is acceptable for OME. Importantly, with advances in and adoption of health information technology by primary care practices over time, the use of text mining may allow the identification of appropriate diagnoses, procedures, and therapeutic exceptions in the electronic medical records of children with OME.
Our findings underscore the importance of legislative mandates, such as the Children's Health Insurance Program Reauthorization Act, to evaluate the measures that will be used to assess the quality of care for children and adolescents. They further highlight multiple challenges in translating clinical guidelines into useful performance metrics for national performance reporting. The findings emphasize the importance of testing measures in the real-world settings where care will be assessed before the metrics are used to hold clinicians accountable.
This project was supported by cooperative agreement U18 HS016957-03 from the Agency for Healthcare Research and Quality.
The authors express their appreciation and acknowledge the contributions of the following networks and individuals: American Academy of Pediatrics QuIIN (Steven W. Kairys, MD, MPH, FAAP, Medical Director, QuIIN; William L. Stewart, MD, FAAP, QuIIN Steering Committee Member; Keri Thiessen, MEd, QuIIN Manager; Jill Healy, MS, QuIIN Project Manager), the Cincinnati Pediatric Research Group (Christopher Bolling, MD, FAAP, Medical Director), and the participating practices.
The authors thank Allan S. Lieberthal, MD, FAAP, Pediatrics, Kaiser Permanente (Panorama City, CA) and Richard Rosenfeld, MD, MPH, Professor and Chair, Otolaryngology, SUNY Downstate Medical Center (Brooklyn, NY). Dr Lieberthal was a member of and Dr Rosenfeld was co-chair of the American Academy of Pediatrics Subcommittee that developed the guidelines on OME; they were co-chairs of the PCPI Measurement Workgroup on Otitis Media With Effusion Physician Performance Measurement Set.
The authors also thank Samantha Tierney, MPH, Senior Policy Analyst, Physicians Consortium for Performance Improvement, American Medical Association and Gregory Wozniak, PhD, Director, Measure Analytics and Economic Evaluation, the American Medical Association.
In addition, the authors thank Adam C. Carle, PhD, Assistant Professor of Pediatrics, University of Cincinnati School of Medicine, Division of Health Policy and Clinical Effectiveness, Cincinnati Children's Hospital Medical Center; Brooke Mullett, Project Manager, Center for Education and Research on Therapeutics, Cincinnati Children's Hospital Medical Center; and Pamela J. Schoettker, MS, Medical Writer, Division of Health Policy and Clinical Effectiveness, Cincinnati Children's Hospital Medical Center, Cincinnati, OH.
Dr Lannon participated in conception and design; acquisition of data; analysis and interpretation of data; drafting of the manuscript; critical revision of the manuscript for important intellectual content; obtaining funding; and supervision. As the project's principal investigator, Dr Lannon had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Ms Peterson participated in analysis and interpretation of data; drafting of the manuscript; critical revision of the manuscript for important intellectual content; and administrative, technical, or material support. Dr Goudie participated in analysis and interpretation of data; drafting of the manuscript; and statistical analysis. All of the authors approved the manuscript as submitted for publication.
The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality. The agency did not participate in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or in the preparation, review, or approval of the manuscript.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.