Using electronic medical data, we calculated emergency department physician performance and subsequent outcomes on a measure used in the Centers for Medicare & Medicaid Services’ Physician Quality Reporting System. The measure assesses use of guideline-recommended antibiotics for community-acquired pneumonia. Physicians met measure criteria in 70.6% of cases at one institution. Among patients admitted to the hospital, measure-compliant cases had a significantly shorter length of stay, lower costs, and lower intensive care utilization than measure failures. For measure failures admitted to the hospital, antibiotic treatment was adjusted to be measure compliant within 48 hours in 57.1% of cases. Use of electronic performance measurement for antibiotic treatment of community-acquired pneumonia identified variations in physician performance. Measure compliance correlated with significantly improved patient outcomes and lower costs.
The United States federal government and private payers are increasingly looking to value-based purchasing to improve the quality of care and reduce healthcare costs. One of the core tenets of value-based purchasing is incentive payments for reporting performance data. Notable examples include the Meaningful Use Incentive Program and the Physician Quality Reporting System (formerly Physician Quality Reporting Initiative). Although there are some data to suggest that pay for performance is an effective means of quality improvement, results are mixed.1
One of the limitations of quality measurement in its current state is reliance on administrative (claims) data, which comprise diagnoses, procedures, and demographic data about patient encounters. Secondary use of claims data for measurement is often hindered by a dearth of clinical detail and the reality that coding is intended for reimbursement, not quality improvement. Additionally, not all services provided and relevant clinical data are captured in claims data.2 Finally, administrative data do not capture patient preferences, which are important for identifying valid numerator exclusions.3
Electronic Medical Records (EMRs) offer a potential solution to the limitations of administrative data. Comprehensive implementation of EMRs offers detailed clinical data to capture and analyze complex clinical scenarios. Patient preferences and global reasons for measure exclusion—such as a patient receiving palliative measures only—can be used for performance calculation. Additionally, EMRs may allow automatic capture of quality measurement data without additional documentation from clinicians.4 However, robust testing is necessary to ensure that electronic quality measurement is an improvement over the use of claims data.5
Community-acquired pneumonia (CAP) is a frequent topic of quality improvement because it is a common disease, has high costs, and is a frequent cause of mortality.6, 7 Additionally, adherence to antibiotic guidelines results in improved outcomes and reduced costs.8, 9 Our aim was to determine whether use of an electronic quality measure identifies variations in care.
We focused on a single metric of physician-level measurement of pneumonia treatment: use of guideline-recommended antibiotics for CAP. First, we determined whether performance could be calculated without physician documentation specific to quality measurement. Second, we established the physician performance rate at our institution and identified reasons for measure failures. Finally, we ascertained whether measure compliance correlates with higher-quality, lower-cost care.
Our study utilized the physician performance measure “Empiric Antibiotic for Community-Acquired Bacterial Pneumonia” developed by the American Medical Association-convened Physician Consortium for Performance Improvement (PCPI).10 This metric was based on guidelines from the Infectious Diseases Society of America (IDSA) and the American Thoracic Society.11, 12 The measure is part of the Centers for Medicare & Medicaid Services (CMS) Physician Quality Reporting System and is endorsed by the National Quality Forum.
The measure is intended to be used to assess the quality of care provided by physicians in the emergency department and ambulatory settings. We applied the measure to emergency department physicians only.
Rush University Medical Center (RUMC) is a 676-bed academic hospital located in Chicago, Illinois. RUMC operates a comprehensive emergency department (ED), open 24 hours a day; in 2009, the medical center had 46,471 ED encounters. RUMC, including the ED, uses the Epic EMR for nearly all aspects of patient care, including documentation, ordering, and viewing laboratory and imaging results. During the study period, ED physicians had access to a CAP-specific antimicrobial order set in the EMR; this order set consisted of IDSA recommendations for appropriate antimicrobial therapy for CAP.
RUMC has developed an integrated clinical and financial data warehouse which includes selected clinical data extracted from the EMR combined with administrative data.
The physician hospital association affiliated with RUMC is a CMS approved Physician Quality Reporting System registry. Using this registry, we identified all patients seen in the RUMC ED who met CMS criteria for reporting on the measure in calendar year 2009.
The denominator of the measure is “All patients aged 18 years and older with a diagnosis of community-acquired pneumonia.” Inclusion in the measure denominator is algorithmically determined using International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnosis codes and Current Procedural Terminology codes. Cases are included in the measure denominator if the professional services billing data for an ED encounter included one of the ICD-9-CM diagnosis codes for community-acquired pneumonia in the measure specifications. To meet the measure numerator criteria, “Patients with appropriate empiric antibiotic prescribed,” physicians must prescribe one of these four regimens: 1) fluoroquinolone, 2) macrolide, 3) doxycycline, or 4) beta-lactam with macrolide or doxycycline. Measure compliance was determined by assessing prescription or administration of one of these regimens while the patient was in the ED. Measure failures were deemed corrected if the antibiotic regimen was switched to be measure concordant as an inpatient within 48 hours of presentation to the ED.
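The numerator logic reduces to a simple set check, since regimen 4 (beta-lactam plus macrolide or doxycycline) is already covered whenever a macrolide or doxycycline is present. A minimal sketch of that logic follows; the drug names and function are illustrative assumptions, not the registry's actual implementation:

```python
# Hypothetical sketch of the measure's numerator check. Drug lists are
# abbreviated examples of each class, not the full measure specification.
FLUOROQUINOLONES = {"levofloxacin", "moxifloxacin", "gemifloxacin"}
MACROLIDES = {"azithromycin", "clarithromycin", "erythromycin"}
DOXYCYCLINE = {"doxycycline"}

def meets_numerator(ed_antibiotics):
    """Return True if antibiotics prescribed or administered in the ED
    match one of the four measure-concordant regimens."""
    drugs = {d.lower() for d in ed_antibiotics}
    # Regimen 1: fluoroquinolone monotherapy
    if drugs & FLUOROQUINOLONES:
        return True
    # Regimens 2-4: macrolide or doxycycline, alone or with a beta-lactam
    if drugs & MACROLIDES or drugs & DOXYCYCLINE:
        return True
    # A beta-lactam alone (the most common failure pattern) fails here
    return False
```

Under this sketch, ceftriaxone alone fails the numerator, while ceftriaxone plus azithromycin passes, matching the failure pattern described in the results.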
For patients in the study, we queried the RUMC integrated data warehouse for sociodemographic variables (age, sex, race/ethnicity, and primary payer). We also extracted data on clinical outcomes, including admission to the hospital, in-hospital mortality, all cause 30-day readmission, intensive care utilization (admission to the intensive care unit [ICU] and days in the ICU), antibiotic administration, direct costs and total costs. Severity of illness was measured as a count of comorbidities using Elixhauser’s coding algorithm from the Healthcare Cost and Utilization Project’s Clinical Classification Software.13
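Counting comorbidities with the Elixhauser algorithm amounts to mapping each patient's ICD-9-CM codes into diagnosis categories and counting the distinct categories triggered. A minimal sketch, using a hypothetical and heavily abbreviated category mapping:

```python
# Hedged sketch of comorbidity counting. The real Elixhauser mapping spans
# ~30 categories and many more codes; this table is an illustrative subset.
ELIXHAUSER_CATEGORIES = {
    "congestive_heart_failure": {"428.0", "428.1", "428.9"},
    "diabetes_uncomplicated": {"250.00", "250.01"},
    "renal_failure": {"585.9", "586"},
}

def comorbidity_count(patient_dx_codes):
    """Number of distinct comorbidity categories triggered by a patient's
    ICD-9-CM diagnosis codes (each category counts at most once)."""
    codes = set(patient_dx_codes)
    return sum(1 for dx_codes in ELIXHAUSER_CATEGORIES.values()
               if codes & dx_codes)
```

Note that multiple codes in the same category (e.g., two heart failure codes) still contribute a count of one, which is what makes the result a severity proxy rather than a raw code count.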
We compared patient sociodemographic characteristics, severity of illness and clinical outcomes for patients who met measure numerator versus those who failed the measure numerator using Chi-squared tests and independent two sample t-tests. This study was approved by the Rush University Medical Center Institutional Review Board (#10031107-IRB01), including a waiver of informed consent and HIPAA waiver of authorization.
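The two test statistics underlying these comparisons can be sketched in plain Python; this is a teaching sketch only (in practice one would use standard statistical software, which also supplies the p-values):

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]],
    e.g., compliant/failure rows vs. ICU admission yes/no columns."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

def t_statistic(x, y):
    """Independent two-sample t statistic with pooled variance,
    e.g., comparing length of stay between the two groups."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / (sp * math.sqrt(1 / nx + 1 / ny))
```

A perfectly balanced 2x2 table yields a chi-squared statistic of zero, and identical group means yield a t statistic of zero, as expected.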
There were 316 patient cases during calendar year 2009 that were eligible for reporting to the CMS Physician Quality Reporting System. For the eligible cases, 223 (70.6%) met the numerator criteria; i.e., were prescribed guideline-concordant antibiotic treatment of CAP. There were 93 (29.4%) cases that failed the measure numerator. Amongst the 316 cases, 288 (91.1%) were admitted to the hospital and 28 (8.9%) cases were sent home after evaluation in the ED. For the 28 cases sent home from the ED, 26 (92.9%) met numerator criteria. For the 288 cases admitted to the hospital following evaluation and treatment in the ED, 197 (68.4%) met the numerator criteria and 91 (31.6%) failed the measure (Figure 1).
We focused our analysis on the cohort of patients admitted to the hospital, comparing measure-compliant cases to cases that failed the numerator. There were significantly more males among the measure-compliant patients, but the groups were otherwise similar with respect to age, race, primary insurer, and severity of illness (Table 1).
For patients admitted to the hospital, the measure-compliant cases had significantly shorter length of stay, lower costs, and fewer ICU admissions. Mortality, 30-day readmissions, and ICU length of stay did not differ significantly between measure failures and measure-compliant cases (Table 2).
Among the 93 measure failures, 2 patients were discharged from the ED without any antibiotic prescribed or administered. The remaining 91 measure-failure cases were admitted to the hospital having received or been prescribed a beta-lactam alone, without a macrolide or doxycycline. Among these 91 cases, 52 (57.1%) had their antibiotic regimen corrected to be measure concordant within 48 hours of admission; another 6 (6.6%) had their antibiotics switched to be measure concordant 3–10 days after admission. Measure-compliant therapy changes included use of fluoroquinolones (79.3%), macrolides (19.0%), and doxycycline (1.7%).
EMR-driven performance measurement linked to payment incentives will be part of the clinical landscape for years to come. The Meaningful Use Incentive Program alone dictates that quality measurement will be a high-priority topic for the next three to five years. Additional long-term policy initiatives such as the accountable care model will likely necessitate use of electronic quality measurement to meet program mandates or to support operational needs.
Our study used electronic data to calculate physician performance on prescribing guideline recommended antibiotics for CAP. We focused on use of the measure in the ED setting, since other studies have evaluated the measure in an ambulatory environment.14 We strove to study the measure as it is calculated and reported in an ongoing program, CMS’s Physician Quality Reporting System. Performance measurement based on known best practices is assumed, but not proven, to improve processes of care and patient outcomes.
ED physician performance for all 316 patients seen in calendar year 2009 was moderate, 70.6%. Almost all (97.8%) measure failures were patients admitted to the hospital from the ED with only a beta-lactam administered or prescribed; the remaining 2.2% were discharged home without appropriate antibiotics prescribed or administered. Performance at our institution was similar to that seen in other studies that looked at empiric antibiotic treatment of CAP.15
The measure successfully identified variations in care at our institution. Measure-compliant cases had significantly shorter lengths of stay, fewer ICU admissions, and lower direct and total costs. Mortality and 30-day all-cause readmissions were lower in the measure-compliant group, but these differences were not statistically significant. Our results suggest this measure identifies cases where there is an opportunity for quality improvement and potential cost savings. Quality measures are often criticized for measuring processes, rather than outcomes, of care. For this quality measure, completing the process being measured was associated with improved outcomes. We were also able to automate performance calculation using EMR data without additional measurement-specific documentation, an important consideration since quality measurement can be burdensome.
Interpreting our results requires an understanding of limitations of the performance measure, particularly the use of ICD-9-CM codes to identify the denominator cases. ICD-9-CM coding is known to be inaccurate at correctly identifying pneumonia cases.5 Additionally, the IDSA guidelines referenced in the measure make different recommendations depending on the severity of illness.11 The measure does not attempt to replicate this complex logic. Finally, ICD-9-CM codes only capture the high level concept of pneumonia; codes do not exist to distinguish CAP from hospital acquired pneumonia, which requires different antibiotic therapy. Use of a comprehensive standardized vocabulary such as the Systematized Nomenclature of Medicine--Clinical Terms (SNOMED-CT) in the measure specifications may allow more accurate identification of eligible cases of CAP.
Besides use of recommended antibiotics, another plausible explanation for differences in outcomes is that the non-compliant patients were more acutely ill. ICD-9-CM diagnosis codes are not detailed enough to establish pneumonia severity, so we cannot rule this out. However, the compliance and failure groups had similar Elixhauser severity of illness scores and demographic factors. Additionally, 57.1% of measure failures had their antibiotic regimen changed to be measure compliant within 48 hours. This finding suggests that measure compliant antibiotic treatment was, in fact, appropriate in the majority of measure failures. Patients may have benefited from earlier administration of measure compliant antibiotic therapy, regardless of the severity of pneumonia.
During the course of our work, several opportunities for refining the measure became apparent. With the availability of EMR data rather than just claims data, the measure specifications could be expanded to better address the nuances of treating pneumonia. For example, risk of nosocomial infection could be ascertained by looking for recent hospital admissions, history of dialysis, or admission from a long term care facility. Severity of pneumonia could be determined from EMR data or diagnoses coded with standardized medical vocabularies like SNOMED-CT. Performance could then be more accurately assessed based on guideline recommendations for both the severity of pneumonia and likelihood of facility versus community acquired disease.
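The proposed healthcare-associated risk check could be expressed as a simple rule over EMR fields. The sketch below is an illustration of the idea only; the lookback window, field names, and function are assumptions, not part of the measure:

```python
# Hypothetical refinement sketch: flag patients at elevated risk for
# healthcare-associated (rather than community-acquired) pneumonia.
from datetime import date, timedelta

def healthcare_associated_risk(encounter_date, prior_admission_dates,
                               on_dialysis, from_long_term_care,
                               lookback_days=90):
    """Return True if any proposed risk factor is present: a hospital
    admission within the lookback window, chronic dialysis, or admission
    from a long-term care facility. The 90-day window is an assumption."""
    recent_admission = any(
        timedelta(0) <= encounter_date - d <= timedelta(days=lookback_days)
        for d in prior_admission_dates
    )
    return recent_admission or on_dialysis or from_long_term_care
```

Cases flagged by such a rule could be excluded from the CAP denominator, or scored against a different regimen list, improving the measure's accuracy.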
Our findings are restricted to this specific measure and cannot be generalized to other measures, since each quality measure is different. However, our results suggest that other measures could be evaluated with a similar approach. Additional investigation is necessary to validate that our findings would hold across other EMR products and institutions; a multi-center study would be beneficial. Finally, more research is needed to understand the variation in care noted at our institution, including the contribution of pneumonia severity to differences in outcomes.
Our study demonstrates that electronic performance measurement is feasible and valid for one nationally endorsed quality measure, without requiring clinical documentation specific to the measure. Use of the measure at our institution successfully identified variations in physician compliance with guideline recommendations for empiric antibiotic treatment of CAP. Most measure failures were corrected to guideline-recommended antibiotics after admission to the hospital. Measure-compliant cases had improved clinical outcomes, including significantly shorter length of stay, lower ICU utilization, and lower costs.