We audited POA reporting in the 2005 California patient discharge data (PDD) by comparing its accuracy against a gold standard created from blinded reabstraction of the corresponding medical records. California is one of only two states (the other being New York) with more than a decade of experience requiring hospitals to report POA status as part of routine hospital discharge reporting. We used our reabstraction-based gold standard to identify patterns of POA under-reporting and over-reporting, including systematic tendencies associated with hospital characteristics.
The California PDD includes patient demographic, diagnostic, procedure, and disposition codes for approximately 3.7 million hospitalizations per year from all nonfederal, nonchildren's California acute care hospitals (N = 355). Using a research file that included hospital and patient identifiers, we selected a probability sample of records for review. For efficiency, we used a complex sampling design that randomly selected hospitals with probability proportional to their number of eligible patient records. Other hospital characteristics were not included in the sampling design.
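The hospital selection step can be sketched as a systematic probability-proportional-to-size (PPS) draw, a standard way to select hospitals in proportion to their number of eligible records. This is an illustrative sketch under our own assumptions (function name and the systematic-PPS variant), not the study's actual sampling program:

```python
import itertools
import random

def pps_systematic(sizes, n, seed=0):
    """Systematic PPS sampling: select n units with probability
    proportional to sizes[i] (here, eligible records per hospital).
    A very large unit can be selected more than once if its size
    exceeds the sampling interval."""
    rng = random.Random(seed)
    total = sum(sizes)
    step = total / n                      # sampling interval
    start = rng.uniform(0, step)          # random start in [0, step)
    points = [start + i * step for i in range(n)]
    cum = list(itertools.accumulate(sizes))
    chosen, i = [], 0
    for p in points:
        while cum[i] < p:                 # advance to the unit whose
            i += 1                        # cumulative size covers p
        chosen.append(i)
    return chosen

# A hospital with many eligible records dominates the draw:
sel = pps_systematic([1000, 1, 1, 1], 2)
```

In this toy example both selection points fall within the first hospital's cumulative range, so hospital 0 is drawn twice, which is the intended behavior of size-proportional selection.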
Eligible patient records were those with International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes corresponding to the procedure percutaneous transluminal coronary angioplasty or one of three principal diagnoses: acute myocardial infarction (Romano and Luft 1996), community-acquired pneumonia (Center 2006), or congestive heart failure (AHRQ 2006; see Appendix 2a). We selected these four “umbrella conditions” because they are common and associated with relatively high mortality. The California Office of Statewide Health Planning and Development (2008) currently produces public reports of hospital quality for two of these conditions: acute myocardial infarction (Romano and Luft 1996) and community-acquired pneumonia (Center 2004).
In addition, to be eligible, a record also had to have one of two prespecified acute secondary diagnoses that, depending on the clinical circumstances, could be regarded as either a comorbidity or a hospital complication of the umbrella condition (see Appendix 2b). These diagnoses were selected following a literature review indicating their importance as predictors of mortality. We prespecified the acute secondary diagnoses for sampling to ensure adequate sample size for individual influential secondary diagnoses, recognizing that POA reporting accuracy could differ by diagnosis. Among cases whose principal “umbrella condition” was acute myocardial infarction, we sampled cases that also had shock or pulmonary edema listed as a secondary diagnosis; statewide, shock occurred in 5.7 percent and pulmonary edema in 8.7 percent of acute myocardial infarction cases. For cases whose umbrella condition was congestive heart failure, we sampled cases with secondary diagnoses of acute myocardial infarction or acute renal failure (Krumholz et al. 1997; Smith et al. 2006); acute myocardial infarction occurred in 2.19 percent and renal failure in 8.7 percent of congestive heart failure cases statewide. For community-acquired pneumonia, we sampled cases with secondary diagnoses of septicemia (Iezzoni et al. 1992; Fine et al. 1996) or respiratory failure (Haas et al. 2000); septicemia occurred in 3.8 percent and respiratory failure in 7.7 percent of community-acquired pneumonia cases. For percutaneous transluminal coronary angioplasty cases, we sampled cases with secondary diagnoses of acute myocardial infarction (Moscucci et al. 2001) or acute renal failure (Best et al. 2002); acute myocardial infarction occurred in 2.8 percent and acute renal failure in 3.2 percent of percutaneous transluminal coronary angioplasty cases.
To prevent any single hospital from disproportionately influencing our sample, we capped the number of records for each “umbrella condition–secondary diagnosis” combination at 10 per hospital. Thus, the maximum number of records from any hospital was 80 (8 combinations × 10 records), and only one of our 48 sampled hospitals (2.1 percent) reached that cap.
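The cap amounts to a simple within-stratum rule: take every record when a hospital's stratum has 10 or fewer eligible records, and a random subsample of 10 otherwise. A minimal sketch (the function name and data layout are our assumptions, not the study's code):

```python
import random

def cap_records(records_by_stratum, cap=10, seed=0):
    """Within each (hospital, umbrella condition-secondary diagnosis)
    stratum, keep all records if there are <= cap of them; otherwise
    draw a simple random subsample of size cap."""
    rng = random.Random(seed)
    sampled = {}
    for stratum, records in records_by_stratum.items():
        if len(records) <= cap:
            sampled[stratum] = list(records)
        else:
            sampled[stratum] = rng.sample(records, cap)
    return sampled

strata = {("hosp_A", "AMI-shock"): list(range(25)),
          ("hosp_A", "CHF-ARF"): list(range(4))}
capped = cap_records(strata)
# the 25-record stratum is capped at 10; the 4-record stratum is kept whole
```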
We used two types of abstractors to review the medical records: health information technicians (HITs) and registered nurses (RNs). HITs mirror the type of personnel routinely employed in hospital medical records departments to perform administrative coding, including POA coding. All HITs employed for this study held Registered Health Information Technician certification and had previously worked as inpatient medical coders for at least 5 years. We separately employed RNs with at least 5 years of experience reabstracting inpatient medical records to gauge whether someone with clinical training would make a different determination about whether a secondary diagnosis was present on admission. All abstractors (five HITs and five RNs) participated in a 40-hour training session led by the study team, which included instruction in the data collection tool, standardized training examples, and feedback on sample medical record abstractions. Abstractors were instructed to apply the POA reporting directions provided to hospitals by the California Office of Statewide Health Planning and Development. Under these instructions, a diagnosis was to be recorded as present on admission when it was documented by a physician in the admission note. Chronic diagnoses (e.g., diabetes) identified during the hospitalization were considered present on admission, as were conditions suspected at the time of admission (e.g., noted in an emergency room or admitting physician note). Abstractors were to record a condition as not present on admission when there was no physician documentation of the condition in the admission or emergency department note and no signs or symptoms of the condition on admission.
Health information technicians reviewed the medical records blinded to the hospital-submitted data, following standard-practice medical record coding rules. In contrast, the RNs had the unblinded list of diagnostic codes that the hospitals submitted to the California PDD. The RNs first determined whether the codes listed for the principal and all secondary diagnoses were correct, and only then determined, blinded to the hospital's POA reporting, whether each diagnosis was present on admission. For quality control, 9.4 percent (n = 147) of the 1,557 records abstracted by the HITs and 18.6 percent (n = 307) of the 1,688 records reviewed by the RNs were abstracted twice. Depending on whether a record was reviewed once or twice by an HIT and by an RN, each record was abstracted two to four times.
Using the multiple abstractions for each record, we created gold standards for POA reporting accuracy for the two prespecified acute secondary diagnoses associated with each of the four umbrella conditions. For a case to enter the gold standard sample, more than one reviewer had to review it, and the reviewers had to agree with the PDD on the accuracy of the sampled umbrella condition and secondary diagnosis codes. Then, for records with two or three abstractions, all abstractions had to agree on the POA status of the specified secondary diagnosis; for records with four abstractions, at least three of the four had to agree. For the remaining records (with two or more abstractions) lacking consensus as defined above, physicians blindly adjudicated POA status to make the final gold standard determination.
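The consensus rule for admitting a case to the gold standard can be stated compactly in code. The sketch below assumes each abstraction's POA call is recorded as True (present on admission) or False; the function name and return labels are ours, not the study's:

```python
def gold_standard_poa(poa_calls):
    """Apply the study's consensus rule to 2-4 POA determinations for
    one secondary diagnosis. Returns the consensus POA status, or
    'adjudicate' to send the record to blinded physician review."""
    n = len(poa_calls)
    yes = sum(bool(c) for c in poa_calls)
    if n in (2, 3):                  # two or three abstractions: unanimity required
        if yes == n:
            return "present on admission"
        if yes == 0:
            return "not present on admission"
    elif n == 4:                     # four abstractions: at least three must agree
        if yes >= 3:
            return "present on admission"
        if yes <= 1:
            return "not present on admission"
    else:
        raise ValueError("expected 2-4 abstractions")
    return "adjudicate"              # no consensus under the rule above
```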
POA can be misreported in two ways. Over-reported secondary diagnoses were those the PDD recorded as present on admission but the gold standard assessed as not present on admission. Under-reported secondary diagnoses were those the PDD recorded as not present on admission but the gold standard assessed as present on admission.
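These two error types map each audited diagnosis into one of three cells; tallying the two discordant cells across cases yields the counts that McNemar's test (described below) compares. A minimal sketch, with labels of our choosing:

```python
from collections import Counter

def classify_poa(pdd_poa, gold_poa):
    """Compare the PDD's POA flag with the gold standard determination
    for one secondary diagnosis."""
    if pdd_poa and not gold_poa:
        return "over-reported"       # PDD says POA, gold standard says not POA
    if not pdd_poa and gold_poa:
        return "under-reported"      # PDD says not POA, gold standard says POA
    return "accurate"

# Hypothetical (pdd_poa, gold_poa) pairs for four audited diagnoses:
audit = [(True, False), (True, True), (False, True), (True, False)]
counts = Counter(classify_poa(p, g) for p, g in audit)
```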
We linked hospital characteristics available in the 2005 California Annual Financial Database, including teaching status, ownership, percent profit margin, number of staffed beds, and percent of discharges reimbursed by Medicaid, to each sampled case in the California PDD (Iezzoni et al. 1988; Lorence 2003; Lorence and Ibrahim 2003a; Preyra 2004; Goldman et al. 2007; Santos et al. 2008). Teaching hospitals were those participating in the Council of Teaching Hospitals and Health Systems. We categorized hospitals as for-profit or not-for-profit (aggregating government and nonprofit hospitals). Percent profit margin, number of staffed beds, and percent of discharges reimbursed by Medicaid were categorized into quartiles based on the distribution of these variables among California hospitals. Sensitivity analyses using tertiles and quintiles of these variables yielded results similar to the quartile-based analyses, so for ease of presentation only the quartile results are shown.
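Quartile categorization of the continuous hospital characteristics amounts to assigning each hospital a group from cut points estimated on the statewide distribution. A sketch using the Python standard library (an illustration only; the study used its own analytic software, and ties at a cut point could be assigned either way):

```python
import bisect
import statistics

def quartile_groups(values):
    """Assign each value to quartile 1-4 using the three interior cut
    points estimated from the distribution of the values themselves."""
    cuts = statistics.quantiles(values, n=4)       # three cut points
    return [bisect.bisect(cuts, v) + 1 for v in values]

groups = quartile_groups([1, 2, 3, 4, 5, 6, 7, 8])
# eight evenly spaced values split into four groups of two
```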
We examined the accuracy of POA reporting in the California PDD, as compared with the gold standard, overall and for each of the eight combinations of umbrella condition and secondary diagnosis. We tested for bias toward over-reporting or under-reporting POA using McNemar's test. In stratified analyses, we examined whether the accuracy of POA reporting varied by hospital characteristics, including teaching status, ownership, number of staffed beds, percent profit margin, and percent of discharges reimbursed by Medicaid (Iezzoni et al. 1988; Lorence 2003; Lorence and Ibrahim 2003a; Preyra 2004; Goldman et al. 2007; Santos et al. 2008), and by whether the umbrella condition was publicly reported in California (acute myocardial infarction and community-acquired pneumonia). We also tested (using a z-test) the hypothesis of differential POA over-reporting for secondary diagnoses whose umbrella condition was publicly reported. To account for clustering of data within hospitals, we used hierarchical logistic regression (SAS PROC GLIMMIX; SAS Institute Inc., Cary, NC, USA) to identify hospital characteristics predictive of over-reporting and under-reporting, while controlling for patient characteristics, including age, sex, in-hospital death, and the prespecified acute secondary diagnosis. We included in the multivariable analyses only those hospital and patient characteristics that were statistically significant at p < .1 in bivariate analyses.
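McNemar's test compares only the two discordant counts, over-reported diagnoses (b) and under-reported diagnoses (c); under the null hypothesis of no directional bias, b and c should be similar. A continuity-corrected version can be computed with the standard library (an illustrative sketch, not the study's SAS analysis code):

```python
import math

def mcnemar_test(b, c):
    """Continuity-corrected McNemar chi-square on the discordant counts
    b (over-reported) and c (under-reported). The p-value is the 1-df
    chi-square tail probability, computed via the complementary error
    function: P(X > x) = erfc(sqrt(x / 2)) for X ~ chi-square(1)."""
    if b + c == 0:
        raise ValueError("no discordant pairs to test")
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

chi2, p = mcnemar_test(20, 10)   # chi2 = (|20-10| - 1)**2 / 30 = 2.7
```

For example, 20 over-reported versus 10 under-reported diagnoses gives a corrected chi-square of 2.7, which does not reach conventional significance; a more lopsided split such as 5 versus 50 does.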