Health Serv Res. Dec 2011; 46(6 Pt 1): 1946–1962.
PMCID: PMC3393034
The Accuracy of Present-on-Admission Reporting in Administrative Data
L. Elizabeth Goldman, M.D., M.C.R.,1 Philip W. Chu, M.S.,2 Dennis Osmond, Ph.D.,3 and Andrew Bindman, M.D.2
Department of Medicine, University of California–San Francisco, 1001 Potrero Ave., San Francisco, CA 94110
Department of Medicine, University of California–San Francisco, San Francisco, CA
Department of Epidemiology and Biostatistics, University of California-San Francisco, San Francisco, CA
Address correspondence to L. Elizabeth Goldman, M.D., M.C.R., Department of Medicine, University of California–San Francisco, 1001 Potrero Ave., San Francisco, CA 94110; e-mail: legoldman@medsfgh.ucsf.edu.
Objective
To test the accuracy of reporting present-on-admission (POA) and to assess whether POA reporting accuracy differs by hospital characteristics.
Data Sources
We performed an audit of POA reporting of secondary diagnoses in 1,059 medical records from 48 California hospitals.
Study Design
We used patient discharge data (PDD) to select records with secondary diagnoses that are powerful predictors of mortality and could potentially represent comorbidities or complications among patients who either had a primary procedure of a percutaneous transluminal coronary angioplasty or a primary diagnosis of acute myocardial infarction, community-acquired pneumonia, or congestive heart failure. We modeled the relationship between secondary diagnoses POA reporting accuracy (over-reporting and under-reporting) and hospital characteristics.
Data Collection
We created a gold standard from blind reabstraction of the medical records and compared the accuracy of the PDD against the gold standard.
Principal Findings
The PDD and gold standard agreed on POA reporting in 74.3 percent of records, with 13.7 percent over-reporting and 11.9 percent under-reporting. For-profit hospitals tended to overcode secondary diagnoses as present on admission (odds ratio [OR] 1.96; 95 percent confidence interval [CI] 1.11, 3.44), whereas teaching hospitals tended to undercode secondary diagnoses as present on admission (OR 2.61; 95 percent CI 1.36, 5.03).
Conclusions
POA reporting of secondary diagnoses is moderately accurate but varies across hospitals. Steps should be taken to improve POA reporting accuracy before using POA in hospital assessments tied to payments.
Keywords: Present-on-admission, hospitals, accuracy, administrative data, quality measurement
There is widespread interest in public reporting of hospital performance and quality-based incentives as a means to improve hospital care. Increasingly, states and other stakeholders use administrative data generated for billing purposes to measure hospital quality, even though the accuracy of such data has been questioned (Iezzoni et al. 1988, 1992; McCarthy et al. 2000; Romano, Schembri, and Rainwater 2002; Romano et al. 2002; Romano 2003; Fry et al. 2006). One large concern is that clinical assessments measured from administrative data inadequately account for patient health status (Iezzoni et al. 1996). Secondary diagnoses in administrative data are used in risk adjustment to estimate differences in patient health status. However, risk-adjustment models often exclude secondary diagnoses that are important risk factors for poor patient outcomes, because the way the information is coded in administrative data makes it difficult to distinguish whether a recorded secondary diagnosis is a comorbidity or a complication of care; it is appropriate to adjust outcomes for comorbidities, but not for complications of care. It is generally safe to assume that chronic conditions are not the result of hospital care and thereby reflect comorbidities. The situation is often less obvious for recorded acute conditions, which in some circumstances may reflect a comorbidity and in others a complication.
Present-on-admission (POA) reporting is an emerging method for distinguishing, in administrative data, complications of care that developed during hospitalization from comorbidities that existed prior to it (Stukenborg et al. 2005, 2007; Bindman and Bennett 2006; Iezzoni 2007; Bahl et al. 2008). In 2008, the Centers for Medicare and Medicaid Services (CMS) implemented the requirement that hospitals report POA for each diagnosis in their administrative data as a means to distinguish hospital-acquired conditions from comorbidities. In this approach, secondary diagnoses that exist prior to admission are considered comorbidities, and CMS instructs hospitals to report these conditions as “yes,” corresponding to their being present at the time of admission. Secondary diagnoses that occur after hospital admission are considered complications, and hospitals are instructed to report these conditions as “no,” corresponding to not present on admission. CMS uses additional categories: “Exempt” for conditions that are exempt from reporting, “Unknown” for conditions where it was unknown whether they were present on admission, and “Clinically Undetermined” where there was insufficient evidence to determine clinically whether the condition was present on admission. California did not use the exempt, unknown, and clinically undetermined codes at the time of this study. While the coding of conditions using the POA flag has face validity, there are questions about how accurately POA codes are self-reported by hospitals (Hughes et al. 2006; Iezzoni 2007). Hospitals concerned about publicly reported quality assessments based on risk-adjusted models from administrative data could “over-report” diagnoses as present on admission to make their patients appear sicker and thereby improve their publicly reported risk-adjusted mortality rates.
Overcoding of diagnoses in for-profit hospitals has previously been recognized in situations where coding practices influence reimbursement (Hsia et al. 1992). Hospitals with fewer administrative resources in their medical record departments due to small size, payer case mix, or lower profit margins may be less able to train coders in POA reporting and therefore may be more likely to misreport POA (either over- or under-report). Teaching hospitals that predominately rely on physician documentation from rotating physicians-in-training may be more likely to misreport POA. Prior research has documented that coding practices for billing purposes vary extensively across hospitals and are influenced by hospital characteristics, physician documentation, and response to payment reform (Goldfarb and Coffey 1992; Hsia et al. 1992; Lorence 2003; Lorence and Ibrahim 2003a,b; Rangachari 2007; Santos et al. 2008; Hennessy et al. 2010). It is unknown whether these influences extend to POA reporting. No recent large studies have assessed the accuracy of the POA indicator variables against an external standard such as chart review.
We conducted an audit of POA reporting in the 2005 California patient discharge data (PDD) by comparing its accuracy against a gold standard created from blinded reabstraction of the corresponding medical records. California is one of two states, the other New York, that has over a decade of experience in requiring hospitals to use POA reporting in routine reporting of hospital discharges. We used our reabstraction-based gold standard to identify patterns of POA under-reporting and over-reporting, including systematic tendencies by hospital characteristics.
The California PDD includes patient demographic, diagnostic, procedure, and disposition codes for approximately 3.7 million hospitalizations per year from all nonfederal, nonchildren's California acute care hospitals (N = 355). Using a research file that included hospital and patient identifiers, we selected a probability sample of records for review. For efficiency, we used a complex sampling design that randomly selected hospitals in proportion to the number of eligible patient records. Other hospital characteristics were not included in the sampling design.
Eligible patient records were those with International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9 CM) codes corresponding to the procedure percutaneous transluminal coronary angioplasty or one of three principal diagnoses: acute myocardial infarction (Romano and Luft 1996), community-acquired pneumonia (Center 2006), or congestive heart failure (AHRQ 2006; see Appendix 2a). We selected these four “umbrella conditions” because they are common and associated with relatively high mortality. The California Office of Statewide Health Planning and Development (2008) currently produces public reports of hospital quality for two of these conditions: acute myocardial infarction (Romano and Luft 1996) and community-acquired pneumonia (Center 2004, 2006).
In addition, to be an eligible record, the patient had to also have one of two prespecified acute secondary diagnoses that, depending on the clinical circumstances, could be regarded as either a comorbidity or hospital complication of the umbrella condition (see Appendix 2b). These diagnoses were selected following a literature review that indicated their importance as predictors of mortality. We prespecified the acute secondary diagnoses for sampling to ensure adequate sample size for individual influential secondary diagnoses, recognizing that POA reporting accuracy could differ by diagnosis. Among cases whose principal “umbrella condition” was acute myocardial infarction, we sampled cases that also had shock or pulmonary edema listed as a secondary diagnosis. Among all cases of acute myocardial infarction, shock occurred in 5.7 percent and pulmonary edema occurred in 8.7 percent of cases statewide. For cases whose umbrella condition was congestive heart failure, we sampled cases with secondary diagnoses of acute myocardial infarction or acute renal failure (Krumholz et al. 1997; Smith et al. 2006). Acute myocardial infarction occurred in 2.19 percent and renal failure in 8.7 percent of cases of congestive heart failure statewide. For community-acquired pneumonia, we sampled cases with secondary diagnoses of septicemia (Iezzoni et al. 1992; Fine et al. 1996) or respiratory failure (Haas et al. 2000). Septicemia occurred in 3.8 percent and respiratory failure in 7.7 percent of community-acquired pneumonia cases. For percutaneous transluminal coronary angioplasty cases, we sampled cases with secondary diagnoses of acute myocardial infarction (Moscucci et al. 2001) or acute renal failure (Best et al. 2002). Acute myocardial infarction occurred in 2.8 percent and acute renal failure in 3.2 percent of percutaneous transluminal coronary angioplasty cases.
To prevent any single hospital from disproportionately influencing our sample, we capped the number of records for each “umbrella condition—secondary diagnosis” combination at 10 cases per hospital. Thus, the maximum number of cases from any hospital was 80, and only one of our 48 sampled hospitals (2.1 percent) reached the 80 record cap.
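The per-hospital cap described above can be expressed as a simple counting filter. The sketch below is illustrative; the record fields (`hospital`, `umbrella`, `secondary`) are hypothetical stand-ins for the PDD variables, not the study's actual field names.

```python
from collections import defaultdict

# Cap from the study design: at most 10 records per
# umbrella-condition/secondary-diagnosis combination per hospital.
CAP_PER_COMBO = 10

def cap_sample(records):
    """Keep at most CAP_PER_COMBO records for each
    (hospital, umbrella condition, secondary diagnosis) combination.
    Each record is a dict with 'hospital', 'umbrella', and 'secondary' keys."""
    counts = defaultdict(int)
    kept = []
    for rec in records:
        key = (rec["hospital"], rec["umbrella"], rec["secondary"])
        if counts[key] < CAP_PER_COMBO:
            counts[key] += 1
            kept.append(rec)
    return kept
```

With 8 umbrella–secondary combinations, this cap yields the study's maximum of 80 records per hospital.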
We used two types of abstractors to review the medical records: health information technicians (HIT) and registered nurses (RN). HITs mirror the type of personnel routinely employed in hospital medical records departments to perform administrative coding, including POA reporting. All HITs employed for this study had Registered Health Information Technician certification and had previously worked as inpatient medical coders for at least 5 years. We separately employed RNs who had at least 5 years of experience in reabstraction of inpatient medical coding to abstract the medical records and gauge whether someone with clinical training would make a different determination about whether a secondary diagnosis was present on admission. All abstractors (five HITs and five RNs) participated in a 40-hour training session led by the study team, which included instruction in the data collection tool, standardized training examples, and feedback on sample medical record abstractions. Abstractors were instructed to apply the directions on POA reporting provided to hospitals by the California Office of Statewide Health Planning and Development. These instructions specify that diagnoses are to be recorded as present on admission when the diagnosis was documented by a physician in the admission note. Chronic diagnoses (e.g., diabetes) identified during the hospitalization were considered present on admission. Conditions suspected at the time of admission (e.g., noted in an emergency room or admitting physician note) were also considered present on admission. Abstractors were to record a condition as not present on admission when there was no physician documentation of the clinical condition in the admission or emergency department note and no signs or symptoms of the condition on admission.
Health information technicians blindly reviewed the medical records following standard medical record coding rules. In contrast, the RNs possessed the unblinded list of diagnostic codes that the hospitals had submitted to the California PDD. The RNs first determined whether the codes listed for the principal and all secondary diagnoses were correct and only then blindly determined whether each diagnosis was present on admission. Of the 1,557 records abstracted by the HITs, 9.4 percent (n = 147) were abstracted twice, and of the 1,688 records reviewed by the RNs, 18.6 percent (n = 307) were abstracted twice for purposes of quality control. Depending on whether a record was reviewed once or twice by an HIT and by an RN, records could be reabstracted two to four times.
Using the multiple abstractions for each record, we created gold standards for POA reporting accuracy for the two specified acute secondary diagnoses associated with each of the four umbrella conditions. For a case to be considered in the gold standard sample, more than one reviewer had to review the case, and the reviewers had to agree (with the PDD) on the accuracy of the sampled umbrella condition and secondary diagnosis. For records with two or three reabstractions, all had to agree on the POA reporting of the specified secondary diagnosis for the case to qualify as part of the gold standard sample. For records with four reabstractions, at least three needed to agree on the POA reporting for the case to qualify (i.e., three or four of the four reabstractions agreed). For the remaining records (with two or more reabstractions) lacking consensus as defined above, physicians blindly adjudicated the POA reporting to make the final gold standard determination.
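The consensus rule above amounts to a small decision procedure. A sketch (with `True` meaning present on admission; the function name and interface are illustrative):

```python
def gold_standard_poa(abstractions):
    """Apply the study's consensus rule to a list of per-abstraction POA
    calls (True = present on admission). Returns the consensus value, or
    None when the record must go to blinded physician adjudication."""
    n = len(abstractions)
    if n < 2:
        raise ValueError("gold standard requires at least two abstractions")
    agree_yes = sum(abstractions)  # number of "present on admission" calls
    if n in (2, 3):
        # with two or three reabstractions, all must agree
        if agree_yes in (0, n):
            return agree_yes == n
    elif n == 4:
        # with four reabstractions, at least three must agree
        if agree_yes >= 3:
            return True
        if agree_yes <= 1:
            return False
    return None  # no consensus: physician adjudication decides
```

For example, two abstractions of `[True, False]` or four of `[True, True, False, False]` lack consensus and go to adjudication, while `[True, True, True, False]` resolves to present on admission.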
POA can be misreported in two ways. Over-reported secondary diagnoses are those in which the PDD recorded the POA reporting as present on admission, but the gold standard assessment was not present on admission. Under-reported secondary diagnoses were documented in the PDD as not present on admission, but the gold standard assessment was present on admission.
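These two error types reduce to a three-way classification comparing the hospital-reported flag with the gold standard. A minimal sketch (function name is illustrative):

```python
def classify_poa(pdd_poa, gold_poa):
    """Compare the hospital-reported POA flag (PDD) with the gold standard.
    Returns 'agree', 'over-reported' (PDD says present on admission but the
    gold standard says not), or 'under-reported' (PDD says not present but
    the gold standard says present)."""
    if pdd_poa == gold_poa:
        return "agree"
    return "over-reported" if pdd_poa else "under-reported"
```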
We linked hospital characteristics, including teaching status, ownership, percent profit margin, number of staffed beds, and percent of discharges reimbursed by Medicaid available in the 2005 California Annual Financial Database, to each sampled case in the California PDD (Iezzoni et al. 1988, 1992; Lorence 2003; Lorence and Ibrahim 2003a,b; Preyra 2004; Goldman et al. 2007; Santos et al. 2008). Teaching hospitals were those participating in the Council of Teaching Hospitals and Health Systems. We categorized hospitals into for-profit and not for-profit hospitals (aggregating government and nonprofit). Percent profit margin, number of staffed beds, and percent of discharges reimbursed by Medicaid were categorized into quartiles based on the distribution of these variables among hospitals in California. Sensitivity analyses using tertiles and quintiles of profit margin, number of staffed beds, and percent of discharges reimbursed by Medicaid yielded results similar to the quartile categorization, so for ease of presentation only the quartile results are shown.
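The quartile categorization described above can be sketched with pandas; the column names and values below are illustrative, not the study's data:

```python
import pandas as pd

# Hypothetical hospital-level data; column names are illustrative.
hospitals = pd.DataFrame({
    "profit_margin": [-5.0, 1.2, 2.0, 3.5, 6.8, 0.4, 4.1, 2.9],
    "staffed_beds": [24, 120, 284, 310, 455, 855, 98, 200],
})

# Cut each continuous hospital characteristic into quartiles based on
# its distribution across hospitals, as the study describes.
for col in ["profit_margin", "staffed_beds"]:
    hospitals[col + "_q"] = pd.qcut(hospitals[col], 4,
                                    labels=["Q1", "Q2", "Q3", "Q4"])
```

Replacing `4` with `3` or `5` gives the tertile and quintile sensitivity analyses the study mentions.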
We examined the accuracy of POA reporting in the California PDD as compared to the gold standard, overall and for each of the eight combinations of umbrella condition and secondary diagnosis. We tested for bias to over-report or under-report POA using McNemar's test. In stratified analyses, we examined whether the accuracy of POA reporting varied by hospital characteristics, including teaching status, ownership, number of staffed beds, percent profit margin, and percent of discharges reimbursed by Medicaid (Iezzoni et al. 1988, 1992; Lorence 2003; Lorence and Ibrahim 2003a,b; Preyra 2004; Goldman et al. 2007; Santos et al. 2008), and by whether the umbrella condition was publicly reported in California (acute myocardial infarction and community-acquired pneumonia). We also tested (using a z-test) the hypothesis of differential POA over-reporting of secondary diagnoses whose umbrella condition was publicly reported. To control for clustering of data within hospitals, we used hierarchical logistic regression (SAS PROC GLIMMIX; SAS Institute, Cary, NC, USA) to test for hospital characteristics predictive of “over-reporting” and “under-reporting,” while controlling for patient characteristics, including age, sex, in-hospital death, and the prespecified acute secondary diagnosis. We only included hospital and patient characteristics in the multivariate analyses that were statistically significant at the level of p < .1 in bivariate analyses.
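McNemar's test for reporting bias depends only on the discordant records (those where the PDD and gold standard disagree). A minimal sketch, using discordant counts back-calculated as an assumption from the reported rates (13.7 percent over-reported and 11.9 percent under-reported of 1,059 records, roughly 145 and 126; these are not the study's actual cell counts):

```python
import math

def mcnemar(over_reported, under_reported):
    """McNemar chi-square test (without continuity correction) for a
    systematic bias toward over- or under-reporting POA. Only the
    discordant counts enter the statistic."""
    b, c = over_reported, under_reported
    stat = (b - c) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > stat) = erfc(sqrt(stat/2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Assumed discordant counts implied by the reported over-/under-reporting rates
stat, p = mcnemar(145, 126)
```

With these counts the statistic is (145 − 126)²/271 ≈ 1.33, giving p ≈ .25, consistent with the non-significant result the study reports.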
Our initial sample consisted of 1,694 records across 48 hospitals; the HITs abstracted 1,557 of these records, and the RNs abstracted 1,688 (Appendix 1). Of the abstracted records, we excluded 525 in which the HIT did not code, and 162 in which the RN did not confirm, either the sampled umbrella condition or the acute secondary diagnosis. A total of 1,059 records met our gold standard criteria, of which 304 (28.7 percent) required physician adjudication.
Our patient sample tended to be sick, with an in-hospital mortality of 26.0 percent. Eighty-one percent of the patients were older than 60, and most had multiple medical diagnoses (85 percent had ≥ 2 comorbidities) (Elixhauser et al. 1998). For the primary diagnosis or procedure (i.e., umbrella condition), 298 patients had an acute myocardial infarction, 205 had community-acquired pneumonia, 288 had congestive heart failure, and 268 had percutaneous transluminal coronary angioplasty (Table 1).
Table 1
Present-on-Admission (POA) Reporting Accuracy by Patient and Hospital Characteristics: Sample Characteristics
The 48 sampled hospitals included 9 (18.8 percent) for-profit and 39 (81.3 percent) not for-profit institutions. Eight of the hospitals (16.7 percent) were teaching facilities and 40 (83.3 percent) were not. The average number of staffed beds was 284, ranging from 24 to 855. On average, 19 percent of hospital admissions were reimbursed by Medicaid (range 0.5–63 percent), and the average profit margin was 2 percent (range −22 to 18 percent).
Overall, we found 74.3 percent agreement in POA reporting of secondary diagnoses between the gold standard and the PDD, without any tendency to over- or under-report POA (McNemar's test, p = .25). Reporting accuracy ranged from only 60.2 percent for septicemia in the setting of community-acquired pneumonia to 81.6 percent for pulmonary edema in the setting of acute myocardial infarction. There were no substantial differences in over- or under-reporting of POA for secondary diagnoses whose umbrella conditions were publicly reported in risk-adjusted mortality reports (78.9 percent agreement for patients with acute myocardial infarction and 70.7 percent in community-acquired pneumonia) compared to those umbrella conditions without public reports (76.4 percent in congestive heart failure and 69.8 percent in percutaneous transluminal coronary angioplasty) (p = .42).
Present-on-admission accuracy was highly variable across hospitals. The percent agreement for the eight secondary conditions ranged from 1 to 100, in part due to the small number of cases at some hospitals. Certain hospital characteristics predicted POA reporting accuracy (Table 2). Adjusted for patient-level characteristics, for-profit hospitals were more likely to over-report secondary diagnoses as being present on admission (odds ratio [OR] 1.96; 95 percent confidence interval [CI] 1.11, 3.44). In contrast, POA was more likely to be under-reported at teaching hospitals (OR 2.61; 95 percent CI 1.36, 5.03). Neither percent profit margin, number of staffed beds, nor percent of discharges reimbursed by Medicaid was independently associated with POA reporting accuracy (p > .05).
Table 2
Present-on-Admission Reporting Accuracy by Hospital Characteristics, Univariate and Multivariate Models
Our study is the largest to date to evaluate the accuracy of POA reporting for acute medical conditions that could be either comorbidities or complications (Pine et al. 2009). Consistent with a smaller study of POA reporting in California among cases of community-acquired pneumonia (Haas et al. 2000) and a three-hospital study in Canada (Quan, Parsons, and Ghali 2004), we found variability in the accuracy of reporting POA across secondary diagnoses. In general we did not find a tendency toward under- or over-reporting of POA. However, for-profit hospitals were more likely to over-report the secondary diagnoses as being present on admission when they were not, whereas teaching hospitals were more likely to under-report secondary diagnoses as being not present on admission when they were.
Our findings are consistent with previous studies that have found a greater number of billed diagnoses in for-profit hospitals (Steinbusch et al. 2007). This may suggest a general tendency of hospital medical record coders at these institutions to code medical records aggressively, to the point that they are overcoded. However, it is less clear why for-profit hospitals would have a motivation to overcode POA reporting than it is for billing codes. In 2005, the year of our sampled data, hospitals were not subject to any direct financial penalties related to POA reporting. At that time, the state publicly reported risk-adjusted outcomes for acute myocardial infarction and community-acquired pneumonia, but POA reporting was part of California's risk-adjustment model only for community-acquired pneumonia (Romano and Luft 1996; Haas et al. 2000). This suggests that over-reporting may reflect a general approach to coding by these hospitals that is not always tied to financial gain. Our finding that teaching hospitals under-report POA is more likely related to differences in physician documentation than to differences in HIT reporting. At teaching hospitals, physician trainees are responsible for the majority of documentation. Commonly, they receive relatively little training in billing and have minimal personal incentives to document optimally from a billing perspective (Mookherjee et al. 2010; Stephens and Williams 2010).
While it was not the case during the time period of our study, POA reporting in administrative data is now being applied in hospital payment decisions. In 2008, the CMS implemented a Medicare policy to not reimburse hospitals for certain hospital-acquired conditions that are identified in part using POA reporting. Several private insurers are adopting the same policy (Becker 2008; Miller 2010), and as a part of the Patient Protection and Affordable Care Act, CMS will also apply the policy to Medicaid hospitalizations (Patient Protection and Affordable Care Act 2009). While the fiscal impact of CMS's policy on Medicare reimbursement is not as dramatic as had been anticipated (McNair, Luft, and Bindman 2009; Meddings, Saint, and McMahon Jr. 2010), the potential for fiscal consequences associated with POA reporting accuracy substantially increases as more payers participate and use performance-based reimbursement. The ability to improve hospital quality and accountability in the era of health care reform hinges on having accurate hospital assessments. Our study suggests that accuracy of POA reporting varies by hospital characteristics, and in the setting of increasing care accountability, teaching hospitals could mistakenly be assessed as having worse performance than they actually have and perhaps suffer negative financial consequences as a result. Similarly, for-profit hospitals would present themselves as having better performance than they actually delivered. The linking of Medicare's payment policy to POA reporting may influence the accuracy of this variable over time.
Our study focused on several common, important clinical conditions used in public reports of hospital assessments and developed a gold standard against which to compare hospitals’ reporting of POA, using multiple abstractors with physician adjudication of disagreements. We purposefully sampled potentially high-impact conditions for which it can be challenging to determine whether they were present on admission. The generalizability of our findings beyond the selected clinical conditions may be limited. Our probability sample was weighted toward larger hospitals, urban institutions, and severely ill patients, and the accuracy of POA reporting may be different among patients who are less sick and cared for in smaller, rural hospitals. Our results focus on POA reporting in the setting of agreement in the reporting of the underlying umbrella and secondary diagnoses. However, POA reporting accuracy is only one aspect of the overall reporting accuracy for diagnoses in administrative data. The overall reporting accuracy for a diagnosis would be even worse if one takes into consideration the inaccuracies of coding the umbrella conditions, acute secondary risk factors, and POA.
Our finding that POA reporting is moderately accurate suggests that it could be improved to become more useful as a tool to discriminate between comorbidities and complications. Since our study was completed, efforts to address POA reporting errors have been introduced that could change POA reporting accuracy and therefore the utility of this data element. The National Center for Health Statistics has published official guidelines for coders on how to report POA status that were not available in 2005. In addition, the response categories for this variable have been changed to give coders more latitude to report POA status. Finally, a variety of edits have been developed to flag suspicious POA data (Hughes et al. 2006; Pine et al. 2009), which may substantially improve the accuracy of POA data that are actually used for risk adjustment. In addition to the changes recommended by the National Center for Health Statistics, some institutions are having HITs confirm POA reporting with the physicians caring for the patients as a strategy to improve the accuracy of this variable (Garrett 2009). How this practice affects POA accuracy has not been studied. Another potentially useful strategy would be to add other data elements to administrative data that may make it easier to determine whether an acute condition was present on admission. In 2011, California will add select laboratory values and vital signs to its administrative data (Bindman and Luft 2006). This expansion may provide an opportunity to assess whether POA reporting accuracy could be improved with confirmatory clinical information and, in combination with these clinical variables, afford a more robust distinction between comorbidities and complications. Ultimately, we need improved methods to make unbiased assessments of patient health status that are not susceptible to gaming. POA reporting is a promising approach, but it needs further refinement if it is to serve as the basis for allocating payments.
Acknowledgments
Joint Acknowledgment/Disclosure Statement: This research was supported by an Agency for Healthcare Research and Quality K08 Mentored Clinical Scientist Development Award (grant 1 K08 HS018090-01) and an NIH/NCRR/OD UCSF-CTSI grant (KL2 RR024130). We would like to acknowledge Dr. Joseph Parker and the Office of Statewide Health Planning and Development for their methodological input and facilitation of data acquisition for this study. We also thank Huong Tran for providing technical and administrative support in preparing the revised manuscript.
Disclosures: None.
Disclaimers: None.
SUPPORTING INFORMATION
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.
Appendix 1: Sampling Schema.
Appendix 2a: Umbrella Condition Included in the Analysis.
Appendix 2b: Selected Risk Factors Included in the Analysis.
Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.
  • Agency for Healthcare Research and Quality. 2006. “Inpatient Quality Indicators Overview. AHRQ Quality Indicators” [accessed on March 4, 2006]. Available at http://www.qualityindicators.ahrq.gov/modules/iqi_overview.aspx.
  • Bahl V, Thompson MA, Kau TY, Hu HM, Campbell DA., Jr “Do the AHRQ Patient Safety Indicators Flag Conditions That Are Present at the Time of Hospital Admission?” Medical Care. 2008;46(5):516–22. [PubMed]
  • Becker C. “WellPoint Joins ‘Never’ Crusade. Nation's Biggest Insurer Won't Pay for Some Preventable Errors Starting This Year.” Modern Healthcare. 2008;38(14):12. [PubMed]
  • Best PJ, Lennon R, Ting HH, Bell MR, Rihal CS, Holmes DR, Berger PB. “The Impact of Renal Insufficiency on Clinical Outcomes in Patients Undergoing Percutaneous Coronary Interventions.” Journal of the American College of Cardiology. 2002;39(7):1113–9. [PubMed]
  • Bindman AB, Bennett A. “Date Stamping: Will It Withstand the Test of Time?” Health Services Research. 2006;41(4 Pt 1):1438–43. [PMC free article] [PubMed]
  • Bindman AB, Luft HS. Expanding Patient-Level Administrative Data for Quality Assessment. Sacramento, CA: California Office of Statewide Health Planning and Development; 2006.
  • Center HO. Report on Hospital Outcomes for Community-Acquired Pneumonia in California, 1999-2001. Sacramento, CA: Healthcare Quality and Analysis Division, California Office of Statewide Health Planning and Development; 2004.
  • Center HO. Community-Acquired Pneumonia: Hospital Outcomes in California, 2002-2004. Sacramento, CA: Healthcare Information Division. California Office of Statewide Health Planning and Development; 2006.
  • Elixhauser A, Steiner C, Harris DR, Coffey RM. “Comorbidity Measures for Use with Administrative Data.” Medical Care. 1998;36(1):8–27.
  • Fine MJ, Smith MA, Carson CA, Mutha SS, Sankey SS, Weissfeld LA, Kapoor WN. “Prognosis and Outcomes of Patients with Community-Acquired Pneumonia. A Meta-Analysis.” Journal of the American Medical Association. 1996;275(2):134–41.
  • Fry DE, Pine MB, Jordan HS, Hoaglin DC, Jones B, Meimban R. “The Hazards of Using Administrative Data to Measure Surgical Quality.” American Surgeon. 2006;72(11):1031–7. discussion 61–9, 133–48.
  • Garrett G. “Present on Admission, Where We Are Now.” Journal of AHIMA. 2009;80(7):22–6.
  • Goldfarb MG, Coffey RM. “Change in the Medicare Case-Mix Index in the 1980s and the Effect of the Prospective Payment System.” Health Services Research. 1992;27(3):385–415.
  • Goldman LE, Henderson S, Dohan DP, Talavera JA, Dudley RA. “Public Reporting and Pay-for-Performance: Safety-Net Hospital Executives’ Concerns and Policy Suggestions.” Inquiry. 2007;44(2):137–45.
  • Haas J, Luft H, Romano P, Dean M, Hung Y, Bacchetti P. Report for the California Hospital Outcomes Project Community-Acquired Pneumonia, 1996; Model Development and Validation. Sacramento, CA: Office of Statewide Health Planning and Development; 2000.
  • Hennessy DA, Quan H, Faris PD, Beck CA. “Do Coder Characteristics Influence Validity of ICD-10 Hospital Discharge Data?” BMC Health Services Research. 2010;10:99.
  • Hsia DC, Ahern CA, Ritchie BP, Moscoe LM, Krushat WM. “Medicare Reimbursement Accuracy under the Prospective Payment System, 1985 to 1988.” Journal of the American Medical Association. 1992;268(7):896–9.
  • Hughes JS, Averill RF, Goldfield NI, Gay JC, Muldoon J, McCullough E, Xiang J. “Identifying Potentially Preventable Complications Using a Present on Admission Indicator.” Health Care Financing Review. 2006;27(3):63–82.
  • Iezzoni LI. “Finally Present on Admission But Needs Attention.” Medical Care. 2007;45(4):280–2.
  • Iezzoni LI, Burnside S, Sickles L, Moskowitz MA, Sawitz E, Levine PA. “Coding of Acute Myocardial Infarction. Clinical and Policy Implications.” Annals of Internal Medicine. 1988;109(9):745–51.
  • Iezzoni LI, Foley SM, Daley J, Hughes J, Fisher ES, Heeren T. “Comorbidities, Complications, and Coding Bias. Does the Number of Diagnosis Codes Matter in Predicting in-Hospital Mortality?” Journal of the American Medical Association. 1992;267(16):2197–203.
  • Iezzoni LI, Ash AS, Shwartz M, Daley J, Hughes JS, Mackiernan YD. “Judging Hospitals by Severity-Adjusted Mortality Rates: The Influence of the Severity-Adjustment Method.” American Journal of Public Health. 1996;86(10):1379–87.
  • Krumholz HM, Wang Y, Parent EM, Mockalis J, Petrillo M, Radford MJ. “Quality of Care for Elderly Patients Hospitalized with Heart Failure.” Archives of Internal Medicine. 1997;157(19):2242–7.
  • Lorence D. “Regional Variation in Medical Classification Agreement: Benchmarking the Coding Gap.” Journal of Medical Systems. 2003;27(5):435–43.
  • Lorence DP, Ibrahim IA. “Disparity in Coding Concordance: Do Physicians and Coders Agree?” Journal of Health Care Finance. 2003a;29(4):43–53.
  • Lorence DP, Ibrahim IA. “Benchmarking Variation in Coding Accuracy across the United States.” Journal of Health Care Finance. 2003b;29(4):29–42.
  • McCarthy EP, Iezzoni LI, Davis RB, Palmer RH, Cahalane M, Hamel MB, Mukamal K, Phillips RS, Davies DT, Jr. “Does Clinical Evidence Support ICD-9-CM Diagnosis Coding of Complications?” Medical Care. 2000;38(8):868–76.
  • McNair PD, Luft HS, Bindman AB. “Medicare's Policy Not to Pay for Treating Hospital-Acquired Conditions: The Impact.” Health Affairs (Millwood). 2009;28(5):1485–93.
  • Meddings J, Saint S, McMahon LF, Jr. “Hospital-Acquired Catheter-Associated Urinary Tract Infection: Documentation and Coding Issues May Reduce Financial Impact of Medicare's New Payment Policy.” Infection Control and Hospital Epidemiology. 2010;31(6):627–33.
  • Miller K. 2010. “Blue Cross and Blue Shield Announces System-Wide Payment Policy for ‘Never Events’” [accessed March 22, 2010]. Available at http://www.bcbs.com/news/bcbsa/bcbs-announces-system-wide-payment-policy-for-never-events.html.
  • Mookherjee S, Vidyarthi AR, Ranji SR, Maselli J, Wachter RM, Baron RB. “Potential Unintended Consequences Due to Medicare's No Pay for Errors Rule? A Randomized Controlled Trial of an Educational Intervention with Internal Medicine Residents.” Journal of General Internal Medicine. 2010;25(10):1097–101.
  • Moscucci M, Kline-Rogers E, Share D, O'Donnell M, Maxwell-Eward A, Meengs WL, Kraft P, DeFranco AC, Chambers JL, Patel K, McGinnity JG, Eagle KA. “Simple Bedside Additive Tool for Prediction of in-Hospital Mortality after Percutaneous Coronary Interventions.” Circulation. 2001;104(3):263–8.
  • Office of Statewide Health Planning and Development. 2008. “Community-Acquired Pneumonia: Hospital Outcomes in California, 2003-2005” [accessed July 18, 2011]. Available at http://oshpd.ca.gov/HID/Products/PatDischargeData/ResearchReports/OutcomeRpts/CAP/
  • Patient Protection and Affordable Care Act. 2009. “H.R. 3590-906” [accessed July 18, 2011]. Available at http://democrats.senate.gov/pdfs/reform/patient-protection-affordable-care-act-as-passed.pdf.
  • Pine M, Fry DE, Jones B, Meimban R. “Screening Algorithms to Assess the Accuracy of Present-on-Admission Coding.” Perspectives in Health Information Management. 2009;6:2.
  • Preyra C. “Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses.” Health Services Research. 2004;39(4 Pt 1):1027–45.
  • Quan H, Parsons GA, Ghali WA. “Assessing Accuracy of Diagnosis-Type Indicators for Flagging Complications in Administrative Data.” Journal of Clinical Epidemiology. 2004;57(4):366–72.
  • Rangachari P. “Coding for Quality Measurement: The Relationship between Hospital Structural Characteristics and Coding Accuracy from the Perspective of Quality Measurement.” Perspectives in Health Information Management. 2007;4:3.
  • Romano PS. “Asking Too Much of Administrative Data?” Journal of the American College of Surgeons. 2003;196(2):337–8. author reply 38–9.
  • Romano PS, Luft H. Report on Heart Attack Outcomes in California 1994-1996, Volume 3: Detailed Statistical Results. Sacramento, CA: California Office of Statewide Health Planning and Development; 1996.
  • Romano PS, Schembri ME, Rainwater JA. “Can Administrative Data Be Used to Ascertain Clinically Significant Postoperative Complications?” American Journal of Medical Quality. 2002;17(4):145–54.
  • Romano PS, Chan BK, Schembri ME, Rainwater JA. “Can Administrative Data Be Used to Compare Postoperative Complication Rates across Hospitals?” Medical Care. 2002;40(10):856–67.
  • Santos S, Murphy G, Baxter K, Robinson KM. “Organisational Factors Affecting the Quality of Hospital Clinical Coding.” HIM Journal. 2008;37(1):25–37.
  • Smith GL, Lichtman JH, Bracken MB, Shlipak MG, Phillips CO, DiCapua P, Krumholz HM. “Renal Impairment and Outcomes in Heart Failure: Systematic Review and Meta-Analysis.” Journal of the American College of Cardiology. 2006;47(10):1987–96.
  • Steinbusch PJ, Oostenbrink JB, Zuurbier JJ, Schaepkens FJ. “The Risk of Upcoding in Casemix Systems: A Comparative Study.” Health Policy. 2007;81(2–3):289–99.
  • Stephens MB, Williams PM. “Teaching Principles of Practice Management and Electronic Medical Record Clinical Documentation to Third-Year Medical Students.” Journal of Medical Practice Management. 2010;25(4):222–5.
  • Stukenborg GJ, Kilbridge KL, Wagner DP, Harrell FE, Jr, Oliver MN, Lyman JA, Einbinder JS, Connors AF, Jr. “Present-at-Admission Diagnoses Improve Mortality Risk Adjustment and Allow More Accurate Assessment of the Relationship between Volume of Lung Cancer Operations and Mortality Risk.” Surgery. 2005;138(3):498–507.
  • Stukenborg GJ, Wagner DP, Harrell FE, Jr, Oliver MN, Heim SW, Price AL, Han CK, Wolf AM, Connors AF, Jr. “Present-at-Admission Diagnoses Improved Mortality Risk Adjustment among Acute Myocardial Infarction Patients.” Journal of Clinical Epidemiology. 2007;60(2):142–54.