J Natl Cancer Inst. Jun 16, 2009; 101(12): 848–859.
Published online Jun 16, 2009. doi: 10.1093/jnci/djp107
PMCID: PMC2697207
Assessment of Pancreatic Cancer Care in the United States Based on Formally Developed Quality Indicators
Karl Y. Bilimoria, David J. Bentrem, Keith D. Lillemoe, Mark S. Talamonti, Clifford Y. Ko, on behalf of the American College of Surgeons' Pancreatic Cancer Quality Indicator Development Expert Panel
Affiliations of authors: Cancer Programs, American College of Surgeons, Chicago, IL (KYB, CYK); Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL (KYB, DJB, MST); Department of Surgery, Indiana University School of Medicine, Indianapolis, IN (KDL); Department of Surgery, NorthShore University HealthSystem, Evanston, IL (MST); Department of Surgery, University of California, Los Angeles, and VA Greater Los Angeles Healthcare System, Los Angeles, CA (CYK)
Correspondence to: Karl Y. Bilimoria, MD, MS, Cancer Programs, American College of Surgeons, 633 N St Clair St, 22nd Floor, Chicago, IL 60611 (e-mail: kbilimoria@facs.org).
Received October 19, 2008; Revised March 8, 2009; Accepted April 3, 2009.
Background
Pancreatic cancer outcomes vary considerably among hospitals. Assessing pancreatic cancer care by using quality indicators could help reduce this variability. However, valid quality indicators are not currently available for pancreatic cancer management, and a composite assessment of the quality of pancreatic cancer care in the United States has not been done.
Methods
Potential quality indicators were identified from the literature, consensus guidelines, and interviews with experts. A panel of 20 pancreatic cancer experts ranked potential quality indicators for validity based on the RAND/UCLA Appropriateness Methodology. The rankings were rated as valid (high or moderate validity) or not valid. Adherence with valid indicators at both the patient and the hospital levels and a composite measure of adherence at the hospital level were assessed using data from the National Cancer Data Base (2004–2005) for 49 065 patients treated at 1134 hospitals. Summary statistics were calculated for each individual candidate quality indicator to assess the median ranking and distribution.
Results
Of the 50 potential quality indicators identified, 43 were rated as valid (29 as high and 14 as moderate validity). Of the 43 valid indicators, 11 (25.6%) assessed structural factors, 19 (44.2%) assessed clinical processes of care, four (9.3%) assessed treatment appropriateness, four (9.3%) assessed efficiency, and five (11.6%) assessed outcomes. Patient-level adherence with individual indicators ranged from 49.6% to 97.2%, whereas hospital-level adherence with individual indicators ranged from 6.8% to 99.9%. Of the 10 component indicators (contributing 1 point each) that were used to develop the composite score, most hospitals were adherent with fewer than half of the indicators (median score = 4; interquartile range = 3–5).
Conclusions
Based on the quality indicators developed in this study, there is considerable variability in the quality of pancreatic cancer care in the United States. Hospitals can use these indicators to evaluate the pancreatic cancer care they provide and to identify potential quality improvement opportunities.
CONTEXT AND CAVEATS
Prior knowledge
Pancreatic cancer outcomes vary considerably among hospitals, but the factors responsible for this variability have been difficult to identify because valid indicators of high-quality care for pancreatic cancer patients are not available.
Study design
A panel of pancreatic cancer experts identified valid quality indicators for pancreatic cancer care, assessed hospital-level compliance with these indicators, and developed a composite measure of adherence at the hospital level using data from the National Cancer Data Base (2004–2005) in the United States.
Contribution
Of 50 potential quality indicators identified, 43 were rated as valid and assessed structural factors, clinical processes of care, treatment appropriateness, efficiency, and outcomes. Most hospitals were adherent with fewer than half of the 10 component indicators that were used to develop the composite measure of adherence.
Implications
These quality indicators can be used by hospitals to monitor, standardize, and improve the care they provide to pancreatic cancer patients.
Limitations
Important indicators may have been missed. Some indicators may have received slightly lower rankings because of how they were worded. The reliability of hospital performance comparisons was limited by the small sample size and an inability to adjust completely for differences in case mix among hospitals. The findings may not be generalizable to all hospitals.
From the Editors
There is considerable variability in outcomes among hospitals in the United States for many procedures and medical conditions, particularly for complex surgeries such as pancreatectomy for malignancy (1,2). Short-term and long-term outcomes of patients at some hospitals are considerably worse than at other hospitals (3–9); however, it has been difficult to identify the factors responsible for this variability (10,11). Hospitals with poor outcomes are left with little guidance on where to focus quality improvement efforts. Thus, efforts have focused on identifying quality indicators or measures that can be used to standardize care and ensure that patients are managed in accordance with established recommendations (7,10).
A number of organizations have developed quality measures for surgical and oncology care, including the Agency for Healthcare Research and Quality (AHRQ), the Centers for Medicare and Medicaid Services (eg, the Surgical Care Improvement Project and the Physician Quality Reporting Initiative) (12,13), the Joint Commission (14), and the American Hospital Association (15). Of the hundreds of measures put forth thus far, to our knowledge, the only ones involving pancreatic cancer examine pancreatectomy case volume and postoperative mortality (16,17). Recently, the American College of Surgeons, the National Comprehensive Cancer Network, and the American Society of Clinical Oncology collaboratively developed five quality measures for cancer care. These measures were subsequently endorsed by the National Quality Forum as part of the Quality of Cancer Care Performance Measures project (18); however, none of these quality indicators specifically addressed pancreatic cancer care.
Individual quality measures assess only a single aspect of care. However, health care is multidimensional and complex, leading the Institute of Medicine to note that composite quality measures consisting of multiple individual component measures can provide a better sense of the reliability of the health-care system (19). Importantly, the National Quality Forum has recently introduced an initiative to establish a framework for composite quality measures to ensure that they are scientifically acceptable (ie, reliable and valid), usable (ie, meaningful and understandable), and feasible (ie, based on data that are readily available and retrievable without undue collection burden) (20).
Thus, there is a need for both individual and composite quality indicators that are developed by using a formal methodology and that encompass the various domains of pancreatic cancer care, including those related to pancreatic surgery, for which outcomes are highly variable and potentially modifiable. Moreover, there is a need for hospitals to assess adherence with individual aspects of care by using specific indicators as well as to examine the overall quality of pancreatic cancer care by using a composite measure to identify potential quality improvement opportunities within their institutions. The objectives of this study were 1) to develop indicators of high-quality care for pancreatic cancer patients; 2) to assess hospital-level compliance with these indicators in the United States; and 3) to develop a composite, evidence-based measure of the quality of hospital-level pancreatic cancer care. The ultimate goal of this study was to identify indicators that hospitals can use to assess their performance and to develop specific initiatives to improve the quality of patient care and outcomes.
Methods
Quality Indicator Development
We used a modification of the RAND/UCLA Appropriateness Methodology to assess the validity of potential quality indicators (21,22). The RAND/UCLA Appropriateness Methodology is an iterative Delphi method that has been used to develop quality-of-care indicators across a broad range of disease processes (21,23–27). This method is particularly useful when high-level evidence is lacking because it incorporates recommendations made by an expert panel that are based on their evaluation of the evidence and their clinical experience. Briefly, in two rounds of rankings, the expert panel members independently rank potential quality indicators for validity. Between the two rounds, there is an expert panel discussion (Figure 1). Indicators are evaluated for appropriateness (based on the median ranking for each) and agreement (based on the distribution of rankings). This process identifies indicators that are ranked as valid by the expert panel and has been shown to provide quality indicators that have face, construct, and predictive validity (28–30). This study was approved by the Northwestern University institutional review board.
Figure 1. Overview of the modified RAND/UCLA Appropriateness Methodology used to develop pancreatic cancer care quality indicators.
Potential quality indicators were identified through extensive systematic literature reviews, assessment of existing guidelines (eg, those of the National Comprehensive Cancer Network) and quality measures (eg, those of the AHRQ) from numerous organizations, and semistructured interviews with pancreatic cancer experts in various subspecialties of medicine. Although high-level evidence (eg, from randomized trials) supporting clinical practice was frequently unavailable, we required that there be some evidence suggesting that the potential indicators would affect outcome (eg, institutional case series). The indicators were categorized into five domains—structure, process, appropriateness, efficiency, and outcomes (7,10,31)—and encompassed the diagnostic, preoperative, intraoperative, postoperative, and follow-up phases of pancreatic cancer care. To evaluate potential quality indicators, we assembled an expert panel of 20 physicians that included clinicians and researchers in the fields of surgery (12 members), medical oncology (three members), radiation oncology (two members), pathology (one member), radiology (one member), and gastroenterology (one member) (see Notes). Most of the panel members were from academic institutions, but some physicians from community hospitals were also included.
In the first round of rankings, panel members were sent via electronic mail a list of potential indicators and detailed instructions regarding the methodology and the process of ranking indicators for validity. The instructions given to panelists regarding the rankings were as follows. First, an indicator should be considered “valid” if adherence with this indicator is critical to provide quality care to patients with pancreatic cancer exclusive of costs or feasibility of implementation. Not providing the level of care addressed in the indicator would be a breach in clinical practice and an indication of unacceptable care. Second, validity rankings should be based on the panelist’s own judgment, not on what they think other experts or the panel believes. Third, the indicators should be considered for an “average” patient who presents to an “average” physician at an “average” hospital. Finally, the indicators need not necessarily apply to any one specific patient, but rather could pertain to the overall care of pancreatic cancer patients (eg, antibiotic discontinuation within 24 hours of surgery).
Each indicator was ranked on a 9-point scale for which 1 = definitely not valid, 5 = uncertain or equivocal validity, and 9 = definitely valid. Panelists were also given the opportunity to suggest wording modifications to improve the clarity or increase the potential validity of the quality indicator. The panel was also allowed to suggest entirely new indicators. Summary statistics were calculated for each individual candidate quality indicator to assess the median and distribution of rankings. For round 1, a potential quality indicator that had four or more rankings in the 1–3 range and four or more rankings in the 7–9 range was considered to have scores that were in disagreement. If all but four rankings were in any single 3-point range (eg, 1–3, 4–6, or 7–9), then the scores for that indicator were said to be in agreement. All other score distributions were deemed indeterminate. The round 1 rankings were used to guide discussion at the expert panel meeting.
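Because these round 1 agreement rules are mechanical, they can be expressed as a short classification routine. The Python sketch below is purely illustrative (the study describes no software for this step); the function name and input format are hypothetical, and the code simply encodes the three rules stated above.

    def classify_round1(rankings):
        """Classify a round 1 ranking distribution as 'disagreement',
        'agreement', or 'indeterminate' per the rules described above."""
        n = len(rankings)  # eg, 20 panelists
        low = sum(1 <= r <= 3 for r in rankings)
        high = sum(7 <= r <= 9 for r in rankings)
        # Disagreement: four or more rankings in 1-3 AND four or more in 7-9.
        if low >= 4 and high >= 4:
            return "disagreement"
        # Agreement: all but (at most) four rankings fall within one
        # 3-point range (1-3, 4-6, or 7-9).
        for lo in (1, 4, 7):
            if sum(lo <= r <= lo + 2 for r in rankings) >= n - 4:
                return "agreement"
        return "indeterminate"

    # Example: 7 rankings in 1-3 and 9 in 7-9 -> "disagreement".
    print(classify_round1([1, 2, 2, 3, 3, 1, 2, 5, 5, 6,
                           4, 7, 8, 8, 9, 9, 9, 7, 8, 9]))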
Before the expert panel meeting, the panelists were provided with the relevant literature regarding indicators for which there was disagreement in the round 1 rankings. The panelists were also given a summary sheet of the round 1 rankings that showed the aggregated summary statistics for each indicator and a copy of their own round 1 rankings. Each potential quality indicator was discussed by the panel to identify opportunities to improve the wording of the indicators or to highlight evidence that may have been missed by the literature review. In addition, indicators could be reworded and new indicators could be proposed during the discussion. It was stressed that there was no need to establish a consensus among the panelists because each member would independently rank the indicators for validity after the panel discussion.
Immediately after the expert panel discussion, the panelists were sent an updated ranking form via electronic mail on which they were asked to re-rank all of the indicators for validity. These round 2 rankings were used for the final assessment of validity. The rankings were compiled, and the median ranking from the expert panel was calculated for each individual indicator. We used definitions from previous quality indicator development studies (2326) to establish two levels of validity that were based on the stringency of the criteria used: relaxed and strict. According to the strict criteria, an indicator was deemed to have high validity if the median score and at least 90% of the individual rankings from the 20 panelists were within the 7–9 range. According to the relaxed criteria, an indicator was deemed to have moderate validity if the median score and at least 95% (all but one) of the individual rankings from the expert panel were within the 4–9 range.
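As with round 1, the final validity classification reduces to two checks on the distribution of the 20 round 2 rankings. The following sketch is illustrative only, with a hypothetical function name; it applies the strict (high validity) and relaxed (moderate validity) criteria exactly as defined above.

    import statistics

    def validity_category(rankings):
        """Apply the strict and relaxed round 2 validity criteria."""
        med = statistics.median(rankings)
        n = len(rankings)
        in_7_9 = sum(7 <= r <= 9 for r in rankings)
        in_4_9 = sum(4 <= r <= 9 for r in rankings)
        # Strict: median and >=90% of rankings within 7-9 -> high validity.
        if 7 <= med <= 9 and in_7_9 / n >= 0.90:
            return "high"
        # Relaxed: median and >=95% (all but one) within 4-9 -> moderate.
        if 4 <= med <= 9 and in_4_9 / n >= 0.95:
            return "moderate"
        return "not valid"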
Assessment of Hospital Performance
The National Cancer Data Base (NCDB) is a national cancer registry supported by the American College of Surgeons, the Commission on Cancer, and the American Cancer Society (21,32,33). All of the approximately 1450 Commission on Cancer–approved hospitals are required to report all of their cancer cases to the NCDB annually. The NCDB and state and national cancer registries share common mechanisms for data coding, collection, and accuracy assessment (21,34). According to incidence estimates from the American Cancer Society, the NCDB captures approximately 75% of newly diagnosed pancreatic cancers in the United States each year (21). The NCDB collects information regarding patient demographics, tumor characteristics and pathology, staging, diagnosis, treatment, and survival (34).
Patients who were diagnosed with pancreatic adenocarcinoma from January 1, 2004, to December 31, 2005, were identified from the NCDB based on International Classification of Diseases for Oncology, third edition, site and histology codes (35). At the time of this study, patients diagnosed through the end of 2005 were the most recent ones available for analysis. Patients who underwent pancreatectomy were identified based on the Commission on Cancer's Facility Oncology Registry Data Standards site-specific procedure coding (34). Patients were staged according to the American Joint Committee on Cancer sixth edition Cancer Staging Manual (36). We assessed adherence with valid quality indicators for which the relevant data are reported to the NCDB. Patients who underwent palliative procedures or exploratory surgery without a cancer-directed resection were not included in the cohort that was categorized as undergoing cancer-directed resection (ie, pancreatectomy).
We first assessed adherence with the individual quality indicators at the patient level to determine the proportion of patients at Commission on Cancer–approved hospitals who received care that was concordant with the quality indicators. We then assessed adherence with the individual indicators at the hospital level; a hospital was defined a priori as adherent with an indicator if at least 90% of its patients received care in compliance with that indicator. A composite measure of hospital pancreatic cancer care was calculated by summing the points for the valid indicators, with adherence with each indicator (≥90% of patients received the recommended care) assigned 1 point. The quality indicators relating to documentation were aggregated into a single indicator, and the maximum composite score was 10 points. Valid indicators examining all domains of care (structure, process, appropriateness, efficiency, and outcome) were included in the composite measure.
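To make the adherence and composite definitions concrete, the sketch below shows how they could be computed from a patient-level table. This is an illustration under assumed inputs, not the study's actual analysis code; the column names (hospital_id, indicator_1 through indicator_10) are hypothetical stand-ins for per-patient flags of indicator-concordant care.

    import pandas as pd

    # Hypothetical layout: one row per patient; 0/1 columns flag
    # whether that patient's care was concordant with each indicator.
    INDICATORS = [f"indicator_{i}" for i in range(1, 11)]

    def composite_scores(patients: pd.DataFrame) -> pd.Series:
        """Return each hospital's composite score (0-10): 1 point per
        indicator with which >=90% of its patients' care was concordant."""
        # Patient-level adherence rate per hospital and indicator.
        rates = patients.groupby("hospital_id")[INDICATORS].mean()
        # A hospital is adherent with an indicator at >=90% concordance;
        # summing the Boolean flags row-wise yields the 0-10 score.
        return (rates >= 0.90).sum(axis=1)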
Statistical Analysis
For the quality indicator addressing a hospital's risk-adjusted mortality rate within 30 days of surgery, a logistic regression model was used to adjust for differences in clinicopathologic characteristics among hospitals. The model included sex, age at diagnosis, race (white, black, Asian, Hispanic, other), stage, type of pancreatectomy (pancreaticoduodenectomy, distal pancreatectomy, total pancreatectomy, other), and Charlson comorbidity score. The NCDB requires reporting of six preexisting comorbidities based on the International Classification of Diseases, ninth edition, classification (34,35). The primary cancer diagnosis and postoperative complications are not included when these six codes are reported. A modified Charlson comorbidity score was calculated to assess the severity of preexisting comorbidities (37–39). Analyses were performed using SPSS, version 15 (SPSS, Inc., Chicago, IL).
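The study's models were fit in SPSS; purely to illustrate the risk-adjustment approach described above, the sketch below fits an analogous patient-level logistic model in Python with statsmodels and summarizes observed-to-expected 30-day deaths per hospital. All column names are hypothetical stand-ins for the covariates listed above.

    import pandas as pd
    import statsmodels.formula.api as smf

    def risk_adjusted_mortality(patients: pd.DataFrame) -> pd.DataFrame:
        """Fit a patient-level 30-day mortality model and summarize
        observed vs expected deaths per hospital (hypothetical columns)."""
        model = smf.logit(
            "death_30d ~ C(sex) + age_at_dx + C(race) + C(stage)"
            " + C(procedure_type) + charlson_score",
            data=patients,
        ).fit(disp=0)
        # Each patient's predicted probability of death is their
        # model-expected contribution to the hospital's death count.
        out = patients.assign(expected=model.predict(patients))
        oe = out.groupby("hospital_id").agg(
            observed=("death_30d", "sum"), expected=("expected", "sum")
        )
        oe["o_to_e"] = oe["observed"] / oe["expected"]  # risk-adjusted ratio
        return oe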
Results
Quality Indicator Development
On the basis of literature reviews, consensus guidelines, and interviews with experts, we identified 50 potential quality indicators for pancreatic cancer care (Table 1). These indicators were categorized into five domains of care: structure (12 indicators), processes (21 indicators), appropriateness (seven indicators), efficiency (five indicators), and outcomes (five indicators). Of the 50 indicators, 20 were hospital-level indicators and 30 were patient-level indicators.
Table 1. Summary of pancreatic cancer quality indicators
Based on the round 2 expert panel rankings of the 50 potential quality indicators, 43 indicators (86%) were rated as valid (29 as having high validity and 14 as having moderate validity) and seven (14%) were rated as not valid (Table 1). Of the 43 valid indicators, 11 (25.6%) assessed structural factors, 19 (44.2%) assessed clinical processes of care, four (9.3%) assessed treatment appropriateness, four (9.3%) assessed efficiency, and five (11.6%) assessed outcomes (Tables 2 and 3). Eighteen of the valid indicators (41.9%) are assessed at the hospital level and 25 (58.1%) at the patient level. Of the 43 indicators rated as valid, 22 are reported to cancer registries or can be derived from data submitted to cancer registries, another 14 are found in widely available multi-institutional administrative datasets, and eight are generally found only in patient charts (Table 1). Seven indicators were ranked as not valid (Table 4).
Table 2. High-validity pancreatic cancer quality indicators*
Table 3. Moderate-validity pancreatic cancer quality indicators*
Table 4. Pancreatic cancer quality indicators that were not valid
The indicators that were rated as having high validity were diverse and included the diagnostic, preoperative, intraoperative, postoperative, and follow-up phases of care (Table 2). Structural indicators included factors that address case volume requirements, surgeon certification, and the availability of consulting physicians and services. Process indicators addressed the preoperative evaluation, assessment of resectability, treatment planning, and operative and pathology report documentation. The appropriateness indicators rated as having high validity focused on the use of surgical and nonsurgical treatment. Efficiency indicators addressed the time from diagnosis to treatment. Finally, outcome indicators that were rated as having high validity included monitoring the margin-negative resection rate and the perioperative mortality rate. The indicators rated as having moderate validity involved clinical trials participation, case volume thresholds, the availability of endoscopic ultrasound and endoscopic retrograde cholangiopancreatography, the availability of adjuvant therapy services, resection margin status, documentation of the assessment of resectability, estimated blood loss, operative time, the adequacy of nodal evaluation, readmission rates, and long-term survival rates (Table 3).
Seven indicators were rated as not valid by the expert panel. These indicators concerned specific case volume thresholds; the use of diagnostic laparoscopy, feeding jejunostomies, and epidural anesthesia; discussion of unresectable disease at a multidisciplinary conference; estimated blood loss thresholds; and the absolute time from diagnosis to treatment (Table 4).
Adherence With Pancreatic Cancer Quality Indicators
Of the 43 indicators rated as valid, 18 could be assessed by using data in the NCDB (Table 5). The indicators related to medical documentation were combined into a single indicator for which a patient was deemed to have had concordant care if all of those indicators were met. This approach resulted in 10 quality indicators for which we assessed adherence (nine individual indicators and the combined medical documentation measure). We first assessed adherence with indicators at the patient level. Adherence with the valid quality indicators of pancreatic cancer care ranged from 49.6% to 97.2% among the 49 065 patients treated at Commission on Cancer–approved hospitals. Next, hospital-level performance for adherence with each of the quality indicators was examined among 1134 Commission on Cancer–approved hospitals (1134 of the 1450 Commission on Cancer–approved hospitals reported a pancreatic cancer operation to the NCDB). A hospital was classified as being adherent with the quality indicator if the care it provided was concordant with the quality indicator in at least 90% of the patients at that hospital. The proportion of adherent hospitals ranged from 6.8% to 99.9%. Two indicators could only be assessed at the hospital level: number of pancreatectomies performed per year (Figure 2, A) and hospital mortality rate (Figure 2, B). Of the 1134 Commission on Cancer–approved hospitals that reported a pancreatic cancer operation to the NCDB, 748 (66.0%) had a perioperative mortality rate less than 5%, and only 77 (6.8%) performed 12 or more pancreatectomies for cancer per year.
Table 5. Assessment of adherence with the pancreatic cancer quality indicators at the patient and hospital levels
Figure 2. Examples of hospital performance with respect to hospital-level quality measures. A) Pancreatectomy volume. B) Perioperative mortality rate. Each circle represents one of the 1134 Commission on Cancer–approved hospitals included in this study.
To establish a composite score for hospital performance on these quality indicators, we assigned each hospital 1 point for each of the 10 quality indicators with which they were adherent (≥90% of patients received the recommended care) and then summed the scores for each hospital. The summed scores ranged from 1 to 9 (median score = 4, interquartile range = 3–5; maximum possible score = 10; Figure 3).
Figure 3. Composite measure of hospital-level performance. The composite score comprises the 10 valid component measures.
Discussion
By using a formal, well-described methodology, an expert panel assessed potential quality indicators and identified 43 valid indicators of quality care for pancreatic cancer management. We then assessed performance on these measures at 1134 hospitals using data from a large national cancer registry and found that most hospitals were adherent with fewer than half of the indicators. The intent was to develop indicators of quality of care that hospitals could use for self-assessment to identify quality initiatives for improving pancreatic cancer care.
The RAND/UCLA Appropriateness Methodology has been used to develop quality indicators for many disease processes (21). In previous studies to develop quality indicators in surgery and oncology, 59%–81% of the potential indicators were ranked as valid (23–26). Each of these studies used only one criterion for the assessment of validity (ie, the number of panelists who ranked an indicator within the 7–9 range); however, the definition of validity differed somewhat among these studies. Therefore, we used two frequently used definitions of validity to establish two potential validity levels based on the relative stringency of the criteria: high validity and moderate validity. We found that 58% of indicators met the strictest validity definition and 86% met the relaxed criteria. We expected that a large proportion of the indicators would be ranked as valid because all were derived from the literature, established guidelines, and interviews with experts in the field.
Previously, the only quality indicators involving the care of patients with pancreatic malignancies were two proposed by the AHRQ (30). These indicators require hospitals to track their pancreatectomy case volume and postoperative mortality rate and are currently under consideration by the National Quality Forum (40). However, neither of these two measures sets an absolute numerical threshold for mortality or case volume. The indicators we used for monitoring surgeon- and hospital-level operative volumes, as well as those for monitoring perioperative mortality, were ranked as having high validity and are similar to the AHRQ pancreas measures. In the preliminary semistructured interviews, all of the experts uniformly suggested that pancreatectomy case volume is a critical component for ensuring quality pancreatic cancer care. However, definitions of “high volume” vary widely in the literature, ranging from two to 200 cases per year (1). The expert panel debated numerous thresholds ranging from six to 24 cases per year and how case volume should be defined (ie, whether it should include benign and/or malignant lesions) and ultimately decided that the valid quality indicators for specific thresholds should be 12 cases per year for hospitals and six cases per year for surgeons. Similarly, the 5% postoperative mortality threshold was discussed and decided by the expert panel.
There is a paucity of high-level evidence (ie, from clinical trials) in pancreas surgery to guide clinical decision making. However, this circumstance is well suited to the application of the RAND Appropriateness Methodology, in which the best available literature is combined with expert opinion. Although the National Comprehensive Cancer Network and other organizations publish detailed recommendations for pancreatic cancer diagnosis, treatment, and follow-up, these guidelines serve a function very different from the intended purpose of quality measures (21,34). Guidelines make recommendations based on the best available evidence and suggest that certain disease management issues be discussed with the patient; quality indicators (or quality measures) are held to a much higher standard in that noncompliance with a quality indicator generally constitutes unacceptable or poor care (21). Moreover, quality indicators must be suitable and practical for potential use if they are to be used to assess hospitals and providers.
Once a set of quality indicators has been developed, the measures can be used by hospitals to assess the quality of care at their institutions. McGlynn et al. (41) developed 429 indicators of quality of care for 30 acute and chronic conditions as well as preventive care and found that recommended care was delivered to only approximately 55% of patients. However, assessing adherence with quality indicators can require individual hospitals to abstract a considerable amount of data from patient charts. Thus, readily available data, such as those collected by cancer registries including the NCDB, are likely to be used to assess hospital performance because no additional data collection would be needed. For this reason, we used cancer registry data to evaluate adherence with the valid pancreatic cancer quality indicators at the patient and hospital levels. Patient-level adherence with individual indicators ranged from 49.6% to 97.2%, and the proportion of adherent hospitals ranged from 6.8% to 99.9%. Of note, only 77 hospitals met the volume threshold established by the panel. Thus, regionalization of surgical care to high-volume centers is likely an impractical policy initiative, and we suggest that these indicators should be used by all hospitals to attempt to raise the level of care provided to pancreatic cancer patients. In addition, we found that most hospitals were adherent with fewer than half of the 10 component indicators that we used to develop the composite score, and no hospital was adherent with all of the indicators. Thus, there is an opportunity for all hospitals to improve.
Hospital adherence with guidelines and consensus recommendations for pancreatic cancer management may vary for a number of reasons. First, the experience and training of the clinical teams are likely to vary. Experienced teams may be more familiar with the literature and guideline recommendations and, thus, may be more likely to follow those recommendations. High-volume hospitals and cancer centers have been shown to provide care concordant with guidelines more frequently than low-volume centers, including the appropriate use of curative resection (42), the completeness of resection (43,44), adequacy of nodal examination (45,46), the use of adjuvant treatments (47), clinical trials participation (48), and aggressiveness of cancer surveillance activities. Second, patient preferences may affect hospital adherence with quality indicators (49). Finally, the dismal prognosis for patients diagnosed with pancreatic cancer may lead to pessimism on the part of physicians and patients, which may result in nonadherence with guidelines (42).
Mechanisms are then needed by which individual hospitals are informed about their adherence rates with quality indicators. Numerous studies have demonstrated the benefits of quality assessment and feedback for a wide range of medical conditions (50–52). For many years, reporting of outcomes has been routine in New York and California for coronary artery bypass graft operations, as well as in the Veterans' Health Administration system for a wide variety of surgeries (53–55). These efforts have been shown to prompt hospitals to initiate specific quality improvement efforts that have produced improvements in outcomes (53–55). However, it is unknown whether adherence with quality indicators will improve outcomes at individual hospitals, and some have suggested that these types of quality measurement and feedback initiatives may be detrimental to patient care and the health-care system (19,20,52,56–58).
For oncological care, a feedback mechanism through the NCDB is currently available for breast and colorectal cancer quality measures (21,59). The NCDB receives data from more than 1450 Commission on Cancer–approved hospitals, and these data can be used to calculate performance rates for individual hospitals for specific quality measures, as demonstrated in this study. The NCDB can confidentially provide individual hospitals with their performance on quality indicators compared with that of all other Commission on Cancer–approved hospitals, as shown in Figure 2. Only the individual hospital can identify its outcomes. However, public reporting initiatives for hospital quality measure compliance and outcomes are becoming a reality in the United States (55). Thus, identification of measures and evaluation of performance by individual hospitals can be good preparation for a future that will likely include a great deal of public reporting of process and outcome measure performance. Importantly, readily available data sources such as cancer registries will likely be used for quality measurement initiatives by government oversight agencies and payers because these existing data sources provide a convenient assessment mechanism for which no additional data need to be collected. Thus, it is important for hospitals to ensure that the data they report to cancer registries are accurate and of high quality.
There are some important caveats regarding the application of the indicators identified in this study. First, 100% compliance is generally not required for all of the quality indicators. No matter how well defined the inclusion and exclusion criteria are for quality indicators, there will be some instances where the indicators are inappropriate (eg, the requirement to assess 12 or more lymph nodes for colon cancer in an intraoperatively unstable patient where resection of more nodes may not be safe). Moreover, patient preferences may also affect quality indicator compliance (eg, patient refusal to undergo chemotherapy for a stage III colon cancer). Second, it is also important to note that the development of quality indicators involves an iterative process. Even measures that are based on high-level evidence will become outdated or may need to be modified over time as the science advances. Measure development will need to be revisited periodically as new evidence accumulates and practice patterns change. When the ultimate goal of complete compliance with a quality measure is achieved, assessment can be discontinued and new measures can be added (40). Prompt feedback regarding quality measure performance could help decrease the time from publication of seminal studies and subsequent guideline development to the incorporation of measures into clinical practice. Finally, quality measures can be applied to different extents. The National Quality Forum has endorsed measures at two levels: accountability and quality improvement. Accountability measures meet the strictest criteria and generally have a clear impact on outcomes; thus, providers may be judged and incur financial consequences depending on their performance on these indicators of care. The criteria for endorsing quality improvement measures are somewhat less rigorous, and these measures are simply intended to provide feedback to hospitals. Although the two levels of validity used in this study do not directly correspond to the National Quality Forum guidelines for accountability and quality improvement, a similar paradigm could be considered to base the “accountability” and “quality improvement” designations on more objective criteria.
This study has some potential limitations. First, although we attempted to include all measures of quality in the indicator development process, it is likely that important indicators were missed. Moreover, another expert panel, or a panel with a different composition of specialties or backgrounds, may have ranked the quality indicators differently or developed a different set of indicators. Second, although the wording of the indicators was discussed at length by the panel, there was not always agreement on the wording, so some indicators may have received slightly lower rankings because of wording disagreements. These differences in wording do not appear to have qualitatively changed the validity category of the indicators. Third, for assessment of hospital performance, small sample size and inadequate risk adjustment (ie, the inability to adjust completely for differences in case mix among hospitals) may decrease the reliability of the comparisons; however, process measure performance is, in principle, insulated from these issues because we assumed that the indicator should be adhered to in nearly all cases. Thus, adherence with the indicator is either met or not met. Furthermore, because there is little evidence regarding a definitive method for threshold selection, we chose, a priori, a 90% threshold for adherence to allow for variability at hospitals while still requiring all hospitals to achieve a high level of adherence. Fourth, the poor quality indicator adherence rates demonstrated in this study may be partly related to poor documentation in the medical chart. For example, adjuvant therapy may be underreported to cancer registries by individual hospitals because it is frequently administered in the outpatient setting, often many weeks after surgery (60); however, it will be the hospital's responsibility to ensure that accurate and complete data regarding all aspects of care are transmitted to cancer registries because these data will be used by federal agencies and providers for quality assessment (18,48). In addition, some indicators examine issues that are difficult to assess accurately, such as margin status and readmissions, because of variability in practice patterns. For example, low margin-positive resection rates may be indicative of less thorough pathological evaluation of the margins: centers that focus on pancreatic cancer and perform detailed margin assessments will identify more margin-positive resections, so their higher margin-positive resection rates are paradoxically related to the quality of care. Finally, our assessment of hospital performance was limited to Commission on Cancer–approved hospitals. Thus, the findings may not be generalizable to all hospitals. However, the NCDB receives data from a large number of hospitals that care for more than three-fourths of all the pancreatic cancer patients in the United States.
In conclusion, we used a standardized methodology to identify indicators of pancreatic cancer care. Noncompliance with these indicators is indicative of poor quality care. Hospitals can assess their performance on these quality indicators and compare it with that of other hospitals, thus identifying potential areas for internal quality improvement initiatives. Because hospitals’ resources for quality improvement efforts are limited, a mechanism to efficiently direct quality initiatives would be beneficial. Because the future of health care will certainly involve more measurement of the quality of care, there is a need for rigorously developed quality indicators put forth by clinicians. Moreover, individual quality measures can be used to develop a data-driven composite measure of hospital pancreatic cancer care that assesses care across multiple domains. These quality indicators offer an opportunity to monitor, standardize, and improve the care of patients with pancreatic cancer.
Funding
American College of Surgeons, Clinical Scholars in Residence program (to K.Y.B.); American Cancer Society Illinois Division (to D.J.B.); National Cancer Institute (NCI-60058-NE to C.Y.K.).
Footnotes
The funding sources had no role in the design of the study; the collection, analysis, and interpretation of the data; the decision to submit the manuscript for publication; and the writing of the manuscript.
The American College of Surgeons' Pancreatic Cancer Quality Indicator Development Expert Panel included surgeons (Peter J. Allen, MD, Memorial Sloan-Kettering Cancer Center; Gerard V. Aranha, MD, Stritch School of Medicine, Loyola University Chicago; David J. Bentrem, MD, Feinberg School of Medicine, Northwestern University; Douglas B. Evans, MD, Medical College of Wisconsin; Keith D. Lillemoe, MD, Indiana University School of Medicine; Peter W. T. Pisters, MD, M.D. Anderson Cancer Center; Richard D. Schulick, MD, Johns Hopkins University School of Medicine; Stephen F. Sener, MD, NorthShore University HealthSystem; Mark S. Talamonti, MD, NorthShore University HealthSystem; Selwyn M. Vickers, MD, University of Minnesota; Andrew L. Warshaw, MD, Massachusetts General Hospital, Harvard Medical School; Charles J. Yeo, MD, Jefferson Medical College, Thomas Jefferson University), medical oncologists (David P. Kelsen, MD, Memorial Sloan-Kettering Cancer Center; Vincent J. Picozzi, MD, Virginia Mason Medical Center; Margaret A. Tempero, MD, University of California at San Francisco Medical Center), radiation oncologists (Ross A. Abrams, MD, Rush University Medical Center; Christopher G. Willett, MD, Duke University School of Medicine), a pathologist (N. Volkan Adsay, MD, Emory University School of Medicine), a radiologist (Alec J. Megibow, MD, MPH, New York University Medical Center), and a gastroenterologist (Stuart Sherman, MD, Indiana University School of Medicine). The National Cancer Data Base is supported by the American College of Surgeons, the Commission on Cancer, and the American Cancer Society.
References
1. Bentrem DJ, Brennan MF. Outcomes in oncologic surgery: does volume make a difference? World J Surg. 2005;29(10):1210–1216. [PubMed]
2. Halm EA, Lee C, Chassin MR. Is volume related to outcome in health care? A systematic review and methodologic critique of the literature. Ann Intern Med. 2002;137(6):511–520. [PubMed]
3. Lieberman MD, Kilburn H, Lindsey M, Brennan MF. Relation of perioperative deaths to hospital volume among patients undergoing pancreatic resection for malignancy. Ann Surg. 1995;222(5):638–645. [PubMed]
4. Begg CB, Cramer LD, Hoskins WJ, Brennan MF. Impact of hospital volume on operative mortality for major cancer surgery. JAMA. 1998;280(20):1747–1751. [PubMed]
5. Gordon TA, Bowman HM, Tielsch JM, Bass EB, Burleyson GP, Cameron JL. Statewide regionalization of pancreaticoduodenectomy and its effect on in-hospital mortality. Ann Surg. 1998;228(1):71–78. [PubMed]
6. Birkmeyer JD, Siewers AE, Finlayson EV, et al. Hospital volume and surgical mortality in the United States. N Engl J Med. 2002;346(15):1128–1137. [PubMed]
7. Birkmeyer JD, Dimick JB, Birkmeyer NJ. Measuring the quality of surgical care: structure, process, or outcomes? J Am Coll Surg. 2004;198(4):626–632. [PubMed]
8. Fong Y, Gonen M, Rubin D, Radzyner M, Brennan MF. Long-term survival is superior after resection for cancer in high-volume centers. Ann Surg. 2005;242(4):540–544. discussion 544–547. [PubMed]
9. Birkmeyer JD, Sun Y, Wong SL, Stukel TA. Hospital volume and late survival after cancer surgery. Ann Surg. 2007;245(5):777–783. [PubMed]
10. Ko CY, Maggard M, Agustin M. Quality in surgery: current issues for the future. World J Surg. 2005;29(10):1204–1209. [PubMed]
11. Birkmeyer JD, Sun Y, Goldfaden A, Birkmeyer NJ, Stukel TA. Volume and process of care in high-risk cancer surgery. Cancer. 2006;106(11):2476–2481. [PubMed]
13. Centers for Medicare & Medicaid Services. Physician Quality Reporting Program. Available at http://www.cms.hhs.gov/pqri/. Accessed March 8, 2008.
14. The Joint Commission. Performance Measures. http://www.jointcommission.org/PerformanceMeasurement/. Accessed March 8, 2008.
15. American Hospital Association. Quality and Patient Safety. Available at http://www.aha.org/aha_app/issues/Quality-and-Patient-Safety/index.jsp/. Accessed March 8, 2008.
16. Agency for Healthcare Research and Quality. Quality Indicators. Available at: http://www.qualityindicators.ahrq.gov/. Accessed March 8, 2008.
17. National Quality Measures Clearinghouse. Available at http://www.qualitymeasures.ahrq.gov/. Accessed March 8, 2008.
18. National Quality Forum Endorses Consensus Standards for Diagnosis and Treatment of Breast & Colorectal Cancer. 2007. http://www.qualityforum.org/pdf/news/prbreast-colon03-12-07.pdf. Accessed December 27, 2007.
19. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694–2702. [PubMed]
20. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293(10):1239–1244. [PubMed]
21. Bilimoria KY, Stewart AK, Winchester DP, Ko CY. The National Cancer Data Base: a powerful initiative to improve cancer care in the United States. Ann Surg Oncol. 2008;15(3):683–690. [PMC free article] [PubMed]
22. Washington DL, Bernstein SJ, Kahan JP, Leape LL, Kamberg CJ, Shekelle PG. Reliability of clinical guideline development using mail-only versus in-person expert panels. Med Care. 2003;41(12):1374–1381. [PubMed]
23. McGory ML, Shekelle PG, Ko CY. Development of quality indicators for patients undergoing colorectal cancer surgery. J Natl Cancer Inst. 2006;98(22):1623–1633. [PubMed]
24. Maggard MA, McGory ML, Shekelle PG, Ko CY. Quality indicators in bariatric surgery: improving quality of care. Surg Obes Relat Dis. 2006;2(4):423–429. discussion 429–430. [PubMed]
25. McGory ML, Shekelle PG, Rubenstein LZ, Fink A, Ko CY. Developing quality indicators for elderly patients undergoing abdominal operations. J Am Coll Surg. 2005;201(6):870–883. [PubMed]
26. Spencer BA, Steinberg M, Malin J, Adams J, Litwin MS. Quality-of-care indicators for early-stage prostate cancer. J Clin Oncol. 2003;21(10):1928–1936. [PubMed]
27. Shekelle PG, Park RE, Kahan JP, Leape LL, Kamberg CJ, Bernstein SJ. Sensitivity and specificity of the RAND/UCLA Appropriateness Method to identify the overuse and underuse of coronary revascularization and hysterectomy. J Clin Epidemiol. 2001;54(10):1004–1010. [PubMed]
28. Brook RH, McGlynn EA, Shekelle PG. Defining and measuring quality of care: a perspective from US researchers. Int J Qual Health Care. 2000;12(4):281–295. [PubMed]
29. Shekelle P. The appropriateness method. Med Decis Making. 2004;24(2):228–231. [PubMed]
30. Shekelle PG. Are appropriateness criteria ready for use in clinical practice? N Engl J Med. 2001;344(9):677–678. [PubMed]
31. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743–1748. [PubMed]
32. National Quality Forum. Composite Measure Evaluation Framework. National Quality Forum. Available at www.qualityforum.org/projects/ongoing/CEF/. Accessed August 23, 2008.
33. Winchester DP, Stewart AK, Bura C, Jones RS. The National Cancer Data Base: a clinical surveillance and quality improvement tool. J Surg Oncol. 2004;85(1):1–3. [PubMed]
34. Facility Oncology Registry Data Standards. Chicago, IL: Commission on Cancer; 2004.
35. International Classification of Disease for Oncology. 3rd ed. Geneva, Switzerland: World Health Organization; 2000.
36. AJCC Cancer Staging Manual. 6th ed. Chicago, IL: Springer; 2002.
37. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–383. [PubMed]
38. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol. 1992;45(6):613–619. [PubMed]
39. Iezzoni L. Risk Adjustment for Measuring Healthcare Outcomes. Chicago, IL: Health Administration Press; 2003.
40. Lee TH. Eulogy for a quality measure. N Engl J Med. 2007;357(12):1175–1177. [PubMed]
41. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635–2645. [PubMed]
42. Bilimoria KY, Bentrem DJ, Ko CY, Stewart AK, Winchester DP, Talamonti MS. National failure to operate on early stage pancreatic cancer. Ann Surg. 2007;246(2):173–180. [PubMed]
43. Birbeck KF, Macklin CP, Tiffin NJ, et al. Rates of circumferential resection margin involvement vary between surgeons and predict outcomes in rectal cancer surgery. Ann Surg. 2002;235(4):449–457. [PubMed]
44. Bilimoria KY, Talamonti MS, Sener SF, et al. Effect of hospital volume on margin status after pancreaticoduodenectomy for cancer. J Am Coll Surg. 2008;207(4):510–519. [PubMed]
45. Miller EA, Woosley J, Martin CF, Sandler RS. Hospital-to-hospital variation in lymph node detection after colorectal resection. Cancer. 2004;101(5):1065–1071. [PubMed]
46. Bilimoria KY, Talamonti MS, Wayne JD, et al. Effect of hospital type and volume on lymph node evaluation for gastric and pancreatic cancer. Arch Surg. 2008;143(7):671–678. discussion 678. [PubMed]
47. Bilimoria KY, Bentrem DJ, Ko CY, et al. Multimodality therapy for pancreatic cancer in the U.S.: utilization, outcomes, and the effect of hospital volume. Cancer. 2007;110(6):1227–1234. [PubMed]
48. Wennberg DE, Lucas FL, Birkmeyer JD, Bredenberg CE, Fisher ES. Variation in carotid endarterectomy mortality in the Medicare population: trial hospitals, volume, and patient characteristics. JAMA. 1998;279(16):1278–1281. [PubMed]
49. Walter LC, Davidowitz NP, Heineken PA, Covinsky KE. Pitfalls of converting practice guidelines into quality measures: lessons learned from a VA performance measure. JAMA. 2004;291(20):2466–2470. [PubMed]
50. Ferrer R, Artigas A, Levy MM, et al. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294–2303. [PubMed]
51. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111–123. [PubMed]
52. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353(3):255–264. [PubMed]
53. Khuri SF, Daley J, Henderson W, et al. The department of veterans affairs’ NSQIP: the first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care. National VA Surgical Quality Improvement Program. Ann Surg. 1998;228(4):491–507. [PubMed]
54. Chassin MR. Achieving and sustaining improved quality: lessons from New York State and cardiac surgery. Health Aff (Millwood) 2002;21(4):40–51. [PubMed]
55. Carey JS, Danielsen B, Junod FL, Rossiter SJ, Stabile BE. The California Cardiac Surgery and Intervention Project: evolution of a public reporting program. Am Surg. 2006;72(10):978–983. [PubMed]
56. Apolito RA, Greenberg MA, Menegus MA, et al. Impact of the New York State Cardiac Surgery and Percutaneous Coronary Intervention Reporting System on the management of patients with acute myocardial infarction complicated by cardiogenic shock. Am Heart J. 2008;155(2):267–273. [PubMed]
57. Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med. 1999;341(15):1147–1150. [PubMed]
58. Burack JH, Impellizzeri P, Homel P, Cunningham JN., Jr Public reporting of surgical mortality: a survey of New York State cardiothoracic surgeons. Ann Thorac Surg. 1999;68(4):1195–1200. discussion 1201–1202. [PubMed]
59. Stewart A, Gay E, Patel-Parekh L, Winchester D, Edge S, Ko C. Provider feedback improves reporting on quality measures: national profile reports for adjuvant chemotherapy for stage III colon cancer [abstract]. J Clin Oncol. 2007;25(18 suppl):6572. Presented at the American Society of Clinical Oncology Annual Meeting, Chicago, IL, 2007.
60. Cress RD, Zaslavsky AM, West DW, Wolf RE, Felter MC, Ayanian JZ. Completeness of information on adjuvant therapies for colorectal cancer in population-based cancer registries. Med Care. 2003;41(9):1006–1012. [PubMed]