By using a formal, well-described methodology, an expert panel assessed potential quality indicators and identified 43 valid indicators of quality care for pancreatic cancer management. We then assessed performance on these measures at 1134 hospitals using data from a large national cancer registry and found that most hospitals were adherent with fewer than half of the indicators. The intent was to develop indicators of quality of care that hospitals could use for self-assessment to identify quality initiatives for improving pancreatic cancer care.
The RAND/UCLA Appropriateness Methodology has been used to develop quality indicators for many disease processes (21). In previous studies that developed quality indicators in surgery and oncology, 59%–81% of the potential indicators were ranked as valid (23). Each of these studies used only one criterion for the assessment of validity (ie, the number of panelists who ranked an indicator within the 7–9 range); however, the definition of validity differed somewhat among them. Therefore, we applied two frequently used definitions of validity to establish two validity levels based on the relative stringency of the criteria: high validity and moderate validity. We found that 58% of the indicators met the stricter definition and 86% met the more lenient definition. We expected that a large proportion of the indicators would be ranked as valid because all were derived from the literature, established guidelines, and interviews with experts in the field.
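To make the two-level scheme concrete, the sketch below classifies a single indicator from a panel's 1–9 ratings. The counting rule and the specific cutoffs are illustrative assumptions for exposition, not the exact published definitions applied in the study:

```python
def classify_validity(ratings, high_frac=0.8, moderate_frac=0.6):
    """Classify one candidate indicator from panelists' 1-9 ratings.

    Hypothetical rule: 'high' validity when at least high_frac of
    panelists rate the indicator 7-9, 'moderate' when at least
    moderate_frac do, and 'not valid' otherwise. The study used two
    published definitions whose precise cutoffs may differ.
    """
    in_range = sum(1 for r in ratings if 7 <= r <= 9)
    frac = in_range / len(ratings)
    if frac >= high_frac:
        return "high"
    if frac >= moderate_frac:
        return "moderate"
    return "not valid"

# Example: a 9-member panel rating one candidate indicator
print(classify_validity([8, 9, 7, 8, 6, 9, 7, 8, 9]))  # -> 'high'
```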
Previously, the only quality indicators involving the care of patients with pancreatic malignancies were two proposed by the AHRQ (30). These indicators require hospitals to track their pancreatectomy case volume and postoperative mortality rate and are currently under consideration by the National Quality Forum (40). However, neither measure sets an absolute numerical threshold for mortality or case volume. The indicators we used for monitoring surgeon- and hospital-level operative volumes, as well as those for monitoring perioperative mortality, were ranked as having high validity and are similar to the AHRQ pancreas measures. In the preliminary semistructured interviews, the experts uniformly suggested that pancreatectomy case volume is a critical component of quality pancreatic cancer care. However, definitions of “high volume” vary widely in the literature, ranging from two to 200 cases per year (1). The expert panel debated thresholds ranging from six to 24 cases per year, as well as how case volume should be defined (ie, whether it should include benign and/or malignant lesions), and ultimately set the thresholds for the valid quality indicators at 12 cases per year for hospitals and six cases per year for surgeons. The 5% postoperative mortality threshold was similarly discussed and decided by the expert panel.
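The panel's final decisions reduce to three numeric checks, sketched below. The cutoffs (12 cases, six cases, and 5%) are taken from the text above; the function and field names are ours, for illustration only:

```python
# Panel-selected thresholds for the volume and mortality indicators.
HOSPITAL_CASES_PER_YEAR = 12
SURGEON_CASES_PER_YEAR = 6
MAX_POSTOP_MORTALITY = 0.05  # 5%

def meets_volume_and_mortality(hospital_cases, surgeon_cases, deaths, resections):
    """Return per-indicator adherence flags (names are illustrative)."""
    return {
        "hospital_volume": hospital_cases >= HOSPITAL_CASES_PER_YEAR,
        "surgeon_volume": surgeon_cases >= SURGEON_CASES_PER_YEAR,
        "postop_mortality": (deaths / resections) <= MAX_POSTOP_MORTALITY,
    }

# Example: 15 resections at the hospital, 7 by its busiest surgeon,
# and 1 postoperative death (~6.7% mortality, above the 5% threshold).
print(meets_volume_and_mortality(15, 7, 1, 15))
# {'hospital_volume': True, 'surgeon_volume': True, 'postop_mortality': False}
```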
There is a paucity of high-level evidence (ie, from clinical trials) in pancreas surgery to guide clinical decision making. This circumstance is, however, well suited to the RAND Appropriateness Methodology, in which the best available literature is combined with expert opinion. Although the National Comprehensive Cancer Network and other organizations publish detailed recommendations for pancreatic cancer diagnosis, treatment, and follow-up, these guidelines serve a very different function from quality measures (21). Guidelines make recommendations based on the best available evidence and suggest that certain disease management issues be discussed with the patient; quality indicators (or quality measures) are held to a much higher standard in that noncompliance with a quality indicator generally constitutes unacceptable or poor care (21). Moreover, quality indicators must be suitable and practical to implement if they are to be used to assess hospitals and providers.
Once a set of quality indicators has been developed, hospitals can use the measures to assess the quality of care at their institutions. McGlynn et al. (41) developed 439 indicators of quality of care for 30 acute and chronic conditions as well as preventive care and found that recommended care was delivered to only approximately 55% of patients. However, for an individual hospital, assessing adherence with quality indicators can require a considerable amount of data abstraction from patients' charts. Readily available data, such as those collected by cancer registries including the NCDB, are therefore likely to be used to assess hospital performance because no additional data collection is needed. For this reason, we used cancer registry data to evaluate adherence with the valid pancreatic cancer quality indicators at the patient and hospital levels. Patient-level adherence with individual indicators ranged from 49.6% to 97.2%, and the proportion of adherent hospitals ranged from 6.8% to 99.9%. Of note, only 77 hospitals met the volume threshold established by the panel; thus, regionalization of surgical care to high-volume centers is likely an impractical policy initiative, and we suggest that all hospitals use these indicators to raise the level of care provided to pancreatic cancer patients. In addition, most hospitals were adherent with fewer than half of the 10 component indicators that we used to develop the composite score, and no hospital was adherent with all of them. Thus, there is an opportunity for all hospitals to improve.
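Operationally, this hospital-level assessment is two aggregation steps: compute each indicator's patient-level adherence among eligible patients, then count the component indicators on which the hospital clears a preset threshold (we used 90%, chosen a priori, as discussed under limitations). A minimal sketch, with illustrative indicator names and rates:

```python
from typing import Dict, List, Tuple

def patient_level_adherence(records: List[Tuple[bool, bool]]) -> float:
    """Fraction of eligible patients whose care met the indicator.

    Each record is (eligible_for_indicator, indicator_met) for one patient.
    """
    eligible = [met for elig, met in records if elig]
    return sum(eligible) / len(eligible)

def hospital_composite(adherence: Dict[str, float], threshold: float = 0.90) -> int:
    """Composite score: the number of component indicators on which the
    hospital's patient-level adherence meets the threshold."""
    return sum(1 for rate in adherence.values() if rate >= threshold)

# Example with 3 of a hospital's component indicators (illustrative rates):
rates = {"node_exam": 0.93, "adjuvant_referral": 0.71, "margin_reported": 0.95}
print(hospital_composite(rates))  # -> 2
```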
Hospital adherence with guidelines and consensus recommendations for pancreatic cancer management may vary for a number of reasons. First, the experience and training of clinical teams are likely to vary. Experienced teams may be more familiar with the literature and guideline recommendations and, thus, may be more likely to follow them. High-volume hospitals and cancer centers have been shown to provide guideline-concordant care more frequently than low-volume centers, including the appropriate use of curative resection (42), the completeness of resection (43), the adequacy of nodal examination (45), the use of adjuvant treatments (47), participation in clinical trials (48), and the aggressiveness of cancer surveillance activities. Second, patient preferences may affect hospital adherence with quality indicators (49). Finally, the dismal prognosis for patients diagnosed with pancreatic cancer may lead to pessimism on the part of physicians and patients, which may result in nonadherence with guidelines (42).
Mechanisms are then needed by which individual hospitals are informed of their adherence rates with quality indicators. Numerous studies have demonstrated the benefits of quality assessment and feedback for a wide range of medical conditions (50). For many years, reporting of outcomes has been routine in New York and California for coronary artery bypass graft operations, as well as in the Veterans Health Administration system for a wide variety of operations (53). These efforts have been shown to prompt hospitals to initiate specific quality improvement efforts that have produced improvements in outcomes (53). However, it is unknown whether adherence with quality indicators will improve outcomes at individual hospitals, and some have suggested that this type of quality measurement and feedback initiative may be detrimental to patient care and the health-care system (19).
For oncological care, a feedback mechanism through the NCDB is currently available for breast and colorectal cancer quality measures (21). The NCDB receives data from more than 1450 Commission on Cancer–approved hospitals, and these data can be used to calculate performance rates for individual hospitals on specific quality measures, as demonstrated in this study. The NCDB can confidentially provide an individual hospital with its performance on quality indicators compared with that of all other Commission on Cancer–approved hospitals; only the individual hospital can identify its own results. However, public reporting initiatives for hospital quality measure compliance and outcomes are becoming a reality in the United States (55). Thus, identification of measures and evaluation of performance by individual hospitals can be good preparation for a future that will likely include a great deal of public reporting of process and outcome measure performance. Importantly, readily available data sources such as cancer registries will likely be used for quality measurement initiatives by government oversight agencies and payers because these existing sources provide a convenient assessment mechanism for which no additional data need be collected. Thus, it is important for hospitals to ensure that the data they report to cancer registries are accurate and of high quality.
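Conceptually, this confidential feedback is a percentile comparison: a hospital sees where its own rate falls within the anonymized distribution of peer rates, without learning which hospital is which. A minimal sketch of that comparison (the function and the rates are hypothetical, not the NCDB's actual reporting format):

```python
def percentile_rank(own_rate, all_rates):
    """Percentage of peer hospitals whose adherence rate is below own_rate.

    Peer rates can be distributed anonymously, so only the querying
    hospital can connect the percentile to its own identity.
    """
    below = sum(1 for r in all_rates if r < own_rate)
    return 100.0 * below / len(all_rates)

# Example: a hospital's 88% adherence among five anonymized peer rates
print(percentile_rank(0.88, [0.62, 0.75, 0.88, 0.91, 0.97]))  # -> 40.0
```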
There are some important caveats regarding the application of the indicators identified in this study. First, 100% compliance is generally not required for all of the quality indicators. No matter how well defined the inclusion and exclusion criteria for a quality indicator are, there will be some instances in which the indicator is inappropriate (eg, the requirement to assess 12 or more lymph nodes for colon cancer in an intraoperatively unstable patient, for whom resection of additional nodes may not be safe). Moreover, patient preferences may also affect quality indicator compliance (eg, patient refusal to undergo chemotherapy for a stage III colon cancer). Second, the development of quality indicators is an iterative process. Even measures based on high-level evidence will become outdated or may need to be modified as the science advances, so measure development will need to be revisited periodically as new evidence accumulates and practice patterns change. When the ultimate goal of complete compliance with a quality measure is achieved, its assessment can be discontinued and new measures can be added (40). Prompt feedback regarding quality measure performance could help decrease the time from publication of seminal studies and subsequent guideline development to the incorporation of measures into clinical practice. Finally, quality measures can be applied to different extents. The National Quality Forum has endorsed measures at two levels: accountability and quality improvement. Accountability measures meet the strictest criteria and generally have a clear impact on outcomes; thus, providers may be judged, and may incur financial consequences, depending on their performance on these indicators of care. The criteria for endorsing quality improvement measures are somewhat less rigorous, and these measures are intended simply to provide feedback to hospitals. Although the two levels of validity used in this study do not directly correspond to the National Quality Forum's accountability and quality improvement levels, a similar paradigm could be considered to base the “accountability” and “quality improvement” designations on more objective criteria.
This study has some potential limitations. First, although we attempted to include all measures of quality in the indicator development process, it is possible that important indicators were missed. Moreover, another expert panel, or a panel with a different composition of specialties or backgrounds, may have ranked the quality indicators differently or developed a different set of indicators. Second, although the wording of the indicators was discussed at length by the panel, there was not always agreement, so some indicators may have received slightly lower rankings because of wording disagreements; however, these differences do not appear to have qualitatively changed the validity category of the indicators. Third, for the assessment of hospital performance, small sample sizes and inadequate risk adjustment (ie, the inability to adjust completely for differences in case mix among hospitals) may decrease the reliability of comparisons; however, process measure performance is, in principle, insulated from these issues because we assumed that each indicator should be adhered to in nearly all cases, so adherence with the indicator is simply either met or not met. Furthermore, because there is little evidence regarding a definitive method for threshold selection, we chose, a priori, a 90% threshold for adherence to allow for some variability at hospitals while still requiring all hospitals to achieve a high level of adherence. Fourth, the poor quality indicator adherence rates demonstrated in this study may be partly related to poor documentation in the medical chart. For example, adjuvant therapy may be underreported to cancer registries by individual hospitals because it is frequently administered in the outpatient setting, often many weeks after surgery (60); however, it is the hospital's responsibility to ensure that accurate and complete data regarding all aspects of care are transmitted to cancer registries because these data will be used by federal agencies and providers for quality assessment (18). In addition, some indicators examine issues that are difficult to assess accurately, such as margin status and readmissions, because of variability in practice patterns. For example, a low margin-positive resection rate may reflect a less thorough pathological evaluation of the margins: centers that focus on pancreatic cancer and perform detailed margin assessments will identify more margin-positive resections, so their higher rates may be paradoxically related to higher quality of care. Finally, our assessment of hospital performance was limited to Commission on Cancer–approved hospitals, so the findings may not be generalizable to all hospitals. However, the NCDB receives data from a large number of hospitals that together care for more than three-fourths of all pancreatic cancer patients in the United States.
In conclusion, we used a standardized methodology to identify indicators of pancreatic cancer care. Noncompliance with these indicators is indicative of poor quality care. Hospitals can assess their performance on these quality indicators and compare it with that of other hospitals, thereby identifying potential areas for internal quality improvement initiatives; because hospitals' resources for quality improvement efforts are limited, a mechanism to direct those initiatives efficiently would be beneficial. The future of health care will certainly involve more measurement of the quality of care, so there is a need for rigorously developed quality indicators put forth by clinicians. Moreover, individual quality measures can be used to develop a data-driven composite measure of hospital pancreatic cancer care that assesses care across multiple domains. These quality indicators offer an opportunity to monitor, standardize, and improve the care of patients with pancreatic cancer.