J R Soc Med. 2007 March; 100(3): 122–124.
PMCID: PMC1809156

Redefining quality of care

There is increasing interest in evaluating the quality of care delivered by health care providers and its impact on the overall satisfaction of the end-user, namely the patient. Beyond the political incentives that such research evokes, important questions surrounding this topic must be answered if the way in which care is delivered is to improve. This signals important changes in the way that patients, clinicians, scientists and administrators evaluate outcomes of treatment.

One such example is a recent study by Malin et al.,1 which analyses adherence to quality measures for cancer care, taking as index cases patients with a new diagnosis of either stage I to III breast cancer or stage II or III colorectal cancer. The incorporation of clinical domains representative of the entire patient episode is unique: diagnostic evaluation, surgery, adjuvant therapy, management of treatment toxicity and post-treatment surveillance. Eight components of care integral to these clinical domains were examined further: testing, pathology, documentation, referral, timing, receipt of treatment, technical quality and respect for patient preferences. In all, adherence was assessed against 36 explicit quality measures for breast cancer and 25 for colorectal cancer, each with clinically detailed eligibility criteria specific to the process of cancer care. Overall adherence to these quality measures was 86% (95% CI 86-87%) for breast cancer patients and 78% (95% CI 77-79%) for colorectal cancer patients. Subgroup analysis across the clinical domains and components of care did, however, identify significant variability in adherence: 13-97% for breast cancer and 50-93% for colorectal cancer. This is the first study to evaluate adherence to identifiable and reproducible indicators of quality in a specific area of health care. So novel is this piece of work that there is at present no objective, validated tool for scoring studies of quality of care or quality of life outcomes. We are therefore unable to measure the ‘quality’ of this article as we can for randomized controlled trials (Jadad score),2 nonrandomized studies in surgery (MINORS)3 and diagnostic accuracy studies (QUADAS and STARD).4

The term ‘quality’ encompasses much more than merely the speed and cost at which a patient episode can be completed. It is important to evaluate measures beyond traditional surrogate endpoints, such as length of stay, morbidity and mortality, and to include factors such as referral networks, diagnostics, perioperative and operative factors, adjuvant therapies, follow-up including surveillance, and patient values captured with qualitative indicators such as the EQ-5D, SF 36, or the newer SF 12 and SF 8 health surveys.5,6 Consideration will also need to be given to cost-utility ratios, expressed as cost per quality adjusted life year or as incremental cost-effectiveness ratios. Indeed, in the operational system that is the NHS, the current key performance indicators for trusts, treatment centres and independent sector treatment centres are mainly reflective of management outputs, with only a few clinical key performance indicators.7 If units and providers are to be compared accurately, then the key performance indicators, or any other endpoints captured to assess ‘quality’, must be matched. There exists a complex relationship between processes of care, productivity and health-related outcomes. It is further investigation into, and understanding of, this interaction that will facilitate clearer definitions of the quality of care that a patient receives.
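
As a stylized illustration of these measures (the notation and figures here are illustrative assumptions, not drawn from the studies cited), the quality adjusted life year weights survival by a utility score between 0 (death) and 1 (full health), typically elicited with an instrument such as the EQ-5D, while the incremental cost-effectiveness ratio compares a new intervention with current practice:

\[ \mathrm{QALY} = \sum_{t} u_t \,\Delta t, \qquad \mathrm{ICER} = \frac{C_{\mathrm{new}} - C_{\mathrm{current}}}{E_{\mathrm{new}} - E_{\mathrm{current}}} \]

On these definitions, a hypothetical intervention costing £24 000 more than current care and yielding an additional 1.5 QALYs would have an ICER of £16 000 per QALY gained; whether that represents acceptable value is then a separate judgement for commissioners and appraisal bodies.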

There has been much interest over the last 20 years in the potential correlation between patient outcomes and the volume of health care service provision. This research aims to advance our understanding of, and thereby embrace, the systems that improve quality of care. Studies have demonstrated improved disease-specific outcomes for patients who undergo operations at hospitals with large case volumes.8,9 On the basis of these studies substantiating a positive correlation between volume and outcome, there have been proponents of selective referral to high-volume institutions (e.g. the Leapfrog group10). Heterogeneity of the studies, however, makes comparison difficult, and this has led to a weakening of the previously substantiated positive correlation between volume and outcome. The result is a proposal that the improved outcomes for patients operated on by surgeons who perform greater numbers of specific operations may exist only for high-risk, complex surgery such as oesophagectomy, pancreatectomy and gastrectomy.11 More recently, additional methodological limitations of existing measures of the volume-outcome relationship have been highlighted by some investigators.12-14 Although a positive correlation between volume and outcome remains, an improved understanding of methods of data adjustment and statistical analysis suggests that the statistical significance of this association is attenuated.14

Historically, outcome differences observed between institutions, and indeed surgeons, were assumed to be secondary to factors deemed immeasurable (e.g. the skill of the surgeon). For this assumption to hold, we must be sure that all ‘measurable’ factors have been identified and considered. Their identification will further explain the heterogeneous results of studies published thus far examining the relationship between surgeon and institutional volumes and health outcomes. A recent systematic review by Killeen et al.11 applied a previously validated ‘quality scoring measure’ to papers reporting on volume-outcome relationships in a number of cancer operations, including pancreatectomy, gastrectomy, oesophagectomy and colectomy.11,15 The quality scoring measure incorporates criteria such as clinical processes of care. This review demonstrated that currently available papers exploring the volume-outcome relationship were not of high quality: no paper achieved a quality score above 11 (out of a maximum of 18), and median scores for each of the cancer types were seven to nine. This suggests that existing studies have not adjusted for a number of ‘measurable’ factors. It would be of interest to analyse differences in adherence to quality measures between the high and low volume institutions recruited in the Malin et al. study.1

In the UK, current changes in the structuring of the health care system are favouring centralization of particular services, such as surgical oncology, to a small number of high-volume specialist centres. It appears that the evidence upon which these changes are being made may not be as robust as it should be.

The traditional measurement of surrogate endpoints, such as length of stay, morbidity and mortality, has arisen partly because of the ease with which they can be measured, and they are recorded regularly in most centres. They are often overly weighted towards the performance of the ‘surgeon’ and ignore the importance of the multidisciplinary institution, with a resulting underestimation of the multifactorial intricacies within health care pathways that determine the actual ‘quality’ of care and subsequent health outcomes. We now better understand the influence of case mix and the determination of risk adjusted outcomes; this has had particular importance in the recent publication of cardiac surgical outcomes across the UK.16 Traditional measures, however, fail to appreciate patient-specific measures of quality of life, the importance of which is increasingly recognized but continues to be overlooked by ‘pure’ clinical outcome measures. For instance, Al-Ruzzeh et al. assessed the determinants of poor mid-term health-related quality of life one year after primary isolated coronary artery bypass grafting.17 Personality type was one of the main factors influencing outcome in cardiac surgery: patients with a type D personality were more than twice as likely to have poor physical health-related quality of life and more than five times as likely to have poor mental health-related quality of life.

As this comprehension of the limitations of existing studies evolves, we acknowledge that future research efforts should be directed towards identifying the processes that influence the quality of care from both a patient and a clinician perspective. Although their identification and measurement may be more challenging, they will provide us with greater insight into the mechanisms underlying differences between observed outcomes for high and low volume institutions. Such differences have been identified for outcomes following acute myocardial infarction.18 This retrospective cohort study, which analysed 95 185 patients across all acute care hospitals in the USA, demonstrated a continuous inverse dose-response relationship between hospital volume and the risk of death following myocardial infarction. This relationship persisted despite multivariate risk adjustment for variables including clinical factors, availability of invasive procedures and physician speciality. When separate variables for each patient's receipt of aspirin and thrombolytic medications on admission, and of beta blockers and angiotensin converting enzyme inhibitors at discharge, were added to the risk adjustment, a statistically significant decrease in the hazard ratio for death at 30 days was observed in lower volume institutions that did not offer on-site angiography (from 1.38 to 1.23; 95% CI 1.04-1.46; P=0.02).
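
For readers unfamiliar with the methodology, risk adjustment of this kind is commonly carried out with a proportional hazards model; the form below is a generic sketch rather than the exact specification of the study, which should be taken from reference 18:

\[ h(t \mid x) = h_0(t)\,\exp(\beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p) \]

where \(h_0(t)\) is the baseline hazard and the covariates \(x_1, \ldots, x_p\) capture factors such as hospital volume, clinical characteristics and receipt of the medications listed above. The hazard ratio associated with any covariate is \(\exp(\beta)\), so a value of 1.23 corresponds to a roughly 23% higher instantaneous risk of death relative to the reference group, all else being equal.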

Productivity is based on the relationship between inputs and outputs, activities and outcomes. Until recently, NHS productivity was estimated using volume measures of inputs and outputs derived from the National Accounts.19 Like the pursuit of targets for improving processes of care, this oversimplified approach neglects important determinants of an improved health system, namely quality of care and health outcomes. In the face of increasing demand and advances in health care, with high costs per quality adjusted life year, it is not surprising that NHS productivity measured in this traditional way declined steadily, by an average of between 0.6 and 1.3% per year, during the period 1995-2004.19 When NHS output is adjusted to account for indicators of quality change and the increasing value of health, productivity is seen to increase with time; estimates demonstrate a rise of between 0.9 and 1.6% per year during the period 1999-2004.19,20 Understanding ‘quality’ will therefore bring economic gain in addition to improving estimated output, health outcomes and public satisfaction. Proposed measures of quality will include the incorporation of patient values using qualitative indicators such as validated health surveys (EQ-5D, SF 36, SF 12 and SF 8). The estimation of quality of life is not new and is well established as an inclusion criterion in the evaluation of the perceived benefits of a new treatment. Indeed, the National Institute for Health and Clinical Excellence weighted the quality adjusted life year heavily against an unfavourable financial outlook in its decision to enforce the availability of novel oncological drugs for cancer patients, such as Herceptin.21
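
In stylized terms (the formulation and figures below are illustrative assumptions rather than the ONS methodology), productivity growth can be thought of as the growth of quality adjusted output minus the growth of inputs:

\[ \Delta \ln(\text{productivity}) \approx \Delta \ln(\text{output}) + \Delta \ln(\text{quality}) - \Delta \ln(\text{inputs}) \]

So if, for example, measured output volume rose by 4% in a year while inputs rose by 5%, unadjusted productivity would appear to fall by roughly 1%; if the quality of that output also improved by 2%, adjusted productivity would instead be seen to rise by about 1%. As the ONS figures above illustrate, the sign of the estimate can hinge entirely on whether quality is counted.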

It is not inconceivable that a future NHS may implement an incentive-based system that financially reimburses organizations according to measures of quality of outcome, rewarding higher levels of performance and patient satisfaction.

Quality of care is a complex interaction between processes of care, clinical and patient-oriented health outcomes, and the productivity of a health care system. The recent study by Malin et al.,1 following on from the work of Halm et al.,15 is helping us to understand the mechanisms, which will no doubt be specific to discernible disease groups, that will allow future improvement in the overall quality of care that patients experience. It is further research within these areas that will help to shape the future funding and restructuring of our health care systems.

Notes

Competing interests None declared.

Funding None.

Guarantor Professor Sir Ara Darzi.

References

1. Malin JL, Schneider EC, Epstein AM, et al. Results of the National Initiative for Cancer Care Quality: how can we improve the quality of cancer care in the United States? J Clin Oncol 2006;24: 626-34
2. Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials 1996;17: 1-12
3. Slim K, Nini E, Forestier D, et al. Methodological index for nonrandomized studies (minors): development and validation of a new instrument. ANZ J Surg 2003;73: 712-6
4. Whiting P, Rutjes AW, Reitsma JB, et al. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol 2003;3: 25
5. Jenkinson C, Coulter A, Wright L. Short form 36 (SF36) health survey questionnaire: normative data for adults of working age. BMJ 1993;306: 1437-40
6. Park SM, Park MH, Won JH, et al. EuroQol and survival prediction in terminal cancer patients: a multicenter prospective study in hospice-palliative care units. Support Care Cancer 2006;14: 329-33
7. House of Commons Health Committee. Independent Sector Treatment Centres. Fourth Report of Session 2005/2006. London: HMSO, 2006
8. Dudley RA, Johansen KL, Brand R, et al. Selective referral to high-volume hospitals: estimating potentially avoidable deaths. JAMA 2000;283: 1159-66
9. Birkmeyer JD, Siewers AE, Finlayson EV, et al. Hospital volume and surgical mortality in the United States. N Engl J Med 2002;346: 1128-37
10. The Leapfrog Group. Evidence-Based Hospital Referral. 2004. Available at http://www.leapfroggroup.org/
11. Killeen SD, O'Sullivan MJ, Coffey JC, et al. Provider volume and outcomes for oncological procedures. Br J Surg 2005;92: 389-402
12. Christian CK, Gustafson ML, Betensky RA, et al. The volume-outcome relationship: don't believe everything you see. World J Surg 2005;29: 1241-4
13. Halm EA, Lee C, Chassin MR. Is volume related to outcome in health care? A systematic review and methodologic critique of the literature. Ann Intern Med 2002;137: 511-20
14. Panageas KS, Schrag D, Riedel E, et al. The effect of clustering of outcomes on the association of procedure volume and surgical outcomes. Ann Intern Med 2003;139: 658-65
15. Halm EA, Lee C, Chassin MR. How is volume related to quality in health care? A systematic review of the research literature. In: Hewitt M (ed). Interpreting the Volume-Outcome Relationship in the Context of Health Care Quality: Workshop Summary. Washington, DC: National Academy Press, 2000: Appendix C, 27-102
16. Healthcare Commission. Heart Surgery in Great Britain. London: Healthcare Commission, 2006
17. Al-Ruzzeh S, Athanasiou T, Mangoush O, et al. Predictors of poor mid-term health related quality of life after primary isolated coronary artery bypass grafting surgery. Heart 2005;91: 1557-62
18. Thiemann DR, Coresh J, Oetgen WJ, et al. The association between hospital volume and survival after acute myocardial infarction in elderly patients. N Engl J Med 1999;340: 1640-8
19. Office for National Statistics. Public service productivity: health. Economic Trends 2006;628: 26-57
20. Atkinson T. Atkinson Review of Government Output and Productivity for the National Accounts: Final Report. London: HMSO, 2005
21. NICE. Update on Herceptin Appraisal. London: National Institute for Health and Clinical Excellence, 2006
