Qual Saf Health Care. 2007 December; 16(6): 456–461.
PMCID: PMC2653184

Impact of short evidence summaries in discharge letters on adherence of practitioners to discharge medication. A cluster‐randomised controlled trial

Abstract

Background

International concern about quality of medical care has led to intensive study of interventions to ensure care is consistent with best evidence. Simple, inexpensive, feasible and effective interventions remain limited.

Objective

We examined the impact of one‐sentence evidence summaries appended to consultants' letters to primary care practitioners on adherence of the practitioners to recommendations made by the consultants regarding medication for patients with chronic medical problems.

Design

Cluster‐randomised trial.

Setting

Secondary/primary care interface (urban district hospital/referral practices).

Participants

178 practices received one or more discharge letters. The 66 practices in the intervention group provided feedback on 172 letters, and the 56 practices in the control group provided feedback on 96 letters.

Results

Appending an evidence summary to discharge letters resulted in a decrease in non‐adherence to discharge medication from 29.6% to 18.5% (difference adjusted for underlying medical condition 12.5%; p = 0.039). Among the five possible reasons for discontinuing discharge medication, the evidence summaries seemed to have the largest impact on budget‐related discontinuation (2.6% in the intervention group vs 10.7% in the control group; p = 0.052). Most clinicians (72%) were enthusiastic about continuing to receive evidence summaries with discharge letters in routine care.

Conclusions

The one‐sentence evidence summary is a simple, inexpensive, well‐accepted intervention that may improve primary care practitioners' adherence to evidence‐based consultant recommendations.

Ensuring high‐quality medical care has become a major goal for healthcare systems worldwide. However, achieving clinical practice that is consistent with the best evidence has proved challenging. A vast number of randomised trials have tested a wide variety of behaviour‐change strategies designed to increase practitioners' adherence to evidence‐based guidelines,1,2 but systematic reviews have concluded that the effects are generally small and inconsistent.

Even the most highly touted interventions have proved disappointing. For example, a recent systematic review of randomised controlled trials testing computerised reminders in outpatient settings found a reduction of non‐adherence to suggested drug changes in only six of 12 comparisons.3 Reasons for making poor use of the decision support systems included limitations in the user interface and requirements for healthcare providers to enter redundant data. Thus, the need remains for inexpensive, well‐accepted, behaviour‐change strategies for improving the quality of clinical care.

The challenges in delivering evidence‐based patient care include time pressures,4 limited skills in interpreting the literature,5 and lack of access to rapid information services.6 Practitioners are well aware of the many questions that are generated while providing clinical care,6,7 the burden of new information and the lack of time to keep up to date.8 Attitudinal resistance may complicate logistical challenges.4,9,10 These considerations suggest a need for clinicians to have access to up‐to‐date, evidence‐based treatment and management recommendations at the point of care.8,11 Rather than trying to find and appraise the evidence in primary studies, practitioners favour trustworthy evidence summaries that they then can interpret within the context of the social and economic factors that impinge on their practice.7,8,9,12

Hospital consultants represent a highly appreciated source of information10,13 and their recommendations have an important impact on primary care physicians' practice,9 particularly if consultants provide information in the context of the care of a shared patient. Ideally, an intervention using consultant feedback will be inexpensive, easily implemented and well accepted by consultants and primary care physicians. We conducted a randomised controlled trial to determine if short, one‐sentence evidence summaries supporting treatment recommendations appended to hospital consultants' discharge letters would increase the adherence of primary care practitioners to discharge medication (box 1).

Box 1: Study questions

  • Do short, one‐sentence evidence summaries in discharge letters increase the likelihood of discharge medication being continued in primary care compared with usual discharge letters?
  • What are the reasons for discontinuation of medicines, and do evidence summaries have a differential impact on these reasons?
  • How do practitioners value evidence summaries?

Methods

Evidence summaries

We identified medical conditions that are frequently encountered in hospital care, require long‐term drug treatment, and for which high‐quality randomised controlled trials, or meta‐analyses of such trials,14,15,16 have unequivocally established benefits greater than risks, costs and inconvenience. We generated single‐sentence evidence summaries (see appendix A for an example) for each condition–medication pair. SML, with support from his medical team, created 135 evidence summaries for 15 predefined medical conditions; RK provided feedback regarding readability, accuracy and completeness.

Study design

We randomised all primary care practices in the catchment area of the Department of Internal Medicine, Park‐Klinik Weissensee, Berlin, from which patients had been admitted in the preceding year, to receive or not receive evidence summaries appended to consultants' letters. Of the included practices, 59% were solo practices, 37% had two to four doctors, and 5% had five or more doctors. We anticipated that exposure to an evidence summary for a patient in the intervention group might influence management of a similar patient in the control group. To deal with this problem, which could lead to false negative results, we cluster‐randomised practices and conducted an analysis appropriate to the study design.17 All practices with eligible patients managed by members of the Department of Internal Medicine, Park‐Klinik Weissensee, were considered eligible. Eligible patients were those prescribed new medications at discharge for which evidence summaries were available. We excluded patients with an anticipated life expectancy of less than 6 months, and recruited patients between May and December 2000.

Existing practices were randomised using a computer‐generated random list before establishing the practitioners' willingness to participate. For practices that opened during the study period, we prepared opaque sealed envelopes that the department secretary opened sequentially. Immediately before discharge, and thus at the point of returning the patient to the care of a primary care practitioner, the residents followed an algorithm to identify patients who had begun medication intended for long‐term use and for which an evidence summary was available. Only after establishing a patient's eligibility did they check the patient's allocation to the experimental or control intervention.
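The computer‐generated allocation of existing practices described above might be sketched as follows. The practice identifiers, the fixed seed and the 1:1 allocation ratio are illustrative assumptions, not details taken from the paper:

```python
import random

def allocate_practices(practice_ids, seed=2000):
    """Cluster randomisation sketch: shuffle the practices with a fixed
    random seed and assign the first half to the intervention arm.
    The seed and the 1:1 ratio are illustrative assumptions."""
    rng = random.Random(seed)
    shuffled = list(practice_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {pid: ("intervention" if i < half else "control")
            for i, pid in enumerate(shuffled)}

# 469 potentially eligible practices, as in the study's catchment area
allocation = allocate_practices([f"practice_{n}" for n in range(469)])
```

Because whole practices are allocated, all patients from one practice share an arm, which is what makes the later cluster‐level analysis necessary.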

In the intervention group, we appended the relevant evidence summary to the letter and in the control group we put the code number of the relevant evidence summary. The two procedures required similar workload. Consultants could delete the evidence summaries at the time of signing the letter if they felt they were inappropriate. Primary care doctors received only one evidence summary per letter; if several summaries were applicable, the doctor received the most relevant one.

Follow‐up interviews

Follow‐up interviews assessed doctors' adherence to the relevant consultant recommendations and, in the intervention group, the general attitude towards the evidence summaries (see appendix B). For each patient, the practitioners in both groups reported whether they had continued prescribing the medication noted in the discharge letter. If they had not, they chose, from among five possibilities, all the relevant reasons why they had not. Practitioners in the intervention group answered five additional questions about the novelty of the information in the evidence summary, the extent to which it helped with understanding consultants' management, its impact on the practitioner's decision making, possible effect on perception of clinical autonomy, and interest in continuing to receive evidence summaries.

The questionnaires were pretested by five practitioners for comprehension, time required to complete the questionnaire and practicability, and were modified accordingly. Two trained research assistants administered the structured follow‐up telephone interviews with the practitioner at least 100 days after the last of the study patients was included, thus ensuring that the doctor had at least one chance to make a decision about renewal of the medication.

We instituted several precautions to minimise the potential for bias that could result from combining non‐blinded interviews with practitioners' self‐report about continuation of a patient's medication. Interviewers strictly followed a written interview guide that had been pretested. Throughout the study, we repeatedly reviewed the conduct of the interview, and, in particular, adherence to the guide. We prearranged interview times through the practice nurse, ensured the availability of the patient's drug record for the interview, and faxed the questionnaire and the original discharge letter to the practitioner before the interview. These measures assured the practitioner's awareness of the specific patient and the patient's current medication and also facilitated the practitioner's understanding of the interview. The short duration of the interviews (3–5 min) encouraged high participation rates.

Sample size, power calculation and statistical analysis

Since this study was cluster‐randomised and clusters were of different sizes, comparisons between random groups were performed using Monte Carlo permutation tests18 with practices (clusters, units of randomisation) as units of permutation. Thus for each endpoint we conducted 10 000 Monte Carlo simulation runs in which practices were randomly shuffled between intervention and control groups and the considered statistic (for example, difference of the rates of discontinuation) was calculated. The p values presented in the tables represent the proportions of simulations producing more extreme results than those observed. The test statistics applied were various differences of proportions. We adjusted for the differences in medical conditions between the groups by applying two‐way analyses of variance without interaction terms.
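The cluster‐level permutation test described above can be sketched as follows. The data layout, function names and one‐sided direction are illustrative assumptions, and the authors' adjustment for medical condition is omitted:

```python
import random

def cluster_permutation_test(clusters, n_sims=10_000, seed=1):
    """Monte Carlo permutation test with practices (clusters) as the
    units of permutation.

    `clusters` is a list of (arm, discontinued, letters) tuples, one
    tuple per practice; this layout is an assumption for illustration.
    """
    rng = random.Random(seed)
    labels = [arm for arm, _, _ in clusters]

    def rate_difference(assignment):
        ev = {"control": 0, "intervention": 0}
        n = {"control": 0, "intervention": 0}
        for arm, (_, discontinued, letters) in zip(assignment, clusters):
            ev[arm] += discontinued
            n[arm] += letters
        return (ev["control"] / n["control"]
                - ev["intervention"] / n["intervention"])

    observed = rate_difference(labels)
    extreme = 0
    for _ in range(n_sims):
        shuffled = labels[:]
        rng.shuffle(shuffled)  # reshuffle whole practices between the arms
        if rate_difference(shuffled) >= observed:
            extreme += 1
    # proportion of simulations at least as extreme as the observed difference
    return observed, extreme / n_sims
```

Shuffling arm labels across practices, rather than across individual letters, respects the clustering: letters from one practice always move between arms together.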

A simulation model provided a tailor‐made power calculation for the study. We assumed that the intervention would reduce the discontinuation rate from 50% to 40%. As input for the simulation program we used the numbers of patients admitted to the study hospital by different practices in 1999 (anticipated cluster sizes). In each of 10 000 simulation runs we drew binary random samples according to the assumed rates of discontinuation from a constant multiple of the cluster sizes, thus assuming that the number of practices was fixed, but the recruitment time could vary. For each set of random samples, the permutation test that we planned for the final analysis was performed and the proportion of positive test results was calculated. Because the simulation suggested that this sample size would guarantee a power of 80% to detect the assumed difference, given the 1999 cluster size distribution, we chose a sample size of 2×250 = 500 evidence summaries.
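The tailor‐made power calculation might be sketched like this: each outer run draws binary discontinuation outcomes per cluster under the assumed rates and applies a cluster‐level permutation test, and the returned value is the fraction of runs reaching significance. The even split of clusters between arms and the reduced run counts are assumptions for illustration (the authors used 10 000 runs and the 1999 cluster‐size distribution):

```python
import random

def power_by_simulation(cluster_sizes, p_ctl=0.50, p_int=0.40,
                        n_outer=200, n_inner=300, alpha=0.05, seed=3):
    """Fraction of simulated trials in which a one-sided cluster
    permutation test on the rate difference reaches significance."""
    rng = random.Random(seed)
    half = len(cluster_sizes) // 2
    labels = ["int"] * half + ["ctl"] * (len(cluster_sizes) - half)

    def rate_diff(labs, events):
        ev = {"ctl": 0, "int": 0}
        n = {"ctl": 0, "int": 0}
        for lab, e, sz in zip(labs, events, cluster_sizes):
            ev[lab] += e
            n[lab] += sz
        return ev["ctl"] / n["ctl"] - ev["int"] / n["int"]

    positives = 0
    for _ in range(n_outer):
        # draw per-cluster counts of discontinued medications
        events = [sum(rng.random() < (p_int if lab == "int" else p_ctl)
                      for _ in range(sz))
                  for lab, sz in zip(labels, cluster_sizes)]
        observed = rate_diff(labels, events)
        extreme = 0
        for _ in range(n_inner):
            shuffled = labels[:]
            rng.shuffle(shuffled)  # practices are the permutation units
            if rate_diff(shuffled, events) >= observed:
                extreme += 1
        if extreme / n_inner < alpha:  # the planned test rejects
            positives += 1
    return positives / n_outer
```

Feeding in the anticipated 1999 cluster sizes and the assumed 50% vs 40% discontinuation rates would reproduce the kind of calculation the authors describe.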

Calculations were performed using APL+Win 2.0.00 (supplied by APL 2000, Rockville, Maryland, USA).

Results

Recruitment

The study was terminated early owing to limited resources; by that time, 417 of the planned 500 letters had been sent to the practices. Figure 1 and table 1 summarise the study design, the characteristics of the practices and the participating practitioners. Eligible patients came from 178 of 469 potentially eligible (and thus randomised) practices (fig 1). Of these, 122 practices participated in an interview. No interview was possible in 56 practices because of: refusal to participate (intervention 9/control 6); logistical problems, that is, patient lost to practitioner, practice closed, address unknown or missing discharge letter (intervention 14/control 13); patient‐related reasons, that is, death within the observation period (intervention 1/control 4); and miscellaneous reasons (intervention 3/control 6). The 122 practices received 268 letters relating to 59 different evidence summaries.

Figure 1 Study design and participation at different sampling stages (randomisation, inclusion, interview). The unit of analysis was practices; letters were nested within the practices, which acted as clusters in the cluster‐randomisation.

Table 1 Baseline comparisons and socio‐demographic factors of doctors participating in the interviews

Comparability

The intervention and control groups

Figure 1 and table 1 summarise the characteristics of the intervention and control group at randomisation and subsequent stages of the study and show excellent balance at randomisation. A similar number of practices received discharge letters with and without evidence summaries (95 and 83, respectively; p = 0.37), and participated in the interviews (66 and 56, respectively; p = 0.42). The practices of the intervention group received more letters (243 vs 174 letters in the control group; p = 0.17) owing to the chance allocation of three large group practices to the intervention group, which received a total of 40 discharge letters. No such large practice was allocated to the control group. This chance imbalance and the high rate of participation in interviews of those three practices resulted in the larger number of interviews in the intervention group (172 vs 96 interviews in the control group; p = 0.03). The mean time to interview in the intervention group was 239.6 days compared with a mean of 230.9 days in the control group (p = 0.22).

Evidence summaries and medical conditions

We evaluated the effect of 59 different evidence summaries addressing 15 medical conditions in 268 interviews. Of the 268 evidence summaries, 74 addressed the management of heart failure, 63 the management of hypertension, and the remainder a wide variety of other conditions. The distribution of medical conditions addressed was similar between groups with the exception of heart failure (13% intervention, 21% control), hypertension (13% intervention, 22% control), and osteoporosis (10% intervention, 6% control). The analysis included adjustment for differences in the distribution of medical conditions.

Discontinuation of discharge medication

The rate of discontinuation, the primary study endpoint, was 18.5% in the intervention group and 29.4% in the control group. In absolute terms, the adjusted estimate of the reduction in discontinuation of recommended medication was 12.5% (p = 0.039). The three large practices allocated to the intervention group displayed discontinuation rates (18.2%, 29.4% and 0%; mean 15.9%) similar to those of the other intervention practices, from which fewer patients were enrolled (18.6% excluding the three large practices).

Reasons why practitioners discontinued discharge medication

In the interview, the doctors could choose from among five options to explain discontinuation of recommended medication: medical issues related to the patient; non‐medical issues related to the patient; disagreement with the consultants' recommendation; cost (that is, impact on the drug budget of the practitioner); or other reasons (table 2). Practitioners in both groups stopped discharge medication for similar reasons. However, doctors in the intervention group discontinued expensive medication substantially less often than those in the control group: 2.6% vs 10.7% (adjusted absolute risk reduction 7.4%; p = 0.052). This occurred primarily in the disease entities “hypertension”, “reflux oesophagitis”, “lipid disorders” and “digestive disorders”.

Table 2 Reasons for stopping the medication (more than one answer possible)

Overall acceptance of the evidence summaries

The evidence summaries were well accepted by the participating practitioners (table 3). Although only a few thought they contained new information, most felt the summaries aided understanding of consultants' recommendations. Most felt the evidence summaries played a part in decisions to continue medication, and almost all wanted evidence summaries to continue to be sent.

Table 3 Feedback of practitioners regarding evidence summaries*

Discussion

We found that patient‐specific evidence summaries increased primary care practitioners' adherence to evidence‐based consultant recommendations for long‐term drug treatment across a broad spectrum of chronic medical conditions. The greatest effect seemed to be a reduction in the proportion of patients in whom doctors discontinued the drug for financial reasons.

Challenges of implementation

Limitations in quality of care have been extensively documented.19 Improving quality of care requires implementation strategies that have demonstrated effectiveness in rigorous trials.20 Recent reviews2,19 of relevant randomised trials have documented that no strategies provide consistent or large effects across settings. For example, although academic detailing has sometimes proved effective for changing practitioners' behaviour,21 some randomised controlled trials did not find any impact on attitude22 or behaviour.23 Despite promise in some randomised controlled trials,24 postal prompts directed to both patients and clinicians failed to increase the rate of prescription of aspirin, β blockers or statins after myocardial infarction.25 A nurse‐delivered programme designed to educate patients with myocardial infarction improved the process of care, but had no effect on blood pressure control, smoking cessation or serum cholesterol.26 We have already mentioned the limited impact of computerised reminders earlier in this paper.

Explanations for the failed interventions include lack of specificity and failure to support decision making for an explicit, patient‐centred problem. More successful randomised controlled trials have used a practitioner‐centred approach by providing evidence‐based information for a specific patient problem. For instance, providing practitioners with the evidence regarding antibiotics for otitis media in children at the time of writing prescriptions led to a substantial reduction of prescriptions.27 Linking practice data to published evidence increased the use of peritoneal dialysis from 2.4% to 15.3% in the intervention group,28 and nurse‐led secondary prevention clinics for coronary heart disease in practices led to a marked change in risk factors and increased use of appropriate drugs.29 In each of these trials, the interventions occurred at the point of care without disrupting the workflow. We propose that our presentation of information in the context of an individual patient's problem at the point of care, supported by an authority familiar with the patient—the hospital consultant—was responsible for its impact on clinicians' behaviour.

The strengths of our study include the simplicity and feasibility of the intervention, the link between recommendations and high‐quality evidence, the randomised design and the high rate of participation of primary care practitioners. Limitations include the lack of blinding of interviewers to allocation and the reliance on practitioners' self‐report of drug (dis)continuation, which we did not confirm with a review of charts. We tried, however, to minimise bias in the interviews through a highly standardised interview format and strict monitoring of the interviewers' compliance with that format. We made various provisions to ensure that practitioners had all relevant information available at the time of the interview. Another limitation was the imbalance in the number of letters for which we had follow‐up data in the intervention and control groups. In retrospect, we could have prevented this problem by stratifying practices according to size and using blocked randomisation within strata. It is, however, reassuring that the three large practices in the intervention group that were responsible for the imbalance displayed rates of adherence similar to those of the other intervention practices.

We did not achieve our target sample size, the cluster sizes varied to a greater extent than anticipated, and we observed a markedly higher rate of adherence to hospital recommendations at baseline than anticipated. Although these factors limited the power of the study, we could still show differences between the intervention and control groups, although the results were on the borderline of conventional levels of statistical significance. Finally, we cannot exclude the possibility that unmeasured local factors were responsible for the success of the evidence summaries. Thus, further research is needed to confirm the effectiveness of evidence summaries in other settings.

Although 72% of practitioners thought that summaries should be continued, 80% felt they provided no new information. Although there could be several explanations for this apparently incongruous finding, perhaps the most likely is that doctors found the summaries both an effective reminder of what they knew but might forget, and a reassurance that their information remained up to date.

Conclusions

Short evidence summaries in discharge or referral letters are a simple, effective and feasible method of disseminating important healthcare information to primary care practitioners at the point of care. Initial evidence suggests they influence practitioners' decision making and are well received. Thus, they might be particularly useful for transferring specific guideline recommendations or standards of disease management programmes at the interfaces of the healthcare system (inpatient/ambulatory care; specialist/general practice), a common point of failure in the delivery of high‐quality care.

Acknowledgements

We thank the doctors at the Department of Internal Medicine, Park‐Klinik Weissensee. The study was performed as part of a joint project of the Ärztekammer Berlin, Park‐Klinik Weissensee and the Techniker Krankenkasse, Hamburg.

Appendix A

Example of an evidence summary

Spironolactone in severe heart failure

Low dose spironolactone (25 mg/day added to ACE inhibitors, thiazide diuretics and digitalis) improved the prognosis of severe heart failure. Severe hyperkalaemia was only rarely observed. (N Engl J Med 1999; 341: 709)

NNT = 9 over 2 years to prevent one additional death. Level of evidence: Ib
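The NNT figure in the summary follows from the absolute risk reduction (NNT = 1/ARR). A minimal sketch, using approximate two‐year mortality rates for the cited spironolactone trial (about 46% with placebo vs 35% with treatment; these figures are assumptions, not quoted in the paper):

```python
def number_needed_to_treat(risk_control, risk_treatment):
    """NNT = 1 / absolute risk reduction, rounded to a whole patient."""
    arr = risk_control - risk_treatment
    return round(1 / arr)

# Assumed approximate 2-year mortality rates from the cited trial
nnt = number_needed_to_treat(0.46, 0.35)  # → 9
```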

Appendix B

Interview questions

  1. Did you have an opportunity to read the discharge letter including the evidence summary before taking over the care of patient XY ... after her/his discharge from hospital?
  2. Did the evidence summary YZ ... (reading it to the physician) contain information that was new to you?
  3. Did the evidence summary aid your understanding of the consultant's treatment decision?
  4. Did you change the discharge medication?
  5. Which medication does the patient now receive for the aforementioned problem?
  6. How much did the evidence summary influence your decision?

Only if the response to question 4 was positive:

Why was the consultant's recommendation not implemented? [More than one answer was allowed]

  • Your concept about the appropriate therapy is different.
  • There were patient‐related medical reasons (side effect, change in the patient's health status)
  • There were patient‐related non‐medical reasons (preference of the patient, poor compliance, co‐payment of the drug)
  • To keep the drug budget under control.
  • Other reasons

  7. Do you regard the evidence summary as an intrusion on your freedom to practise?

  8. Would you want the evidence summaries to be continued?

Additional questions addressed the doctor's gender, age, specialisation, experience as a practitioner, and the practice setting (single doctor or group practice and, if the latter, the number of partners).

Footnotes

Funding: Financial support was provided by the Techniker Krankenkasse, Hamburg, Germany. RK was supported by a research grant from the Senat of Berlin, Germany; Santésuisse and the Gottfried und Julia Bangerter‐Rhyner‐Stiftung, Switzerland.

Competing interests: None.

The study sponsor had no role in the study design, collection, analysis, and interpretation of data.

The practitioners in the catchment area received written notification regarding the project before its commencement. We interpreted willingness to answer follow‐up questions as denoting informed consent. The ethics committee of the University Hospital Charité, Berlin, Germany provided ethical approval.

References

1. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care 2001;39(Suppl 2):II2–45. [PubMed]
2. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004;8(6):iii–iv, 1–72. [PubMed]
3. Bennett JW, Glasziou PP. Computerised reminders and feedback in medication management: a systematic review of randomised controlled trials. Med J Aust 2003;178:217–22. [PubMed]
4. Freeman AC, Sweeney K. Why general practitioners do not implement evidence: qualitative study. BMJ 2001;323:1100–2. [PMC free article] [PubMed]
5. Cranney M, Walley T. Same information, different decisions: the influence of evidence on the management of hypertension in the elderly. Br J Gen Pract 1996;46:661–3. [PMC free article] [PubMed]
6. Gorman PN, Helfand M. Information seeking in primary care: how physicians choose which clinical questions to pursue and which to leave unanswered. Med Decis Making 1995;15:113–19. [PubMed]
7. Brassey J, Elwyn G, Price C, et al. Just in time information for clinicians: a questionnaire evaluation of the ATTRACT project. BMJ 2001;322:529–30. [PMC free article] [PubMed]
8. Ely JW, Osheroff JA, Ebell MH, et al. Analysis of questions asked by family doctors regarding patient care. BMJ 1999;319:358–61. [PMC free article] [PubMed]
9. Fairhurst K, Huby G. From trial data to practical knowledge: qualitative study of how general practitioners have accessed and used evidence about statin drugs in their management of hypercholesterolaemia. BMJ 1998;317:1130–4. [PMC free article] [PubMed]
10. Tomlin Z, Humphrey C, Rogers S. General practitioners' perceptions of effective health care. BMJ 1999;318:1532–5. [PMC free article] [PubMed]
11. Osheroff JA, Forsythe DE, Buchanan BG, et al. Physicians' information needs: analysis of questions posed during clinical teaching. Ann Intern Med 1991;114:576–81. [PubMed]
12. Guyatt GH, Meade MO, Jaeschke RZ, et al. Practitioners of evidence based care. Not all clinicians need to appraise evidence from scratch but all need some skills. BMJ 2000;320:954–5. [PMC free article] [PubMed]
13. Allery LA, Owen PA, Robling MR. Why general practitioners and consultants change their clinical practice: a critical incident study. BMJ 1997;314:870–4. [PMC free article] [PubMed]
14. Best Evidence, issue 3. Philadelphia: American College of Physicians, 1999.
15. Clinical Evidence [German version]. Bern: Verlag Hans Huber, 1999.
16. Cochrane Library, issue 3. Oxford: Update Software, 1999.
17. Ukoumunne OC, Gulliford MC, Chinn S, et al. Methods in health service research. Evaluation of health interventions at area and organisation level. BMJ 1999;319:376–9. [PMC free article] [PubMed]
18. Good P. Permutation, parametric, and bootstrap tests of hypotheses. New York: Springer, 2005.
19. Kohn L, Corrigan J, Donaldson MS. To err is human: building a safer health system. Washington: National Academy Press, 2000.
20. van der Weijden T, Grol R. Preventing recurrent coronary heart disease. We need to attend more to implementing evidence based practice. BMJ 1998;316:1400–1. [PMC free article] [PubMed]
21. O'Brien MA, Oxman A, Davis DA, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 1997:CD000409. [PubMed]
22. Markey P, Schattner P. Promoting evidence‐based medicine in general practice: the impact of academic detailing. Fam Pract 2001;18:364–6. [PubMed]
23. Wyatt JC, Paterson‐Brown S, Johanson R, et al. Randomised trial of educational visits to enhance use of systematic reviews in 25 obstetric units. BMJ 1998;317:1041–6. [PMC free article] [PubMed]
24. Davis DA, Thomson MA, Oxman AD, et al. Changing physician performance. A systematic review of the effect of continuing medical education strategies. JAMA 1995;274:700–5. [PubMed]
25. Feder G, Griffiths C, Eldridge S, et al. Effect of postal prompts to patients and general practitioners on the quality of primary care after a coronary event (POST): randomised controlled trial. BMJ 1999;318:1522–6. [PMC free article] [PubMed]
26. Cupples ME, McKnight A. Randomised controlled trial of health promotion in general practice for patients at high cardiovascular risk. BMJ 1994;309:993–6. [PMC free article] [PubMed]
27. Christakis DA, Zimmerman FJ, Wright JA, et al. A randomized controlled trial of point‐of‐care evidence to improve the antibiotic prescribing practices for otitis media in children. Pediatrics 2001;107:E15. [PubMed]
28. Balas EA, Boren SA, Hicks LL, et al. Effect of linking practice data to published evidence. A randomized controlled trial of clinical direct reports. Med Care 1998;36:79–87. [PubMed]
29. Campbell NC, Thain J, Deans HG, et al. Secondary prevention clinics for coronary heart disease: randomised trial of effect on health. BMJ 1998;316:1434–7. [PMC free article] [PubMed]

Articles from Quality & Safety in Health Care are provided here courtesy of BMJ Publishing Group