Acad Radiol. Author manuscript; available in PMC 2013 May 12. PMCID: PMC3651854.

Teaching Management of Contrast Reactions: Does it work and how often do we need to refresh?


Rationale and Objectives

Knowledge of the management of acute contrast reactions is lacking among radiologists. Training in the management of acute contrast reactions occurs at our institution and others but the durability of that training and the need for refresher training have not been assessed.

Materials and Methods

Prospective assessment of changes in trainee knowledge and confidence following a required educational course concerning reactions to contrast material. Assessments were performed prior to and immediately following the course and at 1, 3, 6 and 9 months after the course.


Results

Trainee knowledge significantly improved following the course and remained improved for 6 months (p<0.0001). By 9 months, knowledge was no longer improved over baseline (p=0.0644). Trainee confidence also improved following the course and remained improved throughout follow-up (p=0.0356 at 9 months). At 6 months, however, confidence had declined significantly relative to immediate post-course levels (p=0.0241). Trainee knowledge was not dependent on post-graduate year, but PGY-2 residents were significantly less confident in their ability to manage a contrast reaction than PGY-5 and PGY-6 trainees. Residents who managed a contrast reaction during the follow-up period were more confident in their abilities than residents who did not (p=0.0097).


Conclusions

These data suggest the need for twice-yearly refresher training in the management of acute contrast reactions to maintain trainee knowledge and confidence.

Keywords: Contrast media, Contrast reactions, Education


Acute reactions to contrast material represent one of the few clinical emergencies faced by the diagnostic radiologist acting as the primary caregiver. Since these reactions are rare, radiologists have little experience managing them and there is little reinforcement of the knowledge needed to manage these reactions. As a result, skills for treating adverse contrast reactions are generally lacking in both trainees and practicing radiologists (1–6). In 1994, surveys of radiologists in the United Kingdom found that only 24% had formal training in CPR, only 68% knew the correct dose of epinephrine to administer for an anaphylactic-like contrast reaction and only 2% managed severe contrast reactions according to published guidelines (2, 5). Several studies have advocated different methods of either teaching this material or improving radiologist performance in managing simulated contrast reactions (4, 7–12). Each of these studies demonstrates improved knowledge and/or performance, but the effect of these interventions on the management of real-life contrast reactions and the durability of this effect are not known.

At our institution, trainees are required to participate in an annual contrast reaction course which covers the recognition, pathophysiology, treatment, and prevention of contrast reactions. The course, which has been in place for over ten years, includes didactic lectures, low-fidelity simulations/scenarios, and hands-on sessions in which participants practice preparing and administering intravenous fluids and medications and using a defibrillator. As a result, radiology residents are exposed to the material four times over the course of their training.

We undertook this study to evaluate the educational value of the course as it is currently structured at our institution and to assess the durability of knowledge learned as a result of this course. We hypothesized that the course would result in a measurable improvement in knowledge but that this improvement would fade in the one year interval between courses.


Materials and Methods

This prospective study was designed to assess changes in trainees’ knowledge of the recognition, management and prevention of reactions to contrast material. The study authors, who include two experts on radiographic contrast material, created 30 questions based upon material covered in the lecture portion of the course and reinforced during the hands-on sessions. These questions were divided into two distinct 15-item quizzes (quiz A and quiz B), with emphasis placed on covering similar material in both quizzes. To confirm that the two instruments were of similar difficulty, a pilot study was performed in which the quizzes were administered to faculty radiologists at our institution. Once validated as being of similar difficulty, the quizzes were administered in alternating fashion to course participants prior to the first lecture session, at the end of the course, and then at 1, 3, 6 and 9 months following course completion. “Quiz B” was administered prior to the course and at 1 and 6 months following the course. “Quiz A” was administered at the other time points (immediately post-course and at 3 and 9 months).

In addition to the knowledge assessment, survey questions administered at each time point assessed demographic information (post-graduate year) and whether the participant had either managed a reaction or studied the management of contrast reactions in the interval since the prior assessment. Additionally, a single question assessed the participant’s degree of confidence in managing an acute contrast reaction on a five-point Likert scale (1=very unsure, 2=somewhat unsure, 3=neutral, 4=somewhat confident, 5=very confident).

Our institution is a large academic medical center (885 inpatient beds; 1.8 million annual outpatient care encounters). In the Department of Radiology there are 44 Diagnostic Radiology residency positions and 32 fellowship positions, the latter in the following sub-specialties: Abdominal Imaging, Cardiothoracic Radiology, Interventional Neuroradiology, Magnetic Resonance Imaging (MRI), Musculoskeletal Radiology, Neuroradiology, Nuclear Medicine, Pediatric Radiology, Vascular/Interventional Radiology, and Women’s Imaging/Mammography. All of our residents and fellows are expected to participate in the annual contrast reaction course and were asked to participate in this study. Participation was voluntary and the assessments were anonymous. Institutional review board approval with a waiver of informed consent was obtained.

Statistical analysis

Pilot study

An unpaired two-sample t-test was used to compare mean faculty scores on the two quizzes during the pilot portion of the study; each faculty member completed only one of the two quizzes.

Assessment of trainee learning and retention

Due to the anonymous nature of the assessments, individual performance was not tracked over time. Scores were instead grouped based on post-graduate level. Mean quiz (knowledge) and confidence scores were analyzed as outcome measures. Continuous variables were summarized using means and standard deviations. Categorical variables were summarized by counts and percentages. A repeated-measures mixed model was used for univariate analysis to test the association between the outcome measures and predictors (period following the course, class year, whether the survey responder had self-reported interval experience managing a reaction, and whether or not the participant had studied contrast reaction identification and management since the prior assessment). Multivariate repeated-measures models were then fit with significant predictors identified in the univariate analysis to further investigate each predictor’s effect on the quiz and confidence scores in the presence of other factors. To adjust for multiple pairwise comparisons and control the experimentwise error rate, post hoc analyses were performed using the Scheffé test in the comparison of quiz and confidence scores among the different groups and time points (13). This post hoc analysis controls for increased alpha (type 1) error that results from multiple comparisons within a population. A probability (p) of 0.05 or smaller was considered significant for all hypothesis tests. All analyses were performed in SAS 9.2 (SAS Institute Inc., Cary, NC).
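The alpha-inflation problem that the Scheffé adjustment guards against can be illustrated with a quick back-of-the-envelope calculation (a generic sketch under an independence assumption, not part of the study’s actual analysis): with six assessment time points there are C(6, 2) = 15 pairwise comparisons, and testing each at α = 0.05 without correction makes at least one false positive more likely than not.

```python
# Familywise error rate (FWER) for m independent tests, each at level alpha:
# P(at least one false positive) = 1 - (1 - alpha)^m
from math import comb

alpha = 0.05
m = comb(6, 2)               # 15 pairwise comparisons among 6 time points
fwer = 1 - (1 - alpha) ** m  # ~0.54 without any correction
print(f"{m} comparisons, uncorrected familywise error rate = {fwer:.2f}")
```

The Scheffé procedure bounds the familywise rate at α for all possible contrasts simultaneously, which is why it is a conservative choice for post hoc comparisons.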


Results

Pilot study (validation of test equivalence)

In the pilot portion of this study, 40 faculty radiologists were recruited to complete the assessment instruments. Twenty-three completed “quiz A” with a mean score of 11.3 ± 1.7 (out of 15) and 17 completed “quiz B” with a mean score of 11.5 ± 1.8. Mean scores did not differ significantly between the two groups (p=0.63). Because the average score on “quiz B” was slightly higher, it was administered as the baseline assessment (prior to the course and at months 1 and 6) to limit any potentially confounding effect of this difference.
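As an illustration, a pooled two-sample t statistic can be recomputed from the summary statistics reported above (a sketch for intuition only; the published p value of 0.63 depends on the raw scores, which are not reproduced here, so the exact figure is not expected to match):

```python
import math

def two_sample_t(m1, s1, n1, m2, s2, n2):
    """Pooled-variance two-sample t statistic from group summary statistics."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2  # statistic and its degrees of freedom

# Summary statistics reported for the pilot:
# quiz A: 11.3 +/- 1.7 (n=23); quiz B: 11.5 +/- 1.8 (n=17)
t, df = two_sample_t(11.3, 1.7, 23, 11.5, 1.8, 17)
print(f"t = {t:.2f} on {df} df")  # |t| is ~0.36, far below the ~2.02 critical value
```

The tiny t statistic confirms what the reported p value indicates: the two quizzes were of comparable difficulty.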

Assessment of trainee learning and retention

Seventy-six trainees (44 residents and 32 fellows) participated at some point during the study, though not all individuals participated at every time point (Table 1). Seven of the 32 fellows had been residents at our institution and therefore, unlike the other fellows, were not new to the course.

Table 1
Participant numbers for each time point. The 9 month follow up time point occurred after the PGY-5 and PGY-6 participants had completed training for the year.

Knowledge (Quiz scores)

Mean knowledge scores are shown graphically for each time point in Figure 1. Prior to the course, mean knowledge scores were as follows: PGY-2, 10.2; PGY-3, 10.5; PGY-4, 12.0; PGY-5, 12.2; PGY-6, 10.9. While there were absolute differences in scores based on post-graduate level, these differences did not reach statistical significance when compared between individual post-graduate levels (p=0.22). Trainees who had previously participated in the course did not have significantly higher scores than PGY-2 residents who were taking the course for the first time.

Figure 1
Mean trainee knowledge scores over time. Error bars indicate 95% confidence intervals. The hashed vertical bar indicates the point at which scores are no longer statistically significantly improved over baseline.

On multivariate analysis, statistically significant predictors of knowledge scores included time relative to the course (p<0.0001) and whether the participant studied between assessments (p=0.0045). For all trainees, scores immediately following the course and at 1, 3 and 6 months after the course were significantly improved relative to pre-course scores (Table 2). By 9 months, however, scores were no longer significantly improved relative to pre-test scores. At no point during the follow-up period did scores differ significantly between post-graduate years. Studying between assessments was negatively associated with quiz scores (p=0.0045): participants who stated they had studied the management of contrast reactions since the prior assessment had significantly lower quiz scores than those who did not report studying.

Table 2
Comparison of knowledge scores between time points. Scores listed in the first column are means with 95% confidence intervals indicated in parentheses. Statistical comparisons are based on the Scheffé test.


Confidence

On multivariate analysis, statistically significant predictors of participant confidence included time relative to the course (p<0.0001), post-graduate year (p=0.0002) and whether the participant had managed a contrast reaction in the interim (p=0.0097). Mean confidence scores are shown graphically for each time point in Figure 2. For all trainees, confidence scores were significantly higher than baseline immediately following the course and at all subsequent time points (Table 3). While scores at 6 months were significantly higher than prior to the course, they had declined significantly relative to the immediate post-course scores (p=0.0241). Similarly, scores at 9 months remained improved relative to pre-course scores; 9-month scores also declined relative to immediate post-course scores, although this decline did not quite reach statistical significance (p=0.0503).

Figure 2
Mean trainee reported confidence scores over time. Error bars indicate 95% confidence intervals.
Table 3
Comparison of confidence scores between time points. Scores listed in the first column are means with 95% confidence intervals indicated in parentheses. Statistical comparisons are based on the Scheffé test.

At baseline (pre-course) PGY-2 residents were significantly less confident than more senior trainees (PGY-3–PGY-6) in their ability to manage acute contrast reactions (Table 4). There were no differences in confidence among the other post-graduate years (Table 5). Immediately following the course, PGY-2 residents reported increased confidence that was no longer significantly different from PGY-3 and PGY-4 residents. However, PGY-2 residents remained significantly less confident than PGY-5 (p=0.0030) and PGY-6 (p=0.0068) trainees in their ability to manage acute contrast reactions across all time points for the duration of the study. Based upon the multivariate analysis, those trainees who managed a contrast reaction at any point during the assessment period were also significantly more confident than those who did not (p=0.0097).

Table 4
Comparison of baseline (pre-course) confidence scores based upon post graduate year. Scores listed in the first column are means with 95% confidence intervals indicated in parentheses. Statistical comparisons are based on the Scheffé test.
Table 5
Mean trainee reported confidence scores subdivided by post-graduate year. Standard deviations are indicated in parentheses. Nine month follow-up testing occurred after the PGY-5 and PGY-6 trainees had graduated from the program.


Discussion

As recently as 2009, data in the literature suggested that radiologists are ill-prepared to manage reactions to contrast material (1, 3). In a knowledge survey, Lightfoot et al. showed that none of the surveyed radiologists correctly described how to administer epinephrine for an anaphylactic-like reaction to contrast material, and 17% provided an answer that was considered an overdose (3). Individual radiologists are aware of the need for additional training (14), and residency programs have responded by instituting curricula to teach trainees about the management of acute contrast reactions (7, 15).

Due to the rarity of acute contrast reactions (approximately 0.6% of intravenous contrast administrations (15, 16)) it is difficult to assess the impact educational interventions have on the management of these reactions (7). Research has focused instead on improving the way in which the material is taught and assessing how those changes affect knowledge and resuscitation skills. Lecture series similar to ours have been previously shown to result in demonstrable improvements in scores on immediate post-tests concerning the management of acute contrast reactions (12). While the findings of Echenique et al (12) may have been confounded by using identical pre- and post-tests, our data confirm the results of this prior study: following a lecture and demonstration series, trainees at our institution scored significantly higher on a written knowledge test that was different from, but of similar difficulty to, the pre-test. Participants in our course also reported significantly higher confidence in their ability to manage an acute contrast reaction following participation in this course.

While prior studies have shown an immediate improvement in knowledge following lecture-based training, there has been little attention to the durability of this effect. Literature on cardiopulmonary resuscitation has shown that knowledge and skills gained through training decline in as few as two months and certainly prior to repeat training one year later (17–20). To date, the only study that has examined the durability of knowledge gained from training in the management of acute contrast reactions is a recent study by Wang et al. (7). As part of their comparison of lecture- and scenario-based teaching methods, the authors reassessed participant knowledge two months after the training and found that improvements persisted at that time point (7). Until now, testing at time points farther from the initial training has not been performed.

In our study, we followed participant knowledge and confidence for nine months after initial training and can infer findings at twelve months. Our results demonstrate durability of improvements in participant knowledge and confidence through six months. At nine months, however, knowledge scores had declined to the point that they were no longer statistically significantly improved over baseline. Confidence scores, while still improved over baseline, had declined relative to immediate post-course scores. Despite prior exposure to the course material, PGY-3–PGY-6 trainees’ scores on the pre-course quiz were not significantly higher than the scores of PGY-2 trainees who were first-time attendees at the course. While we did not specifically test trainees 12 months following the course, it is reasonable to view the pre-course test as a 12-month assessment for returning trainees; this finding thus corroborates the decline in knowledge demonstrated at 9 months.

We found that, prior to the course, senior residents were more confident than PGY-2 residents in their ability to manage an acute contrast reaction despite no measurable difference in knowledge. Combined with the fact that improved confidence persisted longer than improved knowledge after the course, this suggests that beyond 6 months after a course like this, trainees may believe they are better equipped to treat a contrast reaction than they actually are.

The measurable decline in knowledge and confidence six months after the course, combined with the potential over-confidence reported by trainees, confirms the importance of refresher courses in a training program of this type. Our data suggest that these refresher courses should occur no more than six months after initial training. A similar frequency of refresher courses has been recommended for cardiopulmonary resuscitation training by Woollard et al., who showed a comparable decline in defibrillation skills 7 months after initial certification (19).

One method of teaching the management of acute contrast reactions not directly evaluated with this study is the use of technologically sophisticated simulation (e.g., electronically controlled manikins), although our course did employ low-fidelity simulations. Some have suggested that simulation might provide a more realistic and durable training experience than lecture-based teaching and might allow for more accurate assessment of contrast reaction treatment skills of course participants (7). Tubbs et al. and Tofil et al. have both shown that simulation improves scores on post-testing of knowledge related to the management of contrast reactions (8, 9). More recently, comparisons of lecture based instruction to simulation training in the management of contrast reactions have shown that while participants favor simulation-based instruction, it is more costly and there was no difference in participant knowledge on post-testing (7, 10). To date it appears that while there may be some advantages to teaching this material in a more “realistic” setting, it has not yet been shown that there is a clear benefit in terms of knowledge, and knowledge durability, gained. This is an area of research that deserves additional attention.

One other interesting finding of our study was the apparent negative relationship between studying between assessments and participant scores on the quizzes. Those who reported studying between assessments had significantly lower quiz scores than those who had not, a finding contrary to what would be expected. One possible explanation is that participants studied incorrect material. Alternatively, there may have been intrinsic differences in knowledge retention between participants who studied the material and those who did not. It is also conceivable that more of the participants who did not study had encountered patients having contrast reactions in the interval between tests, but this does not appear to be the case: there was no statistical association between those who reported studying and those who reported managing a reaction.

This study has several limitations, one of which is the small sample size, which was limited by the size of the residency and fellowship classes at our institution. As a result of scheduling conflicts, there were substantial declines in participation over the course of the study, with a number of trainees not taking follow-up tests. A second limitation is the relative lack of difficulty of the knowledge assessment instruments. The high mean scores on the assessments prior to the course and the small absolute differences in scores following the course suggest that the assessments may have been too “easy,” which may have made differences between groups and across time less apparent.

The repeated use of the knowledge assessment measures is also a limitation. While we alternated two distinct quizzes, the same quizzes were administered several times during the course of the study. This could have inflated follow-up scores purely through recall of previously answered questions and/or repeated calling of attention to the subject matter. This study is also limited by the fact that it was performed at a single institution with a unique training program for the management of contrast reactions. While we believe the most important result of this study (i.e., the need for twice-yearly refresher courses to maintain knowledge) can be applied at other institutions, the improvements in knowledge and confidence could be unique to our institution and/or its trainees.

Finally, we believe that the most important limitation of this study is that while the assessment measures are testing trainee knowledge, they are not assessing management of real-life contrast reactions. The rarity of real-life contrast reactions makes assessment of changes in trainee performance following an intervention difficult and thus necessitates the use of surrogate measures such as knowledge quizzes and observation of trainee performance in simulated scenarios.

In an ideal world, radiologists would be taught to manage acute reactions to contrast material and instruction on treatment would be reinforced in some fashion throughout the course of their careers, so that knowledge remains fresh and can be applied effectively at all times. The limited frequency of contrast reactions means that we need to develop educational methods of reinforcement to maintain such knowledge and skill. Our study indicates that trainee knowledge and confidence can be improved through a lecture and skills demonstration course, but that the knowledge gained lasts for only about six months, despite a longer persistence of participants’ subjective feelings of increased confidence. Reinforcement at intervals of no more than six months is therefore suggested. As a result of these findings, we have decided to provide our current trainees with a repeat activity on contrast reaction treatment within six months of our annual course.


Acknowledgments

Statistical analysis was performed with financial support from the Michigan Institute for Clinical and Health Research (MICHR), grant UL1RR024986. We thank Linda Hart, Jill Philp and Amy Spencer for their assistance in administering the assessment measures.


References

1. Tapping CR, Culverwell AD. Are radiologists able to manage serious anaphylactic reactions and cardiopulmonary arrest? Br J Radiol. 2009;82(982):793–9. [PubMed]
2. Sadler DJ, Parrish F, Coulthard A. Intravenous contrast media reactions: how do radiologists react? Clin Radiol. 1994;49(12):879–82. [PubMed]
3. Lightfoot CB, Abraham RJ, Mammen T, Abdolell M, Kapur S. Survey of radiologists’ knowledge regarding the management of severe contrast material-induced allergic reactions. Radiology. 2009;251(3):691–6. [PubMed]
4. Gaca AM, Frush DP, Hohenhaus SM, et al. Enhancing pediatric safety: using simulation to assess radiology resident preparedness for anaphylaxis from intravenous contrast media. Radiology. 2007;245(1):236–44. [PubMed]
5. Brown PW, Ghandhi MR, Morcos SK. Can radiologists manage reactions to intravascular contrast media? Clin Radiol. 1994;49(12):906. [PubMed]
6. Bartlett MJ, Bynevelt M. Acute contrast reaction management by radiologists: a local audit study. Australas Radiol. 2003;47(4):363–7. [PubMed]
7. Wang CL, Schopp JG, Petscavage JM, Paladin AM, Richardson ML, Bush WH. Prospective randomized comparison of standard didactic lecture versus high-fidelity simulation for radiology resident contrast reaction management training. AJR Am J Roentgenol. 2011;196(6):1288–95. [PubMed]
8. Tubbs RJ, Murphy B, Mainiero MB, et al. High-fidelity medical simulation as an assessment tool for radiology residents’ acute contrast reaction management skills. J Am Coll Radiol. 2009;6(8):582–7. [PubMed]
9. Tofil NM, White ML, Grant M, et al. Severe contrast reaction emergencies high-fidelity simulation training for radiology residents and technologists in a children’s hospital. Acad Radiol. 2010;17(7):934–40. [PubMed]
10. Petscavage JM, Wang CL, Schopp JG, Paladin AM, Richardson ML, Bush WH., Jr Cost analysis and feasibility of high-fidelity simulation based radiology contrast reaction curriculum. Acad Radiol. 2011;18(1):107–12. [PubMed]
11. Lerner C, Gaca AM, Frush DP, et al. Enhancing pediatric safety: assessing and improving resident competency in life-threatening events with a computer-based interactive resuscitation tool. Pediatr Radiol. 2009;39(7):703–9. [PubMed]
12. Echenique AM, Joseph R, Casillas VJ. Recognition and treatment of reactions to contrast media: a model for resident and faculty education employing lectures and case scenario workshops. Acad Radiol. 1997;4(3):230–4. [PubMed]
13. Norman G, Streiner DL. Biostatistics: The Bare Essentials. 3rd ed. PMPH USA; 2008. pp. 81–5.
14. Schellhammer F. Do radiologists want/need training in cardiopulmonary resuscitation? Results of an Internet questionnaire. Acta Radiol. 2003;44(1):56–8. [PubMed]
15. Collins MS, Hunt CH, Hartman RP. Use of IV epinephrine for treatment of patients with contrast reactions: lessons learned from a 5-year experience. AJR Am J Roentgenol. 2009;192(2):455–61. [PubMed]
16. Wang CL, Cohan RH, Ellis JH, Caoili EM, Wang G, Francis IR. Frequency, outcome, and appropriateness of treatment of nonionic iodinated contrast media reactions. AJR Am J Roentgenol. 2008;191(2):409–15. [PubMed]
17. Friesen L, Stotts NA. Retention of Basic Cardiac Life Support content: the effect of two teaching methods. J Nurs Educ. 1984;23(5):184–91. [PubMed]
18. Berden HJ, Willems FF, Hendrick JM, Pijls NH, Knape JT. How frequently should basic cardiopulmonary resuscitation training be repeated to maintain adequate skills? BMJ. 1993;306(6892):1576–7. [PMC free article] [PubMed]
19. Woollard M, Whitfield R, Newcombe RG, Colquhoun M, Vetter N, Chamberlain D. Optimal refresher training intervals for AED and CPR skills: a randomised controlled trial. Resuscitation. 2006;71(2):237–47. [PubMed]
20. Weaver FJ, Ramirez AG, Dorfman SB, Raizner AE. Trainees’ retention of cardiopulmonary resuscitation. How quickly they forget. JAMA. 1979;241(9):901–3. [PubMed]