Among eligible radiologists who agreed to participate in this study, about two thirds started the web-based educational intervention, and 89% (41/46) of those who started completed it. This suggests that once engaged, participants found the intervention meaningful. When we compared the characteristics of radiologists who consented to the program with those who did not, the only difference was gender; female radiologists were more likely than males to complete the program after consenting. Although we showed that it is feasible to develop and implement an educational intervention that uses a large data repository to individually tailor content to practicing radiologists in the BCSC, our ability to recruit and actively engage radiologists in this activity was disappointing. Radiologists are busy clinicians, and it may be too difficult for them to identify even small sessions of protected time to undertake such an intervention. Nevertheless, we believe that an interactive intervention is likely to be more effective than paper audit reports.
At the BCSC, and likely elsewhere, these audit reports are provided either to individual radiologists or to the lead radiologist in charge of MQSA audits. Although the lead radiologist is responsible for sharing individual data with radiologists interpreting mammography, it is unclear how this process occurs in actual practice. For example, does the lead radiologist facilitate any discussion about how radiologists might bring their performance data closer to the desirable goals published by the Agency for Health Care Policy and Research and the benchmarks reported by the BCSC (29)?
We learned that the majority of participants logged on three to four times to finish the program, so they did not finish in a single session, even though the program took only about one hour to complete. It may be that radiologists completed the program in between interpreting images during their clinical day, resulting in interruptions to the intervention. We learned that the vast majority of radiologists found the audit module activities to be moderately to very helpful, especially when comparing their own data with those of other radiologists in the study. Participants also found it helpful to see that the risk of cancer in their patient panel is very low. This is likely the first time radiologists saw the level of breast cancer risk in their screening population, which was calculated using a risk model developed by BCSC investigators (31) for risk of breast cancer in the next year rather than five years. This risk information is highly relevant to radiologists' assessment of their own annual or biennial screening mammography data.
We were also interested in participants' responses to the malpractice module and their perceptions of the influence of malpractice on their recall and biopsy recommendation rates. Importantly, radiologists' perceived influence of medical malpractice risk on their recall rates and breast biopsy recommendations was reduced by half, and by the end of the intervention all participants agreed that a recall rate above 10% would not reduce their risk of malpractice. Thus, their perceptions about malpractice changed such that malpractice concerns are less likely to affect recall. Also, responses to our knowledge questions clearly indicated that radiologists recognized that their self-reported recall rates (13.0%–14.9%) were substantially higher than the mean U.S. recall rate of 9.8%, and that the majority could cite the 5–7% plateau above which additional recall does not result in increased cancer detection. It will be interesting to see whether their recall rates change in clinical practice as a result of this intervention. We are now collecting follow-up data to compare this measure between the early and late (control) intervention groups.
The intervention we studied illustrated, for each individual radiologist, his/her performance gap (Module 1), identified modifiable factors that might affect performance, and then asked radiologists to develop individualized goals for improvement (Modules 1–3). Our study was innovative in that it was delivered over the Internet: when each radiologist logged onto the system, the intervention automatically populated all data fields with his/her clinical performance data and provided comparisons against the aggregate data of all radiologists. This could not be accomplished without a data collection and data cleaning system that could ascertain and prepare data for interpretation and facilitate each radiologist's use of the data for quality improvement purposes. Though many mammography management software systems can calculate audit measures, their data capture is often not complete enough to calculate performance indices with the precision we achieve using BCSC data, and they cannot provide the performance comparisons the BCSC provides.
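The audit measures referenced here follow standard screening-mammography audit arithmetic. As a minimal illustrative sketch only (not the BCSC or any vendor system; the field names and sample panel are hypothetical), the core measures can be computed as:

```python
# Illustrative sketch of standard screening audit measures (recall rate,
# cancer detection rate, PPV1). This is NOT the BCSC system; data fields
# and the example panel below are hypothetical.

def audit_measures(exams):
    """exams: list of dicts with boolean keys 'recalled' and 'cancer'
    (cancer diagnosed within the audit follow-up window)."""
    n = len(exams)
    recalls = [e for e in exams if e["recalled"]]
    # True positives: recalled exams with a cancer diagnosis
    tp = sum(1 for e in recalls if e["cancer"])
    return {
        "recall_rate": len(recalls) / n,          # proportion of screens recalled
        "cdr_per_1000": 1000 * tp / n,            # cancers detected per 1000 screens
        "ppv1": tp / len(recalls) if recalls else 0.0,  # PPV of recall
    }

# Hypothetical panel: 1000 screens, 100 recalls, 5 screen-detected cancers
exams = ([{"recalled": True, "cancer": True}] * 5
         + [{"recalled": True, "cancer": False}] * 95
         + [{"recalled": False, "cancer": False}] * 900)
print(audit_measures(exams))  # recall_rate 0.10, cdr_per_1000 5.0, ppv1 0.05
```

The arithmetic is simple; as the text notes, the hard part in practice is complete, clean data capture, without which these denominators and the cancer-outcome linkage are unreliable.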
The strengths of our study include the participation of radiologists from four states in the U.S., our ability to provide actual clinical performance data in real time, and our linkage to standardized data collected over many years before the intervention to assess individual audit data. We could also help radiologists understand why their recall rates may be unduly high because of perceived medical malpractice risk or inflated perceptions of cancer risk in their patient populations. A weakness of the study is that our participants may not be a representative sample of radiologists across the U.S. The findings could also be affected by selection bias, though our assessment of the characteristics of those who did and did not consent was reassuring. Though the intervention was designed to address adult learning principles, some aspects of adult learning theory could not be accommodated in our intervention; such principles include coaching, other kinds of follow-up support, and small-group activities and evaluation that foster sharing and reflection (24), which were beyond the scope of this study. We also have yet to conduct an analysis of the impact of this intervention on clinical performance, though we plan to do so after the follow-up data are collected and analyzed. Lastly, we did not have a control group for some of the knowledge questions, which were administered at the post-test period only.
In conclusion, radiologists who begin an Internet-based tailored intervention designed to help reduce unnecessary recall in mammography will likely complete it, though only about half of those who consented to the study actually completed the intervention. More than 90% of participants found the intervention useful in helping them understand why their recall rates may be elevated. More research is needed to understand how best to engage radiologists in Internet-based educational programs.