Many countries with organized breast screening programs have developed audit and feedback systems based on their national data to help radiologists assess and improve their skills; the US, however, does not offer a similar program to its radiologists. Audit feedback can improve medical practice (14) but is currently not routinely used for mammography in the US beyond the minimal MQSA requirement. Based on the results of focus groups with radiologists in three states (WA, NH, VT) (17), we developed and piloted a web-based outcome audit and feedback system for radiologists participating in three BCSC registries.
Radiologists who participate in the BCSC have long benefited from receiving paper outcome audits from their local registries, albeit without national comparisons. Benchmarks let radiologists compare their performance both with that of others and with accepted practice guidelines. Many of the early performance benchmarks were developed from evaluations of outcomes in small groups of breast imaging specialists or from the opinions of experienced radiologists.(12)
The BCSC published benchmarks for both screening and diagnostic mammography, based on the performance of community radiologists, in 2005 and 2006.(1) These benchmarks are updated annually on the BCSC website (http://breastscreening.cancer.gov/). Subtle differences in the way data are calculated (adjusted or unadjusted) and variations in the definitions used to determine a positive or negative exam (including counting BI-RADS® assessment category 3 as negative regardless of the management recommendation) make it complicated for radiologists to compare their results exactly with the published benchmarks. An advantage of our audit website is that the same definitions were used for the radiologist’s individual performance measures and for the regional and national benchmarks.
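To illustrate why these definitional differences matter, the sketch below (in Python, with invented exam counts) computes a recall rate under two conventions for BI-RADS assessment category 3. The category-to-positive mapping and all counts shown are assumptions for illustration only, not the BCSC's or any benchmark's exact algorithm.

```python
def recall_rate(counts, category3_positive):
    """Fraction of exams given a positive assessment.

    counts: dict mapping BI-RADS assessment category -> number of exams.
    category3_positive: whether category 3 is counted as positive.
    Mapping assumption: categories 0, 4, 5 are positive; 1, 2 negative;
    category 3 depends on the audit convention chosen.
    """
    positive = {0, 4, 5} | ({3} if category3_positive else set())
    total = sum(counts.values())
    return sum(n for cat, n in counts.items() if cat in positive) / total

# Invented counts for 1,000 hypothetical screening exams
counts = {0: 80, 1: 500, 2: 370, 3: 30, 4: 15, 5: 5}

rate_3_negative = recall_rate(counts, category3_positive=False)  # -> 0.10
rate_3_positive = recall_rate(counts, category3_positive=True)   # -> 0.13
```

With the same underlying data, the recall rate shifts from 10% to 13% depending solely on how category 3 is classified, which is why a benchmark comparison is only meaningful when both sides use identical definitions.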
Although radiologists in our previous focus groups wanted the flexibility of seeing the data in different visual formats, the outcomes table was the format used most often (17). Perhaps busy radiologists did not want to take the time to look at the data in more than one format; because the outcomes table provides all of the measures on one page, it was likely the most convenient to review.
The four comments on the survey were informative. Comment 1, “Breaking it down per year ….” and Comment 2, “Explaining more about Confidence Intervals (CI) and percentages…. Telling us the range we should strive for would be helpful. What are the national averages for all the percentages and numbers?” told us that these radiologists were not aware of the existing functions on the website and that we need clearer descriptions of what is available and more detailed definitions. We were surprised by the next comment, “How do we get specific patient names to review false negatives?” because we thought that the local registries that provide paper audits also provided lists of patients with false-negative exams. The BCSC does not have access to patient names and cannot provide this information, yet it is vitally important for radiologists to learn from the review of false-negative cases.(19) An important function of radiology information systems would be to produce these types of lists for radiologist review.
Only a small proportion of invited radiologists used the website, and only 37% of those who used it completed the survey, so our results may not be generalizable to all U.S. radiologists who read mammography. The 22 radiologists who did not complete our survey may not have liked using the website. Currently the website is available only to radiologists who participate in the BCSC, not to all radiologists in the US. We do not know whether other breast imaging facilities are able to export TP, FP, TN, and FN data separated into screening and diagnostic mammography from their computer systems. With the advent of the American College of Radiology’s National Mammography Database, more radiologists will be able to export these data elements in the future. The BCSC matches mammograms with pathology and cancer registries to identify TP and FN exams, and so can calculate performance measures such as sensitivity and specificity. Most breast imaging practices, even those participating in the National Mammography Database, are not able to completely capture cancers matched to mammograms. Although we provided most of the information mentioned in the survey comments, the radiologists did not know it was available and did not access it. Cancer registry data become available to the BCSC only two to three years after the cancer diagnosis date, so all measures requiring cancer status will always lag several years behind current mammography assessments. In addition, technology in breast imaging is changing rapidly: digital mammography and computer-aided detection are disseminating throughout the US, and this will influence the outcome audit results.(20)
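The audit measures named above are simple ratios over the matched TP, FP, TN, and FN counts. As a minimal sketch, with invented counts for a hypothetical practice (the function name and counts are illustrative, not the BCSC's implementation):

```python
def audit_measures(tp, fp, tn, fn):
    """Compute standard audit measures from TP/FP/TN/FN exam counts."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),               # detected cancers / all cancers
        "specificity": tn / (tn + fp),               # correct negatives / all non-cancers
        "ppv": tp / (tp + fp),                       # cancers / positive exams
        "cancer_detection_rate": 1000 * tp / total,  # cancers per 1,000 exams
    }

# Invented counts for 10,000 hypothetical screening exams
m = audit_measures(tp=40, fp=960, tn=8990, fn=10)
# sensitivity 0.80, specificity ~0.90, PPV 0.04, detection rate 4.0 per 1,000
```

The key constraint the text describes is visible in the signature: sensitivity and the cancer detection rate require the FN and TP counts, which only become known after linkage to pathology and cancer registry data.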
Most radiologists who accessed the website did so in the month or two following the invitation letter. Because we plan to update the data annually, it may not be necessary to visit the website more often than once a year. However, to enhance the use of the audit feedback, one would need a reminder system prompting radiologists to check the website at least annually, or incentives such as CME credit or documentation of regular participation toward attaining and maintaining status as an ACR “Center of Excellence.” Alternatively, legislators could consider making review of complete audit reports mandatory when MQSA is reissued.
The website cost about $55,000 to develop. This covered the cost of a programmer, graphics and copy editors, a data manager, and two investigators. We estimate that annually updating the data and maintaining the website will incur a modest additional cost.
Alongside these limitations are considerable strengths. The radiologists who used the website found it useful in guiding changes to their interpretive goals. Many radiologists are not familiar with published interpretive goals (Jackson, under review) and do not always know accurately how their outcome statistics compare with those of their peers (Cook, under review). This website continually provides accurate individual radiologist data with comparisons to national and regional data calculated in exactly the same way. Radiologists appeared to use different visual formats and to review different outcome measures; this can only occur with an interactive website. The American College of Radiology’s National Mammography Database has recently started to provide some audit feedback to its participating facilities using BCSC data as benchmark comparisons, but it is currently not interactive (personal communication, Mythreyi Chatfield, 7/13/11).
We are expanding the website to also provide information at the facility level which should be ready for BCSC facilities in the summer of 2012, and we plan to make the website public in early 2013, so that all radiologists and facilities will be able to get audit reports, with benchmark comparisons, after entering their own information. The shell of our website is available to be used by other countries or screening programs.
An interactive website to provide customized mammography audit feedback reports to radiologists has the potential to be a powerful tool in improving interpretive performance. The conceptual framework of customized audit feedback reports can also be generalized to other imaging tests.