One might speculate that radiologists who enjoy mammography may exhibit better performance than radiologists who do not.
One hundred thirty-one radiologists at three Breast Cancer Surveillance Consortium (BCSC) registries completed a survey about their characteristics, clinical practices, and attitudes related to screening mammography. Survey results were linked with BCSC performance data for 662,084 screening and 33,977 diagnostic mammograms. Using logistic regression, we modeled the odds of an abnormal interpretation, cancer detection, sensitivity, and specificity among radiologists who reported they enjoy interpreting screening mammograms compared with those who do not.
Overall, 44.3% of radiologists reported not enjoying interpreting screening mammograms. Radiologists who reported enjoying interpreting screening mammograms were more likely to be women, spend at least 20% of their time in breast imaging, have a primary academic affiliation, read more than 2,000 mammograms per year, and be salaried. Enjoyment was not associated with screening mammography performance. Among diagnostic mammograms, there was a significant increase in sensitivity among radiologists who reported enjoyment (85.2%) compared with those who did not (78.2%). In models adjusting for radiologist characteristics, similar trends were found; however, no statistically significant associations remained.
Almost one half of radiologists actively interpreting mammograms do not enjoy that part of their job. Once we adjusted for radiologist and patient characteristics, we found that reported enjoyment was not related to performance in our study, although suggestive trends were noted.
The declining numbers of radiologists willing to provide mammography services [1–3] and the increasing evidence suggesting wide variability in their performance [4, 5] raised the question of whether mammography performance is associated with radiologists' enjoyment of interpreting screening mammograms. Given the negative incentives for joining the breast imaging field, one might speculate that radiologists who enjoy mammography are more likely to remain in the field and may exhibit better performance than radiologists who do not. However, we know of no studies that have tested this idea.
Radiologists in general are satisfied with their professional choice of specialty, with 93% reporting that they enjoy their career. In the single published study on career satisfaction related to mammography, Lewis et al. reported that breast imaging specialists enjoy their subspecialty as much as general radiologists or other radiology subspecialists do. However, the satisfaction survey of breast imaging specialists in the Lewis et al. study included no data about attitudes specific to mammography or about community-based general radiologists in the United States who do not specialize in breast imaging. The finding by Lewis et al. is counterintuitive if one considers that breast imaging is associated with several disincentives, including low reimbursement, high relative risk of malpractice lawsuits, and a perception that the field is not commensurate with other advanced technology-driven subspecialties.
Although it is helpful to understand the level of satisfaction of breast imaging specialists, these radiologists do not interpret the majority of the 35 million mammograms obtained in the United States each year. Approximately 62% of all radiologists interpret mammograms as part of their workload, whereas only 10.5% are considered breast imaging specialists. In fact, breast imaging specialists interpret only 30% of mammograms in the United States.
As part of a study of radiologist characteristics associated with interpretive accuracy, we had a unique opportunity to examine radiologists' attitudes toward mammographic interpretation and the relationship between radiologists' enjoyment of their work and their interpretive accuracy. We obtained performance data from community radiology practice sites in three geographic regions of the United States and linked these performance data to survey data obtained from individual radiologists. This linkage enabled us to study the relationships between radiologists' self-reported level of enjoyment of interpreting screening mammograms and their actual interpretive performances. Our study aims were to identify characteristics of the radiologists who do and of those who do not enjoy interpreting screening mammograms and to examine any potential associations among radiologist enjoyment and objective measures of interpretive performance of both screening and diagnostic mammography in clinical practice.
Our community-based multicenter observational study involved collaboration among general radiologists and breast imaging specialists in diverse regions of the United States who participate in the Breast Cancer Surveillance Consortium (BCSC). Our survey methods have been previously described and include information obtained from radiologists through a mailed survey. Of the 181 radiologists invited to participate in the study, 139 (76.8%) responded, but only 131 (72.4%) answered the question about enjoying interpreting mammograms. The survey data were linked with screening and diagnostic mammography interpretations collected by the BCSC from 1998 to 2005 for these 131 radiologists. These mammography data were linked with cancers found in state or Surveillance, Epidemiology, and End Results (SEER) cancer registries and pathology databases.
All data were analyzed at a central statistical coordinating center. Each registry and the statistical coordinating center have received institutional review board (IRB) approval for either active or passive consenting processes or a waiver of consent to enroll participants, link data, and perform analytic studies. All procedures are HIPAA-compliant, and all registries and the statistical coordinating center have received a federal certificate of confidentiality that protects the identities of research subjects. The IRBs of the University of Washington, Group Health, Dartmouth College, Northwestern University, and the Colorado Mammography Advocacy Program all approved study protocols. Data security protections to safeguard these research data are described elsewhere.
Briefly, all radiologists interpreting mammograms at facilities that participate in one of three BCSC mammography registries in Colorado, New Hampshire, and the Puget Sound region of Washington were mailed a survey in February 2002. The survey included questions on demographic characteristics (e.g., age, sex), clinical practice (e.g., experience with and attitudes toward mammography, practice characteristics, reimbursement), and perceptions of and experience with medical malpractice lawsuits.
The survey also included the following statement regarding professional satisfaction: “I enjoy interpreting screening mammograms.” Participants rated their level of agreement on a 5-point Likert scale, from 1 (strongly disagree) to 5 (strongly agree). Because of small sample sizes in the extreme categories, responses were collapsed into two categories: strongly agree and agree; and strongly disagree, disagree, and neutral.
The survey included a 10-item instrument assessing reactions to uncertainty in clinical decision making that has been validated among several specialties, including radiology [12, 13]. These items characterize physicians' reactions to the uncertainty associated with clinical care in three areas: first, reluctance to disclose mistakes to other physicians; second, stress from uncertainty in clinical decision making; and third, concerns about adverse patient outcomes. Each item was scored on a 6-point Likert scale, from 1 (strongly disagree) to 6 (strongly agree). The items were slightly modified from the literature to increase their relevance to the practice of mammography interpretation; however, their psychometric properties remained unchanged.
We studied the interpretive performance of radiologists on all screening and diagnostic mammograms obtained between 1998 and 2005, subject to the following exclusions.
Screening mammograms included 754,207 examinations characterized by the standard BCSC definition of routine screening indicated by the radiologist: patient 18 years old or older; bilateral screening views captured; no history of breast cancer based on self-report, pathology databases, and cancer registry linkage; no imaging in the previous 9 months from database, self-report, radiologist report, or comparison film date; no breast implants; and overall BI-RADS assessment code not 6. We excluded mammograms missing assessment (n = 3,334 mammograms, 0.4%) or key patient covariate data including patient age, time since last mammogram, and breast density (n = 45,032 mammograms, 6.0%). In addition, we excluded 43,757 mammograms from facilities that did not report breast density, for a total sample size of 662,084 screening mammograms. Breast density influences performance and therefore is an important covariate .
Diagnostic mammograms included 59,001 examinations with an indication for evaluation of a breast problem (e.g., clinical signs or symptoms). We did not include diagnostic mammograms obtained to further work up a screening mammogram, short-interval follow-up examinations, or mammograms from facilities that do not report a measure of breast density (n = 7,931 mammograms, 13.4%). We excluded mammograms missing outcome (n = 705 mammograms, 1.2%) or key patient covariate data including patient age, time since last mammogram, breast density, and self-reported symptoms (n = 16,388 mammograms, 27.8%), for a total of 33,977 diagnostic mammograms in the analyses. More diagnostic mammograms than screening mammograms were missing covariate data because more were missing breast density information and because we also excluded mammograms with missing symptom information. Diagnostic mammograms are more frequently missing breast density measures because radiologists routinely report density at the time of the screening mammogram.
The result of screening mammograms was characterized on the basis of the initial BI-RADS assessment according to the standard BCSC definition: A positive initial result from a screening mammogram includes a BI-RADS assessment of category 0, 4, or 5 or category 3 with a recommendation for immediate imaging or surgical evaluation. A negative initial result includes a BI-RADS assessment of category 1 or 2 or category 3 with no recommendation for immediate imaging or surgical evaluation. If additional imaging was performed on the same day as the screening mammogram, the initial screening result was coded as BI-RADS category 0 and considered to be positive.
A positive final assessment for diagnostic mammograms was defined as BI-RADS category 4 or 5, or either category 3 or 0 with a recommendation for biopsy, fine-needle aspiration, or surgical consultation after imaging workup. A negative final assessment for diagnostic mammograms was defined as BI-RADS category 1, 2, or 3, or category 0 without a recommendation for biopsy, fine-needle aspiration, or surgical consultation.
We considered a woman to have breast cancer if invasive carcinoma or ductal carcinoma in situ of the breast was found in either the pathology or cancer registry data within 365 days of mammography. For screening mammograms, we censored the follow-up period for determining breast cancer at the next screening mammogram if it occurred before 365 days. The abnormal interpretation rate was calculated as the number of positive examinations divided by the total number of mammograms. The cancer detection rate was calculated as the number of positive examinations among women with breast cancer divided by the total number of examinations. Sensitivity was defined as the percentage of positive examinations among women diagnosed with breast cancer. Specificity was defined as the percentage of negative examinations among women without breast cancer.
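For illustration only (not part of the original analysis), these four definitions can be sketched as a short computation over per-examination records; the record fields are hypothetical, and a real BCSC analysis would also apply the censoring rule described above:

```python
def performance_metrics(exams):
    """Compute the four performance measures from a list of exam records.

    Each record is a dict with two boolean fields (hypothetical names):
      'positive' - abnormal interpretation
      'cancer'   - breast cancer found within the follow-up window
    """
    n = len(exams)
    positives = sum(e["positive"] for e in exams)
    cancers = [e for e in exams if e["cancer"]]
    no_cancers = [e for e in exams if not e["cancer"]]
    true_pos = sum(e["positive"] for e in cancers)        # positive exams among cancers
    true_neg = sum(not e["positive"] for e in no_cancers)  # negative exams among non-cancers
    return {
        # abnormal interpretation rate: positives per 100 examinations
        "abnormal_rate_per_100": 100.0 * positives / n,
        # cancer detection rate: detected cancers per 1,000 examinations
        "detection_rate_per_1000": 1000.0 * true_pos / n,
        # sensitivity: percentage of positive exams among women with cancer
        "sensitivity_pct": 100.0 * true_pos / len(cancers) if cancers else float("nan"),
        # specificity: percentage of negative exams among women without cancer
        "specificity_pct": 100.0 * true_neg / len(no_cancers) if no_cancers else float("nan"),
    }
```

Note that the cancer detection rate uses all examinations in the denominator, whereas sensitivity restricts the denominator to women diagnosed with cancer; this is why the two measures can move independently.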
We calculated abnormal interpretation rates (per 100 examinations), cancer detection rates (per 1,000 examinations), sensitivity (as a percentage), and specificity (as a percentage) separately for screening and diagnostic mammograms. We also calculated these rates by patient characteristics including age (10-year age groups), time since last mammogram (< 1 year, 1–2 years, ≥ 3 years, or no prior mammogram), breast density (BI-RADS categories almost entirely fat, scattered fibroglandular densities, heterogeneously dense, or extremely dense), and self-reported breast symptoms at the time of the mammogram (lump only, other symptoms, or none). Although symptomatic examinations were included in the definition of diagnostic examinations, some women self-report experiencing symptoms at screening examinations; therefore, we have shown these data for both types of examinations.
For each radiologist characteristic, we calculated the proportion of radiologists who reported that they enjoy interpreting screening mammograms (strongly agreed or agreed) and the proportion who reported they do not enjoy interpreting screening mammograms (strongly disagreed, disagreed, or neutral). We compared the proportions who enjoy and do not enjoy interpreting screening mammograms across each radiologist characteristic using a chi-square test or Fisher's exact test when expected cell sizes were < 5. All p values < 0.05 were considered statistically significant. We included all radiologist characteristics that were statistically significantly associated with enjoyment from the chi-square or Fisher's exact test in a multivariable model to determine which characteristics were still significantly associated with enjoyment after adjusting for all others.
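As an illustrative sketch of the 2 × 2 comparison described above (using the shortcut form of the Pearson statistic, without continuity correction; the actual analyses would use a statistics package and Fisher's exact test when expected cell sizes were small):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2 x 2 table
        [[a, b],
         [c, d]]
    e.g., rows = enjoy / do not enjoy, columns = has / lacks a characteristic.
    Compare the result with 3.841, the critical value for p < 0.05 at df = 1.
    """
    n = a + b + c + d
    # shortcut form of the Pearson statistic for a 2 x 2 table
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```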
We calculated overall abnormal interpretation rates, cancer detection rates, sensitivity, and specificity with 95% CIs by whether radiologists enjoy interpreting screening mammograms. CIs accounted for clustering among radiologists using the robust sandwich variance estimated from generalized estimating equations (GEEs) assuming an independence working correlation matrix. Using logistic regression, we examined whether these performance characteristics depended on whether radiologists felt they enjoy interpreting mammograms, adjusting for possible confounding variables. Models were fit using a three-step GEE approach to adjust for correlation among mammograms obtained at the same facility and among mammograms interpreted by the same radiologist [16, 17].
We calculated adjusted odds ratios with 95% CIs for abnormal interpretation, cancer detection, a true-positive examination given cancer (sensitivity), and a true-negative examination given no cancer (specificity). We adjusted the analyses of screening examinations for registry site and patient characteristics identified a priori based on the literature (age, density, and time since last mammogram). The diagnostic mammogram analyses were adjusted for the same covariates as well as the type of patient symptoms at the time of the mammogram. We fit additional models for screening and diagnostic mammograms that also adjusted for radiologist characteristics (sex, percentage of time spent in breast imaging, reimbursement, concerns about withdrawing from mammography due to malpractice, and attitudes about tediousness of mammography).
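As a simplified, unadjusted analogue of these models (hypothetical counts; the actual study used GEE-fitted logistic regression to adjust for covariates and for clustering by facility and radiologist), an odds ratio with a Woolf-method 95% CI for a 2 × 2 table can be computed as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Woolf 95% CI for a 2 x 2 table:
      a = enjoy, outcome present;     b = enjoy, outcome absent
      c = not enjoy, outcome present; d = not enjoy, outcome absent
    A CI that includes 1.0 indicates no statistically significant association.
    """
    odds_ratio = (a * d) / (b * c)
    # standard error of log(OR) by Woolf's method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se)
    upper = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lower, upper
```

In the adjusted analyses, the same odds-ratio interpretation applies, but the estimate comes from a logistic regression that holds the listed patient and radiologist covariates fixed.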
Characteristics of the women with 662,084 screening mammograms and 33,977 diagnostic mammograms for the evaluation of a breast problem are shown in Table 1. The average abnormal interpretation rate was 11.2 per 100 screening mammograms and decreased with patient age, time since last mammography examination (except those who came back after ≥ 3 years), and lower breast density. For every 1,000 screening examinations, four women were diagnosed with breast cancer after an abnormal mammogram; a cancer diagnosis was more common among older women and women with denser breasts.
The abnormal interpretation rate among all diagnostic mammograms was 8.6 per 100 examinations and increased with age and time since last mammography examination and was higher among women with denser breasts. The cancer detection rate of diagnostic mammograms was higher than that of screening examinations—30.9 per 1,000 examinations—and also increased with age, time since last mammography examination, breast density (except extremely dense breasts), and the presence of a lump at the time of mammography.
Table 2 shows the characteristics and survey results of the 131 radiologists by whether they enjoy interpreting screening mammograms. Overall, just over half the radiologists (55.7%) reported they enjoy interpreting screening mammograms. Radiologists who were older, were women, or worked part-time reported enjoying interpreting mammograms more often than radiologists who were younger, were men, or worked full-time. All eight radiologists whose primary affiliation was with an academic medical center reported enjoying interpreting mammograms, compared with only half of those not affiliated or those who had an adjunct affiliation. Those who spent 20% or more of their time working in breast imaging or who interpreted more than 2,000 mammograms per year were significantly more likely to enjoy interpreting mammograms. Radiologists paid an annual salary reported higher rates of enjoyment than those paid by other means (e.g., per mammogram interpreted, shared partnership profits). Radiologists who were less likely to enjoy interpreting mammograms considered mammography tedious or experienced high uncertainty when interpreting mammograms. They also felt they were not as good at interpreting mammograms as the radiologists who enjoy interpreting screening mammograms.
We included all statistically significant characteristics in Table 2 in a single multivariable model to determine which radiologist characteristics were the most important in predicting whether radiologists enjoy interpreting screening mammograms. Academic affiliation was not included in the model because there were no academic radiologists in the neutral or disagree category. These results are shown in Table 3. Radiologists who are women, who spend more time working in breast imaging, who are on annual salary, who never feel like withdrawing from interpreting mammograms because of malpractice concerns, and who do not find mammography tedious were significantly more likely to enjoy mammography after adjusting for the other radiologist characteristics in the model.
Table 4 shows the unadjusted abnormal interpretation rates, cancer detection rates, sensitivity, and specificity for screening and diagnostic mammograms by whether radiologists enjoy or do not enjoy interpreting screening mammograms. There were no statistically significant differences in abnormal interpretation rates, cancer detection rates, sensitivity, or specificity among radiologists who enjoy interpreting screening mammograms compared with those radiologists who do not enjoy mammography. There was a significant association between enjoyment and increased sensitivity among diagnostic mammograms when adjusting for patient characteristics (Table 5). After adjusting for radiologist characteristics, similar trends were found; however, no statistically significant associations remained.
Our study, based on 131 primarily community-based radiologists from three geographic regions in the United States, found that slightly more than half of radiologists who interpret screening mammograms enjoy this work. In particular, women radiologists, those who spend at least 20% of their time in breast imaging, those who have a primary academic affiliation, those who read more than 2,000 mammograms per year, and those who receive a salary were more likely to report enjoying interpreting screening mammography. Not surprisingly, radiologists who feel like withdrawing from interpreting mammograms at least monthly because of malpractice concerns, those who found mammography tedious, and those who have an overall high level of uncertainty in medical decision making when interpreting mammography examinations were significantly less likely to report enjoying interpreting screening mammography. Despite these differences, enjoyment was not significantly associated with interpretive performance for either screening or diagnostic mammography after adjusting for patient and radiologist characteristics.
In contrast to our current study, which found that 56% of radiologists in general community practice enjoy interpreting screening mammograms, Lewis et al. found a 93% professional satisfaction rate among breast imaging specialists, comparable to a separate report of job satisfaction among radiologists in general. We suspect that the lower rate of satisfaction noted in our study is related to differences in the study populations; primarily, most of the radiologists in our study were generalists who included mammographic interpretation in their clinical practice and were specifically queried about this part of their practice. Our study did show a 100% rate of satisfaction among academic radiologists. The study by Lewis et al. included only radiologists who specialize in breast imaging. The difference may also be due to slightly different survey questions: our study asked radiologists to rate agreement with the statement “I enjoy interpreting screening mammograms,” whereas the study by Lewis et al. asked about satisfaction with breast imaging overall.
According to the 2006 American College of Radiology survey, 10% of radiologists consider themselves specialists in breast imaging, although only 21% of the 10% are fellowship-trained. These breast imaging specialists interpret about one third of all mammograms each year, consistent with the Institute of Medicine report, “Improving Breast Imaging Quality Standards,” which thoroughly reviews workforce issues about providing high-quality breast imaging services. There is no standard definition of a breast imaging specialist; therefore, Lewis et al. reported results by several different definitions, including percent effort in breast imaging. Radiologists who spend 30% or greater effort in breast imaging reported a high level of professional satisfaction.
Although percent effort categories in our study were different from those in the Lewis et al. study, we noted similar findings. The radiologists in our study who reported working in breast imaging less than 20% of their time were significantly less likely to enjoy interpreting screening mammograms than radiologists who spend > 40% of their time in breast imaging. Lewis et al. found that 82% of breast imaging specialists surveyed read > 2,000 mammograms per year, compared with only 34% of radiologists in our study who read more than 2,000 mammograms per year. Our study's various measures that could define a breast imaging specialist—that is, percentage of time spent in breast imaging and high volume of mammograms—were both significantly associated with enjoyment of interpreting screening mammograms.
Overall, radiologists who enjoy interpreting screening mammograms do not perform better than those who do not enjoy interpreting mammograms, although suggestive trends were noted. Radiologists who enjoy interpreting screening mammograms have slightly lower abnormal interpretation rates and higher sensitivity of screening mammography without a reduction in cancer detection rates when adjusting for patient characteristics.
For diagnostic mammography, radiologists who enjoy mammography had statistically higher sensitivity while maintaining specificity equal to that of radiologists who do not enjoy mammography. Thus, they missed fewer cancers without biopsying more women without cancer. These results were no longer significant after adjusting for other radiologist characteristics. Because enjoyment and the other radiologist characteristics are associated with each other, we cannot tell which is the better indicator of performance, nor can we determine the direction of any causal relationship: radiologists may perform better because they enjoy interpreting screening mammography, or they may enjoy it because they are good at their job.
Even though enjoyment was not significantly related to performance after adjusting for radiologists' characteristics, we can speculate how enjoyment may influence whether radiologists start and continue to interpret mammograms. Given the concern that there may not be an adequate workforce in the future to meet the increasing demands for mammography [1, 6, 19], it is important to understand why new residents are not choosing breast imaging as a specialty. In our study, younger radiologists reported less enjoyment in interpreting mammograms compared with older radiologists. The difference was not statistically significant but may help explain why new residents are not joining the mammography field. Alternatively, the satisfaction seen among older radiologists may reflect that they are the ones who remain after those who do not enjoy mammography have left the field. Therefore, this question may be a predictor of retention.
In the recent past, low reimbursement and increased malpractice litigation were two of the main reasons cited by radiology residents for not pursuing a career in breast imaging, and these were also related to enjoyment in our study. The primary reason given by radiology residents for not going into the field of breast imaging is that breast imaging was “not an interesting field” and limited in its application of advanced technology compared with other imaging subspecialties. However, circumstances are now changing with the rapid incorporation of digital mammography, including advanced platforms such as tomosynthesis, and breast MRI into routine clinical practice and the increasing volume of imaging-guided breast biopsy with subsequent patient interaction. These circumstances may change radiologists' attitudes toward mammography over time and warrant further study.
There are several strengths and limitations to our study. Although we surveyed radiologists from only three geographic locations, they represent very different regions of the United States (northeast, northwest, and central) and are primarily community-based, with only 6% of respondents working in academic centers. A previously published study using the same radiologists' survey data reported that the demographics of our radiologists were similar to those of a randomly selected sample of radiologists. Therefore, we believe that these findings are generalizable to mammographers in other parts of the country, most of whom are generalists and account for nearly two thirds of all mammographic interpretations in the United States. We asked detailed questions and were able to link the survey responses to the radiologists' actual mammography data to measure interpretive accuracy of both screening and diagnostic examinations.
Because of the nature of our research question, we are unable to discern from our data the specific cause and effect of the associations reported. It may be that if one is proficient in the interpretation of screening mammography, one is more likely to enjoy the benefits of successful outcomes. Alternatively, the simple lack of aversion experienced by some radiologists to interpreting screening examinations may be more conducive to higher reading volumes and subsequent improved skills. Furthermore, those who may have performed poorly or were exceedingly unhappy either were no longer reading mammograms or might have refused to respond to the survey.
As the population of women over 40 increases over the next 10 years, the need for additional mammography will also rise. At the same time, the supply of radiologists willing to interpret mammograms is declining, suggesting a precarious situation [1, 18]. On average, 44% of mammography facilities reported staffing shortages in 2005, a problem that becomes even more severe in not-for-profit facilities. These findings emphasize the alarming discrepancy between the increasing demand for mammographic services and the declining number of radiologists willing to provide these services.
Our study shows that almost half of radiologists who interpret mammograms do not enjoy interpreting screening mammography, although enjoyment does not appear to affect performance. It is reassuring that radiologists who do not enjoy interpreting mammography perform as well as those who do, because opting out is often not possible: many general radiologists work in small or rural practices where mammography must be part of their workload.
We thank Bonnie Yankaskas from the University of North Carolina at Chapel Hill and Gary Cutter from the University of Alabama for their contributions to this study.
This study was supported by the Agency for Healthcare Research and Quality (grant HS10519) and the National Cancer Institute (grants U01 CA63731, U01 CA86082, U01 CA63726, U01 CA70013, and U01 CA86076).
The opinions and assertions contained herein are those of the authors and should not be construed as official or as representing the opinions of the federal government, the Agency for Healthcare Research and Quality, or the National Cancer Institute.