Although bone mineral density (BMD) testing to screen for osteoporosis (BMD T score, −2.50 or lower) is recommended for women 65 years of age or older, there are few data to guide decisions about the interval between BMD tests.
We studied 4957 women, 67 years of age or older, with normal BMD (T score at the femoral neck and total hip, −1.00 or higher) or osteopenia (T score, −1.01 to −2.49) and with no history of hip or clinical vertebral fracture or of treatment for osteoporosis, who were followed prospectively for up to 15 years. The BMD testing interval was defined as the estimated time for 10% of women to make the transition to osteoporosis before having a hip or clinical vertebral fracture, with adjustment for estrogen use and clinical risk factors. Transitions from normal BMD and from three subgroups of osteopenia (mild, moderate, and advanced) were analyzed with the use of parametric cumulative incidence models. Incident hip and clinical vertebral fractures and initiation of treatment with bisphosphonates, calcitonin, or raloxifene were treated as competing risks.
The estimated BMD testing interval was 16.8 years (95% confidence interval [CI], 11.5 to 24.6) for women with normal BMD, 17.3 years (95% CI, 13.9 to 21.5) for women with mild osteopenia, 4.7 years (95% CI, 4.2 to 5.2) for women with moderate osteopenia, and 1.1 years (95% CI, 1.0 to 1.3) for women with advanced osteopenia.
Our data indicate that osteoporosis would develop in less than 10% of older, postmenopausal women during rescreening intervals of approximately 15 years for women with normal bone density or mild osteopenia, 5 years for women with moderate osteopenia, and 1 year for women with advanced osteopenia. (Funded by the National Institutes of Health.)
Current osteoporosis management guidelines1–7 recommend routine bone mineral density (BMD) screening with the use of dual-energy x-ray absorptiometry (DXA) scans for women 65 years of age or older, but no guidelines specify an osteoporosis screening interval that is based on data from longitudinal cohort studies. The U.S. Preventive Services Task Force stated in 2011, “Because of limitations in the precision of testing, a minimum of 2 years may be needed to reliably measure a change in BMD; however, longer intervals may be necessary to improve fracture risk prediction.”1 To our knowledge, no U.S. study has addressed this clinical uncertainty.
A previous prospective analysis8 of data from the Study of Osteoporotic Fractures (SOF) suggested that repeating a BMD measurement up to 8 years after the initial screening provided little additional value beyond the initial BMD screening results for predicting new fractures in elderly women. A 2009 longitudinal analysis9 involving 1008 women in Australia who were 60 years of age or older suggested that age and baseline T score were important factors to consider in determining a BMD testing interval, with the goal of detecting low BMD before the onset of a fragility fracture. However, neither of these studies estimated BMD testing intervals to identify osteoporosis (as defined by BMD criteria) before a major fracture occurred; instead, they used fracture or fracture combined with osteoporosis as the outcome.
To determine how the BMD testing interval relates to the timing of the transition from normal BMD or osteopenia to the development of osteoporosis before a hip or clinical vertebral fracture occurs, we conducted competing-risk analyses of data from 4957 women, 67 years of age or older, who did not have osteoporosis at baseline and who were followed longitudinally for up to 15 years in the SOF. The BMD testing interval was defined as the estimated time during which osteoporosis developed in 10% of women before they had a hip or clinical vertebral fracture and before they received treatment for osteoporosis. We expected women with osteopenia at baseline to have a more rapid transition to osteoporosis than women with normal T scores at baseline.
The SOF cohort included 9704 ambulatory women (more than 99% of whom were white; race was self-reported), 65 years of age or older, recruited between 1986 and 1988 from population-based listings at four U.S. sites: Baltimore, Minneapolis, the Monongahela Valley near Pittsburgh, and Portland, Oregon.10 Women with bilateral hip replacements were excluded from the study. All participants provided written informed consent. The follow-up period included study examinations at year 2 (1989–1990), year 6 (1992–1994), year 8 (1995–1996), year 10 (1997–1999), and year 16 (2002–2004). Details of the study examinations and the selection of the analytic cohort (4957 women) are described in the Supplementary Appendix (available with the full text of this article at NEJM.org) and in Figure 1. The analysis protocol was approved by the institutional review board of the University of North Carolina. The SOF study protocol was approved by the institutional review boards at all participating sites (the study protocol approved by the institutional review board of the University of California, San Francisco, is available at NEJM.org).
The primary outcomes were the estimated intervals for 10% of participants to make the transition from normal BMD or osteopenia at baseline to osteoporosis before a hip or clinical vertebral fracture occurred and before treatment for osteoporosis was initiated.
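Stated formally (our notation, not the article's), if F_op(t) denotes the cumulative incidence of osteoporosis at time t, with fracture and treatment initiation treated as competing risks, the testing interval is the 10% quantile of that curve:

```latex
\[
F_{\text{op}}(t) = \Pr\!\left(T \le t,\ \text{event} = \text{osteoporosis}\right),
\qquad
\tau_{0.10} = \inf\{\, t \ge 0 : F_{\text{op}}(t) \ge 0.10 \,\}.
\]
```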
Several clinical risk factors for fracture, including components of the FRAX fracture risk assessment tool,11 were included as covariates in the time-to-event analyses: age, body-mass index (BMI, the weight in kilograms divided by the square of the height in meters), estrogen use at baseline, any fracture after 50 years of age, current smoking, current or past use of oral glucocorticoids, and self-reported rheumatoid arthritis (with a missing value classified as absence of disease).
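As a concrete reading of the BMI definition in the parenthesis above (a minimal sketch; the function name and example values are ours, not from the study data):

```python
def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI: weight in kilograms divided by the square of the height in meters."""
    return weight_kg / height_m ** 2

print(round(body_mass_index(65.0, 1.60), 1))  # 25.4 for a 65 kg, 1.60 m woman
```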
Competing-risk analyses were conducted to estimate the cumulative incidence functions for the time to the development of osteoporosis before a hip or clinical vertebral fracture and before initiation of treatment for osteoporosis, along with the corresponding intervals for 10% of participants to make the transition from normal BMD or osteopenia to osteoporosis — that is, the cumulative incidence quantile as defined by Peng and Fine.12 The participants were stratified into four groups according to the T-score range (lowest T score at the femoral neck or total hip): normal BMD (T score, −1.00 or higher), mild osteopenia (T score, −1.01 to −1.49), moderate osteopenia (T score, −1.50 to −1.99), and advanced osteopenia (T score, −2.00 to −2.49). Parametric cumulative incidence curves for the time to osteoporosis were estimated from log–logistic-regression models for the cumulative incidence function based on interval-censored data.13–19 The baseline for each participant (her time origin in each primary analysis) was the first study examination at which a recorded BMD measurement showed normal BMD or osteopenia, with follow-up continuing until the study examination that preceded death or withdrawal from the study. Incident hip or clinical vertebral fractures and the first reported use, before the development of osteoporosis, of a Food and Drug Administration–approved agent for the treatment of osteoporosis (i.e., a bisphosphonate, calcitonin, or raloxifene) were treated as competing risks, whether or not a transition to osteoporosis subsequently occurred. The data were coded as in the study by Hudgens et al.20 for naive likelihood analyses of parametric cumulative incidence regression with interval censoring, in which each cumulative incidence function is analyzed separately.
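The sketch below illustrates the stratification and quantile logic just described, assuming a log-logistic cumulative incidence curve of the standard form; all function names and numeric parameters are hypothetical illustrations, not the fitted values from the SOF data.

```python
def t_score_group(t_score: float) -> str:
    """Stratify by the lowest T score at the femoral neck or total hip
    (scores assumed to be reported to two decimal places)."""
    if t_score >= -1.00:
        return "normal BMD"
    if t_score >= -1.49:
        return "mild osteopenia"
    if t_score >= -1.99:
        return "moderate osteopenia"
    if t_score >= -2.49:
        return "advanced osteopenia"
    return "osteoporosis"  # T score of -2.50 or lower

def cumulative_incidence(t: float, p_max: float, alpha: float, beta: float) -> float:
    """Log-logistic cumulative incidence curve (defined for t > 0), scaled so
    that it plateaus at p_max < 1: the competing risks keep it from reaching 1."""
    return p_max / (1.0 + (t / alpha) ** (-beta))

def testing_interval(p: float, p_max: float, alpha: float, beta: float) -> float:
    """Invert the curve: the time by which a fraction p of women has made the
    transition to osteoporosis (a cumulative incidence quantile)."""
    if p >= p_max:
        raise ValueError("quantile lies beyond the curve's asymptote")
    return alpha * (p / (p_max - p)) ** (1.0 / beta)

# Illustrative parameters loosely echoing the moderate-osteopenia group:
print(t_score_group(-1.72))                                     # moderate osteopenia
print(testing_interval(0.10, p_max=0.30, alpha=8.0, beta=1.5))  # ~5.0 years
print(cumulative_incidence(5.04, 0.30, 8.0, 1.5))               # ~0.10, consistency check
```

The closed-form inverse exists because the scaled log-logistic CDF is monotone in t; a fitted regression model would replace the fixed parameters with covariate-dependent ones.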
Two primary analyses were conducted: one for the transition from normal BMD to osteoporosis (1255 women) and the other for the transition from osteopenia to osteoporosis (4215 women); 513 women in whom the transition from normal BMD to osteopenia occurred before their last BMD test were included in both analyses. For statistically significant clinical risk factors in the models, we conducted stratified analyses of the estimated time for 10% of women to make the transition to osteoporosis.
Two sensitivity analyses were conducted. First, the testing interval was redefined as the estimated time for 20% of women to make the transition from osteopenia to osteoporosis, or for 1%, 2%, or 5% of women to make the transition from normal BMD to osteoporosis. (Sensitivity analyses could not be conducted for thresholds between 1% and 5% among women with advanced osteopenia or for a 20% threshold among women with normal BMD, because the resulting time extrapolations were respectively shorter than the minimum and longer than the maximum follow-up times.) Second, we repeated the primary analyses using the secondary definition of osteoporosis, based only on the BMD at the femoral neck.
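Reusing the hypothetical testing_interval function from the sketch above (same illustrative parameters, not fitted SOF estimates), redefining the threshold simply inverts the same curve at a different quantile; note that in this toy model the failure mode is the curve's plateau, whereas the article's limits arose from extrapolation beyond the observed follow-up:

```python
testing_interval(0.20, p_max=0.30, alpha=8.0, beta=1.5)  # ~12.7 years
testing_interval(0.05, p_max=0.30, alpha=8.0, beta=1.5)  # ~2.7 years
testing_interval(0.30, p_max=0.30, alpha=8.0, beta=1.5)  # raises ValueError
```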
To better study women who had a fracture without first making the transition to osteoporosis, as defined by diagnostic criteria from the World Health Organization (WHO), and without first receiving treatment for osteoporosis, we also calculated the time for 2% of women to have a hip or clinical vertebral fracture in competing risk analyses of data from the same study population stratified according to the four T-score ranges. In the fracture analyses, the first reported use of bisphosphonate, calcitonin, or raloxifene and the first documentation of osteoporosis before fracture were treated as competing risks,13 and data were censored for death or withdrawal from the study with the use of the approach of Hudgens et al.20
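Under the same assumptions, the fracture analyses invert a separate fracture cumulative incidence curve at a 2% threshold; the parameters below are illustrative only.

```python
# Hypothetical fracture curve: lower plateau and slower rise than the
# osteoporosis curves sketched above.
testing_interval(0.02, p_max=0.10, alpha=20.0, beta=1.2)  # ~6.3 years
```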
Baseline characteristics of the 4957 women who participated in the analysis are shown in Table 1. Within each T-score range, the numbers of women in whom osteoporosis developed during the follow-up period were as follows: normal BMD, 10 of 1255 women (0.8%); mild osteopenia, 64 of 1386 (4.6%); moderate osteopenia, 309 of 1478 (20.9%); and advanced osteopenia, 841 of 1351 (62.3%).
Unadjusted estimates (Fig. 2) and covariate-adjusted estimates of the cumulative incidence of osteoporosis as a function of testing interval were similar. The times for 10% of women without osteoporosis to make the transition to osteoporosis increased with higher baseline T scores at the hip. The adjusted estimates for women with normal BMD and for those with mild osteopenia at baseline were very similar (16.8 years [time conservatively estimated for the lowest BMD in the normal range] and 17.3 years, respectively) (Table 2). The adjusted estimates were 4.7 years for women with moderate osteopenia and 1.1 years for those with advanced osteopenia. For women with osteopenia at baseline, T-score group, age, BMI, current estrogen use, and the interaction of T-score group by BMI were significant predictors in the final model (P<0.02). The other covariates — any fracture after 50 years of age, current smoking, previous or current use of oral glucocorticoids, and self-reported rheumatoid arthritis — were not significant predictors (all P>0.20).
Within a given T-score range, the estimated time for the transition from osteopenia to osteoporosis was longer with younger age (Table 3). For example, among women with moderate osteopenia, the estimated BMD testing interval was approximately 5 years for women who were 70 years old and approximately 3 years for those who were 85 years old. The estimated transition time was also longer for women who were taking estrogen at baseline, as compared with those who had either taken estrogen in the past or never taken it. Among women with mild osteopenia, testing intervals were longer than 14 years for all BMI values evaluated. A higher BMI was associated with a slightly longer testing interval among women with advanced osteopenia (P<0.001 for trend), but all estimated intervals were close to 1 year (range, 0.8 to 1.3). For all BMI values evaluated, women with moderate osteopenia had an estimated testing interval of approximately 4.5 years. There was no significant association between BMI and the time to the development of osteoporosis for women with moderate osteopenia at baseline (P=0.51 for trend).
When the testing interval was redefined as the estimated times for 20% of women to make the transition from osteopenia to osteoporosis, the time estimates were approximately 80% longer (8.5 years and 2.0 years for women with moderate and advanced osteopenia, respectively), as compared with corresponding estimates based on a 10% transition threshold. In a sensitivity analysis in which we used the secondary definition of osteoporosis, based on the BMD at the femoral neck alone, the covariate-adjusted times for 10% of women to make the transition to osteoporosis were 1.0 years for women with advanced osteopenia, 4.7 years for those with moderate osteopenia, and more than 15 years for those with mild osteopenia or normal BMD. Although these estimates were similar to the estimates in the primary analysis (which was based on the BMD at the total hip or femoral neck) for women with osteopenia, the time estimate in this sensitivity analysis for women with normal BMD was more than twice as long as that in the primary analysis, and it was much longer than the maximum follow-up time of 15 years.
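As a quick arithmetic check (ours, not the authors'), "approximately 80% longer" is consistent with the 10%-threshold estimates in Table 2:

```latex
\[
4.7 \text{ yr} \times 1.8 \approx 8.5 \text{ yr (moderate osteopenia)},
\qquad
1.1 \text{ yr} \times 1.8 \approx 2.0 \text{ yr (advanced osteopenia)}.
\]
```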
A total of 121 women (2.4%) had a hip or clinical vertebral fracture before the transition to osteoporosis, as defined by WHO diagnostic criteria, or before the receipt of treatment for osteoporosis. The adjusted estimated time for 2% of women to have a hip or clinical vertebral fracture was more than 15 years for women with normal BMD or mild osteopenia, and approximately 5 years for those with moderate or advanced osteopenia. Complete results of the sensitivity analyses are presented in the Supplementary Appendix.
We conducted a study of rates of transition to osteoporosis in order to help clinicians decide on BMD testing intervals for older women with normal BMD or osteopenia at the initial assessment. Our results suggest that the baseline T score is the most important determinant of a BMD testing interval. During the 15-year study period, less than 1% of women with T scores indicating normal BMD and less than 5% of women with T scores indicating mild osteopenia at their first assessment made the transition to osteoporosis, with an estimated testing interval of about 15 years for 10% of women in each of these groups to make the transition. This finding suggests that if BMD testing is deferred for 15 years among women with T scores greater than −1.50, there is a low likelihood of a transition to osteoporosis during that period. We found that 10% of women with moderate osteopenia and 10% of women with advanced osteopenia made the transition to osteoporosis in 5 years and 1 year, respectively. Although clinical risk factors had a minimal effect on the time estimates as a whole, a significant trend for age supported shorter testing intervals as women age. The estimated time for only 2% of women to have a hip or clinical vertebral fracture before the development of osteoporosis was 5 years for women with moderate or advanced osteopenia and at least 15 years for women with mild osteopenia or normal BMD. Thus, under the criteria used in this study, consideration of the time to a hip or clinical vertebral fracture would not substantially alter recommendations for osteoporosis screening intervals based on the time to osteoporosis alone.
Recent controversy over the harms of excessive screening for other chronic diseases22–24 reinforces the importance of developing a rational screening program for osteoporosis that is based on the best available evidence rather than on health care marketing, advocacy, and public beliefs that have encouraged overtesting and overtreatment in the United States.25 Our findings provide evidence-based estimates for an osteoporosis screening interval before new hip or clinical vertebral fractures and before initiation of treatment for osteoporosis. Our results are consistent with those of Hillier et al.,8 suggesting that frequent BMD testing is unlikely to improve fracture prediction, and with those of Frost et al.,9 suggesting that age and T score are key factors in determining a reasonable interval for BMD testing. Our study extends their findings by estimating the transition time to osteoporosis before a hip or clinical vertebral fracture, with the goal of treating osteoporosis to reduce the risk of such fractures, which account for the majority of fracture-related complications among older adults.
Several features of our analysis will assist clinicians in making decisions regarding osteoporosis screening intervals. Clinicians might feel compelled to shorten the BMD screening interval for patients with osteopenia who have clinical risk factors for fracture. Our estimates for BMD testing intervals proved to be robust after adjustment for major clinical risk factors. However, clinicians may choose to reevaluate patients before our estimated screening intervals if there is evidence of decreased activity or mobility, weight loss, or other risk factors not considered in our analyses. As expected, the estimated time to osteoporosis decreased with increasing age, so that an interval of 3 years, instead of 5 years, might be considered for women 85 years of age or older who have moderate osteopenia. Although the trends for BMI and estrogen use were also significant, they were less clinically relevant. If 10 years were to be considered the maximum testing interval for any woman, the BMI would not alter the recommendations for the testing intervals for each T-score range (based on a comparison of the time estimates in Table 3 vs. those in Table 2). Current estrogen use, as compared with use of estrogen in the past or no history of estrogen use, was significantly associated with higher BMD and a longer testing interval. These results were consistent with the finding of BMD loss after discontinuation of hormone therapy in the Postmenopausal Estrogen/Progestin Interventions (PEPI) trial (ClinicalTrials.gov number, NCT00000466)26 and with a SOF analysis suggesting that previous hormone therapy does not provide protection against hip fracture.27 Because of the transient effect of estrogen on BMD, we do not recommend modifying the screening interval on the basis of estrogen use.
Our study had several limitations. First, our testing interval was based only on BMD transitions, with adjustment for risk factors for fracture; the potential benefits and risks of screening and its cost-effectiveness were not considered. Second, owing to limitations imposed by the data set, precise time estimates were not possible for the following analyses: 5% threshold for women with advanced osteopenia at baseline, 20% threshold for women with normal BMD or mild osteopenia at baseline (Table A in the Supplementary Appendix), and outcome for BMD at the femoral neck among women with normal BMD at baseline (Table C in the Supplementary Appendix). Third, 49% of the original SOF participants (4747 of 9704 women) were excluded from our analysis. About half the excluded women were not eligible for screening because they had osteoporosis at baseline, had a history of a hip or clinical vertebral fracture, or had received treatment for osteoporosis at baseline; the remaining women had too few DXA examinations to be followed longitudinally. However, the mean age and mean baseline T scores in our analytic cohort were similar to those for all SOF participants who had any DXA scans of the hip. Fourth, our analysis was limited to women 67 years of age or older; different results might have been obtained from analyses that included younger postmenopausal women or men. Finally, white women accounted for more than 99% of our sample. However, because the prevalence of osteoporosis of the hip among white women is equal to or slightly higher than the prevalence among nonwhite women according to estimates from the National Health and Nutrition Examination Survey,28 the testing intervals we calculated are likely to be reasonable estimates for women of all races.
The strengths of our analysis include the large size of the cohort and the long follow-up period. Repeated BMD testing during the long follow-up period allowed precise estimation of testing intervals from interval-censored event times (i.e., times to the development of osteoporosis that were known only up to the interval between study examinations) for every woman in the study sample.
In conclusion, our results suggest that osteoporosis would develop in less than 10% of older, postmenopausal women during screening intervals that are set at approximately 15 years for women with normal bone density or mild osteopenia (T score, greater than −1.50) at the initial assessment, 5 years for women with moderate osteopenia (T score, −1.50 to −1.99), and 1 year for women with advanced osteopenia (T score, −2.00 to −2.49).
Supported by a grant (K23RR024685) from the National Institutes of Health. The Study of Osteoporotic Fractures is supported by grants (AG05407, AR35582, AG05394, AR35584, AR35583, R01 AG005407, R01 AG027576-22, 2 R01 AG005394-22A1, and 2 R01 AG027574-22A1) from the National Institutes of Health.
No potential conflict of interest relevant to this article was reported.
Disclosure forms provided by the authors are available with the full text of this article at NEJM.org.
The views expressed are those of the authors and do not necessarily reflect the official views of the funding agency.