We examined the extent of changes in absolute and relative geographic disparities in six colorectal cancer (CRC) indicators using data about persons aged 50 and older from 195 counties in the 1988–2006 Surveillance, Epidemiology, and End Results Program database.
County-level trends in six colorectal cancer indicators (overall CRC incidence, descending colon cancer incidence, proximal colon cancer incidence, late-stage CRC incidence, CRC mortality, and 5-year probability of CRC death) were summarized using the estimated annual percentage change. Observed county rates were smoothed using Bayesian hierarchical spatiotemporal methods to calculate measures of absolute and relative geographic disparity and their changes over time.
During the study period, absolute disparity for all six indicators decreased (CRC incidence: 43.2%; proximal colon cancer: 31.9%; descending colon cancer: 52.8%; late-stage CRC: 50.0%; CRC mortality: 57.8%; 5-year CRC-specific probability of death: 12.2%). Relative disparity remained stable for all six indicators over the entire study period.
Important progress has been made toward achieving the Healthy People 2010 and NCI strategic objectives for reducing geographic disparities, although absolute and relative disparities remain in CRC.
Colorectal cancer (CRC) is the fourth most common cancer and the second leading cause of cancer deaths in the United States. Screening for this disease has resulted in a reduction in 1) CRC incidence, through early detection and removal of polyps, 2) the rate of late-stage CRC, and 3) the risk of CRC-related mortality [2–8]. Use of colorectal cancer screening has increased since 2000, but changes in screening over time vary by modality; use of fecal occult blood testing and sigmoidoscopy declined from 1995 to 2003, whereas colonoscopy use increased substantially. Nevertheless, relative to the screening rates for breast and cervical cancer, screening for CRC remains low.
The reduction in CRC-related mortality results from polyp removal and from detection of tumors when they are smaller, at an earlier stage, and not yet clinically evident. In the United States, CRC incidence has declined since the mid-1980s, with steeper declines starting in the mid-1990s. Trends in CRC incidence over time differ by tumor location (i.e., proximal versus descending colon), possibly reflecting differences in the use of screening modalities. Fecal occult blood testing (FOBT) and colonoscopy can detect a lesion in any part of the colorectal system, while sigmoidoscopy detects only lesions in the distal colon. Regardless of tumor location, late-stage CRC incidence has declined steadily since the mid-1980s. Mortality rates have also declined over time, while survival rates have improved.
Monitoring the effects of screening on CRC incidence and mortality at the population level is vital to maximizing the impact of cancer prevention interventions, particularly in light of changes in overall screening use, risk factors, and treatment improvements. While colorectal cancer incidence and mortality trends are generally monitored at the national level, little is known about small-area variation (geographic disparity) in these trends. Reducing disparities, including geographic disparities, is an overarching goal of the Healthy People 2010 initiative and of the National Cancer Institute's (NCI) strategic plan [14, 15]. Monitoring disparities in multiple CRC indicators at small geographic levels (e.g., the county level) may help local health planners design interventions aimed at increasing screening, allocate pertinent screening and treatment resources, and target interventions aimed at improving risk factors. Previous studies have shown geographic disparities in CRC incidence and mortality [16–18], but the extent to which these disparities have changed over time is unclear. The purpose of this study was to describe and examine temporal changes in geographic disparity and rates of six colorectal cancer indicators across 195 counties during 1988–2006.
We used the 1988–2006 public-use county-level data from eight population-based Surveillance, Epidemiology, and End Results (SEER) programs to calculate six CRC indicators in an ecological study design. We used 1988 as the first year of observation because it is the first year for which detailed information about lymph node involvement, American Joint Committee on Cancer (AJCC) tumor-node-metastasis (TNM) staging, and tumor size is available in the SEER data. The SEER programs collect data about demographics, clinical characteristics of the tumor, treatment, and survival. During this time period, the eight SEER programs covered 195 counties and about nine percent of the United States population. The analyses were based on persons aged 50 or older who were diagnosed with a first primary colorectal cancer or who died from CRC from 1988 to 2006.
We used the following six CRC indicators from the SEER data because different indicators reflect different underlying biologic, behavioral, and/or social processes: overall incidence, descending colon cancer incidence, proximal colon cancer incidence, late-stage CRC incidence, CRC mortality, and 5-year CRC-specific probability of death. Proximal colon tumors were located in the cecum, ascending colon, hepatic flexure, or transverse colon; descending colon tumors were located in the splenic flexure or descending colon. Because of its shorter length, a sigmoidoscope cannot visualize tumors in the proximal colon, whereas a colonoscope visualizes the entire colon. Late-stage CRC included tumors classified as regional or distant according to SEER historic stage. The CRC mortality rate was calculated for persons who had CRC as the underlying cause of death on their death certificate; persons previously diagnosed with CRC who died from other causes did not contribute to the CRC mortality rate. To estimate survival trends, we used the 5-year CRC-specific survival rate, excluding death-certificate-only cases. All persons who died of CRC within 5 years of their primary CRC diagnosis were coded as 1, and all others as 0. To ensure a minimum of 5 years of observation time, we excluded persons diagnosed after 2002.
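As a concrete illustration, the coding rule for the 5-year probability-of-death indicator might be sketched as follows; the record fields (`year_dx`, `died_of_crc`, `survival_months`) are hypothetical stand-ins, not actual SEER variable names.

```python
def code_five_year_death(cases):
    """Apply the 5-year CRC-specific death coding rule.

    Cases diagnosed after 2002 are excluded so that every retained
    case has at least 5 years (60 months) of potential follow-up.
    A case is coded 1 if the person died of CRC within 60 months
    of diagnosis, and 0 otherwise.
    """
    coded = []
    for case in cases:
        if case["year_dx"] > 2002:
            continue  # fewer than 5 years of observation time
        died_within_5y = case["died_of_crc"] and case["survival_months"] <= 60
        coded.append({**case, "death_5yr": 1 if died_within_5y else 0})
    return coded

# Invented example records.
cases = [
    {"year_dx": 1995, "died_of_crc": True,  "survival_months": 24},
    {"year_dx": 2000, "died_of_crc": False, "survival_months": 120},
    {"year_dx": 2003, "died_of_crc": True,  "survival_months": 12},
    {"year_dx": 1999, "died_of_crc": True,  "survival_months": 80},
]
coded = code_five_year_death(cases)  # 3 eligible cases, coded 1, 0, 0
```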
First, we smoothed the observed county rates using Bayesian hierarchical spatiotemporal methods in order to calculate measures of absolute and relative geographic disparity. Bayesian hierarchical methods were used because county rates are strongly affected by the annual number of CRC cases in each county and may be very unreliable when based on few cancer cases. Moreover, rates for counties in close geographic proximity are not independent of each other (spatial correlation). The smoothed rates are close to the observed rates when based on a large number of CRC cases or a large population. In counties with lower CRC incidence or smaller populations, however, the observed rates fluctuate considerably from year to year; in these counties, the observed rate was smoothed toward the rates of the adjacent counties. Observed county rates were age-sex-race adjusted using the total study population unless the age-sex-race-county-year-specific data included fewer than five CRC cases, in which case we age-sex adjusted the observed rates. We adjusted for age only when the age-sex-county-year-specific data included fewer than five CRC cases. We used three age groups (50–59, 60–69, 70+) and three racial groups (White, Black, and other race) to calculate these rates. Specifically, we used the Knorr-Held model to obtain the yearly smoothed county rates during 1988–2006. This model contained four random terms:

log(θij) = β0 + μi + vi + δj + φij
logit(pij) = β0 + μi + vi + δj + φij

where i represents county and j represents year of diagnosis; θij is the county-year-specific predicted rate of CRC as part of a Poisson model; pij is the county-year-specific probability of death within 5 years following primary colorectal cancer diagnosis as part of a binomial model; β0 is the intercept; μi and vi are the spatially structured and unstructured random terms, respectively; δj is the temporal random term; and φij is the spatiotemporal random term. Since spatial correlation and/or heterogeneity due to unobserved spatially varying covariates are usually present in spatial data, it is appropriate to decompose the spatial effect into a spatially correlated (structured) part and a spatially uncorrelated (unstructured) part. This representation of the spatial effects makes it possible to distinguish between the two kinds of unobserved covariates: those that display a strong spatial structure and those that are present only locally. Our analysis included the spatially structured part and thus accounted for the dependence of neighboring counties in the model. The spatiotemporal term allows the spatial variability to differ across years. The structured spatial random effect was specified with an intrinsic conditional autoregressive (iCAR) prior, under which adjacent counties are assumed to have more similar disease risk. The other three random effects were assumed to be independent of counties, with exchangeable normal priors. Markov chain Monte Carlo methods were used to fit the models. The spatial adjacency matrix was created in ArcGIS (ESRI, Redlands, CA) using an add-in adjacency tool.
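To make the spatial structure concrete, the following sketch builds a binary county adjacency matrix (the role ArcGIS played here) and the corresponding intrinsic CAR precision structure Q = D − W; the five-county border list is an invented toy example, not SEER geography.

```python
import numpy as np

# Invented border list for five hypothetical counties (0..4)
# arranged in a ring, so each county touches two neighbors.
borders = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

# Symmetric 0/1 adjacency matrix W (built with an ArcGIS
# adjacency tool in the paper).
W = np.zeros((n, n))
for i, j in borders:
    W[i, j] = W[j, i] = 1.0

# Intrinsic CAR structure: Q = D - W, with D the diagonal matrix of
# neighbor counts.  Under the iCAR prior, each county's structured
# effect, given its neighbors, is normal with mean equal to the
# neighbors' average and variance inversely proportional to the
# number of neighbors -- so adjacent counties get similar risks.
D = np.diag(W.sum(axis=1))
Q = D - W
```

Each row of Q sums to zero, which is what makes the iCAR prior intrinsic (improper), so it is used only as a prior within the hierarchical model, not as a data likelihood.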
Second, we identified changes in overall CRC indicator rates over time in order to put changes in geographic disparities in perspective. It is possible for disparities to increase even when overall rates are declining; for example, the racial disparity in CRC mortality is growing even though overall CRC mortality is declining. Overall rates were obtained from the smoothed year-county-specific rates and the year-county-specific population proportions. Linear trends in rates from 1988 to 2006 were summarized using the estimated annual percentage change (EAPC). The EAPC was calculated by fitting a linear regression model to the natural logarithm of the annual rates, using calendar year as a covariate. Joinpoint regression analyses were performed to identify significant changes in rates over time. Joinpoint regression uses permutation tests to identify an inflection point (hereafter called a joinpoint) with a significant change in the slope of the trend [24, 25]. For each of the six CRC indicators, a maximum of four joinpoints was allowed.
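The EAPC computation described above amounts to a log-linear least-squares fit; a minimal sketch, using an invented rate series that declines exactly 3 percent per year, is:

```python
import numpy as np

def eapc(years, rates):
    """Estimated annual percentage change (EAPC).

    Fit ln(rate) = a + b * year by ordinary least squares; the EAPC
    is then 100 * (exp(b) - 1), the constant yearly percent change
    implied by the fitted slope b.
    """
    slope, _intercept = np.polyfit(years, np.log(rates), 1)
    return 100.0 * (np.exp(slope) - 1.0)

# Invented annual rates declining exactly 3 percent per year.
years = np.arange(1988, 2007)
rates = 100.0 * 0.97 ** (years - 1988)
change = eapc(years, rates)  # approximately -3.0
```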
Third, we calculated trends in geographic disparity for each of the CRC indicators. Geographic disparity can be measured in two different ways, depending on whether one is concerned with the relative or the absolute distribution of these indicators across counties. The most frequent way of communicating information about disparities in epidemiology and public health is in relative terms (e.g., relative risk); the risk difference, a measure of absolute disparity, is used less frequently. We used measures of both relative disparity (the mean log deviation [MLD]) and absolute disparity (the between-group variance [BGV]) because the two measures can yield different findings [22, 23]. For unordered groups (such as counties, in our instance) the MLD and BGV are recommended. Both are population-weighted (case-weighted for the 5-year probability-of-death indicator) measures that account for changes over time in the underlying distribution of the county populations and quantify absolute and relative disparity as deviations from the population average (i.e., the overall rate) for each of the six CRC indicators. Both measures weight the county rates by population size and are more sensitive than other measures of absolute and relative disparity to deviations from the overall rate. The yearly BGV squares the differences between county rates and the population average and weights by population size: BGV = Σi pi(yi − μ)². The yearly MLD summarizes the disproportionality between county rates and population size, expressed on the natural logarithm scale: MLD = Σi pi(−ln ri), where ri = yi/μ, pi is county i's population fraction, yi is county i's age-(race)-adjusted rate, and μ is the overall age-race-adjusted rate across all 195 SEER counties. In the current study, yi was defined as the model-based predicted county-year-specific rate, and μ was obtained by summing the smoothed county rates multiplied by their population fractions.
Both the BGV and the MLD are bounded below by zero, with larger values indicating greater disparity; if there is no disparity, both measures equal zero.
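The two disparity measures can be sketched directly from their definitions; the county rates and populations below are invented for illustration.

```python
import numpy as np

def bgv(rates, pops):
    """Between-group variance: population-weighted squared deviations
    of county rates from the overall (population-weighted) rate."""
    p = pops / pops.sum()       # population fractions p_i
    mu = (p * rates).sum()      # overall rate
    return (p * (rates - mu) ** 2).sum()

def mld(rates, pops):
    """Mean log deviation: population-weighted disproportionality of
    county rates relative to the overall rate, on the log scale."""
    p = pops / pops.sum()
    mu = (p * rates).sum()
    return (p * -np.log(rates / mu)).sum()

pops = np.array([1000.0, 2000.0, 3000.0])
equal = np.array([50.0, 50.0, 50.0])     # no disparity: both measures 0
unequal = np.array([40.0, 50.0, 60.0])   # some disparity: both positive
```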
Bayesian spatiotemporal models, including the county-year-specific CRC indicator rates, year-specific overall CRC indicator rates, and both measures of geographic disparity, were implemented in WinBUGS (Ver. 1.4.3, Medical Research Council, UK). After running 20,000 iterations as burn-in, 20,000 additional samples were used to obtain parameter estimates, including county-year-specific CRC indicator rates and their absolute and relative disparity measures. Model fit was evaluated using the Deviance Information Criterion, with lower values indicating better fit.
Fourth, we used joinpoint regression models to identify significant changes in both absolute and relative disparity measures for the posterior means of each of the six indicators between 1988 and 2006. Standard errors of the absolute and relative disparity measures were based on the 95% credible intervals obtained from the Bayesian models. Again, the estimated annual percentage change was calculated for each joinpoint segment. We also calculated the percent change in absolute and relative disparity between 1988 and 2006 and the difference in the values for 1988 and 2006. The percent change measure allows for comparisons among disparity measures and incidence/mortality rates across the six CRC indicators.
Table 1 displays the number of cases and adjusted county rates for each of the six CRC indicators.
The overall adjusted rate decreased 3.5 percent per year from 1991 to 1995, after which it remained stable until 1998 (Figure 1, Table 2). After 1998, the overall adjusted rate decreased 3.3 percent per year until 2002, after which it declined 5.2 percent per year until 2006. The overall incidence rate decreased 39.7 percent from 1988 to 2006. The absolute disparity for CRC incidence decreased 3.1 percent per year from 1988 to 2006, while the relative disparity remained stable (Figure 2, Table 2). Absolute disparity for CRC incidence decreased 544 points from 1988 to 2006, while relative disparity increased 2 points.
The proximal incidence rate decreased 0.7 percent per year from 1988 to 2000, after which it decreased 4.1 percent per year until 2006 (Table 2, Figure 1). From 1998 to 2006, the absolute disparity for proximal colon cancer decreased 2.1 percent per year, while the relative disparity remained stable during the entire study period (Figure 3, Table 2). Absolute disparity for proximal colon cancer decreased 123 points from 1988 to 2006, while relative disparity decreased 6 points.
The descending incidence rate decreased 3.9 percent per year from 1988 to 2006 (Table 2, Figure 1). From 1998 to 2006, the absolute disparity for descending colon cancer decreased 4.1 percent per year, while relative disparity remained stable during the entire study period (Figure 4, Table 2). Absolute disparity for descending colon cancer decreased 42 points from 1988 to 2006, while relative disparity increased 30 points.
The late-stage CRC rate decreased 2.1 percent per year from 1988 to 2003, after which it declined 5.7 percent per year. From 1988 to 2006, the rate of late-stage CRC decreased 41.8 percent. From 1998 to 2006, the absolute disparity for late-stage CRC decreased 6.3 percent per year, while the relative disparity remained stable during the entire study period (Figure 5, Table 2). Absolute disparity for late-stage CRC decreased 295 points from 1988 to 2006, while relative disparity increased 4 points.
The mortality rate decreased 2.1 percent per year from 1988 to 1997, after which it declined 4.5 percent per year until 2006. From 1988 to 2006, the mortality rate decreased 43.9 percent. From 1998 to 2006, the absolute disparity for CRC mortality decreased 4.7 percent per year while relative disparity remained stable during the entire study period (Figure 6, Table 2). Absolute disparity for CRC mortality decreased 273 points from 1988 to 2006, while relative disparity increased 2 points.
The 5-year CRC-specific probability of death remained stable from 1988 to 1995, and then declined 2.9 percent per year until 1999 after which it remained stable again until 2006 (Figure 7). From 1988 to 2006, the 5-year CRC-specific probability of death decreased 12.6 percent (Table 2). From 1998 to 2006, the absolute disparity for the 5-year CRC-specific probability of death decreased 1.0 percent per year, while relative disparity remained stable during the entire study period (Figure 8, Table 2). Absolute disparity for CRC-specific probability of death decreased 1 point from 1988 to 2006, while relative disparity increased 3 points.
For each indicator, we observed a decline in absolute disparities and stable relative disparities over time. This suggests that rates in the 195 study counties declined at the same relative pace, mirroring the overall annual percentage change in the CRC indicators across the 195 counties. As a result, there was little "catching up" by counties that had high rates at the start of the study period; some counties continued to face a higher burden of CRC than others, and this relative burden did not change over time. Thus, counties that had higher incidence and mortality rates in 1988 likely had higher rates in 2006. It appears that all county populations benefited about equally from improvements in early detection, risk factors, and treatment associated with decreases in CRC incidence and mortality over time. The results for CRC contrast with absolute and relative geographic disparities in breast cancer, both of which declined between 1988 and 2005 in 200 SEER counties, suggesting that there was "catching up" by some counties, resulting in a more equally shared breast cancer burden.
Differences were observed in the magnitude of declines in absolute disparities among the six indicators. The reduction in geographic disparity was most pronounced for descending colon cancer, late-stage CRC, and mortality and was least pronounced for the 5-year probability of death. The decline in absolute geographic disparity for proximal colon cancer was larger than for descending colon cancer. The increasing use of colonoscopy for early detection and removal of pre-cancerous polyps may have resulted in faster declines in proximal colon cancer. Also encouraging was the large decline in late-stage CRC, which may have contributed to the decline in CRC mortality and 5-year probability of death. The decline in late-stage CRC incidence may be the result, in part, of Medicare's policy change in July 2001 to expand CRC screening coverage to average-risk enrollees by reimbursing up to 80% of the Medicare-allowed cost of colonoscopy. This expansion of Medicare reimbursement was associated with increased use of colonoscopy among Medicare beneficiaries and, for those beneficiaries who were diagnosed with colon cancer, an increased probability of being diagnosed at an early stage. Thus, important progress has been made toward achieving the Healthy People 2010 and NCI strategic objectives for reducing geographic disparities, although absolute and relative disparities remain in CRC.
The challenge is to identify potential reasons for changes in geographic disparities. Geographic disparity could be related to several factors, including risk factors for the development of CRC, prevention or early detection through screening, and receipt of treatment. Perhaps disparately affected counties have higher rates of established risk factors for CRC, which include overconsumption of red and processed meat, excess alcohol intake, deficiency of B and D vitamins, obesity, physical inactivity, diabetes mellitus, smoking, family history of colorectal cancer, and inflammatory bowel diseases, among others. Future studies could examine these factors to explain the observed disparities in CRC indicators. To produce the observed changes in geographic disparities, the prevalence of these factors would need to vary over time but not necessarily across counties, since relative disparities did not decline. A recent simulation model indicated that changes in risk factors and screening each accounted for 50% of the overall decline in incidence rates during 1975–2000. The same model estimated that changes in risk factors explained 35%, increased screening 53%, and treatment 12% of the observed decline in CRC mortality. Larger declines in incidence and mortality might have occurred in the disparately affected counties if some risk factors, e.g., body mass index and diabetes, had not increased over time [32, 33].
Despite these trends, geographic disparities in the CRC indicators remained in 2006, particularly for overall, late-stage, and proximal CRC incidence as well as for CRC mortality. Moreover, absolute geographic disparity in CRC incidence in 2006 was more than twice as large as that for late-stage CRC incidence, proximal colon cancer incidence, and CRC mortality. Little absolute disparity was observed for descending colon cancer and for the 5-year probability of death in 2006. The reasons for this variability in geographic disparity across the CRC indicators are unclear but are expected to be related to variability in risk factors for the development of CRC, early detection through screening, and receipt of treatment.
Our study has some limitations. We used the county as the geographic unit because it is the smallest geographic unit with the social, political, and legal responsibility for providing a broad range of services, including health-related services. We recognize that comparisons across counties present a number of challenges, in that a county in one SEER area (e.g., San Francisco) is not likely comparable to counties in another area (e.g., Iowa) in terms of size and population density. However, our Bayesian analysis allowed us to account for the spatial relationships among counties within areas, which many other ecological analyses have not considered. By using smoothed county rates, the disparity measures are less likely to be affected by extreme rates based on few CRC cases or small populations. Finally, the extent to which geographic variation in misclassification of the underlying cause of death may have affected our findings cannot be determined; however, coding of CRC as an underlying cause of death on death certificates is more than 90 percent accurate.
Our findings suggest that some progress has been made toward achieving Healthy People 2010 and NCI objectives to reduce geographic disparities, but the relative declines in incidence and mortality rates were similar across counties, leaving some counties facing a higher burden of CRC than others. Consequently, there is a need for continued monitoring and further study to determine where and how to target efforts to reduce geographic disparities in CRC, particularly in light of the larger geographic disparities for some of the CRC indicators.
Sources of financial support: This research was supported in part by grants from the National Cancer Institute (CA91842, CA137750).
We thank the Alvin J. Siteman Cancer Center at Barnes-Jewish Hospital and Washington University School of Medicine in St. Louis, Missouri, for the use of the Health Behavior, Communication, and Outreach Core, especially Jim Struthers for data management services.