To examine whether use of insulin glargine, compared with another long-acting insulin, is associated with risk of breast, prostate, or colorectal cancer, or all cancers combined.
RESEARCH DESIGN AND METHODS
Computerized health records from Kaiser Permanente Northern and Southern California regions starting in 2001 and ending in 2009 were used to conduct a population-based cohort study among patients with diabetes aged ≥18 years. With use of Cox regression modeling, cancer risk in users of insulin glargine (n = 27,418) was compared with cancer risk in users of NPH (n = 100,757).
The cohort had a median follow-up of 3.3 years during which there was a median of 1.2 years of glargine use and 1.4 years of NPH use. Among users of NPH at baseline, there was no clear increase in risk of breast, prostate, colorectal, or all cancers combined associated with switching to glargine. Among those initiating insulin, ever use or ≥2 years of glargine was not associated with increased risk of prostate or colorectal cancer or all cancers combined. Among initiators, the hazard ratio (HR) for breast cancer associated with ever use of glargine was 1.3 (95% CI 1.0–1.8); the HR for breast cancer associated with use of glargine for ≥2 years was 1.6 or 1.7 depending on whether glargine users had also used NPH.
Results of this study should be viewed cautiously, given the relatively short duration of glargine use to date and the large number of potential associations examined.
HIV-exposed uninfected infants (HEU) have higher infectious disease morbidity and mortality than unexposed infants. We determined the incidence and risk factors for pneumonia, a leading cause of infant mortality worldwide, in a cohort of HEU infants. Identifying predictors of pneumonia among HEU infants may enable early identification of those at highest risk.
We studied a retrospective cohort of HEU infants participating in a Kenyan perinatal HIV study, enrolled between 1999 and 2002.
Infants were followed monthly from birth to 12 months. The incidence of pneumonia, diagnosed at monthly study visits, at sick-child visits, or by verbal autopsy, was estimated with a 14-day window for new episodes. Cox proportional hazards regression was used to identify predictors of first pneumonia occurrence.
Among 388 HEU infants with 328 person-years of follow-up, the incidence of pneumonia was 900/1,000 child-years (95% CI: 800-1,000). Maternal HIV viral load at 32 weeks gestation [HR=1.2 (1.0-1.5) per log10 difference] and being underweight (weight-for-age Z-score <-2) at the previous visit [HR=1.8 (1.1-2.8)] were associated with increased risk of pneumonia. Breastfed infants had a 47% lower risk of pneumonia than those never breastfed [HR=0.53 (0.39-0.73)], independent of infant growth, maternal viral load and maternal CD4%. Breastfeeding was also associated with a 74% lower risk of pneumonia-related hospitalization (HR=0.26 (0.13-0.53)).
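As a rough check on the rate reported above, the incidence and its confidence interval can be sketched with a standard Poisson log-normal approximation. Note that the episode count below (~295) is an assumption reconstructed from the reported rate (900 per 1,000 child-years) and person-time (328 person-years); the abstract does not report it directly, and the paper's exact CI method may differ.

```python
import math

# Hypothetical worked example: incidence rate with a log-normal Poisson 95% CI.
# events is reconstructed (900/1000 * 328 ~ 295), not taken from the paper.
events = 295
person_years = 328.0

rate = events / person_years            # episodes per child-year
se_log = 1 / math.sqrt(events)          # SE of log(rate) under a Poisson model
lower = rate * math.exp(-1.96 * se_log)
upper = rate * math.exp(1.96 * se_log)

print(f"{rate * 1000:.0f} per 1,000 child-years "
      f"(95% CI: {lower * 1000:.0f}-{upper * 1000:.0f})")
```

Under these assumptions the computation reproduces, after rounding, the reported 900 per 1,000 child-years (95% CI: 800-1,000).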
The incidence of pneumonia in this cohort of HEU infants was high. Our observations suggest that maternal viral suppression and breastfeeding may reduce the burden of pneumonia among HEU.
HIV-exposed uninfected; infants; morbidity; breastfeeding; pneumonia
Evaluate risk factors for poor functional outcome in 28-day survivors after an episode of severe sepsis.
Retrospective cohort study examining data from the RESOLVE (REsearching severe Sepsis and Organ dysfunction in children: a gLobal perspectiVE, F1K-MC-EVBP) trial.
104 pediatric centers in 18 countries.
Children with severe sepsis who required both vasoactive-inotropic infusions and mechanical ventilation and who survived to 28 days (n=384).
Measurements and Main Results
Poor functional outcome was defined as a Pediatric Overall Performance Category (POPC) score ≥3 and an increase from baseline when measured 28 days after trial enrollment. Median (IQR) POPC was 1 (1–2) at enrollment and 2 (1–4) at 28 days.
Thirty-four percent of survivors had a decline in functional status at 28 days, and 18% were determined to have a “poor” functional outcome. Hispanic ethnicity was associated with poor functional outcome compared to the white referent group [RR= 1.9 (95% CI 1.0, 3.0)]. Clinical factors associated with increased risk of poor outcome included: central nervous system and intra-abdominal infection sources compared to the lung infection referent category [RR= 3.3 (95%CI 1.4, 5.6) and 2.4 (95%CI 1.0, 4.5) respectively]; a history of recent trauma [RR=3.9 (95%CI 1.4, 5.4)]; receipt of cardiopulmonary resuscitation prior to enrollment [RR=5.1 (95% CI 2.9, 5.7)]; and baseline Pediatric Risk of Mortality III (PRISM) score of 20 – 29 [RR=2.8 (95%CI 1.2, 5.2)] and PRISM ≥30 [RR=4.5 (95%CI 1.6, 8.0)] compared to the referent group with PRISM scores of 0 – 9.
In this sample of 28-day survivors of pediatric severe sepsis, diminished functional status was common. This analysis provides evidence that particular patient characteristics and aspects of an individual’s clinical course are associated with poor functional outcome 28 days after the onset of severe sepsis. These characteristics may provide opportunities for intervention to improve functional outcome in pediatric patients with severe sepsis. A decline in functional status 28 days after the onset of severe sepsis is a frequent and potentially clinically meaningful event. Further consideration should be given to functional status as the primary outcome that future trials of novel or unproven therapies are designed to affect.
sepsis; septic shock; severe sepsis; outcome assessment; mechanical ventilation; multiple organ failure; functional status; Pediatric Overall Performance Category
Studies of the effects of exposures after cancer diagnosis on cancer recurrence and survival can provide important information to the growing group of cancer survivors. Observational studies that address this issue generally fall into one of two categories: 1) those using health plan automated data that contain “continuous” information on exposures, such as studies that use pharmacy records; and 2) survey or interview studies that collect information directly from patients once or periodically postdiagnosis. Reverse causation, confounding, selection bias, and information bias are common in observational studies of cancer outcomes in relation to exposures after cancer diagnosis. We describe these biases, focusing on sources of bias specific to these types of studies, and we discuss approaches for reducing them. Attention to known challenges in epidemiologic research is critical for the validity of studies of postdiagnosis exposures and cancer outcomes.
Recent guidelines from the American Cancer Society, the American Society for Colposcopy and Cervical Pathology, and the American Society for Clinical Pathology recommend cessation of cervical cancer screening at age 65 years for women with an “adequate” history of negative Papanicolaou smears. In our view, those who formulated these guidelines did not consider a growing body of evidence from nonrandomized studies that provides insight into the efficacy of cervical cancer screening among older women. First, older women are not at indefinitely low risk following negative screening results. Second, recent data from the United States, the United Kingdom, and Sweden suggest that screening of older women is associated with substantial reductions in cervical cancer incidence and mortality, even among previously screened women. It may be that after consideration of the reduced incidence of (and reduced mortality from) cervical cancer that may result from screening older women, the harms and economic costs of screening will be judged to outweigh its benefits. However, it is essential to consider the now-documented benefits of cervical screening when formulating screening guidelines for older women, and recommendations that do not do so will lack an evidence base.
cervical cancer; health policy; Papanicolaou smear; screening; women's health
Recent studies suggest that cancer increases risk of atrial fibrillation. Whether atrial fibrillation is a marker for underlying occult cancer is unknown.
We conducted a cohort study (1980–2011) of all Danish patients with new-onset atrial fibrillation. To examine cancer risk, we computed absolute risk at 3 months and standardized incidence ratios (SIRs) by comparing observed cancer incidence among patients newly diagnosed with atrial fibrillation with that expected based on national cancer incidence during the period.
Median follow-up time was 3.4 years among 269 742 atrial fibrillation patients. Within 3 months of follow-up, 6656 cancers occurred (absolute risk, 2.5%; 95% confidence interval [CI], 2.4%–2.5%) versus 1302 expected, yielding an SIR of 5.11 (95% CI, 4.99–5.24). Associations were particularly strong for cancers of the lung, kidney, colon, and ovary, and for non-Hodgkin lymphoma. Within 3 months of follow-up, the SIR was 7.02 (95% CI, 6.76–7.28) for metastatic cancer and 3.53 (95% CI, 3.38–3.68) for localized cancer. Beyond 3 months of follow-up, overall cancer risk was modestly increased (SIR, 1.13; 95% CI, 1.12–1.15).
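The SIR reported above is simply the ratio of observed to expected cancers; a minimal sketch, using the published counts and a large-sample Poisson approximation for the CI (the paper's exact interval method may differ slightly):

```python
import math

# Standardized incidence ratio: observed cancers within 3 months of
# atrial fibrillation diagnosis vs. the number expected from national rates.
observed = 6656
expected = 1302.0

sir = observed / expected
# Approximate 95% CI treating the observed count as Poisson.
se_log = 1 / math.sqrt(observed)
lower = sir * math.exp(-1.96 * se_log)
upper = sir * math.exp(1.96 * se_log)

print(f"SIR = {sir:.2f} (95% CI, {lower:.2f}-{upper:.2f})")
```

With these inputs the sketch reproduces the reported SIR of 5.11 (95% CI, 4.99–5.24).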
Patients with new-onset atrial fibrillation had a markedly increased relative risk of a cancer diagnosis within the next 3 months; however, the corresponding absolute risk was small.
In critically ill patients, induction with etomidate is hypothesized to be associated with an increased risk of mortality. Previous randomized studies suggest a modest trend toward an increased risk of death among etomidate recipients; however, this relationship has not been measured with great statistical precision. We aimed to test whether etomidate is associated with risk of hospital mortality and other clinical outcomes in critically ill patients.
We conducted a retrospective cohort study from January 1, 2001, to December 31, 2005, of 824 subjects requiring mechanical ventilation, who underwent adrenal function testing in the ICUs of 2 academic medical centers. The primary outcome was in-hospital mortality, comparing subjects given etomidate (n = 452) to those given an alternative induction agent (n = 372). The secondary outcome was diagnosis of critical illness-related corticosteroid insufficiency following etomidate exposure.
Overall mortality was 34.3%. After adjustment for age, sex, and baseline illness severity, the relative risk of death among the etomidate recipients was higher than that of subjects given an alternative agent (relative risk 1.20, 95% CI 0.99–1.45). Among subjects whose adrenal function was assessed within the 48 hours following intubation, the adjusted risk of meeting the criteria for critical illness-related corticosteroid insufficiency was 1.37 (95% CI 1.12–1.66), comparing etomidate recipients to subjects given another induction agent.
In this study of critically ill patients requiring endotracheal intubation, etomidate administration was associated with a trend toward a relative increase in mortality, similar to the collective results of smaller randomized trials conducted to date. If a small relative increase in risk is truly present, though previous trials have been underpowered to detect it, in absolute terms the number of deaths associated with etomidate in this high-risk population would be considerable. Large, prospective controlled trials are needed to clarify the role of etomidate in critically ill patients.
sepsis; ICU; mortality; etomidate; adrenal function; rapid sequence induction
Background & Aims
Although patients with Barrett's esophagus commonly undergo endoscopic surveillance, its effectiveness in reducing mortality from esophageal/gastroesophageal junction adenocarcinomas has not been evaluated rigorously.
We performed a case-control study in a community-based setting. Among 8272 members with Barrett's esophagus, we identified 351 cases of esophageal adenocarcinoma: 70 in persons who had a prior diagnosis of Barrett's esophagus (and who were therefore eligible for surveillance); 51 of these patients died, 38 as a result of the cancer (cases). Surveillance histories were contrasted with those of a sample of 101 living persons with Barrett's esophagus (controls), matched for age, sex, and duration of follow-up evaluation.
Surveillance within 3 years was not associated with a decreased risk of death from esophageal adenocarcinoma (adjusted odds ratio, 0.99; 95% confidence interval, 0.36–2.75). Fatal cases were nearly as likely to have received surveillance (55.3%) as were controls (60.4%). A Barrett's esophagus length longer than 3 cm and prior dysplasia each were associated with subsequent mortality, but adjustment for these did not change the main findings. Although all patients should be included in evaluations of effectiveness, excluding deaths related to cancer treatment and patients who failed to complete treatment changed the magnitude, but not the significance, of the association (odds ratio, 0.46; 95% confidence interval, 0.13–1.64).
Endoscopic surveillance of patients with Barrett's esophagus was not associated with a substantially decreased risk of death from esophageal adenocarcinoma. The results do not exclude a small to moderate benefit. However, if such a benefit exists, our findings indicate that it is substantially smaller than currently estimated. The effectiveness of surveillance was influenced partially by the acceptability of existing treatments and the occurrence of treatment-associated mortality.
BE; EAC; Esophageal Cancer; Prevention
Endometrial cancers have long been divided into estrogen-dependent type I and the less common clinically aggressive estrogen-independent type II. Little is known about risk factors for type II tumors because most studies lack sufficient cases to study these much less common tumors separately. We examined whether so-called classical endometrial cancer risk factors also influence the risk of type II tumors.
Patients and Methods
Individual-level data from 10 cohort and 14 case-control studies from the Epidemiology of Endometrial Cancer Consortium were pooled. A total of 14,069 endometrial cancer cases and 35,312 controls were included. We classified endometrioid (n = 7,246), adenocarcinoma not otherwise specified (n = 4,830), and adenocarcinoma with squamous differentiation (n = 777) as type I tumors and serous (n = 508) and mixed cell (n = 346) as type II tumors.
Parity, oral contraceptive use, cigarette smoking, age at menarche, and diabetes were associated with type I and type II tumors to similar extents. Body mass index, however, had a greater effect on type I tumors than on type II tumors: the odds ratio (OR) per 2 kg/m2 increase was 1.20 (95% CI, 1.19 to 1.21) for type I and 1.12 (95% CI, 1.09 to 1.14) for type II tumors (P for heterogeneity < .0001). Risk factor patterns for high-grade endometrioid tumors and type II tumors were similar.
The results of this pooled analysis suggest that the two endometrial cancer types share many common etiologic factors. The etiology of type II tumors may, therefore, not be completely estrogen independent, as previously believed.
We investigated the relationship between use of tricyclic antidepressants (TCA) and risk of non-Hodgkin lymphoma (NHL). Previous studies provided some evidence of an association, but did not assess risk of NHL subtypes.
Cases and controls were members of Group Health (GH), an integrated healthcare delivery system. Cases were persons diagnosed with NHL between 1980 and 2011 at age ≥25 years; 8 controls were matched to each case on age, sex, and length of enrollment. Information on prior TCA use was ascertained from automated pharmacy data. Conditional logistic regression was used to calculate ORs and 95%CIs for NHL, overall and for common subtypes, for various patterns of TCA use.
We identified 2,768 cases and 22,127 matched controls. We did not observe an appreciably increased risk of NHL among TCA ever-users compared to non-users (OR=1.1; 95%CI=1.0–1.2). Overall risk of NHL was at most weakly associated with longer-term use (OR=1.2; 95%CI=1.0–1.4; ≥10 prescriptions), high-dose use (OR=1.1; 95%CI=0.8–1.5; ≥50mg), or non-recent use (OR=1.0; 95%CI=0.9–1.2; >5y ago). TCA use was not associated with NHL subtypes, except chronic lymphocytic leukemia/small lymphocytic lymphoma (OR=1.5; 95%CI=1.1–2.0; longer-term use).
We found little evidence that TCA use increases risk of NHL, overall or for specific common subtypes of NHL.
Lymphoma, non-Hodgkin; Antidepressive Agents, Tricyclic; Epidemiology; Case-Control Studies
Endometrial cancer (EC) contributes substantially to total burden of cancer morbidity and mortality in the United States. Family history is a known risk factor for EC, thus genetic factors may play a role in EC pathogenesis. Three previous genome-wide association studies (GWAS) have found only one locus associated with EC, suggesting that common variants with large effects may not contribute greatly to EC risk. Alternatively, we hypothesize that rare variants may contribute to EC risk. We conducted an exome-wide association study (EXWAS) of EC using the Infinium HumanExome BeadChip in order to identify rare variants associated with EC risk. We successfully genotyped 177,139 variants in a multiethnic population of 1,055 cases and 1,778 controls from four studies that were part of the Epidemiology of Endometrial Cancer Consortium (E2C2). No variants reached global significance in the study, suggesting that more power is needed to detect modest associations between rare genetic variants and risk of EC.
Comparative effectiveness research (CER) on preventive services can shape policy and help patients, their providers, and public health practitioners select regimens and programs for disease prevention. Patients and providers need information about the relative effectiveness of various regimens they may choose. Decision makers need information about the relative effectiveness of various programs to offer or recommend. The goal of this paper is to define and differentiate measures of relative effectiveness of regimens and programs for disease prevention. Cancer screening is used to demonstrate how these measures differ in an example of two hypothetical screening regimens and programs.
Conceptually and algebraically defined measures of relative regimen and program effectiveness are also presented. The measures evaluate preventive services that range from individual tests through organized, population-wide prevention programs. Examples illustrate how effective screening regimens may not result in effective screening programs and how measures can vary across subgroups and settings. Both regimen and program relative effectiveness measures assess benefits of prevention services in real-world settings, but each addresses different scientific and policy questions. As the body of CER grows, a common lexicon for various measures of relative effectiveness becomes increasingly important to facilitate communication and shared understanding among researchers, healthcare providers, patients, and policymakers.
The US Preventive Services Task Force recently recommended against prostate-specific antigen (PSA) screening for prostate cancer based primarily on evidence from the European Randomized Study of Screening for Prostate Cancer (ERSPC) and the US Prostate, Lung, Colorectal, and Ovarian (PLCO) cancer screening trial.
To examine limitations of basing screening policy on evidence from screening trials.
We review published modeling studies that examine population and trial data. The studies (1) project the roles of screening and changes in primary treatment in the US mortality decline, (2) extrapolate the ERSPC mortality reduction to the long-term US setting, (3) estimate overdiagnosis based on US incidence trends, and (4) quantify the impact of control arm screening on PLCO mortality results.
Screening plausibly explains 45%, and changes in primary treatment can explain 33%, of the US prostate cancer mortality decline. Extrapolating the ERSPC results to the long-term US setting implies an absolute mortality reduction at least 5 times greater than that observed in the trial. Approximately 28% of screen-detected cases are overdiagnosed in the US, versus the 58% of screen-detected cases suggested by the ERSPC results. Control arm screening can explain the null result in the PLCO trial.
Modeling studies indicate that population trends and trial results extended to the long-term population setting are consistent with greater benefit of PSA screening—and more favorable harm-benefit tradeoffs—than has been suggested by empirical trial evidence.
Mass screening; policy development; prostatic neoplasms; simulation modeling
Few studies on the occurrence of depression in pediatric patients with chronic kidney disease (CKD) have been conducted and none have identified associated clinical and demographic factors.
This was a cross-sectional study in which we administered the Child Depression Inventory-2 (CDI-2) to 44 patients aged 9–18 years with CKD stages III–V. Criteria for depression were CDI-2 scores of ≥65 or an established diagnosis of depression recorded in the medical chart. Relative risks (RR) and 95% confidence intervals (CI) were calculated to determine associations between patient characteristics and depression status.
Of the 44 patients enrolled in the study, 13 (30%) met our criteria for depression, representing 18% of patients aged <13 years and 34% of those aged ≥13 years. Although not reaching statistical significance, the adjusted risk of depression was lower for patients with CKD duration of ≤3 years than for those with longer CKD duration (RR 0.19, 95% CI 0.02, 1.53), and for those with CKD stage IV (RR 0.23, 95% CI 0.05, 1.09) and CKD stage V (RR 0.13, 95% CI 0.01, 1.07) compared to those with CKD stage III.
Our results indicate that depression is common in children with CKD, particularly for those with longstanding renal disease and at CKD stage III.
Child depression inventory; Adolescents; Children; Dialysis; Transplant; Mental health
An analysis of a case-control study of rhabdomyolysis was conducted to screen for previously unrecognized CYP2C8 inhibitors that may cause other clinically important drug-drug interactions. Cases of rhabdomyolysis in users of cerivastatin (n=72) were compared with controls using atorvastatin (n=287) between 1998 and 2001. The use of clopidogrel (OR 29.6; 95% CI, 6.1–143) was strongly associated with rhabdomyolysis. In a replication effort that used the FDA Adverse Event Reporting System (AERS), clopidogrel was used more commonly by rhabdomyolysis cases using cerivastatin (17%) than by rhabdomyolysis cases using atorvastatin (0%; OR infinity; 95% CI, 5.2–infinity). Several medications were tested in vitro for their potential to cause drug-drug interactions. Clopidogrel, rosiglitazone, and montelukast were the most potent inhibitors of cerivastatin metabolism. Clopidogrel and its metabolites also inhibited cerivastatin metabolism in human hepatocytes. These epidemiological and in vitro findings suggest that clopidogrel may cause clinically important, dose-dependent drug-drug interactions with other medications metabolized by CYP2C8.
rhabdomyolysis; statins; clopidogrel; adverse drug reaction; drug-drug interaction prediction; 2-oxo-clopidogrel; acyl glucuronide
To test whether angiotensin-converting enzyme inhibitor use is associated with decreased risk of community-acquired pneumonia in older adults.
We analyzed data from a nested case-control study of community-dwelling, immunocompetent adults aged 65–94 within an integrated healthcare delivery system. Cases of ambulatory and hospitalized pneumonia from 2000–2003 were identified from International Classification of Diseases, Ninth Revision, codes and validated using medical record review. Controls were matched to cases by age, sex, and calendar year. Using health plan pharmacy data, we defined current use as filling ≥2 prescriptions during the 180 days prior to the case's diagnosis date. We calculated standardized doses per day using World Health Organization defined daily doses. Multivariable conditional logistic regression estimated adjusted odds ratios (ORs) for pneumonia in relation to angiotensin-converting enzyme inhibitor use, adjusting for comorbidity, functional and cognitive status, and other covariates from medical record review and pharmacy data.
Current use of angiotensin-converting enzyme inhibitors was seen in 23% (242/1039) of cases and 21% (433/2022) of controls. Lisinopril accounted for 95% of prescriptions. The OR for pneumonia comparing current use to no current use was 0.99 (95% confidence interval [CI] 0.83–1.19). The OR for use of more than 2 standardized daily doses per day was 1.39 (95% CI 0.93–2.06) compared to no current use.
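For comparison with the adjusted conditional-logistic estimate of 0.99 reported above, a crude (unmatched, unadjusted) odds ratio can be computed directly from the exposure counts; the difference between the two illustrates why the matched, covariate-adjusted analysis matters. A minimal sketch:

```python
import math

# Crude 2x2 odds ratio from the reported counts (current ACE-inhibitor use:
# 242/1039 cases, 433/2022 controls). This ignores the matching and
# covariate adjustment used in the actual analysis, which yielded OR 0.99.
a, b = 242, 1039 - 242   # cases: exposed, unexposed
c, d = 433, 2022 - 433   # controls: exposed, unexposed

or_crude = (a * d) / (b * c)
# Woolf (log-based) 95% CI for the crude OR
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = or_crude * math.exp(-1.96 * se_log)
upper = or_crude * math.exp(1.96 * se_log)

print(f"crude OR = {or_crude:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

The crude OR comes out slightly above 1, whereas the adjusted estimate was 0.99; only the adjusted figure appears in the study's results.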
Angiotensin-converting enzyme inhibitor use is not associated with reduced pneumonia risk in community-dwelling older adults.
pneumonia; angiotensin-converting enzyme inhibitors; case-control studies; antihypertensive agents
Detection of meningococcal carriers is key to understanding the epidemiology of Neisseria meningitidis, yet no gold standard has been established. Here, we directly compare two methods for collecting pharyngeal swabs to identify meningococcal carriers.
We conducted cross-sectional surveys of schoolchildren at multiple sites in Africa to compare swabbing the posterior pharynx behind the uvula (U) to swabbing the posterior pharynx behind the uvula plus one tonsil (T). Swabs were cultured immediately and analyzed using molecular methods.
One thousand six paired swab samples collected from schoolchildren in four countries were analyzed. The prevalence of meningococcal carriage was 6.9% (95% CI: 5.4–8.6%) based on the results from both swabs combined, but the observed prevalence was lower with either swab type alone. Prevalence based on the T swab alone (5.2%; 95% CI: 3.8–6.7%) was similar to that based on the U swab alone (4.9%; 95% CI: 3.6–6.4%; p=0.6). The concordance between the two methods was 96.3% and the kappa was 0.61 (95% CI: 0.50–0.73), indicating good agreement.
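The concordance and kappa statistics reported above can be reproduced approximately from a 2×2 table of paired swab results. The cell counts below are assumptions reconstructed from the reported prevalences (about 52 T-positive, 49 U-positive, and 69 positive on either swab among n = 1006); they are not taken directly from the paper.

```python
# Hypothetical 2x2 table of paired swab results, reconstructed from the
# reported prevalences. These counts are assumptions, not source data.
n = 1006
a = 32            # positive on both swabs
b = 52 - a        # T-positive only
c = 49 - a        # U-positive only
d = n - a - b - c # negative on both

po = (a + d) / n  # observed agreement (concordance)
# chance-expected agreement from the marginal totals (Cohen's kappa)
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
kappa = (po - pe) / (1 - pe)

print(f"concordance = {po:.1%}, kappa = {kappa:.2f}")
```

With these reconstructed counts the sketch yields a concordance of 96.3% and a kappa of 0.61, matching the reported values.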
These two commonly used methods for collecting pharyngeal swabs provide consistent estimates of the prevalence of carriage, but both methods misclassified carriers to some degree, leading to underestimates of the prevalence.
Endometrial cancer (EC), a neoplasm of the uterine epithelial lining, is the most common gynecological malignancy in developed countries and the fourth most common cancer among US women. Women with a family history of EC have an increased risk for the disease, suggesting that inherited genetic factors play a role. We conducted a two-stage genome-wide association study of Type I EC. Stage 1 included 5,472 women (2,695 cases and 2,777 controls) of European ancestry from seven studies. We selected independent single-nucleotide polymorphisms (SNPs) that displayed the most significant associations with EC in Stage 1 for replication among 17,948 women (4,382 cases and 13,566 controls) in a multiethnic population (African American, Asian, Latina, Hawaiian, and European ancestry) from nine studies. Although no novel variants reached genome-wide significance, we replicated previously identified associations with genetic markers near the HNF1B locus. Our findings suggest that larger studies with specific tumor classification are necessary to identify novel genetic polymorphisms associated with EC susceptibility.
Electronic supplementary material
The online version of this article (doi:10.1007/s00439-013-1369-1) contains supplementary material, which is available to authorized users.
Dietary phosphorus consumption has risen steadily in the United States. Oral phosphorus loading alters key regulatory hormones and impairs vascular endothelial function, which may lead to an increase in left ventricular mass (LVM). We investigated the association of dietary phosphorus with LVM in 4,494 participants from the Multi-Ethnic Study of Atherosclerosis, a community-based study of individuals free of known cardiovascular disease. The intake of dietary phosphorus was estimated using a 120-item food frequency questionnaire, and LVM was measured using magnetic resonance imaging. Regression models were used to determine associations of estimated dietary phosphorus with LVM and left ventricular hypertrophy (LVH). Mean estimated dietary phosphorus intake was 1,167 mg/day in men and 1,017 mg/day in women. After adjustment for demographics, dietary sodium, total calories, lifestyle factors, comorbidities, and established LVH risk factors, each quintile increase in estimated dietary phosphorus intake was associated with an estimated 1.1-g greater LVM. The highest gender-specific dietary phosphorus quintile was associated with an estimated 6.1-g greater LVM compared to the lowest quintile. Higher dietary phosphorus intake was associated with greater odds of LVH among women, but not men. These associations require confirmation in other studies.
Phosphorus; phosphate; diet; consumption; left ventricular mass; left ventricular hypertrophy
The effectiveness of screening colonoscopy in average-risk adults is uncertain, particularly for right colon cancers.
Examine the association between screening colonoscopy and incident late-stage colorectal cancer (CRC) risk.
Nested case-control study.
Four U.S. health plans.
Average-risk adults with ≥5 years of enrollment in one of the health plans (n=1,039). Cases were 55–85 years old on their diagnosis date (reference date) of stage ≥IIB (late-stage) CRC during 2006–2008. We selected 1–2 controls for each case, matched on birth year, gender, health plan, and prior enrollment duration.
Receipt of CRC screening between 3 months and up to 10 years before the reference date, ascertained through medical record audits. We compared cases and controls on receipt of screening colonoscopy or sigmoidoscopy using conditional logistic regressions that accounted for health history, socioeconomic status and other screening exposures.
In analyses restricted to 471 eligible cases and their matched controls (n=509), 13 cases (2.8%) and 46 controls (9.0%) had undergone screening colonoscopy, which corresponded to an adjusted odds ratio (AOR) of 0.30 (95% confidence interval [CI]: 0.15–0.59) for any late-stage CRC, 0.37 (CI: 0.16–0.82) for right colon cancers, and 0.26 (CI: 0.06–1.11) for left-sided colon/rectum cancers. Ninety-two cases (19.5%) and 173 controls (34.0%) underwent screening sigmoidoscopy, corresponding to an AOR of 0.51 (CI: 0.36–0.71) overall, 0.80 (CI: 0.52–1.25) for right colon late-stage cancers, and 0.26 (CI: 0.14–0.49) for left colon/rectum cancers.
The small number of screening colonoscopies affected the precision of our estimates.
Screening with colonoscopy in average-risk persons was associated with reduced risk of diagnosis with incident late-stage CRC in both the right colon and left colon/rectum. For sigmoidoscopy, this association was observed for left-sided CRC, but the association for right colon late-stage cancer was not statistically significant.
Primary Funding Source
National Cancer Institute of the National Institutes of Health.
Childbearing at an older age has been associated with a lower risk of endometrial cancer, but whether the association is independent of the number of births or other factors remains unclear. Individual-level data from 4 cohort and 13 case-control studies in the Epidemiology of Endometrial Cancer Consortium were pooled. A total of 8,671 cases of endometrial cancer and 16,562 controls were included in the analysis. After adjustment for known risk factors, endometrial cancer risk declined with increasing age at last birth (Ptrend < 0.0001). The pooled odds ratio per 5-year increase in age at last birth was 0.87 (95% confidence interval: 0.85, 0.90). Women who last gave birth at 40 years of age or older had a 44% decreased risk compared with women who had their last birth under the age of 25 years (odds ratio = 0.56, 95% confidence interval: 0.47, 0.66). The protective association was similar across the different age-at-diagnosis groups and for the 2 major tumor histologic subtypes (type I and type II). No effect modification was observed by body mass index, parity, or exogenous hormone use. In this large pooled analysis, late age at last birth was independently associated with a reduced risk of endometrial cancer, and the reduced risk persisted for many years.
endometrial neoplasms; parity; reproductive history
Diabetes is the leading cause of kidney disease in the developed world. Over time, the prevalence of diabetic kidney disease (DKD) may increase due to the expanding size of the diabetes population or decrease due to the implementation of diabetes therapies.
To define temporal changes in DKD prevalence in the United States.
Design, Setting, and Participants
Cross-sectional analyses of the Third National Health and Nutrition Examination Survey (NHANES III) from 1988–1994 (N = 15 073), NHANES 1999–2004 (N = 13 045), and NHANES 2005–2008 (N = 9588). Participants with diabetes were defined by levels of hemoglobin A1c of 6.5% or greater, use of glucose-lowering medications, or both (n = 1431 in NHANES III; n = 1443 in NHANES 1999–2004; n = 1280 in NHANES 2005–2008).
Main Outcome Measures
Diabetic kidney disease was defined as diabetes with albuminuria (ratio of urine albumin to creatinine ≥30 mg/g), impaired glomerular filtration rate (<60 mL/min/1.73 m², estimated using the Chronic Kidney Disease Epidemiology Collaboration formula), or both. Prevalence of albuminuria was adjusted to estimate persistent albuminuria.
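The impaired-GFR criterion rests on the 2009 CKD-EPI creatinine equation named above; a minimal sketch of that equation follows (the example patient is hypothetical):

```python
import math

def ckd_epi_2009(scr_mg_dl: float, age: int, female: bool,
                 black: bool = False) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine
    equation. The 2009 version included a race coefficient; the 2021
    revision removed it."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Hypothetical example: a 60-year-old woman with serum creatinine 1.2 mg/dL.
egfr = ckd_epi_2009(1.2, 60, female=True)
# ~49 mL/min/1.73 m^2 -- below the 60 threshold used above to define
# impaired glomerular filtration rate.
```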
The prevalence of DKD in the US population was 2.2% (95% confidence interval [CI], 1.8%–2.6%) in NHANES III, 2.8% (95% CI, 2.4%–3.1%) in NHANES 1999–2004, and 3.3% (95% CI, 2.8%–3.7%) in NHANES 2005–2008 (P<.001 for trend). The prevalence of DKD increased in direct proportion to the prevalence of diabetes, without a change in the prevalence of DKD among those with diabetes. Among persons with diabetes, use of glucose-lowering medications increased from 56.2% (95% CI, 52.1%–60.4%) in NHANES III to 74.2% (95% CI, 70.4%–78.0%) in NHANES 2005–2008 (P<.001); use of renin-angiotensin-aldosterone system inhibitors increased from 11.2% (95% CI, 9.0%–13.4%) to 40.6% (95% CI, 37.2%–43.9%), respectively (P<.001); the prevalence of impaired glomerular filtration rate increased from 14.9% (95% CI, 12.1%–17.8%) to 17.7% (95% CI, 15.2%–20.2%), respectively (P=.03); and the prevalence of albuminuria decreased from 27.3% (95% CI, 22.0%–32.7%) to 23.7% (95% CI, 19.3%–28.0%), respectively, but this was not statistically significant (P=.07).
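For a rough sense of how these interval widths arise, a simple normal-approximation (Wald) confidence interval for a prevalence can be sketched as below. This is illustrative only: the published NHANES estimates use survey weights and design-based variance, which this unweighted approximation ignores (the 9,588-participant denominator is taken from the 2005–2008 survey above).

```python
import math

def prevalence_ci(k: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% CI for a proportion k/n.
    Real NHANES analyses use survey weights and design-based
    standard errors, which generally widen these intervals."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# ~3.3% DKD prevalence among 9,588 NHANES 2005-2008 participants:
p, lo, hi = prevalence_ci(round(0.033 * 9588), 9588)
# ~2.9%-3.7%, similar in width to the published design-based
# interval of 2.8%-3.7%.
```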
Prevalence of DKD in the United States increased from 1988 to 2008 in proportion to the prevalence of diabetes. Among persons with diabetes, prevalence of DKD was stable despite increased use of glucose-lowering medications and renin-angiotensin-aldosterone system inhibitors.
MF59-adjuvanted trivalent influenza vaccine (Novartis Vaccines and Diagnostics, Siena, Italy) has been shown to be more effective than nonadjuvanted vaccine in the elderly population. Here we present results from a large-scale, observational, noninterventional, prospective postlicensure study that evaluated the safety of MF59-adjuvanted vaccine in elderly subjects aged 65 years or older. The study was performed in 5 northern Italian health districts during the 2006–2007, 2007–2008, and 2008–2009 influenza seasons. The choice of vaccine—either the MF59-adjuvanted vaccine or a nonadjuvanted influenza vaccine—was determined by individual providers on the basis of local influenza vaccination policy. Hospitalizations for potential adverse events of special interest (AESIs) were identified from hospital databases and then reviewed against recognized case definitions to identify confirmed cases of AESI. Cumulative incidences were calculated for AESIs in predefined biologically plausible time windows, as well as in a 6-month window following vaccination. During the 3-year study period, 170,988 vaccine doses were administered to a total of 107,661 persons. Despite the large study size, cases of AESI resulting in hospitalization were rare, and risks of AESI were similar in the MF59-adjuvanted and nonadjuvanted vaccination groups. In conclusion, similar safety profiles were observed for nonadjuvanted and MF59-adjuvanted seasonal influenza vaccines in elderly recipients.
adjuvants; elderly; influenza; influenza vaccine; MF59; vaccines; vaccine safety