To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding.
acquired immunodeficiency syndrome; case-cohort studies; cohort studies; confounding bias; HIV; pharmacoepidemiology; selection bias
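A minimal sketch of the weighted estimation described above, assuming person-period data with hypothetical column names (id, start, stop, event, art, cd4, vload); the toy data, the confounder set, and the pooled logistic weight models are illustrative, not the authors' specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)

# Toy person-period data standing in for the cohort: 300 people, 4 visits each
n, k = 300, 4
df = pd.DataFrame({"id": np.repeat(np.arange(n), k),
                   "start": np.tile(np.arange(k), n)})
df["stop"] = df["start"] + 1
df["cd4"] = rng.normal(350, 100, len(df))    # time-varying confounders
df["vload"] = rng.normal(4.0, 1.0, len(df))
df["art"] = rng.binomial(1, 1 / (1 + np.exp((df["cd4"] - 350) / 100)))
df["event"] = rng.binomial(1, 0.02, len(df))  # AIDS or death

def stabilized_iptw(d):
    """Stabilized inverse-probability-of-treatment weights from pooled
    logistic models for treatment given time-varying confounders."""
    denom = sm.Logit(d["art"], sm.add_constant(d[["cd4", "vload"]])).fit(disp=0)
    numer = sm.Logit(d["art"], np.ones((len(d), 1))).fit(disp=0)
    p_d = np.where(d["art"] == 1, denom.predict(), 1 - denom.predict())
    p_n = np.where(d["art"] == 1, numer.predict(), 1 - numer.predict())
    # Cumulate the visit-specific ratios within person over follow-up
    return pd.Series(p_n / p_d, index=d.index).groupby(d["id"]).cumprod()

df["w"] = stabilized_iptw(df)
# For a case-cohort sample, multiply w by design weights (1 for cases,
# 1/0.20 for subcohort members under the 20% random subcohort above),
# and use a robust or bootstrap variance because the weights are estimated.
ctv = CoxTimeVaryingFitter()
ctv.fit(df[["id", "start", "stop", "event", "art", "w"]], id_col="id",
        event_col="event", start_col="start", stop_col="stop", weights_col="w")
print(ctv.params_["art"])
```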
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984–1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
Bayes theorem; epidemiologic methods; inference; Monte Carlo method; posterior distribution; simulation
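A minimal sketch of the rejection sampling idea, using a hypothetical 2×2 case-control table and flat priors (the counts, priors, and batch size below are illustrative, not the study's data or prior specification):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical exposure counts (not the paper's data)
a, n1 = 10, 36    # exposed cases / total cases
c, n0 = 30, 198   # exposed controls / total controls

def loglik(p1, p0):  # product-binomial log-likelihood
    return (a * np.log(p1) + (n1 - a) * np.log1p(-p1)
            + c * np.log(p0) + (n0 - c) * np.log1p(-p0))

lmax = loglik(a / n1, c / n0)  # likelihood peaks at the sample proportions

# Rejection sampling with flat priors: propose from the prior and accept
# each draw with probability L(p1, p0) / Lmax
p1, p0 = rng.uniform(size=(2, 2_000_000))
keep = np.log(rng.uniform(size=p1.size)) < loglik(p1, p0) - lmax
or_draws = (p1[keep] * (1 - p0[keep])) / (p0[keep] * (1 - p1[keep]))
print(np.median(or_draws), np.percentile(or_draws, [2.5, 97.5]))
```

The accepted draws form a sample from the posterior; the median and 2.5th/97.5th percentiles summarize it in the same way as the posterior intervals quoted above.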
In occupational epidemiologic studies, the healthy-worker survivor effect refers to a process that leads to bias in the estimates of an association between cumulative exposure and a health outcome. In these settings, work status acts both as an intermediate and confounding variable, and may violate the positivity assumption (the presence of exposed and unexposed observations in all strata of the confounder). Using Monte Carlo simulation, we assess the degree to which crude, work-status adjusted, and weighted (marginal structural) Cox proportional hazards models are biased in the presence of time-varying confounding and nonpositivity. We simulate data representing time-varying occupational exposure, work status, and mortality. Bias, coverage, and root mean squared error (MSE) were calculated relative to the true marginal exposure effect in a range of scenarios. For a base-case scenario, using crude, adjusted, and weighted Cox models, respectively, the hazard ratio was biased downward 19%, 9%, and 6%; 95% confidence interval coverage was 48%, 85%, and 91%; and root MSE was 0.20, 0.13, and 0.11. Although marginal structural models were less biased in most scenarios studied, neither standard nor marginal structural Cox proportional hazards models fully resolve the bias encountered under conditions of time-varying confounding and nonpositivity.
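The performance metrics reported above are conventional; a minimal sketch of how bias, 95% confidence interval coverage, and root MSE are computed across simulation replicates (inputs are arrays of estimates and standard errors on the log hazard ratio scale; names hypothetical):

```python
import numpy as np

def performance(est, se, truth):
    """Summarize simulation replicates against the true log hazard ratio."""
    est, se = np.asarray(est), np.asarray(se)
    bias = est.mean() - truth
    cover = np.mean((est - 1.96 * se <= truth) & (truth <= est + 1.96 * se))
    rmse = np.sqrt(np.mean((est - truth) ** 2))
    return bias, cover, rmse
```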
Following the outbreaks of 2009 pandemic H1N1 infection, rapid influenza diagnostic tests were widely used to detect H1N1 infection. However, at the time this manuscript was drafted, no meta-analysis had been undertaken to assess their diagnostic accuracy.
The literature was systematically searched to identify studies that reported the performance of rapid tests. Random effects meta-analyses were conducted to summarize the overall performance.
Seventeen studies were selected, with 1879 cases and 3477 non-cases. The overall sensitivity and specificity estimates of the rapid tests were 0.51 (95% CI: 0.41, 0.60) and 0.98 (95% CI: 0.94, 0.99), respectively. Studies reported heterogeneous sensitivity estimates, ranging from 0.11 to 0.88. At a prevalence of 30%, the overall positive and negative predictive values were 0.94 (95% CI: 0.85, 0.98) and 0.82 (95% CI: 0.79, 0.85), respectively. The overall specificities from different manufacturers were comparable, while there were some differences in the overall sensitivity estimates. BinaxNOW had a lower overall sensitivity of 0.39 (95% CI: 0.24, 0.57) compared with all the others (p-value < 0.001), whereas QuickVue had a higher overall sensitivity of 0.57 (95% CI: 0.50, 0.63) compared with all the others (p-value = 0.005).
Rapid tests have high specificity but low sensitivity and thus limited usefulness.
meta-analysis; H1N1; diagnostic tests; rapid tests; sensitivity and specificity
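The predictive values quoted above follow from Bayes' theorem; a quick plug-in check with the pooled estimates (note the paper's pooled PPV of 0.94 comes from the full random-effects model, so this naive calculation reproduces it only approximately):

```python
def predictive_values(se, sp, prev):
    """PPV and NPV from sensitivity, specificity, and prevalence."""
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / ((1 - se) * prev + sp * (1 - prev))
    return ppv, npv

print(predictive_values(0.51, 0.98, 0.30))  # ~(0.92, 0.82) at 30% prevalence
```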
Lagging exposure information is often undertaken to allow for a latency period in cumulative exposure-disease analyses. The authors first consider bias and confidence interval coverage when using the standard approaches of fitting models under several lag assumptions and selecting the lag that maximizes either the effect estimate or model goodness of fit. Next, they consider bias that occurs when the assumption that the latency period is a fixed constant does not hold. Expressions were derived for bias due to misspecification of lag assumptions, and simulations were conducted. Finally, the authors describe a method for joint estimation of parameters describing an exposure-response association and the latency distribution. Analyses of associations between cumulative asbestos exposure and lung cancer mortality among textile workers illustrate this approach. Selecting the lag that maximizes the effect estimate may lead to bias away from the null; selecting the lag that maximizes model goodness of fit may lead to confidence intervals that are too narrow. These problems tend to increase as the within-person exposure variation diminishes. Lagging exposure assignment by a constant will lead to bias toward the null if the distribution of latency periods is not a fixed constant. Direct estimation of latency periods can minimize bias and improve confidence interval coverage.
asbestos; cohort studies; latency; neoplasms; survival analysis
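A sketch of the standard lag-selection scan the authors caution against, assuming annual person-period data in a pandas data frame with hypothetical columns (id, start, stop, event, exposure); selecting the lag that maximizes `coef` or `loglik` below inherits exactly the biases described above:

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

def scan_lags(df, lags=(0, 5, 10, 15, 20)):
    """Fit the cumulative-exposure Cox model under several fixed-lag
    assumptions and collect what the two selection rules would maximize."""
    out = {}
    for lag in lags:
        d = df.copy()
        # Exposure accrued during the most recent `lag` years is ignored
        d["cumx"] = d.groupby("id")["exposure"].transform(
            lambda s: s.shift(lag, fill_value=0.0).cumsum())
        m = CoxTimeVaryingFitter().fit(
            d[["id", "start", "stop", "event", "cumx"]], id_col="id",
            event_col="event", start_col="start", stop_col="stop")
        out[lag] = {"coef": m.params_["cumx"], "loglik": m.log_likelihood_}
    return pd.DataFrame(out).T
```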
Viremia copy-years predicted all-cause mortality independent of traditional, cross-sectional viral load measures and time-updated CD4+ T-lymphocyte count in antiretroviral therapy-treated patients, suggesting that cumulative human immunodeficiency virus replication causes harm independent of its effect on the degree of immunodeficiency.
Background. Cross-sectional plasma human immunodeficiency virus (HIV) viral load (VL) measures have proven invaluable for clinical and research purposes. However, cross-sectional VL measures fail to capture cumulative plasma HIV burden longitudinally. We evaluated the cumulative effect of exposure to HIV replication on mortality following initiation of combination antiretroviral therapy (ART).
Methods. We included treatment-naive HIV-infected patients starting ART from 2000 to 2008 at 8 Center for AIDS Research Network of Integrated Clinical Systems sites. Viremia copy-years, a time-varying measure of cumulative plasma HIV exposure, was determined for each patient as the area under the VL curve. Multivariable Cox models were used to evaluate the independent association of viremia copy-years with all-cause mortality.
Results. Among 2027 patients contributing 6579 person-years of follow-up, the median viremia copy-years was 5.3 log10 copy × y/mL (interquartile range: 4.9–6.3 log10 copy × y/mL), and 85 patients (4.2%) died. When evaluated separately, viremia copy-years (hazard ratio [HR] = 1.81 per log10 copy × y/mL; 95% confidence interval [CI]: 1.51–2.18), 24-week VL (HR = 1.74 per log10 copies/mL; 95% CI: 1.48–2.04), and most recent VL (HR = 1.89 per log10 copies/mL; 95% CI: 1.63–2.20) were associated with increased mortality. When simultaneously evaluating VL measures and controlling for other covariates, viremia copy-years increased mortality risk (HR = 1.44 per log10 copy × y/mL; 95% CI: 1.07–1.94), whereas no cross-sectional VL measure was independently associated with mortality.
Conclusions. Viremia copy-years predicted all-cause mortality independent of traditional, cross-sectional VL measures and time-updated CD4+ T-lymphocyte count in ART-treated patients, suggesting cumulative HIV replication causes harm independent of its effect on the degree of immunodeficiency.
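Viremia copy-years is the area under the viral-load curve; a minimal sketch with the trapezoidal rule and a hypothetical patient's measurements:

```python
import numpy as np

def copy_years(t_years, vl_copies_per_ml):
    """Cumulative viremia: area under the VL curve by the trapezoidal rule,
    returned on the log10 scale used in the text (log10 copy x y/mL)."""
    auc = np.sum((vl_copies_per_ml[1:] + vl_copies_per_ml[:-1]) / 2
                 * np.diff(t_years))
    return np.log10(auc)

t = np.array([0.0, 0.5, 1.0, 1.5])      # years since ART initiation
vl = np.array([1e5, 5e3, 400.0, 50.0])  # hypothetical copies/mL
print(copy_years(t, vl))                # ~4.4 log10 copy x y/mL
```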
Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified target population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The target population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the target population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified target population and thereby provides information regarding the generalizability of trial results.
bias; bias (epidemiology); causal inference; external validity; generalizability; randomized trials; standardization
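One common implementation of this standardization uses inverse-odds-of-sampling weights; a sketch assuming trial records (s = 1) are stacked with a sample of target-population records (s = 0) sharing measured determinants of selection (all names hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def inverse_odds_weights(stacked, covs, s_col="s"):
    """Weights that standardize trial members to the target population,
    given measured determinants of selection in `covs`."""
    fit = sm.Logit(stacked[s_col], sm.add_constant(stacked[covs])).fit(disp=0)
    p = fit.predict()  # P(in trial | covariates)
    return np.where(stacked[s_col] == 1, (1 - p) / p, 0.0)
```

Refitting the intent-to-treat analysis among trial members with these weights targets the effect in the specified population, under the abstract's assumption that the determinants of selection reflecting treatment-effect heterogeneity are measured and correctly modeled.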
We compared three ad hoc methods to estimate the marginal hazard of incident cancer AIDS during the highly active antiretroviral therapy calendar period (1996–2006) relative to the monotherapy/combination therapy calendar period (1990–1996), accounting for other AIDS events and deaths as competing risks.
Study Design and Setting
Among 1,911 HIV-positive men from the Multicenter AIDS Cohort Study, 228 developed cancer AIDS and 745 developed competing risks during 14,202 person-years from 1990 to 2006. Method 1 censored competing risks at the time they occurred, method 2 excluded competing risks, and method 3 censored competing risks at the date of analysis.
The age-, race-, and infection duration-adjusted hazard ratios (HRs) for cancer AIDS were similar for all methods (HR ≈ 0.15). We estimated bias and confidence interval (CI) coverage of each method with Monte Carlo simulation. On average across 24 scenarios, method 1 produced less biased estimates than methods 2 or 3.
When competing risks are independent of the event of interest, only method 1 produced unbiased estimates of the marginal HR, though independence cannot be verified from the data. When competing risks are dependent, method 1 generally produced the least biased estimates of the marginal HR for the scenarios explored; however, alternative methods may be preferred.
Competing risks; epidemiology; HIV; highly active antiretroviral therapy; cancer
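A sketch of the three data handlings compared above, assuming one record per man with a first-event time, an event type, and an administrative end of follow-up (column names hypothetical):

```python
import pandas as pd

def apply_method(df, method):
    """df columns: time (first-event time), type in {"cancer", "compete",
    "censor"}, admin_end (time to the date of analysis)."""
    d = df.copy()
    if method == 2:    # exclude men who had a competing event
        d = d[d["type"] != "compete"]
    elif method == 3:  # censor competing events at the date of analysis
        comp = d["type"] == "compete"
        d.loc[comp, "time"] = d.loc[comp, "admin_end"]
    # Method 1 is the default: competing events stay censored at their own time
    d["event"] = (d["type"] == "cancer").astype(int)
    return d[["time", "event"]]
```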
Typical applications of marginal structural time-to-event (e.g., Cox) models have used time on study as the time scale. Here, the authors illustrate use of time on treatment as an alternative time scale. In addition, a method is provided for estimating Kaplan-Meier–type survival curves for marginal structural models. For illustration, the authors estimate the total effect of highly active antiretroviral therapy on time to acquired immunodeficiency syndrome (AIDS) or death in 1,498 US men and women infected with human immunodeficiency virus and followed for 6,556 person-years between 1995 and 2002; 323 incident cases of clinical AIDS and 59 deaths occurred. Of the remaining 1,116 participants, 77% were still under observation at the end of follow-up. By using time on study, the hazard ratio for AIDS or death comparing always with never using highly active antiretroviral therapy from the marginal structural model was 0.52 (95% confidence interval: 0.35, 0.76). By using time on treatment, the analogous hazard ratio was 0.44 (95% confidence interval: 0.32, 0.60). In time-to-event analyses, the choice of time scale may have a meaningful impact on estimates of association and precision. In the present example, use of time on treatment yielded a hazard ratio further from the null and more precise than use of time on study as the time scale.
acquired immunodeficiency syndrome; antiretroviral therapy, highly active; bias (epidemiology); causal inference; confounding factors (epidemiology); proportional hazards model; survival curve; survival time
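A sketch of the two time scales and the Kaplan-Meier-type curve for the marginal structural model (an ordinary Kaplan-Meier estimator applied with inverse-probability weights); the data frame and column names are hypothetical, with weights assumed estimated as in the weighting sketches above:

```python
from lifelines import KaplanMeierFitter

def weighted_km(df):
    """df columns: entry, art_start (NaN if never treated), exit (times on a
    common scale), event (AIDS or death), ipw (inverse-probability weight)."""
    df = df.copy()
    df["t_study"] = df["exit"] - df["entry"]               # time on study
    on_tx = df[df["art_start"].notna()].copy()
    on_tx["t_treat"] = on_tx["exit"] - on_tx["art_start"]  # time on treatment
    km = KaplanMeierFitter()
    # IP-weighted survival curve on the time-on-treatment scale
    km.fit(on_tx["t_treat"], event_observed=on_tx["event"],
           weights=on_tx["ipw"])
    return km
```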
HIV-1 protease inhibitors (PIs) have antimalarial activity in vitro and in murine models. The potential beneficial effect of HIV-1 PIs on malaria has not been studied in clinical settings. We used data from Adult AIDS Clinical Trials Group A5208 sites where malaria is endemic to compare the incidence of clinically diagnosed malaria among HIV-infected adult women randomized to either lopinavir/ritonavir (LPV/r)-based antiretroviral therapy (ART) or nevirapine (NVP)-based ART. We calculated hazard ratios and 95% confidence intervals. We conducted a recurrent events analysis that included both first and second clinical malarial episodes and also conducted analyses to assess the sensitivity of results to outcome misclassification. Among the 445 women in this analysis, 137 (31%) received a clinical diagnosis of malaria at least once during follow-up. Of these 137, 72 (53%) were randomized to LPV/r-based ART. Assignment to the LPV/r treatment group (n = 226) was not consistent with a large decrease in the hazard of a first clinical malarial episode (hazard ratio = 1.11 [0.79 to 1.56]). The results were similar in the recurrent events analysis. Sensitivity analyses indicated the results were robust to reasonable levels of outcome misclassification. In this study, treatment with LPV/r compared with NVP had no apparent beneficial effect on the incidence of clinical malaria among HIV-infected adult women. Additional research concerning the effects of PI-based therapy on the incidence of malaria diagnosed by more specific criteria, and among groups at higher risk for severe disease, is warranted.
Plasma human immunodeficiency virus type 1 (HIV-1) viral load is a valuable tool for HIV research and clinical care but is often used in a noncumulative manner. The authors developed copy-years viremia as a measure of cumulative plasma HIV-1 viral load exposure among 297 HIV seroconverters from the Multicenter AIDS Cohort Study (1984–1996). Men were followed from seroconversion to incident acquired immunodeficiency syndrome (AIDS), death, or the beginning of the combination antiretroviral therapy era (January 1, 1996); the median duration of follow-up was 4.6 years (interquartile range (IQR), 2.7–6.5). The median viral load and level of copy-years viremia over 2,281 semiannual follow-up assessments were 29,628 copies/mL (IQR, 8,547–80,210) and 63,659 copies × years/mL (IQR, 15,935–180,341). A total of 127 men developed AIDS or died, and 170 survived AIDS-free and were censored on January 1, 1996, or lost to follow-up. Rank correlations between copy-years viremia and other measures of viral load were 0.56–0.87. Each log10 increase in copy-years viremia was associated with a 1.70-fold increased hazard (95% confidence interval: 0.94, 3.07) of AIDS or death, independently of infection duration, age, race, CD4 cell count, set-point, peak viral load, or most recent viral load. Copy-years viremia, a novel measure of cumulative viral burden, may provide prognostic information beyond traditional single measures of viremia.
acquired immunodeficiency syndrome; HIV; HIV infections; viral load; viremia
To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus–positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.
acquired immunodeficiency syndrome; bias (epidemiology); cohort studies; confounding factors (epidemiology); epidemiologic measurements; HIV; pharmacoepidemiology; selection bias
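As background on the calibration step, a minimal sketch of classical regression calibration (the continuous-exposure form, for illustration only; the paper's correction for a reported binary exposure is more involved). Column names are hypothetical: `x` is the true exposure in the validation data, `w` the mismeasured version available in both studies:

```python
import statsmodels.api as sm

def calibrate(validation, main, covs):
    """Regression calibration: model true exposure X from mismeasured W and
    covariates in the validation data, then impute E(X | W, C) in the main
    study in place of W."""
    rc = sm.OLS(validation["x"],
                sm.add_constant(validation[["w"] + covs])).fit()
    return rc.predict(sm.add_constant(main[["w"] + covs]))
```

As the abstract notes, the correction shifts the problem from bias to precision: the imputed exposures carry the uncertainty of the validation model, widening the confidence limits.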
The method of inverse probability weighting (henceforth, weighting) can be used to adjust for measured confounding and selection bias under the four assumptions of consistency, exchangeability, positivity, and no misspecification of the model used to estimate weights. In recent years, several published estimates of the effect of time-varying exposures have been based on weighted estimation of the parameters of marginal structural models because, unlike standard statistical methods, weighting can appropriately adjust for measured time-varying confounders affected by prior exposure. As an example, the authors describe the last three assumptions using the change in viral load due to initiation of antiretroviral therapy among 918 human immunodeficiency virus-infected US men and women followed for a median of 5.8 years between 1996 and 2005. The authors describe possible tradeoffs that an epidemiologist may encounter when attempting to make inferences. For instance, a tradeoff between bias and precision is illustrated as a function of the extent to which confounding is controlled. Weight truncation is presented as an informal and easily implemented method to deal with these tradeoffs. Inverse probability weighting provides a powerful methodological tool that may uncover causal effects of exposures that are otherwise obscured. However, as with all methods, diagnostics and sensitivity analyses are essential for proper use.
bias (epidemiology); causality; confounding factors (epidemiology); probability weighting; regression model
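Weight truncation as described above is a one-line operation; a minimal sketch (the percentile bounds are a common but arbitrary choice):

```python
import numpy as np

def truncate_weights(w, lo_pct=1, hi_pct=99):
    """Truncate (winsorize) IP weights at chosen percentiles, trading a
    little residual confounding for better precision."""
    lo, hi = np.percentile(w, [lo_pct, hi_pct])
    return np.clip(w, lo, hi)
```

A standard diagnostic: the mean of the stabilized weights should be close to 1 both before and after truncation; drift away from 1 suggests misspecification of the weight models.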
Many studies have chronicled the “epidemiologic synergy” between human immunodeficiency virus (HIV) and herpes simplex virus type 2 (HSV-2). HIV adversely affects the natural history of HSV-2 and results in more frequent and severe HSV-2 reactivation. Few longitudinal studies, however, have examined whether HSV-2 is associated with increased HIV plasma viral loads or decreased CD4 counts. The authors estimated the effect of HSV-2 seropositivity on HIV RNA viral load and on CD4 count over time among 777 HIV-seropositive US women not receiving suppressive HSV-2 therapy in the HIV Epidemiology Research Study (1993–2000). Linear mixed models were used to assess the effect of HSV-2 on log HIV viral load and CD4 count/mm3 prior to widespread initiation of highly active antiretroviral therapy. Coinfection with HSV-2 was not associated with HIV RNA plasma viral loads during study follow-up. There was a statistically significant association between HSV-2 seropositivity and CD4 count over time, but the difference was small and in the counterintuitive direction: an increase of 8 cells/mm3 (95% confidence interval: 2, 14) per year among HSV-2-seropositive women compared with HSV-2-seronegative women. These data do not support a clinically meaningful effect of baseline HSV-2 seropositivity on the trajectories of HIV plasma viral loads or CD4 counts.
CD4 lymphocyte count; herpes simplex; herpesvirus 2, human; HIV; viral load
The applied literature on propensity scores has often cited the c-statistic as a measure of the ability of the propensity score to control confounding. However, a high c-statistic in the propensity model is neither necessary nor sufficient for control of confounding. Moreover, use of the c-statistic as a guide in constructing propensity scores may result in less overlap in propensity scores between treated and untreated subjects; this may require the analyst to restrict populations for inference. Such restrictions may reduce precision of estimates and change the population to which the estimate applies. Variable selection based on prior subject matter knowledge, empirical observation, and sensitivity analysis is preferable and avoids many of these problems.
Propensity scores; c-statistic; variable selection; confounding
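A minimal sketch of the contrast drawn above: the c-statistic summarizes discrimination of the treatment model, while overlap of the propensity score distributions is what restriction decisions hinge on (X is a covariate matrix and t a 0/1 treatment array; names hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def ps_diagnostics(X, t):
    """c-statistic of a propensity model plus a crude overlap check;
    t must be a 0/1 numpy array."""
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    auc = roc_auc_score(t, ps)                      # discrimination only
    overlap = (np.percentile(ps[t == 1], [1, 99]),  # compare the supports of
               np.percentile(ps[t == 0], [1, 99]))  # the PS by treatment group
    return auc, overlap
```

A high c-statistic with barely overlapping percentile ranges signals exactly the problem the abstract describes: strong separation of treated and untreated subjects, not good confounding control.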
In the analysis of survival data, there are often competing events that preclude an event of interest from occurring. Regression analysis with competing risks is typically undertaken using a cause-specific proportional hazards model. However, modern alternative methods exist for the analysis of the subdistribution hazard with a corresponding subdistribution proportional hazards model. In this paper, we introduce a flexible parametric mixture model as a unifying method to obtain estimates of the cause-specific and subdistribution hazards and hazard ratio functions. We describe how these estimates can be summarized over time to give a single number that is comparable to the hazard ratio obtained from a corresponding cause-specific or subdistribution proportional hazards model. An application to the Women’s Interagency HIV Study is provided to investigate injection drug use and the time to either initiation of effective antiretroviral therapy or clinical disease progression as a competing event.
Cause-specific hazards; Competing risks; Hazard ratio; Mixture Model; Subdistribution; Subdistribution hazards; Survival analysis
Exposure lagging and exposure-time window analysis are 2 widely used approaches to allow for induction and latency periods in analyses of exposure-disease associations. Exposure lagging implies a strong parametric assumption about the temporal evolution of the exposure-disease association. An exposure-time window analysis allows for a more flexible description of temporal variation in exposure effects but may result in unstable risk estimates that are sensitive to how windows are defined. The authors describe a hierarchical regression approach that combines time window analysis with a parametric latency model. They illustrate this approach using data from 2 occupational cohort studies: studies of lung cancer mortality among 1) asbestos textile workers and 2) uranium miners. For each cohort, an exposure-time window analysis was compared with a hierarchical regression analysis with shrinkage toward a simpler, second-stage parametric latency model. In each cohort analysis, there is substantial stability gained in time window-specific estimates of association by using a hierarchical regression approach. The proposed hierarchical regression model couples a time window analysis with a parametric latency model; this approach provides a way to stabilize risk estimates derived from a time window analysis and a way to reduce bias arising from misspecification of a parametric latency model.
cohort studies; hierarchical model; latency; neoplasms; regression
Plasmodium falciparum malaria (Pf-malaria) and Epstein-Barr virus (EBV) infections coexist in children at risk for endemic Burkitt's lymphoma (eBL); yet studies have only glimpsed the cumulative effect of Pf-malaria on EBV-specific immunity. Using pooled EBV lytic and latent CD8+ T-cell epitope peptides, IFN-γ ELISPOT responses were surveyed three times between 2002 and 2004 among children (ages 10 months to 15 years) in Kenya. Prevalence ratios (PR) and 95% confidence intervals (CI) were estimated in association with Pf-malaria exposure, defined at the district level (Kisumu: holoendemic; Nandi: hypoendemic) and the individual level. We observed a 46% decrease in positive EBV lytic antigen IFN-γ responses among 5–9 year olds residing in Kisumu compared with Nandi (PR: 0.54; 95% CI: 0.30–0.99). Individual-level analysis in Kisumu revealed further impairment of EBV lytic antigen responses among 5–9 year olds consistently infected with Pf-malaria compared with those never infected. There were no observed district- or individual-level differences between Pf-malaria exposure and EBV latent antigen IFN-γ response. The gradual decrease of EBV lytic antigen but not latent antigen IFN-γ responses after primary infection suggests a specific loss of immunological control over the lytic cycle in children residing in malaria holoendemic areas, further refining our understanding of eBL etiology.
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed.
epidemiologic methods; selection bias; survival analysis
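A minimal sketch of the weight construction discussed above, assuming person-period data with hypothetical column names (id, cens flagging artificial censoring at that visit) and measured common predictors in `covs`:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def ipcw(df, covs):
    """Inverse-probability-of-censoring weights from a pooled logistic
    model for remaining uncensored at each visit."""
    fit = sm.Logit(1 - df["cens"], sm.add_constant(df[covs])).fit(disp=0)
    p_uncens = fit.predict()
    # Cumulate within person; extreme values signal the near-positivity
    # violations described above
    return pd.Series(1.0 / p_uncens, index=df.index).groupby(df["id"]).cumprod()
```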
Linear regression with a left-censored independent variable X due to limit of detection (LOD) was recently considered by 2 groups of researchers: Richardson and Ciampi, and Schisterman and colleagues.
Both groups obtained consistent estimators for the regression slopes by replacing left-censored X with a constant, that is, the expectation of X given X below the LOD, E(X|X < LOD).
Schisterman and colleagues argued that their approach would be a better choice because the sample mean of X given X above the LOD is available, whereas E(X|X < LOD) must be estimated from an assumed distribution for X.
Recommendations are given based on theoretical and simulation results. These recommendations are illustrated with 1 case study.
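For reference, the model-based constant E(X|X < LOD) has a closed form under normality (an assumption made here purely for illustration; the need for such a distributional assumption is exactly what the comparison above turns on):

```python
from scipy.stats import norm

def expected_below_lod(mu, sigma, lod):
    """E(X | X < LOD) for X ~ Normal(mu, sigma^2), via the inverse
    Mills ratio."""
    a = (lod - mu) / sigma
    return mu - sigma * norm.pdf(a) / norm.cdf(a)

print(expected_below_lod(0.0, 1.0, -1.0))  # about -1.53
```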
To estimate the effect of alcohol consumption on HIV acquisition while appropriately accounting for confounding by time-varying risk factors.
African American injection drug users in the AIDS Link to Intravenous Experience cohort study. Participants were recruited and followed with semiannual visits in Baltimore, Maryland, between 1988 and 2008.
Marginal structural models were used to estimate the effect of alcohol consumption on HIV acquisition.
At entry, 28% of the 1,525 participants were female, with a median (quartiles) age of 37 (32; 42) years and 10 (10; 12) years of formal education. During follow-up, 155 participants acquired HIV, and the distribution of alcohol consumption over the prior two years was 24%, 24%, 26%, 17%, and 9% for 0, 1–5, 6–20, 21–50, and 51–140 drinks/week, respectively. In analyses accounting for socio-demographic factors, drug use, and sexual activity, hazard ratios (95% confidence intervals) for participants reporting 1–5, 6–20, 21–50, and 51–140 drinks/week in the prior two years, compared with participants who reported 0 drinks/week, were 1.09 (0.60, 1.98), 1.18 (0.66, 2.09), 1.66 (0.94, 2.93), and 2.12 (1.15, 3.90), respectively. A trend test indicated a dose-response relationship between alcohol consumption and HIV acquisition (P value for trend = 9.7 × 10⁻⁴).
A dose-response relationship between alcohol consumption and subsequent HIV acquisition was indicated, independent of measured risk factors.
Alcohol consumption; HIV infection; Bias; Cohort studies; Injection drug users
Previous research identified differences in breast cancer-specific mortality across four "intrinsic" tumor subtypes: luminal A, luminal B, basal-like, and human epidermal growth factor receptor 2 positive/estrogen receptor negative (HER2+/ER−).
We used immunohistochemical markers to subtype 1149 invasive breast cancer patients (518 African American, 631 white) in the Carolina Breast Cancer Study, a population-based study of women diagnosed with breast cancer. Vital status was determined through 2006 using the National Death Index, with median follow-up of 9 years.
The luminal A, luminal B, basal-like, and HER2+/ER− subtypes were distributed as 64%, 11%, 11%, and 5% among whites and 48%, 8%, 22%, and 7% among African Americans, respectively. Breast cancer mortality was higher for patients with HER2+/ER− and basal-like breast cancer than for those with luminal A and B. African Americans had higher breast cancer-specific mortality than whites, but the effect of race was statistically significant only among women with luminal A breast cancer. However, when compared with the luminal A subtype within racial categories, mortality for patients with basal-like breast cancer was higher among whites (HR=2.0, 95% CI: 1.2, 3.4) than among African Americans (HR=1.5, 95% CI: 1.0, 2.4), with the strongest effect seen in postmenopausal white women (HR=3.9, 95% CI: 1.5, 10.0).
Our results confirm the association of basal-like breast cancer with poor prognosis and suggest that basal-like breast cancer is not an inherently more aggressive disease in African American women than in white women. Additional analyses are needed in populations with known treatment profiles to understand the role of tumor subtypes and race in breast cancer mortality, and in particular our finding that among women with luminal A breast cancer, African Americans have higher mortality than whites.
Breast Cancer; Breast Cancer Subtypes; Race; Survival; Epidemiology
To examine the impact of HIV on lung cancer incidence and survival.
Prospective study of 2,495 HIV-infected and HIV-uninfected injection drug users in Baltimore, MD.
Cancer data were obtained from the Maryland Cancer Registry. We estimated hazard ratios (HRs) and 95% confidence intervals (CIs) for lung cancer in two strata of packs smoked per day by HIV serostatus, and for mortality by HIV serostatus.
HIV-infected participants had approximately twice the risk of lung cancer (HR=2.3; 95% CI: 1.1-5.1). There was no evidence of an interaction between HIV and packs of cigarettes smoked per day (p-interaction=0.18). Compared with participants who smoked <1.43 packs per day, lung cancer risk among persons who smoked ≥1.43 packs per day was six times greater among HIV-uninfected individuals (HR=5.9; 95% CI: 2.1-17) and doubled among HIV-infected individuals (HR=2.1; 95% CI: 0.63-6.8). Additionally, HIV was associated with nearly four times the risk of death among lung cancer cases (HR=3.8; 95% CI: 0.92-15).
HIV was associated with increased risk of lung cancer, after adjusting for smoking. However, no evidence was observed for synergistic effects of HIV and smoking. Further, HIV was associated with poorer lung cancer survival, after accounting for cancer stage.
Cancer; Lung; Smoking; Survival; Drug Users