Studies have suggested that exposure to ultraviolet (UV) light may increase the risk of herpes simplex virus (HSV) recurrence. Between 1993 and 1997, the Herpetic Eye Disease Study (HEDS) randomized 703 participants with ocular HSV to receipt of acyclovir or placebo for prevention of ocular HSV recurrence. Of these, 308 HEDS participants (48% female and 85% white; median age, 49 years) were included in a nested study of exposures thought to cause recurrence and were followed for up to 15 months. We matched weekly UV index values from the National Oceanic and Atmospheric Administration to each participant's study center and used marginal structural Cox models to account for time-varying psychological stress and contact lens use, as well as for selection bias due to dropout. There were 44 recurrences of ocular HSV, yielding an incidence of 4.3 events per 1,000 person-weeks. Weighted hazard ratios comparing persons with ≥8 hours of time outdoors to those with less exposure were 0.84 (95% confidence interval (CI): 0.27, 2.63) and 3.10 (95% CI: 1.14, 8.48) for weeks with a UV index of <4 and ≥4, respectively (ratio of hazard ratios = 3.68, 95% CI: 0.43, 31.4). Though results were imprecise, when the UV index was higher (i.e., ≥4), spending 8 or more hours per week outdoors was associated with increased risk of ocular HSV recurrence.
cohort studies; herpes simplex virus; recurrence; sunlight; ultraviolet light; UV index
The method of maximum likelihood is widely used in epidemiology, yet many epidemiologists receive little or no education in the conceptual underpinnings of the approach. Here we provide a primer on maximum likelihood and some important extensions which have proven useful in epidemiologic research, and which reveal connections between maximum likelihood and Bayesian methods. For a given data set and probability model, maximum likelihood finds values of the model parameters that give the observed data the highest probability. As with all inferential statistical methods, maximum likelihood is based on an assumed model and cannot account for bias sources that are not controlled by the model or the study design. Maximum likelihood is nonetheless popular, because it is computationally straightforward and intuitive and because maximum likelihood estimators have desirable large-sample properties in the (largely fictitious) case in which the model has been correctly specified. Here, we work through an example to illustrate the mechanics of maximum likelihood estimation and indicate how improvements can be made easily with commercial software. We then describe recent extensions and generalizations which are better suited to observational health research and which should arguably replace standard maximum likelihood as the default method.
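The mechanics can be made concrete with a toy example (hypothetical data, not taken from the primer): for a binomial model, maximum likelihood picks the proportion that gives the observed data the highest probability, here found by brute-force grid search rather than calculus or software. Function names and the grid-search approach are illustrative only.

```python
import math

def binomial_log_likelihood(p, events, n):
    """Log-likelihood of a binomial proportion p given `events` successes in n trials."""
    return events * math.log(p) + (n - events) * math.log(1 - p)

def mle_proportion(events, n, grid_size=10000):
    """Grid-search maximizer of the binomial log-likelihood: the value of p
    that gives the observed data the highest probability. (The closed-form
    MLE is events / n; the grid search just makes the definition explicit.)"""
    best_p, best_ll = None, -math.inf
    for i in range(1, grid_size):
        p = i / grid_size
        ll = binomial_log_likelihood(p, events, n)
        if ll > best_ll:
            best_p, best_ll = p, ll
    return best_p

# hypothetical data: 30 events among 100 trials; the MLE is the sample proportion
p_hat = mle_proportion(30, 100)
```

In practice the maximizer is found analytically or by Newton-type algorithms, but the grid search shows what those algorithms are searching for.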
epidemiologic methods; maximum likelihood; modeling; penalized estimation; regression; statistics
Estimate the effect of alcohol consumption on HIV acquisition while appropriately accounting for confounding by time-varying risk factors.
African American injection drug users in the AIDS Link to Intravenous Experience cohort study. Participants were recruited and followed with semiannual visits in Baltimore, Maryland between 1988 and 2008.
Marginal structural models were used to estimate the effect of alcohol consumption on HIV acquisition.
At entry, 28% of the 1,525 participants were female; median (quartiles) age was 37 (32; 42) years, with 10 (10; 12) years of formal education. During follow-up, 155 participants acquired HIV; the proportions reporting 0, 1–5, 6–20, 21–50, and 51–140 drinks/week over the prior two years were 24%, 24%, 26%, 17%, and 9%, respectively. In analyses accounting for socio-demographic factors, drug use, and sexual activity, hazard ratios (95% confidence intervals) comparing participants reporting 1–5, 6–20, 21–50, and 51–140 drinks/week in the prior two years with those reporting 0 drinks/week were 1.09 (0.60, 1.98), 1.18 (0.66, 2.09), 1.66 (0.94, 2.93), and 2.12 (1.15, 3.90), respectively. A trend test indicated a dose-response relationship between alcohol consumption and HIV acquisition (P for trend = 9.7×10−4).
These results indicate a dose-response relationship between alcohol consumption and subsequent HIV acquisition, independent of measured known risk factors.
Alcohol consumption; HIV infection; Bias; Cohort studies; Injection drug users
Although it is clear that there are short-term effects of sodium intake on blood pressure, little is known about the most relevant timing of sodium exposure for the onset of hypertension. This question can only be addressed in cohorts with repeated measures of sodium intake.
Using up to 7 measures of dietary sodium intake and blood pressure between 1991 and 2009, we compared baseline, the mean of all measures, and the most recent sodium intake in association with incident hypertension in 6578 adults aged 18 to 65 years enrolled in the China Health and Nutrition Survey who were free of hypertension at baseline. We used survival methods that account for the interval-censored nature of this study, together with inverse probability weights, to generate adjusted survival curves and time-specific cumulative risk differences; hazard ratios were also estimated.
For the mean and most recent measures, the probability of hypertension-free survival was lowest in the highest sodium intake group compared with all other intake groups across the entire follow-up. In addition, the most recent sodium intake measure had a positive dose-response association with incident hypertension [risk difference at 11 years of follow-up = 0.04 (95% CI: −0.01, 0.09), 0.06 (0.00, 0.13), 0.18 (0.12, 0.24), and 0.20 (0.12, 0.27) for the second through fifth sodium intake groups compared with the lowest group, respectively]. Baseline sodium intake was not associated with incident hypertension.
These results suggest caution when using baseline sodium intake measures in studies with long-term follow-up.
China; sodium intake; incident hypertension; interval-censored; adjusted survival curves
Alcohol Drinking; HIV Seropositivity; Men who Have Sex with Men; Prospective Studies; Sexual Behavior
To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding.
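The inverse probability weights underlying such a marginal structural Cox model are typically stabilized: the numerator is the marginal probability of the treatment actually received, and the denominator conditions on measured confounder history. A one-person-period sketch in Python (the function name and inputs are hypothetical; in the article the probabilities come from fitted treatment models, and weights are multiplied over time):

```python
def stabilized_iptw(treated, p_treat_given_confounders, p_treat_marginal):
    """Stabilized inverse-probability-of-treatment weight for one person-period.
    treated: 1 if treated in this period, else 0.
    p_treat_given_confounders: model-based probability of the treatment
        actually received, conditional on measured confounder history.
    p_treat_marginal: marginal (numerator) probability of that treatment."""
    if treated:
        return p_treat_marginal / p_treat_given_confounders
    return (1 - p_treat_marginal) / (1 - p_treat_given_confounders)

# a treated person whose confounders made treatment unlikely is up-weighted
w = stabilized_iptw(1, 0.2, 0.4)  # weight of 2.0
```

Each person-period's weight creates, in expectation, a pseudo-population in which treatment is independent of the measured confounders.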
acquired immunodeficiency syndrome; case-cohort studies; cohort studies; confounding bias; HIV; pharmacoepidemiology; selection bias
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984–1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
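As a toy illustration of the rejection-sampling idea (deliberately simpler than either example in the article, with invented data), consider the posterior of a binomial proportion under a uniform prior: draw candidate values from the prior, then accept each with probability proportional to its likelihood. Function names and data are hypothetical.

```python
import math, random

def rejection_sample_posterior(events, n, draws=5000, seed=1):
    """Rejection sampler for the posterior of a binomial proportion under a
    uniform prior: draw p from the prior, then accept it with probability
    L(p) / L(p_hat), where L is the binomial likelihood and p_hat = events/n
    is the maximum-likelihood value (so acceptance probabilities are <= 1)."""
    rng = random.Random(seed)
    p_hat = events / n
    max_ll = events * math.log(p_hat) + (n - events) * math.log(1 - p_hat)
    accepted = []
    while len(accepted) < draws:
        p = rng.random()
        if p <= 0.0:
            continue  # avoid log(0); rng.random() is already < 1
        ll = events * math.log(p) + (n - events) * math.log(1 - p)
        if rng.random() < math.exp(ll - max_ll):
            accepted.append(p)
    return accepted

# hypothetical data: 3 events in 10 trials; the exact posterior is Beta(4, 8)
samples = rejection_sample_posterior(3, 10)
posterior_mean = sum(samples) / len(samples)
```

The accepted draws are samples from the posterior, so posterior means and percentile intervals can be read directly off them, which is what makes the method transparent.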
Bayes theorem; epidemiologic methods; inference; Monte Carlo method; posterior distribution; simulation
In HIV-1 clinical trials, interest often lies in comparing how well treatments suppress the HIV-1 RNA viral load. The current practice in statistical analysis of such trials is to define a single ad hoc composite event which combines information about both the viral load suppression and the subsequent viral rebound, and then analyze the data using standard univariate survival analysis techniques. The main weakness of this approach is that the results of the analysis can be easily influenced by minor details in the definition of the composite event. We propose a straightforward alternative endpoint based on the probability of being suppressed over time, and suggest that treatment differences be summarized using the restricted mean time a patient spends in the state of viral suppression. A nonparametric analysis is based on methods for multiple endpoint studies. We demonstrate the utility of our analytic strategy using a recent therapeutic trial, in which the protocol specified a primary analysis using a composite endpoint approach.
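The proposed summary, the restricted mean time spent suppressed, is the area under the probability-of-suppression curve up to a horizon. A minimal sketch assuming a right-continuous step-function estimate of that curve (the data layout and function name are hypothetical, not the article's implementation):

```python
def restricted_mean_time_in_state(prob_curve, tau):
    """Restricted mean time in the suppressed state up to time tau, computed
    as the area under a step-function estimate of P(suppressed at t).
    prob_curve: list of (time, probability) pairs sorted by time; each
    probability holds from its time until the next change point."""
    total = 0.0
    for (t0, p), (t1, _) in zip(prob_curve, prob_curve[1:]):
        if t0 >= tau:
            break
        total += p * (min(t1, tau) - t0)
    # carry the last value forward to tau if the curve ends earlier
    last_t, last_p = prob_curve[-1]
    if last_t < tau:
        total += last_p * (tau - last_t)
    return total

# hypothetical curve: nobody suppressed before week 4, 80% suppressed
# weeks 4-20, 60% thereafter; mean weeks suppressed by week 24
mean_weeks = restricted_mean_time_in_state([(0, 0.0), (4, 0.8), (20, 0.6)], 24)
```

A treatment difference is then the difference in these areas between arms, a quantity that does not depend on how a composite event is defined.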
AIDS; Clinical trial endpoint; Counting processes; Multistate models; Survival analysis
The parametric g-formula can be used to estimate the effect of a policy, intervention, or treatment. Unlike standard regression approaches, the parametric g-formula can be used to adjust for time-varying confounders that are affected by prior exposures. To date, there are few published examples in which the method has been applied.
We provide a simple introduction to the parametric g-formula and illustrate its application in analysis of a small cohort study of bone marrow transplant patients in which the effect of treatment on mortality is subject to time-varying confounding.
Standard regression adjustment yields a biased estimate of the effect of treatment on mortality relative to the estimate obtained by the g-formula.
The g-formula allows estimation of a relevant parameter for public health officials: the change in the hazard of mortality under a hypothetical intervention, such as reduction of exposure to a harmful agent or introduction of a beneficial new treatment. We present a simple approach to implement the parametric g-formula that is sufficiently general to allow easy adaptation to many settings of public health relevance.
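For a single binary exposure and one binary confounder, the g-formula reduces to nonparametric standardization: average the stratum-specific risks over the confounder distribution. The Python sketch below shows that simplest case (data layout and function name are hypothetical; it omits the time-varying machinery and parametric models that motivate the article):

```python
def g_formula_point_exposure(records):
    """Nonparametric g-formula (standardization) for a single binary exposure A,
    binary confounder L, and binary outcome Y.
    records: list of (L, A, Y) tuples. Returns the standardized risks under
    'everyone exposed' and 'everyone unexposed'."""
    risks = {}
    for a in (0, 1):
        std_risk = 0.0
        for l in (0, 1):
            stratum = [y for (li, ai, y) in records if li == l and ai == a]
            n_l = sum(1 for (li, _, _) in records if li == l)
            if not stratum:
                raise ValueError("positivity violated in stratum L=%d, A=%d" % (l, a))
            risk_la = sum(stratum) / len(stratum)   # P(Y=1 | A=a, L=l)
            weight = n_l / len(records)             # P(L=l)
            std_risk += risk_la * weight
        risks[a] = std_risk
    return risks[1], risks[0]

# hypothetical data: within each L stratum, exposure raises risk by 0.5
data = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 1),
        (1, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
risk_exposed, risk_unexposed = g_formula_point_exposure(data)
```

The parametric version replaces the stratum-specific empirical risks with model-based predictions, which is what makes the approach feasible with many or continuous time-varying covariates.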
In occupational epidemiologic studies, the healthy-worker survivor effect refers to a process that leads to bias in the estimates of an association between cumulative exposure and a health outcome. In these settings, work status acts both as an intermediate and confounding variable, and may violate the positivity assumption (the presence of exposed and unexposed observations in all strata of the confounder). Using Monte Carlo simulation, we assess the degree to which crude, work-status adjusted, and weighted (marginal structural) Cox proportional hazards models are biased in the presence of time-varying confounding and nonpositivity. We simulate data representing time-varying occupational exposure, work status, and mortality. Bias, confidence interval coverage, and root mean squared error (RMSE) were calculated relative to the true marginal exposure effect in a range of scenarios. For a base-case scenario, using crude, adjusted, and weighted Cox models, respectively, the hazard ratio was biased downward 19%, 9%, and 6%; 95% confidence interval coverage was 48%, 85%, and 91%; and RMSE was 0.20, 0.13, and 0.11. Although marginal structural models were less biased in most scenarios studied, neither standard nor marginal structural Cox proportional hazards models fully resolve the bias encountered under conditions of time-varying confounding and nonpositivity.
Gene expression analyses indicate that breast cancer is a heterogeneous disease with at least 5 immunohistologic subtypes. Despite growing evidence that these subtypes are etiologically and prognostically distinct, few studies have investigated whether they have divergent genetic risk factors. To help fill in this gap in our understanding, we examined associations between breast cancer subtypes and previously established susceptibility loci among white and African-American women in the Carolina Breast Cancer Study.
We used Bayesian polytomous logistic regression to estimate odds ratios (ORs) and 95% posterior intervals (PIs) for the association between each of 78 single nucleotide polymorphisms (SNPs) and 5 breast cancer subtypes. Subtypes were defined using 5 immunohistochemical markers: estrogen receptors (ER), progesterone receptors (PR), human epidermal growth factor receptors 1 and 2 (HER1/2) and cytokeratin (CK) 5/6.
Several SNPs in TNRC9/TOX3 were associated with luminal A (ER/PR+, HER2−) or basal-like breast cancer (ER−, PR−, HER2−, HER1 or CK 5/6+), and one SNP (rs3104746) was associated with both. SNPs in FGFR2 were associated with luminal A, luminal B (ER/PR+, HER2+), or HER2+/ER− disease, but none were associated with basal-like disease. We also observed subtype differences in the effects of SNPs in 2q35, 4p, TLR1, MAP3K1, ESR1, CDKN2A/B, ANKRD16, and ZMIZ1.
Conclusion and Impact
We found evidence that genetic risk factors for breast cancer vary by subtype and further clarified the role of several key susceptibility genes.
breast cancer; single nucleotide polymorphisms; breast cancer subtypes; GWAS; Bayesian analysis
In this article, we present an overview and tutorial of statistical methods for meta-analysis of diagnostic tests under two scenarios: 1) when the reference test can be considered a gold standard; and 2) when the reference test cannot be considered a gold standard. In the first scenario, we first review the conventional summary receiver operating characteristics (ROC) approach and a bivariate approach using linear mixed models (BLMM). Both approaches require direct calculations of study-specific sensitivities and specificities. We next discuss the hierarchical summary ROC curve approach for jointly modeling positivity criteria and accuracy parameters, and the bivariate generalized linear mixed models (GLMM) for jointly modeling sensitivities and specificities. We further discuss the trivariate GLMM for jointly modeling prevalence, sensitivities and specificities, which allows us to assess the correlations among the three parameters. These approaches are based on the exact binomial distribution and thus do not require an ad hoc continuity correction. Last, we discuss a latent class random effects model for meta-analysis of diagnostic tests when the reference test itself is imperfect for the second scenario. A number of case studies with detailed annotated SAS code in procedures MIXED and NLMIXED are presented to facilitate the implementation of these approaches.
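The conventional approaches reviewed first require study-specific sensitivities and specificities from each study's 2×2 table, with an ad hoc continuity correction when a cell is zero (the correction that the exact-binomial GLMM approaches make unnecessary). A small Python sketch, with a hypothetical function name and the common half-count correction convention:

```python
def study_accuracy(tp, fp, fn, tn, correction=0.5):
    """Study-specific sensitivity and specificity from a 2x2 table of
    index-test results against a gold-standard reference. When any cell
    is zero, an ad hoc continuity correction is added to every cell,
    as conventional summary-ROC and BLMM analyses typically require."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + correction for x in (tp, fp, fn, tn))
    sensitivity = tp / (tp + fn)  # P(test+ | disease+)
    specificity = tn / (tn + fp)  # P(test- | disease-)
    return sensitivity, specificity

# hypothetical study: 80 true positives, 10 false positives,
# 20 false negatives, 90 true negatives
sens, spec = study_accuracy(80, 10, 20, 90)
```

The GLMM approaches instead model the cell counts with exact binomial likelihoods, so zero cells enter the analysis as-is.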
meta-analysis; diagnostic test; gold standard; generalized linear mixed models
In a recent issue of the Journal, Kirkeleit et al. (Am J Epidemiol. 2013;177(11):1218–1224) provided empirical evidence for the potential of the healthy worker effect in a large cohort of Norwegian workers across a range of occupations. In this commentary, we provide some historical context, define the healthy worker effect by using causal diagrams, and use simulated data to illustrate how structural nested models can be used to estimate exposure effects while accounting for the healthy worker survivor effect in 4 simple steps. We provide technical details and annotated SAS software (SAS Institute, Inc., Cary, North Carolina) code corresponding to the example analysis in the Web Appendices, available at http://aje.oxfordjournals.org/.
causal inference; healthy worker effect; marginal structural models; occupational epidemiology; structural nested models
In case-control studies, exposure assessments are almost always error-prone. In the absence of a gold standard, two or more assessment approaches are often used to classify people with respect to exposure. Each imperfect assessment tool may lead to misclassification of exposure assignment; the exposure misclassification may be differential with respect to case status or not; and the errors in exposure classification under the different approaches may be independent (conditional upon the true exposure status) or not. Although methods have been proposed to study diagnostic accuracy in the absence of a gold standard, these methods are infrequently used in case-control studies to correct exposure misclassification that is simultaneously differential and dependent. In this paper, we propose a Bayesian method to estimate the measurement-error-corrected exposure-disease association, accounting for both differential and dependent misclassification. The performance of the proposed method is investigated using simulations, which show that the approach works well, and is illustrated with an application to a case-control study assessing the association between asbestos exposure and mesothelioma.
Case-control study; gold standard; misclassification; dependent; differential
Predictors of study retention and scheduled visit attendance in the University of North Carolina Center for AIDS Research (UNC CFAR) prospective clinical cohort of HIV-infected patients enrolled between 1 January 2001 and 1 January 2008 are reported. At study entry, the 1636 participants were 32% female and 58% African-American; 49% had not received HIV care elsewhere, 71% were receiving or had initiated combination antiretroviral therapy, and 26% had been diagnosed with AIDS. Median (quartiles) age was 40 (34; 47) years, distance to clinic 45 (21; 70) miles, HIV-1 RNA 1396 (200; 26,750) copies/ml, and CD4 count 374 (182; 602) cells/mm3. Participants contributed a median of 7 (4; 13) scheduled visits and 2.25 (1.0; 3.9) years alive under follow-up. During 6134 person-years of follow-up, 414 participants dropped out and 145 died. Accounting for differences in death by participant characteristics, the 6-year cumulative probability of retention was 67% [95% confidence limits (CL): 65, 70%], with 6.75 (95% CL: 6.13, 7.43) dropouts per 100 person-years. In a multivariable Cox proportional hazards model, retention was higher among participants who were insured, had not received HIV care elsewhere, had controlled HIV viremia, and were living in nonurban areas or proximate to the clinic. In a multivariable modified Poisson regression model that accounted for differences in drop out and death by participant characteristics, visit attendance was higher among older, AIDS-diagnosed, immune compromised, and cART-initiated participants. The UNC CFAR clinical cohort has ample enrollment, with retention and visit attendance modestly influenced by factors such as disease severity.
Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified target population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The target population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the target population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified target population and thereby provides information regarding the generalizability of trial results.
bias; bias (epidemiology); causal inference; external validity; generalizability; randomized trials; standardization
Marginal structural models were developed as a semiparametric alternative to the G-computation formula to estimate causal effects of exposures. In practice, these models are often specified using parametric regression models. As such, the usual conventions regarding regression model specification apply. This paper outlines strategies for marginal structural model specification, and considerations for the functional form of the exposure metric in the final structural model. We propose a quasi-likelihood information criterion adapted from use in generalized estimating equations. We evaluate the properties of our proposed information criterion using a limited simulation study. We illustrate our approach using two empirical examples. In the first example, we use data from a randomized breastfeeding promotion trial to estimate the effect of breastfeeding duration on infant weight at one year. In the second example, we use data from two prospective cohort studies to estimate the effect of highly active antiretroviral therapy on CD4 count in an observational cohort of HIV-infected men and women. The marginal structural model specified should reflect the scientific question being addressed, but can also assist in exploration of other plausible and closely related questions. In marginal structural models, as in any regression setting, correct inference depends on correct model specification. Our proposed information criterion provides a formal method for comparing model fit for different specifications.
Bias; Causal inference; Marginal structural model; Regression analysis; Model specification
We compared three ad hoc methods to estimate the marginal hazard of incident cancer AIDS in the highly active antiretroviral therapy calendar period (1996–2006) relative to the monotherapy/combination therapy period (1990–1996), accounting for other AIDS events and deaths as competing risks.
Study Design and Setting
Among 1911 HIV+ men from the Multicenter AIDS Cohort Study, 228 developed cancer AIDS and 745 developed competing risks in 14,202 person-years from 1990–2006. Method 1 censored competing risks at the time they occurred, method 2 excluded competing risks, and method 3 censored competing risks at the date of analysis.
The age-, race-, and infection duration–adjusted hazard ratios (HRs) for cancer AIDS were similar for all methods (HR ≈ 0.15). We estimated the bias and confidence interval (CI) coverage of each method with Monte Carlo simulation. On average across 24 scenarios, method 1 produced less biased estimates than methods 2 or 3.
When competing risks are independent of the event of interest, only method 1 produced unbiased estimates of the marginal HR, though independence cannot be verified from the data. When competing risks are dependent, method 1 generally produced the least biased estimates of the marginal HR for the scenarios explored; however, alternative methods may be preferred.
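The three ad hoc methods are easiest to see in terms of how each subject contributes person-time. A hypothetical Python sketch using crude incidence rates rather than the article's Cox models (subject data, labels, and function names are invented for illustration):

```python
def incidence_by_method(subjects, admin_end):
    """Incidence rate of the event of interest under three ad hoc
    competing-risk handling methods.
    subjects: list of (time, kind) with kind in {'event', 'competing', 'censor'}.
    Method 1 censors competing risks when they occur; method 2 excludes those
    subjects entirely; method 3 censors them at admin_end (date of analysis)."""
    def rate(pairs):
        events = sum(1 for _, k in pairs if k == 'event')
        person_time = sum(t for t, _ in pairs)
        return events / person_time
    m1 = rate(subjects)  # competing risks contribute person-time until they occur
    m2 = rate([(t, k) for t, k in subjects if k != 'competing'])
    m3 = rate([(admin_end if k == 'competing' else t, k) for t, k in subjects])
    return m1, m2, m3

# hypothetical cohort: events at years 2 and 8, a competing risk at year 4,
# a censoring at year 6; administrative end of study at year 10
m1, m2, m3 = incidence_by_method(
    [(2, 'event'), (4, 'competing'), (6, 'censor'), (8, 'event')], 10)
```

The three rates differ only through the person-time attributed to the subject with the competing risk, which is exactly where the methods' biases arise.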
Competing risks; epidemiology; HIV; highly active antiretroviral therapy; cancer
Typical applications of marginal structural time-to-event (e.g., Cox) models have used time on study as the time scale. Here, the authors illustrate use of time on treatment as an alternative time scale. In addition, a method is provided for estimating Kaplan-Meier–type survival curves for marginal structural models. For illustration, the authors estimate the total effect of highly active antiretroviral therapy on time to acquired immunodeficiency syndrome (AIDS) or death in 1,498 US men and women infected with human immunodeficiency virus and followed for 6,556 person-years between 1995 and 2002; 323 incident cases of clinical AIDS and 59 deaths occurred. Of the remaining 1,116 participants, 77% were still under observation at the end of follow-up. By using time on study, the hazard ratio for AIDS or death comparing always with never using highly active antiretroviral therapy from the marginal structural model was 0.52 (95% confidence interval: 0.35, 0.76). By using time on treatment, the analogous hazard ratio was 0.44 (95% confidence interval: 0.32, 0.60). In time-to-event analyses, the choice of time scale may have a meaningful impact on estimates of association and precision. In the present example, use of time on treatment yielded a hazard ratio further from the null and more precise than use of time on study as the time scale.
acquired immunodeficiency syndrome; antiretroviral therapy, highly active; bias (epidemiology); causal inference; confounding factors (epidemiology); proportional hazards model; survival curve; survival time
Plasma human immunodeficiency virus type 1 (HIV-1) viral load is a valuable tool for HIV research and clinical care but is often used in a noncumulative manner. The authors developed copy-years viremia as a measure of cumulative plasma HIV-1 viral load exposure among 297 HIV seroconverters from the Multicenter AIDS Cohort Study (1984–1996). Men were followed from seroconversion to incident acquired immunodeficiency syndrome (AIDS), death, or the beginning of the combination antiretroviral therapy era (January 1, 1996); the median duration of follow-up was 4.6 years (interquartile range (IQR), 2.7–6.5). The median viral load and level of copy-years viremia over 2,281 semiannual follow-up assessments were 29,628 copies/mL (IQR, 8,547–80,210) and 63,659 copies × years/mL (IQR, 15,935–180,341). A total of 127 men developed AIDS or died, and 170 survived AIDS-free and were censored on January 1, 1996, or lost to follow-up. Rank correlations between copy-years viremia and other measures of viral load were 0.56–0.87. Each log10 increase in copy-years viremia was associated with a 1.70-fold increased hazard (95% confidence interval: 0.94, 3.07) of AIDS or death, independently of infection duration, age, race, CD4 cell count, set-point, peak viral load, or most recent viral load. Copy-years viremia, a novel measure of cumulative viral burden, may provide prognostic information beyond traditional single measures of viremia.
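A natural implementation of a cumulative viral-load measure such as copy-years viremia is the trapezoidal area under the viral-load curve across visits; the sketch below assumes that convention (the function name and data are hypothetical, and the authors' exact interpolation rule is not reproduced here):

```python
def copy_years_viremia(visits):
    """Cumulative plasma viral-load burden as the trapezoidal area under the
    viral-load curve. visits: list of (years_since_seroconversion,
    copies_per_mL) pairs sorted by time. Returns copies x years / mL."""
    area = 0.0
    for (t0, v0), (t1, v1) in zip(visits, visits[1:]):
        area += 0.5 * (v0 + v1) * (t1 - t0)  # trapezoid between adjacent visits
    return area

# hypothetical semiannual assessments over the first year after seroconversion
burden = copy_years_viremia([(0.0, 10000), (0.5, 30000), (1.0, 20000)])
```

Unlike set point or peak viral load, this summary grows with both the magnitude and the duration of viremia, which is what gives it prognostic content beyond single measures.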
acquired immunodeficiency syndrome; HIV; HIV infections; viral load; viremia
Kaposi sarcoma and lymphoma rates were highest immediately after antiretroviral therapy (ART) initiation, particularly among patients with low CD4 cell counts, whereas other cancers increased with time on ART. Calendar year of ART initiation was not associated with subsequent cancer incidence.
Cancer is an important cause of morbidity and mortality in individuals infected with human immunodeficiency virus (HIV), but patterns of cancer incidence after combination antiretroviral therapy (ART) initiation remain poorly characterized.
We evaluated the incidence and timing of cancer diagnoses among patients initiating ART between 1996 and 2011 in a collaboration of 8 US clinical HIV cohorts. Poisson regression was used to estimate incidence rates. Cox regression was used to identify demographic and clinical characteristics associated with cancer incidence after ART initiation.
At initiation of first combination ART among 11 485 patients, median year was 2004 (interquartile range [IQR], 2000–2007) and median CD4 count was 202 cells/mm3 (IQR, 61–338). Incidence rates for Kaposi sarcoma (KS) and lymphomas were highest in the first 6 months after ART initiation (P < .001) and plateaued thereafter, while incidence rates for all other cancers combined increased from 416 to 615 cases per 100 000 person-years from 1 to 10 years after ART initiation (average 7% increase per year; 95% confidence interval, 2%–13%). Lower CD4 count at ART initiation was associated with greater risk of KS, lymphoma, and human papillomavirus–related cancer. Calendar year of ART initiation was not associated with cancer incidence.
KS and lymphoma rates were highest immediately following ART initiation, particularly among patients with low CD4 cell counts, whereas other cancers increased with time on ART, likely reflecting increased cancer risk with aging. Our results underscore recommendations for earlier HIV diagnosis followed by prompt ART initiation along with ongoing aggressive cancer screening and prevention efforts throughout the course of HIV care.
HIV-associated malignancies; AIDS-defining cancer; non-AIDS-defining cancer; combination antiretroviral therapy
To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus–positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.
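Regression calibration, one ingredient of the approach above, replaces an error-prone exposure measurement with its expected value given that measurement, estimated from validation data in which both are observed. A minimal sketch for a single continuous exposure (names and data are hypothetical; the article's models additionally involve inverse probability weights and time-varying binary exposure):

```python
def calibrate(validation_w, validation_x, main_w):
    """Regression calibration: fit E[X | W] by simple least squares in the
    validation sample (true X and error-prone W both observed), then replace
    each error-prone measurement W in the main study with its prediction."""
    n = len(validation_w)
    mean_w = sum(validation_w) / n
    mean_x = sum(validation_x) / n
    cov = sum((w - mean_w) * (x - mean_x)
              for w, x in zip(validation_w, validation_x)) / n
    var = sum((w - mean_w) ** 2 for w in validation_w) / n
    slope = cov / var
    intercept = mean_x - slope * mean_w
    return [intercept + slope * w for w in main_w]

# hypothetical validation data in which X = 2W + 1 exactly
calibrated = calibrate([0, 1, 2, 3], [1, 3, 5, 7], [0, 5])
```

The calibrated values then enter the outcome model in place of the mismeasured exposure; as the abstract notes, uncertainty from the calibration step shows up as wider confidence limits.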
acquired immunodeficiency syndrome; bias (epidemiology); cohort studies; confounding factors (epidemiology); epidemiologic measurements; HIV; pharmacoepidemiology; selection bias
Lymphoma is the leading cause of cancer-related death among HIV-infected patients in the antiretroviral therapy (ART) era.
We studied lymphoma patients in the Centers for AIDS Research Network of Integrated Clinical Systems from 1996 until 2010. We examined differences stratified by histology and diagnosis year. Mortality and predictors of death were analyzed using Kaplan–Meier curves and Cox proportional hazards models.
Of 23 050 HIV-infected individuals, 476 (2.1%) developed lymphoma (79 [16.6%] Hodgkin lymphoma [HL]; 201 [42.2%] diffuse large B-cell lymphoma [DLBCL]; 56 [11.8%] Burkitt lymphoma [BL]; 54 [11.3%] primary central nervous system lymphoma [PCNSL]; and 86 [18.1%] other non-Hodgkin lymphoma [NHL]). At diagnosis, HL patients had higher CD4 counts and lower HIV RNA than NHL patients. PCNSL patients had the lowest and BL patients had the highest CD4 counts among NHL categories. During the study period, CD4 count at lymphoma diagnosis progressively increased and HIV RNA decreased. Five-year survival was 61.6% for HL, 50.0% for BL, 44.1% for DLBCL, 43.3% for other NHL, and 22.8% for PCNSL. Mortality was associated with age (adjusted hazard ratio [AHR] = 1.28 per decade increase, 95% confidence interval [CI] = 1.06 to 1.54), lymphoma occurrence on ART (AHR = 2.21, 95% CI = 1.53 to 3.20), CD4 count (AHR = 0.81 per 100 cells/µL increase, 95% CI = 0.72 to 0.90), HIV RNA (AHR = 1.13 per log10 copies/mL, 95% CI = 1.00 to 1.27), and histology but not earlier diagnosis year.
HIV-associated lymphoma is heterogeneous and changing, with less immunosuppression and greater HIV control at diagnosis. Stable survival and increased mortality for lymphoma occurring on ART call for greater biologic insights to improve outcomes.
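The Kaplan–Meier curves used in the mortality analysis above follow the product-limit form S(t) = Π over event times t_i ≤ t of (1 − d_i/n_i), where d_i is the number of deaths and n_i the number at risk at t_i. A minimal sketch on toy, hypothetical data (not the study data):

```python
from collections import Counter

def kaplan_meier(times, events):
    """Product-limit survival estimates: at each event time t,
    multiply the running survival by (1 - deaths / number at risk)."""
    deaths = Counter(t for t, e in zip(times, events) if e)
    at_risk = lambda t: sum(1 for x in times if x >= t)
    s, curve = 1.0, []
    for t in sorted(deaths):
        s *= 1 - deaths[t] / at_risk(t)
        curve.append((t, s))
    return curve

# Toy data (hypothetical): follow-up times in months; event = 1 for
# death, 0 for censoring.
times  = [2, 3, 3, 5, 8, 8, 12]
events = [1, 1, 0, 1, 0, 1, 0]
print(kaplan_meier(times, events))
```

Note the usual convention that a patient censored at the same time as a death is still counted as at risk at that time; real analyses would add confidence bands and handle ties at scale.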
Missing outcome data due to loss to follow-up occur frequently in clinical cohort studies of HIV-infected patients. Censoring patients when they become lost can produce inaccurate results if the risk of the outcome among the censored patients differs from the risk among patients remaining under observation. We examine whether patients who are considered lost to follow-up are at increased risk of mortality compared with those who remain under observation. Patients from the US Centers for AIDS Research Network of Integrated Clinical Systems (CNICS) who newly initiated combination antiretroviral therapy between January 1, 1998 and December 31, 2009 and survived for at least one year were included in the study. Mortality information was available for all participants regardless of continued observation in the CNICS. We compare mortality between patients retained in the cohort and those lost to clinic, as commonly defined by a 12-month gap in care. Patients who were considered lost to clinic had modestly elevated mortality compared with patients who remained under observation after 5 years (risk ratio (RR): 1.2; 95% CI: 0.9, 1.5). Results were similar after redefining loss to clinic as 6 months (RR: 1.0; 95% CI: 0.8, 1.3) or 18 months (RR: 1.2; 95% CI: 0.8, 1.6) without a documented clinic visit. The small increase in mortality associated with becoming lost to clinic suggests that these patients were not lost to care; rather, they likely transitioned to care at a facility outside the study. The modestly higher mortality among patients who were lost to clinic implies that when we necessarily censor these patients in studies of time-varying exposures, we are likely to incur at most a modest selection bias.
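A risk ratio of the kind reported above, with a large-sample (Katz log-based) confidence interval, is a short calculation. The counts below are hypothetical, chosen only to illustrate the arithmetic; they are not the CNICS data:

```python
import math

# Hypothetical 5-year counts (illustrative only, not the study data):
a, n1 = 60, 500    # deaths, total among patients lost to clinic
c, n0 = 250, 2500  # deaths, total among retained patients

rr = (a / n1) / (c / n0)                        # risk ratio
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)         # SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```

A CI spanning 1, as here, is consistent with the abstract's conclusion of at most a modest mortality difference.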
Although the influenza vaccine is recommended for end-stage renal disease (ESRD) patients, little is known about its effectiveness. Observational studies of vaccine effectiveness (VE) are challenging because vaccinated persons may be healthier than unvaccinated persons.
Using United States Renal Data System data, we estimated VE for influenza-like illness (ILI), influenza/pneumonia hospitalization, and mortality in adult hemodialysis patients using a natural experiment created by year-to-year variation in the match of the influenza vaccine to the circulating virus. Matched (1998, 1999, 2001) and unmatched (1997) years among vaccinated patients were compared using Cox proportional hazards models. Ratios of hazard ratios contrasted the matched-versus-unmatched hazard ratio among vaccinated patients with the corresponding ratio among unvaccinated patients, and VE was calculated as 1 − the effect measure.
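The ratio-of-hazard-ratios design described above reduces to simple arithmetic once the two season-contrast hazard ratios are estimated. A sketch with hypothetical hazard ratios (not the study estimates):

```python
# Hypothetical hazard ratios (illustrative only): outcome hazard in a
# matched vs an unmatched season, estimated separately by vaccination status.
hr_vaccinated   = 0.90
hr_unvaccinated = 0.95  # absorbs season-to-season differences unrelated to vaccine

# Dividing the two contrasts isolates the effect of vaccine match;
# the unvaccinated contrast acts as a negative-control comparison.
rhr = hr_vaccinated / hr_unvaccinated
ve = 1 - rhr  # VE = 1 - effect measure
print(f"ratio of HRs = {rhr:.3f}; VE = {ve:.1%}")
```

A ratio of hazard ratios near 1 (VE near 0%), as the study found, indicates that vaccinated patients fared no better in well-matched seasons than would be expected from secular trends alone.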
Vaccination rates were <50% each year. Conventional analyses comparing vaccinated with unvaccinated patients produced average VE estimates of 13%, 16%, and 30% for ILI, influenza/pneumonia hospitalization, and mortality, respectively. When restricted to the pre-influenza period, results were even stronger, indicating bias. The pooled ratio of hazard ratios comparing matched seasons with the mismatched season yielded a VE of 0% (95% CI: −3%, 2%) for ILI, 2% (95% CI: −2%, 5%) for hospitalization, and 0% (95% CI: −3%, 3%) for death.
Relative to a mismatched year, we found little evidence of increased VE in subsequent well-matched years, suggesting that the current influenza vaccine strategy may have a smaller effect on morbidity and mortality in the ESRD population than previously thought. Alternative strategies (high-dose vaccine, adjuvanted vaccine, multiple doses) should be investigated.
Influenza vaccines; vaccine effectiveness; bias (epidemiology); renal dialysis; cohort studies