Results 1-25 (98)

1.  Association Between Unprotected Ultraviolet Radiation Exposure and Recurrence of Ocular Herpes Simplex Virus 
American Journal of Epidemiology  2013;179(2):208-215.
Studies have suggested that exposure to ultraviolet (UV) light may increase risk of herpes simplex virus (HSV) recurrence. Between 1993 and 1997, the Herpetic Eye Disease Study (HEDS) randomized 703 participants with ocular HSV to receipt of acyclovir or placebo for prevention of ocular HSV recurrence. Of these, 308 HEDS participants (48% female and 85% white; median age, 49 years) were included in a nested study of exposures thought to cause recurrence and were followed for up to 15 months. We matched weekly UV index values from the National Oceanic and Atmospheric Administration to each participant's study center and used marginal structural Cox models to account for time-varying psychological stress and contact lens use and selection bias from dropout. There were 44 recurrences of ocular HSV, yielding an incidence of 4.3 events per 1,000 person-weeks. Weighted hazard ratios comparing persons with ≥8 hours of time outdoors to those with less exposure were 0.84 (95% confidence interval (CI): 0.27, 2.63) and 3.10 (95% CI: 1.14, 8.48) for weeks with a UV index of <4 and ≥4, respectively (ratio of hazard ratios = 3.68, 95% CI: 0.43, 31.4). Though results were imprecise, when the UV index was higher (i.e., ≥4), spending 8 or more hours per week outdoors was associated with increased risk of ocular HSV recurrence.
doi:10.1093/aje/kwt241
PMCID: PMC3873108  PMID: 24142918
cohort studies; herpes simplex virus; recurrence; sunlight; ultraviolet light; UV index
2.  Maximum Likelihood, Profile Likelihood, and Penalized Likelihood: A Primer 
American Journal of Epidemiology  2013;179(2):252-260.
The method of maximum likelihood is widely used in epidemiology, yet many epidemiologists receive little or no education in the conceptual underpinnings of the approach. Here we provide a primer on maximum likelihood and some important extensions which have proven useful in epidemiologic research, and which reveal connections between maximum likelihood and Bayesian methods. For a given data set and probability model, maximum likelihood finds values of the model parameters that give the observed data the highest probability. As with all inferential statistical methods, maximum likelihood is based on an assumed model and cannot account for bias sources that are not controlled by the model or the study design. Maximum likelihood is nonetheless popular, because it is computationally straightforward and intuitive and because maximum likelihood estimators have desirable large-sample properties in the (largely fictitious) case in which the model has been correctly specified. Here, we work through an example to illustrate the mechanics of maximum likelihood estimation and indicate how improvements can be made easily with commercial software. We then describe recent extensions and generalizations which are better suited to observational health research and which should arguably replace standard maximum likelihood as the default method.
doi:10.1093/aje/kwt245
PMCID: PMC3873110  PMID: 24173548
epidemiologic methods; maximum likelihood; modeling; penalized estimation; regression; statistics
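As a concrete companion to the mechanics this primer describes, the short sketch below (not from the article; the data are simulated and every name in it is hypothetical) maximizes a logistic-regression log likelihood numerically and then adds a simple ridge-type penalty, illustrating how penalized likelihood changes the objective being maximized.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated data: one binary exposure x and a binary outcome y (hypothetical example)
n = 500
x = rng.binomial(1, 0.4, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.7 * x))))   # true log odds ratio = 0.7
X = np.column_stack([np.ones(n), x])                        # intercept + exposure

def neg_penalized_log_lik(beta, X, y, penalty=0.0):
    """Negative logistic log likelihood, with an optional ridge-type
    penalty on the non-intercept coefficients."""
    eta = X @ beta
    log_lik = np.sum(y * eta - np.log1p(np.exp(eta)))
    return -log_lik + penalty * np.sum(beta[1:] ** 2)

# Maximum likelihood: parameter values giving the observed data the highest probability
ml = minimize(neg_penalized_log_lik, x0=np.zeros(2), args=(X, y, 0.0))

# Penalized likelihood: the penalty shrinks the exposure coefficient toward the null
pl = minimize(neg_penalized_log_lik, x0=np.zeros(2), args=(X, y, 2.0))

print("ML log odds ratio:       ", round(ml.x[1], 3))
print("penalized log odds ratio:", round(pl.x[1], 3))
```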
3.  A prospective study of alcohol consumption and HIV acquisition among injection drug users 
AIDS (London, England)  2011;25(2):221-228.
Objective
To estimate the effect of alcohol consumption on HIV acquisition while appropriately accounting for confounding by time-varying risk factors.
Design
A prospective cohort study of African American injection drug users in the AIDS Link to Intravenous Experience cohort. Participants were recruited and followed with semiannual visits in Baltimore, Maryland, between 1988 and 2008.
Methods
Marginal structural models were used to estimate the effect of alcohol consumption on HIV acquisition.
Results
At entry, 28% of the 1,525 participants were female, with a median (quartiles) age of 37 (32; 42) years and 10 (10; 12) years of formal education. During follow-up, 155 participants acquired HIV, and the prevalence of alcohol consumption was 24%, 24%, 26%, 17%, and 9% for 0, 1–5, 6–20, 21–50, and 51–140 drinks/week over the prior two years, respectively. In analyses accounting for socio-demographic factors, drug use, and sexual activity, hazard ratios for participants reporting 1–5, 6–20, 21–50, and 51–140 drinks/week in the prior two years compared to participants who reported 0 drinks/week were 1.09 (0.60, 1.98), 1.18 (0.66, 2.09), 1.66 (0.94, 2.93), and 2.12 (1.15, 3.90), respectively. A trend test indicated a dose-response relationship between alcohol consumption and HIV acquisition (P value for trend = 9.7×10−4).
Conclusion
A dose-response relationship between alcohol consumption and subsequent HIV acquisition is indicated, independent of measured known risk factors.
doi:10.1097/QAD.0b013e328340fee2
PMCID: PMC3006640  PMID: 21099668
Alcohol consumption; HIV infection; Bias; Cohort studies; Injection drug users
4.  Sodium Intake and Incident Hypertension among Chinese Adults: Estimated Effects Across Three Different Exposure Periods 
Epidemiology (Cambridge, Mass.)  2013;24(3):410-418.
Background
Although it is clear that there are short-term effects of sodium intake on blood pressure, little is known about the most relevant timing of sodium exposure for the onset of hypertension. This question can only be addressed in cohorts with repeated measures of sodium intake.
Methods
Using up to 7 measures of dietary sodium intake and blood pressure between 1991 and 2009, we compared baseline, the mean of all measures, and the most recent sodium intake in association with incident hypertension in 6578 adults aged 18 to 65 years, enrolled in the China Health and Nutrition Survey and free of hypertension at baseline. We used survival methods that account for the interval-censored nature of this study, and inverse probability weights to generate adjusted survival curves and time-specific cumulative risk differences; hazard ratios were also estimated.
Results
For the mean and most recent measures, the probability of hypertension-free survival was lowest in the highest sodium intake group compared with all other intake groups across the entire follow-up. In addition, the most recent sodium intake measure had a positive dose-response association with incident hypertension [risk difference at 11 years of follow-up = 0.04 (95% CI: −0.01, 0.09), 0.06 (0.00, 0.13), 0.18 (0.12, 0.24), and 0.20 (0.12, 0.27) for the second to fifth sodium intake groups compared to the lowest group, respectively]. Baseline sodium intake was not associated with incident hypertension.
Conclusion
These results suggest caution when using baseline sodium intake measures with long-term follow-up.
doi:10.1097/EDE.0b013e318289e047
PMCID: PMC3909658  PMID: 23466527
China; sodium intake; incident hypertension; interval-censored; adjusted survival curves
5.  Joint effects of alcohol consumption and high-risk sexual behavior on HIV seroconversion among men who have sex with men 
AIDS (London, England)  2013;27(5):815-823.
doi:10.1097/QAD.0b013e32835cff4b
PMCID: PMC3746520  PMID: 23719351
Alcohol Drinking; HIV Seropositivity; Men who Have Sex with Men; Prospective Studies; Sexual Behavior
6.  Marginal Structural Models for Case-Cohort Study Designs to Estimate the Association of Antiretroviral Therapy Initiation With Incident AIDS or Death 
American Journal of Epidemiology  2012;175(5):381-390.
To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding.
doi:10.1093/aje/kwr346
PMCID: PMC3282878  PMID: 22302074
acquired immunodeficiency syndrome; case-cohort studies; cohort studies; confounding bias; HIV; pharmacoepidemiology; selection bias
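For readers who want to see the weighting idea in this abstract in miniature, here is a sketch of an inverse probability weighted (marginal structural) Cox model fit to simulated point-treatment data. It simplifies away the time-varying confounding and the case-cohort sampling, and the column names and the use of the scikit-learn and lifelines packages are assumptions for illustration, not the authors' code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)

# Simulated point-treatment data (hypothetical): low CD4 confounds ART initiation
n = 2000
cd4_low = rng.binomial(1, 0.5, n)
art = rng.binomial(1, 0.25 + 0.50 * cd4_low)                # treatment depends on the confounder
rate = 0.05 * np.exp(0.8 * cd4_low - 0.7 * art)             # true protective ART effect
t = rng.exponential(1 / rate)
df = pd.DataFrame({"art": art, "cd4_low": cd4_low,
                   "time": np.minimum(t, 10.0),
                   "event": (t < 10.0).astype(int)})

# Stabilized inverse probability of treatment weights
p_treat = art.mean()                                        # marginal P(A=1) for the numerator
den = LogisticRegression().fit(df[["cd4_low"]], df["art"])  # P(A=1 | confounder) for the denominator
p_den = den.predict_proba(df[["cd4_low"]])[:, 1]
df["ipw"] = np.where(art == 1, p_treat / p_den, (1 - p_treat) / (1 - p_den))

# Weighted Cox model with a robust variance to acknowledge the weighting
msm = CoxPHFitter()
msm.fit(df[["art", "time", "event", "ipw"]], duration_col="time",
        event_col="event", weights_col="ipw", robust=True)
print(msm.summary[["coef", "exp(coef)"]])
```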
7.  Bayesian Posterior Distributions Without Markov Chains 
American Journal of Epidemiology  2012;175(5):368-375.
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984–1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
doi:10.1093/aje/kwr433
PMCID: PMC3282880  PMID: 22306565
Bayes theorem; epidemiologic methods; inference; Monte Carlo method; posterior distribution; simulation
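To make the rejection-sampling idea tangible, the sketch below works a deliberately simple case-control example: the abstract's 36 cases and 198 controls, with invented exposure counts, a flat prior on the two exposure probabilities, and acceptance with probability proportional to the binomial likelihood. It is an illustration of the general technique, not the authors' analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# 36 cases and 198 controls as in the abstract; the exposure counts below
# are hypothetical, chosen only to illustrate the sampler
cases_exposed, n_cases = 10, 36
controls_exposed, n_controls = 40, 198

def likelihood(p_case, p_control):
    """Binomial likelihood of the observed exposure counts."""
    return (stats.binom.pmf(cases_exposed, n_cases, p_case) *
            stats.binom.pmf(controls_exposed, n_controls, p_control))

# The likelihood is maximized at the observed exposure proportions
lik_max = likelihood(cases_exposed / n_cases, controls_exposed / n_controls)

# Rejection sampling: propose from the flat prior on (p_case, p_control) and
# accept with probability likelihood / maximum likelihood
n_proposals = 200_000
p_case = rng.uniform(size=n_proposals)
p_control = rng.uniform(size=n_proposals)
accept = rng.uniform(size=n_proposals) < likelihood(p_case, p_control) / lik_max

# Posterior draws of the odds ratio from the accepted proposals
odds_ratio = (p_case[accept] / (1 - p_case[accept])) / \
             (p_control[accept] / (1 - p_control[accept]))
print("accepted draws:", accept.sum())
print("posterior median OR:", round(np.median(odds_ratio), 2))
print("95% posterior interval:", np.round(np.percentile(odds_ratio, [2.5, 97.5]), 2))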
8.  Viral Suppression in HIV Studies: Combining Times to Suppression and Rebound 
Biometrics  2014;70(2):441-448.
Summary
In HIV-1 clinical trials, interest often centers on comparing how well treatments suppress the HIV-1 RNA viral load. The current practice in statistical analysis of such trials is to define a single ad hoc composite event which combines information about both viral load suppression and subsequent viral rebound, and then to analyze the data using standard univariate survival analysis techniques. The main weakness of this approach is that the results can easily be influenced by minor details in the definition of the composite event. We propose a straightforward alternative endpoint based on the probability of being suppressed over time, and suggest that treatment differences be summarized using the restricted mean time a patient spends in the state of viral suppression. A nonparametric analysis is based on methods for multiple endpoint studies. We demonstrate the utility of our analytic strategy using a recent therapeutic trial in which the protocol specified a primary analysis using a composite endpoint approach.
doi:10.1111/biom.12140
PMCID: PMC4319678  PMID: 24446693
AIDS; Clinical trial endpoint; Counting processes; Multistate models; Survival analysis
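The restricted-mean summary proposed in this paper can be previewed with the toy calculation below. It uses hypothetical, fully observed suppression and rebound times and ignores censoring entirely (which the paper's counting-process estimator does handle), simply to show that the restricted mean time suppressed equals the area under the probability-of-being-suppressed curve.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical, fully observed data: per-patient weeks to viral suppression
# and to subsequent rebound (censoring is ignored in this toy example)
n, tau = 200, 48.0                               # tau = restriction time, in weeks
t_suppress = rng.exponential(8.0, n)
t_rebound = t_suppress + rng.exponential(60.0, n)

# Restricted mean time spent suppressed over [0, tau], averaged over patients
weeks_suppressed = np.minimum(t_rebound, tau) - np.minimum(t_suppress, tau)
print("restricted mean weeks suppressed:", round(weeks_suppressed.mean(), 1))

# Equivalent view: integrate P(suppressed at time t) over [0, tau]
grid = np.linspace(0.0, tau, 481)
p_suppressed = np.array([np.mean((t_suppress <= t) & (t < t_rebound)) for t in grid])
dt = grid[1] - grid[0]
area = np.sum((p_suppressed[:-1] + p_suppressed[1:]) / 2) * dt
print("area under the suppression probability curve:", round(area, 1))
```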
9.  The parametric G-formula for time-to-event data: towards intuition with a worked example 
Epidemiology (Cambridge, Mass.)  2014;25(6):889-897.
Background
The parametric g-formula can be used to estimate the effect of a policy, intervention, or treatment. Unlike standard regression approaches, the parametric g-formula can be used to adjust for time-varying confounders that are affected by prior exposures. To date, there are few published examples in which the method has been applied.
Methods
We provide a simple introduction to the parametric g-formula and illustrate its application in analysis of a small cohort study of bone marrow transplant patients in which the effect of treatment on mortality is subject to time-varying confounding.
Results
Standard regression adjustment yields a biased estimate of the effect of treatment on mortality relative to the estimate obtained by the g-formula.
Conclusions
The g-formula allows estimation of a relevant parameter for public health officials: the change in the hazard of mortality under a hypothetical intervention, such as reduction of exposure to a harmful agent or introduction of a beneficial new treatment. We present a simple approach to implement the parametric g-formula that is sufficiently general to allow easy adaptation to many settings of public health relevance.
doi:10.1097/EDE.0000000000000160
PMCID: PMC4310506  PMID: 25140837
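As a rough companion to the worked example this abstract describes (which is not reproduced here), the sketch below applies the Monte Carlo version of the parametric g-formula to simulated data with one binary time-varying confounder L, a binary treatment A, and death Y assessed over discrete intervals. All variable names, models, and coefficients are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
expit = lambda x: 1 / (1 + np.exp(-x))

# Simulate a longitudinal cohort: at each interval k, confounder L is affected by
# past treatment, treatment A responds to L, and Y indicates death by interval's end
n, K, rows = 5000, 5, []
for i in range(n):
    lagL, lagA = 0, 0
    L = rng.binomial(1, 0.3)
    for k in range(K):
        A = rng.binomial(1, expit(-1.0 + 1.5 * L + 0.5 * lagA))
        Y = rng.binomial(1, expit(-3.0 + 1.0 * L - 0.7 * A))
        rows.append(dict(k=k, L=L, A=A, Y=Y, lagL=lagL, lagA=lagA))
        if Y == 1:
            break
        lagL, lagA = L, A
        L = rng.binomial(1, expit(-1.0 + 1.0 * lagL - 0.8 * lagA))
df = pd.DataFrame(rows)

# The two parametric models the g-formula needs: a confounder model (for k >= 1)
# and a pooled logistic outcome (discrete hazard) model
dL = df[df["k"] > 0]
model_L = LogisticRegression().fit(dL[["k", "lagL", "lagA"]], dL["L"])
model_Y = LogisticRegression().fit(df[["k", "L", "A"]], df["Y"])

def g_formula_risk(treat_always, n_mc=20000):
    """Monte Carlo g-formula: simulate L and Y forward under a fixed treatment rule."""
    a = 1 if treat_always else 0
    L = rng.binomial(1, df.loc[df["k"] == 0, "L"].mean(), n_mc)   # baseline confounder
    lagL = np.zeros(n_mc, dtype=int)
    lagA = np.zeros(n_mc, dtype=int)
    at_risk = np.ones(n_mc, dtype=bool)
    ever_event = np.zeros(n_mc, dtype=bool)
    for k in range(K):
        if k > 0:
            pL = model_L.predict_proba(pd.DataFrame({"k": k, "lagL": lagL, "lagA": lagA}))[:, 1]
            L = rng.binomial(1, pL)
        pY = model_Y.predict_proba(pd.DataFrame({"k": k, "L": L, "A": a}))[:, 1]
        Y = rng.binomial(1, pY)
        ever_event |= at_risk & (Y == 1)
        at_risk &= (Y == 0)
        lagL, lagA = L, np.full(n_mc, a)
    return ever_event.mean()

print("cumulative risk if always treated:", round(g_formula_risk(True), 3))
print("cumulative risk if never treated: ", round(g_formula_risk(False), 3))
```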
10.  Breast cancer subtypes and previously established genetic risk factors: A Bayesian approach 
Background
Gene expression analyses indicate that breast cancer is a heterogeneous disease with at least 5 immunohistologic subtypes. Despite growing evidence that these subtypes are etiologically and prognostically distinct, few studies have investigated whether they have divergent genetic risk factors. To help fill in this gap in our understanding, we examined associations between breast cancer subtypes and previously established susceptibility loci among white and African-American women in the Carolina Breast Cancer Study.
Methods
We used Bayesian polytomous logistic regression to estimate odds ratios (ORs) and 95% posterior intervals (PIs) for the association between each of 78 single nucleotide polymorphisms (SNPs) and 5 breast cancer subtypes. Subtypes were defined using 5 immunohistochemical markers: estrogen receptors (ER), progesterone receptors (PR), human epidermal growth factor receptors 1 and 2 (HER1/2) and cytokeratin (CK) 5/6.
Results
Several SNPs in TNRC9/TOX3 were associated with luminal A (ER/PR+, HER2−) or basal-like breast cancer (ER−, PR−, HER2−, HER1 or CK 5/6+), and one SNP (rs3104746) was associated with both. SNPs in FGFR2 were associated with luminal A, luminal B (ER/PR+, HER2+), or HER2+/ER− disease, but none were associated with basal-like disease. We also observed subtype differences in the effects of SNPs in 2q35, 4p, TLR1, MAP3K1, ESR1, CDKN2A/B, ANKRD16, and ZMIZ1.
Conclusion and Impact
We found evidence that genetic risk factors for breast cancer vary by subtype and further clarified the role of several key susceptibility genes.
doi:10.1158/1055-9965.EPI-13-0463
PMCID: PMC3947131  PMID: 24177593
breast cancer; single nucleotide polymorphisms; breast cancer subtypes; GWAS; Bayesian analysis
11.  A comparison of methods to estimate the hazard ratio under conditions of time-varying confounding and nonpositivity 
Epidemiology (Cambridge, Mass.)  2011;22(5):718-723.
In occupational epidemiologic studies, the healthy-worker survivor effect refers to a process that leads to bias in the estimates of an association between cumulative exposure and a health outcome. In these settings, work status acts both as an intermediate and confounding variable, and may violate the positivity assumption (the presence of exposed and unexposed observations in all strata of the confounder). Using Monte Carlo simulation, we assess the degree to which crude, work-status adjusted, and weighted (marginal structural) Cox proportional hazards models are biased in the presence of time-varying confounding and nonpositivity. We simulate data representing time-varying occupational exposure, work status, and mortality. Bias, coverage, and root mean squared error (RMSE) were calculated relative to the true marginal exposure effect in a range of scenarios. For a base-case scenario, using crude, adjusted, and weighted Cox models, respectively, the hazard ratio was biased downward 19%, 9%, and 6%; 95% confidence interval coverage was 48%, 85%, and 91%; and RMSE was 0.20, 0.13, and 0.11. Although marginal structural models were less biased in most scenarios studied, neither standard nor marginal structural Cox proportional hazards models fully resolve the bias encountered under conditions of time-varying confounding and nonpositivity.
doi:10.1097/EDE.0b013e31822549e8
PMCID: PMC3155387  PMID: 21747286
12.  Statistical Methods for Multivariate Meta-analysis of Diagnostic Tests: An Overview and Tutorial 
Statistical Methods in Medical Research  2013;10.1177/0962280213492588.
Summary
In this article, we present an overview and tutorial of statistical methods for meta-analysis of diagnostic tests under two scenarios: 1) when the reference test can be considered a gold standard; and 2) when the reference test cannot be considered a gold standard. In the first scenario, we first review the conventional summary receiver operating characteristics (ROC) approach and a bivariate approach using linear mixed models (BLMM). Both approaches require direct calculations of study-specific sensitivities and specificities. We next discuss the hierarchical summary ROC curve approach for jointly modeling positivity criteria and accuracy parameters, and the bivariate generalized linear mixed models (GLMM) for jointly modeling sensitivities and specificities. We further discuss the trivariate GLMM for jointly modeling prevalence, sensitivities and specificities, which allows us to assess the correlations among the three parameters. These approaches are based on the exact binomial distribution and thus do not require an ad hoc continuity correction. Last, we discuss a latent class random effects model for meta-analysis of diagnostic tests when the reference test itself is imperfect for the second scenario. A number of case studies with detailed annotated SAS code in procedures MIXED and NLMIXED are presented to facilitate the implementation of these approaches.
doi:10.1177/0962280213492588
PMCID: PMC3883791  PMID: 23804970
meta-analysis; diagnostic test; gold standard; generalized linear mixed models
13.  Causal Inference in Occupational Epidemiology: Accounting for the Healthy Worker Effect by Using Structural Nested Models 
American Journal of Epidemiology  2013;178(12):1681-1686.
In a recent issue of the Journal, Kirkeleit et al. (Am J Epidemiol. 2013;177(11):1218–1224) provided empirical evidence for the potential of the healthy worker effect in a large cohort of Norwegian workers across a range of occupations. In this commentary, we provide some historical context, define the healthy worker effect by using causal diagrams, and use simulated data to illustrate how structural nested models can be used to estimate exposure effects while accounting for the healthy worker survivor effect in 4 simple steps. We provide technical details and annotated SAS software (SAS Institute, Inc., Cary, North Carolina) code corresponding to the example analysis in the Web Appendices, available at http://aje.oxfordjournals.org/.
doi:10.1093/aje/kwt215
PMCID: PMC3858107  PMID: 24077092
causal inference; healthy worker effect; marginal structural models; occupational epidemiology; structural nested models
14.  A Bayesian approach to strengthen inference for case-control studies with multiple error-prone exposure assessments 
Statistics in Medicine  2013;32(25):4426-4437.
In case-control studies, exposure assessments are almost always error-prone. In the absence of a gold standard, two or more assessment approaches are often used to classify people with respect to exposure. Each imperfect assessment tool may lead to misclassification of exposure assignment; the exposure misclassification may or may not be differential with respect to case status; and the errors in exposure classification under the different approaches may or may not be independent (conditional upon the true exposure status). Although methods have been proposed to study diagnostic accuracy in the absence of a gold standard, these methods are infrequently used in case-control studies to correct exposure misclassification that is simultaneously differential and dependent. In this paper, we propose a Bayesian method to estimate the measurement-error-corrected exposure-disease association, accounting for both differential and dependent misclassification. The performance of the proposed method is investigated using simulations, which show that the approach works well, and is illustrated with an application to a case-control study assessing the association between asbestos exposure and mesothelioma.
doi:10.1002/sim.5842
PMCID: PMC3788843  PMID: 23661263
Case-control study; gold standard; misclassification; dependent; differential
15.  An information criterion for marginal structural models 
Statistics in Medicine  2012;32(8):1383-1393.
Summary
Marginal structural models were developed as a semiparametric alternative to the G-computation formula to estimate causal effects of exposures. In practice, these models are often specified using parametric regression models. As such, the usual conventions regarding regression model specification apply. This paper outlines strategies for marginal structural model specification, and considerations for the functional form of the exposure metric in the final structural model. We propose a quasi-likelihood information criterion adapted from use in generalized estimating equations. We evaluate the properties of our proposed information criterion using a limited simulation study. We illustrate our approach using two empirical examples. In the first example, we use data from a randomized breastfeeding promotion trial to estimate the effect of breastfeeding duration on infant weight at one year. In the second example, we use data from two prospective cohort studies to estimate the effect of highly active antiretroviral therapy on CD4 count in an observational cohort of HIV-infected men and women. The marginal structural model specified should reflect the scientific question being addressed, but can also assist in exploration of other plausible and closely related questions. In marginal structural models, as in any regression setting, correct inference depends on correct model specification. Our proposed information criterion provides a formal method for comparing model fit for different specifications.
doi:10.1002/sim.5599
PMCID: PMC4180061  PMID: 22972662
Bias; Causal inference; Marginal structural model; Regression analysis; Model specification
16.  Enrollment, Retention, and Visit Attendance in the University of North Carolina Center for AIDS Research HIV Clinical Cohort, 2001–2007 
Abstract
Predictors of study retention and scheduled visit attendance in the University of North Carolina Center for AIDS Research (UNC CFAR) prospective clinical cohort of HIV-infected patients enrolled between 1 January 2001 and 1 January 2008 are reported. At study entry, the 1636 participants were 32% female and 58% African-American; 49% had not received HIV care elsewhere, 71% were receiving or had initiated combination antiretroviral therapy, and 26% had been diagnosed with AIDS. Median (quartiles) age was 40 (34; 47) years, distance to clinic 45 (21; 70) miles, HIV-1 RNA 1396 (200; 26,750) copies/ml, and CD4 count 374 (182; 602) cells/mm3. Participants contributed a median of 7 (4; 13) scheduled visits and 2.25 (1.0; 3.9) years alive under follow-up. During 6134 person-years of follow-up, 414 participants dropped out and 145 died. Accounting for differences in death by participant characteristics, the 6-year cumulative probability of retention was 67% [95% confidence limits (CL): 65, 70%], with 6.75 (95% CL: 6.13, 7.43) dropouts per 100 person-years. In a multivariable Cox proportional hazards model, retention was higher among participants who were insured, had not received HIV care elsewhere, had controlled HIV viremia, and were living in nonurban areas or proximate to the clinic. In a multivariable modified Poisson regression model that accounted for differences in dropout and death by participant characteristics, visit attendance was higher among older, AIDS-diagnosed, immune-compromised, and cART-initiated participants. The UNC CFAR clinical cohort has ample enrollment, with retention and visit attendance modestly influenced by factors such as disease severity.
doi:10.1089/aid.2009.0282
PMCID: PMC2957633  PMID: 20672995
17.  Generalizing Evidence From Randomized Clinical Trials to Target Populations 
American Journal of Epidemiology  2010;172(1):107-115.
Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified target population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The target population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the target population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified target population and thereby provides information regarding the generalizability of trial results.
doi:10.1093/aje/kwq084
PMCID: PMC2915476  PMID: 20547574
bias; bias (epidemiology); causal inference; external validity; generalizability; randomized trials; standardization
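The model-based standardization described above is often implemented with inverse odds of sampling weights; the sketch below uses that device on simulated data in which a hypothetical effect modifier z is more common in the target sample than in the trial. All names and numbers are invented, and this is only one way to operationalize the idea in the abstract, not the authors' code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Hypothetical combined sample: s = 1 for trial participants, s = 0 for the target sample;
# the effect modifier z is more common in the target population than in the trial
n_trial, n_target = 1000, 3000
z_trial = rng.binomial(1, 0.30, n_trial)
z_target = rng.binomial(1, 0.60, n_target)
a = rng.binomial(1, 0.5, n_trial)                        # randomized treatment in the trial
y = rng.binomial(1, 0.30 - a * (0.05 + 0.15 * z_trial))  # larger treatment benefit when z = 1

combined = pd.DataFrame({"s": np.r_[np.ones(n_trial), np.zeros(n_target)].astype(int),
                         "z": np.r_[z_trial, z_target]})

# Model selection into the trial given z, then weight each trial participant by the
# inverse odds of selection, P(s = 0 | z) / P(s = 1 | z)
sel = LogisticRegression().fit(combined[["z"]], combined["s"])
p_in_trial = sel.predict_proba(pd.DataFrame({"z": z_trial}))[:, 1]
w = (1 - p_in_trial) / p_in_trial

# Unweighted (trial) versus weighted (standardized-to-target) risk differences
rd_trial = y[a == 1].mean() - y[a == 0].mean()
rd_target = (np.average(y[a == 1], weights=w[a == 1]) -
             np.average(y[a == 0], weights=w[a == 0]))
print("risk difference in the trial:          ", round(rd_trial, 3))
print("risk difference standardized to target:", round(rd_target, 3))
```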
18.  Incidence and Timing of Cancer in HIV-Infected Individuals Following Initiation of Combination Antiretroviral Therapy 
Kaposi sarcoma and lymphoma rates were highest immediately after antiretroviral therapy (ART) initiation, particularly among patients with low CD4 cell counts, whereas other cancers increased with time on ART. Calendar year of ART initiation was not associated with subsequent cancer incidence.
Background
Cancer is an important cause of morbidity and mortality in individuals infected with human immunodeficiency virus (HIV), but patterns of cancer incidence after combination antiretroviral therapy (ART) initiation remain poorly characterized.
Methods
We evaluated the incidence and timing of cancer diagnoses among patients initiating ART between 1996 and 2011 in a collaboration of 8 US clinical HIV cohorts. Poisson regression was used to estimate incidence rates. Cox regression was used to identify demographic and clinical characteristics associated with cancer incidence after ART initiation.
Results
At initiation of first combination ART among 11 485 patients, median year was 2004 (interquartile range [IQR], 2000–2007) and median CD4 count was 202 cells/mm3 (IQR, 61–338). Incidence rates for Kaposi sarcoma (KS) and lymphomas were highest in the first 6 months after ART initiation (P < .001) and plateaued thereafter, while incidence rates for all other cancers combined increased from 416 to 615 cases per 100 000 person-years from 1 to 10 years after ART initiation (average 7% increase per year; 95% confidence interval, 2%–13%). Lower CD4 count at ART initiation was associated with greater risk of KS, lymphoma, and human papillomavirus–related cancer. Calendar year of ART initiation was not associated with cancer incidence.
Conclusions
KS and lymphoma rates were highest immediately following ART initiation, particularly among patients with low CD4 cell counts, whereas other cancers increased with time on ART, likely reflecting increased cancer risk with aging. Our results underscore recommendations for earlier HIV diagnosis followed by prompt ART initiation along with ongoing aggressive cancer screening and prevention efforts throughout the course of HIV care.
doi:10.1093/cid/cit369
PMCID: PMC3739467  PMID: 23735330
HIV-associated malignancies; AIDS-defining cancer; non-AIDS-defining cancer; combination antiretroviral therapy
19.  A comparison of ad hoc methods to account for non-cancer AIDS and deaths as competing risks when estimating the effect of HAART on incident cancer AIDS among HIV-infected men 
Journal of Clinical Epidemiology  2009;63(4):459-467.
Objective
We compared three ad hoc methods to estimate the marginal hazard of incident cancer AIDS in the highly active antiretroviral therapy calendar period (1996–2006) relative to the monotherapy/combination therapy period (1990–1996), accounting for other AIDS events and deaths as competing risks.
Study Design and Setting
Among 1911 HIV+ men from the Multicenter AIDS Cohort Study, 228 developed cancer AIDS and 745 developed competing risks in 14,202 person-years from 1990–2006. Method 1 censored competing risks at the time they occurred, method 2 excluded competing risks, and method 3 censored competing risks at the date of analysis.
Results
The age-, race-, and infection duration–adjusted hazard ratios (HRs) for cancer AIDS were similar for all methods (HR ≅ 0.15). We estimated bias and confidence interval (CI) coverage of each method with Monte Carlo simulation. On average across 24 scenarios, method 1 produced less biased estimates than methods 2 or 3.
Conclusions
When competing risks are independent of the event of interest, only method 1 produced unbiased estimates of the marginal HR, though independence cannot be verified from the data. When competing risks are dependent, method 1 generally produced the least biased estimates of the marginal HR for the scenarios explored; however, alternative methods may be preferred.
doi:10.1016/j.jclinepi.2009.08.003
PMCID: PMC2837111  PMID: 19880284
Competing risks; epidemiology; HIV; highly active antiretroviral therapy; cancer
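The three ad hoc approaches compared in this paper amount to different manipulations of the analysis dataset; the toy dataframe below (column names and values are hypothetical) shows one way each could be set up before fitting a Cox model. It is a sketch of the general idea, not the authors' code.

```python
import pandas as pd

# Hypothetical person-level data: follow-up time in years, the event of interest
# (incident cancer AIDS), and a competing risk (non-cancer AIDS or death)
df = pd.DataFrame({"years":     [2.0, 3.5, 1.2, 6.0, 4.4],
                   "cancer":    [1,   0,   0,   1,   0],
                   "competing": [0,   1,   0,   0,   1]})
admin_end = 16.0   # e.g., years from 1990 to the 2006 analysis date

# Method 1: censor competing risks at the time they occur
m1 = df.assign(time=df["years"], event=df["cancer"])

# Method 2: exclude participants who experienced a competing risk
m2 = df[df["competing"] == 0].assign(time=lambda d: d["years"], event=lambda d: d["cancer"])

# Method 3: censor competing risks at the administrative end of follow-up (analysis date)
m3 = df.assign(time=df["years"].where(df["competing"] == 0, admin_end), event=df["cancer"])

for label, d in [("method 1", m1), ("method 2", m2), ("method 3", m3)]:
    print(label, "-", len(d), "subjects,", int(d["event"].sum()), "events,",
          round(d["time"].sum(), 1), "person-years")
```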
20.  Temporal Trends in Presentation and Survival for HIV-Associated Lymphoma in the Antiretroviral Therapy Era 
Background
Lymphoma is the leading cause of cancer-related death among HIV-infected patients in the antiretroviral therapy (ART) era.
Methods
We studied lymphoma patients in the Centers for AIDS Research Network of Integrated Clinical Systems from 1996 until 2010. We examined differences stratified by histology and diagnosis year. Mortality and predictors of death were analyzed using Kaplan–Meier curves and Cox proportional hazards.
Results
Of 23 050 HIV-infected individuals, 476 (2.1%) developed lymphoma (79 [16.6%] Hodgkin lymphoma [HL]; 201 [42.2%] diffuse large B-cell lymphoma [DLBCL]; 56 [11.8%] Burkitt lymphoma [BL]; 54 [11.3%] primary central nervous system lymphoma [PCNSL]; and 86 [18.1%] other non-Hodgkin lymphoma [NHL]). At diagnosis, HL patients had higher CD4 counts and lower HIV RNA than NHL patients. PCNSL patients had the lowest and BL patients had the highest CD4 counts among NHL categories. During the study period, CD4 count at lymphoma diagnosis progressively increased and HIV RNA decreased. Five-year survival was 61.6% for HL, 50.0% for BL, 44.1% for DLBCL, 43.3% for other NHL, and 22.8% for PCNSL. Mortality was associated with age (adjusted hazard ratio [AHR] = 1.28 per decade increase, 95% confidence interval [CI] = 1.06 to 1.54), lymphoma occurrence on ART (AHR = 2.21, 95% CI = 1.53 to 3.20), CD4 count (AHR = 0.81 per 100 cells/µL increase, 95% CI = 0.72 to 0.90), HIV RNA (AHR = 1.13 per log10 copies/mL, 95% CI = 1.00 to 1.27), and histology but not earlier diagnosis year.
Conclusions
HIV-associated lymphoma is heterogeneous and changing, with less immunosuppression and greater HIV control at diagnosis. Stable survival and increased mortality for lymphoma occurring on ART call for greater biologic insights to improve outcomes.
doi:10.1093/jnci/djt158
PMCID: PMC3748003  PMID: 23892362
21.  Time Scale and Adjusted Survival Curves for Marginal Structural Cox Models 
American Journal of Epidemiology  2010;171(6):691-700.
Typical applications of marginal structural time-to-event (e.g., Cox) models have used time on study as the time scale. Here, the authors illustrate use of time on treatment as an alternative time scale. In addition, a method is provided for estimating Kaplan-Meier–type survival curves for marginal structural models. For illustration, the authors estimate the total effect of highly active antiretroviral therapy on time to acquired immunodeficiency syndrome (AIDS) or death in 1,498 US men and women infected with human immunodeficiency virus and followed for 6,556 person-years between 1995 and 2002; 323 incident cases of clinical AIDS and 59 deaths occurred. Of the remaining 1,116 participants, 77% were still under observation at the end of follow-up. By using time on study, the hazard ratio for AIDS or death comparing always with never using highly active antiretroviral therapy from the marginal structural model was 0.52 (95% confidence interval: 0.35, 0.76). By using time on treatment, the analogous hazard ratio was 0.44 (95% confidence interval: 0.32, 0.60). In time-to-event analyses, the choice of time scale may have a meaningful impact on estimates of association and precision. In the present example, use of time on treatment yielded a hazard ratio further from the null and more precise than use of time on study as the time scale.
doi:10.1093/aje/kwp418
PMCID: PMC2877453  PMID: 20139124
acquired immunodeficiency syndrome; antiretroviral therapy, highly active; bias (epidemiology); causal inference; confounding factors (epidemiology); proportional hazards model; survival curve; survival time
22.  Copy-Years Viremia as a Measure of Cumulative Human Immunodeficiency Virus Viral Burden 
American Journal of Epidemiology  2009;171(2):198-205.
Plasma human immunodeficiency virus type 1 (HIV-1) viral load is a valuable tool for HIV research and clinical care but is often used in a noncumulative manner. The authors developed copy-years viremia as a measure of cumulative plasma HIV-1 viral load exposure among 297 HIV seroconverters from the Multicenter AIDS Cohort Study (1984–1996). Men were followed from seroconversion to incident acquired immunodeficiency syndrome (AIDS), death, or the beginning of the combination antiretroviral therapy era (January 1, 1996); the median duration of follow-up was 4.6 years (interquartile range (IQR), 2.7–6.5). The median viral load and level of copy-years viremia over 2,281 semiannual follow-up assessments were 29,628 copies/mL (IQR, 8,547–80,210) and 63,659 copies × years/mL (IQR, 15,935–180,341). A total of 127 men developed AIDS or died, and 170 survived AIDS-free and were censored on January 1, 1996, or lost to follow-up. Rank correlations between copy-years viremia and other measures of viral load were 0.56–0.87. Each log10 increase in copy-years viremia was associated with a 1.70-fold increased hazard (95% confidence interval: 0.94, 3.07) of AIDS or death, independently of infection duration, age, race, CD4 cell count, set-point, peak viral load, or most recent viral load. Copy-years viremia, a novel measure of cumulative viral burden, may provide prognostic information beyond traditional single measures of viremia.
doi:10.1093/aje/kwp347
PMCID: PMC2878100  PMID: 20007202
acquired immunodeficiency syndrome; HIV; HIV infections; viral load; viremia
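Copy-years viremia is a cumulative exposure measure, and one natural way to compute it is as the area under a participant's viral load curve. The abstract does not specify the exact integration rule, so the trapezoidal version below, with invented visit data, should be read as an illustration rather than the authors' formula.

```python
import numpy as np

# Hypothetical semiannual visits for one participant: years since seroconversion
# and plasma HIV-1 RNA (copies/mL)
years = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
viral_load = np.array([80_000, 45_000, 30_000, 28_000, 26_000])

# Copy-years viremia as the trapezoidal area under the viral load curve,
# in copies x years per mL
interval_widths = np.diff(years)
mean_heights = (viral_load[:-1] + viral_load[1:]) / 2
copy_years = np.sum(interval_widths * mean_heights)

print(f"copy-years viremia: {copy_years:,.0f} copies x years/mL")
# The reported hazard ratio is per log10 increase in copy-years viremia
print("log10 copy-years viremia:", round(np.log10(copy_years), 2))
```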
23.  Using Marginal Structural Measurement-Error Models to Estimate the Long-term Effect of Antiretroviral Therapy on Incident AIDS or Death 
American Journal of Epidemiology  2009;171(1):113-122.
To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus–positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.
doi:10.1093/aje/kwp329
PMCID: PMC2800300  PMID: 19934191
acquired immunodeficiency syndrome; bias (epidemiology); cohort studies; confounding factors (epidemiology); epidemiologic measurements; HIV; pharmacoepidemiology; selection bias
24.  Loss to Clinic and Five-Year Mortality among HIV-Infected Antiretroviral Therapy Initiators 
PLoS ONE  2014;9(7):e102305.
Missing outcome data due to loss to follow-up occur frequently in clinical cohort studies of HIV-infected patients. Censoring patients when they become lost can produce inaccurate results if the risk of the outcome among the censored patients differs from the risk of the outcome among patients remaining under observation. We examine whether patients who are considered lost to follow-up are at increased risk of mortality compared to those who remain under observation. Patients from the US Centers for AIDS Research Network of Integrated Clinical Systems (CNICS) who newly initiated combination antiretroviral therapy between January 1, 1998 and December 31, 2009 and survived for at least one year were included in the study. Mortality information was available for all participants regardless of continued observation in the CNICS. We compare mortality between patients retained in the cohort and those lost to clinic, as commonly defined by a 12-month gap in care. Patients who were considered lost to clinic had modestly elevated mortality compared to patients who remained under observation after 5 years (risk ratio (RR): 1.2; 95% CI: 0.9, 1.5). Results were similar after redefining loss to clinic as 6 months (RR: 1.0; 95% CI: 0.8, 1.3) or 18 months (RR: 1.2; 95% CI: 0.8, 1.6) without a documented clinic visit. The small increase in mortality associated with becoming lost to clinic suggests that these patients were not lost to care; rather, they likely transitioned to care at a facility outside the study. The modestly higher mortality among patients who were lost to clinic implies that when we necessarily censor these patients in studies of time-varying exposures, we are likely to incur at most a modest selection bias.
doi:10.1371/journal.pone.0102305
PMCID: PMC4092142  PMID: 25010739
25.  Evaluating influenza vaccine effectiveness among hemodialysis patients using a natural experiment 
Archives of Internal Medicine  2012;172(7):548-554.
Background
Although the influenza vaccine is recommended for end-stage renal disease (ESRD) patients, little is known about its effectiveness. Observational studies of vaccine effectiveness (VE) are challenging because vaccinated persons may be healthier than unvaccinated persons.
Methods
Using United States Renal Data System data, we estimated VE for influenza-like illness (ILI), influenza/pneumonia hospitalization, and mortality in adult hemodialysis patients using a natural experiment created by year-to-year variation in the match of the influenza vaccine to the circulating virus. Matched (1998, 1999, 2001) and unmatched (1997) years among vaccinated patients were compared using Cox proportional hazards models. Ratios of hazard ratios contrasted the between-year change among vaccinated patients with the corresponding change among unvaccinated patients. VE was calculated as 1 − the effect measure.
Results
Vaccination rates were <50% each year. Conventional analysis comparing vaccinated to unvaccinated patients produced average VE estimates of 13%, 16%, and 30% for ILI, influenza/pneumonia hospitalization, and mortality, respectively. When restricted to the pre-influenza period, results were even stronger, indicating bias. The pooled ratio of hazard ratios comparing matched seasons to the mismatched (placebo-like) season resulted in a VE of 0% (95% CI: −3%, 2%) for ILI, 2% (95% CI: −2%, 5%) for hospitalization, and 0% (95% CI: −3%, 3%) for death.
Conclusions
Relative to a mismatched year, we found little evidence of increased VE in subsequent, well-matched years, suggesting that the current influenza vaccine strategy may have a smaller effect on morbidity and mortality in the ESRD population than previously thought. Alternative strategies (high-dose vaccine, adjuvanted vaccine, multiple doses) should be investigated.
doi:10.1001/archinternmed.2011.2238
PMCID: PMC4082376  PMID: 22493462
Influenza vaccines; vaccine effectiveness; bias (epidemiology); renal dialysis; cohort studies
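The "VE = 1 − effect measure" arithmetic applied to the ratio of hazard ratios can be shown in a few lines; the hazard ratios below are invented for illustration and are not the study's estimates.

```python
# Hypothetical hazard ratios for the natural-experiment contrast: outcome hazard in a
# well-matched vaccine season versus the mismatched season, estimated separately
# among vaccinated and unvaccinated hemodialysis patients (numbers are invented)
hr_matched_vs_mismatched_vaccinated = 0.97
hr_matched_vs_mismatched_unvaccinated = 0.99

# The ratio of hazard ratios removes season-to-season differences shared by both groups
ratio_of_hrs = hr_matched_vs_mismatched_vaccinated / hr_matched_vs_mismatched_unvaccinated

# Vaccine effectiveness = 1 - effect measure
vaccine_effectiveness = 1 - ratio_of_hrs
print(f"ratio of hazard ratios: {ratio_of_hrs:.3f}")
print(f"vaccine effectiveness:  {vaccine_effectiveness:.1%}")
```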
