Results 1-25 (86)
1.  Loss to Clinic and Five-Year Mortality among HIV-Infected Antiretroviral Therapy Initiators 
PLoS ONE  2014;9(7):e102305.
Missing outcome data due to loss to follow-up occurs frequently in clinical cohort studies of HIV-infected patients. Censoring patients when they become lost can produce inaccurate results if the risk of the outcome among the censored patients differs from the risk among patients remaining under observation. We examined whether patients considered lost to follow-up are at increased risk of mortality compared with those who remain under observation. Patients from the US Centers for AIDS Research Network of Integrated Clinical Systems (CNICS) who newly initiated combination antiretroviral therapy between January 1, 1998 and December 31, 2009 and survived for at least one year were included in the study. Mortality information was available for all participants regardless of continued observation in the CNICS. We compared mortality between patients retained in the cohort and those lost-to-clinic, as commonly defined by a 12-month gap in care. At 5 years, patients who were considered lost-to-clinic had modestly elevated mortality compared to patients who remained under observation (risk ratio (RR): 1.2; 95% CI: 0.9, 1.5). Results were similar after redefining loss-to-clinic as 6 months (RR: 1.0; 95% CI: 0.8, 1.3) or 18 months (RR: 1.2; 95% CI: 0.8, 1.6) without a documented clinic visit. The small increase in mortality associated with becoming lost to clinic suggests that these patients were not lost to care; rather, they likely transitioned to care at a facility outside the study. The modestly higher mortality among patients who were lost-to-clinic implies that when we necessarily censor these patients in studies of time-varying exposures, we are likely to incur at most a modest selection bias.
PMCID: PMC4092142  PMID: 25010739
2.  Evaluating influenza vaccine effectiveness among hemodialysis patients using a natural experiment 
Archives of internal medicine  2012;172(7):548-554.
Although the influenza vaccine is recommended for end-stage renal disease (ESRD) patients, little is known about its effectiveness. Observational studies of vaccine effectiveness (VE) are challenging because vaccinated persons may be healthier than unvaccinated persons.
Using United States Renal Data System data, we estimated VE for influenza-like illness (ILI), influenza/pneumonia hospitalization, and mortality in adult hemodialysis patients using a natural experiment created by year-to-year variation in the match of the influenza vaccine to the circulating virus. Well-matched (1998, 1999, 2001) and unmatched (1997) years were compared among vaccinated patients using Cox proportional hazards models. Ratios of hazard ratios contrasted this between-year comparison among vaccinated patients with the same comparison among unvaccinated patients. VE was calculated as 1 − the effect measure.
Vaccination rates were <50% each year. Conventional analysis comparing vaccinated to unvaccinated patients produced average VE estimates of 13%, 16%, and 30% for ILI, influenza/pneumonia hospitalization, and mortality, respectively. When restricted to the pre-influenza period, results were even stronger, indicating bias. The pooled ratio of HRs comparing matched seasons to a placebo season resulted in a VE of 0% (95% CI: −3%, 2%) for ILI, 2% (95% CI: −2%, 5%) for hospitalization, and 0% (95% CI: −3%, 3%) for death.
Relative to a mismatched year, we found little evidence of increased VE in subsequent, well-matched years, suggesting that the current influenza vaccine strategy may have a smaller effect on morbidity and mortality in the ESRD population than previously thought. Alternate strategies (high dose vaccine, adjuvanted vaccine, multiple doses) should be investigated.
PMCID: PMC4082376  PMID: 22493462
Influenza vaccines; vaccine effectiveness; bias (epidemiology); renal dialysis; cohort studies
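The vaccine-effectiveness arithmetic described above can be sketched in a few lines. The hazard ratios below are hypothetical placeholders, not the study's estimates; the point is only the contrast between the conventional comparison and the ratio-of-hazard-ratios natural experiment.

```python
def ve_from_effect_measure(effect_measure):
    """Vaccine effectiveness: VE = 1 - effect measure."""
    return 1.0 - effect_measure

# Conventional analysis: hazard ratio comparing vaccinated vs. unvaccinated.
hr_conventional = 0.84                 # hypothetical value
ve_conventional = ve_from_effect_measure(hr_conventional)      # 16% VE

# Natural experiment: ratio of hazard ratios. The matched-vs-unmatched-year
# contrast among the vaccinated is divided by the same contrast among the
# unvaccinated, differencing out stable healthy-vaccinee bias.
hr_vaccinated_years = 0.90             # hypothetical
hr_unvaccinated_years = 0.90           # hypothetical
ve_natural = ve_from_effect_measure(hr_vaccinated_years / hr_unvaccinated_years)
```

If the vaccinated and unvaccinated groups show the same between-year contrast, the ratio of hazard ratios is 1 and the implied VE is 0%, mirroring the study's null pooled estimates.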
3.  Serum uric acid in relation to endogenous reproductive hormones during the menstrual cycle: findings from the BioCycle study 
Human Reproduction (Oxford, England)  2013;28(7):1853-1862.
Do uric acid levels across the menstrual cycle show associations with endogenous estradiol (E2) and reproductive hormone concentrations in regularly menstruating women?
Mean uric acid concentrations were highest during the follicular phase, and were inversely associated with E2 and progesterone, and positively associated with FSH.
E2 may decrease serum levels of uric acid in post-menopausal women; however, the interplay between endogenous reproductive hormones and uric acid levels among regularly menstruating women has not been elucidated.
The BioCycle study was a prospective cohort study conducted at the University at Buffalo research centre from 2005 to 2007, which followed healthy women for one (n = 9) or two (n = 250) menstrual cycles.
Participants were healthy women aged 18–44 years. Hormones and uric acid were measured in serum eight times each cycle for up to two cycles. Marginal structural models with inverse probability of exposure weights were used to evaluate the associations between endogenous hormones and uric acid concentrations.
Uric acid levels were observed to vary across the menstrual cycle, with the lowest levels observed during the luteal phase. Every log-unit increase in E2 was associated with a decrease in uric acid of 1.1% (β = −0.011; 95% confidence interval (CI): −0.019, −0.004; persistent-effects model), and for every log-unit increase in progesterone, uric acid decreased by ∼0.8% (β = −0.008; 95% CI: −0.012, −0.004; persistent-effects model). FSH was positively associated with uric acid concentrations, such that each log-unit increase was associated with a 1.6% increase in uric acid (β = 0.016; 95% CI: 0.005, 0.026; persistent-effects model). Progesterone and FSH were also associated with uric acid levels in acute-effects models. Of 509 cycles, 42 were anovulatory (8.3%). Higher uric acid levels were associated with increased odds of anovulation (odds ratio 2.39, 95% CI: 1.25, 4.56).
The change in uric acid levels among this cohort of healthy women was modest, and analysis was limited to two menstrual cycles. The women in this study were healthy and regularly menstruating, and as such there were few women with high uric acid levels and anovulatory cycles.
These findings demonstrate the importance of taking menstrual cycle phase into account when measuring uric acid in premenopausal women, and confirm the hypothesized beneficial lowering effects of endogenous E2 on uric acid levels. These findings suggest that there could be an underlying association affecting both sporadic anovulation and high uric acid levels among young, regularly menstruating women. Further studies are needed to confirm these findings and elucidate the connection between uric acid and reproductive and later cardiovascular health.
This work was supported by the Intramural Research Program of the Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health (contract # HHSN275200403394C). No competing interests declared.
PMCID: PMC3685334  PMID: 23562957
anovulation; estradiol; menstrual cycle; premenopausal women; uric acid
4.  The use of propensity scores to assess the generalizability of results from randomized trials 
Randomized trials remain the most accepted design for estimating the effects of interventions, but they do not necessarily answer a question of primary interest: Will the program be effective in a target population in which it may be implemented? In other words, are the results generalizable? There has been very little statistical research on how to assess the generalizability, or “external validity,” of randomized trials. We propose the use of propensity-score-based metrics to quantify the similarity of the participants in a randomized trial and a target population. In this setting the propensity score model predicts participation in the randomized trial, given a set of covariates. The resulting propensity scores are used first to quantify the difference between the trial participants and the target population, and then to match, subclassify, or weight the control group outcomes to the population, assessing how well the propensity score-adjusted outcomes track the outcomes actually observed in the population. These metrics can serve as a first step in assessing the generalizability of results from randomized trials to target populations. This paper lays out these ideas, discusses the assumptions underlying the approach, and illustrates the metrics using data on the evaluation of a schoolwide prevention program called Positive Behavioral Interventions and Supports.
PMCID: PMC4051511  PMID: 24926156
Causal inference; External validity; Positive Behavioral Interventions and Supports; Research synthesis
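The propensity-score workflow proposed above can be illustrated with a minimal simulated sketch (all parameters are hypothetical, and for brevity the true participation model is used in place of a fitted one). The score predicts trial participation; its distribution quantifies trial-population dissimilarity; and inverse-probability weights align trial outcomes with the target population.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated target population; trial participation depends on covariate x.
# All parameters are hypothetical, chosen only to illustrate the metrics.
n = 10_000
x = rng.normal(size=n)
p_participate = 1 / (1 + np.exp(-(-2.0 + 1.0 * x)))  # participation propensity
in_trial = rng.random(n) < p_participate

# Metric 1: difference in mean propensity score, trial vs. target population.
delta = p_participate[in_trial].mean() - p_participate.mean()

# Metric 2: weight trial members to the population (inverse probability of
# participation) and compare a weighted outcome mean with the population mean.
y = 1.0 + 0.5 * x + rng.normal(scale=0.1, size=n)    # outcome depends on x
w = 1.0 / p_participate[in_trial]
unweighted = y[in_trial].mean()
weighted = np.average(y[in_trial], weights=w)        # ~ y.mean() if model holds
```

How closely the weighted trial outcomes track the outcomes observed in the population is exactly the diagnostic the paper proposes; in practice the propensity score would be estimated, e.g. by logistic regression on measured covariates.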
5.  Alcohol consumption trajectory patterns in adult women with HIV infection 
AIDS and behavior  2013;17(5):1705-1712.
HIV-infected women with excessive alcohol consumption are at risk for adverse health outcomes, but little is known about their long-term drinking trajectories. This analysis included longitudinal data, obtained from 1996–2006, from 2791 women with HIV from the Women’s Interagency HIV Study. Among these women, the proportion in each of five distinct drinking trajectories was: continued heavy drinking (3%), reduction from heavy to non-heavy drinking (4%), increase from non-heavy to heavy drinking (8%), continued non-heavy drinking (36%), and continued non-drinking (49%). Depressive symptoms, other substance use (crack/cocaine, marijuana, and tobacco), co-infection with HCV, and heavy drinking prior to enrollment were associated with trajectories involving future heavy drinking. In conclusion, many women with HIV change their drinking patterns over time. Clinicians and those providing alcohol-related interventions might target those with depression, current use of tobacco or illicit drugs, HCV infection, or a previous history of drinking problems.
PMCID: PMC3534826  PMID: 22836592
Alcohol consumption; women; HIV-infection; trajectories
6.  Hematologic, Hepatic, Renal and Lipid Laboratory Monitoring Following Initiation of Combination Antiretroviral Therapy in the United States, 2000–2010 
We assessed laboratory monitoring following combination antiretroviral therapy (cART) initiation among 3,678 patients in a large US multi-site clinical cohort, censoring participants at last clinic visit, cART change, or three years. Median days (interquartile range) to first hematologic, hepatic, renal and lipid tests were 30 (18–53), 31 (19–56), 33 (20–59) and 350 (96–1106), respectively. At one year, approximately 80% received more than two hematologic, hepatic, and renal tests consistent with guidelines. However, only 40% received one or more lipid tests. Monitoring was more frequent in specific subgroups, likely reflecting better clinic attendance or clinician perception of higher susceptibility to toxicities.
PMCID: PMC3654034  PMID: 23446495
Laboratory Monitoring; Antiretroviral Therapy; Antiretroviral Toxicity
7.  Assessing the component associations of the healthy worker survivor bias: occupational asbestos exposure and lung cancer mortality 
Annals of epidemiology  2013;23(6):334-341.
The healthy worker survivor bias is well-recognized in occupational epidemiology. Three component associations are necessary for this bias to occur: i) prior exposure and employment status; ii) employment status and subsequent exposure; and iii) employment status and mortality. Together, these associations result in time-varying confounding affected by prior exposure. We illustrate how these associations can be assessed using standard regression methods.
We use data from 2975 asbestos textile factory workers hired between January 1940 and December 1965 and followed for lung cancer mortality through December 2001.
At entry, median age was 24 years, with 42% female and 19% non-Caucasian. Over follow-up, 21% and 17% of person-years were classified as at work and exposed to any asbestos, respectively. For a 100 fiber-year/mL increase in cumulative asbestos, the covariate-adjusted hazard of leaving work decreased by 52% (95% confidence interval [CI], 46–58). The association between employment status and subsequent asbestos exposure was strong due to nonpositivity: 88.3% of person-years at work (95% CI, 87.0–89.5) were classified as exposed to any asbestos; no person-years were classified as exposed to asbestos after leaving work. Finally, leaving active employment was associated with a 48% (95% CI, 9–71) decrease in the covariate-adjusted hazard of lung cancer mortality.
We found strong associations for the components of the healthy worker survivor bias in these data. Standard methods, which fail to properly account for time-varying confounding affected by prior exposure, may provide biased estimates of the effect of asbestos on lung cancer mortality under these conditions.
PMCID: PMC3773512  PMID: 23683709
Epidemiologic methods; Occupational health; Healthy worker effect; Bias; Lung cancer; Mortality
8.  Sodium Intake and Incident Hypertension among Chinese Adults: Estimated Effects Across Three Different Exposure Periods 
Epidemiology (Cambridge, Mass.)  2013;24(3):410-418.
Although it is clear that there are short-term effects of sodium intake on blood pressure, little is known about the most relevant timing of sodium exposure for the onset of hypertension. This question can only be addressed in cohorts with repeated measures of sodium intake.
Using up to 7 measures of dietary sodium intake and blood pressure between 1991 and 2009, we compared baseline, the mean of all measures, and the most recent sodium intake in association with incident hypertension among 6578 adults aged 18 to 65 enrolled in the China Health and Nutrition Survey who were free of hypertension at baseline. We used survival methods that account for the interval-censored nature of this study, and inverse probability weights to generate adjusted survival curves and time-specific cumulative risk differences; hazard ratios were also estimated.
For the mean and most recent measures, the probability of hypertension-free survival was lowest among those in the highest sodium intake group compared to all other intake groups across the entire follow-up. In addition, the most recent sodium intake measure had a positive dose-response association with incident hypertension [risk difference at 11 years of follow-up = 0.04 (95% CI: −0.01, 0.09), 0.06 (0.00, 0.13), 0.18 (0.12, 0.24), and 0.20 (0.12, 0.27) for the second to fifth sodium intake groups compared to the lowest group, respectively]. Baseline sodium intake was not associated with incident hypertension.
These results suggest caution when using baseline sodium intake measures with long-term follow up.
PMCID: PMC3909658  PMID: 23466527
China; sodium intake; incident hypertension; interval-censored; adjusted survival curves
9.  Depressive Symptoms, HIV Medication Adherence, and HIV Clinical Outcomes in Tanzania: A Prospective, Observational Study 
PLoS ONE  2014;9(5):e95469.
Depressive symptoms have been shown to independently affect both antiretroviral therapy (ART) adherence and HIV clinical outcomes in high-income countries. We examined the prospective relationship between depressive symptoms and adherence, virologic failure, and suppressed immune function in people living with HIV/AIDS in Tanzania. Data from 403 study participants who were on stable ART and engaged in HIV clinical care were analyzed. We assessed crude and adjusted associations of depressive symptoms and ART adherence, both at baseline and at 12 months, using logistic regression. We used logistic generalized estimating equations to assess associations, with 95% confidence intervals (CI), between depressive symptoms and both virologic failure and suppressed immune function. Ten percent of participants reported moderate or severe depressive symptoms at baseline and 31% of participants experienced virologic failure (>150 copies/ml) over two years. Depressive symptoms were associated with greater odds of reported medication nonadherence at both baseline (odds ratio [OR] per 1-unit increase = 1.18, 95% CI [1.12, 1.24]) and 12 months (OR = 1.08, 95% CI [1.03, 1.14]). By contrast, increases in depressive symptom score were inversely related to both virologic failure (OR = 0.93, 95% CI [0.87, 1.00]) and immune system suppression (OR = 0.88, 95% CI [0.79, 0.99]), though the association between depressive symptoms and clinical outcomes was less precise than the association with nonadherence. Findings indicate a positive association between depressive symptoms and nonadherence, and also an inverse relationship between depressive symptoms and clinical outcomes, possibly due to informative loss to follow-up.
PMCID: PMC4010413  PMID: 24798428
10.  Analysis of Occupational Asbestos Exposure and Lung Cancer Mortality Using the G Formula 
American Journal of Epidemiology  2013;177(9):989-996.
We employed the parametric G formula to analyze lung cancer mortality in a cohort of textile manufacturing workers who were occupationally exposed to asbestos in South Carolina. A total of 3,002 adults with a median age of 24 years at enrollment (58% male, 81% Caucasian) were followed for 117,471 person-years between 1940 and 2001, and 195 lung cancer deaths were observed. Chrysotile asbestos exposure was measured in fiber-years per milliliter of air, and annual occupational exposures were estimated on the basis of detailed work histories. Sixteen percent of person-years involved exposure to asbestos, with a median exposure of 3.30 fiber-years/mL among those exposed. Lung cancer mortality by age 90 years under the observed asbestos exposure was 9.44%. In comparison with observed asbestos exposure, if the facility had operated under the current Occupational Safety and Health Administration asbestos exposure standard of <0.1 fibers/mL, we estimate that the cohort would have experienced 24% less lung cancer mortality by age 90 years (mortality ratio = 0.76, 95% confidence interval: 0.62, 0.94). A further reduction in asbestos exposure to a standard of <0.05 fibers/mL was estimated to have resulted in a minimal additional reduction in lung cancer mortality by age 90 years (mortality ratio = 0.75, 95% confidence interval: 0.61, 0.92).
PMCID: PMC3639723  PMID: 23558355
asbestos; bias (epidemiology); epidemiologic methods; healthy worker effect; occupations
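The parametric g-formula used above has a simple point-exposure special case, standardization, that conveys the core idea: set exposure for everyone, then average model-based risks over the observed confounder distribution. The sketch below uses simulated data with invented parameters, not the cohort's occupational histories.

```python
import numpy as np

rng = np.random.default_rng(1)

# Point-exposure special case of the g-formula (standardization), with a
# hypothetical confounder L, binary exposure A, and outcome Y; all
# parameters are invented for illustration.
n = 50_000
L = rng.binomial(1, 0.5, n)                       # confounder
A = rng.binomial(1, 0.2 + 0.4 * L)                # exposure more likely when L = 1
Y = rng.binomial(1, 0.05 + 0.10 * A + 0.10 * L)   # outcome risk

# "Fit" a saturated outcome model: observed risk within each (A, L) cell.
risk = {(a, l): Y[(A == a) & (L == l)].mean() for a in (0, 1) for l in (0, 1)}

# G-formula: set exposure for everyone, then average the model-based risk
# over the observed confounder distribution.
risk_exposed = np.mean([risk[(1, l)] for l in L])
risk_unexposed = np.mean([risk[(0, l)] for l in L])
risk_ratio = risk_exposed / risk_unexposed        # ~2 under these parameters
```

The full parametric g-formula extends this to time-varying exposures and confounders by simulating each follow-up interval forward under the intervention, which is what comparing observed exposure with an OSHA-standard cap requires.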
11.  Accounting for Misclassified Outcomes in Binary Regression Models Using Multiple Imputation With Internal Validation Data 
American Journal of Epidemiology  2013;177(9):904-912.
Outcome misclassification is widespread in epidemiology, but methods to account for it are rarely used. We describe the use of multiple imputation to reduce bias when validation data are available for a subgroup of study participants. This approach is illustrated using data from 308 participants in the multicenter Herpetic Eye Disease Study between 1992 and 1998 (48% female; 85% white; median age, 49 years). The odds ratio comparing the acyclovir group with the placebo group on the gold-standard outcome (physician-diagnosed herpes simplex virus recurrence) was 0.62 (95% confidence interval (CI): 0.35, 1.09). We masked ourselves to physician diagnosis except for a 30% validation subgroup used to compare methods. Multiple imputation (odds ratio (OR) = 0.60; 95% CI: 0.24, 1.51) was compared with naive analysis using self-reported outcomes (OR = 0.90; 95% CI: 0.47, 1.73), analysis restricted to the validation subgroup (OR = 0.57; 95% CI: 0.20, 1.59), and direct maximum likelihood (OR = 0.62; 95% CI: 0.26, 1.53). In simulations, multiple imputation and direct maximum likelihood had greater statistical power than did analysis restricted to the validation subgroup, yet all 3 provided unbiased estimates of the odds ratio. The multiple-imputation approach was extended to estimate risk ratios using log-binomial regression. Multiple imputation has advantages regarding flexibility and ease of implementation for epidemiologists familiar with missing data methods.
PMCID: PMC3983405  PMID: 24627573
bias (epidemiology); logistic regression; Monte Carlo method; sensitivity and specificity
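A minimal sketch of multiple imputation with internal validation data, on simulated data with hypothetical misclassification and outcome rates: fit P(true outcome | self-report, exposure) in the validation subgroup, impute the gold-standard outcome everywhere else, and average the log odds ratios across imputations (the Rubin's-rules point estimate; the paper also pools variances, omitted here for brevity).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: Y_true is the gold-standard outcome, Y_self a
# misclassified self-report, V flags a 30% internal validation subgroup in
# which Y_true is observed. All rates are invented for illustration.
n = 3000
X = rng.binomial(1, 0.5, n)                          # binary exposure
Y_true = rng.binomial(1, np.where(X == 1, 0.15, 0.25))
flip = rng.random(n) < 0.15                          # 15% misclassification
Y_self = np.where(flip, 1 - Y_true, Y_true)
V = rng.random(n) < 0.3

def log_or(x, y):
    a = ((x == 1) & (y == 1)).sum(); b = ((x == 1) & (y == 0)).sum()
    c = ((x == 0) & (y == 1)).sum(); d = ((x == 0) & (y == 0)).sum()
    return np.log(a * d / (b * c))

# Imputation model from the validation subgroup: P(Y_true = 1 | Y_self, X).
p_true = {(s, xx): Y_true[V & (Y_self == s) & (X == xx)].mean()
          for s in (0, 1) for xx in (0, 1)}
probs = np.array([p_true[(s, xx)] for s, xx in zip(Y_self, X)])

m, draws = 20, []
for _ in range(m):
    Y_imp = Y_true.copy()
    Y_imp[~V] = rng.binomial(1, probs[~V])           # impute outside validation
    draws.append(log_or(X, Y_imp))

pooled_log_or = np.mean(draws)   # Rubin's rules point estimate (~ -0.64 here)
```

In practice the imputation model would be a regression including all analysis variables, but the structure is the same: the validation subgroup anchors the relationship between the error-prone and gold-standard outcomes.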
12.  Timing of HAART Initiation and Clinical Outcomes among HIV-1 Seroconverters 
Archives of internal medicine  2011;171(17):1560-1569.
To estimate the clinical benefit of HAART initiation versus deferral in a given month among patients with CD4 counts <800 cells/µL.
In this observational cohort study of HIV-1 seroconverters from CASCADE, we constructed monthly sequential nested subcohorts from 1/1996 to 5/2009 including all eligible HAART-naïve, AIDS-free individuals with a CD4 count <800 cells/µL. The primary outcome was time to AIDS or death among those who initiated HAART in the baseline month compared to those who did not, pooled across subcohorts and stratified by CD4. Using inverse-probability-of-treatment-weighted survival curves and Cox proportional hazards models, we estimated the absolute and relative effect of treatment with robust 95% confidence intervals (in parentheses).
Of 9,455 patients with 52,268 person-years of follow-up, 812 (8.6%) developed AIDS and 544 (5.8%) died. Within CD4 strata of 200–349, 350–499, and 500–799 cells/µL, HAART initiation was associated with adjusted hazard ratios for AIDS/death of 0.59 (0.43,0.81), 0.75 (0.49,1.14), and 1.10 (0.67,1.79), respectively; and with adjusted 3-year cumulative risk differences of −4.8% (−7.0%,−2.6%), −2.9% (−5.0%,−0.9%), and 0.3% (−3.7%,4.2%), respectively. In the analysis of all-cause mortality, HAART initiation was associated with adjusted hazard ratios of 0.71 (0.44,1.15), 0.51 (0.33,0.80) and 1.02 (0.49,2.12), respectively. Numbers needed to treat to prevent one AIDS event or death within 3 years were 21 (14,38) and 34 (20,115) in CD4 strata of 200–349 and 350–499 cells/µL, respectively.
Compared to deferring in a given month, HAART initiation at CD4 counts <500 (but not 500–799) cells/µL was associated with slower disease progression.
PMCID: PMC3960856  PMID: 21949165
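The numbers needed to treat quoted above are the reciprocals of the 3-year cumulative risk differences. Taking the reciprocal of the rounded published risk differences approximately reproduces the published NNTs (the paper computed them from unrounded values):

```python
def nnt(risk_difference):
    """Number needed to treat: reciprocal of the absolute risk difference."""
    return 1.0 / abs(risk_difference)

# 3-year risk differences reported above for the CD4 200-349 and 350-499
# strata were -4.8% and -2.9%, giving NNTs of 21 and 34.
nnt_cd4_200_349 = round(nnt(-0.048))   # 21
nnt_cd4_350_499 = round(nnt(-0.029))   # 34
```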
13.  Missing Doses in the Life Span Study of Japanese Atomic Bomb Survivors 
American Journal of Epidemiology  2013;177(6):562-568.
The Life Span Study of atomic bomb survivors is an important source of risk estimates used to inform radiation protection and compensation. Interviews with survivors in the 1950s and 1960s provided information needed to estimate radiation doses for survivors proximal to ground zero. Because of a lack of interview or the complexity of shielding, doses are missing for 7,058 of the 68,119 proximal survivors. Recent analyses excluded people with missing doses, and despite the protracted collection of interview information necessary to estimate some survivors' doses, defined start of follow-up as October 1, 1950, for everyone. We describe the prevalence of missing doses and its association with mortality, distance from hypocenter, city, age, and sex. Missing doses were more common among Nagasaki residents than among Hiroshima residents (prevalence ratio = 2.05; 95% confidence interval: 1.96, 2.14), among people who were closer to ground zero than among those who were far from it, among people who were younger at enrollment than among those who were older, and among males than among females (prevalence ratio = 1.22; 95% confidence interval: 1.17, 1.28). Missing dose was associated with all-cancer and leukemia mortality, particularly during the first years of follow-up (all-cancer rate ratio = 2.16, 95% confidence interval: 1.51, 3.08; and leukemia rate ratio = 4.28, 95% confidence interval: 1.72, 10.67). Accounting for missing dose and late entry should reduce bias in estimated dose-mortality associations.
PMCID: PMC3592497  PMID: 23429722
atomic bombs; cohort studies; ionizing radiation; missing data; mortality; nuclear weapons
14.  Joint effects of alcohol consumption and high-risk sexual behavior on HIV seroconversion among men who have sex with men 
AIDS (London, England)  2013;27(5):815-823.
PMCID: PMC3746520  PMID: 23719351
Alcohol Drinking; HIV Seropositivity; Men who Have Sex with Men; Prospective Studies; Sexual Behavior
15.  A prospective study of alcohol consumption and HIV acquisition among injection drug users 
AIDS (London, England)  2011;25(2):221-228.
To estimate the effect of alcohol consumption on HIV acquisition while appropriately accounting for confounding by time-varying risk factors.
African American injection drug users in the AIDS Link to Intravenous Experience cohort study. Participants were recruited and followed with semiannual visits in Baltimore, Maryland between 1988 and 2008.
Marginal structural models were used to estimate the effect of alcohol consumption on HIV acquisition.
At entry, 28% of the 1,525 participants were female; median (quartiles) age was 37 (32; 42) years with 10 (10; 12) years of formal education. During follow-up, 155 participants acquired HIV, and the distribution of alcohol consumption was 24%, 24%, 26%, 17%, and 9% for 0, 1–5, 6–20, 21–50, and 51–140 drinks/week over the prior two years, respectively. In analyses accounting for socio-demographic factors, drug use, and sexual activity, hazard ratios for participants reporting 1–5, 6–20, 21–50, and 51–140 drinks/week in the prior two years compared to participants who reported 0 drinks/week were 1.09 (0.60, 1.98), 1.18 (0.66, 2.09), 1.66 (0.94, 2.93), and 2.12 (1.15, 3.90), respectively. A trend test indicated a dose-response relationship between alcohol consumption and HIV acquisition (P for trend = 9.7×10−4).
These findings indicate a dose-response relationship between alcohol consumption and subsequent HIV acquisition, independent of measured risk factors.
PMCID: PMC3006640  PMID: 21099668
Alcohol consumption; HIV infection; Bias; Cohort studies; Injection drug users
16.  Marginal Structural Models for Case-Cohort Study Designs to Estimate the Association of Antiretroviral Therapy Initiation With Incident AIDS or Death 
American Journal of Epidemiology  2012;175(5):381-390.
To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding.
PMCID: PMC3282878  PMID: 22302074
acquired immunodeficiency syndrome; case-cohort studies; cohort studies; confounding bias; HIV; pharmacoepidemiology; selection bias
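The cost-efficiency of the case-cohort design comes from analyzing only a random subcohort plus all cases. A simplified Barlow-type weighting scheme, with hypothetical counts rather than the authors' data or exact estimator, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simplified Barlow-type case-cohort weighting with hypothetical numbers:
# a 20% random subcohort is drawn; cases outside the subcohort are kept,
# and non-case subcohort members are up-weighted by the inverse sampling
# fraction.
n, sampling_fraction = 950, 0.20
is_case = rng.random(n) < 0.22
in_subcohort = rng.random(n) < sampling_fraction

analyzed = in_subcohort | is_case                    # all others are dropped
weights = np.where(is_case, 1.0, 1.0 / sampling_fraction)[analyzed]

# In expectation the weighted count reconstructs the full cohort size, so
# weighted risk sets approximate full-cohort risk sets at a fraction of the
# data-collection cost.
effective_n = weights.sum()
```

In the paper these design weights are multiplied by the inverse-probability-of-treatment and dropout weights of the marginal structural Cox model.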
17.  Bayesian Posterior Distributions Without Markov Chains 
American Journal of Epidemiology  2012;175(5):368-375.
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984–1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
PMCID: PMC3282880  PMID: 22306565
Bayes theorem; epidemiologic methods; inference; Monte Carlo method; posterior distribution; simulation
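Rejection sampling is easy to demonstrate on a toy binomial problem (hypothetical data, not the study's): draw a parameter from the prior, accept it with probability likelihood / maximum likelihood, and the accepted draws are samples from the posterior, checkable here against the conjugate Beta result.

```python
import math
import random

random.seed(3)

# Toy example: k = 7 events in n = 20 trials, uniform prior on the risk p;
# the posterior is simulated by rejection sampling.
k, n = 7, 20

def likelihood(p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

L_max = likelihood(k / n)        # likelihood is maximized at the MLE k/n

draws = []
while len(draws) < 5000:
    p = random.random()                          # sample from the prior
    if random.random() < likelihood(p) / L_max:  # accept w.p. L(p)/L_max
        draws.append(p)                          # accepted draws ~ posterior

posterior_mean = sum(draws) / len(draws)
# Conjugate check: posterior is Beta(k+1, n-k+1) with mean 8/22 ~ 0.364
```

As the abstract notes, the method's transparency comes at a cost: acceptance rates fall quickly as the dimension of the parameter grows, which is why the cohort example ran much longer than MCMC.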
18.  Association of germline microRNA SNPs in pre-miRNA flanking region and breast cancer risk and survival: the Carolina Breast Cancer Study 
Cancer causes & control : CCC  2013;24(6):1099-1109.
We evaluated common germline variation in the 5′ region proximal to precursor (pre-)miRNA gene sequences for association with breast cancer risk and survival among African Americans and Caucasians.
We genotyped 9 single nucleotide polymorphisms (SNPs) within 6 miRNA gene regions previously associated with breast cancer, in 1972 cases and 1776 controls. In a race-stratified analysis using unconditional logistic regression, odds ratios (OR) and 95% confidence intervals (CI) were calculated to evaluate SNP association with breast cancer risk. Additionally, hazard ratios (HR) for breast cancer-specific mortality were estimated.
Two miR-185 SNPs provided suggestive evidence of an inverse association with breast cancer risk among African Americans (rs2008591, OR = 0.72 (95% CI = 0.53–0.98, p-value = 0.04) and rs887205, OR = 0.71 (95% CI = 0.52–0.96, p-value = 0.03), respectively). Two SNPs, miR-34b/34c (rs4938723, HR = 0.57 (95% CI = 0.37–0.89, p-value = 0.01)) and miR-206 (rs6920648, HR = 0.77 (95% CI = 0.61–0.97, p-value = 0.02)), provided evidence of association with breast cancer survival. Further adjustment for stage resulted in more modest associations with survival (HR = 0.65 (95% CI = 0.42–1.02, p-value = 0.06) and HR = 0.79 (95% CI = 0.62–1.00, p-value = 0.05), respectively).
Our results suggest that germline variation in the 5′ region proximal to pre-miRNA gene sequences may be associated with breast cancer risk among African Americans and with breast cancer-specific survival generally; however, further validation is needed to confirm these findings.
PMCID: PMC3804004  PMID: 23526039
microRNA; breast cancer; germline; single nucleotide polymorphism; risk; survival
19.  Association of early HIV viremia with mortality after HIV-associated lymphoma 
AIDS (London, England)  2013;27(15):2365-2373.
To examine the association between early HIV viremia and mortality after HIV-associated lymphoma.
Multicenter observational cohort study.
Center for AIDS Research Network of Integrated Clinical Systems cohort.
HIV-infected patients with lymphoma diagnosed between 1996 and 2011, who were alive 6 months after lymphoma diagnosis and with ≥2 HIV RNA values during the 6 months after lymphoma diagnosis.
Cumulative HIV viremia during the 6 months after lymphoma diagnosis, expressed as viremia copy-6-months.
Main outcome measure
All-cause mortality between 6 months and 5 years after lymphoma diagnosis.
Of 224 included patients, 183 (82%) had non-Hodgkin lymphoma (NHL) and 41 (18%) had Hodgkin lymphoma (HL). At lymphoma diagnosis, 105 (47%) patients were on antiretroviral therapy (ART), median CD4 count was 148 cells/µL (IQR 54–322), and 33% had suppressed HIV RNA (<400 copies/mL). In adjusted analyses, mortality was associated with older age [adjusted hazard ratio (AHR) 1.37 per decade increase, 95% CI 1.03–1.83], lymphoma occurrence on ART (AHR 1.63, 95% CI 1.02–2.63), lower CD4 count (AHR 0.75 per 100 cells/µL increase, 95% CI 0.64–0.89), and higher early cumulative viremia (AHR 1.35 per log10 copies × 6-months/mL, 95% CI 1.11–1.65). The detrimental effect of early cumulative viremia was consistent across patient groups defined by ART status, CD4 count, and histology.
Each additional log10 copies × 6-months/mL of cumulative HIV RNA exposure during the 6 months after lymphoma diagnosis was associated with a 35% increase in subsequent mortality. These results suggest that early and effective ART during chemotherapy may improve survival.
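Cumulative viremia in "copy-6-months" is, in one common construction, the area under the viral-load curve over the 6-month window, analogous to copy-years of viremia. The trapezoidal sketch below is a plausible implementation under that assumption, not the paper's exact formula.

```python
def copy_six_months(times_months, copies_per_ml):
    """Trapezoidal area under the viral-load curve, expressed in
    copies x 6-months/mL. Inputs are visit times (months since
    lymphoma diagnosis) and HIV RNA values (copies/mL); this is an
    illustrative construction, not the study's published code."""
    auc = 0.0
    for i in range(1, len(times_months)):
        dt = times_months[i] - times_months[i - 1]
        auc += (copies_per_ml[i] + copies_per_ml[i - 1]) / 2.0 * dt
    return auc / 6.0  # convert month-units to 6-month units

# A patient held at 1000 copies/mL for 6 months accrues
# 1000 copies x 6-months/mL under this definition.
exposure = copy_six_months([0, 3, 6], [1000, 1000, 1000])
```

In the paper's analysis the hazard ratio applies per log10 of this quantity, so the log would be taken before modeling.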
PMCID: PMC3773290  PMID: 23736149
AIDS; Burkitt lymphoma; diffuse large B-cell lymphoma; HIV; Hodgkin lymphoma; lymphoma; non-Hodgkin lymphoma
20.  A comparison of methods to estimate the hazard ratio under conditions of time-varying confounding and nonpositivity 
Epidemiology (Cambridge, Mass.)  2011;22(5):718-723.
In occupational epidemiologic studies, the healthy-worker survivor effect refers to a process that leads to bias in the estimates of an association between cumulative exposure and a health outcome. In these settings, work status acts both as an intermediate and confounding variable, and may violate the positivity assumption (the presence of exposed and unexposed observations in all strata of the confounder). Using Monte Carlo simulation, we assess the degree to which crude, work-status adjusted, and weighted (marginal structural) Cox proportional hazards models are biased in the presence of time-varying confounding and nonpositivity. We simulate data representing time-varying occupational exposure, work status, and mortality. Bias, coverage, and root mean squared error (MSE) were calculated relative to the true marginal exposure effect in a range of scenarios. For a base-case scenario, using crude, adjusted, and weighted Cox models, respectively, the hazard ratio was biased downward 19%, 9%, and 6%; 95% confidence interval coverage was 48%, 85%, and 91%; and root MSE was 0.20, 0.13, and 0.11. Although marginal structural models were less biased in most scenarios studied, neither standard nor marginal structural Cox proportional hazards models fully resolve the bias encountered under conditions of time-varying confounding and nonpositivity.
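The performance metrics reported above (bias, 95% CI coverage, root MSE) are standard Monte Carlo summaries computed across simulation replicates. The generic sketch below shows those three metrics on the log hazard ratio scale; the inputs are hypothetical and this is not the paper's simulation code.

```python
import math

def simulation_metrics(estimates, std_errors, truth, z=1.96):
    """Bias, 95% CI coverage, and root MSE of log-hazard-ratio
    estimates against the true value, across Monte Carlo replicates.
    `estimates` and `std_errors` are per-replicate values from some
    fitted model (crude, adjusted, or weighted Cox in the paper)."""
    n = len(estimates)
    bias = sum(e - truth for e in estimates) / n
    coverage = sum(
        (e - z * s) <= truth <= (e + z * s)
        for e, s in zip(estimates, std_errors)) / n
    rmse = math.sqrt(sum((e - truth) ** 2 for e in estimates) / n)
    return bias, coverage, rmse
```

Comparing these metrics between estimators, as the paper does for crude, adjusted, and weighted models, only requires running this summary once per estimator on the same simulated datasets.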
PMCID: PMC3155387  PMID: 21747286
21.  A simulation study of finite-sample properties of marginal structural Cox proportional hazards models 
Statistics in medicine  2012;31(19):2098-2109.
Motivated by a previously published study of HIV treatment, we simulated data subject to time-varying confounding affected by prior treatment to examine some finite-sample properties of marginal structural Cox proportional hazards models. We compared (a) unadjusted, (b) regression-adjusted, (c) unstabilized and (d) stabilized marginal structural (inverse probability-of-treatment [IPT] weighted) model estimators of effect in terms of bias, standard error, root mean squared error (MSE) and 95% confidence limit coverage over a range of research scenarios, including relatively small sample sizes and ten study assessments. In the base-case scenario resembling the motivating example, where the true hazard ratio was 0.5, both IPT-weighted analyses were unbiased while crude and adjusted analyses showed substantial bias towards and across the null. Stabilized IPT-weighted analyses remained unbiased across a range of scenarios, including relatively small sample size; however, the standard error was generally smaller in crude and adjusted models. In many cases, unstabilized weighted analysis showed a substantial increase in standard error compared to other approaches. Root MSE was smallest in the IPT-weighted analyses for the base-case scenario. In situations where time-varying confounding affected by prior treatment was absent, IPT-weighted analyses were less precise and therefore had greater root MSE compared with adjusted analyses. The 95% confidence limit coverage was close to nominal for all stabilized IPT-weighted but poor in crude, adjusted, and unstabilized IPT-weighted analysis. Under realistic scenarios, marginal structural Cox proportional hazards models performed according to expectations based on large-sample theory and provided accurate estimates of the hazard ratio.
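A stabilized IPT weight for one person-period is Pr(A=a)/Pr(A=a|L), and the longitudinal weight is the product of per-period weights over follow-up. The sketch below assumes the treatment-model probabilities have already been estimated (e.g., by logistic regression), which is the part omitted here.

```python
import math

def stabilized_iptw(treated, p_marginal, p_conditional):
    """Stabilized inverse probability-of-treatment weight for one
    person-period: Pr(A=a) / Pr(A=a | covariate history L).
    `treated` is 0/1; the probabilities refer to treatment (A=1)."""
    if treated:
        return p_marginal / p_conditional
    return (1 - p_marginal) / (1 - p_conditional)

def longitudinal_weight(treated_seq, p_marginal_seq, p_conditional_seq):
    """Cumulative stabilized weight: product of per-period weights."""
    return math.prod(
        stabilized_iptw(a, pm, pc)
        for a, pm, pc in zip(treated_seq, p_marginal_seq, p_conditional_seq))
```

An unstabilized weight replaces the numerator with 1, which is what inflates the variance the abstract describes for unstabilized analyses.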
PMCID: PMC3641777  PMID: 22492660
Bias; Causal inference; Marginal structural models; Monte Carlo study
22.  The Parametric G-Formula to Estimate the Effect of Highly Active Antiretroviral Therapy on Incident AIDS or Death 
Statistics in medicine  2012;31(18):2000-2009.
The parametric g-formula can be used to contrast the distribution of potential outcomes under arbitrary treatment regimes. Like g-estimation of structural nested models and inverse probability weighting of marginal structural models, the parametric g-formula can appropriately adjust for measured time-varying confounders that are affected by prior treatment. However, there have been few implementations of the parametric g-formula to date. Here, we apply the parametric g-formula to assess the impact of highly active antiretroviral therapy (HAART) on time to AIDS or death in two US-based HIV cohorts including 1,498 participants. These participants contributed approximately 7,300 person-years of follow-up, of which 49% was exposed to HAART, and 382 events occurred; 259 participants were censored due to drop out. Using the parametric g-formula, we estimated that antiretroviral therapy substantially reduces the hazard of AIDS or death (HR=0.55; 95% confidence limits [CL]: 0.42, 0.71). This estimate was similar to one previously reported using a marginal structural model (HR=0.54; 95% CL: 0.38, 0.78). The 6.5-year difference in risk of AIDS or death was 13% (95% CL: 8%, 18%). Results were robust to assumptions about temporal ordering, and extent of history modeled, for time-varying covariates. The parametric g-formula is a viable alternative to inverse probability weighting of marginal structural models and g-estimation of structural nested models for the analysis of complex longitudinal data.
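The parametric g-formula is usually implemented by Monte Carlo: simulate covariate histories forward from fitted models while forcing treatment to follow the regime of interest, then average the simulated risks. The skeleton below is a schematic of that loop; all model inputs are hypothetical stand-ins and the model-fitting step is omitted entirely.

```python
import random

def g_formula_risk(n_sim, horizon, draw_baseline, draw_confounder,
                   event_prob, regime):
    """Schematic Monte Carlo g-formula. Arguments are placeholder
    callables standing in for fitted models:
      draw_baseline()        -> baseline covariate value
      draw_confounder(l, a)  -> next covariate given history (l, a)
      event_prob(l, a)       -> per-period outcome probability
      regime(t, l)           -> treatment forced by the regime
    Returns the simulated cumulative incidence under the regime."""
    events = 0
    for _ in range(n_sim):
        l = draw_baseline()
        for t in range(horizon):
            a = regime(t, l)          # treatment set by the regime, not observed
            if random.random() < event_prob(l, a):
                events += 1
                break
            l = draw_confounder(l, a)  # covariate evolves given (l, a)
    return events / n_sim
```

Contrasting two calls with different `regime` functions (e.g., "always treat" vs. "never treat") yields the kind of risk difference reported in the abstract, with confidence limits typically obtained by bootstrapping the whole procedure.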
PMCID: PMC3641816  PMID: 22495733
Cohort study; Confounding; g-formula; HIV/AIDS; Monte Carlo methods
23.  Estimating the effects of multiple time-varying exposures using joint marginal structural models: alcohol consumption, injection drug use, and HIV acquisition 
Epidemiology (Cambridge, Mass.)  2012;23(4):574-582.
The joint effects of multiple exposures on an outcome are frequently of interest in epidemiologic research. In 2001, Hernán, Brumback, and Robins (JASA 2001; 96: 440–448) presented methods for estimating the joint effects of multiple time-varying exposures subject to time-varying confounding affected by prior exposure using joint marginal structural models. Nonetheless, the use of these joint models is rare in the applied literature. Minimal uptake of these joint models, in contrast to the now widely used standard marginal structural model, is due in part to a lack of examples demonstrating the method. In this paper, we review the assumptions necessary for unbiased estimation of joint effects as well as the distinction between interaction and effect measure modification. We demonstrate the use of marginal structural models for estimating the joint effects of alcohol consumption and injection drug use on HIV acquisition, using data from 1,525 injection drug users in the AIDS Link to Intravenous Experience cohort study. In the joint model, the hazard ratio (HR) for heavy drinking in the absence of any drug injections was 1.58 (95% confidence interval= 0.67–3.73). The HR for any drug injections in the absence of heavy drinking was 1.78 (1.10–2.89). The HR for heavy drinking and any drug injections was 2.45 (1.45–4.12). The P values for multiplicative and additive interaction were 0.7620 and 0.9200, respectively, indicating a lack of departure from effects that multiply or add. However, we could not rule out interaction on either scale due to imprecision.
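Given the three joint-model hazard ratios reported above, departure from multiplicativity and an RERI-style additive contrast can be computed directly. The sketch below uses common definitions of these contrasts; the paper's formal tests of interaction are not necessarily computed this way.

```python
def interaction_contrasts(hr_a, hr_b, hr_ab):
    """Interaction contrasts from joint-model hazard ratios, where
    hr_a is exposure A alone, hr_b is B alone, and hr_ab is both,
    each versus neither exposure.
    Returns (multiplicative ratio, RERI-style additive contrast):
    1.0 and 0.0 respectively indicate no departure from effects
    that multiply or add."""
    multiplicative = hr_ab / (hr_a * hr_b)
    additive = hr_ab - hr_a - hr_b + 1  # relative excess risk due to interaction
    return multiplicative, additive

# Point estimates from the abstract: heavy drinking alone 1.58,
# any injections alone 1.78, both 2.45
mult, reri = interaction_contrasts(1.58, 1.78, 2.45)
```

With these point estimates the multiplicative ratio is close to 1 and the RERI close to 0, matching the abstract's conclusion of no clear departure from either scale, though (as the authors note) imprecision prevents ruling interaction out.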
PMCID: PMC3367098  PMID: 22495473
24.  Aspirin in the primary prevention of cardiovascular disease in the Women’s Health Study: Effect of noncompliance 
European Journal of Epidemiology  2012;27(6):431-438.
Randomized evidence for aspirin in the primary prevention of cardiovascular disease (CVD) among women is limited and suggests at most a modest effect on total CVD. Lack of compliance, however, can bias estimated effects toward the null. We used marginal structural models (MSMs) to estimate the etiologic effect of continuous aspirin use on CVD events among 39,876 apparently healthy female health professionals aged 45 years and older in the Women's Health Study, a randomized trial of 100 mg aspirin every other day versus placebo. As-treated analyses and MSMs controlled for time-varying determinants of aspirin use and CVD. Predictors of aspirin use differed by randomized group and prior use and included medical history, CVD risk factors, and intermediate CVD events. Previously reported intent-to-treat analyses found small non-significant effects of aspirin on total CVD (hazard ratio (HR)=0.91, 95% confidence interval (CI)=0.81–1.03) and CVD mortality (HR=0.95, 95% CI=0.74–1.22). As-treated analyses were similar for total CVD, with a slight reduction in CVD mortality (HR=0.88, 95% CI=0.67–1.16). MSMs, which adjusted for non-compliance, were similar for total CVD (HR=0.93; 95% CI: 0.81, 1.07) but suggested lower CVD mortality with aspirin use (HR=0.76; 95% CI: 0.54, 1.08). Adjusting for non-compliance had little impact on the estimated effect of aspirin on total CVD, but strengthened the effect on CVD mortality. These results support a limited effect of low-dose aspirin on total CVD in women, but a potential benefit for CVD mortality.
PMCID: PMC3383873  PMID: 22699516
Aspirin; cardiovascular disease; marginal structural model; myocardial infarction; stroke
25.  Design and Methods of the Chronic Kidney Disease in Children (CKiD) Prospective Cohort Study 
An estimated 650,000 Americans will have end-stage renal disease (ESRD) by 2010. Young adults with kidney failure often develop progressive chronic kidney disease (CKD) in childhood and adolescence. The Chronic Kidney Disease in Children (CKiD) prospective cohort study of 540 children aged 1 to 16 years with estimated glomerular filtration rate (GFR) between 30 and 75 ml/min per 1.73 m² was established to identify novel risk factors for CKD progression; the impact of kidney function decline on growth, cognition, and behavior; and the evolution of cardiovascular disease risk factors. Annually, a physical examination documenting height, weight, Tanner stage, and standardized BP is conducted, and cognitive function, quality of life, nutritional, and behavioral questionnaires are completed by the parent or the child. Samples of serum, plasma, urine, hair, and fingernail clippings are stored in biosample and genetics repositories. GFR is measured annually for 2 years, then every other year, using iohexol, HPLC creatinine, and cystatin C. Using age, gender, and serial measurements of Tanner stage, height, and creatinine, compared with iohexol GFR, the study expects to develop a formula to estimate GFR that improves on traditional pediatric GFR estimating equations when applied longitudinally. Every other year, echocardiography and ambulatory BP monitoring will assess risk for cardiovascular disease. The primary outcome is the rate of decline of GFR. The CKiD study will be the largest North American multicenter study of pediatric CKD.
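The kind of height/creatinine-based estimating equation the study set out to develop can be illustrated with the "bedside Schwartz" formula, whose updated 0.413 coefficient was derived from later CKiD data; it is shown here only as an example of the form such equations take, not as output of this design paper.

```python
def bedside_schwartz_egfr(height_cm, serum_creatinine_mg_dl):
    """Bedside Schwartz estimate of GFR in children, in
    mL/min per 1.73 m^2: eGFR = 0.413 * height(cm) / SCr(mg/dL).
    Illustrative only; CKiD's longitudinal equations also use age,
    gender, Tanner stage, and cystatin C."""
    return 0.413 * height_cm / serum_creatinine_mg_dl

# e.g., a 150 cm child with serum creatinine 0.5 mg/dL
egfr = bedside_schwartz_egfr(150, 0.5)
```

Equations of this form are calibrated against directly measured (iohexol) GFR, which is why CKiD collects both.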
PMCID: PMC3630231  PMID: 17699320
