Although it is clear that there are short-term effects of sodium intake on blood pressure, little is known about the most relevant timing of sodium exposure for the onset of hypertension. This question can only be addressed in cohorts with repeated measures of sodium intake.
Using up to 7 measures of dietary sodium intake and blood pressure between 1991 and 2009, we compared baseline sodium intake, the mean of all measures, and the most recent measure in association with incident hypertension among 6578 adults aged 18 to 65, enrolled in the China Health and Nutrition Survey and free of hypertension at baseline. We used survival methods that account for the interval-censored nature of this study, and inverse probability weights to generate adjusted survival curves and time-specific cumulative risk differences; hazard ratios were also estimated.
For the mean and most recent measures, the probability of hypertension-free survival was lowest among those in the highest sodium intake group compared with all other intake groups across the entire follow-up. In addition, the most recent sodium intake measure had a positive dose-response association with incident hypertension [risk difference at 11 years of follow-up = 0.04 (95% CI: −0.01, 0.09), 0.06 (0.00, 0.13), 0.18 (0.12, 0.24), and 0.20 (0.12, 0.27) for the second to fifth sodium intake groups compared with the lowest group, respectively]. Baseline sodium intake was not associated with incident hypertension.
These results suggest caution when using baseline sodium intake measures in studies with long-term follow-up.
China; sodium intake; incident hypertension; interval-censored; adjusted survival curves
To estimate the clinical benefit of HAART initiation versus deferral in a given month among patients with CD4 counts <800 cells/µL.
In this observational cohort study of HIV-1 seroconverters from CASCADE, we constructed monthly sequential nested subcohorts from 1/1996 to 5/2009, including all eligible HAART-naïve, AIDS-free individuals with a CD4 count <800 cells/µL. The primary outcome was time to AIDS or death among those who initiated HAART in the baseline month compared with those who did not, pooled across subcohorts and stratified by CD4 count. Using inverse-probability-of-treatment-weighted survival curves and Cox proportional hazards models, we estimated the absolute and relative effects of treatment with robust 95% confidence intervals (in parentheses).
Of 9,455 patients with 52,268 person-years of follow-up, 812 (8.6%) developed AIDS and 544 (5.8%) died. Within CD4 strata of 200–349, 350–499, and 500–799 cells/µL, HAART initiation was associated with adjusted hazard ratios for AIDS/death of 0.59 (0.43,0.81), 0.75 (0.49,1.14), and 1.10 (0.67,1.79), respectively; and with adjusted 3-year cumulative risk differences of −4.8% (−7.0%,−2.6%), −2.9% (−5.0%,−0.9%), and 0.3% (−3.7%,4.2%), respectively. In the analysis of all-cause mortality, HAART initiation was associated with adjusted hazard ratios of 0.71 (0.44,1.15), 0.51 (0.33,0.80) and 1.02 (0.49,2.12), respectively. Numbers needed to treat to prevent one AIDS event or death within 3 years were 21 (14,38) and 34 (20,115) in CD4 strata of 200–349 and 350–499 cells/µL, respectively.
Compared to deferring in a given month, HAART initiation at CD4 counts <500 (but not 500–799) cells/µL was associated with slower disease progression.
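The numbers needed to treat reported above are the reciprocals of the 3-year adjusted cumulative risk differences; a minimal Python sketch of that arithmetic (illustrative only):

```python
# Number needed to treat (NNT): reciprocal of the absolute risk difference.
def nnt(risk_difference):
    """NNT to prevent one event over the risk-difference horizon."""
    return 1 / abs(risk_difference)

# 3-year adjusted risk differences reported above for the CD4 200-349 and
# 350-499 cells/uL strata
print(round(nnt(-0.048)))  # -> 21
print(round(nnt(-0.029)))  # -> 34
```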
The Life Span Study of atomic bomb survivors is an important source of risk estimates used to inform radiation protection and compensation. Interviews with survivors in the 1950s and 1960s provided information needed to estimate radiation doses for survivors proximal to ground zero. Because of a lack of interview or the complexity of shielding, doses are missing for 7,058 of the 68,119 proximal survivors. Recent analyses excluded people with missing doses, and despite the protracted collection of interview information necessary to estimate some survivors' doses, defined start of follow-up as October 1, 1950, for everyone. We describe the prevalence of missing doses and its association with mortality, distance from hypocenter, city, age, and sex. Missing doses were more common among Nagasaki residents than among Hiroshima residents (prevalence ratio = 2.05; 95% confidence interval: 1.96, 2.14), among people who were closer to ground zero than among those who were far from it, among people who were younger at enrollment than among those who were older, and among males than among females (prevalence ratio = 1.22; 95% confidence interval: 1.17, 1.28). Missing dose was associated with all-cancer and leukemia mortality, particularly during the first years of follow-up (all-cancer rate ratio = 2.16, 95% confidence interval: 1.51, 3.08; and leukemia rate ratio = 4.28, 95% confidence interval: 1.72, 10.67). Accounting for missing dose and late entry should reduce bias in estimated dose-mortality associations.
atomic bombs; cohort studies; ionizing radiation; missing data; mortality; nuclear weapons
To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding.
acquired immunodeficiency syndrome; case-cohort studies; cohort studies; confounding bias; HIV; pharmacoepidemiology; selection bias
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984–1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
Bayes theorem; epidemiologic methods; inference; Monte Carlo method; posterior distribution; simulation
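The rejection-sampling idea described above can be sketched by drawing parameter values from the prior and accepting each draw with probability proportional to its likelihood. The sketch below uses a uniform prior on a single risk parameter and hypothetical binomial data, not the paper's case-control data or models:

```python
import random

random.seed(42)

# Toy rejection sampler: draw from a uniform prior on a risk p, then accept
# each draw with probability likelihood(p) / max likelihood.
events, trials = 12, 40  # hypothetical data

def likelihood(p):
    return p**events * (1 - p)**(trials - events)  # binomial kernel

l_max = likelihood(events / trials)  # bound attained at the MLE

draws = []
while len(draws) < 4000:
    p = random.random()                        # prior draw
    if random.random() < likelihood(p) / l_max:
        draws.append(p)                        # accepted posterior draw

draws.sort()
median = draws[len(draws) // 2]
interval = (draws[100], draws[3899])           # central 95% posterior interval
```

As the abstract notes, this transparency comes at a computational price: the acceptance step discards most prior draws, so run time grows quickly with model complexity.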
We evaluated common germline variation in the 5′ region proximal to precursor (pre-)miRNA gene sequences for association with breast cancer risk and survival among African Americans and Caucasians.
We genotyped 9 single nucleotide polymorphisms (SNPs) within 6 miRNA gene regions previously associated with breast cancer, in 1972 cases and 1776 controls. In a race-stratified analysis using unconditional logistic regression, odds ratios (OR) and 95% confidence intervals (CI) were calculated to evaluate SNP association with breast cancer risk. Additionally, hazard ratios (HR) for breast cancer-specific mortality were estimated.
Two miR-185 SNPs provided suggestive evidence of an inverse association with breast cancer risk among African Americans (rs2008591, OR = 0.72, 95% CI = 0.53–0.98, p-value = 0.04 and rs887205, OR = 0.71, 95% CI = 0.52–0.96, p-value = 0.03, respectively). Two SNPs, miR-34b/34c (rs4938723, HR = 0.57, 95% CI = 0.37–0.89, p-value = 0.01) and miR-206 (rs6920648, HR = 0.77, 95% CI = 0.61–0.97, p-value = 0.02), provided evidence of association with breast cancer survival. Further adjustment for stage resulted in more modest associations with survival (HR = 0.65, 95% CI = 0.42–1.02, p-value = 0.06 and HR = 0.79, 95% CI = 0.62–1.00, p-value = 0.05, respectively).
Our results suggest that germline variation in the 5′ region proximal to pre-miRNA gene sequences may be associated with breast cancer risk among African Americans and with breast cancer-specific survival generally; however, further validation is needed to confirm these findings.
microRNA; breast cancer; germline; single nucleotide polymorphism; risk; survival
In occupational epidemiologic studies, the healthy-worker survivor effect refers to a process that leads to bias in the estimates of an association between cumulative exposure and a health outcome. In these settings, work status acts both as an intermediate and confounding variable, and may violate the positivity assumption (the presence of exposed and unexposed observations in all strata of the confounder). Using Monte Carlo simulation, we assess the degree to which crude, work-status adjusted, and weighted (marginal structural) Cox proportional hazards models are biased in the presence of time-varying confounding and nonpositivity. We simulate data representing time-varying occupational exposure, work status, and mortality. Bias, coverage, and root mean squared error (MSE) were calculated relative to the true marginal exposure effect in a range of scenarios. For a base-case scenario, using crude, adjusted, and weighted Cox models, respectively, the hazard ratio was biased downward 19%, 9%, and 6%; 95% confidence interval coverage was 48%, 85%, and 91%; and root MSE was 0.20, 0.13, and 0.11. Although marginal structural models were less biased in most scenarios studied, neither standard nor marginal structural Cox proportional hazards models fully resolve the bias encountered under conditions of time-varying confounding and nonpositivity.
To examine the association between early HIV viremia and mortality after HIV-associated lymphoma.
Multicenter observational cohort study.
Center for AIDS Research Network of Integrated Clinical Systems cohort.
HIV-infected patients with lymphoma diagnosed between 1996 and 2011, who were alive 6 months after lymphoma diagnosis and with ≥2 HIV RNA values during the 6 months after lymphoma diagnosis.
Cumulative HIV viremia during the 6 months after lymphoma diagnosis, expressed as viremia copy-6-months.
Main outcome measure
All-cause mortality between 6 months and 5 years after lymphoma diagnosis.
Of 224 included patients, 183 (82%) had non-Hodgkin lymphoma (NHL) and 41 (18%) had Hodgkin lymphoma (HL). At lymphoma diagnosis, 105 (47%) patients were on antiretroviral therapy (ART), median CD4 count was 148 cells/µL (IQR 54–322), and 33% had suppressed HIV RNA (<400 copies/mL). In adjusted analyses, mortality was associated with older age [adjusted hazard ratio (AHR) 1.37 per decade increase, 95% CI 1.03–1.83], lymphoma occurrence on ART (AHR 1.63, 95% CI 1.02–2.63), lower CD4 count (AHR 0.75 per 100 cells/µL increase, 95% CI 0.64–0.89), and higher early cumulative viremia (AHR 1.35 per log10 copies × 6-months/mL, 95% CI 1.11–1.65). The detrimental effect of early cumulative viremia was consistent across patient groups defined by ART status, CD4 count, and histology.
Each additional log10 of cumulative HIV RNA exposure during the 6 months after lymphoma diagnosis was associated with a 35% increase in subsequent mortality. These results suggest that early and effective ART during chemotherapy may improve survival.
AIDS; Burkitt lymphoma; diffuse large B-cell lymphoma; HIV; Hodgkin lymphoma; lymphoma; non-Hodgkin lymphoma
Motivated by a previously published study of HIV treatment, we simulated data subject to time-varying confounding affected by prior treatment to examine some finite-sample properties of marginal structural Cox proportional hazards models. We compared (a) unadjusted, (b) regression-adjusted, (c) unstabilized and (d) stabilized marginal structural (inverse probability-of-treatment [IPT] weighted) model estimators of effect in terms of bias, standard error, root mean squared error (MSE) and 95% confidence limit coverage over a range of research scenarios, including relatively small sample sizes and ten study assessments. In the base-case scenario resembling the motivating example, where the true hazard ratio was 0.5, both IPT-weighted analyses were unbiased while crude and adjusted analyses showed substantial bias towards and across the null. Stabilized IPT-weighted analyses remained unbiased across a range of scenarios, including relatively small sample size; however, the standard error was generally smaller in crude and adjusted models. In many cases, unstabilized weighted analysis showed a substantial increase in standard error compared to other approaches. Root MSE was smallest in the IPT-weighted analyses for the base-case scenario. In situations where time-varying confounding affected by prior treatment was absent, IPT-weighted analyses were less precise and therefore had greater root MSE compared with adjusted analyses. The 95% confidence limit coverage was close to nominal for all stabilized IPT-weighted but poor in crude, adjusted, and unstabilized IPT-weighted analysis. Under realistic scenarios, marginal structural Cox proportional hazards models performed according to expectations based on large-sample theory and provided accurate estimates of the hazard ratio.
Bias; Causal inference; Marginal structural models; Monte Carlo study
Alcohol Drinking; HIV Seropositivity; Men who Have Sex with Men; Prospective Studies; Sexual Behavior
The parametric g-formula can be used to contrast the distribution of potential outcomes under arbitrary treatment regimes. Like g-estimation of structural nested models and inverse probability weighting of marginal structural models, the parametric g-formula can appropriately adjust for measured time-varying confounders that are affected by prior treatment. However, there have been few implementations of the parametric g-formula to date. Here, we apply the parametric g-formula to assess the impact of highly active antiretroviral therapy (HAART) on time to AIDS or death in two US-based HIV cohorts including 1,498 participants. These participants contributed approximately 7,300 person-years of follow-up, of which 49% was exposed to HAART, and 382 events occurred; 259 participants were censored due to dropout. Using the parametric g-formula, we estimated that antiretroviral therapy substantially reduces the hazard of AIDS or death (HR = 0.55; 95% confidence limits [CL]: 0.42, 0.71). This estimate was similar to one previously reported using a marginal structural model: 0.54 (95% CL: 0.38, 0.78). The 6.5-year difference in risk of AIDS or death was 13% (95% CL: 8%, 18%). Results were robust to assumptions about the temporal ordering and extent of history modeled for time-varying covariates. The parametric g-formula is a viable alternative to inverse probability weighting of marginal structural models and g-estimation of structural nested models for the analysis of complex longitudinal data.
Cohort study; Confounding; g-formula; HIV/AIDS; Monte Carlo methods
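The Monte Carlo step of the parametric g-formula described above can be sketched as follows. The model coefficients here are assumed, hypothetical values; in a real analysis, the covariate and outcome probabilities come from regressions fit to the observed data:

```python
import random

random.seed(7)

# Hypothetical parametric models for one binary time-varying covariate Z
# and a binary treatment A (coefficients are illustrative assumptions).
def p_covariate(prev_z, a):          # P(Z_t = 1 | Z_{t-1}, A_t)
    return 0.2 + 0.5 * prev_z - 0.1 * a

def p_event(z, a):                   # P(event in interval t | Z_t, A_t)
    return 0.10 + 0.08 * z - 0.05 * a

def g_formula_risk(regime, n=200_000, intervals=2):
    """Monte Carlo cumulative risk under a treatment regime (a function of time)."""
    events = 0
    for _ in range(n):
        z = 0
        for t in range(intervals):
            a = regime(t)
            z = 1 if random.random() < p_covariate(z, a) else 0
            if random.random() < p_event(z, a):
                events += 1
                break
    return events / n

risk_never = g_formula_risk(lambda t: 0)   # never-treat regime
risk_always = g_formula_risk(lambda t: 1)  # always-treat regime
risk_difference = risk_never - risk_always
```

Contrasting the simulated risks under the "always treat" and "never treat" regimes mirrors the way the 6.5-year risk difference above is obtained.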
The joint effects of multiple exposures on an outcome are frequently of interest in epidemiologic research. In 2001, Hernán, Brumback, and Robins (JASA 2001; 96: 440–448) presented methods for estimating the joint effects of multiple time-varying exposures subject to time-varying confounding affected by prior exposure using joint marginal structural models. Nonetheless, the use of these joint models is rare in the applied literature. Minimal uptake of these joint models, in contrast to the now widely used standard marginal structural model, is due in part to a lack of examples demonstrating the method. In this paper, we review the assumptions necessary for unbiased estimation of joint effects as well as the distinction between interaction and effect measure modification. We demonstrate the use of marginal structural models for estimating the joint effects of alcohol consumption and injection drug use on HIV acquisition, using data from 1,525 injection drug users in the AIDS Link to Intravenous Experience cohort study. In the joint model, the hazard ratio (HR) for heavy drinking in the absence of any drug injections was 1.58 (95% confidence interval = 0.67–3.73). The HR for any drug injections in the absence of heavy drinking was 1.78 (1.10–2.89). The HR for heavy drinking and any drug injections was 2.45 (1.45–4.12). The P values for multiplicative and additive interaction were 0.7620 and 0.9200, respectively, indicating a lack of departure from effects that multiply or add. However, we could not rule out interaction on either scale due to imprecision.
Randomized evidence for aspirin in the primary prevention of cardiovascular disease (CVD) among women is limited and suggests at most a modest effect on total CVD. Lack of compliance, however, can bias estimated effects toward the null. We used marginal structural models (MSMs) to estimate the etiologic effect of continuous aspirin use on CVD events among 39,876 apparently healthy female health professionals aged 45 years and older in the Women's Health Study, a randomized trial of 100 mg aspirin every other day versus placebo. As-treated analyses and MSMs controlled for time-varying determinants of aspirin use and CVD. Predictors of aspirin use differed by randomized group and prior use and included medical history, CVD risk factors, and intermediate CVD events. Previously reported intent-to-treat analyses found small, nonsignificant effects of aspirin on total CVD (hazard ratio (HR) = 0.91, 95% confidence interval (CI) = 0.81–1.03) and CVD mortality (HR = 0.95, 95% CI = 0.74–1.22). As-treated analyses were similar for total CVD, with a slight reduction in CVD mortality (HR = 0.88, 95% CI = 0.67–1.16). MSMs, which adjusted for noncompliance, were similar for total CVD (HR = 0.93; 95% CI: 0.81, 1.07) but suggested lower CVD mortality with aspirin use (HR = 0.76; 95% CI: 0.54, 1.08). Adjusting for noncompliance had little impact on the estimated effect of aspirin on total CVD but strengthened the effect on CVD mortality. These results support a limited effect of low-dose aspirin on total CVD in women, but a potential benefit for CVD mortality.
Aspirin; cardiovascular disease; marginal structural model; myocardial infarction; stroke
Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified target population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The target population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the target population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified target population and thereby provides information regarding the generalizability of trial results.
bias; bias (epidemiology); causal inference; external validity; generalizability; randomized trials; standardization
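The standardization step described above can be illustrated by averaging stratum-specific trial effects over the covariate distribution of the target population. The strata and numbers below are hypothetical toys, not the trial data or the CDC estimates used in the paper:

```python
# Direct standardization of stratum-specific trial effects to a target
# population's covariate distribution (all values hypothetical).
trial_effects = {"low_risk": -0.02, "high_risk": -0.10}  # stratum risk differences
target_dist = {"low_risk": 0.70, "high_risk": 0.30}      # P(stratum) in target

standardized_effect = sum(
    trial_effects[s] * target_dist[s] for s in target_dist
)
# A heterogeneous treatment effect combined with a different stratum mix in
# the target population shifts the standardized estimate away from the
# trial's own average effect.
```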
We compared three ad hoc methods to estimate the marginal hazard of incident cancer AIDS in the highly active antiretroviral therapy calendar period (1996–2006) relative to the monotherapy/combination therapy period (1990–1996), accounting for other AIDS events and deaths as competing risks.
Study Design and Setting
Among 1911 HIV+ men from the Multicenter AIDS Cohort Study, 228 developed cancer AIDS and 745 developed competing risks in 14,202 person-years from 1990–2006. Method 1 censored competing risks at the time they occurred, method 2 excluded competing risks, and method 3 censored competing risks at the date of analysis.
The age-, race-, and infection duration–adjusted hazard ratios (HRs) for cancer AIDS were similar for all methods (HR ≈ 0.15). We estimated the bias and confidence interval (CI) coverage of each method with Monte Carlo simulation. On average across 24 scenarios, method 1 produced less biased estimates than methods 2 or 3.
When competing risks are independent of the event of interest, only method 1 produced unbiased estimates of the marginal HR, though independence cannot be verified from the data. When competing risks are dependent, method 1 generally produced the least biased estimates of the marginal HR for the scenarios explored; however, alternative methods may be preferred.
Competing risks; epidemiology; HIV; highly active antiretroviral therapy; cancer
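The three ad hoc methods above can be made concrete as rules for recoding each subject's (time, event) record; a sketch with hypothetical field names and follow-up times:

```python
def recode(subjects, method, admin_time):
    """Recode (time, event) records under three ad hoc competing-risk methods.

    Each subject dict has: time and event for the outcome of interest
    (time is None if a competing event came first), and competing_time
    (time of the competing event, None if none occurred).
    """
    records = []
    for s in subjects:
        if s["competing_time"] is not None:
            if method == 1:       # censor at the time the competing risk occurs
                records.append((s["competing_time"], 0))
            elif method == 2:     # exclude subjects with competing risks
                continue
            elif method == 3:     # censor at the date of analysis
                records.append((admin_time, 0))
        else:
            records.append((s["time"], s["event"]))
    return records

# Hypothetical cohort: one cancer AIDS case at year 5, one subject with a
# competing AIDS event at year 3, analysis date at year 10.
cohort = [
    {"time": 5, "event": 1, "competing_time": None},
    {"time": None, "event": 0, "competing_time": 3},
]
```

Running `recode(cohort, method, admin_time=10)` for methods 1, 2, and 3 yields the three different risk sets contrasted in the simulations above.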
Typical applications of marginal structural time-to-event (e.g., Cox) models have used time on study as the time scale. Here, the authors illustrate use of time on treatment as an alternative time scale. In addition, a method is provided for estimating Kaplan-Meier–type survival curves for marginal structural models. For illustration, the authors estimate the total effect of highly active antiretroviral therapy on time to acquired immunodeficiency syndrome (AIDS) or death in 1,498 US men and women infected with human immunodeficiency virus and followed for 6,556 person-years between 1995 and 2002; 323 incident cases of clinical AIDS and 59 deaths occurred. Of the remaining 1,116 participants, 77% were still under observation at the end of follow-up. By using time on study, the hazard ratio for AIDS or death comparing always with never using highly active antiretroviral therapy from the marginal structural model was 0.52 (95% confidence interval: 0.35, 0.76). By using time on treatment, the analogous hazard ratio was 0.44 (95% confidence interval: 0.32, 0.60). In time-to-event analyses, the choice of time scale may have a meaningful impact on estimates of association and precision. In the present example, use of time on treatment yielded a hazard ratio further from the null and more precise than use of time on study as the time scale.
acquired immunodeficiency syndrome; antiretroviral therapy, highly active; bias (epidemiology); causal inference; confounding factors (epidemiology); proportional hazards model; survival curve; survival time
An estimated 650,000 Americans will have end-stage renal disease (ESRD) by 2010. Young adults with kidney failure often develop progressive chronic kidney disease (CKD) in childhood and adolescence. The Chronic Kidney Disease in Children (CKiD) prospective cohort study of 540 children aged 1 to 16 years with estimated GFR between 30 and 75 ml/min per 1.73 m2 was established to identify novel risk factors for CKD progression; the impact of kidney function decline on growth, cognition, and behavior; and the evolution of cardiovascular disease risk factors. Annually, a physical examination documenting height, weight, Tanner stage, and standardized BP is conducted, and cognitive function, quality of life, nutritional, and behavioral questionnaires are completed by the parent or the child. Samples of serum, plasma, urine, hair, and fingernail clippings are stored in biosamples and genetics repositories. GFR is measured annually for 2 years, then every other year, using iohexol, HPLC creatinine, and cystatin C. Using age, gender, and serial measurements of Tanner stage, height, and creatinine, compared against iohexol GFR, a formula to estimate GFR that improves on traditional pediatric GFR estimating equations when applied longitudinally is expected to be developed. Every other year, echocardiography and ambulatory BP monitoring will assess risk for cardiovascular disease. The primary outcome is the rate of decline of GFR. The CKiD study will be the largest North American multicenter study of pediatric CKD.
Plasma human immunodeficiency virus type 1 (HIV-1) viral load is a valuable tool for HIV research and clinical care but is often used in a noncumulative manner. The authors developed copy-years viremia as a measure of cumulative plasma HIV-1 viral load exposure among 297 HIV seroconverters from the Multicenter AIDS Cohort Study (1984–1996). Men were followed from seroconversion to incident acquired immunodeficiency syndrome (AIDS), death, or the beginning of the combination antiretroviral therapy era (January 1, 1996); the median duration of follow-up was 4.6 years (interquartile range (IQR), 2.7–6.5). The median viral load and level of copy-years viremia over 2,281 semiannual follow-up assessments were 29,628 copies/mL (IQR, 8,547–80,210) and 63,659 copies × years/mL (IQR, 15,935–180,341). A total of 127 men developed AIDS or died, and 170 survived AIDS-free and were censored on January 1, 1996, or lost to follow-up. Rank correlations between copy-years viremia and other measures of viral load were 0.56–0.87. Each log10 increase in copy-years viremia was associated with a 1.70-fold increased hazard (95% confidence interval: 0.94, 3.07) of AIDS or death, independently of infection duration, age, race, CD4 cell count, set-point, peak viral load, or most recent viral load. Copy-years viremia, a novel measure of cumulative viral burden, may provide prognostic information beyond traditional single measures of viremia.
acquired immunodeficiency syndrome; HIV; HIV infections; viral load; viremia
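Copy-years viremia is an area under the viral-load curve over time. A trapezoid-rule sketch follows; the exact interpolation rule is an assumption here, and the measurements are hypothetical, not the cohort's data:

```python
def copy_years_viremia(times_years, viral_loads):
    """Area under the viral-load curve (copies x years / mL), trapezoid rule."""
    total = 0.0
    for i in range(1, len(times_years)):
        dt = times_years[i] - times_years[i - 1]
        total += dt * (viral_loads[i] + viral_loads[i - 1]) / 2
    return total

# Hypothetical semiannual viral loads (copies/mL) over two years of follow-up
print(copy_years_viremia([0, 0.5, 1.0, 1.5, 2.0],
                         [30000, 50000, 40000, 60000, 80000]))  # -> 102500.0
```

Because the measure accumulates over follow-up, two men with the same current viral load can carry very different copy-years viremia, which is what gives it prognostic information beyond single measures.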
To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus–positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.
acquired immunodeficiency syndrome; bias (epidemiology); cohort studies; confounding factors (epidemiology); epidemiologic measurements; HIV; pharmacoepidemiology; selection bias
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
Following the outbreaks of 2009 pandemic H1N1 infection, rapid influenza diagnostic tests have been used to detect H1N1 infection. However, at the time this manuscript was drafted, no meta-analysis had been undertaken to assess their diagnostic accuracy.
The literature was systematically searched to identify studies that reported the performance of rapid tests. Random effects meta-analyses were conducted to summarize the overall performance.
Seventeen studies were selected, with 1879 cases and 3477 non-cases. The overall sensitivity and specificity estimates of the rapid tests were 0.51 (95% CI: 0.41, 0.60) and 0.98 (95% CI: 0.94, 0.99), respectively. Studies reported heterogeneous sensitivity estimates, ranging from 0.11 to 0.88. Assuming a prevalence of 30%, the overall positive and negative predictive values were 0.94 (95% CI: 0.85, 0.98) and 0.82 (95% CI: 0.79, 0.85), respectively. The overall specificities from different manufacturers were comparable, while there were some differences in the overall sensitivity estimates. BinaxNOW had a lower overall sensitivity of 0.39 (95% CI: 0.24, 0.57) compared to all the others (p-value < 0.001), whereas QuickVue had a higher overall sensitivity of 0.57 (95% CI: 0.50, 0.63) compared to all the others (p-value = 0.005).
Rapid tests have high specificity but low sensitivity, and thus their usefulness is limited.
meta-analysis; H1N1; diagnostic tests; rapid tests; sensitivity and specificity
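The predictive values above follow from sensitivity, specificity, and prevalence via Bayes' theorem. A sketch plugging in the pooled point estimates (a rough check, not a re-analysis; the paper pooled predictive values across studies, so the results need not match exactly):

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity, and prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Pooled point estimates reported above, at 30% prevalence
ppv, npv = predictive_values(sens=0.51, spec=0.98, prev=0.30)
# npv comes out close to the reported 0.82; ppv comes out close to (though
# a little below) the reported 0.94, which was pooled directly across studies.
```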
The method of inverse probability weighting (henceforth, weighting) can be used to adjust for measured confounding and selection bias under the four assumptions of consistency, exchangeability, positivity, and no misspecification of the model used to estimate weights. In recent years, several published estimates of the effect of time-varying exposures have been based on weighted estimation of the parameters of marginal structural models because, unlike standard statistical methods, weighting can appropriately adjust for measured time-varying confounders affected by prior exposure. As an example, the authors describe the last three assumptions using the change in viral load due to initiation of antiretroviral therapy among 918 human immunodeficiency virus-infected US men and women followed for a median of 5.8 years between 1996 and 2005. The authors describe possible tradeoffs that an epidemiologist may encounter when attempting to make inferences. For instance, a tradeoff between bias and precision is illustrated as a function of the extent to which confounding is controlled. Weight truncation is presented as an informal and easily implemented method to deal with these tradeoffs. Inverse probability weighting provides a powerful methodological tool that may uncover causal effects of exposures that are otherwise obscured. However, as with all methods, diagnostics and sensitivity analyses are essential for proper use.
bias (epidemiology); causality; confounding factors (epidemiology); probability weighting; regression model
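The stabilized weights and the truncation tradeoff described above can be sketched as follows. The treatment probabilities are hypothetical; in practice, the denominators come from a treatment model fit to measured confounders:

```python
def stabilized_weight(treated, p_marginal, p_conditional):
    """Stabilized IPT weight: marginal over conditional treatment probability."""
    if treated:
        return p_marginal / p_conditional
    return (1 - p_marginal) / (1 - p_conditional)

def truncate(weights, lower=0.1, upper=10.0):
    """Truncate extreme weights, trading a little bias for precision."""
    return [min(max(w, lower), upper) for w in weights]

# Hypothetical subjects: (treated?, P(A=1 | confounders))
subjects = [(True, 0.9), (True, 0.02), (False, 0.5)]
p_marginal = 0.4  # hypothetical P(A=1) ignoring confounders

weights = [stabilized_weight(a, p, p_cond)
           for (a, p_cond), p in zip(subjects, [p_marginal] * len(subjects))]
# The second subject was treated despite a near-zero conditional probability
# (a near-positivity violation), so the weight 0.4 / 0.02 = 20.0 is extreme
# and gets truncated to the upper bound.
truncated = truncate(weights)
```

Truncation bounds are an analyst's choice: tighter bounds reduce variance but move the estimand away from the fully weighted one, which is exactly the bias-precision tradeoff discussed above.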
Lagging exposure information is often undertaken to allow for a latency period in cumulative exposure-disease analyses. The authors first consider bias and confidence interval coverage when using the standard approaches of fitting models under several lag assumptions and selecting the lag that maximizes either the effect estimate or model goodness of fit. Next, they consider bias that occurs when the assumption that the latency period is a fixed constant does not hold. Expressions were derived for bias due to misspecification of lag assumptions, and simulations were conducted. Finally, the authors describe a method for joint estimation of parameters describing an exposure-response association and the latency distribution. Analyses of associations between cumulative asbestos exposure and lung cancer mortality among textile workers illustrate this approach. Selecting the lag that maximizes the effect estimate may lead to bias away from the null; selecting the lag that maximizes model goodness of fit may lead to confidence intervals that are too narrow. These problems tend to increase as the within-person exposure variation diminishes. Lagging exposure assignment by a constant will lead to bias toward the null if the distribution of latency periods is not a fixed constant. Direct estimation of latency periods can minimize bias and improve confidence interval coverage.
asbestos; cohort studies; latency; neoplasms; survival analysis
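Lagging exposure assignment by a fixed constant, as discussed above, can be sketched as follows (illustrative values; annual exposure increments are hypothetical):

```python
def lagged_cumulative(annual_exposures, lag):
    """Cumulative exposure at each year, ignoring the most recent `lag` years.

    annual_exposures[t] is the exposure accrued in year t.
    """
    return [
        sum(annual_exposures[: max(0, t + 1 - lag)])
        for t in range(len(annual_exposures))
    ]

# With a 2-year lag, exposure accrued in the most recent 2 years does not
# yet count toward the cumulative tally used in the exposure-response model.
print(lagged_cumulative([1, 2, 3, 4, 5], lag=2))  # -> [0, 0, 1, 3, 6]
```

Fitting the model under several values of `lag` and picking the one that maximizes the effect estimate or the goodness of fit is exactly the standard practice whose bias properties are examined above.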