Results 1-25 (30)

1.  Semiparametric tests for sufficient cause interaction 
A sufficient cause interaction between two exposures signals the presence of individuals for whom the outcome would occur only under certain values of the two exposures. When the outcome is dichotomous and all exposures are categorical, then under certain no-confounding assumptions, empirical conditions for sufficient cause interactions can be constructed based on the sign of linear contrasts of conditional outcome probabilities between differently exposed subgroups, given confounders. It is argued that logistic regression models are unsatisfactory for evaluating such contrasts, and that Bernoulli regression models with linear link are prone to misspecification. We therefore develop semiparametric tests for sufficient cause interactions under models which postulate probability contrasts in terms of a finite-dimensional parameter, but which are otherwise unspecified. Estimation is often not feasible in these models because it would require nonparametric estimation of auxiliary conditional expectations given high-dimensional variables. We therefore develop ‘multiply robust tests’ under a union model that assumes at least one of several working submodels holds. In the special case of a randomized experiment or a family-based genetic study in which the joint exposure distribution is known by design or Mendelian inheritance, the procedure leads to asymptotically distribution-free tests of the null hypothesis of no sufficient cause interaction.
doi:10.1111/j.1467-9868.2011.01011.x
PMCID: PMC4280915  PMID: 25558182
Double robustness; Effect modification; Gene-environment interaction; Gene-gene interaction; Semiparametric inference; Sufficient cause; Synergism
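As a concrete instance of the empirical conditions this entry refers to: for binary exposures and counterfactual outcome probabilities written p_ab = P{Y(a,b) = 1} (notation assumed here, not taken from the paper), the standard result can be stated as follows.

```latex
% A sufficient cause interaction means some individual has
% Y(1,1) = 1 but Y(1,0) = Y(0,1) = 0. If no such individual exists,
% then Y(1,1) <= Y(1,0) + Y(0,1) for everyone, hence
\[
  p_{11} - p_{10} - p_{01} > 0
  \quad\Longrightarrow\quad \text{a sufficient cause interaction is present;}
\]
% under monotonicity of both exposure effects, the weaker
% superadditivity condition suffices:
\[
  p_{11} - p_{10} - p_{01} + p_{00} > 0 .
\]
```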
2.  Stochastic counterfactuals and stochastic sufficient causes 
Statistica Sinica  2012;22(1):379-392.
Most work in causal inference concerns deterministic counterfactuals; the literature on stochastic counterfactuals is small. In the stochastic counterfactual setting, the outcome for each individual under each possible set of exposures follows a probability distribution so that for any given exposure combination, outcomes vary not only between individuals but also probabilistically for each particular individual. The deterministic sufficient cause framework supplements the deterministic counterfactual framework by allowing for the representation of counterfactual outcomes in terms of sufficient causes or causal mechanisms. In the deterministic sufficient cause framework it is possible to test for the joint presence of two causes in the same causal mechanism, referred to as a sufficient cause interaction. In this paper, these ideas are extended to the setting of stochastic counterfactuals and stochastic sufficient causes. Formal definitions are given for a stochastic sufficient cause framework. It is shown that the empirical conditions that suffice to conclude the presence of a sufficient cause interaction in the deterministic sufficient cause framework suffice also to conclude the presence of a sufficient cause interaction in the stochastic sufficient cause framework. Two examples from the genetics literature, in which there is evidence that sufficient cause interactions are present, are discussed in light of the results in this paper.
doi:10.5705/ss.2008.186
PMCID: PMC4249711  PMID: 25473251
Causal inference; Interaction; Stochastic counterfactual; Sufficient cause; Synergism
3.  Causal directed acyclic graphs and the direction of unmeasured confounding bias 
Epidemiology (Cambridge, Mass.)  2008;19(5):720-728.
We present results that allow the researcher in certain cases to determine the direction of the bias that arises when control for confounding is inadequate. The results are given within the context of the directed acyclic graph causal framework and are stated in terms of signed edges. Rigorous definitions for signed edges are provided. We describe cases in which intuition concerning signed edges fails, and we characterize the directed acyclic graphs that researchers can use to draw conclusions about the sign of the bias of unmeasured confounding. If there is only one unmeasured confounding variable on the graph, then non-increasing or non-decreasing average causal effects suffice to draw conclusions about the direction of the bias. When there is more than one unmeasured confounding variable, non-increasing and non-decreasing average causal effects can be used to draw conclusions only if the various unmeasured confounding variables are independent of one another conditional on the measured covariates. When this conditional independence property does not hold, stronger notions of monotonicity are needed to draw conclusions about the direction of the bias.
doi:10.1097/EDE.0b013e3181810e29
PMCID: PMC4242711  PMID: 18633331
4.  Signed directed acyclic graphs for causal inference 
Summary
Formal rules governing signed edges on causal directed acyclic graphs are described in this paper and it is shown how these rules can be useful in reasoning about causality. Specifically, the notions of a monotonic effect, a weak monotonic effect and a signed edge are introduced. Results are developed relating these monotonic effects and signed edges to the sign of the causal effect of an intervention in the presence of intermediate variables. The incorporation of signed edges into the directed acyclic graph causal framework furthermore allows for the development of rules governing the relationship between monotonic effects and the sign of the covariance between two variables. It is shown that, when certain assumptions about monotonic effects can be made, these results can be used to draw conclusions about the presence of causal effects even when data are missing on confounding variables.
doi:10.1111/j.1467-9868.2009.00728.x
PMCID: PMC4239133  PMID: 25419168
Bias; Causal inference; Confounding; Directed acyclic graphs; Structural equations
5.  On weighting approaches for missing data 
We review the class of inverse probability weighting (IPW) approaches for the analysis of missing data under various missing data patterns and mechanisms. The IPW methods rely on the intuitive idea of creating a pseudo-population of weighted copies of the complete cases to remove selection bias introduced by the missing data. However, different weighting approaches are required depending on the missing data pattern and mechanism. We begin with a uniform missing data pattern (i.e., a scalar missing indicator indicating whether or not the full data is observed) to motivate the approach. We then generalize to more complex settings. Our goal is to provide a conceptual overview of existing IPW approaches and illustrate the connections and differences among these approaches.
doi:10.1177/0962280211403597
PMCID: PMC3998729  PMID: 21705435
missing data; inverse probability weighting; missing at random; missing not at random; monotone missing; non-monotone missing
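A minimal sketch of the pseudo-population idea described in this entry, for the uniform missing-data pattern with a single covariate; the data, response model, and coefficients below are illustrative, not the paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                     # fully observed covariate
y = 2.0 + 1.5 * x + rng.normal(size=n)     # outcome, missing for some subjects
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))   # MAR: response depends on x only
r = rng.binomial(1, p_obs)                 # response indicator (1 = observed)

# Step 1: model P(R = 1 | X) using everyone (R and X are always observed).
X = x.reshape(-1, 1)
pi_hat = LogisticRegression().fit(X, r).predict_proba(X)[:, 1]

# Step 2: weight each complete case by 1 / pi_hat, creating a pseudo-
# population in which the selection induced by missingness is removed.
ipw_mean = np.sum(r * y / pi_hat) / np.sum(r / pi_hat)  # normalized (Hajek) form
cc_mean = y[r == 1].mean()                 # naive complete-case mean

print(f"IPW estimate:    {ipw_mean:.3f}")  # close to the true mean, 2.0
print(f"Complete cases:  {cc_mean:.3f}")   # biased upward under this MAR model
```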
6.  Randomized trials analyzed as observational studies 
Annals of internal medicine  2013;159(8).
doi:10.7326/0003-4819-159-8-201310150-00709
PMCID: PMC3860874  PMID: 24018844
7.  Structural Nested Cumulative Failure Time Models to Estimate the Effects of Interventions 
Journal of the American Statistical Association  2012;107(499).
In the presence of time-varying confounders affected by prior treatment, standard statistical methods for failure time analysis may be biased. Methods that correctly adjust for this type of covariate include the parametric g-formula, inverse probability weighted estimation of marginal structural Cox proportional hazards models, and g-estimation of structural nested accelerated failure time models. In this article, we propose a novel method to estimate the causal effect of a time-dependent treatment on failure in the presence of informative right-censoring and time-dependent confounders that may be affected by past treatment: g-estimation of structural nested cumulative failure time models (SNCFTMs). An SNCFTM considers the conditional effect of a final treatment at time m on the outcome at each later time k by modeling the ratio of two counterfactual cumulative risks at time k under treatment regimes that differ only at time m. Inverse probability weights are used to adjust for informative censoring. We also present a procedure that, under certain “no-interaction” conditions, uses the g-estimates of the model parameters to calculate unconditional cumulative risks under nondynamic (static) treatment regimes. The procedure is illustrated with an example using data from a longitudinal cohort study, in which the “treatments” are healthy behaviors and the outcome is coronary heart disease.
doi:10.1080/01621459.2012.682532
PMCID: PMC3860902  PMID: 24347749
Causal inference; Coronary heart disease; Epidemiology; G-estimation; Inverse probability weighting
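One way to write the SNCFTM contrast this abstract describes, with notation assumed here rather than quoted from the paper: Y_k(ā_m, 0) denotes the counterfactual outcome at time k under the observed treatment history through m and no treatment thereafter, and L̄_m the covariate and treatment history through m.

```latex
% Ratio of counterfactual cumulative risks at time k (k > m) under
% regimes that differ only in the treatment given at time m:
\[
  \frac{E\left[\,Y_k(\bar a_m, 0) \mid \bar L_m, \bar A_m = \bar a_m\,\right]}
       {E\left[\,Y_k(\bar a_{m-1}, 0) \mid \bar L_m, \bar A_m = \bar a_m\,\right]}
  = \exp\!\left\{\gamma\!\left(k, m, \bar L_m, \bar a_m; \psi\right)\right\},
\]
% with gamma = 0 whenever a_m = 0, so psi = 0 encodes the null of
% no treatment effect.
```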
8.  Comparative effectiveness of dynamic treatment regimes: an application of the parametric g-formula 
Statistics in biosciences  2011;3(1):119-143.
Ideally, randomized trials would be used to compare the long-term effectiveness of dynamic treatment regimes on clinically relevant outcomes. However, because randomized trials are not always feasible or timely, we often must rely on observational data to compare dynamic treatment regimes. An example of a dynamic treatment regime is “start combined antiretroviral therapy (cART) within 6 months of CD4 cell count first dropping below x cells/mm3 or diagnosis of an AIDS-defining illness, whichever happens first” where x can take values between 200 and 500. Recently, Cain et al (2011) used inverse probability (IP) weighting of dynamic marginal structural models to find the x that minimizes 5-year mortality risk under similar dynamic regimes using observational data. Unlike standard methods, IP weighting can appropriately adjust for measured time-varying confounders (e.g., CD4 cell count, viral load) that are affected by prior treatment. Here we describe an alternative method to IP weighting for comparing the effectiveness of dynamic cART regimes: the parametric g-formula. The parametric g-formula naturally handles dynamic regimes and, like IP weighting, can appropriately adjust for measured time-varying confounders. However, estimators based on the parametric g-formula are more efficient than IP weighted estimators. This is often at the expense of more parametric assumptions. Here we describe how to use the parametric g-formula to estimate risk by the end of a user-specified follow-up period under dynamic treatment regimes. We describe an application of this method to answer the “when to start” question using data from the HIV-CAUSAL Collaboration.
doi:10.1007/s12561-011-9040-7
PMCID: PMC3769803  PMID: 24039638
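A toy Monte Carlo sketch of the parametric g-formula algorithm this entry describes: simulate covariates forward under fitted models, assign treatment by the dynamic rule, and accumulate risk. The conditional models below are hand-specified stand-ins for regressions that would be fit to data, and every threshold and coefficient is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_risk(x_threshold, n_sim=100_000, k_max=10):
    """Monte Carlo risk by time k_max under 'start cART once CD4 < x_threshold'."""
    cd4 = rng.normal(450.0, 100.0, n_sim)       # baseline covariate draw
    on_tx = np.zeros(n_sim, dtype=bool)         # treatment history (never stop)
    event = np.zeros(n_sim, dtype=bool)
    for _ in range(k_max):
        at_risk = ~event
        on_tx |= cd4 < x_threshold              # dynamic rule sets treatment
        # stand-in covariate model: CD4 falls off treatment, recovers on it
        cd4 = cd4 + np.where(on_tx, 20.0, -30.0) + rng.normal(0.0, 15.0, n_sim)
        # stand-in outcome model: per-period hazard rises as CD4 falls
        hazard = 0.01 + 0.15 / (1.0 + np.exp((cd4 - 100.0) / 50.0))
        event |= at_risk & (rng.random(n_sim) < hazard)
    return event.mean()

for x in (200, 350, 500):
    print(f"threshold {x}: simulated 10-period risk = {simulate_risk(x):.3f}")
```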
9.  Observational studies analyzed like randomized experiments: an application to postmenopausal hormone therapy and coronary heart disease 
Epidemiology (Cambridge, Mass.)  2008;19(6):766-779.
Background
The Women’s Health Initiative randomized trial found greater coronary heart disease (CHD) risk in women assigned to estrogen/progestin therapy than in those assigned to placebo. Observational studies had previously suggested reduced CHD risk in hormone users.
Methods
Using data from the observational Nurses’ Health Study, we emulated the design and intention-to-treat (ITT) analysis of the randomized trial. The observational study was conceptualized as a sequence of “trials” in which eligible women were classified as initiators or noninitiators of estrogen/progestin therapy.
Results
The ITT hazard ratios (95% confidence intervals) of CHD for initiators versus noninitiators were 1.42 (0.92 – 2.20) for the first 2 years, and 0.96 (0.78 – 1.18) for the entire follow-up. The ITT hazard ratios were 0.84 (0.61 – 1.14) in women within 10 years of menopause, and 1.12 (0.84 – 1.48) in the others (P value for interaction = 0.08). These ITT estimates are similar to those from the Women’s Health Initiative. Because the ITT approach causes severe treatment misclassification, we also estimated adherence-adjusted effects by inverse probability weighting. The hazard ratios were 1.61 (0.97 – 2.66) for the first 2 years, and 0.98 (0.66 – 1.49) for the entire follow-up. The hazard ratios were 0.54 (0.19 – 1.51) in women within 10 years after menopause, and 1.20 (0.78 – 1.84) in others (P value for interaction = 0.01). Finally, we also present comparisons between these estimates and previously reported NHS estimates.
Conclusions
Our findings suggest that the discrepancies between the Women’s Health Initiative and Nurses’ Health Study ITT estimates could be largely explained by differences in the distribution of time since menopause and length of follow-up.
doi:10.1097/EDE.0b013e3181875e61
PMCID: PMC3731075  PMID: 18854702
10.  Improved double-robust estimation in missing data and causal inference models 
Biometrika  2012;99(2):439-456.
Recently proposed double-robust estimators for a population mean from incomplete data and for a finite number of counterfactual means can have much higher efficiency than the usual double-robust estimators under misspecification of the outcome model. In this paper, we derive a new class of double-robust estimators for the parameters of regression models with incomplete cross-sectional or longitudinal data, and of marginal structural mean models for cross-sectional data with similar efficiency properties. Unlike the recent proposals, our estimators solve outcome regression estimating equations. In a simulation study, the new estimator shows improvements in variance relative to the standard double-robust estimator that are in agreement with those suggested by asymptotic theory.
doi:10.1093/biomet/ass013
PMCID: PMC3635709  PMID: 23843666
Drop-out; Marginal structural model; Missing at random
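For reference, a minimal sketch of the standard double-robust (AIPW) estimator of a population mean under missingness at random, i.e., the baseline that the improved estimators in this paper are compared against; data and model forms are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
r = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 + x))))   # MAR response indicator

X = x.reshape(-1, 1)
pi = LogisticRegression().fit(X, r).predict_proba(X)[:, 1]   # response model
m = LinearRegression().fit(X[r == 1], y[r == 1]).predict(X)  # outcome model

# AIPW: outcome-model prediction plus an inverse-probability-weighted
# residual correction; consistent if *either* working model is correct.
aipw = np.mean(m + r * (y - m) / pi)
print(f"AIPW estimate of E[Y]: {aipw:.3f}  (truth is 1.0)")
```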
11.  Pandemic Influenza: Risk of Multiple Introductions and the Need to Prepare for Them 
PLoS Medicine  2006;3(6):e135.
Containing an emerging influenza H5N1 pandemic in its earliest stages may be feasible, but containing multiple introductions of a pandemic-capable strain would be more difficult. Mills and colleagues argue that multiple introductions are likely, especially if risk of a pandemic is high.
doi:10.1371/journal.pmed.0030135
PMCID: PMC1370924  PMID: 17214503
12.  Relation between three classes of structural models for the effect of a time-varying exposure on survival 
Lifetime data analysis  2009;16(1):71-84.
Standard methods for estimating the effect of a time-varying exposure on survival may be biased in the presence of time-dependent confounders themselves affected by prior exposure. This problem can be overcome by inverse probability weighted estimation of Marginal Structural Cox Models (Cox MSM), g-estimation of Structural Nested Accelerated Failure Time Models (SNAFTM) and g-estimation of Structural Nested Cumulative Failure Time Models (SNCFTM). In this paper, we describe a data generation mechanism that approximately satisfies a Cox MSM, an SNAFTM and an SNCFTM. Besides providing a procedure for data simulation, our formal description of a data generation mechanism that satisfies all three models allows one to assess the relative advantages and disadvantages of each modeling approach. A simulation study is also presented to compare effect estimates across the three models.
doi:10.1007/s10985-009-9135-3
PMCID: PMC3635680  PMID: 19894116
13.  On doubly robust estimation in a semiparametric odds ratio model 
Biometrika  2009;97(1):171-180.
We consider the doubly robust estimation of the parameters in a semiparametric conditional odds ratio model. Our estimators are consistent and asymptotically normal in a union model that assumes either of two variation independent baseline functions is correctly modelled but not necessarily both. Furthermore, when either outcome has finite support, our estimators are semiparametric efficient in the union model at the intersection submodel where both nuisance functions models are correct. For general outcomes, we obtain doubly robust estimators that are nearly efficient at the intersection submodel. Our methods are easy to implement as they do not require the use of the alternating conditional expectations algorithm of Chen (2007).
doi:10.1093/biomet/asp062
PMCID: PMC3412601  PMID: 23049119
Doubly robust; Generalized odds ratio; Locally efficient; Semiparametric logistic regression
14.  Credible Mendelian Randomization Studies: Approaches for Evaluating the Instrumental Variable Assumptions 
American Journal of Epidemiology  2012;175(4):332-339.
As with other instrumental variable (IV) analyses, Mendelian randomization (MR) studies rest on strong assumptions. These assumptions are not routinely systematically evaluated in MR applications, although such evaluation could add to the credibility of MR analyses. In this article, the authors present several methods that are useful for evaluating the validity of an MR study. They apply these methods to a recent MR study that used fat mass and obesity-associated (FTO) genotype as an IV to estimate the effect of obesity on mental disorder. These approaches to evaluating assumptions for valid IV analyses are not fail-safe, in that there are situations where the approaches might either fail to identify a biased IV or inappropriately suggest that a valid IV is biased. Therefore, the authors describe the assumptions upon which the IV assessments rely. The methods they describe are relevant to any IV analysis, regardless of whether it is based on a genetic IV or other possible sources of exogenous variation. Methods that assess the IV assumptions are generally not conclusive, but routinely applying such methods is nonetheless likely to improve the scientific contributions of MR studies.
doi:10.1093/aje/kwr323
PMCID: PMC3366596  PMID: 22247045
causality; confounding factors; epidemiologic methods; instrumental variables; Mendelian randomization analysis
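For context, the core IV assumptions the abstract refers to are relevance (the instrument Z is associated with the exposure X), exchangeability (Z shares no unmeasured cause with the outcome Y), and the exclusion restriction (Z affects Y only through X). Under these plus linearity, the generic ratio (Wald) estimand is the following; this is the standard linear IV formula, not one specific to this paper.

```latex
\[
  \beta_{IV} \;=\; \frac{\operatorname{Cov}(Z, Y)}{\operatorname{Cov}(Z, X)} ,
\]
% estimated by replacing the covariances with their sample analogues;
% in Mendelian randomization, Z is the genotype (e.g., FTO variant).
```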
15.  Higher Order Inference On A Treatment Effect Under Low Regularity Conditions 
Statistics & probability letters  2011;81(7):821-828.
We describe a novel approach to nonparametric point and interval estimation of a treatment effect in the presence of many continuous confounders. We show that the problem can be reduced to that of point and interval estimation of the expected conditional covariance between treatment and response given the confounders. Our estimators are higher order U-statistics. The approach applies equally to the regular case where the expected conditional covariance is root-n estimable and to the irregular case where slower nonparametric rates prevail.
doi:10.1016/j.spl.2011.02.030
PMCID: PMC3088168  PMID: 21552339
Minimax; U-statistics; Influence functions; Nonparametric; Semi-parametric; Robust Inference
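The functional the abstract reduces the problem to can be written as follows (notation assumed here): with treatment A, response Y, and confounders X,

```latex
\[
  \psi \;=\; E\left[\operatorname{Cov}(A, Y \mid X)\right]
       \;=\; E\left[\{A - E(A \mid X)\}\{Y - E(Y \mid X)\}\right],
\]
% so estimating psi involves the two nuisance regressions E(A|X) and
% E(Y|X); psi is root-n estimable when these are smooth enough (the
% "regular" case) and only at slower rates otherwise (the "irregular" case).
```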
16.  Time-dependent cross ratio estimation for bivariate failure times 
Biometrika  2011;98(2):341-354.
In the analysis of bivariate correlated failure time data, it is important to measure the strength of association among the correlated failure times. One commonly used measure is the cross ratio. Motivated by Cox’s partial likelihood idea, we propose a novel parametric cross ratio estimator that is a flexible continuous function of both components of the bivariate survival times. We show that the proposed estimator is consistent and asymptotically normal. Its finite sample performance is examined using simulation studies, and it is applied to the Australian twin data.
doi:10.1093/biomet/asr005
PMCID: PMC3376771  PMID: 22822258
Correlated survival times; Empirical process theory; Local dependency measure; Pseudo-partial likelihood
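The cross ratio mentioned in the abstract is usually defined from the joint survivor function S(t1, t2) of the bivariate failure times; this is the standard definition, assumed here rather than quoted from the paper.

```latex
\[
  \theta(t_1, t_2)
  = \frac{S(t_1, t_2)\; \partial^2 S(t_1, t_2) / \partial t_1 \partial t_2}
         {\{\partial S(t_1, t_2)/\partial t_1\}\,\{\partial S(t_1, t_2)/\partial t_2\}} ,
\]
% interpretable as the hazard of T1 at t1 given T2 = t2 divided by the
% hazard of T1 at t1 given T2 > t2; theta = 1 under independence. The
% paper models theta as a smooth parametric function of both t1 and t2.
```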
17.  Estimating absolute risks in the presence of nonadherence: An application to a follow-up study with baseline randomization 
Epidemiology (Cambridge, Mass.)  2010;21(4):528-539.
The intention-to-treat (ITT) analysis provides a valid test of the null hypothesis and naturally results in both absolute and relative measures of risk. However, this analytic approach may miss the occurrence of serious adverse effects that would have been detected under full adherence to the assigned treatment. Inverse probability weighting of marginal structural models has been used to adjust for nonadherence, but most studies have provided only relative measures of risk. In this study, we used inverse probability weighting to estimate both absolute and relative measures of risk of invasive breast cancer under full adherence to the assigned treatment in the Women’s Health Initiative estrogen-plus-progestin trial. In contrast to an ITT hazard ratio (HR) of 1.25 (95% confidence interval [CI] = 1.01 to 1.54), the HR for 8-year continuous estrogen-plus-progestin use versus no use was 1.68 (1.24 to 2.28). The estimated risk difference (cases/100 women) at year 8 was 0.83 (−0.03 to 1.69) in the ITT analysis, compared with 1.44 (0.52 to 2.37) in the adherence-adjusted analysis. Results were robust across various dose-response models. We also compared the dynamic treatment regime “take hormone therapy until certain adverse events become apparent, then stop taking hormone therapy” with no use (HR = 1.64; 95% CI = 1.24 to 2.18). The methods described here are also applicable to observational studies with time-varying treatments.
doi:10.1097/EDE.0b013e3181df1b69
PMCID: PMC3315056  PMID: 20526200
18.  Multiply robust inference for statistical interactions 
A primary focus of an increasing number of scientific studies is to determine whether two exposures interact in the effect that they produce on an outcome of interest. Interaction is commonly assessed by fitting regression models in which the linear predictor includes the product between those exposures. When the main interest lies in the interaction, this approach is not entirely satisfactory because it is prone to (possibly severe) bias when the main exposure effects or the association between outcome and extraneous factors are misspecified. In this article, we therefore consider conditional mean models with identity or log link which postulate the statistical interaction in terms of a finite-dimensional parameter, but which are otherwise unspecified. We show that estimation of the interaction parameter is often not feasible in this model because it would require nonparametric estimation of auxiliary conditional expectations given high-dimensional variables. We thus consider ‘multiply robust estimation’ under a union model that assumes at least one of several working submodels holds. Our approach is novel in that it makes use of information on the joint distribution of the exposures conditional on the extraneous factors in making inferences about the interaction parameter of interest. In the special case of a randomized trial or a family-based genetic study in which the joint exposure distribution is known by design or by Mendelian inheritance, the resulting multiply robust procedure leads to asymptotically distribution-free tests of the null hypothesis of no interaction on an additive scale. We illustrate the methods via simulation and the analysis of a randomized follow-up study.
doi:10.1198/016214508000001084
PMCID: PMC3097121  PMID: 21603124
Double robustness; Gene-environment interaction; Gene-gene interaction; Longitudinal data; Semiparametric inference
19.  Effectiveness of Early Antiretroviral Therapy Initiation to Improve Survival among HIV-Infected Adults with Tuberculosis: A Retrospective Cohort Study 
PLoS Medicine  2011;8(5):e1001029.
Molly Franke, Megan Murray, and colleagues report that early cART reduces mortality among HIV-infected adults with tuberculosis and improves retention in care, regardless of CD4 count.
Background
Randomized clinical trials examining the optimal time to initiate combination antiretroviral therapy (cART) in HIV-infected adults with sputum smear-positive tuberculosis (TB) disease have demonstrated improved survival among those who initiate cART earlier during TB treatment. Since these trials incorporated rigorous diagnostic criteria, it is unclear whether these results are generalizable to the vast majority of HIV-infected patients with TB, for whom standard diagnostic tools are unavailable. We aimed to examine whether early cART initiation improved survival among HIV-infected adults who were diagnosed with TB in a clinical setting.
Methods and Findings
We retrospectively reviewed charts for 308 HIV-infected adults in Rwanda with a CD4 count ≤350 cells/µl and a TB diagnosis. We estimated the effect of cART on survival using marginal structural models and simulated 2-y survival curves for the cohort under different cART strategies: start cART 15, 30, 60, or 180 d after TB treatment, or never start cART. We conducted secondary analyses with composite endpoints of (1) death, default, or loss to follow-up and (2) death, hospitalization, or serious opportunistic infection. Early cART initiation led to a survival benefit that was most marked for individuals with low CD4 counts. For individuals with CD4 counts of 50 or 100 cells/µl, cART initiation at day 15 yielded 2-y survival probabilities of 0.82 (95% confidence interval: [0.76, 0.89]) and 0.86 (95% confidence interval: [0.80, 0.92]), respectively. These were significantly higher than the probabilities computed under later start times. Results were similar for the endpoint of death, hospitalization, or serious opportunistic infection. cART initiation at day 15 versus later times was protective against death, default, or loss to follow-up, regardless of CD4 count. As with any observational study, the validity of these findings assumes that biases from residual confounding by unmeasured factors and from model misspecification are small.
Conclusions
Early cART reduced mortality among individuals with low CD4 counts and improved retention in care, regardless of CD4 count.
Editors' Summary
Background
HIV infection has exacerbated the global tuberculosis (TB) epidemic, especially in sub-Saharan Africa, where in some countries 70% of people with TB are also HIV positive—a condition commonly described as HIV/TB co-infection. The management of patients with HIV/TB co-infection is a major public health concern.
There is relatively little good evidence on the best time to initiate combination antiretroviral therapy (cART) in adults with HIV/TB co-infection. Clinicians sometimes defer cART in individuals initiating TB treatment because of concerns about complications (such as immune reconstitution inflammatory syndrome) and the risk of reduced adherence if patients have to remember to take two sets of pills. However, starting cART later in those patients who are infected with both HIV and TB can result in potentially avoidable deaths during therapy.
Why Was This Study Done?
Several randomized controlled trials (RCTs) have been carried out, and the results of three of these studies suggest that, among individuals with severe immune suppression, early initiation of cART (two to four weeks after the start of TB treatment) leads to better survival than later cART initiation (two to three months after the start of TB treatment). These results were reported in abstract form, but the full papers have not yet been published. One problem with RCTs is that they are carried out under controlled conditions that may not reflect conditions in varied settings around the world. Therefore, observational studies that examine how effective a treatment is in routine clinical conditions can provide information that complements that obtained during clinical trials. In this study, the researchers aimed to confirm the results from RCTs among a cohort of adult patients with HIV/TB co-infection in Rwanda, diagnosed under routine program conditions and using routinely collected clinical data. The researchers also wanted to investigate whether early cART initiation reduced the risk of other adverse outcomes, including treatment default and loss to follow-up.
What Did the Researchers Do and Find?
The researchers retrospectively reviewed the charts and other program records of 308 patients with HIV who had CD4 counts ≤350 cells/µl, were aged 15 years or more, had never previously taken cART, and received their first TB treatment at one of five cART sites (two urban, three rural) in Rwanda between January 2004 and February 2007. From these records, the researchers collected baseline demographic and clinical variables and relevant clinical follow-up data. They then used these data to estimate the effect of cART on survival using sophisticated statistical models that calculated the effects of initiating cART at 15, 30, 60, or 180 d after the start of TB treatment, or not at all.
The researchers then conducted a further analysis to assess combined outcomes of (1) death, default, or loss to follow-up, and (2) death, hospitalization due to any cause, or occurrence of severe opportunistic infections, such as Kaposi's sarcoma. The researchers used the resulting multivariable model to estimate survival probabilities for each individual, based on his/her baseline characteristics.
The researchers found that, when they set their model to initial CD4 cell counts of 50 and 100 cells/µl and starting cART at day 15, mean survival probabilities at two years were 0.82 and 0.86, respectively, statistically significantly higher than the survival probabilities calculated for each of the other treatment strategies, where cART was started later. They observed a similar pattern for the combined outcome of death, hospitalization, or serious opportunistic infection. In addition, two-year outcomes for death or loss to follow-up were also improved with early cART, regardless of CD4 count at treatment initiation.
What Do These Findings Mean?
These findings show that in a real-world program setting, starting cART 15 d after the start of TB treatment is more beneficial (measured by differences in survival probabilities) among patients with HIV/TB co-infection who have CD4 cell counts ≤100 cells/µl than starting later. Early cART initiation may also increase retention in care for all individuals with CD4 cell counts ≤350 cells/µl.
As the outcomes of this modeling study are based on data from a retrospective observational study, the biases associated with use of these data must be carefully addressed. However, the results support the recommendation to initiate cART after 15 d of TB treatment for patients with CD4 cell counts ≤100 cells/µl, and they support using TB treatment as an opportunity to refer and retain HIV-infected individuals in care, regardless of CD4 cell count.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001029.
Information is available on HIV/TB co-infection from the World Health Organization, the US Centers for Disease Control and Prevention, and the International AIDS Society
doi:10.1371/journal.pmed.1001029
PMCID: PMC3086874  PMID: 21559327
20.  When to Start Treatment? A Systematic Approach to the Comparison of Dynamic Regimes Using Observational Data* 
Dynamic treatment regimes are the type of regime most commonly used in clinical practice. For example, physicians may initiate combined antiretroviral therapy the first time an individual’s recorded CD4 cell count drops below either 500 cells/mm3 or 350 cells/mm3. This paper describes an approach for using observational data to emulate randomized clinical trials that compare dynamic regimes of the form “initiate treatment within a certain time period of some time-varying covariate first crossing a particular threshold.” We applied this method to data from the French Hospital database on HIV (FHDH-ANRS CO4), an observational study of HIV-infected patients, in order to compare dynamic regimes of the form “initiate treatment within m months after the recorded CD4 cell count first drops below x cells/mm3” where x takes values from 200 to 500 in increments of 10 and m takes values 0 or 3. We describe the method in the context of this example and discuss some complications that arise in emulating a randomized experiment using observational data.
doi:10.2202/1557-4679.1212
PMCID: PMC3406513  PMID: 21972433
dynamic treatment regimes; marginal structural models; HIV infection; antiretroviral therapy
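A schematic sketch of the kind of bookkeeping such an emulation requires: follow each subject's records under the regime "initiate treatment within m months of CD4 first dropping below x" and stop (artificially censor) at the first deviation. The data layout, names, and grace-period rule below are illustrative, not the paper's code; a full analysis would also apply inverse-probability weights for the artificial censoring.

```python
import pandas as pd

def censor_clone(df, x, m):
    """Keep a subject's rows while they remain consistent with the regime
    'initiate treatment within m months of CD4 first dropping below x'."""
    below_since = None
    kept = []
    for _, row in df.sort_values("month").iterrows():
        if below_since is None and row.cd4 < x:
            below_since = row.month        # grace period opens here
        if row.on_tx and below_since is None:
            break                          # initiated before CD4 < x: deviation
        if below_since is not None and row.month > below_since + m and not row.on_tx:
            break                          # still untreated after the grace period
        kept.append(row)
    return pd.DataFrame(kept)

# Toy follow-up: CD4 first drops below 350 at month 3; cART starts at month 5,
# within the m = 3 month grace period, so no row is censored for this regime.
person = pd.DataFrame({"month": list(range(8)),
                       "cd4": [500, 450, 400, 340, 330, 320, 310, 300],
                       "on_tx": [0, 0, 0, 0, 0, 1, 1, 1]})
print(censor_clone(person, x=350, m=3))
```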
21.  Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part II: Proofs of Results* 
In this companion article to “Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content” [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.
doi:10.2202/1557-4679.1242
PMCID: PMC2854089  PMID: 20405047
dynamic treatment regime; double-robust; inverse probability weighted; marginal structural model; optimal treatment regime; causality
22.  Marginal Structural Models for Sufficient Cause Interactions 
American Journal of Epidemiology  2010;171(4):506-514.
Sufficient cause interactions concern cases in which there is a particular causal mechanism for some outcome that requires the presence of 2 or more specific causes to operate. Empirical conditions have been derived to test for sufficient cause interactions. However, when regression outcome models are used to control for confounding variables in tests for sufficient cause interactions, the outcome models impose restrictions on the relation between the confounding variables and certain unidentified background causes within the sufficient cause framework; often, these assumptions are implausible. By using marginal structural models, rather than outcome regression models, to test for sufficient cause interactions, modeling assumptions are instead made on the relation between the causes of interest and the confounding variables; these assumptions will often be more plausible. The use of marginal structural models also allows for testing for sufficient cause interactions in the presence of time-dependent confounding. Such time-dependent confounding may arise in cases in which one factor of interest affects both the second factor of interest and the outcome. It is furthermore shown that marginal structural models can be used not only to test for sufficient cause interactions but also to give lower bounds on the prevalence of such sufficient cause interactions.
doi:10.1093/aje/kwp396
PMCID: PMC2877448  PMID: 20067916
causal inference; interaction; marginal structural models; sufficient causes; synergism; weighting
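A minimal sketch of the strategy described above, assuming a single confounder and illustrative model forms: fit a linear-link marginal structural model by inverse-probability weighting and read the additive interaction contrast off the coefficient on the product term.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
c = rng.normal(size=n)                                # confounder
a = rng.binomial(1, 1.0 / (1.0 + np.exp(-c)))         # exposure A depends on C
b = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.5 * c)))   # exposure B depends on C
p = 0.05 + 0.15 * a + 0.10 * b + 0.20 * a * b + 0.20 / (1.0 + np.exp(-c))
y = rng.binomial(1, p)                                # true additive interaction: 0.20

# Inverse-probability weights from propensity models for A and B given C.
C = c.reshape(-1, 1)
pa = LogisticRegression().fit(C, a).predict_proba(C)[:, 1]
pb = LogisticRegression().fit(C, b).predict_proba(C)[:, 1]
w = 1.0 / (np.where(a == 1, pa, 1.0 - pa) * np.where(b == 1, pb, 1.0 - pb))

# Weighted linear-link MSM: coefficient on A*B is p11 - p10 - p01 + p00.
X = sm.add_constant(np.column_stack([a, b, a * b]))
fit = sm.WLS(y, X, weights=w).fit(cov_type="HC0")
print(f"additive interaction: {fit.params[3]:.3f} (SE {fit.bse[3]:.3f})")
```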
23.  Intervening on risk factors for coronary heart disease: an application of the parametric g-formula 
Estimating the population risk of disease under hypothetical interventions—such as the population risk of coronary heart disease (CHD) were everyone to quit smoking and start exercising or to start exercising if diagnosed with diabetes—may not be possible using standard analytic techniques. The parametric g-formula, which appropriately adjusts for time-varying confounders affected by prior exposures, is especially well suited to estimating effects when the intervention involves multiple factors (joint interventions) or when the intervention involves decisions that depend on the value of evolving time-dependent factors (dynamic interventions). We describe the parametric g-formula, and use it to estimate the effect of various hypothetical lifestyle interventions on the risk of CHD using data from the Nurses’ Health Study. Over the period 1982–2002, the 20-year risk of CHD in this cohort was 3.50%. Under a joint intervention of no smoking, increased exercise, improved diet, moderate alcohol consumption and reduced body mass index, the estimated risk was 1.89% (95% confidence interval: 1.46–2.41). We discuss whether the assumptions required for the validity of the parametric g-formula hold in the Nurses’ Health Study data. This work represents the first large-scale application of the parametric g-formula in an epidemiologic cohort study.
doi:10.1093/ije/dyp192
PMCID: PMC2786249  PMID: 19389875
g-formula; coronary heart disease; hypothetical interventions
24.  Transmission Dynamics and Control of Severe Acute Respiratory Syndrome 
Science (New York, N.Y.)  2003;300(5627):1966-1970.
Severe acute respiratory syndrome (SARS) is a recently described illness of humans that has spread widely over the past 6 months. With the use of detailed epidemiologic data from Singapore and epidemic curves from other settings, we estimated the reproductive number for SARS in the absence of interventions and in the presence of control efforts. We estimate that a single infectious case of SARS will infect about three secondary cases in a population that has not yet instituted control measures. Public-health efforts to reduce transmission are expected to have a substantial impact on reducing the size of the epidemic.
doi:10.1126/science.1086616
PMCID: PMC2760158  PMID: 12766207
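To make the abstract's estimate concrete: with a reproductive number of about 3 and no control measures, each generation of infection roughly triples. This is simple branching-process arithmetic, not a calculation from the paper.

```latex
\[
  \text{expected cases in generation } n \;\approx\; R^{\,n},
  \qquad R = 3:\quad 1 \to 3 \to 9 \to 27 \to 81,
\]
% whereas control measures that push the effective reproductive number
% below 1 turn this growth into decay, which is why interventions that
% reduce transmission are expected to shrink the epidemic substantially.
```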
25.  Incorporating prior beliefs about selection bias into the analysis of randomized trials with missing outcomes 
Biostatistics (Oxford, England)  2003;4(4):495-512.
Summary
In randomized studies with missing outcomes, non-identifiable assumptions are required to hold for valid data analysis. As a result, statisticians have been advocating the use of sensitivity analysis to evaluate the effect of varying assumptions on study conclusions. While this approach may be useful in assessing the sensitivity of treatment comparisons to missing-data assumptions, it may be dissatisfying to some researchers/decision makers because a single summary is not provided. In this paper, we present a fully Bayesian methodology that allows the investigator to draw a ‘single’ conclusion by formally incorporating prior beliefs about non-identifiable, yet interpretable, selection bias parameters. Our Bayesian model provides robustness to prior specification of the distributional form of the continuous outcomes.
doi:10.1093/biostatistics/4.4.495
PMCID: PMC2748253  PMID: 14557107
Dirichlet process prior; Identifiability; MCMC; Non-parametric Bayes; Selection model; Sensitivity analysis
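A schematic sketch of the idea, heavily simplified relative to the paper's Dirichlet-process selection model: place a prior on a non-identifiable selection-bias parameter (here, the mean difference between missing and observed outcomes) and propagate it through the estimate, yielding one posterior summary instead of a grid of sensitivity analyses. All numbers and distributional choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
y_obs = rng.normal(1.0, 1.0, 300)   # observed outcomes in one treatment arm
p_miss = 0.30                       # fraction of the arm with missing outcomes

n_draws = 4000
post = np.empty(n_draws)
for i in range(n_draws):
    # Approximate posterior draw for the observed-data mean (normal approximation).
    mu_obs = rng.normal(y_obs.mean(), y_obs.std(ddof=1) / np.sqrt(y_obs.size))
    # Prior belief about the non-identifiable bias: missing mean minus observed mean.
    delta = rng.normal(-0.5, 0.25)
    post[i] = (1.0 - p_miss) * mu_obs + p_miss * (mu_obs + delta)

lo, hi = np.percentile(post, [2.5, 97.5])
print(f"posterior mean {post.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```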
