Results 1-25 (887295)

1.  Using and Interpreting Adjusted NNT Measures in Biomedical Research 
The number needed to treat (NNT) is a popular effect measure for presenting study results in biomedical research. NNTs were originally proposed to describe the absolute effect of a new treatment compared with a standard treatment or placebo in randomized controlled trials (RCTs) with binary outcomes. The NNT concept has since been applied to a number of other research areas, prompting the development of related measures and more sophisticated techniques for calculating and interpreting NNT measures in biomedical research. In epidemiology and public health research, adequate adjustment for covariates is usually required, leading to the application of adjusted NNT measures. An overview of the recent developments regarding adjustment of NNT measures is given. The use and interpretation of adjusted NNT measures is illustrated by means of examples from dentistry research.
doi:10.2174/1874210601004020072
PMCID: PMC2944994  PMID: 20871755
Number needed to treat; evidence-based medicine; confounding; adjustment for covariates; regression analysis
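
As a reminder of the basic quantity that the adjusted measures above build on, here is a minimal sketch of the unadjusted NNT calculation, the reciprocal of the absolute risk reduction; the event proportions are hypothetical.

    # Unadjusted NNT from two event proportions (illustrative numbers only)
    p_control = 0.20   # event risk under standard treatment or placebo
    p_treated = 0.15   # event risk under the new treatment

    arr = p_control - p_treated   # absolute risk reduction
    nnt = 1.0 / arr               # number needed to treat
    print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}")   # ARR = 0.050, NNT = 20.0
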
2.  Evaluating the Effect of Hospital and Insurance Type on the Risk of 1-Year Mortality of Very Low Birth Weight Infants: Controlling for Selection Bias 
Medical Care  2012;50(4):353-360.
OBJECTIVES
We examined the effect of hospital type and medical coverage on the risk of 1-year mortality of very low birth weight (VLBW) infants while adjusting for possible selection bias.
METHODS
The study population was limited to singleton live birth infants having birth weight between 500 and 1,500 grams with no congenital anomalies who were born in Arkansas hospitals between 2001 and 2007. Propensity score (PS) matching and PS covariate adjustment were used to mitigate selection bias. Additionally, a conventional multivariable logistic regression model was used for comparison purposes.
RESULTS
Generally, all three analytical approaches provided consistent results in terms of the estimated relative risk, absolute risk reduction, and the number needed to treat (NNT). Using the PS matching method, VLBW infants delivered at a hospital with a neonatal intensive care unit (NICU) were associated with a 35% relative decrease (95% bootstrap CI: 18.5% – 48.9%) in the risk of 1-year mortality as compared to those infants delivered at non-NICU hospitals. Furthermore, our results showed that on average, 16 VLBW infants (95% bootstrap CI: 11 – 32) would need to be delivered at a hospital with an NICU to prevent one additional death at one year. However, there was no difference in the risk of 1-year mortality between VLBW infants born to Medicaid-insured versus non-Medicaid-insured women.
CONCLUSIONS
Estimated relative risk of infant mortality was significantly lower for births that occurred in hospitals with an NICU; therefore, greater efforts should be made to deliver VLBW neonates in an NICU hospital.
doi:10.1097/MLR.0b013e318245a128
PMCID: PMC3306601  PMID: 22422056
3.  The performance of different propensity-score methods for estimating differences in proportions (risk differences or absolute risk reductions) in observational studies 
Statistics in Medicine  2010;29(20):2137-2148.
Propensity score methods are increasingly being used to estimate the effects of treatments on health outcomes using observational data. There are four methods for using the propensity score to estimate treatment effects: covariate adjustment using the propensity score, stratification on the propensity score, propensity-score matching, and inverse probability of treatment weighting (IPTW) using the propensity score. When outcomes are binary, the effect of treatment on the outcome can be described using odds ratios, relative risks, risk differences, or the number needed to treat. Several clinical commentators suggested that risk differences and numbers needed to treat are more meaningful for clinical decision making than are odds ratios or relative risks. However, there is a paucity of information about the relative performance of the different propensity-score methods for estimating risk differences. We conducted a series of Monte Carlo simulations to examine this issue. We examined bias, variance estimation, coverage of confidence intervals, mean-squared error (MSE), and type I error rates. A doubly robust version of IPTW had superior performance compared with the other propensity-score methods. It resulted in unbiased estimation of risk differences, treatment effects with the lowest standard errors, confidence intervals with the correct coverage rates, and correct type I error rates. Stratification, matching on the propensity score, and covariate adjustment using the propensity score resulted in minor to modest bias in estimating risk differences. Estimators based on IPTW had lower MSE compared with other propensity-score methods. Differences between IPTW and propensity-score matching may reflect that these two methods estimate the average treatment effect and the average treatment effect for the treated, respectively. Copyright © 2010 John Wiley & Sons, Ltd.
doi:10.1002/sim.3854
PMCID: PMC3068290  PMID: 20108233
propensity score; observational study; binary data; risk difference; number needed to treat; matching; IPTW; inverse probability of treatment weighting; propensity-score matching
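
A minimal sketch of the IPTW risk-difference estimator discussed above, applied to simulated observational data. The data-generating model, sample size, and use of scikit-learn for the propensity model are illustrative assumptions; the doubly robust refinement the paper favours is not shown.

    # Sketch: IPTW estimate of a risk difference from simulated data
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    x = rng.normal(size=(n, 3))                         # baseline covariates
    a = rng.binomial(1, 1 / (1 + np.exp(-(x @ [0.5, -0.25, 0.3]))))
    y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.4 * a + x @ [0.3, 0.3, -0.2]))))

    ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
    w = a / ps + (1 - a) / (1 - ps)                     # IPT weights
    rd = (np.sum(w * a * y) / np.sum(w * a)
          - np.sum(w * (1 - a) * y) / np.sum(w * (1 - a)))
    print(f"IPTW risk difference: {rd:.3f}")
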
4.  Optimal caliper widths for propensity-score matching when estimating differences in means and differences in proportions in observational studies 
Pharmaceutical Statistics  2010;10(2):150-161.
In a study comparing the effects of two treatments, the propensity score is the probability of assignment to one treatment conditional on a subject's measured baseline covariates. Propensity-score matching is increasingly being used to estimate the effects of exposures using observational data. In the most common implementation of propensity-score matching, pairs of treated and untreated subjects are formed whose propensity scores differ by at most a pre-specified amount (the caliper width). There has been little research into the optimal caliper width. We conducted an extensive series of Monte Carlo simulations to determine the optimal caliper width for estimating differences in means (for continuous outcomes) and risk differences (for binary outcomes). When estimating differences in means or risk differences, we recommend that researchers match on the logit of the propensity score using calipers of width equal to 0.2 of the standard deviation of the logit of the propensity score. When at least some of the covariates were continuous, then either this value, or one close to it, minimized the mean square error of the resultant estimated treatment effect. It also eliminated at least 98% of the bias in the crude estimator, and it resulted in confidence intervals with approximately the correct coverage rates. Furthermore, the empirical type I error rate was approximately correct. When all of the covariates were binary, then the choice of caliper width had a much smaller impact on the performance of estimation of risk differences and differences in means. Copyright © 2010 John Wiley & Sons, Ltd.
doi:10.1002/pst.433
PMCID: PMC3120982  PMID: 20925139
propensity score; observational study; binary data; risk difference; propensity-score matching; Monte Carlo simulations; bias; matching
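
A sketch of the recommendation above: match on the logit of the propensity score within a caliper of 0.2 standard deviations of that logit. The greedy 1:1 matching routine below is a simplified illustration, not the simulation code used in the paper.

    # Greedy 1:1 matching within a caliper of 0.2 SD of logit(PS)
    import numpy as np

    def match_within_caliper(ps, treated):
        """ps: propensity scores in (0, 1); treated: boolean array."""
        logit = np.log(ps / (1 - ps))
        caliper = 0.2 * logit.std()
        controls = np.where(~treated)[0]
        pairs, used = [], set()
        for i in np.where(treated)[0]:
            d = np.abs(logit[controls] - logit[i])
            for j in np.argsort(d):               # nearest control first
                if d[j] > caliper:
                    break                         # no eligible match left
                if controls[j] not in used:
                    pairs.append((i, controls[j]))
                    used.add(controls[j])
                    break
        return pairs

    rng = np.random.default_rng(1)
    ps = np.clip(rng.beta(2, 2, 500), 0.01, 0.99)
    treated = rng.random(500) < ps
    print(len(match_within_caliper(ps, treated)), "matched pairs")
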
5.  Calculation of NNTs in RCTs with time-to-event outcomes: A literature review 
Background
The number needed to treat (NNT) is a well-known effect measure for reporting the results of clinical trials. In the case of time-to-event outcomes, the calculation of NNTs is more difficult than in the case of binary data. The frequency of using NNTs to report results of randomised controlled trials (RCTs) investigating time-to-event outcomes and the adequacy of the applied calculation methods are unknown.
Methods
We searched in PubMed for RCTs with parallel group design and individual randomisation, published in four frequently cited journals between 2003 and 2005. We evaluated the type of outcome, the frequency of reporting NNTs with corresponding confidence intervals, and assessed the adequacy of the methods used to calculate NNTs in the case of time-to-event outcomes.
Results
The search resulted in 734 eligible RCTs. Of these, 373 RCTs investigated time-to-event outcomes and 361 analyzed binary data. In total, 62 articles reported NNTs (34 articles with time-to-event outcomes, 28 articles with binary outcomes). Of the 34 articles reporting NNTs derived from time-to-event outcomes, only 17 applied an appropriate calculation method. Of the 62 articles reporting NNTs, only 21 articles presented corresponding confidence intervals.
Conclusion
The NNT is used as an effect measure to present the results of RCTs with binary and time-to-event outcomes in the current medical literature. In the case of time-to-event data, incorrect methods were frequently applied. Confidence intervals for NNTs were given in only one third of the articles reporting NNTs. In summary, there is much room for improvement in the application of NNTs to present results of RCTs, especially where the outcome is time to an event.
doi:10.1186/1471-2288-9-21
PMCID: PMC2666755  PMID: 19302699
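
One appropriate calculation for time-to-event outcomes expresses the NNT at a fixed time horizon through the survival probabilities of the two arms, NNT(t) = 1 / (S1(t) - S0(t)); the survival estimates in the sketch below are hypothetical.

    # NNT at a fixed horizon from survival-curve estimates
    s_treated_5y = 0.80   # estimated 5-year survival, treatment arm
    s_control_5y = 0.72   # estimated 5-year survival, control arm

    nnt_5y = 1.0 / (s_treated_5y - s_control_5y)
    print(f"NNT at 5 years = {nnt_5y:.1f}")   # 12.5
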
6.  Covariate adjustment in randomized trials with binary outcomes: Targeted maximum likelihood estimation 
Statistics in medicine  2009;28(1):39-64.
SUMMARY
Covariate adjustment using linear models for continuous outcomes in randomized trials has been shown to increase efficiency and power over the unadjusted method in estimating the marginal effect of treatment. However, for binary outcomes, investigators generally rely on the unadjusted estimate as the literature indicates that covariate-adjusted estimates based on the logistic regression models are less efficient. The crucial step that has been missing when adjusting for covariates is that one must integrate/average the adjusted estimate over those covariates in order to obtain the marginal effect. We apply the method of targeted maximum likelihood estimation (tMLE) to obtain estimators for the marginal effect using covariate adjustment for binary outcomes. We show that the covariate adjustment in randomized trials using the logistic regression models can be mapped, by averaging over the covariate(s), to obtain a fully robust and efficient estimator of the marginal effect, which equals a targeted maximum likelihood estimator. This tMLE is obtained by simply adding a clever covariate to a fixed initial regression. We present simulation studies that demonstrate that this tMLE increases efficiency and power over the unadjusted method, particularly for smaller sample sizes, even when the regression model is mis-specified.
doi:10.1002/sim.3445
PMCID: PMC2857590  PMID: 18985634
clinical trials; efficiency; covariate adjustment; variable selection
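
A compact sketch of the targeted step described above: fit an initial logistic regression, add the "clever covariate" built from the known randomization probability, refit with the initial fit as an offset, and average the updated predictions over the covariates. The simulated data and the statsmodels implementation are illustrative assumptions, not the authors' code.

    # tMLE-style estimate of the marginal risk difference in an RCT
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 5000
    w = rng.normal(size=n)                        # baseline covariate
    a = rng.binomial(1, 0.5, size=n)              # randomized treatment
    y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.6 * a + 0.8 * w))))

    # 1. initial fit: logistic regression of Y on A and W
    X = np.column_stack([np.ones(n), a, w])
    fit0 = sm.GLM(y, X, family=sm.families.Binomial()).fit()

    # 2. clever covariate H(A) = A/g - (1-A)/(1-g), with g = 0.5 by design
    g = 0.5
    h = a / g - (1 - a) / (1 - g)

    # 3. fluctuation: regress Y on H with the initial fit as offset
    eps = sm.GLM(y, h[:, None], family=sm.families.Binomial(),
                 offset=X @ fit0.params).fit().params[0]

    # 4. average the updated predictions over the covariate distribution
    def mean_pred(a_val):
        Xa = np.column_stack([np.ones(n), np.full(n, a_val), w])
        h_a = a_val / g - (1 - a_val) / (1 - g)
        return np.mean(1 / (1 + np.exp(-(Xa @ fit0.params + eps * h_a))))

    print(f"marginal risk difference: {mean_pred(1) - mean_pred(0):.3f}")
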
7.  Treatment Time-Specific Number Needed to Treat Estimates for Tissue Plasminogen Activator Therapy in Acute Stroke Based on Shifts Over the Entire Range of the Modified Rankin Scale 
Background and Purpose
To make informed treatment decisions, patients and physicians need to be aware of the benefits and risks of a proposed treatment. The number needed to treat (NNT) for benefit and harm are intuitive and statistically valid measures to describe a treatment effect. The aim of this study is to calculate treatment time-specific NNT estimates based on shifts over the entire spectrum of clinically relevant functional outcomes.
Methods
The pooled data set of the first 6 major randomized acute stroke trials of intravenous tissue plasminogen activator was used for this study. The data were stratified by 90-minute treatment time windows. NNT for benefit and NNT for harm estimates were determined based on expert generation of joint outcome distribution tables. NNT for benefit estimates were also calculated based on joint outcome distribution tables generated by a computer model.
Results
NNT for benefit estimates based on the expert panel were 3.6 for patients treated between 0 and 90 minutes, 4.3 with treatment between 91 and 180 minutes, 5.9 with treatment between 181 and 270 minutes, and 19.3 with treatment between 271 and 360 minutes. The computer simulation yielded very similar results. The NNT for harm estimates for the corresponding time intervals are 65, 38, 30, and 14.
Conclusions
Up to 4½ hours after symptom onset, tissue plasminogen activator therapy is associated with more benefit than harm, whereas there is no evidence of a net benefit in the 4½- to 6-hour time window. The NNT estimates for each 90-minute epoch provide useful and intuitive information based on which patients may be able to make better informed treatment decisions.
doi:10.1161/STROKEAHA.108.540708
PMCID: PMC2881642  PMID: 19372447
biostatistics; number needed to treat; stroke; thrombolysis
8.  Covariate balance in a Bayesian propensity score analysis of beta blocker therapy in heart failure patients 
Regression adjustment for the propensity score is a statistical method that reduces confounding from measured variables in observational data. A Bayesian propensity score analysis extends this idea by using simultaneous estimation of the propensity scores and the treatment effect. In this article, we conduct an empirical investigation of the performance of Bayesian propensity scores in the context of an observational study of the effectiveness of beta-blocker therapy in heart failure patients. We study the balancing properties of the estimated propensity scores. Traditional Frequentist propensity scores focus attention on balancing covariates that are strongly associated with treatment. In contrast, we demonstrate that Bayesian propensity scores can be used to balance the association between covariates and the outcome. This balancing property has the effect of reducing confounding bias because it reduces the degree to which covariates are outcome risk factors.
doi:10.1186/1742-5573-6-5
PMCID: PMC2758880  PMID: 19744338
9.  Comparing paired vs non-paired statistical methods of analyses when making inferences about absolute risk reductions in propensity-score matched samples 
Statistics in Medicine  2011;30(11):1292-1301.
Propensity-score matching allows one to reduce the effects of treatment-selection bias or confounding when estimating the effects of treatments using observational data. Some authors have suggested that methods of inference appropriate for independent samples can be used for assessing the statistical significance of treatment effects when using propensity-score matching. Indeed, many authors in the applied medical literature use methods for independent samples when making inferences about treatment effects using propensity-score matched samples. Dichotomous outcomes are common in healthcare research. In this study, we used Monte Carlo simulations to examine the effect on inferences about risk differences (or absolute risk reductions) when statistical methods for independent samples are used compared with when statistical methods for paired samples are used in propensity-score matched samples. We found that compared with using methods for independent samples, the use of methods for paired samples resulted in: (i) empirical type I error rates that were closer to the advertised rate; (ii) empirical coverage rates of 95 per cent confidence intervals that were closer to the advertised rate; (iii) narrower 95 per cent confidence intervals; and (iv) estimated standard errors that more closely reflected the sampling variability of the estimated risk difference. Differences between the empirical and advertised performance of methods for independent samples were greater when the treatment-selection process was stronger compared with when it was weaker. We recommend using statistical methods for paired samples when using propensity-score matched samples for making inferences on the effect of treatment on the reduction in the probability of an event occurring. Copyright © 2011 John Wiley & Sons, Ltd.
doi:10.1002/sim.4200
PMCID: PMC3110307  PMID: 21337595
propensity score; propensity-score matching; risk difference; absolute risk reduction; Monte Carlo simulations; statistical inference; hypothesis testing; type I error rate; categorical data analysis
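
A small simulation sketch of the paper's point: when outcomes are positively correlated within matched pairs, the paired standard error for the risk difference differs from the independent-samples one, and only the former reflects the pairing. The data-generating model is an illustrative assumption.

    # Paired vs independent-samples SEs for a risk difference in 1:1 pairs
    import numpy as np

    rng = np.random.default_rng(3)
    n_pairs = 2000
    u = rng.normal(size=n_pairs)                  # shared within-pair factor
    y_trt = rng.binomial(1, 1 / (1 + np.exp(-(u - 0.9))))
    y_ctl = rng.binomial(1, 1 / (1 + np.exp(-(u - 0.5))))

    d = y_trt - y_ctl
    rd = d.mean()                                 # estimated risk difference
    se_paired = d.std(ddof=1) / np.sqrt(n_pairs)  # respects the pairing
    p1, p0 = y_trt.mean(), y_ctl.mean()
    se_indep = np.sqrt(p1 * (1 - p1) / n_pairs + p0 * (1 - p0) / n_pairs)
    print(f"RD = {rd:+.3f}; paired SE = {se_paired:.4f}, "
          f"independent-samples SE = {se_indep:.4f}")
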
10.  How to Obtain NNT from Cohen's d: Comparison of Two Methods 
PLoS ONE  2011;6(4):e19070.
Background
Many indices of the size of a treatment effect (effect size: ES) are found in the literature. The preferred index of treatment effect in evidence-based medicine is the number needed to treat (NNT), while the most common in the medical literature, when the outcome is continuous, is Cohen's d. There is confusion about how to convert Cohen's d into NNT.
Methods
We conducted meta-analyses of individual patient data from 10 randomized controlled trials of second generation antipsychotics for schizophrenia (n = 4278) to produce Cohen's d and NNTs for various definitions of response, using cutoffs of 10% through 90% reduction on the symptom severity scale. These actual NNTs were compared with NNTs calculated from Cohen's d according to two proposed methods in the literature (Kraemer, et al., Biological Psychiatry, 2006; Furukawa, Lancet, 1999).
Results
NNTs from Kraemer's method overlapped with the actual NNTs in 56% of the examined instances, while those based on Furukawa's method fell within the observed ranges of NNTs in 97%. For the definitions of response corresponding to 10% through 70% symptom reduction, where a non-negligible number of responders was observed, the degree of agreement for the former method was at chance level (ANOVA ICC 0.12, p = 0.22), whereas that for the latter method was 0.86 (95% CI: 0.55 to 0.95, p < 0.01).
Conclusions
Furukawa's method allows more accurate prediction of NNTs from Cohen's d. Kraemer's method gives a wrong impression that NNT is constant for a given d even when the event rate differs.
doi:10.1371/journal.pone.0019070
PMCID: PMC3083419  PMID: 21556361
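
The two conversions compared above can be written in a few lines. A sketch, assuming the usual formulations: Kraemer's formula NNT = 1/(2Φ(d/√2) − 1), which does not involve the control event rate (CER), and Furukawa's formula NNT = 1/(Φ(d + Φ⁻¹(CER)) − CER); the d and CER values below are arbitrary.

    # Converting Cohen's d to NNT: Kraemer's vs Furukawa's method
    from scipy.stats import norm

    def nnt_kraemer(d):
        return 1.0 / (2 * norm.cdf(d / 2**0.5) - 1)

    def nnt_furukawa(d, cer):
        return 1.0 / (norm.cdf(d + norm.ppf(cer)) - cer)

    d = 0.5
    for cer in (0.2, 0.4, 0.6):
        print(f"CER={cer:.1f}: Kraemer NNT={nnt_kraemer(d):.1f}, "
              f"Furukawa NNT={nnt_furukawa(d, cer):.1f}")
    # Kraemer's NNT is constant in CER; Furukawa's varies with it.
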
11.  A Weighting Approach to Causal Effects and Additive Interaction in Case-Control Studies: Marginal Structural Linear Odds Models 
American Journal of Epidemiology  2011;174(10):1197-1203.
Estimates of additive interaction from case-control data are often obtained by logistic regression; such models can also be used to adjust for covariates. This approach to estimating additive interaction has come under some criticism because of possible misspecification of the logistic model: If the underlying model is linear, the logistic model will be misspecified. The authors propose an inverse probability of treatment weighting approach to causal effects and additive interaction in case-control studies. Under the assumption of no unmeasured confounding, the approach amounts to fitting a marginal structural linear odds model. The approach allows for the estimation of measures of additive interaction between dichotomous exposures, such as the relative excess risk due to interaction, using case-control data without having to rely on modeling assumptions for the outcome conditional on the exposures and covariates. Rather than using conditional models for the outcome, models are instead specified for the exposures conditional on the covariates. The approach is illustrated by assessing additive interaction between genetic and environmental factors using data from a case-control study.
doi:10.1093/aje/kwr334
PMCID: PMC3246690  PMID: 22058231
case-control studies; interaction; linear model; structural model; synergism; weighting
12.  Covariate adjustment for two-sample treatment comparisons in randomized clinical trials: A principled yet flexible approach 
Statistics in medicine  2008;27(23):4658-4677.
SUMMARY
There is considerable debate regarding whether and how covariate-adjusted analyses should be used in the comparison of treatments in randomized clinical trials. Substantial baseline covariate information is routinely collected in such trials, and one goal of adjustment is to exploit covariates associated with outcome to increase the precision of estimation of the treatment effect. However, concerns are routinely raised over the potential for bias when the covariates used are selected post hoc, and over the potential for adjustment based on a model of the relationship between outcome, covariates, and treatment to invite a "fishing expedition" for the model leading to the most dramatic effect estimate. By appealing to the theory of semiparametrics, we are led naturally to a characterization of all treatment effect estimators and to principled, practically feasible methods for covariate adjustment that yield the desired gains in efficiency and that allow covariate relationships to be identified and exploited while circumventing the usual concerns. The methods and strategies for their implementation in practice are presented. Simulation studies and an application to data from an HIV clinical trial demonstrate the performance of the techniques relative to existing methods.
doi:10.1002/sim.3113
PMCID: PMC2562926  PMID: 17960577
baseline variables; clinical trials; covariate adjustment; efficiency; semiparametric theory; variable selection
13.  Bias and variance trade-offs when combining propensity score weighting and regression: with an application to HIV status and homeless men 
The quality of propensity scores is traditionally measured by assessing how well they make the distributions of covariates in the treatment and control groups match, which we refer to as "good balance". Good balance guarantees less biased estimates of the treatment effect. However, the cost of achieving good balance is that the variance of the estimates increases due to a reduction in effective sample size, either through the introduction of propensity score weights or through dropping cases in propensity score matching. In this paper, we investigate whether it is best to optimize the balance or to settle for a less than optimal balance and use double robust estimation to adjust for remaining differences. We compare treatment effect estimates from regression, propensity score weighting, and double robust estimation with varying levels of effort expended to achieve balance, using data from a study of differences in outcomes by HIV status in heterosexually active homeless men residing in Los Angeles. Because data collection efforts are so costly for this population, it is important to find an alternative estimation method that does not reduce effective sample size as much as methods that aggressively aim to optimize balance. Results from a simulation study suggest that there are instances in which we can obtain more precise treatment effect estimates without increasing bias too much by using a combination of regression and propensity score weights that achieves a less than optimal balance. There is a bias-variance tradeoff at work in propensity score estimation; every step toward better balance usually means an increase in variance, and at some point a marginal decrease in bias may not be worth the associated increase in variance.
PMCID: PMC3433039  PMID: 22956891
Propensity score; Double robust estimation; HIV status; Homeless men
14.  A Randomized Comparison of Patients' Understanding of Number Needed to Treat and Other Common Risk Reduction Formats 
BACKGROUND
Commentators have suggested that patients may understand quantitative information about treatment benefits better when it is presented as a number needed to treat (NNT) rather than as an absolute or relative risk reduction.
OBJECTIVE
To determine whether NNT helps patients interpret treatment benefits better than absolute risk reduction (ARR), relative risk reduction (RRR), or a combination of all three of these risk reduction presentations (COMBO).
DESIGN
Randomized cross-sectional survey.
SETTING
University internal medicine clinic.
PATIENTS
Three hundred fifty-seven men and women, ages 50 to 80, who presented for health care.
INTERVENTIONS
Subjects were given written information about the baseline risk of a hypothetical “disease Y” and were asked (1) to compare the benefits of two drug treatments for disease Y, stating which provided more benefit; and (2) to calculate the effect of one of those drug treatments on a given baseline risk of disease. Risk information was presented to each subject in one of four randomly allocated risk formats: NNT, ARR, RRR, or COMBO.
MAIN RESULTS
When asked to state which of two treatments provided more benefit, subjects who received the RRR format responded correctly most often (60% correct vs 43% for COMBO, 42% for ARR, and 30% for NNT, P = .001). Most subjects were unable to calculate the effect of drug treatment on the given baseline risk of disease, although subjects receiving the RRR and ARR formats responded correctly more often (21% and 17% compared to 7% for COMBO and 6% for NNT, P = .004).
CONCLUSION
Patients are best able to interpret the benefits of treatment when they are presented in an RRR format with a given baseline risk of disease. ARR also is easily interpreted. NNT is often misinterpreted by patients and should not be used alone to communicate risk to patients.
doi:10.1046/j.1525-1497.2003.21102.x
PMCID: PMC1494938  PMID: 14687273
data interpretation (statistical); decision making; numeracy; patient participation (statistics and numerical data)
15.  Estimating Effects of Nursing Intervention via Propensity Score Analysis 
Nursing research  2008;57(6):444-452.
Background
Lack of randomization of nursing intervention in outcome effectiveness studies may lead to imbalanced covariates. Consequently, estimation of nursing intervention effect can be biased as in other observational studies. Propensity score analysis is an effective statistical method to reduce such bias and further derive causal effects in observational studies.
Objectives
To illustrate the use of propensity score analysis in quantitative nursing research through an example of pain management effect on length of hospital stay.
Methods
Propensity scores are generated through a regression model treating the nursing intervention as the dependent variable and all confounding covariates as predictor variables. Then propensity scores are used to adjust for this nonrandomized assignment of nursing intervention through three approaches: regression covariance adjustment, stratification, and matching in the predictive outcome model for nursing intervention.
Results
Propensity score analysis reduces the confounding covariates into a single variable of propensity score. After stratification and matching on propensity scores, observed covariates between nursing intervention groups are more balanced within each stratum or in the matched samples. The likelihood of receiving pain management is accounted for in the outcome model through the propensity scores. Both regression covariance adjustment and matching methods report a significant pain management effect on length of hospital stay in this example. The pain management effect can be regarded as causal when the strongly ignorable treatment assignment assumption holds.
Discussion
Propensity score analysis provides an alternative statistical approach to the classical multivariate regression, stratification and matching techniques for examining the effects of nursing intervention with a large number of confounding covariates in the background. It can be used to derive causal effects of nursing intervention in observational studies under certain circumstances.
doi:10.1097/NNR.0b013e31818c66f6
PMCID: PMC2778306  PMID: 19018219
matching; nursing effectiveness research; nursing interventions; propensity score
16.  A structural mean model to allow for noncompliance in a randomized trial comparing 2 active treatments 
Biostatistics (Oxford, England)  2010;12(2):247-257.
We propose a structural mean modeling approach to obtain compliance-adjusted estimates for treatment effects in a randomized-controlled trial comparing 2 active treatments. The model relates an individual's observed outcome to his or her counterfactual untreated outcome through the observed receipt of active treatments. Our proposed estimation procedure exploits baseline covariates that predict compliance levels on each arm. We give a closed-form estimator which allows for differential and unexplained selectivity (i.e. noncausal compliance-outcome association due to unobserved confounding) as well as a nonparametric error distribution. In a simple linear model for a 2-arm trial, we show that the distinct causal parameters are identified unless covariate-specific expected compliance levels are proportional on both treatment arms. In the latter case, only a linear contrast between the 2 treatment effects is estimable and may well be of key interest. We demonstrate the method in a clinical trial comparing 2 antidepressants.
doi:10.1093/biostatistics/kxq053
PMCID: PMC3062146  PMID: 20805286
Causal inference; Randomized-controlled trials; Structural mean models
17.  An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies 
Multivariate Behavioral Research  2011;46(3):399-424.
The propensity score is the probability of treatment assignment conditional on observed baseline characteristics. The propensity score allows one to design and analyze an observational (nonrandomized) study so that it mimics some of the particular characteristics of a randomized controlled trial. In particular, the propensity score is a balancing score: conditional on the propensity score, the distribution of observed baseline covariates will be similar between treated and untreated subjects. I describe 4 different propensity score methods: matching on the propensity score, stratification on the propensity score, inverse probability of treatment weighting using the propensity score, and covariate adjustment using the propensity score. I describe balance diagnostics for examining whether the propensity score model has been adequately specified. Furthermore, I discuss differences between regression-based methods and propensity score-based methods for the analysis of observational data. I describe different causal average treatment effects and their relationship with propensity score analyses.
doi:10.1080/00273171.2011.568786
PMCID: PMC3144483  PMID: 21818162
18.  Imputing missing covariate values for the Cox model 
Statistics in Medicine  2009;28(15):1982-1998.
Multiple imputation is commonly used to impute missing data, and is typically more efficient than complete cases analysis in regression analysis when covariates have missing values. Imputation may be performed using a regression model for the incomplete covariates on other covariates and, importantly, on the outcome. With a survival outcome, it is a common practice to use the event indicator D and the log of the observed event or censoring time T in the imputation model, but the rationale is not clear.
We assume that the survival outcome follows a proportional hazards model given covariates X and Z. We show that a suitable model for imputing binary or Normal X is a logistic or linear regression on the event indicator D, the cumulative baseline hazard H0(T), and the other covariates Z. This result is exact in the case of a single binary covariate; in other cases, it is approximately valid for small covariate effects and/or small cumulative incidence. If we do not know H0(T), we approximate it by the Nelson–Aalen estimator of H(T) or estimate it by Cox regression.
We compare the methods using simulation studies. We find that using log T biases covariate-outcome associations towards the null, while the new methods have lower bias. Overall, we recommend including the event indicator and the Nelson–Aalen estimator of H(T) in the imputation model. Copyright © 2009 John Wiley & Sons, Ltd.
doi:10.1002/sim.3618
PMCID: PMC2998703  PMID: 19452569
missing data; missing covariates; multiple imputation; proportional hazards model
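
A sketch of the recommended imputation-model covariates above: the event indicator D and the Nelson–Aalen estimate of the cumulative hazard H(T), computed here by hand under the simplifying assumption of distinct event times; the simulated data are illustrative.

    # Nelson-Aalen cumulative hazard at each subject's own time
    import numpy as np

    def nelson_aalen(time, event):
        order = np.argsort(time)
        at_risk = np.arange(len(time), 0, -1)     # risk-set size at sorted times
        H_sorted = np.cumsum(event[order] / at_risk)
        H = np.empty_like(H_sorted)
        H[order] = H_sorted                       # map back to original order
        return H

    rng = np.random.default_rng(4)
    n = 1000
    time = rng.exponential(1.0, n)                 # observed times T
    event = rng.binomial(1, 0.7, n).astype(float)  # event indicator D

    H = nelson_aalen(time, event)
    # Impute an incomplete covariate X by regressing it on (D, H, Z):
    # linear regression for Normal X, logistic regression for binary X,
    # then draw imputations from the fitted model (multiple imputation).
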
19.  Estimating Treatment Effects on Healthcare Costs Under Exogeneity: Is There a 'Magic Bullet'? 
Methods for estimating average treatment effects, under the assumption of no unmeasured confounders, include regression models; propensity score adjustments using stratification, weighting, or matching; and doubly robust estimators (a combination of both). Researchers continue to debate the best estimator for outcomes such as health care cost data, as they are usually characterized by an asymmetric distribution and heterogeneous treatment effects. Challenges in finding the right specifications for regression models are well documented in the literature. Propensity score estimators are proposed as alternatives for overcoming these challenges. Using simulations, we find that in moderate-size samples (n = 5000), balancing on propensity scores that are estimated from saturated specifications can balance the covariate means across treatment arms but fails to balance higher-order moments and covariances amongst covariates. Therefore, unlike regression models, even if a formal model for outcomes is not required, propensity score estimators can be inefficient at best and biased at worst for health care cost data. Our simulation study, designed to take a 'proof by contradiction' approach, proves that no one estimator can be considered the best under all data-generating processes for outcomes such as costs. The inverse-propensity weighted estimator is most likely to be unbiased under alternate data-generating processes but is prone to bias under misspecification of the propensity score model and is inefficient compared to an unbiased regression estimator. Our results show that there are no 'magic bullets' when it comes to estimating treatment effects in health care costs. Care should be taken before naively applying any one estimator to estimate average treatment effects in these data. We illustrate the performance of alternative methods in a cost dataset on breast cancer treatment.
doi:10.1007/s10742-011-0072-8
PMCID: PMC3244728  PMID: 22199462
Propensity score; non-linear regression; average treatment effect; health care costs
20.  Effects of Information Framing on the Intentions of Family Physicians to Prescribe Long-Term Hormone Replacement Therapy 
OBJECTIVE
To determine whether the way in which information on benefits and harms of long-term hormone replacement therapy (HRT) is presented influences family physicians' intentions to prescribe this treatment.
DESIGN
Family physicians were randomized to receive information on treatment outcomes expressed in relative terms, or as the number needing to be treated (NNT) with HRT to prevent or cause an event. A control group received no information.
SETTING
Primary care.
PARTICIPANTS
Family physicians practicing in the Hunter Valley, New South Wales, Australia.
INTERVENTION
Estimates of the impact of long-term HRT on risk of coronary events, hip fractures, and breast cancer were summarized as relative (proportional) decreases or increases in risk, or as NNT.
MEASUREMENTS AND MAIN RESULTS
Intention to prescribe HRT for seven hypothetical patients was measured on Likert scales. Of 389 family physicians working in the Hunter Valley, 243 completed the baseline survey and 215 participated in the randomized trial. Baseline intention to prescribe varied across patients—it was highest in the presence of risk factors for hip fracture, but coexisting risk factors for breast cancer had a strong negative influence. Overall, a larger proportion of subjects receiving information expressed as NNT had reduced intentions, and a smaller proportion had increased intentions to prescribe HRT than those receiving the information expressed in relative terms, or the control group. However, the differences were small and only reached statistical significance for three hypothetical patients. Framing effects were minimal when the hypothetical patient had coexisting risk factors for breast cancer.
CONCLUSIONS
Information framing had some effect on family physicians' intentions to prescribe HRT, but the effects were smaller than those previously reported, and they were modified by the presence of serious potential adverse treatment effects.
doi:10.1046/j.1525-1497.1999.09028.x
PMCID: PMC1496748  PMID: 10571703
information framing; medical decision making; relative risk; absolute risk; randomized controlled trial
21.  Meta-analysis, Simpson's paradox, and the number needed to treat 
Background
There is debate concerning methods for calculating numbers needed to treat (NNT) from results of systematic reviews.
Methods
We investigate the susceptibility to bias for alternative methods for calculating NNTs through illustrative examples and mathematical theory.
Results
Two competing methods have been recommended: one involves calculating the NNT from meta-analytical estimates; the other treats the data as if they all arose from a single trial. The 'treat-as-one-trial' method was found to be susceptible to bias when there were imbalances between groups within one or more trials in the meta-analysis (Simpson's paradox). Calculation of NNTs from meta-analytical estimates is not prone to the same bias. The method of calculating the NNT from a meta-analysis depends on the treatment effect used. When relative measures of treatment effect are used, the estimates of NNTs can be tailored to the level of baseline risk.
Conclusions
The treat-as-one-trial method of calculating numbers needed to treat should not be used as it is prone to bias. Analysts should always report the method they use to compute estimates to enable readers to judge whether it is appropriate.
doi:10.1186/1471-2288-2-3
PMCID: PMC65634  PMID: 11860606
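
A small numerical illustration of the bias described above, with hypothetical counts: both trials show a benefit, but pooling the arms as if they came from a single trial reverses the sign of the risk difference because the treated arm is concentrated in the high-risk trial.

    # Simpson's paradox with the "treat-as-one-trial" method
    trials = [  # (treated events, treated N, control events, control N)
        (300, 1000, 40, 100),    # high-risk trial: ARR = 0.40 - 0.30 = 0.10
        (15, 100, 200, 1000),    # low-risk trial:  ARR = 0.20 - 0.15 = 0.05
    ]
    for et, nt, ec, nc in trials:
        print(f"within-trial ARR = {ec / nc - et / nt:+.2f}")

    et = sum(t[0] for t in trials); nt = sum(t[1] for t in trials)
    ec = sum(t[2] for t in trials); nc = sum(t[3] for t in trials)
    print(f"treat-as-one-trial ARR = {ec / nc - et / nt:+.3f}")  # negative!
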
22.  American, British and European recommendations for statins in the primary prevention of cardiovascular disease applied to British men studied prospectively 
Heart  2006;92(9):1213-1218.
Objective
To compare national and international recommendations for statin treatment in the primary prevention of cardiovascular disease (CVD) in middle‐aged men.
Design
Application of the current American, British and European recommendations to results of a prospective study.
Participants
Men aged 49–65 years (n  =  1653) who participated in the Caerphilly Prospective Study.
Main outcome measures
Proportion of patients who would receive statin treatment, the number needed to treat (NNT) to prevent one first CVD event (myocardial infarction or stroke) over 10 years and the potential number of events prevented over 10 years in the whole population (population impact) by the use of statins in accordance with each set of guidelines, assuming a reduction of risk in the range 10–50% from the observed events and baseline risk factors.
Results
A total of 212 events were noted. For an anticipated reduction in first CVD events of 30% with statin treatment, the NNT was 26.0 if the whole population was treated. The lowest NNT was 12.1 for the National Service Framework, achieved when only 14% of the men received a statin. This prevented the lowest number of events (19.2/212), however, and had the smallest population impact on CVD incidence (−9.1%). The American and earlier Joint British Societies guidelines, although giving NNTs of around 21, prevented more events and had a greater population impact of −21.6% to −23.3%. They did, however, target about 60% of the male population. The British Hypertension Society guidelines and new Joint British Societies recommendations achieved the greatest population impact of −27% while maintaining the NNT at 22.2. They did, however, target three quarters of this population.
Conclusion
Even effective preventive treatment will have little impact in preventing disease if patients at typical risk are not treated. Whether cholesterol lowering on such a scale should be attempted with drugs raises philosophical, psychological and economic considerations, particularly in view of the low likelihood of individual benefit from statin treatment. More effective nutritional policies to reduce serum cholesterol on a population level and reduce the requirement for statins in primary prevention should also be considered.
doi:10.1136/hrt.2005.085183
PMCID: PMC1861164  PMID: 16717068
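
The arithmetic behind the abstract's whole-population figure, using its own numbers: 212 first events among 1653 men over 10 years and an anticipated 30% relative risk reduction give an NNT of 26.

    # NNT from baseline risk and an assumed relative risk reduction
    events, n, rrr = 212, 1653, 0.30

    baseline_risk = events / n        # ~0.128 over 10 years
    arr = baseline_risk * rrr         # absolute risk reduction
    nnt = 1 / arr
    print(f"baseline risk = {baseline_risk:.3f}, NNT = {nnt:.1f}")   # 26.0
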
23.  Systematic Review of the Literature on Comparative Effectiveness of Antiviral Treatments for Chronic Hepatitis B Infection 
OBJECTIVES
To evaluate the comparative effectiveness of antiviral drugs in adults with chronic hepatitis B monoinfection for evidence-based decision-making.
METHODS
A systematic review of randomized controlled clinical trials (RCTs) published in English. Results after interferon and nucleos(t)ides analog therapies were synthesized with random-effects meta-analyses and number needed to treat (NNT).
RESULTS
Despite sustained improvements in selected biomarkers, no one drug regimen improved all intermediate outcomes. In 16 underpowered RCTs, drug treatments did not reduce mortality, liver cancer, or cirrhosis. Sustained HBV DNA clearance was achieved in one patient when two were treated with adefovir (NNT from 1 RCT = 2, 95% CI 1 to 2) or interferon alfa-2b (NNT from 2 RCTs = 2, 95% CI 2 to 4), 13 with lamivudine (NNT from 1 RCT = 13, 95% CI 7 to 1000), and 11 with peginterferon alfa-2a vs. lamivudine (NNT from 1 RCT = 11, 95% CI 7 to 25). Sustained HBeAg seroconversion was achieved in one patient when eight were treated with interferon alfa-2b (NNT from 2 RCTs = 8, 95% CI 5 to 33) or 10 with peginterferon alfa-2b vs. interferon alfa-2b (NNT from 1 RCT = 10, 95% CI 5 to 1000). Greater benefits and safety after entecavir vs. lamivudine or pegylated interferon alfa-2b vs. interferon alfa-2b require future investigation of clinical outcomes. Adverse events were common and more frequent after interferon. Treatment utilization for adverse effects is unknown.
CONCLUSIONS
Individual clinical decisions should rely on comparative effectiveness and absolute rates of intermediate outcomes and adverse events. Future research should clarify the relationship of intermediate and clinical outcomes and cost-effectiveness of drugs for evidence-based policy and clinical decisions.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-010-1569-5) contains supplementary material, which is available to authorized users.
doi:10.1007/s11606-010-1569-5
PMCID: PMC3043173  PMID: 21203860
antiviral agents/adverse effects; antiviral agents/therapeutic use; hepatitis B/therapy; treatment outcome; cost-benefit analysis; decision trees
24.  Proportional Hazards Models with Continuous Marks 
Annals of statistics  2009;37(1):394-426.
For time-to-event data with finitely many competing risks, the proportional hazards model has been a popular tool for relating the cause-specific outcomes to covariates [Prentice et al. Biometrics 34 (1978) 541–554]. This article studies an extension of this approach to allow a continuum of competing risks, in which the cause of failure is replaced by a continuous mark only observed at the failure time. We develop inference for the proportional hazards model in which the regression parameters depend nonparametrically on the mark and the baseline hazard depends nonparametrically on both time and mark. This work is motivated by the need to assess HIV vaccine efficacy, while taking into account the genetic divergence of infecting HIV viruses in trial participants from the HIV strain that is contained in the vaccine, and adjusting for covariate effects. Mark-specific vaccine efficacy is expressed in terms of one of the regression functions in the mark-specific proportional hazards model. The new approach is evaluated in simulations and applied to the first HIV vaccine efficacy trial.
doi:10.1214/07-AOS554
PMCID: PMC2762218  PMID: 19838313
Competing risks; distribution-free confidence bands and tests; failure time data; genetic data; HIV vaccine trial; pointwise and simultaneous confidence bands; semiparametric model; survival analysis
25.  Improving efficiency of inferences in randomized clinical trials using auxiliary covariates 
Biometrics  2008;64(3):707-715.
Summary
The primary goal of a randomized clinical trial is to make comparisons among two or more treatments. For example, in a two-arm trial with continuous response, the focus may be on the difference in treatment means; with more than two treatments, the comparison may be based on pairwise differences. With binary outcomes, pairwise odds ratios or log-odds ratios may be used. In general, comparisons may be based on meaningful parameters in a relevant statistical model. Standard analyses for estimation and testing in this context typically are based on the data collected on response and treatment assignment only. In many trials, auxiliary baseline covariate information may also be available, and it is of interest to exploit these data to improve the efficiency of inferences. Taking a semiparametric theory perspective, we propose a broadly applicable approach to adjustment for auxiliary covariates to achieve more efficient estimators and tests for treatment parameters in the analysis of randomized clinical trials. Simulations and applications demonstrate the performance of the methods.
doi:10.1111/j.1541-0420.2007.00976.x
PMCID: PMC2574960  PMID: 18190618
Covariate adjustment; Hypothesis test; k-arm trial; Kruskal-Wallis test; Log-odds ratio; Longitudinal data; Semiparametric theory
