To report the longer-term outcomes following either a strategy of endovascular repair first or open repair of ruptured abdominal aortic aneurysm, outcomes that are necessary for both patient and clinical decision-making.
Methods and results
This pragmatic multicentre (29 UK and 1 Canada) trial randomized 613 patients with a clinical diagnosis of ruptured aneurysm; 316 to an endovascular first strategy (if aortic morphology was suitable, open repair if not) and 297 to open repair. The principal 1-year outcome was mortality; secondary outcomes were re-interventions, hospital discharge, health-related quality-of-life (QoL) (EQ-5D), costs, quality-adjusted life-years (QALYs), and cost-effectiveness [incremental net benefit (INB)]. At 1 year, all-cause mortality was 41.1% for the endovascular strategy group and 45.1% for the open repair group, odds ratio 0.85 [95% confidence interval (CI) 0.62, 1.17], P = 0.325, with similar re-intervention rates in each group. The endovascular strategy and open repair groups had average total hospital stays of 17 and 26 days, respectively, P < 0.001. Patients surviving rupture had higher average EQ-5D utility scores in the endovascular strategy vs. open repair group, mean differences 0.087 (95% CI 0.017, 0.158) and 0.068 (95% CI −0.004, 0.140) at 3 and 12 months, respectively. There were indications that QALYs were higher and costs lower for the endovascular first strategy, combining to give an INB of £3877 (95% CI £253, £7408) or €4356 (95% CI €284, €8323).
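The incremental net benefit quoted above combines QALY gains and cost differences at a willingness-to-pay threshold. A minimal sketch of the calculation, using made-up inputs rather than the trial's actual values:

```python
# Illustrative incremental net benefit (INB) calculation.
# The QALY difference, cost difference and threshold are hypothetical numbers,
# not taken from the trial.
def incremental_net_benefit(delta_qalys, delta_costs, wtp):
    """INB = threshold * incremental QALYs - incremental costs."""
    return wtp * delta_qalys - delta_costs

# Hypothetical: 0.06 more QALYs and 2700 GBP lower costs per patient,
# valued at a threshold of 20000 GBP per QALY.
inb = incremental_net_benefit(0.06, -2700.0, 20000.0)
print(round(inb, 2))  # 3900.0
```

A positive INB at the chosen threshold indicates the strategy is cost-effective relative to the comparator.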
An endovascular first strategy for management of ruptured aneurysms does not offer a survival benefit over 1 year but offers patients faster discharge with better QoL and is cost-effective.
Clinical trial registration
Aneurysm; Aorta; Rupture; Surgery; Stent grafts; Cost-effectiveness
Individual participant time-to-event data from multiple prospective epidemiologic studies enable detailed investigation into the predictive ability of risk models. Here we address the challenges in appropriately combining such information across studies. Methods are exemplified by analyses of log C-reactive protein and conventional risk factors for coronary heart disease in the Emerging Risk Factors Collaboration, a collation of individual data from multiple prospective studies with an average follow-up duration of 9.8 years (dates varied). We derive risk prediction models using Cox proportional hazards regression analysis stratified by study and obtain estimates of risk discrimination, Harrell's concordance index, and Royston's discrimination measure within each study; we then combine the estimates across studies using a weighted meta-analysis. Various weighting approaches are compared and lead us to recommend using the number of events in each study. We also discuss the calculation of measures of reclassification for multiple studies. We further show that comparison of differences in predictive ability across subgroups should be based only on within-study information and that combining measures of risk discrimination from case-control studies and prospective studies is problematic. The concordance index and discrimination measure gave qualitatively similar results throughout. While the concordance index was very heterogeneous between studies, principally because of differing age ranges, the increments in the concordance index from adding log C-reactive protein to conventional risk factors were more homogeneous.
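The recommended event-based weighting can be sketched as a weighted average of study-specific concordance indices. The values below are illustrative; a full analysis would also propagate within-study standard errors through an inverse-variance meta-analysis.

```python
# Combine study-specific concordance (C) indices using the number of events
# in each study as weights (illustrative values, not from the collaboration).
def combine_c_indices(c_values, n_events):
    """Weighted average of C-indices; weights = events per study."""
    total = sum(n_events)
    return sum(c * w for c, w in zip(c_values, n_events)) / total

c_hat = combine_c_indices([0.70, 0.74, 0.68], [100, 300, 100])
print(round(c_hat, 3))  # 0.72
```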
C index; coronary heart disease; D measure; individual participant data; inverse variance; meta-analysis; risk prediction; weighting
Mendelian randomization analyses are often performed using summarized data. The causal estimate from a one‐sample analysis (in which data are taken from a single data source) with weak instrumental variables is biased in the direction of the observational association between the risk factor and outcome, whereas the estimate from a two‐sample analysis (in which data on the risk factor and outcome are taken from non‐overlapping datasets) is less biased and any bias is in the direction of the null. When using genetic consortia that have partially overlapping sets of participants, the direction and extent of bias are uncertain. In this paper, we perform simulation studies to investigate the magnitude of bias and Type 1 error rate inflation arising from sample overlap. We consider both a continuous outcome and a case‐control setting with a binary outcome. For a continuous outcome, bias due to sample overlap is a linear function of the proportion of overlap between the samples. So, in the case of a null causal effect, if the relative bias of the one‐sample instrumental variable estimate is 10% (corresponding to an F parameter of 10), then the relative bias with 50% sample overlap is 5%, and with 30% sample overlap is 3%. In a case‐control setting, if risk factor measurements are only included for the control participants, unbiased estimates are obtained even in a one‐sample setting. However, if risk factor data on both control and case participants are used, then bias is similar with a binary outcome as with a continuous outcome. Consortia releasing publicly available data on the associations of genetic variants with continuous risk factors should provide estimates that exclude case participants from case‐control samples.
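The linear relationship described above implies a simple approximation: under a null causal effect, the one-sample relative bias is roughly 1/F, scaled down by the proportion of overlapping participants. A sketch reproducing the abstract's worked example (this is an approximation, not an exact formula):

```python
# Approximate relative weak-instrument bias under partial sample overlap,
# for a null causal effect: one-sample relative bias is about 1/F, and bias
# scales linearly with the overlap proportion between the two samples.
def relative_bias(overlap, f_stat):
    return overlap * (1.0 / f_stat)

# F = 10 gives 10% relative bias in a fully overlapping (one-sample) analysis;
# 50% overlap gives 5%, and 30% overlap gives 3%.
print(relative_bias(1.0, 10))             # 0.1
print(round(relative_bias(0.5, 10), 3))   # 0.05
print(round(relative_bias(0.3, 10), 3))   # 0.03
```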
aggregated data; instrumental variables; Mendelian randomization; summarized data; weak instrument bias
Carotid intima-media thickness (CIMT) is a marker of subclinical organ damage and predicts cardiovascular disease (CVD) events in the general population. It has also been associated with vascular risk in people with diabetes. However, the association of CIMT change in repeated examinations with subsequent CVD events is uncertain, and its use as a surrogate end point in clinical trials is controversial. We aimed at determining the relation of CIMT change to CVD events in people with diabetes.
RESEARCH DESIGN AND METHODS
In a comprehensive meta-analysis of individual participant data, we collated data from 3,902 adults (age 33–92 years) with type 2 diabetes from 21 population-based cohorts. We calculated the hazard ratio (HR) per standard deviation (SD) difference in mean common carotid artery intima-media thickness (CCA-IMT) or in CCA-IMT progression, both calculated from two examinations on average 3.6 years apart, for each cohort, and combined the estimates with random-effects meta-analysis.
Average mean CCA-IMT ranged from 0.72 to 0.97 mm across cohorts in people with diabetes. The HR of CVD events was 1.22 (95% CI 1.12–1.33) per SD difference in mean CCA-IMT, after adjustment for age, sex, and cardiometabolic risk factors. Average mean CCA-IMT progression in people with diabetes ranged between −0.09 and 0.04 mm/year. The HR per SD difference in mean CCA-IMT progression was 0.99 (0.91–1.08).
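The random-effects combination of study-specific estimates can be sketched with the DerSimonian–Laird estimator on the log hazard ratio scale. The inputs below are illustrative, and this is one common choice of random-effects estimator rather than necessarily the one used in the analysis.

```python
import math

# DerSimonian-Laird random-effects meta-analysis of study-specific
# log hazard ratios (illustrative estimates and standard errors).
def dersimonian_laird(estimates, std_errors):
    w = [1.0 / se**2 for se in std_errors]          # fixed-effect weights
    pooled_fe = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - pooled_fe)**2 for wi, yi in zip(w, estimates))
    k = len(estimates)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
    w_re = [1.0 / (se**2 + tau2) for se in std_errors]
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return pooled, se_pooled, tau2

log_hrs = [0.18, 0.25, 0.10]
ses = [0.05, 0.05, 0.05]
pooled, se, tau2 = dersimonian_laird(log_hrs, ses)
print(round(math.exp(pooled), 2))  # pooled hazard ratio: 1.19
```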
Despite reproducing the association between CIMT level and vascular risk in subjects with diabetes, we did not find an association between CIMT change and vascular risk. These results do not support the use of CIMT progression as a surrogate end point in clinical trials in people with diabetes.
The interpretation of trial results can be helped by understanding how generalisable they are to the target population for which inferences are intended. INTERVAL, a large pragmatic randomised trial of blood donors in England, is assessing the effectiveness and safety of reducing inter-donation intervals. The trial recruited mainly from the blood service’s static centres, which collect only about 10 % of whole-blood donations. Hence, the extent to which the trial’s participants are representative of the general blood donor population is uncertain. We compare these groups in detail.
We present the CONSORT flowchart from participant invitation to randomisation in INTERVAL. We compare the characteristics of those eligible and consenting to participate in INTERVAL with the general donor population, using the national blood supply ‘PULSE’ database for the period of recruitment. We compare the characteristics of specific groups of trial participants recruited from different sources, as well as those who were randomised versus those not randomised.
From a total of 540,459 invitations, 48,725 donors were eligible and consented to participate in INTERVAL. The proportion of such donors varied from 1 % to 22 % depending on the source of recruitment. The characteristics of those consenting were similar to those of the general population of 1.3 million donors in terms of ethnicity, blood group distribution and recent deferral rates from blood donation due to low haemoglobin. However, INTERVAL participants included more men (50 % versus 44 %), were slightly older (mean age 43.1 versus 42.3 years), included fewer new donors (3 % versus 22 %) and had given more donations over the previous 2 years (mean 3.3 versus 2.2) than the general donor population. Of the consenting participants, 45,263 (93 %) donors were randomised. Compared to those not randomised, the randomised donors showed qualitatively similar differences to those described above.
There was broad similarity of participants in INTERVAL with the general blood donor population of England, notwithstanding some differences in age, sex and donation history. Any heterogeneity of the trial’s results according to these characteristics will need to be studied to ensure its generalisability to the general donor population.
Current Controlled Trials ISRCTN24760606. Registered on 25 January 2012.
Electronic supplementary material
The online version of this article (doi:10.1186/s13063-016-1579-7) contains supplementary material, which is available to authorized users.
Randomised trial; Recruitment; Representativeness; Generalisability; Blood donors; Blood donation
Mendelian randomization is the use of genetic instrumental variables to obtain causal inferences from observational data. Two recent developments for combining information on multiple uncorrelated instrumental variables (IVs) into a single causal estimate are as follows: (i) allele scores, in which individual‐level data on the IVs are aggregated into a univariate score, which is used as a single IV, and (ii) a summary statistic method, in which causal estimates calculated from each IV using summarized data are combined in an inverse‐variance weighted meta‐analysis. To avoid bias from weak instruments, unweighted and externally weighted allele scores have been recommended. Here, we propose equivalent approaches using summarized data and also provide extensions of the methods for use with correlated IVs. We investigate the impact of different choices of weights on the bias and precision of estimates in simulation studies. We show that allele score estimates can be reproduced using summarized data on genetic associations with the risk factor and the outcome. Estimates from the summary statistic method using external weights are biased towards the null when the weights are imprecisely estimated; in contrast, allele score estimates are unbiased. With equal or external weights, both methods provide appropriate tests of the null hypothesis of no causal effect even with large numbers of potentially weak instruments. We illustrate these methods using summarized data on the causal effect of low‐density lipoprotein cholesterol on coronary heart disease risk. It is shown that a more precise causal estimate can be obtained using multiple genetic variants from a single gene region, even if the variants are correlated. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Mendelian randomization; weak instruments; instrumental variables; causal inference; genetic variants; summarized data; aggregated data; allele score; genetic risk score
Large-scale epidemiological evidence on the role of inflammation in early atherosclerosis, assessed by carotid ultrasound, is lacking. We aimed to quantify cross-sectional and longitudinal associations of inflammatory markers with common-carotid-artery intima-media thickness (CCA-IMT) in the general population.
Information on high-sensitivity C-reactive protein, fibrinogen, leucocyte count and CCA-IMT was available in 20 prospective cohort studies of the PROG-IMT collaboration involving 49,097 participants free of pre-existing cardiovascular disease. Estimates of associations were calculated within each study and then combined using random-effects meta-analyses.
Mean baseline CCA-IMT amounted to 0.74 mm (SD = 0.18) and mean CCA-IMT progression over a mean of 3.9 years to 0.011 mm/year (SD = 0.039). Cross-sectional analyses showed positive linear associations between inflammatory markers and baseline CCA-IMT. After adjustment for traditional cardiovascular risk factors, mean differences in baseline CCA-IMT per one-SD higher inflammatory marker were: 0.0082 mm for high-sensitivity C-reactive protein (p < 0.001); 0.0072 mm for fibrinogen (p < 0.001); and 0.0025 mm for leucocyte count (p = 0.033). ‘Inflammatory load’, defined as the number of elevated inflammatory markers (i.e. in the upper two quintiles), showed a positive linear association with baseline CCA-IMT (p < 0.001). Longitudinal associations of baseline inflammatory markers and changes therein with CCA-IMT progression were null or at most weak. Participants with the highest ‘inflammatory load’ had greater CCA-IMT progression (p = 0.015).
Inflammation was independently associated with CCA-IMT cross-sectionally. The lack of clear associations with CCA-IMT progression may be explained by imprecision in its assessment within a limited time period. Our findings for ‘inflammatory load’ suggest important combined effects of the three inflammatory markers on early atherosclerosis.
Inflammation; atherosclerosis; meta-analysis
Strategies for screening and intervening to reduce the risk of cardiovascular disease (CVD) in primary care settings need to be assessed in terms of both their costs and long-term health effects. We undertook a literature review to investigate the methodologies used.
In a framework of developing a new health-economic model for evaluating different screening strategies for primary prevention of CVD in Europe (EPIC-CVD project), we identified seven key modeling issues and reviewed papers published between 2000 and 2013 to assess how they were addressed.
We found 13 relevant health-economic modeling studies of screening to prevent CVD in primary care. The models varied in their degree of complexity, with between two and 33 health states. Programmes that screen the whole population by a fixed cut-off (e.g., predicted 10-year CVD risk >20 %) identify predominantly elderly people, who may not be those most likely to benefit from long-term treatment. Uncertainty and model validation were generally poorly addressed. Few studies considered the disutility of taking drugs in otherwise healthy individuals or the budget impact of the programme.
Model validation, incorporation of parameter uncertainty, and sensitivity analyses for assumptions made are all important components of model building and reporting, and deserve more attention. Complex models may not necessarily give more accurate predictions. Availability of a large enough source dataset to reliably estimate all relevant input parameters is crucial for achieving credible results. Decision criteria should consider budget impact and the medicalization of the population as well as cost-effectiveness thresholds.
Electronic supplementary material
The online version of this article (doi:10.1007/s10198-015-0753-2) contains supplementary material, which is available to authorized users.
Cost-effectiveness analysis; Screening; Cardiovascular disease; Primary prevention; Statins; Literature review; I180; H510
Carotid intima media thickness (IMT) progression is increasingly used as a surrogate for vascular risk. This use is supported by data from a few clinical trials investigating statins, but established criteria of surrogacy are only partially fulfilled. To provide a valid basis for the use of IMT progression as a study end point, we are performing a 3-step meta-analysis project based on individual participant data.
Objectives of the 3 successive stages are to investigate (1) whether IMT progression prospectively predicts myocardial infarction, stroke, or death in population-based samples; (2) whether it does so in prevalent disease cohorts; and (3) whether interventions affecting IMT progression predict a therapeutic effect on clinical end points.
Recruitment strategies, inclusion criteria, and estimates of the expected numbers of eligible studies are presented along with a detailed analysis plan.
A case–cohort study is an efficient epidemiological study design for estimating exposure–outcome associations. When sampling of the subcohort is stratified, several methods of analysis are possible, but it is unclear how they compare. Our objective was to compare five analysis methods using Cox regression for this type of data, ranging from a crude model that ignores the stratification to a flexible one that allows nonproportional hazards and varying covariate effects across the strata.
Study Design and Setting
We applied the five methods to estimate the association between physical activity and incident type 2 diabetes using data from a stratified case–cohort study and also used artificial data sets to exemplify circumstances in which they can give different results.
In the diabetes study, all methods except the method that ignores the stratification gave similar results for the hazard ratio associated with physical activity. In the artificial data sets, the more flexible methods were shown to be necessary when certain assumptions of the simpler models failed. The most flexible method gave reliable results for all the artificial data sets.
The most flexible method is computationally straightforward, and appropriate whether or not key assumptions made by the simpler models are valid.
Case–cohort study; Cox model; Hazard ratio; Meta-analysis; Stratification; Subcohort selection
The value of measuring levels of glycated hemoglobin (HbA1c) for the prediction of first cardiovascular events is uncertain.
To determine whether adding information on HbA1c values to conventional cardiovascular risk factors is associated with improvement in prediction of cardiovascular disease (CVD) risk.
DESIGN, SETTING, AND PARTICIPANTS
Analysis of individual-participant data available from 73 prospective studies involving 294 998 participants without a known history of diabetes mellitus or CVD at the baseline assessment.
MAIN OUTCOMES AND MEASURES
Measures of risk discrimination for CVD outcomes (eg, C-index) and reclassification (eg, net reclassification improvement) of participants across predicted 10-year risk categories of low (<5%), intermediate (5% to <7.5%), and high (≥7.5%) risk.
During a median follow-up of 9.9 (interquartile range, 7.6-13.2) years, 20 840 incident fatal and nonfatal CVD outcomes (13 237 coronary heart disease and 7603 stroke outcomes) were recorded. In analyses adjusted for several conventional cardiovascular risk factors, there was an approximately J-shaped association between HbA1c values and CVD risk. The association between HbA1c values and CVD risk changed only slightly after adjustment for total cholesterol and triglyceride concentrations or estimated glomerular filtration rate, but this association attenuated somewhat after adjustment for concentrations of high-density lipoprotein cholesterol and C-reactive protein. The C-index for a CVD risk prediction model containing conventional cardiovascular risk factors alone was 0.7434 (95% CI, 0.7350 to 0.7517). The addition of information on HbA1c was associated with a C-index change of 0.0018 (0.0003 to 0.0033) and a net reclassification improvement of 0.42 (−0.63 to 1.48) for the categories of predicted 10-year CVD risk. The improvement provided by HbA1c assessment in prediction of CVD risk was equal to or better than estimated improvements for measurement of fasting, random, or postload plasma glucose levels.
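The categorical net reclassification improvement reported above can be sketched from counts of participants moving between the low, intermediate and high predicted-risk categories after adding HbA1c. The counts below are hypothetical, not the study's data:

```python
# Categorical net reclassification improvement (NRI); illustrative counts.
# "Up"/"down" refer to movement to a higher/lower predicted-risk category
# after adding the new marker to the model.
def nri(up_events, down_events, n_events,
        up_nonevents, down_nonevents, n_nonevents):
    """NRI = net proportion of events reclassified up
             + net proportion of non-events reclassified down."""
    event_term = (up_events - down_events) / n_events
    nonevent_term = (down_nonevents - up_nonevents) / n_nonevents
    return event_term + nonevent_term

# Hypothetical: of 1000 events, 60 move up and 50 move down; of 9000
# non-events, 400 move down and 380 move up.
print(round(nri(60, 50, 1000, 380, 400, 9000), 4))  # 0.0122
```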
CONCLUSIONS AND RELEVANCE
In a study of individuals without known CVD or diabetes, additional assessment of HbA1c values in the context of CVD risk assessment provided little incremental benefit for prediction of CVD risk.
Numerous meta-analyses in healthcare research combine results from only a small number of studies, for which the variance representing between-study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated.
The aim of this paper is to provide tools that improve the accessibility of Bayesian meta-analysis. We present two methods for implementing Bayesian meta-analysis, using numerical integration and importance sampling techniques. Based on 14 886 binary outcome meta-analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta-analyses.
The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log-normal distributions for the between-study variance, applicable to meta-analyses of binary outcomes on the log odds-ratio scale. The methods are applied to two example meta-analyses, incorporating the relevant predictive distributions as prior distributions for between-study heterogeneity.
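The numerical-integration approach can be sketched as follows. The model, grid ranges, data, and the log-normal prior parameters below are illustrative placeholders, not the paper's derived predictive distributions or its actual implementation:

```python
import math

# Sketch of Bayesian random-effects meta-analysis by numerical integration.
# Model: y_i ~ N(mu, s_i^2 + tau2), flat prior on mu, log-normal prior on the
# between-study variance tau2. Prior parameters and data are illustrative.
def log_lik(mu, tau2, y, s):
    return sum(-0.5 * math.log(2 * math.pi * (si**2 + tau2))
               - (yi - mu)**2 / (2 * (si**2 + tau2))
               for yi, si in zip(y, s))

def lognormal_logpdf(x, m, sd):
    return (-math.log(x * sd * math.sqrt(2 * math.pi))
            - (math.log(x) - m)**2 / (2 * sd**2))

def posterior_mean_tau2(y, s, prior_m=-2.0, prior_sd=1.5, grid=200):
    lo, hi = min(y) - 1.0, max(y) + 1.0
    mus = [lo + i * (hi - lo) / grid for i in range(grid + 1)]
    tau2s = [1e-4 + i * (1.0 - 1e-4) / grid for i in range(grid + 1)]
    weights = []
    for t2 in tau2s:
        lp = lognormal_logpdf(t2, prior_m, prior_sd)
        # constant mu grid spacing cancels when the posterior is normalized
        weights.append(sum(math.exp(lp + log_lik(mu, t2, y, s)) for mu in mus))
    total = sum(weights)
    return sum(w * t2 for w, t2 in zip(weights, tau2s)) / total

y = [0.41, 0.15, 0.30, 0.55]   # study-level log odds ratios (illustrative)
s = [0.12, 0.20, 0.15, 0.25]   # their standard errors
print(round(posterior_mean_tau2(y, s), 3))
```

Informative priors enter simply as the `lognormal_logpdf` term; swapping in one of the derived predictive distributions would only change `prior_m` and `prior_sd`.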
We have provided resources to facilitate Bayesian meta-analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
meta-analysis; Bayesian methods; heterogeneity; prior distributions
Genome-wide association studies, which typically report regression coefficients summarizing the associations of many genetic variants with various traits, are potentially a powerful source of data for Mendelian randomization investigations. We demonstrate how such coefficients from multiple variants can be combined in a Mendelian randomization analysis to estimate the causal effect of a risk factor on an outcome. The bias and efficiency of estimates based on summarized data are compared to those based on individual-level data in simulation studies. We investigate the impact of gene–gene interactions, linkage disequilibrium, and ‘weak instruments’ on these estimates. Both an inverse-variance weighted average of variant-specific associations and a likelihood-based approach for summarized data give similar estimates and precision to the two-stage least squares method for individual-level data, even when there are gene–gene interactions. However, these summarized data methods overstate precision when variants are in linkage disequilibrium. If the P-value in a linear regression of the risk factor for each variant is less than , then weak instrument bias will be small. We use these methods to estimate the causal association of low-density lipoprotein cholesterol (LDL-C) on coronary artery disease using published data on five genetic variants. A 30% reduction in LDL-C is estimated to reduce coronary artery disease risk by 67% (95% CI: 54% to 76%). We conclude that Mendelian randomization investigations using summarized data from uncorrelated variants are similarly efficient to those using individual-level data, although the necessary assumptions cannot be so fully assessed.
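The inverse-variance weighted (IVW) combination of variant-specific associations described above can be sketched directly from summarized statistics. The per-allele associations below are illustrative numbers, not the published LDL-C data:

```python
import math

# Inverse-variance weighted (IVW) Mendelian randomization estimate from
# summarized data: variant-specific ratio estimates by/bx are combined with
# first-order weights bx^2 / se_y^2 (bx = variant-risk factor association,
# by = variant-outcome association, se_y = its standard error).
def ivw_estimate(bx, by, se_y):
    num = sum(x * y / s**2 for x, y, s in zip(bx, by, se_y))
    den = sum(x**2 / s**2 for x, s in zip(bx, se_y))
    return num / den, math.sqrt(1.0 / den)

# Illustrative summarized associations for three uncorrelated variants.
bx = [0.30, 0.15, 0.25]
by = [0.12, 0.07, 0.11]
se_y = [0.02, 0.03, 0.025]
beta, se = ivw_estimate(bx, by, se_y)
print(round(beta, 3))  # 0.416
```

This assumes uncorrelated variants; as the abstract notes, applying it to variants in linkage disequilibrium overstates precision.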
Mendelian randomization; instrumental variables; genome-wide association study; causal inference; weak instruments
Finding individual-level data for adequately-powered Mendelian randomization analyses may be problematic. As publicly-available summarized data on genetic associations with disease outcomes from large consortia are becoming more abundant, use of published data is an attractive analysis strategy for obtaining precise estimates of the causal effects of risk factors on outcomes. We detail the necessary steps for conducting Mendelian randomization investigations using published data, and present novel statistical methods for combining data on the associations of multiple (correlated or uncorrelated) genetic variants with the risk factor and outcome into a single causal effect estimate. A two-sample analysis strategy may be employed, in which evidence on the gene-risk factor and gene-outcome associations are taken from different data sources. These approaches allow the efficient identification of risk factors that are suitable targets for clinical intervention from published data, although the ability to assess the assumptions necessary for causal inference is diminished. Methods and guidance are illustrated using the example of the causal effect of serum calcium levels on fasting glucose concentrations. The estimated causal effect of a 1 standard deviation (0.13 mmol/L) increase in calcium levels on fasting glucose (mM) using a single lead variant from the CASR gene region is 0.044 (95 % credible interval −0.002, 0.100). In contrast, using our method to account for the correlation between variants, the corresponding estimate using 17 genetic variants is 0.022 (95 % credible interval 0.009, 0.035), a more clearly positive causal effect.
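One way to account for correlation between variants is a generalized IVW estimate using the genetic correlation matrix; the weighting matrix used here (outcome standard errors scaled by the correlation matrix) is an assumption of this sketch rather than the paper's exact formulation, and the data are illustrative:

```python
# Generalized IVW estimate for correlated variants (a sketch).
# Omega_ij = se_i * se_j * rho_ij; beta = (bx' Omega^-1 by) / (bx' Omega^-1 bx).
def solve(a, b):
    """Solve a x = b by Gauss-Jordan elimination (small dense systems)."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))  # partial pivoting
        m[i], m[p] = m[p], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

def correlated_ivw(bx, by, se_y, rho):
    k = len(bx)
    omega = [[se_y[i] * se_y[j] * rho[i][j] for j in range(k)] for i in range(k)]
    w_by = solve(omega, by)   # Omega^-1 by
    w_bx = solve(omega, bx)   # Omega^-1 bx
    return (sum(x * w for x, w in zip(bx, w_by))
            / sum(x * w for x, w in zip(bx, w_bx)))

# Two illustrative variants with correlation 0.4.
bx = [0.30, 0.20]
by = [0.12, 0.09]
se_y = [0.02, 0.03]
rho = [[1.0, 0.4], [0.4, 1.0]]
print(round(correlated_ivw(bx, by, se_y, rho), 3))  # 0.401
```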
Electronic supplementary material
The online version of this article (doi:10.1007/s10654-015-0011-z) contains supplementary material, which is available to authorized users.
Mendelian randomization; Instrumental variable; Causal inference; Published data; Two-sample Mendelian randomization; Summarized data
A conventional Mendelian randomization analysis assesses the causal effect of a risk factor on an outcome by using genetic variants that are solely associated with the risk factor of interest as instrumental variables. However, in some cases, such as the case of triglyceride level as a risk factor for cardiovascular disease, it may be difficult to find a relevant genetic variant that is not also associated with related risk factors, such as other lipid fractions. Such a variant is known as pleiotropic. In this paper, we propose an extension of Mendelian randomization that uses multiple genetic variants associated with several measured risk factors to simultaneously estimate the causal effect of each of the risk factors on the outcome. This “multivariable Mendelian randomization” approach is similar to the simultaneous assessment of several treatments in a factorial randomized trial. In this paper, methods for estimating the causal effects are presented and compared using real and simulated data, and the assumptions necessary for a valid multivariable Mendelian randomization analysis are discussed. Subject to these assumptions, we demonstrate that triglyceride-related pathways have a causal effect on the risk of coronary heart disease independent of the effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol.
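With summarized data, multivariable Mendelian randomization can be sketched as a weighted regression of variant-outcome associations on the variant associations with each risk factor, without an intercept. The two-risk-factor closed form and the data below are illustrative, not the paper's analysis:

```python
# Sketch of multivariable MR with two risk factors: weighted linear regression
# of variant-outcome associations (by) on variant-risk-factor associations
# (bx1, bx2), no intercept, weights 1/se_y^2; 2x2 normal equations solved
# in closed form.
def multivariable_mr_2(bx1, bx2, by, se_y):
    w = [1.0 / s**2 for s in se_y]
    a11 = sum(wi * x1 * x1 for wi, x1 in zip(w, bx1))
    a22 = sum(wi * x2 * x2 for wi, x2 in zip(w, bx2))
    a12 = sum(wi * x1 * x2 for wi, x1, x2 in zip(w, bx1, bx2))
    c1 = sum(wi * x1 * yo for wi, x1, yo in zip(w, bx1, by))
    c2 = sum(wi * x2 * yo for wi, x2, yo in zip(w, bx2, by))
    det = a11 * a22 - a12 * a12
    return (a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det

# Illustrative per-allele associations with two lipid fractions and an outcome.
bx1 = [0.30, 0.10, 0.05, 0.20]
bx2 = [0.02, 0.25, 0.30, 0.05]
by = [0.15, 0.08, 0.05, 0.10]
b1, b2 = multivariable_mr_2(bx1, bx2, by, [0.02, 0.02, 0.03, 0.02])
print(round(b1, 2), round(b2, 2))
```

Each coefficient estimates the causal effect of one risk factor holding the other fixed, analogous to the factorial-trial interpretation above.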
causal inference; epidemiologic methods; instrumental variables; lipid fractions; Mendelian randomization; pleiotropy
There has been limited study of factors influencing response rates and attrition in online research. Online experiments were nested within the pilot (study 1, n = 3780) and main trial (study 2, n = 2667) phases of an evaluation of a Web-based intervention for hazardous drinkers: the Down Your Drink randomized controlled trial (DYD-RCT).
The objective was to determine whether differences in the length and relevance of questionnaires can impact upon loss to follow-up in online trials.
A randomized controlled trial design was used. All participants who consented to enter DYD-RCT and completed the primary outcome questionnaires were randomized to complete one of four secondary outcome questionnaires at baseline and at follow-up. These questionnaires varied in length (additional 23 or 34 versus 10 items) and relevance (alcohol problems versus mental health). The outcome measure was the proportion of participants who completed follow-up at each of two follow-up intervals: study 1 after 1 and 3 months and study 2 after 3 and 12 months.
At all four follow-up intervals there were no significant effects of additional questionnaire length on follow-up. Randomization to the less relevant questionnaire resulted in significantly lower rates of follow-up in two of the four assessments made (absolute difference of 4%, 95% confidence interval [CI] 0%-8%, in both study 1 after 1 month and in study 2 after 12 months). A post hoc pooled analysis across all four follow-up intervals found this effect of marginal statistical significance (unadjusted difference, 3%, range 1%-5%, P = .01; difference adjusted for prespecified covariates, 3%, range 0%-5%, P = .05).
Apparently minor differences in study design decisions may have a measurable impact on attrition in trials. Further investigation is warranted of the impact of the relevance of outcome measures on follow-up rates and, more broadly, of the consequences of what we ask participants to do when we invite them to take part in research studies.
ISRCTN Register 31070347; http://www.controlled-trials.com/ISRCTN31070347/31070347 (archived by WebCite at http://www.webcitation.org/62cpeyYaY)
Attrition; retention; missing data; response rates; alcohol; online
Previous Mendelian randomization studies have suggested that, while low-density lipoprotein cholesterol (LDL-c) and triglycerides are causally implicated in coronary artery disease (CAD) risk, high-density lipoprotein cholesterol (HDL-c) may not be, with causal effect estimates compatible with the null.
The causal effects of these three lipid fractions can be better identified using the extended methods of ‘multivariable Mendelian randomization’. We employ this approach using published data on 185 lipid-related genetic variants and their associations with lipid fractions in 188,578 participants, and with CAD risk in 22,233 cases and 64,762 controls. Our results suggest that HDL-c may be causally protective of CAD risk, independently of the effects of LDL-c and triglycerides. Estimated causal odds ratios per standard deviation increase, based on 162 variants not having pleiotropic associations with either blood pressure or body mass index, are 1.57 (95% credible interval 1.45 to 1.70) for LDL-c, 0.91 (0.83 to 0.99, p-value = 0.028) for HDL-c, and 1.29 (1.16 to 1.43) for triglycerides.
Some interventions on HDL-c concentrations may influence risk of CAD, but to a lesser extent than interventions on LDL-c. A causal interpretation of these estimates relies on the assumption that the genetic variants do not have pleiotropic associations with risk factors on other pathways to CAD. If they do, a weaker conclusion is that genetic predictors of LDL-c, HDL-c and triglycerides each have independent associations with CAD risk.
Ageing populations may demand more blood transfusions, but the blood supply could be limited by difficulties in attracting and retaining a decreasing pool of younger donors. One approach to increase blood supply is to collect blood more frequently from existing donors. If more donations could be safely collected in this manner at marginal cost, then it would be of considerable benefit to blood services. National Health Service (NHS) Blood and Transplant in England currently allows men to donate up to every 12 weeks and women to donate up to every 16 weeks. In contrast, some other European countries allow donations as frequently as every 8 weeks for men and every 10 weeks for women. The primary aim of the INTERVAL trial is to determine whether donation intervals can be safely and acceptably decreased to optimise blood supply whilst maintaining the health of donors.
INTERVAL is a randomised trial of whole blood donors enrolled from all 25 static centres of NHS Blood and Transplant. Recruitment of about 50,000 male and female donors started in June 2012 and was completed in June 2014. Men have been randomly assigned to standard 12-week versus 10-week versus 8-week inter-donation intervals, while women have been assigned to standard 16-week versus 14-week versus 12-week inter-donation intervals. Sex-specific comparisons will be made by intention-to-treat analysis of outcomes assessed after two years of intervention. The primary outcome is the number of blood donations made. A key secondary outcome is donor quality of life, assessed using the Short Form Health Survey. Additional secondary endpoints include the number of ‘deferrals’ due to low haemoglobin (and other factors), iron status, cognitive function, physical activity, and donor attitudes. A comprehensive health economic analysis will be undertaken.
The INTERVAL trial should yield novel information about the effect of inter-donation intervals on blood supply, acceptability, and donors’ physical and mental well-being. The study will generate scientific evidence to help formulate blood collection policies in England and elsewhere.
Current Controlled Trials ISRCTN24760606, 25 January 2012.
whole blood donation; randomised controlled trial; donation frequency; blood supply; donor well-being
Background: Mendelian randomization uses genetic variants, assumed to be instrumental variables for a particular exposure, to estimate the causal effect of that exposure on an outcome. If the instrumental variable criteria are satisfied, the resulting estimator is consistent even in the presence of unmeasured confounding and reverse causation.
Methods: We extend the Mendelian randomization paradigm to investigate more complex networks of relationships between variables, in particular where some of the effect of an exposure on the outcome may operate through an intermediate variable (a mediator). If instrumental variables for the exposure and mediator are available, direct and indirect effects of the exposure on the outcome can be estimated, for example using either a regression-based method or structural equation models. The direction of effect between the exposure and a possible mediator can also be assessed. Methods are illustrated in an applied example considering causal relationships between body mass index, C-reactive protein and uric acid.
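The regression-based approach described above can be illustrated with a minimal simulation. The sketch below is not the paper's analysis: the variable names, effect sizes, and single-instrument setup are illustrative assumptions. It uses ratio (Wald) instrumental-variable estimates with separate instruments for the exposure and the mediator, so that the indirect effect is the product of the exposure-to-mediator and mediator-to-outcome estimates, and the direct effect is the total minus the indirect effect.

```python
import random

random.seed(1)
n = 200_000

# Simulated data: g1 instruments the exposure X, g2 the mediator M;
# U is an unmeasured confounder of X, M, and Y.
# Illustrative true effects: X -> M = 0.5, M -> Y = 0.4, direct X -> Y = 0.3,
# so indirect = 0.5 * 0.4 = 0.2 and total = 0.3 + 0.2 = 0.5.
g1 = [random.gauss(0, 1) for _ in range(n)]
g2 = [random.gauss(0, 1) for _ in range(n)]
U = [random.gauss(0, 1) for _ in range(n)]
X = [g1[i] + U[i] + random.gauss(0, 1) for i in range(n)]
M = [0.5 * X[i] + g2[i] + U[i] + random.gauss(0, 1) for i in range(n)]
Y = [0.3 * X[i] + 0.4 * M[i] + U[i] + random.gauss(0, 1) for i in range(n)]

def slope(x, y):
    """Univariable regression slope: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Ratio (Wald) instrumental-variable estimates:
total = slope(g1, Y) / slope(g1, X)    # total effect of X on Y
x_on_m = slope(g1, M) / slope(g1, X)   # effect of X on M
m_on_y = slope(g2, Y) / slope(g2, M)   # effect of M on Y
indirect = x_on_m * m_on_y             # effect of X on Y through M
direct = total - indirect              # remaining direct effect

print(f"total={total:.2f} direct={direct:.2f} indirect={indirect:.2f}")
```

Despite the unmeasured confounder U, the estimates recover the simulated direct and indirect effects, because the genetic instruments are independent of U.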
Results: These estimators are consistent in the presence of unmeasured confounding if, in addition to the instrumental variable assumptions, the effects of both the exposure on the mediator and the mediator on the outcome are homogeneous across individuals and linear without interactions. Nevertheless, a simulation study demonstrates that even considerable heterogeneity in these effects does not lead to bias in the estimates.
Conclusions: These methods can be used to estimate direct and indirect causal effects in a mediation setting, and have potential for the investigation of more complex networks between multiple interrelated exposures and disease outcomes.
Mendelian randomization; mediation; instrumental variable; direct effect; indirect effect
Attrition from follow-up is a major methodological challenge in randomized trials. Incentives are known to improve response rates in cross-sectional postal and online surveys, yet few studies have investigated whether they can reduce attrition from follow-up in online trials, which are particularly vulnerable to low follow-up rates.
Our objective was to determine the impact of incentives on follow-up rates in an online trial.
Two randomized controlled trials were embedded in a large online trial of a Web-based intervention to reduce alcohol consumption (the Down Your Drink randomized controlled trial, DYD-RCT). Participants were those in the DYD pilot trial eligible for 3-month follow-up (study 1) and those eligible for 12-month follow-up in the DYD main trial (study 2). Participants in both studies were randomly allocated to receive an offer of an incentive or to receive no offer of an incentive. In study 1, participants in the incentive arm were randomly offered a £5 Amazon.co.uk gift voucher, a £5 charity donation to Cancer Research UK, or entry in a prize draw for £250. In study 2, participants in the incentive arm were offered a £10 Amazon.co.uk gift voucher. The primary outcome was the proportion of participants who completed follow-up questionnaires in the incentive arm(s) compared with the no incentive arm.
In study 1 (n = 1226), there was no significant difference in response rates between those participants offered an incentive (175/615, 29%) and those with no offer (162/611, 27%) (difference = 2%, 95% confidence interval [CI] –3% to 7%). There was no significant difference in response rates among the three different incentives offered. In study 2 (n = 2591), response rates were 9% higher in the group offered an incentive (476/1296, 37%) than in the group not offered an incentive (364/1295, 28%) (difference = 9%, 95% CI 5% to 12%, P < .001). The incremental cost per extra successful follow-up in the incentive arm was £110 in study 1 and £52 in study 2.
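The headline figures for study 2 can be reproduced with a standard two-proportion comparison; the sketch below assumes a normal-approximation (Wald) confidence interval, which matches the reported values to rounding.

```python
import math

# Study 2 follow-up counts as reported: 476/1296 responded when offered
# an incentive vs. 364/1295 when not.
r1, n1 = 476, 1296  # incentive arm
r2, n2 = 364, 1295  # no-incentive arm

p1, p2 = r1 / n1, r2 / n2
diff = p1 - p2

# Wald (normal-approximation) 95% CI for a difference in proportions.
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
# -> difference = 9%, 95% CI 5% to 12%
```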
Whereas an offer of a £10 Amazon.co.uk gift voucher can increase follow-up rates in online trials, an offer of a lower incentive may not. The marginal costs involved require careful consideration.
ISRCTN31070347; http://www.controlled-trials.com/ISRCTN31070347 (Archived by WebCite at http://www.webcitation.org/5wgr5pl3s)
Nonresponse; attrition; Internet; alcohol drinking; randomized controlled trial
The extent to which diabetes mellitus or hyperglycemia is related to risk of death from cancer or other nonvascular conditions is uncertain.
We calculated hazard ratios for cause-specific death, according to baseline diabetes status or fasting glucose level, from individual-participant data on 123,205 deaths among 820,900 people in 97 prospective studies.
After adjustment for age, sex, smoking status, and body-mass index, hazard ratios among persons with diabetes as compared with persons without diabetes were as follows: 1.80 (95% confidence interval [CI], 1.71 to 1.90) for death from any cause, 1.25 (95% CI, 1.19 to 1.31) for death from cancer, 2.32 (95% CI, 2.11 to 2.56) for death from vascular causes, and 1.73 (95% CI, 1.62 to 1.85) for death from other causes. Diabetes (vs. no diabetes) was moderately associated with death from cancers of the liver, pancreas, ovary, colorectum, lung, bladder, and breast. Aside from cancer and vascular disease, diabetes (vs. no diabetes) was also associated with death from renal disease, liver disease, pneumonia and other infectious diseases, mental disorders, nonhepatic digestive diseases, external causes, intentional self-harm, nervous-system disorders, and chronic obstructive pulmonary disease. Hazard ratios were appreciably reduced after further adjustment for glycemia measures, but not after adjustment for systolic blood pressure, lipid levels, inflammation or renal markers. Fasting glucose levels exceeding 100 mg per deciliter (5.6 mmol per liter), but not levels of 70 to 100 mg per deciliter (3.9 to 5.6 mmol per liter), were associated with death. A 50-year-old with diabetes died, on average, 6 years earlier than a counterpart without diabetes, with about 40% of the difference in survival attributable to excess nonvascular deaths.
In addition to vascular disease, diabetes is associated with substantial premature death from several cancers, infectious diseases, external causes, intentional self-harm, and degenerative disorders, independent of several major risk factors. (Funded by the British Heart Foundation and others.)
The case-cohort study design combines the advantages of a cohort study with the efficiency of a nested case-control study. However, unlike more standard observational study designs, there are currently no guidelines for reporting results from case-cohort studies. Our aim was to review recent practice in reporting these studies, and develop recommendations for the future. By searching papers published in 24 major medical and epidemiological journals between January 2010 and March 2013 using PubMed, Scopus and Web of Knowledge, we identified 32 papers reporting case-cohort studies. The median subcohort sampling fraction was 4.1% (interquartile range 3.7% to 9.1%). The papers varied in their approaches to describing the numbers of individuals in the original cohort and the subcohort, presenting descriptive data, and in the level of detail provided about the statistical methods used, so it was not always possible to be sure that appropriate analyses had been conducted. Based on the findings of our review, we make recommendations about reporting of the study design, subcohort definition, numbers of participants, descriptive information and statistical methods, which could be used alongside existing STROBE guidelines for reporting observational studies.
Health care and health care services are increasingly being delivered over the Internet. There is a strong argument that interventions delivered online should also be evaluated online to maximize the trial’s external validity. Conducting a trial online can help reduce research costs and improve some aspects of internal validity. To date, there are relatively few trials of health interventions that have been conducted entirely online. In this paper we describe the major methodological issues that arise in trials (recruitment, randomization, fidelity of the intervention, retention, and data quality), consider how the online context affects these issues, and use our experience of one online trial evaluating an intervention to help hazardous drinkers drink less (DownYourDrink) to illustrate potential solutions. Further work is needed to develop online trial methodology.
Internet; randomized controlled trial; research design; alcohol drinking
Carotid intima-media thickness (cIMT) is related to the risk of cardiovascular events in the general population. An association between changes in cIMT and cardiovascular risk is frequently assumed but has rarely been reported. Our aim was to test this association.
We identified general population studies that assessed cIMT at least twice and followed up participants for myocardial infarction, stroke, or death. The study teams collaborated in an individual participant data meta-analysis. Excluding individuals with previous myocardial infarction or stroke, we assessed the association between cIMT progression and the risk of cardiovascular events (myocardial infarction, stroke, vascular death, or a combination of these) for each study with Cox regression. The log hazard ratios (HRs) per SD difference were pooled by random-effects meta-analysis.
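The pooling step can be sketched with a DerSimonian-Laird random-effects model. This is a common method-of-moments estimator for such pooling, assumed here for illustration (the exact estimator used in the analysis is not stated in this abstract); the per-study inputs below are invented.

```python
import math

def pool_random_effects(log_hrs, ses):
    """DerSimonian-Laird random-effects pooling of per-study log HRs."""
    w = [1 / s**2 for s in ses]  # inverse-variance (fixed-effect) weights
    fe = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, log_hrs))  # Cochran's Q
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_re = [1 / (s**2 + tau2) for s in ses]  # random-effects weights
    est = sum(wi * y for wi, y in zip(w_re, log_hrs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return est, se, tau2

# Illustrative per-study log hazard ratios per SD of cIMT progression.
log_hrs = [-0.05, 0.02, -0.03, 0.01]
ses = [0.02, 0.03, 0.025, 0.04]
est, se, tau2 = pool_random_effects(log_hrs, ses)
print(f"pooled HR = {math.exp(est):.2f} "
      f"(95% CI {math.exp(est - 1.96 * se):.2f} "
      f"to {math.exp(est + 1.96 * se):.2f})")
```

When the studies agree exactly, Cochran's Q is zero, the between-study variance estimate is zero, and the pooled estimate reduces to the fixed-effect result.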
Of 21 eligible studies, 16 with 36 984 participants were included. During a mean follow-up of 7·0 years, 1519 myocardial infarctions, 1339 strokes, and 2028 combined endpoints (myocardial infarction, stroke, vascular death) occurred. Yearly cIMT progression was derived from two ultrasound visits 2–7 years (median 4 years) apart. For mean common carotid artery intima-media thickness progression, the overall HR of the combined endpoint was 0·97 (95% CI 0·94–1·00) when adjusted for age, sex, and mean common carotid artery intima-media thickness, and 0·98 (0·95–1·01) when also adjusted for vascular risk factors. Although we detected no associations with cIMT progression in sensitivity analyses, the mean cIMT of the two ultrasound scans was positively and robustly associated with cardiovascular risk (HR for the combined endpoint 1·16, 95% CI 1·10–1·22, adjusted for age, sex, mean common carotid artery intima-media thickness progression, and vascular risk factors). In three studies including 3439 participants who had four ultrasound scans, cIMT progression did not correlate between occasions (reproducibility correlations between occasions were low).
The association between cIMT progression assessed from two ultrasound scans and cardiovascular risk in the general population remains unproven. No conclusion can be derived for the use of cIMT progression as a surrogate endpoint in clinical trials.
Background The extent to which adult height, a biomarker of the interplay of genetic endowment and early-life experiences, is related to risk of chronic diseases in adulthood is uncertain.
Methods We calculated hazard ratios (HRs) for height, assessed in increments of 6.5 cm, using individual-participant data on 174 374 deaths or major non-fatal vascular outcomes recorded among 1 085 949 people in 121 prospective studies.
Results For people born between 1900 and 1960, mean adult height increased 0.5–1 cm with each successive decade of birth. After adjustment for age, sex, smoking and year of birth, HRs per 6.5 cm greater height were 0.97 (95% confidence interval: 0.96–0.99) for death from any cause, 0.94 (0.93–0.96) for death from vascular causes, 1.04 (1.03–1.06) for death from cancer and 0.92 (0.90–0.94) for death from other causes. Height was negatively associated with death from coronary disease, stroke subtypes, heart failure, stomach and oral cancers, chronic obstructive pulmonary disease, mental disorders, liver disease and external causes. In contrast, height was positively associated with death from ruptured aortic aneurysm, pulmonary embolism, melanoma and cancers of the pancreas, endocrine and nervous systems, ovary, breast, prostate, colorectum, blood and lung. HRs per 6.5 cm greater height ranged from 1.26 (1.12–1.42) for risk of melanoma death to 0.84 (0.80–0.89) for risk of death from chronic obstructive pulmonary disease. HRs were not appreciably altered after further adjustment for adiposity, blood pressure, lipids, inflammation biomarkers, diabetes mellitus, alcohol consumption or socio-economic indicators.
Conclusion Adult height has directionally opposing relationships with risk of death from several different major causes of chronic diseases.
Height; cardiovascular disease; cancer; cause-specific mortality; epidemiological study; meta-analysis