Results 1-16 (16)
1.  Characteristics of a loop of evidence that affect detection and estimation of inconsistency: a simulation study 
Background
The assumption of consistency, defined as agreement between direct and indirect sources of evidence, underlies the increasingly popular method of network meta-analysis. This assumption is often evaluated by statistically testing for a difference between direct and indirect estimates within each loop of evidence. However, the test is believed to be underpowered. We aim to evaluate its properties when applied to a loop typically found in published networks.
Methods
In a simulation study we estimate type I error, power and coverage probability of the inconsistency test for dichotomous outcomes using realistic scenarios informed by previous empirical studies. We evaluate test properties in the presence or absence of heterogeneity, using different estimators of heterogeneity and by employing different methods for inference about pairwise summary effects (Knapp-Hartung and inverse variance methods).
Results
As expected, power is positively associated with sample size and outcome frequency and negatively associated with the presence of heterogeneity. Type I error converges to the nominal level as the total number of individuals in the loop increases. Coverage is close to the nominal level in most cases. Different estimators of heterogeneity do not greatly affect test performance, but the method used to derive the variances of the direct estimates does affect inconsistency inference. The Knapp-Hartung method is more powerful, especially in the absence of heterogeneity, but exhibits a larger type I error. The power for a ‘typical’ loop (comprising 8 trials and about 2000 participants) to detect a 35% relative change between the direct and indirect estimates of the odds ratio was 14% with the inverse variance method and 21% with the Knapp-Hartung method (with type I error of 5% and 11%, respectively).
Conclusions
The study gives insight into the conditions under which the statistical test can detect important inconsistency in a loop of evidence. Although different methods to estimate the uncertainty of the mean effect may improve the test performance, this study suggests that the test has low power for the ‘typical’ loop. Investigators should interpret results very carefully and always consider the comparability of the studies in terms of potential effect modifiers.
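The inconsistency test studied above compares the direct estimate of one comparison with the indirect estimate formed from the other two sides of the loop, on the log odds-ratio scale. A minimal sketch of this z-test in Python (illustrative only; variable names and numbers are not from the paper):

```python
import math

def loop_inconsistency(d_direct, v_direct, d_indirect, v_indirect):
    """Z-test for inconsistency in a single loop of evidence.

    d_*: direct and indirect estimates of the same comparison on the
    log odds-ratio scale; v_*: their variances.  In an A-B-C loop the
    indirect estimate of A vs B is d_AC - d_BC, with variance
    v_AC + v_BC (assuming independence of the two sources).
    """
    w = d_direct - d_indirect                  # inconsistency factor
    se_w = math.sqrt(v_direct + v_indirect)
    z = w / se_w
    # two-sided p-value from the standard normal distribution
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return w, z, p
```

With hypothetical inputs, `loop_inconsistency(0.3, 0.04, 0.0, 0.05)` gives an inconsistency factor of 0.3 with standard error 0.3, i.e. z = 1. The low power reported in the paper reflects how large `v_direct + v_indirect` typically is for loops of this size.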
Electronic supplementary material
The online version of this article (doi:10.1186/1471-2288-14-106) contains supplementary material, which is available to authorized users.
doi:10.1186/1471-2288-14-106
PMCID: PMC4190337  PMID: 25239546
Mixed treatment comparison; Multiple interventions; Coherence; Consistency; Simulation study; Bias
2.  Homocysteine lowering interventions for preventing cardiovascular events 
Background
Cardiovascular disease, such as coronary artery disease, stroke and congestive heart failure, is a leading cause of death worldwide. A postulated risk factor is an elevated circulating total homocysteine (tHcy) level, which is influenced mainly by blood levels of cyanocobalamin (vitamin B12), folic acid (vitamin B9) and pyridoxine (vitamin B6). There is uncertainty regarding the strength of the association between tHcy and the risk of cardiovascular disease.
Objectives
To assess the clinical effectiveness of homocysteine-lowering interventions (HLI) in people with or without pre-existing cardiovascular disease.
Search methods
We searched The Cochrane Central Register of Controlled Trials (CENTRAL) on The Cochrane Library (issue 3 2008), MEDLINE (1950 to August 2008), EMBASE (1988 to August 2008), and LILACS (1982 to September 2, 2008). We also searched in Allied and Complementary Medicine (AMED; 1985 to August 2008), ISI Web of Science (1993 to August 2008), and the Cochrane Stroke Group Specialised Register (April 2007). We hand searched pertinent journals and the reference lists of included papers. We also contacted researchers in the field. There was no language restriction in the search.
Selection criteria
We included randomised clinical trials (RCTs) assessing the effects of HLI for preventing cardiovascular events with a follow-up period of 1 year or longer. We considered myocardial infarction and stroke as the primary outcomes. We excluded studies in patients with end-stage renal disease.
Data collection and analysis
We independently performed study selection, risk of bias assessment and data extraction. We estimated relative risks (RR) for dichotomous outcomes. We measured statistical heterogeneity using I2. We used a random-effects model to synthesise the findings.
Main results
We included eight RCTs involving 24,210 participants; the risk of bias was generally low. HLI did not reduce the risk of non-fatal or fatal myocardial infarction (pooled RR 1.03, 95% CI 0.94 to 1.13, I2 = 0%), stroke (pooled RR 0.89, 95% CI 0.73 to 1.08, I2 = 15%) or death from any cause (pooled RR 1.00, 95% CI 0.92 to 1.09, I2 = 0%).
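The random-effects synthesis used in this review can be sketched with the DerSimonian-Laird estimator, pooling log relative risks weighted by inverse total variance. This is a generic sketch, not the review's actual code, and the study data below are illustrative:

```python
import math

def random_effects_pool(log_effects, variances):
    """DerSimonian-Laird random-effects pooling of log relative risks.

    log_effects: per-study log RRs; variances: their within-study
    variances.  Returns the pooled log effect, its standard error,
    and I^2 (%) as a measure of statistical heterogeneity.
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, log_effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_effects))
    df = len(log_effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # re-weight by total (within + between) variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, i2
```

Exponentiating `pooled` and `pooled ± 1.96 * se` recovers the RR and its 95% CI on the original scale; when I² = 0, as for two of the outcomes above, the random-effects and fixed-effect results coincide.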
Authors’ conclusions
Results from available published trials suggest that there is no evidence to support the use of HLI to prevent cardiovascular events.
doi:10.1002/14651858.CD006612.pub2
PMCID: PMC4164174  PMID: 19821378
Angina Pectoris [prevention & control]; Cardiovascular Diseases [*prevention & control]; Hyperhomocysteinemia [*therapy]; Myocardial Infarction [prevention & control]; Randomized Controlled Trials as Topic; Stroke [prevention & control]; Vitamin B Complex [*therapeutic use]; Humans
3.  Evaluating the Quality of Evidence from a Network Meta-Analysis 
PLoS ONE  2014;9(7):e99682.
Systematic reviews that collate data about the relative effects of multiple interventions via network meta-analysis are highly informative for decision-making purposes. A network meta-analysis provides two types of findings for a specific outcome: the relative treatment effect for all pairwise comparisons, and a ranking of the treatments. It is important to consider the confidence that can be placed in these two types of results, so that clinicians, policy makers and patients can make informed decisions. We propose an approach to determining confidence in the output of a network meta-analysis. Our proposed approach is based on methodology developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group for pairwise meta-analyses. The suggested framework for evaluating a network meta-analysis acknowledges (i) the key role of indirect comparisons; (ii) the contributions of each piece of direct evidence to the network meta-analysis estimates of effect size; (iii) the importance of the transitivity assumption to the validity of network meta-analysis; and (iv) the possibility of disagreement between direct evidence and indirect evidence. We apply our proposed strategy to a systematic review comparing topical antibiotics without steroids for chronically discharging ears with underlying eardrum perforations. The proposed framework can be used to determine confidence in the results from a network meta-analysis. Judgements about evidence from a network meta-analysis can be different from those made about evidence from pairwise meta-analyses.
doi:10.1371/journal.pone.0099682
PMCID: PMC4084629  PMID: 24992266
5.  The Quality of Reporting Methods and Results in Network Meta-Analyses: An Overview of Reviews and Suggestions for Improvement 
PLoS ONE  2014;9(3):e92508.
Introduction
Some have suggested the quality of reporting of network meta-analyses (a technique used to synthesize information to compare multiple interventions) is sub-optimal. We sought to review information addressing this claim.
Objective
To conduct an overview of existing evaluations of quality of reporting in network meta-analyses and indirect treatment comparisons, and to compile a list of topics which may require detailed reporting guidance to enhance future reporting quality.
Methods
An electronic search of Medline and the Cochrane Registry of methodologic studies (January 2004–August 2013) was performed by an information specialist. Studies describing findings from quality of reporting assessments were sought. Screening of abstracts and full texts was performed by two team members. Descriptors related to all aspects of reporting a network meta-analysis were summarized.
Results
We included eight reports exploring the quality of reporting of network meta-analyses. Past reviews found that several aspects of network meta-analyses were inadequately reported, including primary information about literature searching, study selection and risk of bias evaluations; statements of the underlying assumptions of network meta-analysis, as well as efforts to verify their validity; details of the statistical models used for analyses (for both Bayesian and frequentist approaches); completeness of reporting of findings; and approaches for summarizing probability measures.
Conclusions
While few studies were identified, several deficiencies in the current reporting of network meta-analyses were observed. These findings reinforce the need to develop reporting guidance for network meta-analyses. Findings from this review will be used to guide next steps in the development of reporting guidance for network meta-analysis in the format of an extension of the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analysis) Statement.
doi:10.1371/journal.pone.0092508
PMCID: PMC3966807  PMID: 24671099
6.  Characteristics of Networks of Interventions: A Description of a Database of 186 Published Networks 
PLoS ONE  2014;9(1):e86754.
Systematic reviews that employ network meta-analysis are undertaken and published with increasing frequency, while the related statistical methodology is still evolving. Future statistical developments and evaluations of existing methodologies could be motivated by the characteristics of the networks of interventions published so far, so that they tackle real rather than theoretical problems. Based on the recently formed network meta-analysis literature, we aim to provide an insight into the characteristics of networks in healthcare research. We searched PubMed until the end of 2012 for meta-analyses that used any form of indirect comparison. We collected data from networks that compared at least four treatments regarding their structural characteristics as well as characteristics of their analysis. We then conducted a descriptive analysis of the various network characteristics. We included 186 networks, of which 35 (19%) were star-shaped (treatments were compared to a common comparator but not between themselves). The median number of studies per network was 21 and the median number of treatments compared was 6. The majority (85%) of the non-star-shaped networks included at least one multi-arm study. Synthesis of data was primarily done via network meta-analysis fitted within a Bayesian framework (113 (61%) networks). We were unable to identify the exact method used to perform the indirect comparison in a sizeable number of networks (18 (9%)). In 32% of the networks the investigators employed appropriate statistical methods to evaluate the consistency assumption; this percentage is larger among recently published articles. Our descriptive analysis provides useful information about the characteristics of networks of interventions published over the last 16 years and the methods used for their analysis. Although the validity of network meta-analysis results depends highly on some basic assumptions, most authors did not report and evaluate them adequately. Reviewers and editors need to be aware of these assumptions and insist on their reporting and accuracy.
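The star-shaped classification used in this database can be checked mechanically: a network is star-shaped exactly when one common comparator appears in every direct comparison, which also rules out any comparison between the remaining treatments. A small sketch (the edge lists are hypothetical, not from the database):

```python
def is_star_shaped(edges):
    """Return True if one common comparator appears in every edge.

    edges: iterable of (treatment_a, treatment_b) pairs, one per
    direct comparison in the network of interventions.
    """
    edges = list(edges)
    if not edges:
        return False
    # intersect the endpoints of all edges; a non-empty result
    # means some treatment (the hub) takes part in every comparison
    candidates = set(edges[0])
    for a, b in edges[1:]:
        candidates &= {a, b}
    return bool(candidates)
```

For example, a network comparing A, B and C each only against placebo is star-shaped, whereas a closed A-B-C loop is not, because no single treatment appears in all three edges.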
doi:10.1371/journal.pone.0086754
PMCID: PMC3899297  PMID: 24466222
7.  Meta-analysis and The Cochrane Collaboration: 20 years of the Cochrane Statistical Methods Group 
Systematic Reviews  2013;2:80.
The Statistical Methods Group has played a pivotal role in The Cochrane Collaboration over the past 20 years. The Statistical Methods Group has determined the direction of statistical methods used within Cochrane reviews, developed guidance for these methods, provided training, and continued to discuss and consider new and controversial issues in meta-analysis. The contribution of Statistical Methods Group members to the meta-analysis literature has been extensive and has helped to shape the wider meta-analysis landscape.
In this paper, marking the 20th anniversary of The Cochrane Collaboration, we reflect on the history of the Statistical Methods Group, beginning in 1993 with the identification of aspects of statistical synthesis for which consensus was lacking about the best approach. We highlight some landmark methodological developments that Statistical Methods Group members have contributed to in the field of meta-analysis. We discuss how the Group implements and disseminates statistical methods within The Cochrane Collaboration. Finally, we consider the importance of robust statistical methodology for Cochrane systematic reviews, note research gaps, and reflect on the challenges that the Statistical Methods Group faces in its future direction.
doi:10.1186/2046-4053-2-80
PMCID: PMC4219183  PMID: 24280020
8.  Graphical Tools for Network Meta-Analysis in STATA 
PLoS ONE  2013;8(10):e76654.
Network meta-analysis synthesizes direct and indirect evidence in a network of trials that compare multiple interventions and has the potential to rank the competing treatments according to the studied outcome. Despite its usefulness, network meta-analysis is often criticized for its complexity and for being accessible only to researchers with strong statistical and computational skills. The evaluation of the underlying model assumptions, the statistical technicalities and the presentation of the results in a concise and understandable way are all challenging aspects of network meta-analysis methodology. In this paper we aim to make the methodology accessible to non-statisticians by presenting and explaining a series of graphical tools via worked examples. To this end, we provide a set of STATA routines that can be easily employed to present the evidence base, evaluate the assumptions, fit the network meta-analysis model and interpret its results.
doi:10.1371/journal.pone.0076654
PMCID: PMC3789683  PMID: 24098547
9.  Timing Matters in Hip Fracture Surgery: Patients Operated within 48 Hours Have Better Outcomes. A Meta-Analysis and Meta-Regression of over 190,000 Patients 
PLoS ONE  2012;7(10):e46175.
Background
To assess the relationship between surgical delay and mortality in elderly patients with hip fracture, we conducted a systematic review and meta-analysis of retrospective and prospective studies published from 1948 to 2011. We searched Medline (from 1948), Embase (from 1974), CINAHL (from 1982) and the Cochrane Library. Odds ratios (OR) and 95% confidence intervals for each study were extracted and pooled with a random-effects model. Heterogeneity, publication bias, Bayesian and meta-regression analyses were performed. Criteria for inclusion were retrospective and prospective studies of elderly populations with operated hip fractures that reported the timing of surgery and survival status.
Methodology/Principal Findings
There were 35 independent studies, with 191,873 participants and 34,448 deaths. The majority used a cut-off between 24 and 48 hours. Early hip surgery was associated with a lower risk of death (pooled odds ratio (OR) 0.74, 95% confidence interval (CI) 0.67 to 0.81; P<0.001) and of pressure sores (OR 0.48, 95% CI 0.38 to 0.60; P<0.001). Meta-analysis of the adjusted prospective studies gave similar results. The Bayesian probability predicted that about 20% of future studies might find that early surgery is not beneficial for decreasing mortality. None of the confounders (e.g. age, sex, data source, baseline risk, cut-off points, study location, quality and year) explained the differences between studies.
Conclusions/Significance
Surgical delay is associated with a significant increase in the risk of death and of pressure sores. Conservative timing strategies should be avoided. Orthopaedic surgery services should ensure that the majority of patients are operated on within one to two days.
doi:10.1371/journal.pone.0046175
PMCID: PMC3463569  PMID: 23056256
10.  Meta-Analysis of the Immunogenicity and Tolerability of Pandemic Influenza A 2009 (H1N1) Vaccines 
PLoS ONE  2011;6(9):e24384.
Background
Although the 2009 (H1N1) influenza pandemic officially ended in August 2010, the virus will probably circulate in future years. Several types of H1N1 vaccines have been tested including various dosages and adjuvants, and meta-analysis is needed to identify the best formulation.
Methods
We searched MEDLINE, EMBASE, and nine clinical trial registries to April 2011, in any language, for randomized clinical trials (RCTs) in healthy children, adolescents, adults and the elderly. The primary outcome was the seroconversion rate according to hemagglutination inhibition (HI); secondary outcomes were adverse events. For the primary outcome, we used head-to-head meta-analysis and multiple-treatments meta-analysis.
Results
Eighteen RCTs could be included in all primary analyses, for a total of 76 arms (16,725 subjects). After 2 doses, all 2009 H1N1 split/subunit inactivated vaccines were highly immunogenic and met CPMP seroconversion criteria. After 1 dose only, all split/subunit vaccines induced satisfactory immunogenicity (≥70%) in adults and adolescents, while only some formulations showed acceptable results for children and the elderly (non-adjuvanted at high doses and oil-in-water-adjuvanted vaccines). Vaccines with oil-in-water adjuvants were more immunogenic than both non-adjuvanted and aluminum-adjuvanted vaccines at equal doses, and their immunogenicity at doses ≤6 µg (even with as little as 1.875 µg of hemagglutinin antigen) was not significantly lower than that achieved after higher doses. Finally, the rate of serious vaccine-related adverse events was low for all 2009 H1N1 vaccines (3 cases, resolved in 10 days, out of 22,826 vaccinated subjects). However, mild to moderate adverse reactions were more frequent, and sometimes very frequent, with oil-in-water-adjuvanted vaccines.
Conclusions
Several one-dose formulations might be valid for future vaccines, but 2 doses may be needed for children, especially if a low-dose non-adjuvanted vaccine is used. Given that 15 RCTs were sponsored by vaccine manufacturers, future trials sponsored by non-industry agencies and comparing vaccines using different types of adjuvants are needed.
doi:10.1371/journal.pone.0024384
PMCID: PMC3167852  PMID: 21915319
11.  Discovery Properties of Genome-wide Association Signals From Cumulatively Combined Data Sets 
American Journal of Epidemiology  2009;170(10):1197-1206.
Genetic effects for common variants affecting complex disease risk are subtle. Single genome-wide association (GWA) studies are typically underpowered to detect these effects, and combination of several GWA data sets is needed to enhance discovery. The authors investigated the properties of the discovery process in simulated cumulative meta-analyses of GWA study-derived signals, allowing for potential genetic model misspecification and between-study heterogeneity. Variants with null effects on average (but with between-data set heterogeneity) could yield false-positive associations with seemingly homogeneous effects. Random-effects models had higher than appropriate false-positive rates when there were few data sets. The log-additive model had the lowest false-positive rate. Under heterogeneity, random-effects meta-analyses of 2–10 data sets averaging 1,000 cases/1,000 controls each did not increase power, or the meta-analysis was even less powerful than a single study (a "power desert"). Upward bias in effect estimates and underestimation of between-study heterogeneity were common. Fixed-effects calculations avoided power deserts and maximized discovery of association signals, at the expense of much higher false-positive rates. Therefore, random- and fixed-effects models are preferable for different purposes (fixed effects for initial screenings, random effects for generalizability applications). These results may have broader implications for the design and interpretation of large-scale multiteam collaborative studies discovering common gene variants.
doi:10.1093/aje/kwp262
PMCID: PMC2800267  PMID: 19808636
epidemiology; genetics; genome-wide association study; Human Genome Project; meta-analysis; models, genetic; polymorphism, single nucleotide
12.  Underlying Genetic Models of Inheritance in Established Type 2 Diabetes Associations 
American Journal of Epidemiology  2009;170(5):537-545.
For most associations of common single nucleotide polymorphisms (SNPs) with common diseases, the genetic model of inheritance is unknown. The authors extended and applied a Bayesian meta-analysis approach to data from 19 studies on 17 replicated associations with type 2 diabetes. For 13 SNPs, the data fitted very well to an additive model of inheritance for the diabetes risk allele; for 4 SNPs, the data were consistent with either an additive model or a dominant model; and for 2 SNPs, the data were consistent with an additive or recessive model. Results were robust to the use of different priors and after exclusion of data for which index SNPs had been examined indirectly through proxy markers. The Bayesian meta-analysis model yielded point estimates for the genetic effects that were very similar to those previously reported based on fixed- or random-effects models, but uncertainty about several of the effects was substantially larger. The authors also examined the extent of between-study heterogeneity in the genetic model and found generally small between-study deviation values for the genetic model parameter. Heterosis could not be excluded for 4 SNPs. Information on the genetic model of robustly replicated association signals derived from genome-wide association studies may be useful for predictive modeling and for designing biologic and functional experiments.
doi:10.1093/aje/kwp145
PMCID: PMC2732984  PMID: 19602701
Bayes theorem; diabetes mellitus, type 2; meta-analysis; models, genetic; polymorphism, genetic; population characteristics
13.  Efficacy and acceptability of selective serotonin reuptake inhibitors for the treatment of depression in Parkinson's disease: a systematic review and meta-analysis of randomized controlled trials 
BMC Neurology  2010;10:49.
Background
Selective serotonin reuptake inhibitors (SSRIs) are the most commonly prescribed antidepressants for the treatment of depression in patients with Parkinson's Disease (PD) but data on their efficacy are controversial.
Methods
We conducted a systematic review and meta-analysis of randomized controlled trials to investigate the efficacy and acceptability of SSRIs in the treatment of depression in PD.
Results
Ten studies were included. In the comparison between SSRIs and placebo (n = 6 studies), the combined risk ratio (random effects) was 1.08 (95% confidence interval 0.77 to 1.55, p = 0.67). In the comparison between SSRIs and tricyclic antidepressants (TCAs) (n = 3 studies), the combined risk ratio was 0.75 (95% CI 0.39 to 1.42, p = 0.37). An acceptability analysis showed that SSRIs were generally well tolerated.
Conclusions
These results suggest that there is insufficient evidence to reject the null hypothesis of no difference in efficacy between SSRIs and placebo in the treatment of depression in PD. Owing to the limited number of studies and the small sample sizes, a type II error (false negative) cannot be excluded. The comparison between SSRIs and TCAs is based on only three studies, and further trials with a more pragmatic design are needed.
doi:10.1186/1471-2377-10-49
PMCID: PMC2903535  PMID: 20565960
14.  Underlying genetic models of inheritance in established type 2 diabetes associations 
American journal of epidemiology  2009;170(5):537-545.
For most associations of common polymorphisms with common diseases, the genetic model of inheritance is unknown. We extended and applied a Bayesian meta-analysis approach to data from 19 studies on 17 replicated associations for type 2 diabetes. For 13 polymorphisms, the data fit very well to an additive model, for 4 polymorphisms the data were consistent with either an additive or dominant model, and for 2 polymorphisms with an additive or recessive model of inheritance for the diabetes risk allele. Results were robust to using different priors and after excluding data where index polymorphisms had been examined indirectly through proxy markers. The Bayesian meta-analysis model yielded point estimates for the genetic effects that are very similar to those previously reported based on fixed or random effects models, but uncertainty about several of the effects was substantially larger. We also examined the extent of between-study heterogeneity in the genetic model and found generally small values of the between-study deviation for the genetic model parameter. Heterosis could not be excluded in 4 SNPs. Information on the genetic model of robustly replicated GWA-derived association signals may be useful for predictive modeling, and for designing biological and functional experiments.
doi:10.1093/aje/kwp145
PMCID: PMC2732984  PMID: 19602701
15.  Family-Based versus Unrelated Case-Control Designs for Genetic Associations 
PLoS Genetics  2006;2(8):e123.
The simplest and most commonly used approach for genetic associations is the case-control study design of unrelated people. This design is susceptible to population stratification. This problem is obviated in family-based studies, but it is usually difficult to accumulate large enough samples of well-characterized families. We addressed empirically whether the two designs give similar estimates of association in 93 investigations where both unrelated case-control and family-based designs had been employed. Estimated odds ratios differed beyond chance between the two designs in only four instances (4%). The summary relative odds ratio (ROR) (the ratio of odds ratios obtained from unrelated case-control and family-based studies) was close to unity (0.96 [95% confidence interval, 0.91–1.01]). There was no heterogeneity in the ROR across studies (amount of heterogeneity beyond chance I2 = 0%). Differences in whether results were nominally statistically significant (p < 0.05) or not with the two designs were common (opposite classification rates 14% and 17%); this largely reflected differences in power. Conclusions were largely similar in diverse subgroup analyses. Unrelated case-control and family-based designs give overall similar estimates of association. We cannot rule out rare large biases or common small biases.
Synopsis
Different types of designs are used for the assessment of genetic associations for complex diseases. Case-control studies of unrelated people and family-based designs are the most widely used. Each has its advantages and disadvantages. This paper compares the estimates of the two types of design using a meta-analytic approach, i.e. a systematic selection of data and quantitative synthesis of results across many studies. The authors examined 93 associations where both unrelated case-control and family-based designs had been employed. Both designs gave overall similar estimates of association, and the conclusions were very similar in subgroup analyses that considered various design features that might in theory affect the degree of agreement between the two designs. No heterogeneity between studies was observed. Hence, there was no consistent pattern of over-estimation or under-estimation of the probed association with one or the other design. However, one cannot exclude the possibility that rare large differences or common small differences may occur between the two designs.
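The per-association comparison underlying this study can be expressed as a relative odds ratio computed on the log scale, combining the standard errors of the two designs' log odds ratios under the usual independence assumption. A sketch (the function and its inputs are illustrative, not the authors' code):

```python
import math

def relative_odds_ratio(or_cc, se_log_cc, or_fb, se_log_fb):
    """Relative odds ratio (ROR) of an unrelated case-control
    estimate to a family-based estimate of the same association.

    se_log_*: standard errors of the log odds ratios.  Returns the
    ROR and its 95% confidence interval, computed on the log scale.
    """
    log_ror = math.log(or_cc) - math.log(or_fb)
    se = math.sqrt(se_log_cc ** 2 + se_log_fb ** 2)
    lo = math.exp(log_ror - 1.96 * se)
    hi = math.exp(log_ror + 1.96 * se)
    return math.exp(log_ror), lo, hi
```

An ROR close to 1 with a CI covering 1, as in the summary result above, indicates no systematic difference between the two designs' estimates.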
doi:10.1371/journal.pgen.0020123
PMCID: PMC1534078  PMID: 16895437
16.  A non-parametric framework for estimating threshold limit values 
Background
To estimate a threshold limit value for a compound known to have harmful health effects, an 'elbow' threshold model is usually applied. We are interested in flexible non-parametric alternatives.
Methods
We describe how a step function model fitted by isotonic regression can be used to estimate threshold limit values. This method returns a set of candidate locations, and we discuss two algorithms to select the threshold among them: reduced isotonic regression and an algorithm considering the closed family of hypotheses. We assess the performance of these two alternative approaches under different scenarios in a simulation study. We illustrate the framework by analysing data from a study conducted by the German Research Foundation aiming to set a threshold limit value for exposure to total dust in the workplace, as a causal agent for the development of chronic bronchitis.
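The isotonic-regression step described here is classically fitted with the pool-adjacent-violators algorithm (PAVA), which produces the monotone step function whose jumps are the candidate threshold locations. This is a generic sketch of PAVA, not the authors' implementation:

```python
def pava(y, w=None):
    """Pool adjacent violators: least-squares non-decreasing fit.

    y: observed responses (e.g. risk) at increasing exposure levels;
    w: optional weights (e.g. group sizes).  Returns the fitted
    monotone step function as a list, one value per input level.
    """
    if w is None:
        w = [1.0] * len(y)
    # each block holds [weighted mean, total weight, block length]
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    fit = []
    for m, _, n in blocks:
        fit.extend([m] * n)
    return fit
```

For example, `pava([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean 2.5, giving the step function [1, 2.5, 2.5, 4]; the exposure levels at which this fitted function jumps form the candidate set from which a threshold is then selected.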
Results
In the paper we demonstrate the use and properties of the proposed methodology, along with results from an application. The method appears to detect the threshold with satisfactory success. However, its performance can be compromised by low power to reject the constant-risk assumption when the true dose-response relationship is weak.
Conclusion
The estimation of thresholds within an isotonic framework is conceptually simple and sufficiently powerful. Given that there is no gold-standard method for threshold value estimation, the proposed model provides a useful non-parametric alternative to the standard approaches and can corroborate or challenge their findings.
doi:10.1186/1471-2288-5-36
PMCID: PMC1298303  PMID: 16274473
