author:("Hozo, Iztok")
1.  Peptidoglycan Recognition Proteins Kill Bacteria by Inducing Oxidative, Thiol, and Metal Stress 
PLoS Pathogens  2014;10(7):e1004280.
Mammalian Peptidoglycan Recognition Proteins (PGRPs) are a family of evolutionarily conserved bactericidal innate immunity proteins, but the mechanism through which they kill bacteria is unclear. We previously proposed that PGRPs are bactericidal due to induction of reactive oxygen species (ROS), a mechanism of killing that was also postulated, and later refuted, for several bactericidal antibiotics. Here, using whole genome expression arrays, qRT-PCR, and biochemical tests, we show that in both Escherichia coli and Bacillus subtilis PGRPs induce a transcriptomic signature characteristic of oxidative stress, as well as correlated biochemical changes. However, induction of ROS was required, but not sufficient, for PGRP killing. PGRPs also induced depletion of intracellular thiols and increased cytosolic concentrations of zinc and copper, as evidenced by transcriptome changes and supported by direct measurements. Depletion of thiols and elevated concentrations of metals were also required, but by themselves not sufficient, for bacterial killing. Chemical treatment studies demonstrated that efficient bacterial killing can be recapitulated only by the simultaneous addition of agents leading to production of ROS, depletion of thiols, and elevation of intracellular metal concentrations. These results identify a novel mechanism of bacterial killing by innate immunity proteins, which depends on the synergistic effect of oxidative, thiol, and metal stress and differs from bacterial killing by antibiotics. These results offer potential targets for developing new antibacterial agents that would kill antibiotic-resistant bacteria.
Author Summary
Bacterial infections are still a major cause of morbidity and mortality because of increasing antibiotic resistance. New targets for developing new approaches to antibacterial therapy are needed, because discovering new antibiotics or improving current ones has become increasingly difficult. One such approach is developing new antibacterial agents based on the antibacterial mechanisms of bactericidal innate immunity proteins, such as human peptidoglycan recognition proteins (PGRPs). Thus, our aim was to determine how PGRPs kill bacteria. We previously proposed that PGRPs kill bacteria by inducing toxic oxygen by-products (“reactive oxygen species”, ROS) in bacteria. It was also previously proposed, but recently refuted, that bactericidal antibiotics kill bacteria by inducing ROS production in bacteria. These findings prompted us to evaluate in greater detail the mechanism of PGRP-induced bacterial killing, including the role of ROS. We show here that PGRPs kill bacteria through synergistic induction of ROS, depletion of thiols, and increased intracellular concentrations of metals, all of which are required, but individually not sufficient, for bacterial killing. Our results reveal a novel bactericidal mechanism of innate immunity proteins, which differs from killing by antibiotics and offers alternative targets for developing new antibacterial therapies for antibiotic-resistant bacteria.
doi:10.1371/journal.ppat.1004280
PMCID: PMC4102600  PMID: 25032698
2.  How do physicians decide to treat: an empirical evaluation of the threshold model 
Background
According to the threshold model, when faced with a decision under diagnostic uncertainty, physicians should administer treatment if the probability of disease is above a specified threshold and withhold treatment otherwise. The objectives of the present study are to a) evaluate whether physicians act according to a threshold model, and b) examine which of the existing threshold models [the expected utility theory (EUT) model, the regret-based threshold model, or the dual-processing theory] best explains physicians’ decision-making.
Methods
A survey employing realistic clinical treatment vignettes for patients with pulmonary embolism and acute myeloid leukemia was administered to forty-one practicing physicians across different medical specialties. Participants were randomly assigned to the order of presentation of the case vignettes and re-randomized to the order of “high” versus “low” threshold case. The main outcome measure was the proportion of physicians who would or would not prescribe treatment in relation to perceived changes in threshold probability.
Results
Fewer physicians chose to treat as the benefit/harms ratio decreased (i.e. the threshold increased), and more physicians administered treatment as the benefit/harms ratio increased (and the threshold decreased). When compared with the actual treatment recommendations, we found that the regret model was marginally superior to the EUT model [odds ratio (OR) = 1.49; 95% confidence interval (CI) 1.00 to 2.23; p = 0.056]. The dual-processing model was statistically significantly superior to both the EUT model [OR = 1.75, 95% CI 1.67 to 4.08; p < 0.001] and the regret model [OR = 2.61, 95% CI 1.11 to 2.77; p = 0.018].
Conclusions
We provide the first empirical evidence that physicians’ decision-making can be explained by the threshold model. Of the threshold models tested, the dual-processing theory of decision-making provides the best explanation for the observed empirical results.
doi:10.1186/1472-6947-14-47
PMCID: PMC4055375  PMID: 24903517
Medical decision-making; Threshold model; Dual-processing theory; Regret; Expected utility theory
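The threshold rule this record tests can be sketched numerically. This is a minimal illustration of the classic expected-utility treatment threshold (treat when the probability of disease exceeds harm/(benefit + harm)); the benefit and harm values below are hypothetical and not taken from the study.

```python
def treatment_threshold(benefit, harm):
    """Expected-utility treatment threshold: treat when the probability
    of disease exceeds harm / (benefit + harm). At this probability the
    expected utilities of treating and withholding are equal."""
    return harm / (benefit + harm)

def decide(p_disease, benefit, harm):
    """Return 'treat' or 'withhold' under the threshold model."""
    if p_disease > treatment_threshold(benefit, harm):
        return "treat"
    return "withhold"

# Hypothetical numbers: treatment benefit 10x its harm, so the
# threshold is low and a modest disease probability warrants treatment.
print(treatment_threshold(benefit=10, harm=1))    # ≈ 0.091
print(decide(p_disease=0.20, benefit=10, harm=1))
print(decide(p_disease=0.05, benefit=10, harm=1))
```

Raising the harm relative to the benefit raises the threshold, which mirrors the empirical result above: fewer physicians chose to treat as the benefit/harms ratio decreased.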
3.  Effect of Initial Conditions on Reproducibility of Scientific Research 
Acta Informatica Medica  2014;22(3):156-159.
Background:
It is estimated that about half of currently published research cannot be reproduced. Many reasons have been offered to explain failures to reproduce scientific research findings, from fraud to issues related to the design, conduct, analysis, or publishing of scientific research. We also postulate a sensitive dependency on initial conditions, by which small changes can result in large differences in research findings when reproduction is attempted at later times.
Methods:
We employed a simple logistic regression equation to model the effect of covariates on the initial study findings. We then fed the input from the logistic equation into a logistic map function to model stability of the results in repeated experiments over time. We illustrate the approach by modeling effects of different factors on the choice of correct treatment.
Results:
We found that reproducibility of the study findings depended both on the initial values of all independent variables and on the rate of change in the baseline conditions, the latter being more important. When the rate of change in the baseline conditions between experiments was between about 3.5 and about 4, no research findings could be reproduced. However, when the rate of change between experiments was ≤2.5, the results became highly predictable from one experiment to the next.
Conclusions:
Many results cannot be reproduced because of changes in the initial conditions between experiments. Better control of the baseline conditions between experiments may help improve the reproducibility of scientific findings.
doi:10.5455/aim.2014.22.156-159
PMCID: PMC4130690  PMID: 25132705
scientific research; initial conditions; reproducibility
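The dependence on the rate of change reported above can be reproduced with the textbook logistic map, x_{n+1} = r·x_n·(1−x_n), the function the authors fed their regression output into. The starting value below is arbitrary; the qualitative behavior (a stable fixed point for r ≤ 2.5, chaos for r between roughly 3.5 and 4) is the standard property of this map that the model relies on.

```python
def logistic_map(r, x0=0.4, n=500):
    """Iterate x_{n+1} = r * x_n * (1 - x_n); return the trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def tail_spread(r, tail=100):
    """Range (max - min) of the last `tail` iterates: near zero when the
    'repeated experiment' settles on one reproducible value, large when
    the dynamics are chaotic and results differ from run to run."""
    xs = logistic_map(r)[-tail:]
    return max(xs) - min(xs)

print(tail_spread(2.5))   # ~0: converges to the fixed point 1 - 1/r = 0.6
print(tail_spread(3.9))   # large: findings cannot be reproduced
```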
4.  Genetic Association of Peptidoglycan Recognition Protein Variants with Inflammatory Bowel Disease 
PLoS ONE  2013;8(6):e67393.
Inflammatory bowel disease (IBD), which includes Crohn's disease (CD) and ulcerative colitis (UC), is a common disease determined by altered gut bacterial populations and an aberrant host immune response. Peptidoglycan recognition proteins (PGLYRPs) are innate immunity bactericidal proteins expressed in the intestine. In mice, PGLYRPs modulate bacterial populations in the gut and sensitivity to experimentally induced UC. The role of PGLYRPs in humans with CD and/or UC has not been previously investigated. Here we tested the hypothesis that genetic variants in the PGLYRP1, PGLYRP2, PGLYRP3, and PGLYRP4 genes associate with CD and/or UC and with gender and/or age of onset of disease in the patient population. We sequenced all PGLYRP exons in 372 CD patients, 77 UC patients, 265 population controls, 210 familial CD controls, and 24 familial UC controls, identified all polymorphisms in these populations, and analyzed the variants for significant association with CD and UC. We identified 16 polymorphisms in the four PGLYRP genes that significantly associated with CD, UC, and/or subgroups of the patient populations. Of the 16, 5 significantly associated with both CD and UC, 6 with CD, and 5 with UC. Twelve of the significant variants result in amino acid substitutions, and structural modeling suggests that several of these missense variants may have structural and/or functional consequences for PGLYRP proteins. Our data demonstrate that genetic variants in PGLYRP genes associate with CD and UC and may provide novel insight into the mechanism of pathogenesis of IBD.
doi:10.1371/journal.pone.0067393
PMCID: PMC3686734  PMID: 23840689
5.  Treatment Success in Cancer: Industry Compared to Publicly Sponsored Randomized Controlled Trials 
PLoS ONE  2013;8(3):e58711.
Objective
To assess whether commercially sponsored trials are associated with higher success rates than publicly sponsored trials.
Study Design and Settings
We undertook a systematic review of all consecutive, published and unpublished phase III cancer randomized controlled trials (RCTs) conducted by GlaxoSmithKline (GSK) and the NCIC Clinical Trials Group (CTG). We included all phase III cancer RCTs assessing treatment superiority from 1980 to 2010. Three metrics were assessed to determine treatment successes: (1) the proportion of statistically significant trials favouring the experimental treatment, (2) the proportion of the trials in which new treatments were considered superior according to the investigators, and (3) quantitative synthesis of data for primary outcomes as defined in each trial.
Results
GSK conducted 40 cancer RCTs accruing 19,889 patients and CTG conducted 77 trials enrolling 33,260 patients. 42% (99% CI 24 to 60) of the results were statistically significant favouring experimental treatments in GSK compared to 25% (99% CI 13 to 37) in the CTG cohort (RR = 1.68; p = 0.04). Investigators concluded that new treatments were superior to standard treatments in 80% of GSK compared to 44% of CTG trials (RR = 1.81; p < 0.001). Meta-analysis of the primary outcome indicated larger effects in GSK trials (odds ratio = 0.61 [99% CI 0.47–0.78] compared to 0.86 [0.74–1.00]; p = 0.003). However, testing for the effect of treatment over time indicated that treatment success has become comparable in the last decade.
Conclusions
While industry sponsorship has overall been associated with higher success rates than public sponsorship, the difference seems to have disappeared over time.
doi:10.1371/journal.pone.0058711
PMCID: PMC3605423  PMID: 23555593
6.  Dual processing model of medical decision-making 
Background
Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and an analytical, deliberative processing system (system II). To date, no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease.
Methods
We developed a mathematical model that links a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making, and we show how this approach can be used to better understand decision-making at the bedside and to explain the widespread variation in treatments observed in clinical practice.
Results
We show that physicians’ beliefs about whether to treat at higher (or lower) probability levels than the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and by the ratio of benefits and harms as evaluated by both systems. Under some conditions, the system I decision maker’s threshold may drop dramatically below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable or where the cognitive processes of decision makers are biased through recent experience: the threshold will increase relative to the normative value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice.
Conclusions
We have developed the first dual processing model of medical decision-making, which has the potential to enrich the medical decision-making field, still to a large extent dominated by expected utility theory. The model also provides a platform for reconciling two competing groups of dual processing theories (parallel competitive vs. default-interventionist theories).
doi:10.1186/1472-6947-12-94
PMCID: PMC3471048  PMID: 22943520
7.  When is it rational to participate in a clinical trial? A game theory approach incorporating trust, regret and guilt 
Background
Randomized controlled trials (RCTs) remain an indispensable form of human experimentation as a vehicle for discovery of new treatments. However, since their inception RCTs have raised ethical concerns. The ethical tension has revolved around “duties to individuals” vs. “societal value” of RCTs. By asking current patients “to sacrifice for the benefit of future patients” we risk subjugating our duties to patients’ best interest to the utilitarian goal for the good of others. This tension creates a key dilemma: when is it rational, from the perspective of the trial patients and researchers (as societal representatives of future patients), to enroll in RCTs?
Methods
We employed the trust version of the prisoner’s dilemma since interaction between the patient and researcher in the setting of a clinical trial is inherently based on trust. We also took into account that the patient may have regretted his/her decision to participate in the trial, while a researcher may feel guilty because he/she abused the patient’s trust.
Results
We found that under typical circumstances of clinical research, most patients can be expected not to trust researchers, and most researchers can be expected to abuse the patients’ trust. The most significant factor determining trust was the expected success of the experimental and standard treatments. The more a researcher believes the experimental treatment will be successful, the more incentive the researcher has to abuse trust. The analysis was sensitive to the assumptions about the utilities related to success and failure of the therapies tested in RCTs. By varying all variables in a Monte Carlo analysis, we found that, on average, the researcher can be expected to honor a patient’s trust 41% of the time, while the patient is inclined to trust the researcher 69% of the time. Under the assumptions of our model, enrollment into RCTs represents a rational strategy that can meet both patients’ and researchers’ interests simultaneously 19% of the time.
Conclusions
There is an inherent ethical dilemma in the conduct of RCTs. The factors that hamper full co-operation between patients and researchers in the conduct of RCTs can be best addressed by: a) having more reliable estimates on the probabilities that new vs. established treatments will be successful, b) improving transparency in the clinical trial system to ensure fulfillment of “the social contract” between patients and researchers.
doi:10.1186/1471-2288-12-85
PMCID: PMC3473303  PMID: 22726276
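A minimal sketch of the underlying trust-game logic, with hypothetical payoffs rather than the paper's utilities (which also incorporate regret and guilt terms): backward induction shows how a researcher's incentive to abuse trust can make it rational for the patient not to enroll.

```python
# Hypothetical trust-game payoffs (patient, researcher); NOT the paper's values.
# The patient moves first (trust / not trust); if trusted, the researcher
# chooses whether to honor or abuse that trust.
PAYOFFS = {
    "not_trust":        (0.5, 0.5),   # status quo: no enrollment
    ("trust", "honor"): (1.0, 1.0),   # co-operation benefits both sides
    ("trust", "abuse"): (0.0, 1.5),   # researcher gains, patient loses
}

def backward_induction(payoffs):
    """Return the subgame-perfect outcome of the two-stage trust game."""
    # Researcher's best reply if trusted:
    honor_u = payoffs[("trust", "honor")][1]
    abuse_u = payoffs[("trust", "abuse")][1]
    researcher_move = "honor" if honor_u >= abuse_u else "abuse"
    # Patient anticipates that reply and compares against not enrolling:
    trust_u = payoffs[("trust", researcher_move)][0]
    patient_move = "trust" if trust_u > payoffs["not_trust"][0] else "not_trust"
    return patient_move, researcher_move

print(backward_induction(PAYOFFS))  # patient declines, since abuse pays more
```

Adding a guilt penalty that pushes the abuse payoff below the honor payoff flips the equilibrium to (trust, honor), which is the kind of mechanism the authors' model explores.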
8.  Optimism bias leads to inconclusive results - an empirical study 
Journal of clinical epidemiology  2010;64(6):583-593.
Objective
Optimism bias refers to unwarranted belief in the efficacy of new therapies. We assessed the impact of optimism bias on the proportion of trials that did not successfully answer their research question, and explored whether poor accrual or optimism bias is responsible for inconclusive results.
Study Design
Systematic review
Setting
Retrospective analysis of a consecutive series of phase III randomized controlled trials (RCTs) performed under the aegis of the National Cancer Institute cooperative groups.
Results
359 trials (374 comparisons) enrolling 150,232 patients were analyzed. 70% (262/374) of the trials generated conclusive results according to the statistical criteria. Investigators made definitive statements related to the treatment preference in 73% (273/374) of studies. Investigators’ judgments and statistical inferences were concordant in 75% (279/374) of trials. Investigators consistently overestimated their expected treatment effects, but to a significantly larger extent for inconclusive trials. The median ratio of expected over observed hazard ratio or odds ratio was 1.34 (range 0.19 – 15.40) in conclusive trials compared to 1.86 (range 1.09 – 12.00) in inconclusive studies (p<0.0001). Only 17% of the trials had treatment effects that matched original researchers’ expectations.
Conclusion
Formal statistical inference is sufficient to answer the research question in 75% of RCTs. The answers to the other 25% depend mostly on subjective judgments, which at times are in conflict with statistical inference. Optimism bias significantly contributes to inconclusive results.
doi:10.1016/j.jclinepi.2010.09.007
PMCID: PMC3079810  PMID: 21163620
optimism-bias; inconclusive trials; randomized controlled trials; bias; study design; systematic review
9.  Extensions to Regret-based Decision Curve Analysis: An application to hospice referral for terminal patients 
Background
Despite the well-documented advantages of hospice care, most terminally ill patients do not reap the maximum benefit from hospice services, with the majority receiving hospice care either too early or too late. Decision systems to improve the hospice referral process are sorely needed.
Methods
We present a novel theoretical framework, based on well-established methodologies of prognostication and decision analysis, to assist with the hospice referral process for terminally ill patients. We linked the SUPPORT statistical model, widely regarded as one of the most accurate models for prognostication of terminally ill patients, with the recently developed regret-based decision curve analysis (regret DCA). We extend the regret DCA methodology to consider harms associated with the prognostication test as well as the harms and effects of the management strategies. To enable patients and physicians to make these complex decisions in real time, we developed an easily accessible web-based decision support system available at the point of care.
Results
The web-based decision support system facilitates the hospice referral process in three steps. First, the patient or surrogate is interviewed to elicit his/her personal preferences regarding the continuation of life-sustaining treatment vs. palliative care. Then, regret DCA is employed to identify the best strategy for the particular patient in terms of the threshold probability at which he/she is indifferent between continuation of treatment and hospice referral. Finally, if necessary, the probabilities of survival and death for the particular patient are computed based on the SUPPORT prognostication model and contrasted with the patient's threshold probability. The web-based design enables patients, physicians, and family members to participate in the decision process from anywhere internet access is available.
Conclusions
We present a theoretical framework to facilitate the hospice referral process. Further rigorous clinical evaluation including testing in a prospective randomized controlled trial is required and planned.
doi:10.1186/1472-6947-11-77
PMCID: PMC3305393  PMID: 22196308
10.  Instrumental variable meta-analysis of individual patient data: application to adjust for treatment non-compliance 
Background
Intention-to-treat (ITT) is the standard data analysis method and includes all patients regardless of whether they received treatment. Although the aim of ITT analysis is to prevent bias due to prognostic dissimilarity, it is also a counter-intuitive type of analysis, as it counts patients who did not receive treatment, and it may lead to "bias toward the null." The as-treated (AT) method analyzes patients according to the treatment actually received rather than intended, but is affected by selection bias. Both ITT and AT analyses can produce biased estimates of the treatment effect, so instrumental variable (IV) analysis has been proposed as a technique to control for bias when using AT data. Our objective was to correct for bias in non-experimental data from a previously published individual patient data meta-analysis by applying IV methods.
Methods
Center prescribing preference was used as an IV to assess the effects of methotrexate (MTX) in preventing debilitating complications of chronic graft-versus-host-disease (cGVHD) in patients who received peripheral blood stem cell (PBSCT) or bone marrow transplant (BMT) in nine randomized controlled trials (1107 patients). IV methods are applied using 2-stage logistic, 2-stage probit and generalized method of moments models.
Results
ITT analysis showed a statistically significant detrimental effect with the use of day 11 MTX, resulting in a cGVHD odds ratio (OR) of 1.34 (95% CI 1.02-1.76). AT results showed no difference in the odds of cGVHD with the use of MTX [OR 1.31 (95% CI 0.99-1.73)]. IV analysis further corrected the results toward no difference in the odds of cGVHD between PBSCT vs. BMT, allowing for the possibility of a beneficial effect of MTX in preventing cGVHD in PBSCT recipients (OR 1.14; 95% CI 0.83-1.56).
Conclusion
All instrumental variable models produce similar results. IV estimates correct for bias and do not exclude the possibility that MTX may be beneficial, contradicting the ITT analysis.
doi:10.1186/1471-2288-11-55
PMCID: PMC3117817  PMID: 21510899
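The correction that IV analysis performs can be illustrated on simulated data. The paper applies two-stage logistic, two-stage probit, and GMM models to real patient data; the sketch below is only a linear Wald-estimator analogue with a made-up confounder, where a binary instrument (playing the role of center prescribing preference) shifts treatment but affects the outcome only through treatment.

```python
import random

random.seed(1)
N = 50_000
TRUE_EFFECT = 1.0

# Simulated data: a hidden confounder u drives both treatment and outcome,
# while the binary instrument z has no direct path to the outcome.
z = [random.random() < 0.5 for _ in range(N)]
u = [random.gauss(0, 1) for _ in range(N)]
t = [1 if (0.6 * zi + 0.8 * ui + random.gauss(0, 1)) > 0.5 else 0
     for zi, ui in zip(z, u)]
y = [TRUE_EFFECT * ti + 1.5 * ui + random.gauss(0, 1)
     for ti, ui in zip(t, u)]

def mean(xs):
    return sum(xs) / len(xs)

# Naive "as treated" contrast is inflated by the confounder:
naive = (mean([yi for yi, ti in zip(y, t) if ti]) -
         mean([yi for yi, ti in zip(y, t) if not ti]))

# Wald IV estimator: (effect of z on y) / (effect of z on t).
iv = ((mean([yi for yi, zi in zip(y, z) if zi]) -
       mean([yi for yi, zi in zip(y, z) if not zi])) /
      (mean([ti for ti, zi in zip(t, z) if zi]) -
       mean([ti for ti, zi in zip(t, z) if not zi])))

print(f"naive (as treated): {naive:.2f}")   # biased well above 1.0
print(f"IV (Wald):          {iv:.2f}")      # close to the true effect of 1.0
```

The instrument is valid only because it influences the outcome solely through treatment, which is exactly the assumption made when using center prescribing preference as an instrument.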
11.  A Social Network Analysis of Treatment Discoveries in Cancer 
PLoS ONE  2011;6(3):e18060.
Controlled clinical trials are widely considered to be the vehicle to treatment discovery in cancer that leads to significant improvements in health outcomes, including an increase in life expectancy. We have previously shown that the pattern of therapeutic discovery in randomized controlled trials (RCTs) can be described by a power law distribution. However, the mechanism generating this pattern is unknown. Here, we propose an explanation in terms of the social relations between researchers in RCTs. We use social network analysis to study the impact of interactions between RCTs on treatment success. Our dataset consists of 280 phase III RCTs conducted by the NCI from 1955 to 2006. The RCT networks are formed through trial interactions i) formed at random, ii) based on common characteristics, or iii) based on treatment success. We analyze treatment success in terms of the survival hazard ratio as a function of the network structures. Our results show that the discovery process displays a power law if there are preferential interactions between trials, which may stem from researchers' tendency to interact selectively with established and successful peers. Furthermore, the RCT networks are “small worlds”: trials are connected through a small number of ties, yet there is much clustering among subsets of trials. We also find that treatment success (improved survival) is proportional to the network centrality measures of closeness and betweenness. A negative correlation exists between survival and the extent to which trials operate within a limited scope of information. Finally, the trials testing curative treatments in solid tumors showed the highest centrality, and the most influential group was the ECOG. We conclude that the chances of discovering life-saving treatments are directly related to the richness of social interactions between researchers inherent in a preferential interaction model.
doi:10.1371/journal.pone.0018060
PMCID: PMC3065482  PMID: 21464896
12.  A regret theory approach to decision curve analysis: A novel method for eliciting decision makers' preferences and decision-making 
Background
Decision curve analysis (DCA) has been proposed as an alternative method for the evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and by an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA.
Methods
First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or do not treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measured the difference in net expected regret. Finally, we employed the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision maker.
Results
We developed a novel dual visual analog scale to describe the relationship between the regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessarily) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefit as described in the original DCA. Based on the concept of acceptable regret, we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed them in terms of the probability of disease.
Conclusions
We present a novel method for eliciting decision makers' preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more appealing to decision makers, particularly in clinical situations where the best management option is the one associated with the least amount of regret (e.g. diagnosis and treatment of advanced cancer).
doi:10.1186/1472-6947-10-51
PMCID: PMC2954854  PMID: 20846413
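The expected-regret bookkeeping at the heart of regret DCA can be sketched for the two simplest strategies. The utilities below are hypothetical, and the paper's full formulation also covers the model-based strategy and acceptable regret; this shows only how equalizing the regrets of "treat all" and "treat none" recovers the threshold probability p* = H/(H + B).

```python
def expected_regret_treat_all(p, harm_overtreatment):
    """Regret accrues when we treat a patient who has no disease."""
    return (1 - p) * harm_overtreatment

def expected_regret_treat_none(p, missed_benefit):
    """Regret accrues when we withhold treatment from a diseased patient."""
    return p * missed_benefit

def threshold_probability(harm_overtreatment, missed_benefit):
    """Disease probability at which the two regrets are equal:
    p* = H / (H + B). Below p*, 'treat none' carries less regret;
    above it, 'treat all' does."""
    return harm_overtreatment / (harm_overtreatment + missed_benefit)

# Hypothetical utilities: a missed needed treatment is regretted 4x more
# than an unnecessary one, giving a threshold of 0.2.
H, B = 1.0, 4.0
p_star = threshold_probability(H, B)
print(p_star)  # 0.2
for p in (0.1, 0.2, 0.3):
    print(p,
          round(expected_regret_treat_all(p, H), 3),
          round(expected_regret_treat_none(p, B), 3))
```

Note that p* has the same harm/(harm + benefit) form as the expected-utility threshold, which is the equivalence the paper proves via the Net Expected Regret Difference.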
13.  Treatment Success in Cancer 
Archives of internal medicine  2008;168(6):632-642.
Background
The evaluation of research output, such as estimation of the proportion of treatment successes, is of ethical, scientific, and public importance but has rarely been evaluated systematically. We assessed how often experimental cancer treatments that undergo testing in randomized clinical trials (RCTs) result in discovery of successful new interventions.
Methods
We extracted data from all completed (published and unpublished) phase 3 RCTs conducted by the National Cancer Institute cooperative groups since their inception in 1955. Therapeutic successes were determined by (1) assessing the proportion of statistically significant trials favoring new or standard treatments, (2) determining the proportion of the trials in which new treatments were considered superior to standard treatments according to the original researchers, and (3) quantitatively synthesizing data for main clinical outcomes (overall and event-free survival).
Results
Data from 624 trials (781 randomized comparisons) involving 216 451 patients were analyzed. In all, 30% of trials had statistically significant results, of which new interventions were superior to established treatments in 80% of trials. The original researchers judged that the risk-benefit profile favored new treatments in 41% of comparisons (316 of 766). Hazard ratios for overall and event-free survival, available for 614 comparisons, were 0.95 (99% confidence interval [CI], 0.93-0.98) and 0.90 (99% CI, 0.87-0.93), respectively, slightly favoring new treatments. Breakthrough interventions were discovered in 15% of trials.
Conclusions
Approximately 25% to 50% of new cancer treatments that reach the stage of assessment in RCTs will prove successful. The pattern of successes has become more stable over time. The results are consistent with the hypothesis that the ethical principle of equipoise defines limits of discoverability in clinical research and ultimately drives therapeutic advances in clinical medicine.
doi:10.1001/archinte.168.6.632
PMCID: PMC2773511  PMID: 18362256
14.  Evaluation of New Treatments in Radiation Oncology 
Context
The superiority of innovative over standard treatments is not known. To describe accurately the outcomes of innovations tested in randomized controlled trials (RCTs), three factors have to be considered: the publication rate, the quality of trials, and the choice of an adequate comparator intervention.
Objective
To determine the success rate of innovative treatments by assessing preferences between experimental and standard treatments according to the original investigators' conclusions, determining the proportion of RCTs that achieved statistical significance on their primary outcomes, and performing meta-analysis to examine whether the summary point estimate favored innovative vs standard treatments.
Data Sources
Randomized controlled trials conducted by the Radiation Therapy Oncology Group (RTOG).
Study Selection
All completed phase 3 trials conducted by the RTOG from its creation in 1968 until 2002. For multiple publications of the same study, we used the one with the most complete primary outcomes and the longest follow-up information.
Data Extraction
We used the US National Cancer Institute definition of completed studies to determine the publication rate. We extracted data related to publication status, methodological quality, and treatment comparisons. One investigator extracted the data from all studies, and two independent investigators extracted a random sample of about 50% of the data. Disagreements were resolved by consensus during a meeting.
Data Synthesis
Data on 12 734 patients from 57 trials were evaluated. The publication rate was 95%. The quality of trials was high. We found no evidence of inappropriateness of the choice of comparator. Although the investigators judged that standard treatments were preferred in 71% of the comparisons, when data were meta-analyzed innovations were as likely as standard treatments to be successful (odds ratio for survival, 1.01; 99% confidence interval, 0.96-1.07; P=.5). In contrast, treatment-related mortality was worse with innovations (odds ratio, 1.76; 99% confidence interval, 1.01-3.07; P=.008). We found no predictable pattern of treatment successes in oncology: sometimes innovative treatments are better than the standard ones and vice versa; in most cases there were no substantive differences between experimental and conventional treatments.
Conclusion
The finding that the results in individual trials cannot be predicted in advance indicates that the system and rationale for RCTs is well preserved and that successful interventions can only be identified after an RCT is completed.
doi:10.1001/jama.293.8.970
PMCID: PMC1779758  PMID: 15728168
15.  When Should Potentially False Research Findings Be Considered Acceptable? 
PLoS Medicine  2007;4(2):e26.
Summary
Ioannidis estimated that most published research findings are false [1], but he did not indicate when, if at all, potentially false research results may be considered acceptable to society. We combined our two previously published models [2,3] to calculate the probability above which research findings may become acceptable. The new model indicates that the probability above which research results should be accepted depends on the expected payback from the research (the benefits) and the inadvertent consequences (the harms). This probability may change dramatically depending on our willingness to tolerate error in accepting false research findings. Our acceptance of research findings changes as a function of what we call “acceptable regret,” i.e., our tolerance of making a wrong decision in accepting the research hypothesis. We illustrate our findings by providing a new framework for early stopping rules in clinical research (i.e., when should we accept early findings from a clinical trial indicating the benefits as true?). Obtaining absolute “truth” in research is impossible, and so society has to decide when less-than-perfect results may become acceptable.
The authors calculate the probability above which potentially false research findings may become acceptable to society.
doi:10.1371/journal.pmed.0040026
PMCID: PMC1808081  PMID: 17326703
16.  Are experimental treatments for cancer in children superior to established treatments? Observational study of randomised controlled trials by the Children's Oncology Group 
BMJ : British Medical Journal  2005;331(7528):1295.
Objectives To assess how often new treatments for childhood cancer assessed in phase III randomised trials are superior or inferior to standard treatments and whether the pattern of successes and failures in new treatments is consistent with uncertainty being the ethical basis for enrolling patients in such trials.
Design Observational study.
Setting Phase III randomised controlled trials carried out under the aegis of the Children's Oncology Group between 1955 and 1997, regardless of whether they were published.
Main outcome measures Overall survival, event free survival, and treatment related mortality.
Results 126 trials were included, involving 152 comparisons and 36 567 patients. The odds ratio for overall survival with experimental treatments was 0.96 (99% confidence interval 0.89 to 1.03), indicating that new treatments are as likely to be inferior as they are to be superior to standard treatments. This result was not affected by publication bias, methodological quality, treatment type, disease, or comparator.
Conclusions New treatments in childhood cancer tested in randomised controlled trials are, on average, as likely to be inferior as they are to be superior to standard treatments, confirming that the uncertainty principle has been operating.
doi:10.1136/bmj.38628.561123.7C
PMCID: PMC1298846  PMID: 16299015
17.  Use of re-randomized data in meta-analysis 
Background
Outcomes collected in randomized clinical trials are observations of random variables that should be independent and identically distributed. However, in some trials patients are randomized more than once, violating both of these assumptions. The probability of an event is not always the same when a patient is re-randomized, and observations on the same patient probably have a non-zero covariance. This is of particular importance to meta-analysts.
Methods
We developed a method to estimate the relative error in the risk differences with and without re-randomization of patients. The relative error can be estimated by an expression depending on the percentage of patients who were re-randomized, multipliers for the probability of recurrence (how many times more likely a patient is to repeat an event), and the ratio of the total number of events reported to the initial number of patients entering the trial.
Results
We illustrate our method using two randomized trials testing growth factors in febrile neutropenia. We show that under some circumstances the relative error introduced by including re-randomized patients is sufficiently small to allow the results to be used in a meta-analysis. Our findings indicate that if the study in question is of similar size to the other studies included in the meta-analysis, the error introduced by re-randomization will only minimally affect the meta-analytic summary point estimate.
We also show that in our model the risk ratio remains constant under re-randomization; therefore, a meta-analyst concerned about the effect of re-randomization can sidestep the issue and still obtain reliable results by using the risk ratio as the measure of interest.
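The invariance of the risk ratio can be shown with a toy calculation. This is a hedged illustration under a simplifying assumption, not the authors' full model: suppose re-randomization scales the event probability in both arms by the same multiplier k. Then the risk ratio is unchanged while the risk difference is not.

```python
def risk_measures(p_treat: float, p_ctrl: float):
    """Return (risk ratio, risk difference) for two event probabilities."""
    return p_treat / p_ctrl, p_treat - p_ctrl

# Hypothetical baseline event risks and a re-randomization multiplier.
p_t, p_c, k = 0.10, 0.20, 1.5

rr0, rd0 = risk_measures(p_t, p_c)          # before re-randomization
rr1, rd1 = risk_measures(k * p_t, k * p_c)  # after: both risks scaled by k

print(rr0, rr1)  # risk ratio is identical in both cases
print(rd0, rd1)  # risk difference is scaled by k, so it changes
```

Under this assumption the ratio p_t·k / (p_c·k) cancels k exactly, which is why the risk ratio is the safer effect measure when re-randomized data enter a meta-analysis.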
Conclusion
Our method should help in interpreting the results of clinical trials, and should be particularly helpful to meta-analysts assessing whether re-randomized patient data can be used in their analyses.
doi:10.1186/1471-2288-5-17
PMCID: PMC1145185  PMID: 15882470
18.  Estimating the mean and variance from the median, range, and the size of a sample 
Background
Researchers performing a meta-analysis of continuous outcomes from clinical trials usually need the mean value and the variance (or standard deviation) of each outcome in order to pool data. However, published reports of clinical trials sometimes provide only the median, the range, and the size of the trial.
Methods
In this article we use simple, elementary inequalities and approximations to estimate the mean and the variance for such trials. Our estimation is distribution-free, i.e., it makes no assumption about the distribution of the underlying data.
Results
We found two simple formulas that estimate the mean using the values of the median (m), the low and high ends of the range (a and b, respectively), and the sample size (n). Using simulations, we show that the median itself can be used to estimate the mean when the sample size is larger than 25; for smaller samples, our new formula, devised in this paper, should be used. We also estimated the variance of an unknown sample using the median, the low and high ends of the range, and the sample size. Our estimator performs best in our simulations for very small samples (n ≤ 15). For moderately sized samples (15 < n ≤ 70), the formula range/4 gives the best estimate, and for larger samples (n > 70) the formula range/6 gives the best estimator, of the standard deviation (variance).
We also include an illustrative example of the potential value of our method using reports from the Cochrane review on the role of erythropoietin in anemia due to malignancy.
Conclusion
With these formulas, we hope to help meta-analysts include clinical trials in their analyses even when not all of the required information is available or reported.
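The estimation rules described above can be sketched in a few lines. This is a hedged sketch using the commonly cited forms of such distribution-free estimators; treat the exact constants and formulas as assumptions rather than a definitive restatement of the paper.

```python
import math

def estimate_mean(a: float, m: float, b: float, n: int) -> float:
    """Estimate the mean from the minimum a, median m, maximum b, and n."""
    if n > 25:
        return m                   # for larger samples the median suffices
    return (a + 2 * m + b) / 4     # small-sample weighted estimate

def estimate_sd(a: float, m: float, b: float, n: int) -> float:
    """Estimate the standard deviation from the same summary statistics."""
    if n <= 15:
        # very small samples: a weighted combination of median and range
        var = ((a - 2 * m + b) ** 2 / 4 + (b - a) ** 2) / 12
        return math.sqrt(var)
    if n <= 70:
        return (b - a) / 4         # moderate samples: range/4
    return (b - a) / 6             # large samples: range/6

# Hypothetical trial summary: min 1, median 5, max 13.
print(estimate_mean(a=1, m=5, b=13, n=10))   # (1 + 10 + 13) / 4 = 6.0
print(estimate_sd(a=1, m=5, b=13, n=100))    # (13 - 1) / 6 = 2.0
```

Under these assumptions a meta-analyst could recover an approximate mean and standard deviation for a trial that reports only the median, range, and sample size, and then pool it alongside fully reported trials.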
doi:10.1186/1471-2288-5-13
PMCID: PMC1097734  PMID: 15840177
