The results of this long-term, extensive study show that the available published clinical trial results do not match the originally generated outcomes: negative results are published significantly less frequently, and later, than positive results, compromising evidence-based medical decisions. Interestingly, no differences were found in the impact factor of publications of clinical trials with positive or negative results, which suggests that, overall, the statistical significance of results is not a major reason for rejection of clinical trial articles by journal editors.
Clinical trial publication is not a clearly dichotomous process. Results may appear in a number of different forms of presentation, which include, but are not restricted to, medical journal articles. In order to assess publication bias, we considered different sources of presentation of results, including final clinical study reports addressed to the EC, synopses of results made available online by the sponsors, meeting abstracts, and, especially, journal articles. However, a study was classified as published only when the global results of the trial appeared as an article in a medical journal, since we considered that only original articles contain public information sufficiently detailed to allow decision making. It should be noted that journal articles, in contrast to data submitted online by sponsoring companies or available as meeting abstracts, have undergone a prior review process that guarantees the quality and completeness of the information provided. A documented phenomenon not foreseen in the original design of this study was that comparison between publications and original protocols revealed, in some cases, differences in the main outcomes presented in the journal article. Quantifying and analyzing this phenomenon was beyond the scope of this study, and in those cases a decision was made to prioritize the main outcome as presented in the article in order to classify study results. Thus, in some cases, a trial classified as positive might have been negative if the protocol's main outcome had been taken into consideration.
Out of the 785 completed clinical trials, the results of 244 (31%) could not be identified by any of the means of results presentation investigated in this study. Fifty-eight of these 244 were prematurely cancelled, but the other 186 were completed according to protocol, which means that a substantial amount of information is missing regarding clinical trials that were performed and completed. A possibility of mismatching, and even misidentification, of publications exists, although we believe we have partially overcome this problem by using multiple criteria to match publications with protocols. Another potential weakness of this study comes from the censoring date used. The follow-up time since study submission was chosen to allow the maximum number of studies to be completed and published. Since we are aware that these data might change over time, a survival study design was applied to analyze them, in order to add validity to our results. It should also be noted that investigator, and more importantly sponsor, reasons for not disseminating study results, other than the direction of the results, were not investigated in this study.
Previous studies documenting publication bias have been very diverse in terms of research questions, design, and study characteristics, and thus have limitations in terms of validity. The present study used a single large cohort of clinical trials followed since their inception, and quantified publication bias by combining results published in medical journals with results obtained through other routes, including a non-public cohort, less susceptible to biases, constituted by the final clinical study reports submitted to the EC.
Our study provides clear evidence of the existence of publication bias favoring studies with positive results over those with negative results. Previous studies, performed in different areas of clinical research, including basic experimental studies, observational studies, and clinical trials, have shown a positive association between publication rates and favorable outcomes. Publication bias may be quantified not only through publication rates, but also through the speed with which results are made available. When time to publication depends on the nature of the results, this phenomenon is termed time lag bias. In our study, the mean survival time to publication was over one year shorter for positive studies than for negative studies. Previous studies, using smaller cohorts of clinical trials, have confirmed an association between study results and time to publication. However, the findings of these studies cannot be compared with our results, owing to the different criteria used to measure time to publication. Given the high variability in study duration, we considered it more appropriate to measure time to publication as the interval between the date of study closure (end of follow-up) and the date of publication.
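The time-lag comparison above rests on standard survival (Kaplan-Meier) analysis, with journal publication as the event and unpublished trials censored at the follow-up cutoff. As an illustrative sketch only, using hypothetical durations rather than this study's data, the product-limit estimate of the probability that a trial remains unpublished beyond a given time can be written as:

```python
from collections import Counter

def kaplan_meier(durations, published):
    """Product-limit (Kaplan-Meier) estimate of the probability of a
    trial remaining unpublished beyond each event time.

    durations -- months from study closure to publication (or to the
                 censoring date for trials never published)
    published -- 1 if the trial was published at that time, 0 if censored
    """
    events = Counter(t for t, e in zip(durations, published) if e)
    at_risk = len(durations)
    curve, s = [], 1.0
    for t in sorted(set(durations)):
        d = events.get(t, 0)
        if d:  # publications ("events") occurred at time t
            s *= 1 - d / at_risk
            curve.append((t, s))
        # all trials with duration t (published or censored) leave the risk set
        at_risk -= sum(1 for u in durations if u == t)
    return curve

# Hypothetical example: 5 trials, two never published (event flag 0 = censored)
curve = kaplan_meier([12, 24, 24, 36, 48], [1, 1, 0, 1, 0])
print([(t, round(s, 3)) for t, s in curve])  # [(12, 0.8), (24, 0.6), (36, 0.3)]
```

Comparing such curves for positive and negative trials (for example, with a log-rank test) is the kind of analysis that underpins the time-lag-bias finding reported above.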
Studies with a descriptive hypothesis showed lower publication rates and longer times to publication, which indicates that a lower priority is given to this type of study compared with those testing a comparative hypothesis. Surprisingly, however, the extent of publication and time lag bias was greater when analyzing only phase 3/4 trials and those whose conditions of use were approved after the trial, even though this is the population of trials with the greatest influence on clinical practice. These results confirm the findings of a previous study evaluating the rate of publication of 909 trials supporting new drugs approved by the FDA, which reported that the rate of publication was associated with the significance of outcomes. Undoubtedly, the application of the FDA Amendments Act of 2007, which mandates basic public results reporting for all trials supporting FDA-approved drugs, will affect this scenario in the future. As other authors have pointed out, this information will be publicly available, but it is unknown whether, or how and when, negative trial data will translate into journal article publications.
The impact factor is an index based on the frequency with which a journal's articles are cited in scientific publications, and is considered a marker of journal quality. There is a possibility of bias during the review process prior to publication if the reasons for rejection or acceptance are related to study results, independently of the scientific quality of the study. Four studies that examined manuscripts submitted to different journals concluded that manuscript acceptance was not associated with the statistical significance of the studies. However, experimental cohort studies on published trials seem to indicate otherwise. Our study found no differences in the impact factor of publications of clinical trials with positive or negative results, whereas for descriptive trials the impact factor values were significantly lower. Moreover, in phase 3 and 4 trials, higher impact factor values were found for trials with negative results. This suggests that publication bias arises not at the moment of article selection by journal editors, but earlier, when the decision whether to submit a manuscript for publication is made, as other authors have stated.
The design of a study with these characteristics, involving follow-up of trials since their inception, requires a considerable execution time, to allow sufficient time for the trials to be completed and published, as is the case for this cohort of studies, which started in 1997. Further research is needed to evaluate the effect of the new initiatives aimed at increasing the transparency of study results, such as prospective registration of clinical trials, open-access results policies, and improved trial publication guidelines.
The results of this study indicate that a change of paradigm is needed where access to clinical trial results is concerned. All actors involved, namely investigators, regulatory authorities, journal editors, and, especially, sponsoring companies, should provide means to guarantee and increase the public availability of unpublished results.