Very few empirical studies have examined both study publication bias and outcome reporting bias in the same cohort. However, 12 of the included empirical studies demonstrate consistent evidence of an association between positive or statistically significant results and publication: studies reporting positive or statistically significant results are more likely to be published, and statistically significant outcomes have higher odds of being fully reported.
In this review we focused on empirical studies that included RCTs, since these provide the best evidence of the efficacy of medical interventions. Although RCTs are prone to study publication bias, other types of studies have been shown to be even more prone to it.
The main limitation of this review was that for eight of the 16 included cohorts, information on RCTs could not be separated from information on other study types. Because of this, and because of variability across empirical studies in the time elapsed between protocol approval and data censoring for analysis, we felt it was not appropriate to combine the results from the different cohorts statistically. In addition, the fact that follow-up of trials was less than 90% in five empirical studies may mean that the problem of study publication bias is underestimated in these cohorts.
It is difficult to judge the current state of the literature with respect to study publication bias, as even the most recently published empirical evaluations included in the review considered RCTs that began 10 years ago. Nevertheless, the empirical studies published within the last eight years show that, on average, less than 50% of studies were published.
None of the empirical studies distinguished between trials in which all outcomes were non-significant and trials in which only the outcomes deemed most important were non-significant. The reasons given for non-publication did not state which, or how many, outcomes were non-significant. Some empirical studies imply that all results were non-significant, but this is an artefact of how the reason was worded (e.g. 'no significant results'); it is not explained whether this applies to all outcomes, or to primary versus secondary outcomes, harm versus efficacy outcomes, etc. The phrase 'no significant results' is therefore potentially ambiguous. It is not clear whether studies remain unpublished because all outcomes are non-significant, while those that are published appear 'positive' because significant results are selectively reported. This is where study publication bias and outcome reporting bias overlap.
Dubben et al looked at whether study publication bias exists in studies that investigate the problem of study publication bias. Although they found no evidence of such bias, it is interesting to note that two of the cohorts included in this review have not been published. The study conducted by Wormald et al concluded that 'there was limited evidence of study publication bias', whereas the authors of the other study have not yet had time to submit their study for publication. The inclusion and exclusion criteria were applied by the first author only, and there may be other unpublished studies of study publication bias or outcome reporting bias that were not located by the search; however, contact with experts in the field reduces the likelihood that these issues introduced bias.
Submission is an important aspect of investigating study publication bias, as it indicates whether reports go unpublished because they are not submitted or because they are submitted but not accepted. Studies that are not submitted clearly cannot be published, and Dickersin et al found that non-publication was primarily a result of failure to write up and submit the trial results rather than rejection of submitted manuscripts. This is confirmed for the cohorts identified here, in which the percentage of unpublished studies that were never submitted ranged from 63% to 100%. Olson et al also found no evidence that study publication bias occurred once manuscripts had been submitted to a medical journal. However, that study examined a high-impact general journal, which is unlikely to be representative of the specialist journals that publish the majority of clinical trials.
Ten studies assessed the impact of funding on publication, in several ways. Three studies found that external funding led to a higher rate of publication. von Elm et al found that the probability of publication decreased if the study was commercially funded and increased with non-commercial funding. Easterbrook et al found that, compared with unfunded studies, government-funded studies were more likely to yield statistically significant results, but government sponsorship was not found to have a statistically significant effect on the likelihood of publication, and company-sponsored trials were less likely to be published or presented. Dickersin et al found no difference by funding mechanism (grant versus contract), and Ioannidis et al found no difference according to whether data were managed by the pharmaceutical industry or by federally sponsored organisations. Chan et al (2004b) found that 61% of the 51 trials with major discrepancies were funded solely by industry sources, compared with 49% of the 51 trials without discrepancies. Ghersi examined the effect of funding on the reporting of, and discrepancies in, outcomes, but no information about the results is currently available. Hahn et al compared the funder stated in the protocol with that stated in the publication. These studies indicate that funding is an important factor to consider when investigating publication bias and outcome reporting bias; however, more work is needed on common questions before conclusions regarding the relationship between funding and outcome reporting bias can be drawn.
Our review examined inception cohorts only; however, other authors have investigated aspects of study publication bias and outcome reporting bias using different study designs, with similar conclusions. The Cochrane review by Scherer et al, investigating the full publication of results initially presented in abstracts, found that only 63% of results from abstracts describing randomized or controlled clinical trials are published in full, and that 'positive' results were more frequently published than non-'positive' results. Several studies investigated cohorts of trials submitted to drug licensing authorities, and all found that many of these trials remain unpublished, with one study demonstrating that trials with positive outcomes more often resulted in submission of a final report to the regulatory authority. Olson et al conducted a prospective cohort study of manuscripts submitted to JAMA and assessed whether submitted manuscripts were more likely to be published if they reported positive results; they did not find a statistically significant difference in publication rates between those with positive and negative results. None of the inception cohorts addressed the question of whether significance determined whether a submitted paper was accepted, with the exception of one inception cohort, which found that 'positive' trials were published significantly more rapidly after submission than 'negative' trials. Finally, a comparison of the published versions of RCTs in a specialist clinical journal with the original trial protocols found that important changes between protocol and published paper are common; the published primary outcome was exactly the same as in the protocol in only six of 26 trials (23%).
We recommend that researchers adopt the flow diagram presented in this work as the standard for reporting future studies of study publication bias and outcome reporting bias (ORB), as it clearly shows what happens to all trials in the cohort.
Reviewers should scrutinise trials with missing outcome data and should always attempt to contact trialists when a study does not report results. The lack of reporting of a specified outcome should not be an automatic reason for excluding a study. Statisticians should be involved in the data extraction of more complex outcomes, for example time-to-event outcomes. Methods that have been developed to assess the robustness of the conclusions of systematic reviews to ORB should be used. Meta-analyses of outcomes for which several relevant trials have missing data should be viewed with extra caution. Overall, the credibility of clinical research findings may decrease when wide flexibility in the choice of outcomes and analyses within a field is coupled with selective reporting biases.
The establishment of clinical trial registers and the advance publication of detailed protocols, with an explicit description of outcomes and analysis plans, should help combat these problems. Trialists should be encouraged to describe legitimate changes to outcomes stated in the protocol. With the advent of online journals, where more space is available, trialists should be encouraged to write up and submit all results for publication without selection.
For empirical evaluations of selective reporting biases, the definition of significance is important, as is whether the direction of the results is taken into account, i.e. whether the results are significant for or against the experimental intervention; however, only one study took this into account. The forces driving selective publication may also change over time. For example, initially studies favouring the treatment are often more likely to be published and those favouring the control suppressed; as time passes, however, contradicting trials that favour the control may become attractive for publication because they are 'different'. The majority of cohorts included in this review do not consider this possibility.
Another recommendation is to conduct empirical evaluations examining both ORB and study publication bias in RCTs, to investigate their relative importance, i.e. which type of bias is the greater problem. The effects of factors such as funding, i.e. the influence of pharmaceutical industry trials versus non-pharmaceutical trials, should also be factored into these empirical evaluations.
Evidence of the personal communications can be provided upon request.