Two years after the emergence of the 2009 influenza H1N1 pandemic, and well after the end of both the 2009–2010 and 2010–2011 seasons, only a minority of the registered randomized evidence on the candidate vaccines has been published. The global response to the pandemic was ultrafast [1]
and this included the early launch of numerous randomized trials testing many different vaccine formulations. However, very limited randomized evidence had been published in the peer-reviewed literature by the time major decisions were made in the fall of 2009 about the use of these vaccines. Peer-reviewed data on over 15,000 participants appeared in the highest-impact journals by the end of 2009, but relatively limited evidence was published in 2010 or 2011, and none of the trials launched after October 2009 had been published as of June 2011, well after the 2010–2011 influenza season had finished. Trials were generally completed promptly, with a median time of 5 months from start to completion. This is not surprising given the relatively simple design of these trials, with short-term follow-up. The major problem was the delay after completion of the trials. Trials that started late, and similarly trials that were completed late, had limited chances of being published.
Other investigators have reported that for another major epidemic, SARS, the proportion of relevant research published during the epidemic was also limited. Most of the literature on SARS was published after the epidemic had ceased to be a problem. It is unknown what proportion of the conducted research was never published at all, as interest in SARS declined sharply in later years in most circles. However, the core literature on SARS did not involve randomized trials, whereas vaccine trials were of pivotal interest for the 2009 H1N1 pandemic.
The publication of clinical trial results is generally considered an ethical imperative. Much as a survey with a 30–40% response rate is considered of questionable validity, a randomized trial agenda in which only 38% of the data have been published raises concerns. Lack of publication of randomized trials, often coupled with a biased selection against trials with specific results, is well documented across very diverse fields. Only 42% of an unselected sample of trials completed in 2005 had been published by the end of 2007. Randomized controlled trials, in particular phase III trials, can vary substantially in the time they take to conduct, analyze, and publish. This time includes enrolment, patient follow-up, data analysis, manuscript preparation, peer review, possible rejections, and publication phases. For trials that require substantial follow-up, results may be published many years after the trial starts.
For vaccine trials where timely evidence is needed, the evaluation of the primary immunogenicity and short-term safety outcomes can be performed quickly, and trials are completed in minimal time. Therefore, the rate-limiting steps are manuscript preparation, review, and publication of the results. Our data do not allow us to know with certainty which of these steps in the publication process caused the greatest delay for influenza H1N1 vaccine trials. However, it is reasonable to expect that authors, reviewers, and journals may all show urgency in writing, reviewing, and publishing results if these become available early on. This is demonstrated by the very rapid publication of the first few trials, all of which appeared in record time in the most prestigious medical journals and attracted enormous attention in 2009. The three trials published in the New England Journal of Medicine in 2009 received, according to the Thomson Reuters Web of Knowledge, 80, 58, and 58 citations, respectively, within the first year after their publication. However, this was just the tip of the iceberg of the randomized evidence on this topic. Interest in the other trials diminished and faded over time, in particular after the fall of 2009. Later-published trials appeared in journals of far lower citation impact. By 2011, two trials were published as a single paper in a low-impact-factor journal, whereas trials of similar magnitude could have been published in a major journal in 2009.
Eventually, fewer than 30% of the trials registered in 2009–2010 had been published by mid-2011. This lack of published data for the majority of the evidence creates difficulties in systematically appraising the overall randomized agenda of influenza H1N1 vaccines. Moreover, numerous formulations have been developed by at least 14 different companies, and it is not easy to extrapolate inferences from one formulation to another. Fragmentation and lack of publication shrink the evidence base on a topic of major public health importance.
Some limitations should be acknowledged. First, we do not know the results of the unpublished trials, and a few of them (n = 5) seem not even to have been completed yet. There is substantial literature in other fields showing that unpublished or late-published trials have less favourable or even “negative” results compared with more rapidly published trials. However, we have no evidence for such a bias in 2009 H1N1 vaccine trials. Most of these trials do not have results that can be categorized as “positive” or “negative” anyway, since they compare different doses and formulations and, with the exception of very low doses, they are likely to generate substantial immunogenicity. Moreover, safety currently seems to be well established, at least in the short term, based on observational studies of thousands of people who received 2009 H1N1 vaccines. However, the lack of published information on the majority of the randomized data on immunogenicity does not allow reliable estimation of the relative merits of different formulations.
Second, it is possible that some additional trials exist that were never registered. If so, our reported non-publication rates may even underestimate the magnitude of this problem. For example, an updated search at the time of the revision of this manuscript (October 25, 2011) identified two otherwise eligible trials that had recently been published (in August and September 2011, respectively) and that made no mention of registration. This is despite the fact that the publishing journals for these trials have instructions to authors asking for registration of randomized trials and documentation of the registration number. The denominator of the total number of launched but unregistered trials is by default unknown. Otherwise, the quality of the registry-recorded information is probably adequate. One potential exception is that, as we acknowledge in the Methods, information on the time of completion of a trial based on registry records can sometimes be tenuous; thus, analyses using the date of completion require extra caution.
Finally, some additional trial results may have been made available by companies to select committees of key organizations and to experts/insiders in the H1N1 field. Such insider views and privileged communications are typical in almost any medical field. However, this does not negate the importance of publishing the results in the wider peer-reviewed literature. When we checked for such publicly available information, we found only scarce and fragmented data on H1N1 trial results at the FDA website, and only a minority of the trial reports posted on the EMA website disclosed vaccine compositions (which were covered under manufacturers' codes); thus, it was impossible to ascertain which formulations are most immunogenic or safe. Having widely accessible data in regulatory agencies as well as in the peer-reviewed literature may diminish the publication delay issue. Such public data transparency would also help address concerns about the differences observed between regulatory-submitted and literature-published results that have been documented for medication trials.
Expedited posting, review, and timely online publication of randomized results may also be feasible, employing evolving venues such as PLoS Currents: Influenza. However, one has to ensure that such online options also employ rigorous and transparent peer review and are actually utilized for this purpose. To our knowledge, none of these trials were posted in PLoS Currents: Influenza. A perusal of the 75 articles in PLoS Currents: Influenza as of October 24, 2011 shows that none of them are randomized clinical trials in humans (there is only one trial in pigs). Investigators may feel that there is an opportunity cost in writing up manuscripts for publication if they believe the results would no longer be attractive and cited and would most likely be published only in low-impact journals. However, incentives must be found for the majority of trials to be published after peer review, including those that missed the early golden opportunity for publication in major journals. This information may be of critical importance in providing a more comprehensive picture of the available evidence for any subsequent pandemics caused by the same virus. Remedying the publication system for such trials would also be critical for improving the completeness of the randomized evidence for future pandemics caused by other infectious agents.
The study did not require ethics approval.
A technical appendix with all the data on the 73 registered randomized controlled trials of 2009 A(H1N1) influenza vaccines is available from the corresponding author.