About one third of abstracts submitted to biomedical meetings are eventually published as full reports. This estimate was derived from two different methodological approaches, both of which gave very similar results. It rests on the assumption that the analysed abstracts are representative samples of all abstracts. Moreover, for rejected abstracts, the time-course until publication remained obscure. Nevertheless, the estimate implies that about two thirds of all submitted biomedical research abstracts will not
get published. Selective underreporting of research may be seen as a form of scientific misconduct [87], and it may result in publication bias. It is important, therefore, to understand why some abstracts are accepted at meetings and others are not, and why some of the accepted abstracts are subsequently published and others are not.
Ideally, all published full reports are based on data from sound scientific studies. Accordingly, peer review at meetings and journals may be regarded as a putatively laudable mechanism for filtering out invalid studies. Abstract acceptance and oral presentation were associated with subsequent full publication. One might hypothesise that only the most valid studies would be selected for presentation at meetings, and that among those only the best would be chosen for oral presentation and subsequent full publication. Unfortunately, there is no reliable evidence to support this view. One report found prospective study design, affiliation with a university, and randomisation to be associated with abstract acceptance [76]. Another report showed an association between the originality of a study and abstract acceptance [74]. However, in the same report, a quality score was not significantly associated with abstract acceptance [74].
We identified five factors that may play a role when abstracts are filtered on their way from meeting submission to subsequent publication. First, abstracts that reported a positive study outcome were more likely to be published subsequently, as already shown in a previous analysis [2]. Positive outcome was defined differently in the included studies; we decided to combine these data despite the diverging definitions. When authors were asked why their study had remained unpublished, fewer than 5% indicated that a negative study result was the reason [82]. Low priority, anticipated rejection, or lack of time were cited more often, but these reasons may themselves be related to a negative study outcome [82]. When investigators were surveyed about reasons for non-completion or non-submission of work initially presented at a meeting, similar answers were given [88]. Our analysis provides evidence that positive outcome bias is likely to operate already at an earlier stage, namely at abstract submission. Peers who select abstracts submitted to biomedical meetings might favour studies that report positive results. Authors themselves may be less enthusiastic not only about publishing negative study results [89], but also about submitting them to scientific meetings in the first place.
Second, abstracts that reported basic (as opposed to clinical) research were more likely to be accepted at biomedical meetings and to be published subsequently. Differences in the quality of conduct and reporting of basic and clinical research may explain this finding. However, in both domains, studies that investigated the relationship between the quality of abstracts and their likelihood of publication came to contradictory conclusions [22]. Whether the abstracts included in the analysed follow-up studies were representative of basic or clinical research could not be assessed. It is conceivable that, in basic research, only the most important findings were submitted for presentation and later published. When chairpersons and senior research advisors were asked to rate the robustness and quality of research projects performed in their departments, they consistently rated basic research highest [91]. Assuming a substantial involvement of these research leaders in the reviewing of meeting abstracts and journal manuscripts, this implies the existence of a bias in favour of basic research.
Third, meetings comprising a larger number of abstracts had higher acceptance rates, whereas abstracts presented at smaller meetings were more likely to be published subsequently. Since meeting organisers often wish to attract the maximum number of attendees, larger meetings may select abstracts less rigorously. At smaller meetings the peer review process may be more stringent, leading to the selection of higher-quality papers that, in turn, would be more likely to be published eventually. However, numerous factors determine the size of a meeting and make its relationship with abstract acceptance difficult to interpret.
Fourth, abstracts submitted to US meetings were accepted less often, but when accepted, were more likely to be published subsequently. This suggests that filtering mechanisms at US meetings are different from those at non-US meetings, but once again other underlying factors may play a role.
Fifth, the most frequently investigated specialities had different rates of acceptance and publication. Internal medicine and paediatrics had higher abstract acceptance rates than surgery. Paediatrics and surgery meetings had higher full publication rates than anaesthesia/emergency medicine meetings.
The analysed reports provided no explanation for these disparities.
Choosing an appropriate statistical approach for combining the time-to-event data presented a methodological challenge, since publication rates were reported at widely varying average follow-up intervals. In previous analyses, a single publication rate after the median follow-up time was calculated [2]. We instead estimated the time-course of publication rates using the techniques of survival analysis, and then performed sensitivity analyses by excluding data from reports with less detailed follow-up intervals. Based on the reports with the most detailed follow-up (intervals ≤ 1 year), the publication rate increased to about 44% at six years after meeting presentation. When data from reports with longer follow-up intervals were included in the analyses, the estimates differed: publication rates were generally lower for observation periods up to six years, and higher for those longer than six years. Thus, for this type of combined time-to-event analysis, there may be an argument for including only data from studies with short, i.e. more detailed, follow-up intervals.
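To illustrate the survival-analysis approach described above, the following is a minimal sketch of a pooled Kaplan-Meier calculation on interval-grouped follow-up counts. All yearly counts here are invented for illustration; they are not the data analysed in this review.

```python
# Hypothetical sketch of pooling per-interval abstract follow-up counts
# and estimating the cumulative publication rate with the Kaplan-Meier
# product-limit estimator. Counts below are invented, not real data.

def kaplan_meier(intervals):
    """intervals: list of (time, n_published, n_censored), sorted by time.
    Returns a list of (time, cumulative publication rate)."""
    at_risk = sum(pub + cens for _, pub, cens in intervals)
    surv = 1.0  # probability of remaining unpublished
    curve = []
    for t, published, censored in intervals:
        if at_risk > 0:
            surv *= 1.0 - published / at_risk  # product-limit step
        curve.append((t, 1.0 - surv))  # cumulative publication rate
        at_risk -= published + censored  # remove events and censored
    return curve

# Invented pooled yearly counts (time in years since meeting presentation);
# abstracts still unpublished at the end of follow-up are censored.
pooled = [
    (1, 150, 10),
    (2, 120, 20),
    (3, 60, 30),
    (4, 30, 580),
]
curve = kaplan_meier(pooled)
for t, rate in curve:
    print(f"year {t}: {rate:.1%} published")
```

Because later intervals rest on fewer abstracts still at risk, estimates for long observation periods are less stable, which is one reason to prefer reports with short, detailed follow-up intervals when combining such data.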
Often, acceptance rates were reported as a secondary outcome of studies that investigated the functioning of scientific meeting committees. Therefore, we may have missed other relevant studies, and the retrieved studies may not be representative. Also, the methodology of included studies varied with regard to the length of follow-up after a meeting, follow-up intervals, type and number of data sources, and criteria of matching between abstracts and subsequent full reports. In an attempt to take this heterogeneity into account, we conducted sensitivity analyses with subgroups of studies with different maximum follow-up intervals. This resulted in different Kaplan-Meier curves.
We decided a priori to exclude studies on collections of abstracts that originated from a variety of meetings, since they did not provide information on the individual meetings. Two studies of good methodological quality were among those [8]. The publication rate of abstracts from the Oxford Database of Perinatal Trials was reported to be 39% after at least four years of follow-up [8]. For abstracts collected by the Cochrane Cystic Fibrosis Group, the publication rate after five years was 40% [9].
There are still unresolved questions, and these could be addressed in further studies. For instance, we did not study what happens to scientific data before abstract submission, nor did we look at alternative pathways of dissemination: an unknown number of manuscripts are submitted to journals directly and reach full publication without prior presentation at a meeting. Also, the criteria applied by peers to select abstracts for meeting presentation are not well understood. Finally, we could not draw any conclusions about the relationship between study quality and the likelihood of abstract acceptance and subsequent full publication.