Evidence-Based Medicine
Evid Based Med. 2013 April; 18(2): 48–53.
PMCID: PMC3607116

A comparison of the accuracy of clinical decisions based on full-text articles and on journal abstracts alone: a study among residents in a tertiary care hospital



Abstract

Many clinicians depend solely on journal abstracts to guide clinical decisions.


This study aims to determine if there are differences in the accuracy of responses to simulated cases between resident physicians provided with an abstract only and those with full-text articles. It also attempts to describe their information-seeking behaviour.


Seventy-seven resident physicians from four specialty departments of a tertiary care hospital completed a paper-based questionnaire with clinical simulation cases, and were then randomly assigned to one of two intervention groups: access to abstracts only, or access to both abstracts and full-text articles. While having access to this medical literature, they completed an online version of the same questionnaire.


The average improvement across departments was not significantly different between the abstracts-only group and the full-text group (p=0.44), but when accounting for an interaction between intervention and department, the effect was significant (p=0.049) with improvement greater with full-text in the surgery department. Overall, the accuracy of responses was greater after the provision of either abstracts-only or full-text (p<0.0001). Although some residents indicated that ‘accumulated knowledge’ was sufficient to respond to the patient management questions, in most instances (83% of cases) they still sought medical literature.


Our findings support earlier studies showing that doctors will use evidence when it is conveniently available and that current evidence improves clinical decisions. The accuracy of decisions improved after the provision of evidence. Clinical decisions guided by full-text articles were more accurate than those guided by abstracts alone, but this result appears to be driven by a significant difference in one department.



Introduction

Broad estimates show that a specialist would need almost two million pieces of data to practise good medicine.1 To keep updated and apply evidence-based medicine (EBM) in practice, physicians must critically appraise full-text articles to guide their clinical decision making.2 However, owing to limited access to full-text articles, inadequate critical appraisal skills or lack of time to read the entire article, many clinicians depend solely on journal abstracts to answer clinical questions.3–8 Journal abstracts may have become the de facto resource for health professionals wanting to practise EBM because they are easy to read and easily accessible anywhere.2 5 7 9

Although abstracts are commonly utilised for clinical decisions, caution should be exercised in using them because they may not completely reflect the entire article.7 Studies by Pitkin et al10 11 and Peacock et al12 identified abstracts that contained data that differed from, or were missing in, the full text. High-impact factor journals had abstracts that failed to mention harms despite their being reported in the main article.13 A study by Berwanger et al8 found that the abstracts of randomised controlled trials from major journals were reported with suboptimal quality. Moreover, abstracts are subject to authors’ biases, which may mislead readers.14

Efforts have been made to improve the quality and accuracy of journal abstracts since they are often the most commonly read part of an article—if not the only part read.10 11 15 In 1987, the Ad Hoc Working Group for Critical Appraisal of the Medical Literature introduced a seven-heading format (Objectives, Design, Setting, Patients, Interventions, Measurements and Conclusion) for structured abstracts.16 Variations in structured abstracts include the eight-heading format proposed by Haynes et al,17 IMRAD18–20 (Introduction, Methods, Results and Discussion), and more recently, BMJ's pico format21 (Patient, Intervention, Comparison and Outcome). Structured abstracts tend to be longer than traditional ones but they also tend to have better content, readability, recall and retrieval.14 18 22–25 Aside from structuring, ‘quality criteria’ and guidelines have been developed to assist authors in preparing abstracts.26 27

Most of the research on journal abstracts focuses on their quality compared with the full text,10 11 13 22 23 or on their structure.14 20 24 Given the tendency of physicians to use abstracts for evidence, there is a need to evaluate their reliability in clinical decision making. A study by Barry et al5 looked at the effect of abstract format on physicians’ management decisions. However, we were unable to find studies that compare clinical decisions between those with access to abstracts only and those with access to full text.

The primary objective of this study was to determine whether there is a significant difference in the accuracy of the clinical decisions made on simulated cases by residents with access to full-text articles and those with access to abstract-only. The specific objectives were: (1) to compare the effect of access to abstracts-only or full-text articles on the clinical decision-making of residents; (2) to determine whether providing either the abstract or full-text article increased the accuracy of clinical decisions and (3) to characterise the information-seeking behaviour and use of information resources by residents of four departments in a tertiary care hospital.


Methods

Ethics review

The research protocol was submitted for technical review to the Research Grants Administration Office of the University of the Philippines Manila and for ethical evaluation to the Institutional Review Board, both of which approved the study.

Prestudy clinical case development

A physician consultant from each of four clinical departments (Surgery, Internal Medicine, Emergency Medicine and Family and Community Medicine) prepared five simulated clinical cases of varying complexity and the corresponding clinical questions to assess the residents’ management decisions. They searched PubMed for at least three recent (from 2007 onwards) journal articles that were deemed relevant for each case. ‘Gold standard’ answers to the clinical questions were based on the journal articles and other relevant information (applicability and appropriateness to local conditions, available resources and practice environments). A paper-based questionnaire was used for the preintervention assessment while an online version was used during the intervention phase to allow access to journal abstracts or full-text articles.

Study participants and setting

Seventy-seven resident physicians from the four clinical departments (above) at the Philippine General Hospital participated in the study. The Philippine General Hospital is a 1500-bed tertiary care, state-owned referral centre and the teaching hospital of the University of the Philippines College of Medicine, College of Nursing, College of Dentistry and allied colleges. It is the largest government hospital, with a yearly patient load of 600 000, mostly indigent patients. Fourteen clinical departments offer residency and fellowship training.

Study design

During the prestudy briefing, the residents were informed that they were to answer questions related to the case simulations and that they could access reference articles if needed during the online phase of the study. Written consent was obtained and paper-based case simulations were given to each resident to replicate the hospital scenario of paper patient records. After reading the case simulations, they were asked to respond to five clinical questions and indicate whether they considered that a literature search was needed to answer the questions or that accumulated knowledge28 was adequate. Accumulated knowledge was defined in this study as the residents’ personal knowledge base accumulated through years of formal education, training, research of the medical literature and clinical experience.

Immediately after the preintervention phase, the residents were randomly assigned, stratified by department, to one of two groups: access to ‘full-text’ or ‘abstracts-only’. The same clinical cases and questions as in the preintervention phase were presented to the residents using the online version of the questionnaire to simulate real-time access to medical literature. A 20-min time limit was allotted for each question for both the paper-based and online questionnaires. The journal material provided, whether abstracts-only or full-text, was dependent on the assigned group. If a resident assigned to the abstracts-only group clicked on the link to the full text, a prompt saying ‘Sorry, full-text is unavailable’ appeared. Although access to either journal abstracts or full-text articles on the online version was available to all residents, they had the option of not using any resource at all. The residents’ actions regarding the use or non-use of medical literature were recorded. Mouse clicks related to the residents’ requests for the articles’ abstracts or full text were logged on the server.
The accuracy of a response was a measure of the correctness of the resident’s answer when compared with the ‘gold standard’ answers provided by the consultants. The same consultants who prepared the clinical cases and questions evaluated the accuracy of the residents’ answers. A correct response was scored ‘1’ and an incorrect response ‘0’. Incomplete responses were rated as inaccurate and scored ‘0’. Resident responses were anonymised in both the paper and online versions.

Data analysis

To account for the repeated measures nature of the data (each resident answered multiple questions), we fit mixed effects logistic regression models with department and intervention as independent variables and accuracy of the response as the dependent variable. Unless otherwise stated, all results were based on this model. Resident year level was also considered as a predictor but was not found to be significant and was dropped. We also fit a model that added an interaction between intervention and department. For univariate analyses, we used nonparametric Wilcoxon two-sample tests and Fisher exact tests, as appropriate.29 All analyses were performed using R Statistical Software.30
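One way to write the interaction model is sketched below, assuming a random intercept per resident to handle the repeated measures; the paper does not spell out the random-effects structure, so this is an illustration rather than the authors' exact specification:

```latex
\operatorname{logit}\,\Pr(y_{ij} = 1)
  = \beta_0
  + \boldsymbol{\beta}_{d}^{\top}\,\mathrm{dept}_i
  + \beta_{t}\,\mathrm{fulltext}_i
  + \boldsymbol{\beta}_{dt}^{\top}\,(\mathrm{dept}_i \times \mathrm{fulltext}_i)
  + u_i,
\qquad u_i \sim N(0, \sigma_u^2)
```

where $y_{ij}$ is the accuracy (1 = correct) of resident $i$'s answer to question $j$, $\mathrm{dept}_i$ is a vector of department indicators, $\mathrm{fulltext}_i$ indicates assignment to the full-text group and $u_i$ is the resident-level random intercept.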


Results

Participant profile

Seventy-seven residents from the departments of Surgery (n=20), Internal Medicine (n=20), Emergency Medicine (n=20) and Family and Community Medicine (n=17) participated in this study. Table 1 shows the description of the study participants by department.

Table 1
Characteristics of participating resident trainees by department

Comparing the effect of abstract-only and full-text access on the accuracy of clinical decision making

The first objective was to answer the question: is there a significant difference in the accuracy of responses between residents in the abstract-only group and those in the full-text group? Overall, there was no significant difference between the interventions (p=0.44). The post-hoc power of the experiment to detect an overall difference between the interventions was low, varying from approximately 44% to 58%,31 depending on the level of correlation between the answers within each department. In a model fit to include an interaction between intervention and department, the interaction was significant, suggesting that intervention effects differed by department (p=0.03). In that model, access to full-text was significantly better than access to abstracts-only (p=0.049).

We then compared the effect of the interventions within departments in order to investigate which departments seemed to respond differently from the others with respect to the effect of the interventions. We found no significant difference between the interventions for the Internal Medicine, Emergency Medicine and Family Medicine departments (p=0.73, 0.13 and 0.37, respectively), but there was a difference between the interventions for the Surgery department (p=0.02). The OR for each department is given in table 2. In Surgery, the odds of a correct answer on a case simulation in the full-text group were 3.6 times those in the abstract-only group. Note that the CI for Surgery does not include 1.0, which indicates a significant difference. There were no differences found between the interventions for any other department. Power to detect a difference between the interventions in a specific department was low (approximately 17%) because of the reduced sample size in each group (n=10). We also investigated whether resident year was a significant predictor of clinical decision accuracy, but it was not significant in any model.

Table 2
Estimates of the odds of getting an accurate response after full-text intervention compared to abstract-only intervention
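The departmental estimates in table 2 come from the regression model; as a simplified, hypothetical illustration only (the counts below are invented, not the study's data), an unadjusted OR with a Wald 95% CI can be computed from a 2×2 table of correct and incorrect answers:

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% CI.

    a, b = correct/incorrect answers in the full-text group
    c, d = correct/incorrect answers in the abstract-only group
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration (not the study's data):
or_, lo, hi = odds_ratio_wald(36, 14, 21, 29)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

As in table 2, a CI that excludes 1.0 indicates a significant difference between the groups; note how wide the interval is at this sample size.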

Accuracy of clinical decisions before and after access to literature search

We calculated the mean percentage of accurate responses to the simulated clinical questions before and after each intervention. Overall, mean accuracy increased from 42% to 68% for the abstract-only intervention and from 48% to 75% for the full-text intervention. The differences between the scores before and after the two interventions were significant (p<0.0001). Table 3 shows the comparison of these percentages by department and the tests of significance.

Table 3
Comparison of the average percentage of accurate responses before and after interventions and tests of significance

When given full-text articles, the departments of Surgery, Internal Medicine and Family Medicine showed significant improvements (p=0.003, 0.03 and <0.0001, respectively), while there was no change for the department of Emergency Medicine (p=1.0). The differences among the departments were significant (p<0.0001) for the full-text intervention group. This suggests that full-text was more effective for Surgery, Internal Medicine and Family Medicine, but not for Emergency Medicine. However, the sample size was small (n=10 or fewer) at this level. The effect of the abstract-only intervention seems to have been in a similar direction for all the departments, and no significant difference in effects across departments was detected.

Information-seeking trends of the residents

The majority of the residents (86%) indicated that the articles provided in the online version were adequate to answer their questions and 77% indicated that they had actually read the articles. When asked whether they used abstracts-only or full-text articles to answer clinical questions in actual practice, 53 of the 77 residents (69%) indicated that they relied on abstracts most of the time, while only 24 (31%) said they would read the full-text article.

Residents were asked whether or not they felt they needed extra information in order to answer the question correctly. We recorded whether they clicked on the links for the abstract or the full-text. We wanted to answer the question: does a perceived need for more information correlate with how often the physicians actually accessed the links for abstract-only or full-text? For the 157 cases where the resident indicated that they did not require additional information, there were 131 (83%) instances where literature was actually accessed (95% CI 77% to 88%). In contrast, out of the 228 cases where residents indicated that they needed additional information, there were only 12 (5%) cases where they did not actually access literature (95% CI 3% to 9%). Table 4 shows a summary of whether the resident requested additional information and whether they actually accessed literature.

Table 4
Residents’ perceived need for additional information and actual access of literature
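The CIs quoted above are standard binomial interval estimates for a proportion. The paper does not state which method was used, but a Wilson score interval, sketched below, reproduces the reported 77% to 88% interval for the 131/157 proportion:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 131 of 157 cases accessed literature despite no perceived need:
lo, hi = wilson_ci(131, 157)
print(round(lo, 2), round(hi, 2))  # prints 0.77 0.88
```

The Wilson interval is preferred over the simple Wald interval for proportions near 0 or 1 and for modest sample sizes, as here.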


Discussion

The main question we wanted to address in this study was whether there is a significant difference in the clinical decisions of residents who have access to abstracts only and those with access to full-text articles. Overall, our results suggest no difference in the accuracy of responses between residents provided with full-text articles and those provided with abstracts only (p=0.4415). When we considered the clustering of physicians by department, we found a difference between the two interventions (p=0.0494), but further analysis showed that this difference was observed only in the department of Surgery (p=0.016). The effects of abstracts-only and full-text were not significantly different for the Internal Medicine, Emergency Medicine and Family Medicine departments. However, the study had low power to detect differences between the interventions within a department.

Our study provides preliminary but useful information related to the use of journal abstracts in evidence-based practice. We believe this to be the first report involving physicians that attempted to evaluate how abstracts measure up to full-text articles in guiding clinical decisions. This finding offers support for using ‘consensus abstracts’ (concurring and corroborating abstracts from independently conducted randomised clinical studies and from meta-analyses and systematic reviews that form the basis of clinical evidence) as a possible alternative when access to full text is limited or otherwise not feasible.2 However, clinicians who want to practise EBM will also find online many summaries, reviews and preappraised resources, available free (TRIP Database, ACP Journal Club, Cochrane Library, etc) or by subscription (UpToDate, 5-Minute Clinical Consult, etc); EBM websites link to these, and many of these resources have applications for mobile devices such as the iPhone and Android devices. Our observations set the stage for further research on the role of abstracts in evidence-based practice. Future studies may include randomised controlled trials with real-time clinical decision-making encountered at the bedside.

EBM encourages the use of timely and relevant information to complement the clinical acumen of clinicians.32 We found that the average improvement in the accuracy of responses across all the departments when either abstracts or full-text articles were provided was significant (p<0.0001 for both interventions). This finding supports previous research regarding the role of medical literature in improving clinical decisions.33–36 However, when individual departments were considered, there was a significant difference between the departments in the full-text intervention group (p=0.0001). This difference in the effect of full-text between the departments appears to be due to the fact that there was no change in the accuracy of responses of Emergency Medicine residents, in contrast to the increase in scores for the other residents when full-text was provided. This may mean that full-text articles benefited Surgery, Family Medicine and Internal Medicine residents but not Emergency Medicine residents. A possible explanation is that the Emergency Medicine department is fast-paced and residents may not have the time to read a full-text article. This hypothesis was further supported by the data for the abstract-only group, where we found no significant difference between the departments in how the intervention improved the accuracy of residents’ responses.

Our study also demonstrated some trends in information-seeking and utilisation of evidence by residents when presented with clinical questions. We observed that although residents indicated that accumulated knowledge was sufficient to answer the questions, in most instances (83.4%) they still accessed the medical literature provided. This observation supports earlier studies that health professionals will use evidence from the literature when it is easily accessible at the time the question arises.37

More than two-thirds of the residents (68.8%) who participated in this study claimed that they commonly used abstracts in seeking answers to their clinical dilemmas. Other studies have reported similar observations. A study by Haynes et al3 found that two-thirds of clinical decisions were influenced by literature even if the full text was not read. Moreover, internists reported that in 63% of the articles they came across, only the abstracts were read.4 These figures may be even higher among physicians in low- and middle-income countries because of even more limited availability of full-text articles.


Limitations

The small sample of residents from a single tertiary government hospital in the Philippines limits the generalisability of the study to the larger medical community. Simulated clinical cases were used as surrogates for the actual clinical encounters a resident may face. The clinical questions were specific to each discipline and are not necessarily comparable to one another. The residents answered only five questions, which reduced the variation in the study. A ‘learning effect’ was considered as an explanation for the higher scores during the intervention phase but was deemed unlikely because of the short interval period: the residents took the online version immediately after the preintervention session. Furthermore, this study does not address whether access to full text would have more impact than access to the abstract in a complex case, one in which the details of a treatment or outcome, or their magnitude or significance, might affect practice. Nor does it address the impact on standard, routine or long-term practice. Finally, although there was reasonable power to detect a difference between the interventions overall, there was low power to detect differences within a department. It is possible that there were differences between the interventions within each department, but our study did not have a large enough sample to investigate the effect of the intervention at the department level.


Conclusions

In this study, we demonstrated that clinical decisions made by residents improved when evidence, whether abstracts or full-text articles, was provided. The study also indicates that some clinical questions may be simple enough to be answered quickly using accumulated knowledge, yet accumulated knowledge was still enhanced by the use of appropriate medical information: in spite of initially stating that accumulated knowledge was adequate to answer the clinical questions, the residents accessed evidence anyway. This confirms previous findings that easy availability of evidence encourages the practice of evidence-based medicine. When clustered by department, clinical decisions guided by full-text articles were more accurate than those guided by abstracts alone, but this difference can be largely attributed to a significant difference in Surgery. The effect may be smaller or absent in the other three departments, but the analysis is not conclusive because of the limited power of this study. Without departmental clustering, the findings suggest that the two interventions may not be significantly different.


Funding: This research was supported by the Intramural Research Program of the National Institutes of Health (NIH), National Library of Medicine (NLM) and Lister Hill National Center for Biomedical Communications (LHNCBC).

Disclaimer: The views and opinions of the authors expressed herein do not necessarily state or reflect those of the National Library of Medicine, National Institutes of Health or the US Department of Health and Human Services.

Open Access: This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial.


1. Pauker SG, Gorry GA, Kassirer JP, et al. Towards the simulation of clinical cognition: taking a present illness by computer. Am J Med 1976;60:981–96 [PubMed]
2. Fontelo P. Consensus abstracts for evidence-based medicine. Evid Based Med 2011;16:36–8 [PubMed]
3. Haynes RB, McKibbon KA, Walker CJ, et al. Online access to MEDLINE in clinical settings. A study of use and usefulness. Ann Intern Med 1990;112:78–84 [PubMed]
4. Saint S, Christakis DA, Saha S, et al. Journal reading habits of internists. J Gen Intern Med 2000;15:881–4 [PMC free article] [PubMed]
5. Barry HC, Ebell MH, Shaughnessy AF, et al. Family physicians’ use of medical abstracts to guide decision making: style or substance? J Am Board Fam Pract 2001;14:437–42 [PubMed]
6. The PLoS Medicine Editors The impact of open access upon public health. PLoS Med 2006;3:e252. [PMC free article] [PubMed]
7. Editorial. Read MEDLINE abstracts with a pinch of salt. Lancet 2006;368:1394 [PubMed]
8. Berwanger O, Ribeiro RA, Finkelsztejn A, et al. The quality of reporting of trial abstracts is suboptimal: survey of major general medical journals. J Clin Epidemiol 2009;62:387–92 [PubMed]
9. Haynes RB, Ramsden MF, McKibbon KA, et al. Online access to MEDLINE in clinical settings: impact of user fees. Bull Med Libr Assoc 1991;79:377–81 [PMC free article] [PubMed]
10. Pitkin RM, Branagan MA. Can the accuracy of abstracts be improved by providing specific instructions? A randomized controlled trial. JAMA 1998;280:267–9 [PubMed]
11. Pitkin RM, Branagan MA, Burmeister LF. Accuracy of data in abstracts of published research articles. JAMA 1999;281:1110–11 [PubMed]
12. Peacock PJ, Peters TJ, Peacock JL. How well do structured abstracts reflect the articles they summarize? Eur Sci Editing 2009;35:3–5
13. Bernal-Delgado E, Fisher ES. Abstracts in high profile journals often fail to report harm. BMC Med Res Methodol 2008;8:14. [PMC free article] [PubMed]
14. Taddio A, Pain T, Fassos FF, et al. Quality of nonstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal and the Journal of the American Medical Association. CMAJ 1994;150:1611–15 [PMC free article] [PubMed]
15. Pitkin RM. The importance of the abstract. Obstet Gynecol 1987;70:267. [PubMed]
16. Ad Hoc Working Group for Critical Appraisal of the Medical Literature A proposal for more informative abstracts of clinical articles. Ann Intern Med 1987;106:598–604 [PubMed]
17. Haynes RB, Mulrow CD, Huth EJ, et al. More informative abstracts revisited. Ann Intern Med 1990;113:69–76 [PubMed]
18. Huth EJ. Structured abstracts for papers reporting clinical trials. Ann Intern Med 1987;106:626–7 [PubMed]
19. MacAuley D. Critical appraisal of medical literature: an aid to rational decision making. Fam Pract 1995;12:98–103 [PubMed]
20. Nakayama T, Hirai N, Yamazaki S, et al. Adoption of structured abstracts by general medical journals and format for a structured abstract. J Med Libr Assoc 2005;93:237–42 [PMC free article] [PubMed]
21. Groves T, Godlee F. Innovations in publishing BMJ research. BMJ 2008;337:a3123. [PubMed]
22. Narine L, Yee DS, Einarson TR, et al. Quality of abstracts of original research articles in CMAJ in 1989. CMAJ 1991;144:449–53 [PMC free article] [PubMed]
23. Dupuy A, Khosrotehrani K, Lebbe C, et al. Quality of abstracts in 3 clinical dermatology journals. Arch Dermatol 2003;139:589–93 [PubMed]
24. Hartley J. Current findings from research on structured abstracts. J Med Libr Assoc 2004;92:368–71 [PMC free article] [PubMed]
25. The Editors Addressing the limitations of structured abstracts. Ann Intern Med 2004;140:480–1
26. Winker MA. The need for concrete improvement in abstract quality. JAMA 1999;281:1129–30 [PubMed]
27. Hopewell S, Clarke M, Moher D, et al. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med 2008;5:e20. [PMC free article] [PubMed]
28. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med 2005;142:260–73 [PubMed]
29. Rosner B. Fundamentals of Biostatistics. 5th edn Pacific Grove, CA: Duxbury, 2000
30. R Development Core Team R: a language and environment for statistical computing. R Foundation for Statistical Computing, 2011. Vienna, Austria. (accessed 7 May 2012).
31. Diggle P, Heargerty P, Liang K, et al. Analysis of longitudinal data. 2nd edn Oxford University Press, Oxford, UK: 2002
32. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn't. BMJ 1996;312:71–2 [PMC free article] [PubMed]
33. McGowan J, Hogg W, Campbell C, et al. Just-in-time information improved decision-making in primary care: a randomized controlled trial. PLoS ONE 2008;3:e3785. [PMC free article] [PubMed]
34. Crowley SD, Owens TA, Schardt CM, et al. A Web-based compendium of clinical questions and medical evidence to educate internal medicine residents. Acad Med 2003;78: 270–4 [PubMed]
35. Westbrook JI, Coiera EW, Gosling AS. Do online information retrieval systems help experienced clinicians answer clinical questions? J Am Med Inform Assoc 2005;12:315–21 [PMC free article] [PubMed]
36. Leon SA, Fontelo P. MedlinePlus en Español and Spanish-speakers. AMIA Annu Symp Proc 2007;1028. [PubMed]
37. Sackett DL, Straus SE. Finding and applying evidence during clinical rounds: the ‘evidence cart’. JAMA 1998;280:1336–8 [PubMed]

Articles from BMJ Open Access are provided here courtesy of BMJ Publishing Group