Many clinicians depend solely on journal abstracts to guide clinical decisions.
This study aims to determine if there are differences in the accuracy of responses to simulated cases between resident physicians provided with an abstract only and those with full-text articles. It also attempts to describe their information-seeking behaviour.
Seventy-seven resident physicians from four specialty departments of a tertiary care hospital completed a paper-based questionnaire with clinical simulation cases and were then randomly assigned to one of two intervention groups: access to abstracts only, or access to both abstracts and full-text articles. While having access to medical literature, they completed an online version of the same questionnaire.
The average improvement across departments was not significantly different between the abstracts-only group and the full-text group (p=0.44), but when an interaction between intervention and department was accounted for, the effect was significant (p=0.049), with greater improvement with full-text in the Surgery department. Overall, the accuracy of responses was greater after the provision of either abstracts only or full text (p<0.0001). Although some residents indicated that ‘accumulated knowledge’ was sufficient to respond to the patient management questions, in most instances (83% of cases) they still sought medical literature.
Our findings support previous studies showing that doctors will use evidence when it is conveniently available and that current evidence improves clinical decisions. The accuracy of decisions improved after the provision of evidence. Clinical decisions guided by full-text articles were more accurate than those guided by abstracts alone, but the results appear to be driven by a significant difference in one department.
Broad estimates show that a specialist would need almost two million pieces of data to practice good medicine.1 To keep updated and apply evidence-based medicine (EBM) in practice, physicians must critically appraise full-text articles to guide their clinical decision making.2 However, owing to limited access to full-text articles, inadequate critical appraisal skills or lack of time to read the entire article, many clinicians depend solely on journal abstracts to answer clinical questions.3–8 Journal abstracts may have become the de facto resource for health professionals wanting to practice EBM because they are easy to read and easily accessible anywhere.2 5 7 9
Although abstracts are commonly utilised for clinical decisions, caution should be exercised in using them because they may not completely reflect the entire article.7 Studies by Pitkin et al10 11 and Peacock et al12 identified abstracts that contained data which differed from, or were missing in, the full text. High-impact factor journals had abstracts that failed to include harms despite their being mentioned in the main article.13 A study by Berwanger et al8 found that the abstracts of randomised controlled trials from major journals were reported with suboptimal quality. Moreover, abstracts are also subject to authors’ biases, which may mislead readers.14
Efforts have been made to improve the quality and accuracy of journal abstracts since they are often the most commonly read part of an article—if not the only part read.10 11 15 In 1987, the Ad Hoc Working Group for Critical Appraisal of the Medical Literature introduced a seven-heading format (Objectives, Design, Setting, Patients, Interventions, Measurements and Conclusion) for structured abstracts.16 Variations in structured abstracts include the eight-heading format proposed by Haynes et al,17 IMRAD18–20 (Introduction, Methods, Results and Discussion), and more recently, BMJ's pico format21 (Patient, Intervention, Comparison and Outcome). Structured abstracts tend to be longer than traditional ones but they also tend to have better content, readability, recall and retrieval.14 18 22–25 Aside from structuring, ‘quality criteria’ and guidelines have been developed to assist authors in preparing abstracts.26 27
Most of the research on journal abstracts focuses on their quality compared to the full text,10 11 13 22 23 or on their structure.14 20 24 Given the tendency of physicians to use abstracts for evidence, there is a need to evaluate their reliability in clinical decision making. A study by Barry et al5 looked at the effect of abstract format on physicians’ management decisions. However, we were unable to find studies that compare the clinical decisions of physicians with access to abstracts only with those of physicians given full-text articles.
The primary objective of this study was to determine whether there is a significant difference in the accuracy of the clinical decisions made on simulated cases by residents with access to full-text articles and those with access to abstract-only. The specific objectives were: (1) to compare the effect of access to abstracts-only or full-text articles on the clinical decision-making of residents; (2) to determine whether providing either the abstract or full-text article increased the accuracy of clinical decisions and (3) to characterise the information-seeking behaviour and use of information resources by residents of four departments in a tertiary care hospital.
The research protocol was submitted for technical review to the Research Grants Administration Office of the University of the Philippines Manila and for ethical evaluation to the Institutional Review Board, both of which approved the study.
A physician consultant from each of four clinical departments (Surgery, Internal Medicine, Emergency Medicine and Family and Community Medicine) prepared five simulated clinical cases of varying complexity and the corresponding clinical questions to assess the residents’ management decisions. They searched PubMed for at least three recent (from 2007 onwards) journal articles that were deemed relevant for each case. ‘Gold standard’ answers to the clinical questions were based on the journal articles and other relevant information (applicability and appropriateness to local conditions, available resources and practice environments). A paper-based questionnaire was used for the preintervention assessment while an online version was used during the intervention phase to allow access to journal abstracts or full-text articles.
Seventy-seven resident physicians from the four clinical departments (above) at the Philippine General Hospital participated in the study. The Philippine General Hospital is a 1500-bed tertiary care, state-owned, referral centre and teaching hospital of the University of the Philippines College of Medicine, College of Nursing, College of Dentistry and allied colleges. It is the largest government hospital with a yearly patient load of 600 000, mostly indigent patients. Fourteen clinical departments offer residency and fellowship training.
During the prestudy briefing, the residents were informed that they were to answer questions related to the case simulations and that they could access reference articles if needed during the online phase of the study. Written consent was obtained and paper-based case simulations were given to each resident to replicate the hospital scenario of paper patient records. After reading the case simulations, they were asked to respond to five clinical questions and indicate whether they considered a literature search necessary to answer the questions or whether accumulated knowledge28 was adequate. Accumulated knowledge was defined in this study as the residents’ personal knowledge base accumulated through years of formal education, training, research of the medical literature and clinical experience. Immediately after the preintervention phase, the residents were randomly assigned to one of two groups—access to ‘full-text’ or ‘abstracts-only’—stratified by department. The same clinical cases and questions from the preintervention phase were presented to the residents using the online version of the questionnaire to simulate real-time access to medical literature. A 20-min time limit was allotted for each question in both the paper-based and online questionnaires. The journal material provided, whether abstracts only or full text, depended on the assigned group. If a resident assigned to the abstracts-only group clicked on the link to the full text, a prompt saying, ‘Sorry, full-text is unavailable’ appeared. Although all residents had access to medical literature in the online version (journal abstracts or full-text articles, depending on group), they had the option of not using any resource at all. The residents’ actions regarding the use or non-use of medical literature were recorded, and mouse clicks related to requests for abstracts or full text were logged on the server.
The accuracy of response was a measure of the correctness of residents’ answers when compared with the ‘gold standard’ answers provided by the consultants. The same consultants who prepared the clinical cases and questions evaluated the accuracy of the residents’ answers. A correct response was scored ‘1’ and an incorrect response ‘0’. Incomplete responses were rated as inaccurate and scored ‘0’. Resident responses were anonymised in both the paper and online versions.
In order to account for the repeated-measures nature of the data (physicians answered multiple questions), we fit mixed effects logistic regression models with department and intervention as independent variables and accuracy of the response as the dependent variable. Unless otherwise stated, all results were based on this model. Resident year level was also considered as a predictor in the model but was not found to be significant and was dropped. We also fit a model that included an interaction between intervention and department. For univariate analysis, we used the nonparametric Wilcoxon two-sample tests and Fisher exact tests, as appropriate.29 All analyses were performed using R Statistical Software.30
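The analyses in this study were run in R; purely as an illustrative sketch, the univariate tests named above can be reproduced in Python with scipy. All counts and scores below are hypothetical, not study data:

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Hypothetical 2x2 table (NOT study data): rows = intervention group
# (full-text, abstracts-only), columns = accurate / inaccurate responses.
table = [[30, 20],
         [15, 35]]
odds_ratio, p_fisher = fisher_exact(table)  # sample OR = (30*35)/(20*15) = 3.5

# Hypothetical per-resident accuracy scores (out of 5 questions) for the two
# groups, compared with the Wilcoxon two-sample (Mann-Whitney U) test.
full_text = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
abstracts = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]
u_stat, p_wilcoxon = mannwhitneyu(full_text, abstracts, alternative="two-sided")
```

The mixed effects logistic regression itself requires a dedicated package (e.g. `lme4::glmer` in R) and is not sketched here.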
Seventy-seven residents from the departments of Surgery (n=20), Internal Medicine (n=20), Emergency Medicine (n=20) and Family and Community Medicine (n=17) participated in this study. Table 1 shows the description of the study participants by department.
The first objective was to answer the question: Is there a significant difference in the accuracy of responses between residents in the abstracts-only group and the full-text group? Overall, there was no significant difference between the interventions (p=0.44). Post-hoc power of the experiment to detect an overall difference between the interventions was low, varying from approximately 44% to 58%,31 depending on the level of correlation between the answers within each department. In a model fit to include an interaction between intervention and department, the interaction was significant, suggesting that intervention effects differed by department (p=0.03). In that model, access to full text was significantly better than access to abstracts only (p=0.049).
We then compared the effect of the interventions within departments in order to investigate which departments seemed to respond differently from the others with respect to the effect of the interventions. We found no significant difference between the interventions for the Internal Medicine, Emergency Medicine and Family Medicine departments (p=0.73, 0.13 and 0.37, respectively), but there was a difference between the interventions for the Surgery department (p=0.02). The OR for each department is given in table 2. In Surgery, the full-text group had 3.6 times the odds of a correct answer on a case simulation compared to the abstract-only group. Note that the CI for Surgery does not include 1.0, which indicates a significant difference. There were no differences found between the interventions for any other department. Power to detect a difference between the interventions in a specific department was low (approximately 17%) because of the reduced sample size in each group (n=10). We also investigated whether resident year was a significant predictor of clinical decision accuracy, but it was not significant in any model.
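For readers unfamiliar with table 2's ORs, an odds ratio and its Wald confidence interval can be computed directly from a 2x2 table. The counts below are hypothetical, chosen only so the OR works out to the 3.6 reported for Surgery:

```python
import math

# Hypothetical counts (NOT the study data), chosen so the OR equals 3.6:
a, b = 27, 15   # full-text group: accurate, inaccurate
c, d = 10, 20   # abstracts-only group: accurate, inaccurate

odds_ratio = (a * d) / (b * c)            # (a/b) / (c/d) = 3.6

# 95% Wald CI computed on the log-odds scale, then exponentiated
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
# A CI whose lower bound exceeds 1.0 indicates a significant difference.
```

An interval like this is what the "CI for Surgery does not include 1.0" statement refers to; the study's own ORs came from the fitted model, not a raw 2x2 table.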
We calculated the mean percentage of accurate responses to the simulated clinical questions before and after each intervention. Overall, mean accuracy increased from 42% to 68% for the abstract-only intervention, and from 48% to 75% for the full-text intervention. The differences between the scores before and after the two interventions were significant (p<0.0001). Table 3 shows the comparison of these percentages by department and the tests of significance.
When given full-text articles, the departments of Surgery, Internal Medicine and Family Medicine showed significant improvements (p=0.003, 0.03 and <0.0001, respectively), while there was no change for the department of Emergency Medicine (p=1.0). The differences among the departments were significant (p<0.0001) for the full-text intervention group. This suggests that full text was more effective for Surgery, Internal Medicine and Family Medicine, but not for the Emergency Medicine department. However, the sample size was small (n=10 or less) at this level. The effect of the abstract-only intervention seems to have been in a similar direction for all the departments, and no significant difference in effects across departments was detected.
The majority of the residents (86%) indicated that the articles provided in the online version were adequate to answer their questions and 77% indicated that they had actually read the articles. When asked whether they used abstracts-only or full-text articles to answer clinical questions in actual practice, 53 of the 77 residents (69%) indicated that they relied on abstracts most of the time, while only 24 (31%) said they would read the full-text article.
Residents were asked whether or not they felt they needed extra information in order to answer the question correctly. We recorded whether they clicked on the links for the abstract or the full-text. We wanted to answer the question: does a perceived need for more information correlate with how often the physicians actually accessed the links for abstract-only or full-text? For the 157 cases where the resident indicated that they did not require additional information, there were 131 (83%) instances where literature was actually accessed (95% CI 77% to 88%). In contrast, out of the 228 cases where residents indicated that they needed additional information, there were only 12 (5%) cases where they did not actually access literature (95% CI 3% to 9%). Table 4 shows a summary of whether the resident requested additional information and whether they actually accessed literature.
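The paper does not state which method was used for the binomial confidence intervals above; as one standard choice, the Wilson score interval reproduces the reported 77% to 88% for the 131/157 proportion:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = p_hat + z**2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# 131 of 157 cases in which literature was accessed despite no stated need
lo, hi = wilson_ci(131, 157)   # ≈ (0.77, 0.88), matching the reported 77% to 88%
```

The Wilson interval is preferred over the simple Wald interval for proportions near 0 or 1, as with the 5% figure in the same paragraph.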
The main question we wanted to address in this study was whether there is a significant difference in the clinical decisions between residents who have access to abstracts-only and those with access to full-text articles. Overall, our results seem to demonstrate no difference in the accuracy of responses between residents provided with full-text articles and those with abstracts-only (p=0.4415). When we consider the clustering of physicians by department, we found a difference between the two interventions (p=0.0494) but further analysis showed that this difference was observed only in the department of Surgery (p=0.016). The effects of abstracts-only and full-text were not significantly different for the Internal Medicine, Emergency Medicine and Family Medicine departments. However, the study had low power to detect differences between the interventions within a department.
Our study provides preliminary but useful information related to the use of journal abstracts in evidence-based practice. We believe this to be the first report involving physicians that attempts to evaluate how abstracts measure up to full-text articles in guiding clinical decisions. This finding offers support for using ‘consensus abstracts’ (concurring and corroborating abstracts from independently conducted randomised clinical studies and systematic research from meta-analysis and systematic reviews that form the basis of clinical evidence) as a possible alternative when access to full-text is limited or in other circumstances when it is not feasible.2 However, clinicians who want to practice EBM will also find online many summaries, reviews and preappraised resources, available free (TRIP Database, ACP Journal Club, Cochrane Library, etc) or by subscription (UpToDate, 5-Minute Clinical Consult, etc). EBM websites will have links to these resources, and many have applications for mobile devices such as iPhone and Android devices. Our observations set the stage for further research on the role of using abstracts in evidence-based practice. Future studies may include randomised controlled trials with real-time clinical decision-making encountered at the bedside.
EBM encourages the use of timely and relevant information to complement the clinical acumen of clinicians.32 We found that the average improvement in the accuracy of responses across all the departments when either abstracts or full-text articles were provided was significant (p<0.0001 for both interventions). This finding supports previous research regarding the role of medical literature in improving clinical decisions.33–36 However, when individual departments were considered, there seems to be a significant difference between the departments in the full-text intervention group (p=0.0001). This difference in the effect of full-text between the departments appears to be due to the fact that there was no change in the accuracy of responses of Emergency Medicine residents compared to the increase in scores for the other residents when full-text was provided. This may mean that full-text articles were beneficial to Surgery, Family Medicine and Internal Medicine residents but did not benefit Emergency Medicine residents. A possible explanation for this is that the Emergency Medicine department is fast paced and residents may not have the time to read the full-text article. This hypothesis was further supported by data for the abstract-only group where we found no significant difference between the departments on how the intervention improved the accuracy of the responses by residents.
Our study also demonstrated some trends in information-seeking and utilisation of evidence by residents when presented with clinical questions. We observed that although residents indicated that accumulated knowledge was sufficient to answer the questions, in most instances (83.4%), they still accessed the medical literature provided. This observation supports earlier studies that health professionals will use evidence from the literature when they are easily accessible at the time the question arises.37
More than two-thirds of the residents (68.8%) who participated in this study claimed that they commonly used abstracts in seeking answers to their clinical dilemmas. Other studies have reported similar observations. A study by Haynes et al3 found that two-thirds of clinical decisions were influenced by literature even if the full text was not read. Moreover, internists reported that in 63% of the articles they came across, only the abstracts were read.4 These proportions may be even higher among physicians in low- and middle-income countries because of the more limited availability of full-text articles.
The small sample of residents from a tertiary government hospital in the Philippines limits the generalisability of the study to the larger medical community. Simulated clinical cases were used as surrogates for actual clinical encounters that a resident may be presented with. The clinical questions were specific within the realm of the disciplines and are not necessarily comparable to each other. The residents answered only five questions, which reduced the variation in the study. A ‘learning effect’ was considered to explain the higher score during the intervention phase but was deemed unlikely because of the short interval period—the residents took the online version questions immediately after the preintervention session. Furthermore, this study does not address whether access to full text would have more impact than access to the abstract in a complex case, a case in which the details of a treatment or outcome or magnitude or significance might affect practice. It also does not address the impact on standard or routine or long-term practice. Finally, although there was reasonable power to detect a difference between the interventions overall, there was low power to detect differences within a department. It is possible that there were differences between the interventions for each department, but our study did not have a large enough sample to investigate the effect of the intervention at the department level.
In this study, we demonstrated that clinical decisions made by residents improved when evidence, whether abstracts or full-text articles, was provided. However, this study also indicates that some clinical questions may be simple enough to be answered quickly using accumulated knowledge; even so, accumulated knowledge was enhanced by the use of appropriate medical information. The residents, in spite of initially stating that accumulated knowledge was adequate to answer clinical questions, accessed evidence anyway. This confirms previous findings that easy availability of evidence encourages the practice of evidence-based medicine. When clustered by department, clinical decisions guided by full-text articles were more accurate than those guided by abstracts alone, but this difference can be largely attributed to a significant difference in Surgery. The difference may be smaller or absent in the other three departments, but the analysis is not conclusive because of the limited power of this study. Without departmental clustering, the findings suggest that the interventions may not be significantly different.
Funding: This research was supported by the Intramural Research Program of the National Institutes of Health (NIH), National Library of Medicine (NLM) and Lister Hill National Center for Biomedical Communications (LHNCBC).
Disclaimer: The views and opinions of the authors expressed herein do not necessarily state or reflect those of the National Library of Medicine, National Institutes of Health or the US Department of Health and Human Services.
Open Access: This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/