Patient Educ Couns. Author manuscript; available in PMC Jul 1, 2010.
PMCID: PMC2765808
NIHMSID: NIHMS128303
Impact of Two Types of Internet-based Information on Medical Students’ Performance in an Objective Structured Clinical Exam (OSCE)
William G. Elder, Jr., Paul L. Dassow, Geza G. Bruckner, and Terry D. Stratton
William G. Elder, Jr., Department of Family & Community Medicine, University of Kentucky College of Medicine, Lexington, Kentucky;
Please direct correspondence to: William Elder, Ph.D., Department of Family and Community Medicine, University of Kentucky College of Medicine, 800 Rose Street, Lexington, KY, USA, 40536-0298; phone: (859) 257-9569; fax: (859) 323-6661; e-mail: welder@email.uky.edu
Objectives
Internet-based information has potential to impact physician-patient relationships. This study examined medical students’ interpretation and response to such information presented during an objective clinical examination.
Method
Ninety-three medical students who had received training in a patient-centered response to inquiries about alternative treatments completed a comprehensive examination in their third year. In one of twelve objective structured clinical exam stations, a standardized patient (SP) presented internet-based information on l-theanine, an amino acid available as a supplement. In Condition A, the materials were from commercial websites; in Condition B, the materials were from the PubMed website.
Results
Analyses revealed no significant differences between Conditions in student performance or patient (SP) satisfaction. Students in Condition A rated the information less compelling than students in Condition B (z = −1.78, p = .037), and attributed less of the treatment’s action to real vs. placebo effects (z = −1.61, p = .053).
Conclusions
Students trained in a patient-centered response to inquiries about alternative treatment perceived the credibility of the two types of internet-based information differently, but were able to respond to the patient without jeopardizing patient satisfaction. Their approach to the information, however, was superficial. Training in information evaluation may be warranted.
Keywords: internet-based information, physician-patient relationship, patient counseling, alternative medicine, student evaluation, evidence-based medicine
Health information is one of the most frequently sought topics on the internet (1), and the use of internet-derived information within the health care encounter is increasing; its presence has the potential to significantly alter the dynamics of the physician-patient relationship (2). Most patients see their internet searches as complementing rather than opposing professional expertise and do not intend their research to disrupt the balance of power, or roles, with their physician (3). They appear to view the internet as an additional resource to support existing relationships with their physician (4) and are seeking the physician's opinion rather than a specific intervention (5). Physicians may respond to patient internet health searches either defensively or in a patient-centered manner (1). Validation of patient efforts is associated with higher patient satisfaction and reduced concern, and showing that the information is being seriously considered may reduce negative effects when there is disagreement over treatment (6).
Interest in and use of complementary and alternative medicine (CAM) is increasing (7), and information about CAM is available from a wide array of sources on the internet, including sites developed by the NIH specifically to disseminate accurate information to the public. Many medical training programs now offer courses containing some CAM content (8, 9). Although debate continues over what and how much CAM content should be taught (8), a frequent recommendation is that physicians be prepared to respond in a patient-centered, evidence-based manner to patient inquiries about CAM. For example, Frenkel and Borkan (10) suggest that physicians' responsibilities regarding their patients' use of CAM should include the ability to evaluate the appropriateness of the treatment, maintain patient contact, and monitor outcomes. They further state that any advice offered must be based, among other things, on "the safety of the method in question" and "current knowledge on indications and contraindications of that modality" (10). Similarly, Owen and Lewith (11) suggest that trainees should be able to constructively and critically appraise the relative merits and claims of different treatments, and to assess and communicate meaningfully with patients who request or may benefit from a treatment. Evidence-based decisions are a challenge for physicians faced with time pressures, voluminous scientific data, a lack of pertinent information on many clinical questions, and inexperience in judging levels of evidence (12). As with any decision, attitudes towards CAM may affect the data considered and the response to the patient (13, 14). More open attitudes towards CAM may develop with professional development, especially training that emphasizes current research on CAM (15).
At the (information omitted for review), one of fifteen recipients of National Institutes of Health funding to enhance training on CAM (9), a curriculum was developed to encourage a patient-centered response to patients' use of, or questions about, CAM treatments. As part of interviewing training during a first-year course, students were taught to acknowledge their degree of familiarity with the treatment, defer judgment on its safety/efficacy until they were able to investigate it, respect patients' interest in the treatment for their condition, and schedule a follow-up visit to discuss its use. In a second-year course, these students received training in evidence-based medicine principles, the application of statistics to clinical problems (e.g., base rates, number needed to treat, etc.), and accessing the medical literature.
Using an Objective Structured Clinical Exam (OSCE) format, this study sought to examine three research questions: (1) Can students convey their assessments of evidence in a professional, non-judgmental manner? (2) Can students distinguish between sources of information in assessing the credibility of scientific evidence? (3) Are students' assessments of evidence related to the perceived mechanism of action (real vs. placebo)?
As part of a required, 12-station comprehensive clinical performance exam, third-year medical students (n=93) completed a 15-minute OSCE where they were expected to take a focused history and provide appropriate counseling. OSCEs typically use standardized patients (SPs) trained to follow a predefined “script” in a simulated context (16). Performance within this controlled setting can be reliably measured via trained observers, typically the SPs themselves using a behavioral checklist.
In this OSCE, a 50-year old Caucasian female (Temp 98.8° F, HR 86, RR 16, BP 110/60, Weight 130 lbs., Height 5’6”) presents to the primary care clinic complaining of feeling stressed and anxious. No references to CAM or self-care are made. After the student has completed the history-taking portion of the station, the SP is instructed to ask, without solicitation, about the stress-reducing effectiveness of l-theanine, a naturally-occurring amino acid commonly found in green tea and available as a dietary supplement in extract form. Simultaneously, the SP produces one of two forms of information regarding l-theanine: testimonials from two commercial websites (Condition A, shown in Figure 1), or two PubMed research abstracts (Condition B, shown in Figure 2). [Maintained by the U.S. National Library of Medicine, PubMed is a searchable, publicly-accessible database that includes over 16 million citations from Medline® and other sources.] Source materials were of approximately equal length, contained highlighted excerpts, and described generally encouraging results from use of the supplements. Neither type of information provided results of a scientifically valid study of l-theanine regarding decreasing stress/anxiety in a clinical population.
Figure 1. Condition A: Commercial Information
Figure 2. Condition B: PubMed Information
Immediately following each patient encounter, the SP completed a checklist for 36 observed history taking and counseling behaviors. In addition to asking general history-related items, examinees’ counseling performance was based on their abilities to: 1) acknowledge a limited familiarity with l-theanine, 2) defer judgment on safety/efficacy of l-theanine until a later time, 3) respect patients’ interest in l-theanine as a treatment for stress/anxiety, and 4) schedule a follow-up visit to discuss the use of l-theanine. Lastly, using a 5-point Likert-type scale, the SP completed a 10-item “global” ratings scale assessing the quality of students’ communication and interpersonal skills. Copies of ratings scales are available from the authors. Prior to the exam, all SPs underwent routine reliability testing to ensure their familiarity with the case and their accuracy in observing and recording key behaviors.
Following each patient encounter, students were given five minutes to rate, on a 1–10 scale: the credibility of the evidence presented (i.e., "How would you rate the evidence on l-theanine for stress/anxiety presented by this patient?"), and the proportional benefit (real versus placebo) of l-theanine in relieving stress/anxiety. Since students' schedules for the larger CPX exam were pre-established – and the ordering of exam stations was not mutable – randomization of the script (version) occurred at the level of the SP. However, although SPs were instructed to randomly alter the version of the case presented, they were also encouraged to do so in blocks of 2–3 students in order to minimize examiner error. As a result, strict formal randomization did not occur, although roughly comparable group sizes were achieved.
A chi-square test for two independent samples was used to examine differences in nominal attributes between treatment conditions. Where distributional assumptions were met, the Student's t test for independent samples was used to examine mean differences. As a non-parametric alternative to the t test, the Mann-Whitney U test was used to examine median (ranked) differences. All tests were one-tailed, with alpha set at .05. Power analysis for the subjective ratings by SPs revealed 95% power to detect a 2-point difference between groups using a one-tailed test with alpha = .05. The study protocol was approved by the medical center institutional review board (IRB).
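As a concrete sketch, the three tests described above can be reproduced with SciPy. All of the data below are hypothetical placeholders (randomly generated ratings and made-up cell counts), not the study's actual data:

```python
# Illustrative sketch of the statistical tests described above, using SciPy.
# All data here are hypothetical placeholders, not the study's actual data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cond_a = rng.integers(1, 11, size=53)  # hypothetical 1-10 ratings, commercial websites
cond_b = rng.integers(1, 11, size=40)  # hypothetical 1-10 ratings, PubMed abstracts

# Mann-Whitney U test on ranked ratings (one-tailed, as in the study)
u_stat, u_p = stats.mannwhitneyu(cond_a, cond_b, alternative="less")

# Student's t test for independent samples (where distributional assumptions hold)
t_stat, t_p = stats.ttest_ind(cond_a, cond_b)

# Chi-square test for a nominal attribute (e.g., gender by condition);
# the 2x2 cell counts below are invented for illustration
table = np.array([[30, 23],
                  [21, 19]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"Mann-Whitney U={u_stat:.1f} (p={u_p:.3f}); chi-square dof={dof}")
```

Note that `alternative="less"` makes the Mann-Whitney test one-tailed, matching the study's stated testing strategy.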
Ninety-three medical students (51 males, 42 females) completed the OSCE exercise. Fifty-three were shown the commercial website information (Condition A) by SPs, while 40 received the PubMed research abstracts (Condition B).
When grouped by Condition, analyses revealed no significant differences in overall objective station performance, overall subjective station performance, or patient (SP) satisfaction (see Table 1). There were no significant differences by gender.
Table 1. Equivalency of Randomly-Assigned Treatment Groups and Differences in Assessments of Evidence and Mechanisms of Action
Internal consistency of the 36-item OSCE checklist was good (alpha = 0.79). Student scores on the objective portion ranged from 33–92% correct, with a mean of 63.6% (SD = 12.6). On the subjective component, scores were more narrowly distributed (85.0–100.0%), averaging 97.6% (SD = 2.6). On both objective and subjective components, female students posted significantly higher scores than did males (t = 1.96, p = .05; t = 2.06, p = .04, respectively).
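For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha for a k-item scale is k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch, using a randomly generated binary response matrix rather than the study's checklist data:

```python
# Minimal sketch of Cronbach's alpha (internal consistency) for a k-item
# checklist; the response matrix below is randomly generated, not the
# study's actual data (random items yield alpha near zero, while
# correlated items push alpha toward 1).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return float(k / (k - 1) * (1 - item_vars.sum() / total_var))

rng = np.random.default_rng(1)
scores = rng.integers(0, 2, size=(93, 36)).astype(float)  # 93 examinees x 36 binary items
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.3f}")
```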
Regarding perceived credibility of the l-theanine information, results of a Mann-Whitney test indicated that students given the PubMed research abstracts (Condition B) rated this form of information as significantly more compelling than those shown information from the commercial websites (Condition A) (z = −1.78, p = .037). Mann-Whitney results also showed that students assessing the commercial website information in Condition A attributed more of the perceived effects to placebo than those reviewing the PubMed information (z = −1.61, p = .053), although this difference fell just short of statistical significance. Regardless of Condition, students' assessments of the information were positively correlated with their perception of the efficacy (real vs. placebo) of l-theanine on stress/anxiety (rs = 0.38, p < .001).
Student performance on subjective and objective exam components was not significantly related to student perceptions of information credibility or treatment efficacy.
4.1 Discussion
In this study, we varied the type of internet-based information presented, assuming that less credible information might produce differences in student responses. While students did assess the credibility of the two types of information differently, they were able to respond to patients presenting either type without jeopardizing patient satisfaction.
This study demonstrated a number of intriguing characteristics about how medical students consider medical information brought to them by patients. All of the medical students who participated in this study were presented with information regarding a possible treatment for the chief complaint (i.e., stress and anxiety). One group of students was presented with essentially marketing information - including a testimonial and unsubstantiated claims that “recent studies have shown that … l-theanine has the ability to promote generation of alpha waves” (Condition A). In contrast, the other group was presented with two PubMed abstracts summarizing studies of amino acids and biochemical functioning – neither of which mentioned l-theanine for the treatment of stress/anxiety in humans (Condition B).
The students appropriately noted that the source of information has implications for its credibility: marketing information is not subject to the same standards of evidence or peer review as journal articles. Students in Condition B, however, imputed credibility to the use of l-theanine for the treatment of stress/anxiety based solely on the source of the information – when, in fact, none was directly shown. In reality, neither group was presented with credible data supporting or refuting the therapeutic use of l-theanine, so neither group should have ascribed any substantial credibility to the information presented. These students, who had received second-year training in evidence-based medicine principles, displayed only a cursory, nominal approach to evaluating the studies; that is, they noted the credibility of the source but failed to recognize that the studies were not directly relevant to the question at hand. While such superficial analysis may have resulted from inadequate time to fully assess each study, the ability to quickly ascertain the relevance of medical information is a crucial yet challenging skill for all health care providers (17).
In a related question, students were asked their impressions of the likelihood that l-theanine had pertinent biochemical effects. Those who were presented with the marketing information (Condition A) estimated that any effects were more likely related to placebo than biochemical changes (with a p-value just over 0.05). Given that neither group was given credible data to support or refute l-theanine’s efficacy, students should have made this decision independent of the information given, based on their knowledge of amino acids and their possible effects on stress/anxiety. The fact that the groups differed on this judgment indicates a bias related to the type of information presented. Such subtle effects on students’ judgments need to be recognized and prevented through training.
Educators should remind students that the source of information, while important, is secondary to the specific application of any related findings to the situation at hand. Moreover, research findings alone are not credible substitutes for a thorough understanding of the underlying mechanisms of action.
Several limitations to this study should be recognized. In the context of an OSCE, students may have lacked adequate time to fully assess the credibility of the evidence presented; it could be argued, however, that this situation closely mirrors the time restrictions many providers experience in daily practice, which speaks to the importance of skill development in evidence-based medicine. Regarding student responses and their impact on patient counseling, first, no comparison condition was included that would enable judgment of student performance when the patient does not present internet-based information. Second, although the 10-item global ratings scale developed for the study exhibited adequate content validity, it was not subjected to more rigorous validation. Finally, although other SPs also rated students on the physician-patient relationship, possible differences among SPs and OSCE stations prevent comparisons of performance in "unchallenged" conditions. It is possible that our students, having received extra exposure to CAM in their curricula, responded in a more patient-centered manner to internet-based information about a supplement; because we did not compare groups with and without this training, we cannot confirm this, though we hope it is the case.
4.2 Conclusion
Students trained in a patient-centered response to inquiries about alternative treatment perceived the credibility of the two types of internet-based information differently, but were able to respond to the patient without jeopardizing patient satisfaction. Their approach to the information, however, was cursory and superficial. Typical training in evidence-based medicine may be insufficient, and additional training in information evaluation may be of value.
4.3 Recommendations for Practice
Contributor Information
William G. Elder, Jr., Department of Family & Community Medicine, University of Kentucky College of Medicine, Lexington, Kentucky.
Paul L. Dassow, Department of Family & Community Medicine, University of Kentucky.
Geza G. Bruckner, Department of Clinical Nutrition, University of Kentucky.
Terry D. Stratton, Department of Behavioral Science, University of Kentucky.
1. McMullan M. Patients using the Internet to obtain health information: how this affects the patient-health professional relationship. Patient Educ Couns. 2006;63:24–28.
2. Wald H, Dube C, Anthony D. Untangling the Web – the impact of Internet use on health care and the physician-patient relationship. Patient Educ Couns. 2007;68:218–224.
3. Kivits J. Informed patients and the internet: a mediated context for consultations with health professionals. J Health Psychol. 2006;11:269–282.
4. Stevenson F, Kerr C, Murray E, Nazareth I. Information from the Internet and the doctor-patient relationship: the patient perspective – a qualitative study. BMC Fam Pract. 2007;8:47.
5. Murray E, Lo B, Pollack L, Donelan K, Catania J, White M, Zapert K, Turner R. The impact of health information on the internet on the physician-patient relationship: patient perceptions. Arch Intern Med. 2003;163:1727–1734.
6. Bylund C, Sabee C, Imes R, Sanford A. Exploration of the construct of reliance among patients who talk with their providers about internet information. J Health Commun. 2007;12:17–28.
7. Barnes P, Powell-Griner E, McFann K, Nahin R. Complementary and Alternative Medicine Use among Adults: United States, 2002. Atlanta, GA: CDC; May 27, 2004. CDC Advance Data Report #343.
8. Brokaw JJ, Tunnicliff G, Raess BU, Saxon DW. The teaching of complementary and alternative medicine in U.S. medical schools: a survey of course directors. Acad Med. 2002;77:876–881.
9. Haramati A, Elder WG, Heitkemper M, Warber S. Insights from educational initiatives in complementary and alternative medicine. Acad Med. 2007;82(10):919–920.
10. Frenkel MA, Borkan JM. An approach for integrating complementary-alternative medicine into primary care. Fam Pract. 2003;20:324–332.
11. Owen D, Lewith GT. Complementary and alternative medicine (CAM) in the undergraduate medical curriculum: the Southampton experience. Med Educ. 2001;35:73–77.
12. Linton AM, Wilson PH, Gomes A, Abate L, Mintz M. Evaluation of evidence based medicine search skill in the clinical years. Med Ref Serv Q. 2004;23:21.
13. Maha N, Shaw A. Academic doctors' views of complementary and alternative medicine (CAM) and its role within the NHS: an exploratory qualitative study. BMC Complement Altern Med. 2007;7:17.
14. Sawni A, Thomas R. Pediatricians' attitudes, experience and referral patterns regarding complementary/alternative medicine: a national survey. BMC Complement Altern Med. 2007;7:18.
15. Hewson MG, Copeland HL, Mascha E, Arrigain S, Topol E, Fox JE. Integrative medicine: implementation and evaluation of a professional development program using experiential learning and conceptual change teaching approaches. Patient Educ Couns. 2006;62(1):5–12.
16. Newble DI. Techniques for measuring clinical competence: objective structured clinical exams. Med Educ. 2004;38:199–203.
17. Green ML, Ruff TR. Why do residents fail to answer their clinical question? A qualitative study of barriers to practicing EBM. Acad Med. 2005;80:176–182.