We hypothesized that trainees would perform better using a hypothesis-driven rather than a traditional screening approach to the neurologic examination.
We randomly assigned 16 medical students to perform screening examinations of all major aspects of neurologic function or hypothesis-driven examinations focused on aspects suggested by the history. Each student examined 4 patients, 2 of whom had focal deficits. Outcomes of interest were the correct identification of patients with focal deficits, number of specific deficits detected, and examination duration. Outcomes were assessed by an investigator blinded to group assignments. The McNemar test was used to compare the sensitivity and specificity of the 2 examination methods.
Sensitivity was higher with hypothesis-driven examinations than with screening examinations (78% vs 56%; p = 0.046), although specificity was lower (71% vs 100%; p = 0.046). The hypothesis-driven group identified 61% of specific examination abnormalities, whereas the screening group identified 53% (p = 0.008). Median examination duration was 1 minute shorter in the hypothesis-driven group (7.0 minutes vs 8.0 minutes; p = 0.13).
In this randomized trial comparing 2 methods of neurologic examination, a hypothesis-driven approach resulted in greater sensitivity and a trend toward faster examinations, at the cost of lower specificity, compared with the traditional screening approach. Our findings suggest that a hypothesis-driven approach may be superior when the history is concerning for an acute focal neurologic process.
Many medical trainees find the neurologic examination difficult; a recent survey indicated that graduating medical students do not feel comfortable using the neurologic examination after learning it during a rotation in neurology.1 Despite this, many of these newly minted physicians will need to be proficient in examining patients with neurologic symptoms; for example, neurologic complaints account for 6% of visits to the emergency department.2 Therefore, it is important to determine how we can improve the way the neurologic examination is taught to trainees.
Prior attempts to improve the teaching of the neurologic examination to non-neurologists have pared down the traditional screening examination.1,3 We hypothesize that exclusive reliance on a screening approach may limit the teaching of neurology, because little instruction is devoted to the type of hypothesis-driven examination used by most neurologists. With this approach, an explicit link is made between the presenting symptoms, the possible anatomical lesions suggested by those symptoms, and the appropriate examination maneuvers to look for evidence of those lesions. A hypothesis-driven approach is likely to be more efficient because it requires fewer examination maneuvers. It may also be more accurate; by anticipating specific findings, physicians may be more likely to detect important abnormalities and less likely to be distracted by irrelevant findings.4–6
We hypothesized that neurologic examination skills would be improved by teaching an explicit and systematic approach to a hypothesis-driven neurologic examination. To test this, we compared the accuracy and efficiency of trainees using the traditional screening neurologic examination vs a hypothesis-driven examination.
We conducted a randomized trial in the neurology clinic of a tertiary care academic medical center.
Our study was approved by the University of California, San Francisco (UCSF) Committee on Human Research, and all subjects provided written, informed consent.
To avoid disrupting the established medical school curriculum, we recruited 16 fourth-year medical students who had already completed a core clerkship in neurology. Five patients were recruited, 1 from our patient panels and 4 from the UCSF Kanbar Center for Simulation, Clinical Skills and Telemedicine Education. Four patients participated in each session, during which each patient was assigned a chief complaint potentially due to an acute neurologic process. Several of these patients had neurologic deficits relevant to their chief complaint, including extremely subtle proximal weakness, moderate hemiparesis, and an isolated cranial nerve palsy (table 1). The proportion of patients with and without examination findings was based on available data regarding the pretest probability of focal neurologic deficits in patients presenting with potentially serious neurologic emergencies.7,8
We collected information from the students regarding their demographic characteristics, time since completion of their neurology clerkship, and degree of confidence in performing the neurologic examination, on a scale of 0–10, with 10 indicating the highest degree of confidence. For each patient, we documented age, sex, and the results of a study neurologist's neurologic examination, which served as the gold standard.
Using sealed opaque envelopes, we randomly assigned the students in a 1:1 ratio. Those assigned to the screening approach participated in a 30-minute review of a basic neurologic examination. We used the standard screening examination from the UCSF neurology curriculum, one derived from the neurology clerkship core curriculum proposed by the American Academy of Neurology.9 At the end of the session, the students were given a printed checklist of the maneuvers involved in this examination (table 2) and were advised to follow this checklist in full or focus their examination as they felt appropriate for each clinical scenario.

Students assigned to the hypothesis-driven group participated in a 30-minute session covering an explicit approach to a focused, hypothesis-driven neurologic examination. This approach was based on our own clinical experience and the available evidence.10–15 To maximize its utility in acute settings, it was designed to not require special tools such as reflex hammers or tuning forks. The students in the hypothesis-driven group were given a printed checklist containing this algorithm (table 3) and were advised to perform the maneuvers suggested for each clinical scenario.

To minimize sources of variation related to students' interviewing skills, students in both groups were provided with each patient's presenting complaint before the examination and were not given an opportunity to interview patients. Patients were instructed to not answer questions or otherwise engage with the examiner. Each student examined all 4 patients in a single session. The study was conducted during 3 sessions spread over 2 weeks. Students who completed a session were asked to not discuss the study with their peers until the end of the study.
After each examination, the students were asked to report the results of all examination maneuvers, whether or not the patient had a focal neurologic deficit, and their degree of confidence in this assessment, again on a 0–10 scale, with 10 indicating the highest degree of confidence. Students recorded their findings on a structured data collection form, which was used as the basis for evaluating their examination performance. We used the average of the 4 postexamination self-assessments for comparison with the baseline self-assessment. Our outcomes of interest were the correct identification of patients with focal neurologic deficits, the number of specific deficits detected, the duration of the examination, and the change in the students' degree of confidence in their examination. Outcomes were assessed by an investigator blinded to group assignments.
We calculated the sensitivity and specificity of the screening and hypothesis-driven methods in identifying patients with focal neurologic deficits. Because both groups examined the same patients, we compared their sensitivity and specificity using the McNemar test for paired proportions.16 We also used the McNemar test to compare the number of specific deficits detected. Because our data were not normally distributed, we used the Wilcoxon signed rank and rank sum tests to compare continuous data. All analyses were performed using Stata (version 10; StataCorp, College Station, TX).
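For readers without access to Stata, the exact McNemar test for paired proportions can be sketched in a few lines of Python using only the standard library. The discordant-pair counts in the example are illustrative and are not taken from the study data.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test for paired dichotomous ratings.

    b: pairs where method A was correct and method B was wrong.
    c: pairs where method B was correct and method A was wrong.
    Concordant pairs do not affect the test and are omitted.
    Returns the two-sided exact p-value (binomial with p = 0.5).
    """
    n = b + c                      # total discordant pairs
    k = min(b, c)                  # the smaller discordant count
    # Cumulative binomial tail P(X <= k) under the null of no difference
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)      # two-sided, capped at 1

# Illustrative counts: the two methods disagree on 9 paired ratings
p = mcnemar_exact(1, 8)            # 0.0390625
```

Because concordant pairs carry no information about which method is better, only the discordant counts enter the calculation; the exact binomial form is preferable to the chi-square approximation when, as here, the number of discordant pairs is small.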
There were no significant differences between the 2 groups in the students' baseline characteristics (table 4). The sensitivity of the students in the hypothesis-driven group for identifying patients with any focal neurologic deficit was 78% compared with 56% for those in the screening group (p = 0.046). The specificity was 71% in the hypothesis-driven group compared with 100% in the screening group (p = 0.046). The 2 patients with deficits had a total of 11 specific examination findings (e.g., facial weakness and pronator drift) between them; the students in the hypothesis-driven group identified 61% of these abnormalities, whereas those in the screening group identified 53% (p = 0.008).
The students' sensitivity for detecting these specific findings ranged widely; all students (100%) correctly identified a moderate hemiparesis, whereas only 25% identified a subtle fourth-nerve palsy. In the one patient with a nonfocal complaint of diffuse weakness, subtle proximal weakness was correctly identified by 5 of 8 students (63%) in the hypothesis-driven group and 2 of 8 students (25%) in the screening group.
The median examination duration was 7.0 minutes (interquartile range [IQR] 2.5 minutes) in the hypothesis-driven group and 8.0 minutes (IQR 4 minutes) in the screening group (p = 0.13).
After the study sessions, the students' assessment of confidence in their neurologic examination improved significantly from baseline in both groups, by 2.1 points in the hypothesis-driven group (p = 0.017) and 1.4 points in the screening group (p = 0.042). There was no significant difference between the groups in the degree of improved confidence (p = 0.429).
In this randomized trial comparing 2 methods of neurologic examination by trainees, a hypothesis-driven approach resulted in greater sensitivity and a trend toward shorter examination times at the cost of lower specificity than with the traditional screening approach.
Our findings suggest that a hypothesis-driven approach may be superior when the history is concerning for an acute focal neurologic process. On the one hand, the higher specificity of the screening examination may result in fewer false findings and therefore less unnecessary testing and consultation, favoring its use in low-risk settings. Furthermore, our hypothesis-driven approach relies on linking specific symptoms to worst-case anatomical locations (for example, acute bilateral leg weakness is assumed to be from a spinal cord lesion until proven otherwise), whereas a screening examination may be more helpful in patients with an unclear history or multifocal complaints, because it can help generate hypotheses and ensure that alternative diagnoses are not missed. Conversely, a hypothesis-driven approach may be superior in acute situations with a high likelihood of serious disease, because higher sensitivity ensures that patients with focal lesions are reliably identified and referred for appropriate testing and treatment. Therefore, our study supports supplementing traditional methods of teaching the neurologic examination with a hypothesis-driven approach.
This study involved both pedagogical and utilitarian aspects, because we examined the performance of students who both learned (more specifically, reviewed) and performed the neurologic examination using 2 different strategies. Further studies will be required to more clearly delineate these 2 aspects when the hypothesis-driven approach is compared with the traditional screening approach. In particular, larger studies will be required to measure the impact of screening vs hypothesis-driven examination strategies on providers' usage of tests and imaging, the rates of correct diagnoses, and ultimately patients' clinical outcomes. In parallel, it will be important to measure the comparative effects of these strategies on learners' understanding of neurology. For example, we omitted reflex testing from the hypothesis-driven strategy to increase its utility for non-neurologists such as emergency physicians and hospitalists, who often do not carry reflex hammers. This has pedagogical ramifications, because students must at some point develop the ability to accurately test and interpret reflexes, which can be critical in certain clinical scenarios, such as the acute presentation of Guillain-Barré syndrome. Furthermore, our results emphasize that the quality of a neurologic examination depends on properly obtaining a history, establishing a neuroanatomical localization, and formulating a differential diagnosis; these areas are thus important topics for further study.
The findings of this study should be interpreted in light of several limitations of its design. Some may disagree with our choice of examination maneuvers for specific situations. We created the algorithm for hypothesis-driven examinations on the basis of our own clinical experience and the limited evidence available10–15; certainly, more high-quality research on the utility of examination findings in specific settings is required. In addition, it may be argued that physicians learn through experience to appropriately focus their neurologic examinations. However, if this is the case, early teaching of an explicit approach may instill more robust examination skills and confidence in younger trainees. Conversely, we anticipate that some neurologists will disagree with such a focused and algorithmic approach and view it as yet another blow against the art of the neurologic examination. In response, we would stress that we are not proposing to change or abandon the screening neurologic examination, which all physicians should learn, not least because it involves a laying on of hands that is of timeless value but increasingly endangered. Instead, we wish to add a supplemental approach that our results suggest is superior in acute settings. From a pedagogical perspective, further studies will be required to determine the optimal time to introduce this approach into the neurology curriculum. Furthermore, our study emphasized acute, focal presentations of neurologic disease, thereby limiting our ability to comment on the utility of hypothesis-driven examinations outside this setting. Finally, the students in our study were given printed checklists to aid them during their examinations, and it may be argued that this does not replicate real-world conditions. However, we primarily wished to compare the actual performance of the 2 methods of examination and not students' ability to absorb them in a single 30-minute session. 
With the increasing use of electronic aids in medicine, such checklists may become even more practical to use, at least until students have internalized them. Nevertheless, future studies should examine the teachability and ease of memorization of the 2 methods.
In the meantime, physicians will continue to face difficult and urgent diagnostic decisions in patients presenting with acute neurologic symptoms. The neurologic examination is indispensable in these situations, but it is a complex tool that can be difficult to master. Our study suggests that its performance and usability can be improved by supplementing traditional teaching with a focused, hypothesis-driven approach.
Editorial, page 1328
Dr. Kamel: drafting/revising the manuscript, study concept or design, analysis or interpretation of data, acquisition of data, statistical analysis, study supervision, and obtaining funding. Dr. Dhaliwal: drafting/revising the manuscript and study concept or design. Dr. Navi: drafting/revising the manuscript, study concept or design, analysis or interpretation of data, acquisition of data, and study supervision. Dr. Pease: drafting/revising the manuscript. Dr. Shah: drafting/revising the manuscript, acquisition of data, and study supervision. Dr. Dhand: drafting/revising the manuscript and acquisition of data. Dr. Johnston: study concept or design, analysis or interpretation of data, study supervision, and statistical analysis. Dr. Josephson: drafting/revising the manuscript, study concept or design, analysis or interpretation of data, and study supervision.
Dr. Kamel, Dr. Dhaliwal, Dr. Navi, Dr. Pease, Dr. Shah, and Dr. Dhand report no disclosures. Dr. Johnston is co-holder of patent on the RNA panel to identify and risk stratify TIA and receives research support from Sanofi-aventis, Stryker Neurovascular, Boston Scientific, the NIH (NCRR, CTSA, NINDS), Kaiser-Permanente, the AHA/ASA, and the Bugher Foundation. Dr. Josephson serves as an Associate Editor for Annals of Neurology and for The Neurohospitalist and as Editor-in-Chief for Journal Watch Neurology.