1.  An investigation of professionalism reflected by student comments on formative virtual patient encounters 
Background
This study explored the use of data generated during virtual patient encounters by investigating the association between unprofessional patient summary statements that students entered during an online virtual patient case and subsequent detection of unprofessional behavior.
Method
At the Uniformed Services University of the Health Sciences (USUHS), students complete a number of virtual patient encounters, each including a patient summary, to meet the clerkship requirements of Internal Medicine, Family Medicine, and Pediatrics. We reviewed the summary statements of 343 students who graduated in 2012 and 2013. Each statement was rated on four features: Unprofessional, Professional, Equivocal (could be construed as unprofessional), and Unanswered (the student did not enter a statement). We also combined Unprofessional and Equivocal into a new category indicating a statement that received either rating. We then examined the associations between students’ scores on these categories (i.e., whether or not they received a particular rating) and the Expertise and Professionalism scores derived from a post-graduate year one (PGY-1) program director (PD) evaluation form. The PD form contained 58 Likert-scale items designed to measure the two constructs (Expertise and Professionalism).
Results
Inter-rater reliability of the statement coding was high (Cohen’s kappa = .97). Receiving an Unprofessional or Equivocal rating was significantly correlated with a lower Expertise score (r = −.19, P < .05) and a lower Professionalism score (r = −.17, P < .05) during PGY-1.
Conclusion
Most schools rely on incident reports and review of routine student evaluations to identify the majority of professionalism lapses. Unprofessionalism reflected in students’ virtual patient entries may provide additional markers that foreshadow subsequent unprofessional behavior.
doi:10.1186/s12909-016-0840-9
PMCID: PMC5217219  PMID: 28056962
Virtual patient encounter; Professionalism; Residency evaluation
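The two statistics reported above map onto standard routines. Below is a minimal sketch of how Cohen’s kappa for two raters and the point-biserial correlation between a binary rating flag and a continuous PGY-1 score can be computed; the labels and scores are invented for illustration, not the study’s data.

```python
# Hypothetical sketch of the two statistics reported above; the labels and
# scores below are invented, not the study's data.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pointbiserialr

# Inter-rater reliability: each rater assigns one category per summary statement.
rater_a = ["Professional", "Professional", "Equivocal", "Unprofessional", "Unanswered"]
rater_b = ["Professional", "Professional", "Equivocal", "Unprofessional", "Professional"]
print(cohen_kappa_score(rater_a, rater_b))  # the paper reports kappa = .97

# Correlating a binary Unprofessional-or-Equivocal flag with a continuous
# PGY-1 Expertise score is a point-biserial correlation.
flag      = [0, 0, 1, 1, 0]
expertise = [4.2, 3.9, 3.1, 2.8, 4.5]
r, p = pointbiserialr(flag, expertise)
print(r, p)  # the paper reports r = -.19 with its real data
```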
2.  Education and the island of misfit toys 
doi:10.1007/s40037-016-0309-x
PMCID: PMC5122518  PMID: 27766577
3.  Teaching metacognition in clinical decision-making using a novel mnemonic checklist: an exploratory study 
Singapore Medical Journal  2016;57(12):694-700.
INTRODUCTION
Metacognition is a cognitive debiasing strategy that clinicians can use to deliberately detach themselves from the immediate context of a clinical decision and reflect upon their thinking process. However, cognitive debiasing strategies are often most needed when the clinician cannot afford the time to use them. A mnemonic checklist known as TWED (T = threat, W = what else, E = evidence, D = dispositional factors) was recently created to facilitate metacognition. This study tested the hypothesis that the TWED checklist improves medical students’ clinical decision-making.
METHODS
Two groups of final-year medical students from Universiti Sains Malaysia, Malaysia, were recruited for this quasi-experimental study. The intervention group (n = 21) received an educational intervention introducing the TWED checklist, while the control group (n = 19) received a tutorial on basic electrocardiography. Post-intervention, both groups received a similar assessment of clinical decision-making based on five case scenarios.
RESULTS
The mean score of the intervention group was significantly higher than that of the control group (18.50 ± 4.45 marks vs. 12.50 ± 2.84 marks, p < 0.001). In three of the five case scenarios, students in the intervention group obtained higher scores than those in the control group.
CONCLUSION
The results of this study support the use of the TWED checklist to facilitate metacognition in clinical decision-making.
doi:10.11622/smedj.2016015
PMCID: PMC5165179  PMID: 26778635
checklist; cognitive bias; cognitive debiasing strategy; medical education; metacognition
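Because the abstract reports group means, SDs, and sizes, the between-group comparison can be reproduced from those summary statistics alone. A sketch follows; whether the authors assumed equal variances is not stated, so this version does.

```python
# Reproduce the intervention-vs-control comparison from the reported summary
# statistics (18.50 +/- 4.45, n = 21 vs. 12.50 +/- 2.84, n = 19). The authors'
# exact test variant is not stated; equal variances are assumed here.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=18.50, std1=4.45, nobs1=21,
                            mean2=12.50, std2=2.84, nobs2=19)
print(f"t = {t:.2f}, p = {p:.6f}")  # consistent with the reported p < 0.001
```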
4.  Pre-clerkship clinical skills and clinical reasoning course performance: Explaining the variance in clerkship performance 
Introduction
Evidence suggests that pre-clerkship courses in clinical skills and clinical reasoning positively affect student performance during the clerkship. Given the increasing emphasis on reducing diagnostic reasoning errors, developing this critical area of medical education is important. An integrated approach across clinical skills and clinical reasoning courses may better identify struggling learners and allow scarce remediation resources to be allocated before the clerkship.
Methods
Pre-clerkship and clerkship outcome measures from 514 medical students graduating between 2009 and 2011 were analyzed in a multiple linear regression model.
Results
Learners with poor performance on integrated pre-clerkship outcome measures had relative risks of 6.96 and 5.85 for poor performance on National Board of Medical Examiners (NBME) subject exams and clerkship performance, respectively. Pre-clerkship measures explained 22% of the variance in clerkship NBME subject exam scores and 20.2% of the variance in clerkship grades.
Discussion
Pre-clerkship outcome measures from clinical skills and clinical reasoning courses explained a significant amount of clerkship performance beyond baseline academic ability. These courses provide valuable information regarding student abilities and may serve as an early indicator of students requiring remediation.
Conclusions
Integrating pre-clerkship outcome measures may help ensure the validity of this information as the pre-clerkship curriculum becomes compressed, and may serve as the basis for identifying students in need of clinical skills remediation.
doi:10.1007/s40037-016-0287-z
PMCID: PMC4978640  PMID: 27432368
Clinical skills; Clinical reasoning; Struggling learners
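The relative risks above come from comparing outcome rates between low and adequate pre-clerkship performers. A sketch with invented counts follows; the abstract does not publish the underlying 2×2 table.

```python
# Relative risk from a 2x2 table. The counts are invented for illustration;
# the abstract reports RRs of 6.96 and 5.85 but not the underlying table.
def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """RR = risk among the exposed / risk among the unexposed."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# e.g., 14 of 50 low pre-clerkship performers vs. 19 of 464 others
# performing poorly on the NBME subject exam
print(f"RR = {relative_risk(14, 50, 19, 464):.2f}")
```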
5.  A portable mnemonic to facilitate checking for cognitive errors 
BMC Research Notes  2016;9:445.
Background
Although a clinician may intend to carry out strategies to reduce cognitive errors, this intention may not be realized, especially under heavy workloads or following interruptions. Implementing such strategies in the clinical setting may be facilitated by a portable mnemonic in the form of a checklist.
Methods
A two-stage approach using both qualitative and quantitative methods was used to develop and evaluate the mnemonic checklist. In the development stage, a focused literature search and a face-to-face discussion with a content expert in cognitive errors were carried out, and the categories of cognitive errors addressed and represented in the checklist were identified. In the judgment stage, the face and content validity of these categories were determined by coding the responses of a panel of experts in cognitive errors.
Results
The development stage produced a preliminary version of the checklist in the form of four questions, each represented by a letter. The letter ‘T’ in the TWED checklist stands for ‘Threat’ (i.e., ‘Is there any life or limb threat that I need to rule out in this patient?’), ‘W’ for ‘Wrong/What else’ (i.e., ‘What if I am wrong? What else could it be?’), ‘E’ for ‘Evidence’ (i.e., ‘Do I have sufficient evidence to support or exclude this diagnosis?’), and ‘D’ for ‘Dispositional factors’ (i.e., ‘Is there any dispositional factor that influences my decision?’). In the judgment stage, the content validity of most categories of cognitive errors addressed in the checklist was rated highly for relevance and representativeness (modified kappa values ranging from 0.65 to 1.0). Based on the coding of responses from seven experts, the checklist was judged sufficiently comprehensive to activate the implementation intention of checking for cognitive errors.
Conclusion
The TWED checklist is a portable mnemonic checklist that can be used to activate implementation intentions for checking cognitive errors in clinical settings. While its mnemonic structure eases recall, its brevity makes it portable for quick application in every clinical case until it becomes habitual in daily clinical practice.
doi:10.1186/s13104-016-2249-2
PMCID: PMC5027116  PMID: 27639851
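The “modified kappa” used to judge content validity is commonly the Polit-Beck adjustment of the item-level content validity index (I-CVI) for chance agreement. Assuming that is the variant meant here, a sketch:

```python
# Modified kappa for item content validity, assuming the common Polit-Beck
# formulation: adjust the item-level CVI (proportion of experts rating the
# item relevant) for the probability of chance agreement.
from math import comb

def modified_kappa(n_agree: int, n_experts: int) -> float:
    i_cvi = n_agree / n_experts
    p_chance = comb(n_experts, n_agree) * 0.5 ** n_experts
    return (i_cvi - p_chance) / (1 - p_chance)

# With the study's panel of seven experts:
print(modified_kappa(7, 7))  # unanimous agreement -> 1.0
print(modified_kappa(6, 7))  # 6 of 7 agree -> ~0.85
```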
6.  Beyond standard checklist assessment: Question sequence may impact student performance 
Introduction
Clinical encounters are often assessed using a checklist. Without direct faculty observation, however, the timing and sequence of questions are not captured. We theorized that the sequence of questions can be captured and measured using coherence scores, which may distinguish between low- and high-performing candidates.
Methods
A logical sequence of key features was determined using the standard case checklist for an objective structured clinical examination (OSCE). An independent clinician educator reviewed each encounter to provide a global rating. Coherence scores were calculated based on question sequence and compared with global ratings and checklist scores.
Results
Coherence scores were positively correlated to checklist scores and to global ratings, and these correlations increased as global ratings improved. Coherence scores explained more of the variance in student performance as global ratings improved.
Discussion
Logically structured question sequences may indicate a higher performing student, and this information is often lost when using only overall checklist scores.
Conclusions
The sequence in which test takers ask questions can be accurately recorded and is correlated with checklist scores and global ratings. This sequence is not captured by traditional checklist scoring and may represent an important dimension of performance.
doi:10.1007/s40037-016-0265-5
PMCID: PMC4839012  PMID: 27056080
Clinical skills; Medical education; Assessment
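The abstract does not define how coherence scores were computed. One illustrative operationalization, not the authors’ method, is a rank correlation between the order in which a candidate asked the checklist’s key-feature questions and the canonical logical order:

```python
# Illustrative stand-in only: the paper does not publish its coherence
# formula. Here, coherence is scored as Kendall's tau between a candidate's
# question order and the canonical checklist order.
from scipy.stats import kendalltau

canonical = ["onset", "location", "severity", "associated_sx", "red_flags"]
observed  = ["onset", "severity", "location", "red_flags", "associated_sx"]

canonical_rank = [canonical.index(q) for q in observed]
tau, _ = kendalltau(canonical_rank, list(range(len(observed))))
print(f"coherence (tau) = {tau:.2f}")  # 1.0 = question order matches exactly
```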
7.  Dual processing theory and experts' reasoning: exploring thinking on national multiple-choice questions 
Background
An ongoing debate exists in the medical education literature regarding the potential benefits of pattern recognition (non-analytic reasoning), actively comparing and contrasting diagnostic options (analytic reasoning), or a combined approach. Studies have not, however, explicitly explored faculty members’ thought processes while tackling clinical problems through the lens of dual process theory to inform this debate, nor have these thought processes been studied in relation to task difficulty or potential mediating influences such as fatigue and sleep deprivation. We therefore sought to determine which reasoning process(es) faculty used when answering clinically oriented multiple-choice questions (MCQs), and whether these processes differed with respect to accuracy, reading time, and answering time, as well as psychometrically determined item difficulty and sleep deprivation.
Methods
We used a think-aloud procedure to explore faculty members’ thought processes while they took these MCQs, coding the think-aloud data by reasoning process (analytic, non-analytic, guessing, or a combination) as well as word count, number of stated concepts, reading time, answering time, and accuracy. We also asked about the amount of work in the recent past. We then conducted statistical analyses examining the associations between these measures, such as correlations between the frequencies of reasoning processes and item accuracy and difficulty, and tallied the frequencies of the different reasoning processes for correctly and incorrectly answered questions.
Results
Regardless of whether questions were classified as ‘hard’ or ‘easy’, non-analytic reasoning led to the correct answer more often than to an incorrect answer. Self-reported recent hours worked correlated significantly with think-aloud word count and the number of concepts used in reasoning, but not with item accuracy. Across all MCQs, 19% of the variance in correctness was explained by the frequencies of the three think-aloud processes (analytic, non-analytic, or combined).
Discussion
We found evidence supporting the notion that the difficulty of a test item is not a fixed feature of the item itself but results from the interaction between the item and the candidate. Use of analytic reasoning did not appear to improve accuracy. Our data suggest that individuals do not apply either System 1 or System 2 exclusively but instead fall along a continuum between the two.
doi:10.1007/s40037-015-0196-6
PMCID: PMC4530528  PMID: 26243535
clinical reasoning; assessment; dual-process theory
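The “variance explained by the frequency of the three processes” claim corresponds to an R² from a regression of correctness on the three frequency counts. A sketch on simulated placeholder data:

```python
# Regressing correctness on the frequencies of the three coded reasoning
# processes; R^2 is the variance explained. Data are simulated placeholders
# (the paper reports 19% explained variance with its real think-aloud data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_items = 120
freqs = rng.poisson(lam=2.0, size=(n_items, 3))  # analytic, nonanalytic, combined
correct = rng.integers(0, 2, size=n_items)       # 1 = answered correctly

model = sm.OLS(correct, sm.add_constant(freqs)).fit()
print(f"R^2 = {model.rsquared:.3f}")
```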
8.  Neural basis of nonanalytical reasoning expertise during clinical evaluation 
Brain and Behavior  2015;5(3):e00309.
Introduction
Understanding clinical reasoning is essential for patient care and medical education. Dual-processing theory suggests that nonanalytic reasoning is an essential aspect of expertise; however, assessing nonanalytic reasoning is challenging because it is believed to occur on the subconscious level. This assumption makes concurrent verbal protocols less reliable assessment tools.
Methods
Functional magnetic resonance imaging was used to explore the neural basis of nonanalytic reasoning in internal medicine interns (novices) and board-certified staff internists (experts) while completing United States Medical Licensing Examination and American Board of Internal Medicine multiple-choice questions.
Results
The results demonstrated that novices and experts share a common neural network in addition to nonoverlapping neural resources. However, experts manifested greater neural processing efficiency in regions such as the prefrontal cortex during nonanalytical reasoning.
Conclusions
These findings reveal a multinetwork system that supports the dual-process mode of expert clinical reasoning during medical evaluation.
doi:10.1002/brb3.309
PMCID: PMC4356847  PMID: 25798328
Dual-process theory; expertise; functional MRI; medical education; neural efficiency; nonanalytical reasoning
10.  Functional Neuroimaging Correlates of Burnout among Internal Medicine Residents and Faculty Members 
Burnout is prevalent in residency training and practice and is linked to medical error and suboptimal patient care. However, little is known about how burnout affects clinical reasoning, which is essential to safe and effective care. The aim of this study was to examine how burnout modulates brain activity during clinical reasoning in physicians. Using functional magnetic resonance imaging (fMRI), brain activity was assessed in internal medicine residents (n = 10) and board-certified internists (faculty, n = 17) from the Uniformed Services University of the Health Sciences (USUHS) while they answered and reflected upon United States Medical Licensing Examination and American Board of Internal Medicine multiple-choice questions. Participants also completed a validated two-item burnout scale comprising one item assessing emotional exhaustion and one assessing depersonalization. Whole-brain covariate analysis was used to examine the blood-oxygen-level-dependent (BOLD) signal during answering and reflecting upon clinical problems with respect to burnout scores. In residents, higher depersonalization scores were associated with less BOLD signal in the right dorsolateral prefrontal cortex and middle frontal gyrus while reflecting on clinical problems and less BOLD signal in the bilateral precuneus while answering them, whereas higher emotional exhaustion scores were associated with more BOLD signal in the right posterior cingulate cortex and middle frontal gyrus. Among faculty, burnout had no significant influence on brain activity. Residents appear to be more susceptible to the effects of burnout on clinical reasoning, suggesting that they may need both cognitive and emotional support to improve quality of life and to optimize performance and learning. These results inform our understanding of mental stress, cognitive control, and cognitive load theory.
doi:10.3389/fpsyt.2013.00131
PMCID: PMC3796712  PMID: 24133462
expertise; burnout; clinical reasoning; cognitive load; fMRI
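A whole-brain covariate analysis relates each participant’s burnout score to BOLD signal at every voxel; the single-region analogue below (all values invented) conveys the core computation:

```python
# Single-ROI analogue of the whole-brain covariate analysis: correlate each
# resident's depersonalization score with BOLD signal extracted from one
# region (e.g., right DLPFC). All values are invented for illustration.
from scipy.stats import pearsonr

depersonalization = [1, 3, 2, 5, 4, 2, 6, 3, 1, 4]
dlpfc_bold        = [0.8, 0.5, 0.7, 0.2, 0.3, 0.6, 0.1, 0.4, 0.9, 0.35]

r, p = pearsonr(depersonalization, dlpfc_bold)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported direction
```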
11.  Exploring clinical reasoning in novices: a self-regulated learning microanalytic assessment approach 
Medical Education  2014;48(3):280-291.
Objectives
The primary objectives of this study were to examine the regulatory processes of medical students as they completed a diagnostic reasoning task and to determine whether the strategic quality of these regulatory processes was related to short-term and longer-term medical education outcomes.
Methods
A self-regulated learning (SRL) microanalytic assessment was administered to 71 second-year medical students while they read a clinical case and worked to formulate the most probable diagnosis. Verbal responses to open-ended questions targeting forethought and performance phase processes of a cyclical model of SRL were recorded verbatim and subsequently coded using a framework from prior research. Descriptive statistics and hierarchical linear regression models were used to examine the relationships between the SRL processes and several outcomes.
Results
Most participants (90%) reported focusing on specific diagnostic reasoning strategies during the task (metacognitive monitoring), but only about one-third of students referenced these strategies (e.g. identifying symptoms, integration) in relation to their task goals and plans for completing the task. After accounting for prior undergraduate achievement and verbal reasoning ability, strategic planning explained significant additional variance in course grade (ΔR² = 0.15, p < 0.01), second-year grade point average (ΔR² = 0.14, p < 0.01), United States Medical Licensing Examination Step 1 score (ΔR² = 0.08, p < 0.05) and National Board of Medical Examiners subject examination score in internal medicine (ΔR² = 0.10, p < 0.05).
Conclusions
These findings suggest that most students in the formative stages of learning diagnostic reasoning skills are aware of and think about at least one key diagnostic reasoning process or strategy while solving a clinical case, but a substantially smaller percentage set goals or develop plans that incorporate such strategies. Given that students who developed more strategic plans achieved better outcomes, the potential importance of forethought regulatory processes is underscored.
doi:10.1111/medu.12303
PMCID: PMC4235424  PMID: 24528463
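The ΔR² values above come from hierarchical regression: covariates are entered first, the self-regulated learning measure second, and the gain in R² is the added variance explained. A sketch on simulated placeholder data:

```python
# Hierarchical regression sketch: step 1 enters prior achievement and verbal
# reasoning; step 2 adds strategic planning. Delta-R^2 is the added variance
# explained. Data are simulated placeholders, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 71
ugpa     = rng.normal(3.5, 0.3, n)   # prior undergraduate achievement
verbal   = rng.normal(10, 2, n)      # verbal reasoning ability
planning = rng.normal(0, 1, n)       # strategic planning score
grade = 0.5 * ugpa + 0.2 * verbal + 0.4 * planning + rng.normal(0, 1, n)

step1 = sm.OLS(grade, sm.add_constant(np.column_stack([ugpa, verbal]))).fit()
step2 = sm.OLS(grade, sm.add_constant(np.column_stack([ugpa, verbal, planning]))).fit()
print(f"delta R^2 = {step2.rsquared - step1.rsquared:.2f}")
```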
12.  Internal Medicine Clerkship Directors’ Perceptions About Student Interest in Internal Medicine Careers 
Journal of General Internal Medicine  2008;23(7):1101-1104.
Background
Experienced medical student educators may have insight into the reasons for declining interest in internal medicine (IM) careers, particularly general IM.
Objective
To identify factors that, according to IM clerkship directors, influence students’ decisions for specialty training in IM.
Design
Cross-sectional national survey.
Participants
One hundred ten institutional members of Clerkship Directors in IM.
Measurements
Frequency counts and percentages were reported for descriptive features of clerkships, residency match results, and clerkship directors’ perceptions of factors influencing IM career choice at participating schools. Perceptions were rated on a five-point scale (1 = very much pushes students away from IM careers; 5 = very much attracts students toward IM careers).
Results
Survey response rate was 83/110 (76%); 80 respondents answered the IM career-choice questions. Clerkship directors identified three educational items that attract students to IM careers: the quality of IM faculty (mean score 4.3, SD = 0.56), the quality of the IM rotation (4.1, SD = 0.67), and experiences with IM residents (3.9, SD = 0.94). Items perceived as most strongly pushing students away from IM careers were the current practice environment for internists (mean score 2.1, SD = 0.94), income (2.1, SD = 1.08), medical school debt (2.3, SD = 0.89), and work hours in IM (2.4, SD = 1.05). Factor analysis indicated three factors explaining students’ career choices: value/prestige of IM, clerkship experience, and exposure to internists.
Conclusions
IM clerkship directors believe that IM clerkship experiences attract students toward IM, whereas the income and lifestyle for practicing internists dissuade them. These results suggest that interventions to enhance the practice environment for IM could increase student interest in the field.
doi:10.1007/s11606-008-0640-y
PMCID: PMC2517945  PMID: 18612752
career choice; education, medical, undergraduate; medical students, workforce
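Extracting a small number of factors from Likert items, as in the factor analysis above, can be sketched as follows; the responses are simulated, since the survey’s item-level data are not published.

```python
# Three-factor solution from Likert survey items, analogous to the factor
# analysis described above. Responses are simulated, not the survey's data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
responses = rng.integers(1, 6, size=(80, 12)).astype(float)  # 80 respondents, 12 items

fa = FactorAnalysis(n_components=3, random_state=0).fit(responses)
print(fa.components_.shape)  # (3, 12): each item's loading on each factor
```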
13.  Expectations for Oral Case Presentations for Clinical Clerks: Opinions of Internal Medicine Clerkship Directors 
BACKGROUND
Little is known about the expectations of undergraduate internal medicine educators for oral case presentations (OCPs).
OBJECTIVE
We surveyed undergraduate internal medicine educational leaders to determine the degree to which they share the same expectations for oral case presentations.
SUBJECTS
Participants were institutional members of the Clerkship Directors in Internal Medicine (CDIM).
DESIGN
We included 20 questions relating to the OCP within the CDIM annual survey of its institutional members. We asked about the relative importance of specific attributes in a third-year medical student’s OCP of a new patient, as well as its expected length. Percentages of respondents rating attributes as “very important” were compared using chi-squared analysis.
RESULTS
Survey response rate was 82/110 (75%). Some attributes were more often considered very important than others (P < .001). Eight items, including aspects of the history of present illness, organization, a directed physical examination, and a prioritized assessment and plan focused on the most important problems, were rated as very important by >50% of respondents. Respondents expected the OCP to last a median of 7 minutes.
CONCLUSIONS
Undergraduate internal medicine education leaders from a geographically diverse group of North American medical schools share common expectations for OCPs, which can guide instruction and evaluation of this skill.
doi:10.1007/s11606-008-0900-x
PMCID: PMC2642568  PMID: 19139965
education leaders; oral case presentations; clinical clerks
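The chi-squared comparison of how often attributes were rated “very important” reduces to a test on a contingency table of counts. A sketch with invented counts:

```python
# Chi-squared test of whether attributes differ in how often they are rated
# "very important". Counts are invented; each row is one attribute.
from scipy.stats import chi2_contingency

#          very important, not very important
counts = [[70, 12],
          [45, 37],
          [20, 62]]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```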
15.  Identifying Medical Students Likely to Exhibit Poor Professionalism and Knowledge During Internship 
Journal of General Internal Medicine  2007;22(12):1711-1717.
CONTEXT
Identifying medical students who will perform poorly during residency is difficult.
OBJECTIVE
To determine whether commonly available data predict low performance ratings during internship by residency program directors.
DESIGN
Prospective cohort involving medical school data from graduates of the Uniformed Services University (USU), surveys about experiences at USU, and ratings of their performance during internship by their program directors.
SETTING
Uniformed Services University.
PARTICIPANTS
One thousand sixty-nine graduates between 1993 and 2002.
MAIN OUTCOME MEASURE(S)
Residency program directors completed an 18-item survey assessing intern performance. Factor analysis collapsed these items into two domains: knowledge and professionalism. Domain scores were computed, and performance was dichotomized at the 10th percentile.
RESULTS
Many variables showed a univariate relationship with ratings in the bottom 10% of both domains. Multivariable logistic regression modeling revealed that grades earned during the third year predicted low ratings in both knowledge (odds ratio [OR] = 4.9; 95% CI = 2.7–9.2) and professionalism (OR = 7.3; 95% CI = 4.1–13.0). USMLE Step 1 scores (OR = 1.03; 95% CI = 1.01–1.05) predicted knowledge but not professionalism ratings. The remaining variables were not independently predictive of performance ratings. The predictive ability of the knowledge and professionalism models was modest (areas under the ROC curves = 0.735 and 0.725, respectively).
CONCLUSIONS
A strong association exists between third-year GPA and program directors’ internship ratings of professionalism and knowledge. In combination with third-year grades, either USMLE Step 1 or Step 2 scores predict poor knowledge ratings. Despite a wealth of available markers and a large data set, predicting poor performance during internship remains difficult.
doi:10.1007/s11606-007-0405-z
PMCID: PMC2219838  PMID: 17952512
predicting; intern; professionalism; knowledge; medical education
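The modeling pipeline above (logistic regression yielding odds ratios, then ROC areas to gauge discrimination) can be sketched in miniature on simulated placeholder data:

```python
# Miniature version of the abstract's analysis: logistic regression of a
# bottom-decile outcome on third-year GPA, reporting an odds ratio and the
# area under the ROC curve. Data are simulated placeholders.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 1069
gpa = rng.normal(3.0, 0.4, n)
p_low = 1 / (1 + np.exp(4 * (gpa - 2.5)))        # lower GPA -> higher risk
low_rating = (rng.random(n) < p_low).astype(int)

X = sm.add_constant(gpa)
model = sm.Logit(low_rating, X).fit(disp=0)
print(f"OR per GPA point = {np.exp(model.params[1]):.2f}")
print(f"AUC = {roc_auc_score(low_rating, model.predict(X)):.3f}")
```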
16.  Are Commonly Used Resident Measurements Associated with Procedural Skills in Internal Medicine Residency Training? 
Background
Acquisition of competence in performing a variety of procedures is essential during Internal Medicine (IM) residency training.
Purposes
To determine the rate of procedural complications among IM residents; to determine whether having one or more complications correlated with institutional procedural certification status or with attending ratings of resident procedural skill competence on the American Board of Internal Medicine (ABIM) monthly evaluation form (ABIM-MEF); and to assess whether procedural complications were associated with in-training examination and ABIM certifying examination scores.
Methods
We retrospectively reviewed all procedure log sheets, procedural certification status, ABIM-MEF procedural skills ratings, and in-training and certifying examination (ABIM-CE) scores for 1990–1999 graduates of a single IM residency training program.
Results
Among 69 graduates, 2,212 monthly procedure log sheets and 2,475 ABIM-MEFs were reviewed. The overall complication rate was 2.3/1,000 procedures (95% CI: 1.4–3.1/1,000 procedures). With the exception of procedural certification status as judged by institutional faculty, there was no association between the resident measurements and procedural complications.
Conclusions
Our findings support the need for a resident procedural competence certification system based on direct observation. Our data support the ABIM’s action to remove resident procedural competence from the monthly ABIM-MEF ratings.
doi:10.1007/s11606-006-0068-1
PMCID: PMC1824756  PMID: 17356968
procedural skills; Internal Medicine residency training program; ABIM evaluation
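The reported complication rate with its 95% CI is a simple proportion with (here) a normal-approximation interval; the abstract does not state which interval method was used. A sketch with hypothetical counts chosen to land near the published figures:

```python
# Complication rate per 1,000 procedures with a normal-approximation 95% CI.
# The counts are hypothetical, chosen so the rate lands near the reported
# 2.3/1,000 (95% CI 1.4-3.1); the paper's raw counts are not restated here.
import math

complications = 20
procedures = 8700

rate = complications / procedures
se = math.sqrt(rate * (1 - rate) / procedures)
lo, hi = rate - 1.96 * se, rate + 1.96 * se
print(f"rate = {rate*1000:.1f}/1,000 (95% CI {lo*1000:.1f}-{hi*1000:.1f})")
```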
