Mark Hecimovich is a Senior Lecturer and Academic Chair Sports Science, Jo-Anne Maire is a Senior Lecturer, and Barrett Losco is a Lecturer. All are with the School of Chiropractic and Sports Science at Murdoch University.
The objective of this study was to compare the effect of two learning opportunities, clinician feedback and video self-assessment, on 5th-year chiropractic students' patient communication skills, specifically those required for history taking.
A cohort of 51 final-year students was divided into two groups. The first group received immediate feedback from a clinical supervisor following a history-taking encounter with a patient. The second group performed self-assessments of their videotaped history-taking encounter. An end-of-year Viva Voce examination was used to measure the effectiveness of the students' history-taking skills, using two subscores, one for behavior and another for content, as well as an overall total score. An unpaired t-test was performed to determine whether any significant difference occurred between the two groups of students. Each group was then subdivided into two subgroups based on gender, and a two-way analysis of variance was performed to determine whether the type of feedback or the students' gender had any significant effect on the outcome of the Viva Voce.
There were no significant differences between the two groups of students in terms of their final scores in the Viva Voce. When each group was further divided into gender subgroups and the results were reanalyzed, neither the mode of feedback nor the students' gender had any significant effect on the outcome of the Viva Voce.
This study suggests that, for a mixed cohort, video self-assessment and clinician feedback are equivalent in their ability to enhance students' communication skills relating to history taking.
Patient communication is a vital clinical skill that has a significant impact on patient care.1 Empirical evidence supports a correlation between effective patient communication and improved health outcomes and health care quality.2, 3 In health education, developing effective learning opportunities that build proficiency in patient communication skills is an ongoing process. Diverse learning opportunities, such as lectures and small group discussions,4–9 computer-aided learning,10 self-assessment,11 virtual patients, narrative case summaries,12 and various methods of feedback, have been implemented in health care education programs with the aim of building students' patient communication skills. Among these, the health education literature most strongly advocates the various methods of feedback, such as clinician, peer, video, clinical encounter cards, patient, and academic/teacher feedback, as the primary learning opportunity for building patient communication skills.
Feedback provides students with information intended to narrow the gap between actual and desired performance13, 14 and has been shown to be effective in building communication skills such as patient interviewing and history taking.15, 16 Feedback encourages students to think about their performance and how they might improve;17, 18 without it, mistakes may go uncorrected, good performance might not be reinforced, and clinical competence is achieved empirically or not at all.19 However, the lack of adequate, effective feedback in the clinical setting, such as the student-clinic internship, has been identified as a major ongoing problem in health care education.19–23 For example, teaching in the clinical setting often occurs at a rapid pace with multiple demands placed on the clinical supervisor; it is variable in teaching and learning opportunities because cases vary unpredictably in number, type, and complexity; and it has a relative lack of continuity, all of which limit the time for effective feedback.24 This matters because limited feedback presumably hampers the building of patient communication skills, which may ultimately have a profound impact on the quality of patient care. Therefore, it is essential that educators implement effective learning opportunities that can be used alongside feedback methods to enhance their students' patient communication skills.
It has been suggested that self-assessment of communication skills using videotape may assist the building of patient communication skills because it gives students an opportunity to assess themselves and identify what they did well and which aspects of their communication they could improve.11 Dawson et al25 documented this when they observed students entering their 6th year of medical school using video self-assessment during history taking, patient interviewing, and physical examination procedures. Their results provided evidence that students strongly believed video self-assessment improved their history-taking and interviewing skills. However, there is strong support for the view that students' self-assessments do not necessarily correlate with external independent ratings of their communication skills.26–28 Arguably, though, this support comes from studies that used closed-ended approaches, which gauge the accuracy of students' self-assessment against the assessments of peers, standardized patients, or expert observers,29–32 but fail to take into account the value of open-ended formats, which encourage deeper reflection and active learning. The open-ended format is a nonthreatening approach because it eliminates concern about the judgment of others. For example, when Zick et al33 assessed the development of communication skills in 1st-year medical students, they were concerned more with the formative value of self-assessment than with the precision of student responses during assessment tasks. In their study they used an open-ended approach that encouraged students to reflect on their behaviors in a way that was meaningful to them.
This approach yielded more salient data on students' strengths and weaknesses, as opposed to data pertaining to a set of specific behaviors on a checklist, and could be used to complement other learning opportunities, such as standardized patient exams, direct observations of encounters with real patients, and collection of feedback from patients.34, 35 The authors concluded that self-assessment using videotaping was a feasible, practical, and informative approach to developing effective patient communication skills by providing students with an opportunity to review their own behavior and make specific comments that were supported by tangible examples. It could be assumed then that self-assessment with an open-ended approach, coupled with feedback, may assist in the building of patient communication skills and therefore provide a way of dealing with the lack of immediate feedback in the academic or clinical setting.
In sum, if effective patient communication is an important element for a health care provider to possess, it is imperative to determine how the diverse learning opportunities offered to students during their education may contribute to its development. This is important because feedback is the learning opportunity most often used to build this skill, yet it often cannot be delivered adequately for a myriad of reasons. Therefore, it is essential that other learning opportunities be explored for their effectiveness in developing patient communication skills in health care students. The aim of this study was to assess two learning opportunities, clinician feedback and video self-assessment, and their effect on students' patient communication skills during their clinical experience. The impetus for this study stemmed from a review of the literature, which supports the view that there is a lack of adequate feedback during the clinical experience, and from first-hand perceptions of this in our program. If video self-assessment is as effective as clinician feedback for developing students' communication skills, this may imply that it is a valid tool for inclusion in the chiropractic clinical environment.
Murdoch University School of Chiropractic and Sports Science offers an accredited, 5-year, double bachelor degree in chiropractic, which, on successful completion, allows the student to register and practice as a chiropractor in Australia. Additionally, graduates from the program are allowed to sit for registration and/or licensing boards globally. Starting in year 4, students commence their clinical experience at the school's on-campus outpatient clinic by observing 5th-year students during all aspects of patient care. They gradually take on more responsibilities so that by the end of year 4, students take over the caseload of their 5th-year students. Once in year 5, students assume the responsibility of patient care under the direct supervision of registered chiropractic clinicians (supervising clinicians). Supervising clinicians are an essential component of the clinic experience; their responsibilities are to manage patient care and to help the students learn patient care skills by involving them in all phases of care whenever possible. It is also the clinician's responsibility to determine the readiness of students to proceed with the various phases of patient care and to assess and verify students' competence. Clinician feedback is provided after history taking, physical examination procedures, presentation of the management plan, consent procedures, and treatment of the patient; however, it is often limited due to time constraints. To address the issue of limited feedback opportunities, videotaping has also become an essential component of the student's clinic experience. Students are required to electronically record (via web-cam and with patient approval) history taking and physical examination and to carry out self-assessments that allow them to document their views and perceptions of the patient encounters.
This study used a quantitative design in order to determine the effectiveness of clinician feedback and video self-assessment on the history-taking skills of 5th-year chiropractic students. The 2008 year 5 cohort (N = 51) was randomly divided into two groups: one who received clinician feedback immediately after taking patient histories and one who completed video self-assessments after taking patient histories. The clinician feedback group (n = 26) received feedback after each of three separate patient encounters, while the self-assessment group (n = 25) performed self-assessments of videotaped encounters after each of three separate patient histories. These interventions occurred once each trimester (12-week trimesters) for both groups, from January to September, 2008. Following this, each student was required to participate in an assessed Viva Voce examination that included a history-taking component. Scores from the history-taking component, comprising content and behavior subscores, and total scores for this component, were matched against the learning opportunity that was provided to the student.
This study used two interventions and one instrument of measurement. An end-of-year Viva Voce was used to measure the effectiveness of students' history-taking skills, while the two interventions were clinician feedback and video self-assessment. Each is outlined in the next section.
Three times between January and September a supervising clinician observed and then provided verbal feedback to a student after a new patient history encounter using the Patient History Assessment form (Fig. 1). The feedback took place immediately after the encounter. This form is a revised and abbreviated version of the History Taking Checklist used for the History Taking Objective Structured Teaching Experience (OSTE) developed by the Clinical Skills Training and Assessment Program Office of Medical Education at Indiana University School of Medicine.36 It was used only as an aide-mémoire for the supervising clinician because no actual mark was given.
Three times between January and September the students videotaped themselves while taking a new patient history and, within a nominated period (maximum 1 week), viewed the recording and used a specific performance evaluation template (Fig. 2) that was designed to guide students through an evaluation of their history-taking performance.
In October, at the completion of their clinical experience, all students were assessed with a Viva Voce examination. The history-taking component, which aims to ascertain whether students are competent in history taking, was utilized as the instrument of measurement. Two subscores were used: content and behavior. The content subscore assessed students' ability to elicit relevant information from a standardized patient, while the behavior subscore assessed skills such as active listening, appropriate maintenance of eye contact, recognition of verbal and nonverbal cues and body language, appropriate use of open- and closed-ended questions, and ability to clarify information gathered from the patient. An overall mark was given for the history-taking component, which is the sum of the two subscores. The overall history-taking mark and each of the two subscores were used for comparison between the two groups.
After receiving ethical approval from Murdoch University's Human Research Ethics Committee for this study, all 5th-year chiropractic students enrolled at Murdoch University School of Chiropractic and Sports Science in the academic session 2008 were invited to participate. All of the 51 students (21 male and 30 female) agreed to participate in this study and were randomly placed into either the clinician feedback group (group A) or the video self-assessment group (group B). Group A had 26 participants (12 male, 14 female) and group B had 25 participants (9 male, 16 female). Age and educational background were not analyzed for this study.
Patients who agreed to have their encounters video recorded for this study signed a patient consent form authorizing the recording. All patients who agreed to be recorded were independent and aged 18 or older.
Data were analyzed using the GraphPad Prism 4.1 (GraphPad Software, La Jolla, California) statistical analysis program. An unpaired t-test was utilized to determine whether any significant difference in the students' overall results occurred between the two groups as a whole. During the course of the project, a further question arose as to whether the students' gender might interact with the mode of feedback to affect the final result of the Viva Voce. As a result, each group was divided into two subgroups according to student gender. A two-way analysis of variance (ANOVA) was utilized, with significance set at the 95% confidence level (p < .05 indicated a statistically significant difference), as there were two independent variables (feedback and gender) and one dependent variable (final scores). The two-way ANOVA was conducted to determine (1) whether the mode of feedback has the same effect at all values of gender, (2) whether the mode of feedback has an effect on the final result, and (3) whether the student's gender has an effect on the final result.
The statistical breakdown between group A (clinician feedback) and group B (video self-assessment) demonstrated 26 students for group A (12 male, 14 female) and 25 students for group B (9 male, 16 female). Three scores achieved during the Viva Voce were analyzed, namely total scores, content scores, and behavior scores (Table 1).
The results of the unpaired t-test comparing the total scores achieved by group A and group B failed to demonstrate any significant difference between the two groups, with a p-value of .4048. A similar result occurred when analyzing the content scores and the behavior scores of group A and group B, where the t-test failed to demonstrate any significant difference between the two groups, with p-values of .35 and .48, respectively.
Analysis of the total scores achieved between the respective groups using the two-way ANOVA revealed that the interaction between mode of feedback and gender had no significant effect on the total scores, with a p-value of .44. Similarly, the two-way ANOVA failed to demonstrate any significant effect of feedback mode or gender on the total scores, with p-values of .40 and .66, respectively, suggesting that neither variable had any significant effect on the total scores achieved by the students in the Viva Voce.
Analysis of the content scores using the two-way ANOVA failed to demonstrate any significant interaction between the mode of feedback and gender, with a p-value of .42. Further analysis did not demonstrate any significant main effect of either variable on the content scores, with p-values of .61 and .70, respectively, once again suggesting that neither variable had any effect on the content scores achieved during the Viva Voce.
Analysis of the behavior scores to determine whether the interaction between mode of feedback and gender, the mode of feedback alone, or gender alone had an effect failed to demonstrate any significant results, with p-values of .24, .24, and .36, respectively. This suggests that neither the mode of feedback, the gender of the student, nor their interaction had an effect on the overall behavior scores.
This study assessed two forms of learning opportunities and their effect on chiropractic students' communication skills. The two opportunities were traditional clinician feedback and videotape self-assessment. Video self-assessment without clinician feedback has grown in popularity due to cost and convenience factors; however, its effectiveness on improvement of skills is questionable because without clinician feedback students may not notice behaviors that need improvement.37 Conversely, clinician feedback is a concept that is strongly theory based and has been shown to reinforce or modify behaviors38 but can also cause demotivation and deterioration in performance if not carefully managed.39 Therefore, both opportunities may be either supportive or an impediment for success.
The results from this study suggest, based solely on a single measurement tool, that clinician feedback and video self-assessment were equally effective in the development of overall communication skills. However, a breakdown of the results did demonstrate some interesting findings. For example, although not statistically significant, males who utilized the video self-assessment (group B) had higher behavior scores (mean 67.72) than those who received clinician feedback (group A, mean 45.86). This could be due, in part, to how men respond to negative feedback. According to Roberts and Nolen-Hoeksema40 and Roberts,41 men are influenced more by positive feedback and appear to ignore negative feedback by assuming a self-protective posture and discounting the value of others' evaluations. It may be assumed, then, that the clinician feedback provided to the males in this study was perceived negatively by them and thus not utilized to incorporate change. This might lead to the conclusion that the video self-assessment group should have scored better on the Viva Voce assessment than their clinician feedback counterparts. However, the psychology and health education literature42, 43 supports the concept that men tend to overestimate their abilities when self-assessing, supporting the argument that the self-assessment group in this study should consequently have documented lower Viva Voce communication scores. This contrast in findings might suggest that clinician feedback is more influential than video self-assessment, especially in the development of communication skills. This would need to be supported by future studies.
Another interesting result was the lack of significant difference between the men and women who received clinician feedback (group A) in total score and its two subcomponents (behavior and content scores). This also is in contrast to empirical evidence, notably social role theory,41 which posits that women tend to treat feedback as an opportunity to learn more about themselves and are more willing to internalize and apply the feedback to future decisions. It may be assumed then that the female scores should have been greater than those of the males in reference to those who received the clinician feedback. Unfortunately, this study did not categorize the type of feedback that each student received—positive or negative.
A notable limitation is that, as a single-cohort study, this study was underpowered to demonstrate significant differences. Another limitation was its use of a single measurement tool (the Viva Voce) to assess communication skills, an issue that needs to be addressed in future studies. But this may be difficult because there is substantial evidence that health care students can be taught communication skills16, 44 but limited empirical evidence supporting effective assessment of these skills.11 The Objective Structured Clinical Examination (OSCE) is commonly used in most health education programs and includes assessment of communication skills. However, Rees et al11 noted that medical students do not value OSCEs and feel they are artificial and contain discrepancies between the assessment and the teaching of communication skills. Additionally, they noted that even if the marking sheets were available to students before an assessment, students feel that their patient interactions are then reduced to a series of boxes that have to be ticked. This may hold true for the Viva Voce as well. A suggestion would be to include some form of formal assessment (OSCE, Viva Voce) but also rely on clinician and patient assessment of the students' communication skills as they progress through their clinical internship.
The lack of a control group may also be viewed as a limitation. However, this poses a dilemma in clinical education studies. Feedback, or a similar learning opportunity, is typically integral to the clinical internship; removing it in order to create a control group would be questionable, because the control group participants would, in essence, be left out of clinical educational opportunities. Another issue that would need to be addressed, and was not included in this study, is clinician demographics. This could be important due to differences in students' responses to clinicians of a certain age, clinical experience, gender, or nationality.
In sum, future studies would need to address in greater detail, or expand on, the main components assessed in this study, such as clinicians, video self-assessment, communication assessment, and student demographics. First, the clinician feedback would need to be documented as either critical or supportive and by both the student and clinician independently. The demographics of the clinician should be noted in order to document if this affects how certain students respond to a specific clinician. The use of qualitative data may shed light on this issue. Second, the use of the video self-assessment may also need to be evaluated by the students to see whether they feel comfortable with this type of self-assessment and if it needs to be formulated differently. If students perceive that it is not a supportive educational opportunity, they may discount its aim of providing a useful way for self-assessment. Additionally, the use of clinician and peer video assessment may benefit the student by providing additional feedback and also be incorporated into a busy and oftentimes understaffed student clinic. Third, the way in which students' patient communication skills are assessed needs to be expanded. For example, the use of periodic formative evaluations from the clinicians and possibly patients before a formal assessment such as an OSCE or Viva Voce would lend supportive evidence in accurately documenting change in patient communication skills. Last, the age, educational background, and other student specifics may also be useful.
It has been documented that effective patient communication is an essential element for health care providers to possess. Equally, it is essential that academics understand which learning opportunities contribute to its development during a provider's formal training. Given the results of this study, it appears that two clinical learning opportunities whose aims are to assist in the development of patient communication skills, including one that is regularly implemented in clinical education curricula, are essentially equal. Both clinician feedback and student video self-assessment are implemented in clinical education, but it is the former that is utilized to a greater extent. However, there is a lack of adequate clinician feedback during the clinical experience, which may have a profound effect on the development of clinical skills such as patient communication. Therefore, the other learning opportunity, video self-assessment, which is easier to implement and contributes equally to the development of patient communication skills, may be confidently added to the clinic experience.
The authors have no conflicts of interest to declare.
Mark D. Hecimovich, Murdoch University.
Jo-Anne Maire, Murdoch University.
Barrett Losco, Murdoch University.