We have developed a fully automated testing station to assess BLS skills. An interactive Flash™ module, embedded in commercially available software (Resusci Anne Skills Station™ software), guided students accurately through the testing procedure without instructor involvement. Although the software contained a timer to indicate the duration of the test, this did not automatically imply that rescuers performed BLS during the full three minutes. By recording the actual time-on-task, we could confirm that the average test duration was three minutes. An automated testing station can be used to assess large groups of trainees (i.e. for certificative testing). Assuming a 14-hour testing day and an average of 7.5 minutes per student, eight students could be tested per hour and 112 subjects in a day. Achieving this number with an instructor would be far more labour- and time-intensive.
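This throughput estimate follows directly from the stated timings:
\[
\frac{60\ \text{min/h}}{7.5\ \text{min/student}} = 8\ \text{students/h},
\qquad
8\ \text{students/h} \times 14\ \text{h} = 112\ \text{students/day}.
\]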
Testing stations could also present an added value as an integral part of training, since testing has been shown to yield a powerful effect on retention, which may be essential to consolidate newly acquired skills [12]. Adding a test as a final activity in a BLS course seems to have a stronger long-term learning impact than spending an equal amount of time practising the same skills [13]. At a theoretical level, the training of retrieval processes seems to account for this "testing effect". In addition, requiring learners to elaborate while retrieving earlier knowledge during testing has been found to affect long-term learning [12]. Although these assumptions explain the testing effect in relation to declarative knowledge acquisition, they also fit the beneficial impact on the acquisition of skills, since tests likewise invoke retrieval and elaboration of procedural knowledge [14].
The SWOT analysis in the table below describes strengths and weaknesses that might affect the achievement of this objective. The automated testing station can also be used in the context of research, where there is a need for pre- and post-testing of BLS mastery after experimental interventions (i.e. formative testing). The students' responses to the perception questionnaire indicate that they are positive about the automated testing station: 80% of the students agreed, certainly agreed or strongly agreed that they prefer an automated testing station to an IL test (item 18). With respect to item 9, the scores were not in line with the other items. Almost 40% of the students agreed, certainly agreed or strongly agreed that "the test was too long and was causing fatigue in the end". There may be two reasons for the atypical scoring of this item. First, it is the only negatively formulated item in the questionnaire, and students may have overlooked this. Second, it combines two statements, namely "the test was too long" and "the test was causing fatigue in the end". We also noticed the atypical behaviour of this item in the PCA, since it did not reach the threshold loading (0.4) on either component.
Table: SWOT analysis of automated BLS skills testing
The PCA resulted in a two-component structure, with one component focusing on the quality of the instructional organisation (goals, instructions, assessment and feedback) and the other focusing on usability. Average scores indicated that students certainly to strongly agreed that the instructional organisation was appropriate, and that students certainly agreed that the approach was usable. The results of this questionnaire are important for two reasons. First, they show that the automated testing station is functioning properly and is adequately organised. Second, they show that students were positive about its usability.
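For readers wishing to reproduce this kind of component analysis, the following minimal Python sketch shows how a two-component PCA with the 0.4 loading threshold could be applied to questionnaire data. It is not the analysis code used in this study; the sample size, item count and response scale are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: two-component PCA on Likert-type questionnaire
# responses, flagging items whose loadings stay below the 0.4 threshold.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Assumed shape: 120 respondents, 19 items scored on a 7-point scale.
responses = rng.integers(1, 8, size=(120, 19))

pca = PCA(n_components=2)
pca.fit(responses)

# Component loadings: eigenvectors scaled by the square roots of their
# eigenvalues, giving item-component correlations on the original scale.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

for item, (c1, c2) in enumerate(loadings, start=1):
    if max(abs(c1), abs(c2)) < 0.4:
        print(f"Item {item} loads below 0.4 on both components")
```

In the study, item 9 was the item that behaved this way, consistent with its atypical questionnaire scores.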
As suggested by Kromann and colleagues, future studies should investigate the intrinsic testing effect and the extrinsic learning effect of formative testing, which informs participants about their performance and guides them towards further skills improvement and mastery [14]. Such studies could incorporate automated skills testing as a formative assessment procedure in an adaptive learning cycle with repetitive testing [16].
A number of limitations have to be stressed. When discussing the quality of this specific assessment setting, two aspects have to be distinguished: the quality of the assessment process, and the quality of the measurement of the performance indicators. The latter is guaranteed by the intrinsic quality of the manikin sensors and by the use of existing registration software; maintenance protocols and timely replacement of sensors, valves and springs are imperative to guarantee measurement reliability and validity. In the context of the present study, the students were familiar with training in an SL station, and that may have improved the usability of the testing station. However, the automated testing situation and the specific Flash™ module were completely new to the students. Presenting the usability questionnaire six months after testing may have introduced a recall bias. Further research is needed to confirm these results in terms of non-inferiority compared to IL testing and of usability in other student populations.
The software prototype we used focused only on testing the technical CPR components. Future developments could embed interactive components allowing the trainee to dial a phone number or to assess cardiac arrest by performing the right actions on-screen.