J Med Radiat Sci. 2017 December; 64(4): 321–327.
Published online 2017 October 14. doi: 10.1002/jmrs.249
PMCID: PMC5715258

The development and implementation of a performance appraisal framework for radiation therapists in planning and simulation

Jillian Becker, MHealthSc (MRS) (RT) (corresponding author),1 Pete Bridge, MSc, BSc (Hons) (RT),2 Elizabeth Brown, PhD (RT),3 Janet Ferrari‐Anderson, BAppSc (MRT) (RT),3 and Ryan Lusk, BAppSc (MRT) (RT)3

Abstract

It is a challenge for radiation therapists (RTs) to keep pace with changing planning technology and techniques while maintaining appropriate skill levels. The ability of individual RTs to meet the demands of this constantly changing practice can only be assured through establishing clearly defined standards for practice and a systematic process for providing feedback on performance. Investigation into existing models for performance appraisal produced minimal results, so a radiation therapy‐specific framework was developed. The goal for this initiative was to establish a framework that would reflect the complexity of practice and provide a clear measure of performance against defined standards. This paper outlines the implementation of this framework into practice and discusses some lessons learned in the process. The framework was developed and implemented in six stages: (1) project team, (2) scope, (3) dosimetry pilot, (4) staff consultation, (5) finalisation and implementation and (6) future development and evaluation. Both cultural and organisational obstacles needed to be addressed before this framework could be successfully introduced. Even though this slowed progress, addressing these obstacles during the development process was essential to the success of this framework. The incremental approach provided the opportunity for each aspect to be tested and for the development of subsequent stages to be informed by lessons learned during the previous one. This approach may be beneficial when developing and implementing projects involving performance appraisal, promoting consistency, fairness and quality.

Introduction

It is a challenge for radiation therapists (RTs) to keep pace with changing planning technology and techniques while maintaining appropriate skill levels. At the inception of a new department, the challenge of managing the range of professional experience and skill within the new team was identified. The senior team comprised RTs from different departments, each bringing varied perceptions of standard practice, which resulted in inconsistent expectations of junior staff. This highlighted the need for agreed practice standards and evidence‐based skills assessment.

Investigation into existing performance appraisal models produced minimal results. The hospital‐based template provided general role expectations but failed to adequately articulate technical and professional practice to support skills assessment. Examples of other competency assessments1, 2 defined entry‐level skill requirements but not the range of skills evident in an experienced staff group. Allied health professions have traditionally employed a task‐based approach to competency, which carries the risk of ‘creating professionals who have isolated skill sets that are not integrated with the knowledge to create complex meaningful performance in the workplace’.3 McAllister et al.4 acknowledge the dilemma of defining competency in a way that includes both specific skills and the ability to practise in a dynamic environment. The initiative was therefore taken to establish a framework that would reflect the complexity of practice and provide a means to measure performance. It was also anticipated that this would support progression to roles requiring higher levels of skill. Equally important was the promotion of a culture that was fair, consistent, objective, transparent, based on evidence and focused on skills development. Feedback can motivate staff by setting objectives and providing for training and development needs,5 but it must be based on explicit aims and objectives and be delivered with a genuine desire to assist learning.6

This paper outlines the implementation of this framework into practice and discusses some lessons learned in the process.

Stages of Development and Implementation

The framework was developed and implemented in six stages as illustrated in Figure 1.

Figure 1
Development and implementation process.

Stage 1: Project team

The project team was chosen to represent all levels of skill and experience. Involving more rather than fewer people in developing a performance appraisal process provides quality judgement of performance and enhances perceptions of fairness and the likelihood of relevant feedback.7, 8, 9 Diversity of experience and perspective within the team proved valuable in developing a process to support the professional development of all staff.

Stage 2: Scope

To avoid adding load to a busy work area, the framework was designed to complement the existing workflow. Planning practice was structured such that each planning RT was responsible for the simulation, dosimetry and plan finalisation for patients allocated to them. Computed tomography (CT) simulation sessions were performed by the planning RT and a dedicated RT (CT RT) rostered to the simulation area. For the purpose of gathering evidence on performance, planning practice was divided into CT simulation, dosimetry and plan finalisation.

Stage 3: Dosimetry pilot

The next stage was to pilot the framework in a defined context to identify any ambiguities or oversights in the development process. Dosimetry was chosen because it was a discrete area of practice and was supported by a plan evaluation process. The elements contributing to plan quality were identified by the senior RT team as technical complexity, innovation, practical application and compliance with standards of practice. The skills identified as contributing to plan quality were: knowledge of standard practice, appropriate deviation from standard practice, consideration of practical implications for treatment and autonomy. These were drafted into a patient‐specific form to be included in the plan evaluation process.

Development of a criterion‐referenced assessment

To support consistency and objectivity in plan evaluation, the elements of plan quality were reviewed to determine those open to interpretation. Complexity and innovation were considered the most open to subjectivity, so to test the understanding of these terms, 12 patient plans were submitted for evaluation. These included a standard 2‐field breast technique, radical pelvic and head and neck techniques, and a palliative case involving overlap with previous treatment. The plans were de‐identified and rated by nine senior RTs with experience in routine plan evaluations. No definitions for complexity or innovation were provided; participants were asked to rate the plans using the three‐tier criterion‐referenced system shown in Table 1 and to include a justification identifying the factors influencing each rating.

Table 1
Rating guide for complexity and innovation

For complexity, 3 of the 12 cases were rated consistently and 9 were rated across all three levels. For innovation, 2 cases were rated consistently and 10 were rated across all three levels. Justifications for the ratings were collated and, although the identified factors were common to all participants, their application to the levels of complexity or innovation was inconsistent. The rating for innovation was consistently based on whether the approach was ‘common or known’ and whether the plan was supported by an existing protocol; inconsistencies arose over what was considered ‘common or known’. This illustrates the risk that assumed knowledge can influence ratings and lead to unfair expectations of practice.

In consultation with the senior group, and with reference to established practice standards, a criterion‐referenced assessment was developed to support consistency in plan evaluation. For 2 weeks, each plan was then assessed against this criterion‐referenced assessment to introduce the dosimetry rating form and establish the practice of completing it. The patient‐specific rating form and criterion‐referenced assessment for dosimetry are shown in Tables 2 and 3.

Table 2
Patient‐specific rating form – dosimetry
Table 3
Criterion‐referenced assessment for dosimetry

Stage 4: Staff consultation

Before proceeding, the framework outline and the work done to date were presented to the RT group. Responses indicated both support for the initiative and concerns that the process might not be fair. Additional concerns included a lack of support for the CT RT role, a lack of ready access to protocols and procedures, inconsistent advice from senior RTs and uncertainty about how confidentiality of feedback would be assured. These issues identified barriers to the success of the framework, so further development was put on hold until they were addressed.

A role description and an orientation process for the CT RT role were developed to support transition to and consistency in this role.

Concerns regarding the availability of protocols and the communication of practice changes were addressed by initiating a review and update of protocols, improving access to them and providing a means for communicating any inconsistencies in practice and advice. These inconsistencies were discussed within the senior group and, once a consensus was reached, the decisions were documented as standard practice.

To support confidentiality, it was decided that the detailed feedback would remain the property of the recipient. The feedback conversation included devising a plan to address development needs or requests, with the agreement that managers or clinical educators would be consulted to gain the support needed. An agreed summary of the feedback and development goals was then recorded in the mandatory performance appraisal document and filed with management. Over time the process was refined so that a senior RT was nominated to coordinate each cycle of feedback for the RT. Even though changes in circumstances, staffing or personal preference could require flexibility, it was believed that consistency in the coordination of feedback would allow trust to develop and ensure continuity of information and accountability for learning needs.

Stage 5: Finalisation and implementation

After addressing staff concerns, skill sets were defined for the remaining areas of planning practice. CT simulation was assessed by the CT RT in terms of simulation practice and technique, patient positioning, communication, and stabilisation and positioning (Table 4). Plan finalisation was assessed in terms of treatment plan presentation and data transfer to the treatment record, and was evaluated at the final RT check.

Table 4
CT evaluation form

A final feedback form was drafted to summarise the dosimetry, CT simulation and plan finalisation forms and to include professional attitude, time/workload management, technical communication and commitment to quality. These were assessed through observation by the senior RTs.

The entire process was then trialled with a planning RT and a coordinating senior RT, and at the completion of this trial refinements were suggested. It was identified that the three‐tier rating did not apply to all performance indicators, so a two‐tier rating was included (Tables 4 and 5). Completing forms at the end of each CT session was found to hinder workflow, so it was decided that these would be completed after a block of simulations. Provision was also made in the dosimetry rating form for the RT to document any justifications for technical choices that might influence the rating.

Table 5
Final feedback (excluding dosimetry and CT)

Application of the tool

A senior RT was nominated to coordinate the process, and only two RTs underwent the process at any one time in acknowledgement of the additional demand on senior staff. Those undergoing the process were rostered in planning for 2 weeks before their 4‐week review period commenced, and feedback was collated and delivered shortly afterwards. This allowed for reorientation to practice, the opportunity to demonstrate a range of skills and the opportunity to respond to feedback before being rostered out of the area. Timely delivery of feedback ensures that any issues raised are current and that opportunity is given for development. Frequent feedback is recommended;10, 11, 12 however, the frequency was determined by rostering and the need to give other RTs the opportunity to participate.

The RT receiving feedback contributed by completing a self‐evaluation, and at the end of the period the feedback forms were collated in consultation with the senior team in planning. The collated feedback provided an overview of the RT's performance, how self‐perception aligned with the perception of the team, and whether the allocated case mix had provided adequate opportunity for demonstration of skill. In this way, feedback was given to both the planner and the senior staff. It was also important to allow the RT to contribute additional information that might add context to the feedback given. Allowing feedback to be a ‘conversation about performance’ rather than a ‘one‐way transmission of information’ can contribute to the perception of justice.6

Stage 6: Further development and evaluation

Further development extended the framework to supervisory roles, such as the CT RT. These incorporated 360° feedback, which provides insight into perceptions of an individual's impact on the team; such perceptions determine the success of an individual in their role.13 Following implementation, a study was conducted to evaluate the effectiveness of the framework as experienced by RTs, the results of which are the subject of a previous paper.14

Obstacles to implementation

Cultural and organisational obstacles were encountered during the introduction of this framework. Mistrust among staff was based on past experience, and concerns were expressed that the process would not be fair. The importance of a performance appraisal system may be denied if fairness and trust are not perceived in the process.7 Fairness and objectivity in a performance appraisal process are promoted through sharing control of the process, involving multiple contributors, open knowledge of the process and trust that supervisors are free of bias.15

The work was initially based on the assumption of commonly understood practice standards and protocols. Staff identified the lack of accessible and current documentation to support consistent practice, and the dosimetry pilot emphasised the need to normalise the expectations of senior staff. The issues surrounding defined practice standards are significant because, without a defined standard, performance appraisal is unreliable.

Even though progress was slowed, addressing these obstacles was essential to the success of this framework.

Conclusion

The ability of individual RTs to meet the demands of constantly changing practice can only be assured through establishing clearly defined standards for practice and a systematic process for providing feedback on performance. The framework was introduced to define standards of practice and assess the performance of RTs against them. The goal was to provide feedback on performance that was evidence based, objective and fair. The incremental approach allowed the opportunity for each aspect to be tested and the development of subsequent stages to be informed by lessons learned during the previous one. This approach may be beneficial when developing and implementing projects involving performance appraisal and feedback to promote consistency, fairness and quality.

Conflict of Interest

The authors declare no conflict of interest.

Acknowledgement

The authors acknowledge all radiation therapists at the Radiation Oncology Centre, Princess Alexandra Hospital for their generous support of this project.

References

1. Canadian Association of Medical Radiation Technologists. Radiation Therapy Curriculum Guide, 2007.
2. COMPASS®: Competency Assessment in Speech Pathology, 2006.
3. Hodges B. Medical education and the maintenance of incompetence. Med Teach 2006; 28: 690–6. [PubMed]
4. McAllister SM, Lincoln M, Ferguson A, McAllister L. Dilemmas in assessing performance on fieldwork education placements, 2010.
5. Prowse P, Prowse J. The dilemma of performance appraisal. Meas Bus Excell 2009; 13: 69–77.
6. Cantillon P, Sargeant J. Giving feedback in clinical settings. BMJ 2008; 337: a1961. [PubMed]
7. Kondrasuk JN. So what would an ideal performance appraisal look like? J Appl Bus Econ 2011; 12: 57.
8. Roberts GE. Employee performance appraisal system participation: A technique that works. Public Pers Manage 2003; 32: 89–98.
9. Schuwirth L, Southgate L, Page G, et al. When enough is enough: A conceptual basis for fair and defensible practice performance assessment. Med Educ 2002; 36: 925–30. [PubMed]
10. Cleary ML, Walter G. Giving feedback to learners in clinical and academic settings: Practical considerations. J Contin Educ Nurs 2010; 41: 153–4. [PubMed]
11. Baker N. Employee feedback technologies in the human performance system. Hum Resour Dev Int 2010; 13: 477–85.
12. Leggat SG. A guide to performance management for the Health Information Manager. Health Inf Manag J 2009; 38: 11. [PubMed]
13. Maylett T. 360‐degree feedback revisited: The transition from development to appraisal. Compens Benefits Rev 2009; 41: 52.
14. Becker J, Bridge P, Brown E, Lusk R, Ferrari‐Anderson J. Evaluation of a performance appraisal framework for radiation therapists in planning and simulation. J Med Radiat Sci 2015; 62: 114–21. [PubMed]
15. Kavanagh P, Benson J, Brown M. Understanding performance appraisal fairness. Asia Pac J Hum Resour 2007; 45: 132–50.
