The 72% response rate to the online questionnaire lends weight to the results as a representative picture of UK practice. In most cases the respondent was either the head of radiotherapy physics or the clinical scientist responsible for QC, supporting the validity of the responses.
A surprising number of radiotherapy centres reported that they were aware their QC schedules were in need of an update and suspected that efficiency or productivity could be improved. It is possible this is a result of reliance on the ageing IPEM Report 81 [7], which was cited by 100% of respondents as a primary source of guidance in determining QC schedules. Contemporary references were used as additional key guidance by only 30% of centres. Workload and resource pressures in radiotherapy physics departments mean that tasks must be prioritised, and it is possible that the need to keep pace with advanced radiotherapy techniques comes at the expense of reviewing basic QC testing methodologies. There is certainly scope for a more modern approach to determining QC schedules: very few centres adopted intelligent design for QC testing, and only 30% used risk assessments. Indeed, a risk-based approach is considered preferable to the adoption of a standardised list of required QC tests, since it takes into account local equipment factors and proven reliability, local physics knowledge and expertise, the availability and complexity of QC test equipment, and local radiotherapy treatment techniques. This is also why complete standardisation of QC schedules and testing between radiotherapy centres is not possible. The data presented in this work are nevertheless useful as an input to risk assessment, providing a benchmark comparison of QC tests against which local circumstances can be considered.
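One way such a risk-based approach can be operationalised is in FMEA terms, scoring each potential failure mode for severity, occurrence and detectability, and ranking QC concerns by the resulting risk priority number (RPN). The minimal sketch below illustrates the idea only; the failure modes and all scores are invented examples, not survey data or recommended values:

```python
# Illustrative FMEA-style ranking of linac QC concerns.
# Severity (S), occurrence (O) and detectability (D) are each scored
# 1-10 (D: 10 = very hard to detect with current QC); RPN = S * O * D.
# All failure modes and scores below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int       # clinical impact if the failure goes undetected
    occurrence: int     # how often the failure is expected to arise
    detectability: int  # how likely current testing is to miss it

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detectability

modes = [
    FailureMode("Output calibration drift", 8, 4, 3),
    FailureMode("MLC leaf position error", 7, 5, 6),
    FailureMode("Gantry angle indicator offset", 4, 2, 4),
]

# Highest-RPN items are candidates for more frequent or improved testing.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```

Local equipment reliability, staff expertise and available test equipment would all feed into the scores, which is precisely why schedules derived this way differ between centres.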
There was significant variation in the time spent performing linear accelerator QC at centres across the UK. A limitation of the study is that "physics departments" were asked for their QC schedules, which may have led to the non-reporting of QC tests performed by radiographers or engineers in some departments, and hence to under-reporting of daily or weekly "quick check" tests. For this reason, the interquartile range or maximum values of testing frequency may be more reliable than the minimum. However, even accounting for this potential variation in interpretation, the range in time spent performing QC is considerable. There must be significant differences in the efficiency of processes, the design of QC schedules or the quantity of QC testing performed at centres across the UK. The quantity of QC testing performed may be related to ease of access to equipment, established staffing levels or other local demands.
It is surprising that fewer than half of UK radiotherapy centres report percentage completion of QC tests within their own departments, especially as only one-third of centres routinely achieve all planned QC tests each month. Including QC completion as a key performance indicator may improve compliance. It was also perceived that other professional groups within radiotherapy had little interest in QC and only moderate understanding of the QC tolerance values used. Raising the profile of QC may be beneficial, for example by ensuring that clinicians are aware of the accuracy of the treatment being delivered, such as absolute dosimetry output accuracy.
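Such a key performance indicator is straightforward to compute from a schedule of planned tests and a completion log. The sketch below is a minimal illustration; the test names and counts are hypothetical:

```python
# Minimal sketch of a monthly QC completion KPI.
# The planned schedule and completion log below are hypothetical examples.
planned = {"daily output check": 20,
           "weekly MLC test": 4,
           "monthly output calibration": 1}
completed = {"daily output check": 19,
             "weekly MLC test": 3,
             "monthly output calibration": 1}

total_planned = sum(planned.values())
# Cap each test at its planned count so extra runs do not inflate the KPI.
total_done = sum(min(completed.get(test, 0), n) for test, n in planned.items())
completion_pct = 100 * total_done / total_planned

print(f"QC completion this month: {completion_pct:.0f}%")  # prints 92%
```

Reporting this figure routinely, alongside a list of missed tests, would make shortfalls visible to the wider department rather than only to the physics team.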
Collation of QC schedules from 40% of UK radiotherapy centres is deemed sufficient to be representative of practice across the UK, since respondents were distributed between both large and small centres (which may affect QC testing economies of scale) and were not subject to geographical bias. A number of centres could not contribute their QC schedules as they stated a desire to "update" or "review" the documents prior to submission, and this was not possible within the timescale of the study. There is a bias towards centres with Varian linear accelerators, which could reduce the value of the results for other manufacturers' equipment, particularly where equipment-design-specific elements are considered. However, from the data received, there were no apparent differences in tolerance values or testing frequency between the different linear accelerators. For fundamental QC tests, such as output calibration and beam quality measurements, equipment manufacturer is not expected to be a determining factor, and hence the majority of results are applicable to all linear accelerators. Local interpretation of the presented results is essential prior to any implementation of, or amendment to, QC testing.
The survey presented in this work differs from that conducted for IPEM Report 81 [7] in that we analysed all QC tests undertaken in each centre, rather than only requesting the frequency at which specific pre-defined test descriptors were conducted. This is expected to lead to an increased number of tests. There is a high level of agreement between the QC testing frequencies reported here and those of IPEM Report 81 [7], and also with the survey conducted by Venables et al in 2001 for the START trial [13]. This indicates little change in basic QC practice over the last decade. A notable difference is routine output (calibration) measurement, which is now commonly performed monthly (in combination with a daily check device), compared with weekly in both the survey by Venables et al [13] and IPEM Report 81 [7]. The 2009 report on QC testing from the USA, AAPM TG 142 [8], also recommends monthly physics output measurement and daily constancy checks. The AAPM report [8] covers modern treatment equipment and techniques missing from IPEM Report 81 [7], and is hence a good source of further reference.
The relationship between linear accelerator system QC and patient-specific IMRT QC is relevant when considering linear accelerator QC schedules. While the adoption of patient-specific testing may reduce the need for certain general system checks, it is preferable to improve the basic QC testing of linear accelerators so as to reduce the demands of individual plan delivery checks. The majority of radiotherapy centres in the UK aim to increase the percentage of treatments that use IMRT techniques, and a prerequisite for a significant increase will be a reduction in patient-specific QC testing. The adoption of dynamic MLC checking into routine system QC, robust independent software checks on treatment plans, the use of linear accelerator-generated MLC performance data, and the use of electronic portal imaging devices and in vivo dosimetry may all reduce the total QC time required for linear accelerators, while also improving the adoption of modern treatment techniques. QC schedules must be continually reviewed as these additional verification methods are introduced.