This study was part of the first national survey undertaken in Norway to assess cancer patients’ experiences with somatic hospitals. The development of the CPEQ followed a review of the literature, interviews with cancer patients and consultation with an expert group of professionals and researchers. The resulting questionnaire underwent a thorough process of piloting and testing for data quality, reliability and construct validity, as recommended for evaluating such questionnaires. The CPEQ addresses broad domains of cancer-related care at somatic hospitals, rather than focusing on specific treatments, cancer types or the specific professionals involved in the patients’ care.
The results from the survey can be used as national quality indicators in Norway and were designed to inform patient choice and to support quality improvement. The CPEQ was designed specifically for use with cancer patients attending somatic hospitals; this was assumed to increase content validity from the patient perspective, as well as allowing hospital staff to investigate in detail the extent to which their service meets the needs of their patients. Questionnaires that assess specific aspects of care allow the domains where patients report poorer experiences to be identified and potentially improved.
Satisfactory evidence of internal consistency, test–retest reliability and construct validity was obtained, indicating that the CPEQ can be considered a high-quality instrument. The results of the EFAs and tests of internal consistency provided empirical support for the scales, and confirmed that both outpatient and inpatient experiences are multidimensional concepts. CFAs were supportive of the structures suggested by the EFAs. There is evidence for the construct validity of the questionnaire based on the testing of hypotheses derived from previous research findings and theory.7
The results also support the temporal stability of the measure. High agreement between scores from administrations approximately 1 week apart provided good evidence of the test–retest reliability of the CPEQ.
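As a rough index of the test–retest agreement described above, one can correlate the two sets of scale scores; the snippet below uses a simple Pearson correlation on invented data (the study’s actual retest statistic and values may differ):

```python
import numpy as np

# Hypothetical scale scores (0-100) from the same 8 patients at test
# and at retest one week later; all values are purely illustrative.
test = np.array([72, 85, 60, 90, 78, 66, 81, 74], dtype=float)
retest = np.array([70, 87, 62, 88, 80, 64, 83, 73], dtype=float)

r = float(np.corrcoef(test, retest)[0, 1])
print(round(r, 2))  # close agreement between occasions gives r near 1
```

An intraclass correlation would additionally penalise systematic shifts between occasions, which a Pearson correlation ignores; for stable scores the two indices are close.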
Some limitations of the study should be considered. The low levels of missing data suggest that the measure is acceptable to patients; however, some of the included items were relevant for only some of the respondents. It may be possible to extract a shorter version of the CPEQ with fewer questions without sacrificing its psychometric qualities, but this task was beyond the scope of this study. Another potential limitation is the response rate: in general, postal surveys have lower response rates than other data-collection modes.4
Non-response bias occurs when the main variables differ systematically between respondents and non-respondents.28
The response rate (52%) means that almost half of the patients failed to respond; however, it was relatively high compared with previous user-experience surveys carried out in Norway.8–19
Findings from some of these surveys have shown that the low response rates have not caused serious bias.15
The findings from a Norwegian follow-up study involving a hospital population showed that postal respondents and non-respondents had almost the same scores.32
These studies indicate that non-response may be of less concern than the response rate alone suggests. Nevertheless, the uncertainty related to external validity means that more research is needed on the effect of non-response in patient-experience surveys on cancer care, and that the main findings of this study should be replicated in future studies.
Consistent with previous findings,33–35
some skewing towards positive assessment was identified. Whether this reflects truly positive experiences or low expectations is unknown.36
As for any study based on self-reports, social desirability bias and recall bias may also have affected the results. Respondents may introduce bias in several ways, for example, by giving socially desirable responses as a result of cognitive consistency pressure (making ratings congruent with their continuing use of the service) and through acquiescent response sets (a tendency always to agree or reply positively).4
However, respondents have been shown to give more positive and socially desirable responses in interview surveys than in self-administered surveys.37
Moreover, it is assumed that recall bias is less likely when asking about the overall experience rather than about a specific visit or hospitalisation.
Instead of developing a cancer-specific questionnaire, one of the existing generic questionnaires could potentially have been used in the national survey, such as the Patient Experience Questionnaire.8
This would have reduced the resource requirements, and the approach has some empirical support. One study compared the measurement properties and patients’ evaluations of one generic and two psychiatry-specific patient-satisfaction questionnaires in a sample of psychiatric patients; the results indicated that no single instrument was superior in either respect.38
Another study identified 10 generic core items covering major dimensions of experiences that patients across a range of specialist healthcare services report to be important.39
A short, generic questionnaire might be expected to give a higher response rate and better comparability than the CPEQ, but would not have suited the purpose of the national survey in Norway, which was a broad assessment of hospital cancer care. Furthermore, content validity is better for a cancer-specific questionnaire, since all developmental activities are directed towards securing validity for cancer patients rather than for patients in general. Naturally, a national survey with a narrower focus could have used a generic, and perhaps shorter, questionnaire.
Results from the national patient experience survey programme in Norway are used to develop quality indicators presented both to the public and to the responsible institutions. Public use includes an Internet site for free hospital choice in Norway. Research has shown that patients have difficulty in understanding quality information,40
and that ‘less is more’ in this respect.41
Therefore, an aggregated and overall measure of experiences with the hospitals seems appropriate in the context of presenting information to patients. Further research is needed to determine how to construct a composite score, including how to weight each of the underlying subdimensions. More specific results are called for when reporting information to health providers with the aim of evaluating and improving the quality of care.33
Consequently, aggregated scores on the 13 CPEQ subdimensions might be a useful supplement when reporting results to the responsible hospitals.
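Constructing such a composite from the 13 subdimension scores amounts to choosing a weighting scheme; as noted above, how to weight the subdimensions remains an open question. The sketch below shows the simplest option, equal weights, on invented hospital-level scores (all numbers are hypothetical, not survey results):

```python
import numpy as np

# Hypothetical mean scores (0-100 scale) on the 13 CPEQ subdimensions
# for one hospital; the values and the equal weights are illustrative only.
subdimension_scores = np.array(
    [78, 82, 71, 90, 66, 74, 85, 80, 77, 69, 88, 73, 81], dtype=float
)
weights = np.full(13, 1 / 13)  # equal weighting; alternatives include
                               # patient-importance or reliability weighting

composite = float(subdimension_scores @ weights)
print(round(composite, 1))  # single overall score for public reporting
```

Alternatives to equal weighting include weights derived from patients’ importance ratings or from the precision of each subdimension score; the choice can materially change hospital rankings, which is why the text calls for further research before a composite is adopted.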
Distinguishing the contributions of the organisational and individual levels is relevant when comparing hospitals on the basis of patient evaluations. The approach to institutional benchmarking in the national survey programme involves developing an appropriate case-mix model and correcting for multiple comparisons in statistical testing. Another emerging approach is to use multilevel analysis to estimate the amount of variation in scores that can be explained by levels above the individual.42
A previous study of patient experiences found that only a small part of the variation is attributable to the organisational level.43
Future studies based on the CPEQ should explore this topic further in order to elucidate the usefulness of the CPEQ as a basis for quality indicators at the hospital level. This also includes research on hospital-level reliability, which is based on the theory that patients who are treated at the same hospital should agree regarding their assessments of that hospital. The larger the ratio of between-hospital to within-hospital variation in the scores, and the larger the number of respondents, the more precise will be the measurement of differences between hospitals, and thus the greater the reliability of the scores.44
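The relationship described above — between the between-/within-hospital variance ratio, the number of respondents and the precision of hospital comparisons — is captured by the Spearman–Brown formula: if the intraclass correlation (ICC) is the share of score variance attributable to hospitals, the reliability of a hospital mean based on n respondents is n·ICC / (1 + (n − 1)·ICC). The values below are purely illustrative, not estimates from this study:

```python
def hospital_level_reliability(icc: float, n_respondents: int) -> float:
    """Spearman-Brown: reliability of a hospital mean score, given the
    intraclass correlation (between-hospital share of total variance)
    and the number of respondents sampled per hospital."""
    return (n_respondents * icc) / (1 + (n_respondents - 1) * icc)

# Even a small ICC can yield reliable hospital means once enough
# patients respond (illustrative ICCs and sample sizes):
for icc in (0.02, 0.05):
    for n in (50, 200):
        print(icc, n, round(hospital_level_reliability(icc, n), 2))
```

This is why a small organisational-level share of variance, as reported in the study cited above, does not by itself rule out reliable hospital-level indicators: sufficiently large per-hospital samples can compensate, up to a point.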
Patient-satisfaction questionnaires have been criticised for insufficient psychometric evidence of their reliability and validity.3
The strengths of the present study include the psychometric assessment of the CPEQ following a national survey, including data quality, dimensionality, internal consistency and construct validity. The scale should prove useful for evaluating cancer patients’ experiences with hospitals in Norway and in similar settings in other countries, and includes the most important aspects regarding both inpatient and outpatient hospital care from the patient perspective.