Our study establishes the feasibility and validity of the SCA for assessment of patient satisfaction with cancer care outcome, process, and structure via paper- or Internet-based administration. The four SCA satisfaction domains demonstrated test–retest reliability, internal consistency, and validity for both the paper- and Internet-based versions. These findings support the use of the SCA version most appropriate for the study population. To our knowledge, this study is the first to validate an electronic instrument that comprehensively measures satisfaction with cancer care outcome (in addition to cancer care process and structure) for patients with a wide range of tumour types and stages [10].
The validation results of our study are supported by a meta-analysis demonstrating that electronic and paper administration of patient reported outcome instruments yield equivalent results [10
]. The only other electronic evaluation of satisfaction with cancer care (SEQUS, for medical oncology outpatients) assesses structure and process but not outcome. The top concern identified by SEQUS [10
] (patient waiting times) was also determined to be an area of relative dissatisfaction in our study. However, additional satisfaction with outcomes data from our study places this result in context, potentially informing resource allocation decisions that optimally improve cancer care quality.
The results of our study must be considered in the context of its limitations. Owing to loss to follow-up, 64% of consented patients completed the initial survey and 44% completed both surveys. However, this completion rate is consistent with other evaluations of patient satisfaction with cancer care (50%–100%) [10].
The relatively small sample size and the single urban academic institution study population did not permit subset evaluation and may limit generalisability to other settings. However, the results of our study for a wide range of cancer diagnoses are consistent with a larger paper SCA study of patients with prostate cancer at multiple institutions across the United States [2
]. Additionally, because of the test–retest design used to evaluate the study hypothesis, patients who were unable to use, or lacked access to, a computer with an Internet connection were excluded. Our results, therefore, may have demographic, socioeconomic, and/or functional status biases. However, others have demonstrated validity of electronic HRQoL surveys for a wide range of patients with varying levels of computer literacy, education, age, sex, and race [13
]. In addition, the off-site design was chosen to minimise positive bias associated with on-site surveys [33
]. Finally, we designed this study to closely mirror the real-life conditions of future quality improvement or research efforts, in which respondents lack on-site computer/Internet access and in-person assistance.
A ceiling effect was present for both versions of the survey, as commonly seen in patient satisfaction surveys [34
]. Ceiling effects may reflect prior findings that survey responders have more positive experiences than nonresponders [37
] and may obscure the true magnitude of satisfaction differences [35
]. However, the majority of responses for all domains in both surveys were less than the maximum score, indicating that the survey instrument can discriminate the level of patient satisfaction.
On average, the Paper First group had a longer time interval between initial consent and completion of the first survey and between the completion of the first and second surveys than the Internet First group. This difference may be due to the longer mailing times required for returning the paper survey and receiving Internet survey instructions and may account for the higher attrition rate for the Paper First group. A previous study found higher response rates among cancer patients who received a satisfaction survey more quickly [26
]. Although the completion time interval difference may have affected survey values, the presence of robust test–retest correlations suggests minimal effect on validity testing.
To assess for potential response bias, we performed exploratory chart reviews of patients with missing responses for the question with the highest number of missing responses (n = 13): ‘What is your overall feeling about the effect of your cancer treatment in preventing cancer progression and recurrence?’ Participants with missing responses tended to be undergoing active cancer treatment (n = 7), to have metastatic or recurrent cancer (n = 7), and/or to have symptoms secondary to cancer or cancer treatment complications (n = 5). These may be situations in which nonresponders felt uncertain about their prognosis.
As a validation study, this study assessed neither patient preferences for survey version nor the impact of the Internet-based instrument on practice patterns. Several studies demonstrate that electronic collection of patient reported data facilitates real-time feedback of survey results to clinicians and may confer several advantages [29
]: (i) increased clinician inquiries about HRQoL issues [34
], (ii) improved clinician-perceived communication with patients [38
], and (iii) increased clinician-perceived tracking of HRQoL changes over time [38].
Patient satisfaction may vary due to factors beyond the current care provider’s control, including the limitations of current cancer treatments [27
]. Further study is also needed on how patient satisfaction data affect the patient–provider relationship, particularly in cancer care, where both the provider and the patient may be disappointed with the effectiveness of current therapies. Currently, some health payers provide financial incentives for cancer care quality improvement based on patient satisfaction data [44
]. Assessment of satisfaction with outcome, such as provided by the SCA, may highlight important gaps in quality, including those due to socioeconomic disparities [1
]. However, the effect of these financial incentives on patient outcome is unknown. Cancer care quality assessment instruments that assess satisfaction with outcome may be important to answer these questions and generate new hypotheses.
In conclusion, our study demonstrates that the paper- and Internet-based versions of the SCA provide valid measures of patient satisfaction in multiple domains of cancer care, including treatment outcome, and may be useful for evaluating cancer care quality across a variety of cancer diagnoses, clinical settings, and treatment modalities. Multiple options for patient satisfaction survey completion (self-completion with a paper or Internet version at home or in the clinic, in-person completion, or completion over the telephone) may enhance patient response rates and diversity. The SCA may help improve cancer care quality by placing other metrics of patient satisfaction in context with cancer treatment outcome.