In a sample of colorectal cancer survivors, the overall quality of self-reported doctors’ visits and medication use was mixed, with reasonable agreement for doctors’ visits but poor agreement for medications. Although the ICCs and kappa statistics showed ‘substantial’ agreement between the two methods for doctors’ visits, considerable under-reporting occurred, the proportion of absolute agreement was small and the 95% confidence limits ranged from poor to substantial. Nevertheless, when costs were applied to the survey data and compared with MBS fees, the resulting cost estimates did not differ substantially. Medication types were considerably over-reported in our sample, with the exception of psychological medicines, which were under-reported.
Our results showed a tendency for males and those with frequent contacts to under-report actual visits, coinciding with findings from other studies [3]. However, unlike other reports, we did not find that disagreement was associated with older age, more severe disease, or income or education levels for either doctors’ visits or medication use. Forty-three percent of our participants received laparoscopic surgery rather than radical open surgery, and this was a factor associated with perfect agreement for specialist visits. Although the numbers involved are small, this may be a marker for participants having better general well-being and/or fewer side effects after surgery, and subsequently fewer services to recall, compared with participants undergoing other surgeries.
While our results point towards self-reported medication use being somewhat unreliable, it is important to understand the research context. Our participants have colorectal cancer and high treatment needs; they receive multiple treatment modalities and are likely to experience ongoing side effects, consistent with the evidence [12]. Treatment involves a protracted and sometimes complex navigation through the health system at different service locations [13]. Therefore, recall of health service utilisation over a six-month period may be difficult for this group given their high usage and poor health status, resulting in ‘poor agreement’ between self-reported and administrative data.
The agreement between the two methods for prescribed medication responses was disappointing, despite asking participants to prepare records ahead of the interview and to clarify any misunderstandings with the researcher. However, this approach assumed participants would be highly motivated, which may have been unrealistic. Participants may have confused some prescription medications with over-the-counter medicines, included inexpensive medications not subsidised by the PBS, or forgotten hospital-acquired medications or those prescribed earlier in the recall period. It is clear from our findings that administrative data are necessary for collecting medication data in participants receiving polypharmacotherapy. PBS data showed the total number of prescriptions (including repeats) was 1523, at a value of $244,231 to the government and $16,484 in out-of-pocket expenses to participants. We did not test the reliability of the frequency or repeats of specific prescriptions, but it is clear from the survey responses for general groupings that this would have been too onerous for our participants to recall accurately.
Ultimately, the purpose of our study was to investigate whether self-reported health care utilisation via telephone surveys was an acceptable method for collecting data within a larger cost-utility analysis. If under-reporting of service use cancels out over-reporting, and this occurs randomly across intervention arms, then minor discrepancies in resource use may be acceptable. However, our data suggested the extent of discrepancies was more serious, although there were no systematic differences in agreement categories across the CanChange intervention arms. When costs were applied to the frequency of doctor visits, MBS costs were significantly higher than costs from self-reported use: 18% higher for GP visits and 29% higher for specialist visits. Therefore, if the self-reported data are relied on for the cost-utility analysis, the extent of under-reporting may bias the final results. In this particular case, it may be preferable to use the mean cost and standard error from the Medicare records in modelling the larger study. Alternatively, the self-reported data collected on the full sample in the larger study could be inflated by a compensating factor representing the extent of disagreement seen in our 76 participants.
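The compensating-factor adjustment described above amounts to simple arithmetic: each self-reported cost is scaled up by the ratio of MBS-derived cost to self-reported cost observed in the validation sample (1.18 for GP visits and 1.29 for specialist visits, from the percentages reported here). The sketch below is illustrative only; the function name and example cost figures are hypothetical, not part of the study.

```python
# Illustrative sketch of the compensating-factor adjustment.
# Factors are derived from the observed under-reporting in this study:
# MBS costs were 18% higher for GP visits and 29% higher for
# specialist visits than costs based on self-report.
COMPENSATING_FACTORS = {"gp": 1.18, "specialist": 1.29}

def adjust_self_reported_cost(visit_type: str, reported_cost: float) -> float:
    """Scale a self-reported cost upward by the disagreement factor."""
    return reported_cost * COMPENSATING_FACTORS[visit_type]

# Hypothetical six-month self-reported costs for one participant
reported = {"gp": 210.0, "specialist": 340.0}
adjusted = {k: adjust_self_reported_cost(k, v) for k, v in reported.items()}
```

In a full-sample application, the same factors would be applied to each participant's self-reported costs before entering the cost-utility model, with the caveat that the factors carry the sampling uncertainty of the 76-participant validation subsample.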
Our study has several limitations. The small sample was not powered to detect significant differences in agreement category within logistic regressions. The framing of the survey questions may have been too broad and open to interpretation, particularly the medication question. To overcome this limitation, we could have asked a closed question about medication usage that included a list of specific drugs of interest (e.g., anti-depressants, anti-hypertensives) with their commonly available trade names. In addition, we assigned average costs of MBS attendance items to the survey data during our cost comparisons, which may have obscured the variation in doctor services provided; however, the unit costs reflect the most common MBS items for GP and specialist visits and were considered reasonable estimates. Finally, important health service items collected by self-report, including allied health services, community services and home care, could not be verified with Medicare records because they are not billable items, and they were therefore excluded from the analyses.
Research has suggested that cancer support services may promote downstream cost-savings to the health system through avoided GP visits and lower medication use [14]. The strength of this claim is not yet supported by the research evidence, and the issue is further complicated by the measurement challenges for health service use. Because we have Medicare data on only a small sample of colorectal cancer survivors, we are not able to contribute meaningfully to this topic; however, forthcoming research on a larger sample of men with prostate cancer receiving a psychological intervention [15] will inform this growing literature.
There is increasing demand among researchers for reliable, high-quality data on health resource use at the patient level. In comparative intervention research, the types and extent of resources used are important to evaluators to inform service delivery and future service planning, and to contribute to assessments of cost-effectiveness.