We developed a computer-based colon cancer screening decision aid and found that it could increase patient interest in screening and intent to be screened. Most participants were ready to be screened after viewing the decision aid; 48% had tests ordered, and 43% completed screening tests. These results are similar in magnitude to those from our videotape decision aid trial, in which patients' intent to ask providers for screening increased significantly after viewing the aid and 37% completed tests.
In our current study, the computer-based decision aid subjectively improved patients' knowledge about screening and was useful to most patients in making decisions about screening. Other studies have found that similar tools increased patients' knowledge about screening, but effects on screening rates have varied. Zapka et al. conducted a randomized controlled trial of a video on sigmoidoscopy that was mailed to patients in advance of a scheduled visit [14]. A decision aid developed by Meade et al. improved patient knowledge about CRC screening, as determined by the change in score between expert-validated pre- and post-tests [15
]. Dolan et al. found that patients subjectively reported improved knowledge about CRC screening after using a decision aid [16]. In these studies, there was no difference in screening test ordering and completion between those who viewed the decision aid and those who did not [16
]. In contrast to these studies, which examined the effect of a decision aid alone, our previous study using a combined intervention of a videotape decision aid and chart marker was able to increase screening test ordering and completion compared to controls [10].
Our computer-based aid differs from other decision aids for CRC screening in that patients were able to interact with the aid through its modular format and choose which information to view based on their knowledge needs. Patients in previous trials of decision aids on CRC screening all received the same educational content regardless of their knowledge about screening [10]. Our computer-based aid was not truly tailored, in that the decision aid was not customized to individual patients' characteristics. Each patient, however, was able to select the amount and content of the information they received. In this way, the information on CRC screening may have been more relevant to patients.
Only 28% of those who were ready to be screened had their preferred test ordered by their provider. There are a number of possible reasons for this lack of congruence between patient preferences and test ordering. First, some patients may have viewed the decision aid after seeing their provider and thus had no opportunity to discuss screening at that visit or at another visit within the 6-month follow-up window. Second, providers may not have been aware of patients' preferences and had not been trained to provide stage-appropriate responses. Third, patients who were already up-to-date with screening may not have had tests ordered. Excluding patients who were up-to-date from the analysis, however, did not increase the proportion of tests ordered, so this explanation is unlikely to account for the low test ordering rates.
Given these results, a patient-oriented decision aid alone may be insufficient to ensure test ordering based on patient preferences or to increase test ordering and completion to desired levels; multifaceted interventions that target a combination of providers, patients, and office systems may be more likely to increase screening rates [17]. Interventions such as physician prompts and standing orders can increase the performance of preventive care, including cancer screening [18]. Standing orders, in particular, could be one component of such a multifaceted intervention: under a standing orders protocol, a nurse initiates test ordering based on patient preferences and a practice-approved protocol. Implementing the decision aid together with office-system interventions may help improve rates of screening test ordering.
The proportion of tests ordered and completed for patients who answered green was higher than for patients who answered yellow or red, but the differences were not statistically significant. The study, with its small sample size, may have lacked the power to detect a significant difference between the groups. In addition, approximately 40% of patients choosing red had tests ordered and one-third completed screening tests. In our previous videotape decision aid study, only 7% of patients choosing red had tests ordered and 4% completed tests. The apparent disconnect between patient interest and provider ordering in the current study is concerning and may be due to poor patient-provider communication about preferences or to patients who changed their minds about screening after completing the questionnaire. More research is needed to determine why patients who were uncertain about or not ready for screening had tests ordered and completed.
Of the patients who were ready for screening, most rated the ability to find cancer, or accuracy, as the most important factor in deciding on a test. Ling et al. previously found that most patients rate accuracy as the most important feature of a CRC screening test, but that providers thought discomfort in undergoing a test was most important to patients [20]. Providers should be aware that many individuals value the accuracy of screening methods and should counsel their patients accordingly.
There are a number of limitations to this study. Foremost, it was an uncontrolled trial without a comparison group, so it is unclear whether the proportion of patients having tests ordered and completed over 6 months represents an increase over usual care among patients who did not view the decision aid but were otherwise eligible for the study. Our results, however, are fairly comparable to those from our videotape decision aid randomized trial conducted in three central North Carolina private primary care practices: overall, there was a net 0.6 unit increase in intent to be screened after the decision aid. Among patients viewing the videotape decision aid, 47% had screening ordered compared to 26% of controls, and 37% completed tests versus 23% of controls. We chose an uncontrolled design as the first phase of testing to evaluate whether the aid could increase interest in screening and was useful to patients in choosing a screening modality. Whether patients' increased interest in screening after viewing the computer-based decision aid can lead to an improvement in screening rates cannot be determined from this pilot study; this question will be better addressed in a larger, multi-center randomized trial with screening test completion as the main outcome.
Other limitations were the use of a convenience sample and selection bias. Given the volunteer study population, with some subjects referred by physicians, and the low response rate, the responses of those who chose to participate may differ from those of nonparticipants. Our results also could have been affected by the fact that almost half of the participants had previously been screened and 18% were up-to-date with screening. We performed additional analyses excluding those who were up-to-date and did not find a change in the percentage of tests ordered and completed. There were also some differences in outcomes between those who were up-to-date and those who were not. Individuals who were up-to-date had lower mean intent and interest scores at baseline and after watching the aid, and fewer were ready to be screened after watching the aid compared to those who were not up-to-date (Table ). These results are based on small numbers, however, and should be interpreted with caution.
Although patients were able to choose different videos within the decision aid, we did not track which segments each patient viewed. Tracking may have provided additional information on how individual use of the decision aid was related to change in interest, test preferences, or test completion. However, mean viewing time was 19 minutes, indirectly suggesting that patients were accessing a substantial portion of the content.
This study did not objectively measure screening knowledge before and after use of the aid, its effect on decisional conflict, or changes in anxiety or satisfaction with decisions, all important measures of a decision aid's effectiveness [8]. In this pilot study conducted in a busy primary care practice, we chose to focus on whether the aid could increase interest in screening and was useful in deciding on a screening modality. Future studies should assess whether this decision aid can decrease decisional conflict and improve objective knowledge about screening.
Because our study was conducted at a single site, the findings may not be generalizable to other populations. Those in our convenience sample had high levels of education, most had insurance, and many had prior experience with screening. Other patient populations, including individuals not currently receiving regular medical care, might respond differently to the decision aid.
Although Medicare, Medicaid, and most private insurers cover CRC screening [21], cost may be an important issue for patients. We did not collect information on which tests were covered by patients' insurance carriers, co-pays and deductibles, or the importance of cost in patients' decisions about screening. Evaluating the effect of different levels of co-payment on patient preferences is an important area for future research.
A final limitation is that the decision aid may be somewhat challenging for those with limited computer skills. Although we did not objectively measure how many patients needed assistance, we observed that most patients completed the aid independently and required limited, if any, computer assistance. Whether the additional benefits of the web-based format outweigh the greater demands on computer skill warrants further study. We have developed a DVD version of the decision aid that preserves the ability to self-navigate but may be easier for computer-inexperienced users or those without access to a computer.