We found that a choice-based conjoint analysis task produced somewhat different patterns of attribute importance compared with a rating and ranking task, but had little effect on other key outcomes, including values clarity, intent to be screened, and unlabelled screening test preference. This finding suggests that a choice-based method, which requires consideration of trade-offs between attributes, may provide distinct information about how patients value different test features compared with simpler explicit methods such as rating and ranking. However, completing the more involved conjoint analysis choice tasks does not appear to have large effects on more distal and global outcomes such as clarity about the decision, preferences about decision options, or intent to engage in health behaviors. Clinically, our findings suggest that the ability to reduce CRC incidence and mortality is the most important attribute for a majority, but not all, of participants. The importance of other attributes differed across our sample, suggesting that providing information about each domain may be important in counseling patients.
Few prior studies have compared different techniques for values elicitation and clarification. O’Connor and colleagues found no difference in values clarity or treatment preference when comparing an implicit technique (provision of a balance sheet) versus an explicit rating task in 201 women deciding about hormone replacement therapy.10
In a trial of 137 participants considering a hypothetical heart disease prevention scenario, we found that rating and ranking did not produce differences in decisional conflict or intent to adopt risk-reducing interventions compared with an implicit approach, but the two approaches did produce somewhat different patterns of preferred labeled treatments.11
In another study of 113 volunteers, we found that a conjoint analysis task produced different patterns of preferences and treatment choices than direct elicitation for a hypothetical heart disease prevention scenario.12
Finally, Sheridan and colleagues found no difference in decisional conflict when they compared a prostate cancer screening decision aid without values clarification versus the decision aid with one of two different values clarification exercises (social matching and ranking/rating). However, participants differed in their intent to be screened, suggesting that the method of values clarification may be important.24
Although we are not aware of other published studies that have compared different methods of values elicitation for CRC screening, several have used explicit techniques to assess key decisional attributes.13,25–28
(Details of these studies are provided in Appendix 5, available online.) Most studies have found test accuracy (ability to detect cancer and polyps) to be the most important attribute, but the order of importance of other attributes has varied considerably across studies.
Our study has several limitations. First, we enrolled a relatively small sample drawn from our decision laboratory registry and university e-mail lists. The small sample size limits our ability to detect small differences between groups with precision. Moreover, the size of a clinically meaningful effect for values clarification has not been well defined and will require further research to inform the design of future studies. Second, our sample was not representative of the full population of adults ages 48–74, being more highly educated and predominantly female, although the proportion who were up to date with screening was similar to (slightly lower than) that of the general population of North Carolinians in the same age range.29
Third, we studied a hypothetical scenario; real-world screening decisions (and evaluation of actual test completion) may produce different results. Fourth, a more extensive or different set of attributes may have produced different patterns of attribute importance. Similarly, choosing different levels may have produced different results for the conjoint task. Fifth, our use of a mail-based survey made it infeasible to give conjoint results back to participants and examine the effect of such feedback. Sixth, information about absolute risk was provided in the introductory materials only; whether using absolute risks within the conjoint task would have further affected results is unclear.
We only provided limited information about CRC screening to both groups, and we did not provide the specific range of levels for each attribute to those in the rating and ranking group. Providing such information might have changed attribute preferences. However, our choice of information in the rating and ranking group was meant to provide a parsimonious approach to values elicitation, as might be performed in clinical encounters. Future studies might compare the conjoint analysis approach versus a rating and ranking task in which the range of levels was included in the attribute descriptions or against another technique like “max diff” scaling.28
We did not assess knowledge, so we cannot determine the extent to which the expressed values are representative of informed consumers. Finally, we measured our outcomes only after the values elicitation tasks. Measuring changes from pre-task to post-task might have provided a more sensitive assessment of change.
In conclusion, we found that attribute importance scores derived from a choice-based conjoint analysis showed somewhat different patterns than those derived from rating and ranking tasks for the decision about which CRC screening strategy to use. Whether the differences in attribute values observed here are meaningful enough to warrant the additional time and effort required to complete the conjoint analysis/discrete choice tasks, compared with rating and ranking or even implicit values clarification techniques, remains unclear and will require additional research.