Researchers have recently described the feasibility of eliciting patients’ preferences using conjoint analysis (CA) in a variety of health care domains, including treatment options, cancer screening, and health care delivery (1). These studies have demonstrated that CA, whether administered using a full-profile, choice-based, or adaptive approach, is a valuable way of quantifying patients’ preferences and understanding the impact of specific attributes on patients’ choices. The challenge for the health services research community is now to determine how best to 1) develop decision support tools using CA and 2) implement these tools in clinical practice. The following paragraphs outline several issues that should be considered as we aim to meet these challenges.
In contrast to earlier preference studies, which assumed that patients have well-formed, stable preferences that need only be elicited, it is now clear that patients frequently construct their preferences de novo (2, 3). CA is a decompositional technique that enables patients to make trade-offs among a reasonable number of pertinent characteristics or attributes. As such, CA provides an ideal framework for the construction, in addition to the elicitation, of preferences. However, it is important to note that as patients become more aware of the trade-offs involved, their part-worths or utilities frequently change. Consequently, depending on how much respondents’ opinions evolve, the data generated by the CA task might not accurately reflect patients’ newly constructed preferences. In these situations, investigators must decide on the “ideal” amount of education and training to provide patients before the CA task; greater training is more time consuming and costly, but more likely to yield accurate preference estimates.
“Changing preferences” is less of a concern in situations 1) examining familiar options, in which CA functions more as a tool to elicit, rather than to construct, preferences, and 2) where the investigator is interested in using CA as a vehicle to construct preferences and the outcomes of interest include downstream effects such as patient participation, the quality of informed consent, or patient-physician communication.
The ability of a decision support tool to present patients with personalized information is critical. When decisions are based primarily on trade-offs between the probabilities of equally undesirable outcomes, such as the risk of bleeding or stroke in deciding between aspirin and warfarin for atrial fibrillation, using average risks is at best inadequate and at worst negligent. CA surveys can be relatively easily modified to ensure that subgroups of patients with varying prognostic indicators are presented with the appropriate information. For example, in a hepatitis C study, we created six versions of an ACA questionnaire in order to present patients with outcome data corresponding to specific liver biopsy and genotype results. Ensuring that patients are presented with individualized probabilistic information is, however, often much more difficult. Multiple risk factors (to calculate heart disease or cancer risk, for example) can be incorporated into a CA survey, but doing so requires more sophisticated programming. Moreover, for many, if not most, clinical scenarios, detailed patient-level outcome data are not available.
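The subgroup-versioning approach described above can be sketched in a few lines. This is a hypothetical illustration only: the six hepatitis C versions in our study are the model, but the specific fibrosis stages, genotype groupings, and version identifiers below are assumptions, not the instrument actually used.

```python
# Hypothetical lookup from prognostic indicators to survey version.
# Each version would present outcome data appropriate to that subgroup.
SURVEY_VERSIONS = {
    ("mild fibrosis", "genotype 1"):       "version_1",
    ("mild fibrosis", "genotype 2/3"):     "version_2",
    ("moderate fibrosis", "genotype 1"):   "version_3",
    ("moderate fibrosis", "genotype 2/3"): "version_4",
    ("cirrhosis", "genotype 1"):           "version_5",
    ("cirrhosis", "genotype 2/3"):         "version_6",
}

def select_version(biopsy_result: str, genotype: str) -> str:
    """Return the survey version matching a patient's prognostic profile."""
    try:
        return SURVEY_VERSIONS[(biopsy_result, genotype)]
    except KeyError:
        raise ValueError(f"no survey version for {biopsy_result!r}, {genotype!r}")
```

The simplicity of this lookup is the point: versioning by one or two categorical indicators is easy, whereas individualizing probabilities across multiple continuous risk factors requires the more sophisticated programming noted above.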
Although presentation of risk-related information has received most of the attention in the literature, how best to present benefits also poses a challenge for CA users. Consider the ways treatment outcomes are often reported. Most pain trials, for example, report mean change in pain or quality of life scales. These data do not easily translate into what patients want to know: whether the medication will help them, and by how much. Theoretically, both the likelihood and magnitude of benefit should be presented as a single attribute (since the two concepts are highly correlated); however, we have found that presenting both concepts simultaneously is difficult for patients to evaluate and overly complicates the task. Moreover, the number of levels needed to represent relevant combinations of likelihood and magnitude of benefit runs the risk of spuriously increasing the relative importance of this attribute due to the “number of levels” effect (4, 5). In this context, “relative importance” reflects the extent to which a specific attribute drives a respondent’s decision to choose a particular product.
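The definition of relative importance above corresponds to a standard calculation: each attribute's importance is the range of its part-worth utilities expressed as a share of the summed ranges across all attributes. The sketch below illustrates this, and hints at why the number-of-levels effect matters, since adding levels to an attribute tends to widen its range. The attribute names and part-worth values are invented for illustration, not data from any study.

```python
# Hypothetical part-worth utilities for one respondent, keyed by
# attribute and level. All names and values are illustrative.
part_worths = {
    "likelihood of benefit": {"30%": -0.8, "50%": 0.1, "70%": 0.7},
    "risk of side effects":  {"5%": 0.5, "20%": -0.5},
    "out-of-pocket cost":    {"$10": 0.6, "$50": 0.0, "$100": -0.6},
}

def relative_importances(pw):
    """Relative importance of each attribute: the range (max minus min)
    of its part-worths as a percentage of the total range summed over
    all attributes. Values sum to 100."""
    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in pw.items()}
    total = sum(ranges.values())
    return {attr: 100.0 * r / total for attr, r in ranges.items()}

for attr, ri in relative_importances(part_worths).items():
    print(f"{attr}: {ri:.1f}%")
```

Because the calculation depends only on each attribute's utility range, an attribute described with many levels (for example, every combination of likelihood and magnitude of benefit) can capture a wider range and thus an inflated share of importance, which is the number-of-levels concern raised above.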
Another issue, particularly relevant to the US healthcare system, is cost. The wide range of costs associated with different treatment options makes it extremely difficult to examine the influence of this attribute on preferences. Yet cost is clearly an important factor for almost all patients. While it is possible to create different versions of a CA survey for insured and uninsured patients, tremendous variability persists even within these two subgroups. Given the impact of the range of levels on the relative importances generated by CA, and the expected interactions between cost and other attributes (5), further research is needed to determine how best to include out-of-pocket cost in CA decision support tools.
In most clinical settings, incorporating CA decision support tools into clinical practice will not be possible without significant changes. Common barriers include the difficulty of identifying patients at the point of decision making, insufficient time, the need for support, and lack of space.
Most decision tools have been developed for situations for which it is relatively easy to pinpoint the time of decision making, such as elective surgery, cancer screening or treatment for cancer (6). For chronic diseases, implementing CA as a decision support tool is much more difficult unless a consistent marker for a decision point in clinical care exists (such as bone densitometry exams for the evaluation of osteoporosis) (7).
Because clinicians are routinely overbooked, and schedules are frequently disrupted by patients arriving late or requiring longer visits, scheduling patients to perform a decision tool before their appointment frequently disrupts the flow of the clinic despite careful advance planning. We have also found that many working patients cannot afford the required time to participate, whether before or after their scheduled appointment. Moreover, as we think about how best to incorporate CA into clinical practice, it is important to consider not only how time limits the feasibility of implementing decision support, but also how time pressure negatively affects the quality of decision making (8, 9).
Currently, almost all clinical studies using CA have been administered with the help of a trained research assistant or health care provider. While the need for such support is likely to decrease as a greater proportion of the population becomes comfortable with computers, it is also likely that for complex decisions (i.e., the ones for which decision support tools are most needed), some assistance will continue to be required.
In our experience, lack of appropriate space (i.e. sufficiently private and quiet) has eliminated many potential sites as possible settings for implementation projects. Moreover, because space returns the greatest profit when used for clinical assessments, administrators are reluctant to allocate clinical space for supplementary activities.
Notwithstanding the advantages of specialized decision support centers, given the preceding barriers, widespread dissemination of CA will likely require development of self-administered tasks (with online and/or telephone support) that can be performed at a time and location convenient for each individual patient. Arguably, however, the best ways to facilitate implementation of CA-based decision support tools in clinical practice would be 1) to lobby for the use of high-quality decision support tools to be included as performance measures, and 2) for third-party payers to recognize the value of these tools and to reimburse efforts surrounding their use.
Dr. Fraenkel is supported by the K23 Award AR048826-01 A1.