All study procedures for the focus groups were approved by the IRBs at the University of California, Davis; University of Rochester; and the University of Texas at Austin. Conjoint survey procedures were approved by the IRB at UC Davis. The research was carried out in four steps that will be discussed in turn.
2.1. Step 1: Identification of message attributes through focus groups
Attribute identification was based primarily on three focus groups conducted in June 2008, one at each of three sites: Sacramento, California; Rochester, New York; and Austin, Texas. The three groups of participants came from a purposefully selected subset of participants from 15 earlier focus groups, 5 at each site. Interested participants responded to online postings at craigslist.com, county clinic and physician office flyers, and neighborhood zip code targeting strategies designed to ensure race/ethnicity and income diversity. Participants (n=116) responding to our recruiting efforts who were 25-64 years of age, had a self-reported personal and/or family history of depression, and who spoke and understood English were assigned by gender to a neighborhood-specific income group (low- versus mid-income). The low-income group consisted of individuals at the 15th percentile or less for their community, and the mid-income group was composed of individuals near the 50th percentile. Each focus group discussion was guided by a set of questions about participants' depression and symptom history, how they came to recognize depression and its symptoms, and what factors would prompt them to talk with others, including their primary care provider, about depression. All participants were asked to describe the messages that did or possibly could prompt them to discuss depression with their doctor.
Group participants provided informed consent prior to participation, received a $35 stipend for their time, and completed a questionnaire consisting of demographic and health variables. To facilitate discussions, participants were shown public service announcements used in past depression awareness public information campaigns, as well as print and television direct-to-consumer advertisements for depression. Groups explored what they liked and disliked about these messages and considered what might work in future videos created to encourage patients with depression symptoms to talk with their doctors. They also discussed barriers that might prevent a person from seeking help. Relevant themes for coding were identified based on review of discussion transcripts, summaries of recurrent data, and consensus by discussion in multidisciplinary team meetings.
Ten recurring themes (referred to hereafter as “attributes”), numbered sequentially below, emerged from analysis of the focus group data. First, participants felt that a misunderstanding of the nature of depression deters care-seeking. Two ideas pertain to this barrier: (1) misunderstanding of the symptoms of depression, which can suppress recognition of the condition; and (2) the belief that depression is rare, leading affected individuals to feel that they probably do not have it. A second set of barriers pertains to problems communicating with one's physician about depression. This set is composed of five specific issues: (3) the belief that depression is not a real medical condition and thus should not or does not need to be shared with one's medical doctor; (4) anticipation of shame from disclosing one's symptoms to the doctor; (5) the belief that depression is self-resolving and thus does not need to be brought to the doctor's attention; (6) lack of knowledge about how to introduce the topic of depression with the doctor; and (7) the notion that depression is a private matter that should be kept to oneself. The third set of barriers, low acceptability of treatment, was reflected in these issues: (8) the belief that if one asks for help, one will just be given medication; (9) the concern that treatment would entail the use of risky medications with unpleasant side effects; and (10) doubts about the effectiveness of antidepressants.
2.2. Step 2: Development of test messages
In the second step, the investigators generated two to four potential messages per attribute that could be used to convey these ideas. In some instances, the focus groups offered very specific language to express the attribute. In other instances, the focus groups offered the core idea but no specific language or argument for a corresponding message; in these cases, our team drew upon its expertise to develop test messages for the attribute. The messages developed and tested in the conjoint survey for each of the 10 attributes are shown in the accompanying table.
Barriers to help-seeking and test messages.
2.3. Step 3: Administration of the conjoint survey
Message preferences were assessed using an online conjoint survey. A convenience sample for the survey was recruited from the membership of a health-related Internet community during the Fall of 2008. The website for this community offers moderated forums organized around specific issues on which members interact anonymously. The site's medical director announced the study in her blog, described the survey, encouraged individuals 18 years or older with a history of depression to participate, and provided a link to the survey.
The survey, which took approximately 15 minutes to complete, was administered using Sawtooth Software's SSI Web platform and ACA module for adaptive conjoint analysis [32]. On the first page of the questionnaire, respondents were assured that their responses were confidential and were given an overview of the survey. The remainder of the questionnaire was organized into three sections. In the first section, called ACA Ratings, the respondent was presented with rating scales for each level (i.e., each test message) of the 10 attributes. Examples of rating questions are shown in the upper section of the accompanying figure. The order of presentation of both attributes and message levels was randomized across respondents. In the second section, called ACA Pairs, the respondent was presented with 20 pair questions, an example of which is shown in the lower section of the figure. The pair questions were generated interactively on the basis of the respondent's answers to the ACA Ratings questions. These questions are sometimes called “trade-off questions” because they force the respondent to weigh two competing sets of messages, each attractive on at least one attribute level, and decide which matters most. This combined use of a priori ratings and pair questions in ACA has proven useful in modeling and predicting preferences, outcomes, and behavior in studies of consumers and patients [33]. The third section of the questionnaire included standard demographic and health questions.
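Sawtooth's adaptive pair-generation algorithm is proprietary, but the idea behind a balanced trade-off pair can be sketched in Python. All attribute names, messages, and ratings below are invented for illustration; the sketch simply selects the two composite options whose rated appeal is closest, so that neither option is an obvious winner:

```python
from itertools import product

# Hypothetical ratings from an ACA Ratings section (higher = more
# appealing). Attribute names and messages are invented examples.
ratings = {
    "shame":     {"Depression is nothing to be ashamed of": 5,
                  "Many people feel embarrassed at first": 2},
    "treatment": {"Treatment is more than just medication": 4,
                  "Antidepressants have proven effective": 1},
}

# Each candidate option combines one message from each attribute;
# score it by the sum of its rated appeal.
options = {}
for m1, m2 in product(ratings["shame"], ratings["treatment"]):
    options[(m1, m2)] = ratings["shame"][m1] + ratings["treatment"][m2]

# A trade-off pair presents two distinct options of roughly equal
# appeal, forcing the respondent to weigh one attractive message
# against another rather than pick an obvious favorite.
candidates = [(a, b) for a in options for b in options if a < b]
balanced_pair = min(candidates,
                    key=lambda p: abs(options[p[0]] - options[p[1]]))
```

In the actual ACA Pairs section, the composition and difficulty of successive pairs adapt as the respondent answers, which this static sketch does not model.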
2.4. Step 4: Analysis
In total, 374 individuals completed the survey. Given our emphasis on unipolar depression in primary care, we dropped 125 individuals who did not have a history of depression or who reported a prior diagnosis of schizophrenia or manic-depressive disorder. This left us with data from 249 individuals with a past diagnosis of unipolar depression.
ACA responses were analyzed using Sawtooth Software's ACA/HB and SMRT market simulation modules [36]. The ACA/HB module was used to generate utility estimates for each individual via hierarchical Bayes (HB) estimation. These utilities are essentially regression coefficients for each attribute level [36]. For each respondent, the sum of his or her utilities across the levels of any given attribute is 0, and the utilities reflect the respondent's liking of each attribute level relative to the other levels of that attribute. The SMRT module was used to generate attribute importance values for each respondent based on the utilities. An attribute's importance reflects the size of the difference between the highest and lowest utility values for the levels of that attribute; importance values are scaled to sum to 100% across attributes. For example, if for a given respondent the utilities for all levels of an attribute were identical, the attribute did not influence the respondent's preference ratings, resulting in an attribute importance value of 0%. In contrast, an attribute whose levels differ substantially in their utilities is presumed to have exerted a greater influence on the respondent's preference judgments.
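The importance calculation just described can be sketched directly. The utility values below are invented for illustration (in the study they came from ACA/HB estimation), and the attribute names are shorthand for the barriers listed earlier:

```python
# Sketch of the attribute-importance calculation for one respondent.
# Within each attribute the utilities are zero-centered (sum to 0),
# as in ACA output; the numbers themselves are invented.
utilities = {
    "symptom_misunderstanding": [0.8, -0.2, -0.6],
    "depression_is_rare":       [0.5, -0.5],
    "not_a_real_condition":     [1.2, 0.1, -1.3],
}

# Importance = range (max - min) of an attribute's level utilities,
# rescaled so the importances sum to 100% across attributes.
ranges = {attr: max(u) - min(u) for attr, u in utilities.items()}
total = sum(ranges.values())
importance = {attr: 100 * r / total for attr, r in ranges.items()}
```

An attribute whose levels all carry the same utility has a range of 0 and thus an importance of 0%, matching the limiting case described above.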
Each respondent's utilities and attribute importance scores were merged with their responses to the demographic and health questions into a single database that was analyzed with Stata (version 9.0). These analyses consisted primarily of basic descriptive statistics (e.g., utilities and importance values averaged across respondents). We also compared attribute importance values across demographic and health status groups via multivariate analysis of variance (MANOVA). Because the importance values for k attributes in a conjoint analysis sum to 100%, only k-1 of them are linearly independent, so the dependent variables in such an analysis are the importance values of k-1 attributes. Specifically, since our analyses were based on 10 attributes, the MANOVA used the first nine attributes. The same logic applied when testing for differences in the utilities for the levels of each specific attribute: because a respondent's utilities for an attribute with k levels sum to 0, a MANOVA was carried out on the first k-1 utilities for that attribute. When an attribute had just two levels, an ANOVA was carried out on one level to test for differences.
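The reason only k-1 importance values can enter the MANOVA can be illustrated numerically. The importance values below are invented; the point is that because each respondent's row sums to 100%, the mean-centered data matrix has rank at most k-1, so including all k columns would make the outcome covariance matrix singular:

```python
import numpy as np

# Hypothetical importance values for 4 respondents over k = 3
# attributes (invented numbers; each row sums to 100%).
imp = np.array([
    [40.0, 35.0, 25.0],
    [50.0, 20.0, 30.0],
    [30.0, 45.0, 25.0],
    [45.0, 30.0, 25.0],
])

# The sum-to-100% constraint makes the columns linearly dependent:
# after centering, any one column equals minus the sum of the others,
# so the rank is at most k - 1.
centered = imp - imp.mean(axis=0)
rank = np.linalg.matrix_rank(centered)

# Dropping any one column (here the last) leaves k - 1 linearly
# independent outcomes suitable as MANOVA dependent variables.
dependent_vars = imp[:, :-1]
```

The same rank argument applies to the zero-sum utilities within a single attribute, which is why the utility comparisons were likewise run on k-1 levels.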