Evidence suggests that minority populations have lower levels of attendance and retention in mental health care than non-Latino whites. Patient activation and empowerment interventions may be effective in increasing minority patients’ attendance and retention.
This study developed and evaluated a patient self-reported activation and empowerment strategy in mental health care.
The Right Question Project–Mental Health (RQP-MH) trainings consisted of 3 individual sessions using a pre/post test comparison group design with patients from 2 community mental health clinics. The RQP-MH intervention taught participants to identify questions that would help them consider their role, process and reasons behind a decision; and empowerment strategies to better manage their care.
A total of 231 patients participated, completing at least the pretest interview (intervention site: n = 141; comparison site: n = 90).
Four main outcomes were linked to the intervention: changes in self-reported patient activation; changes in self-reported patient empowerment; treatment attendance; and retention in treatment.
Findings show that intervention participants were over twice as likely to be retained in treatment and over 3 times more likely than comparison participants to have scheduled at least 1 visit during the 6-month follow-up period. Similarly, intervention participants attended 29% more of their scheduled visits than comparison patients. There was no evidence of an effect on self-reported patient empowerment, only on self-reported patient activation.
Results demonstrate the intervention’s potential to increase self-reported patient activation, retention, and attendance in mental health care for minority populations. By facilitating patient-provider communication, the RQP-MH intervention may help minorities effectively participate in mental health care.
Our capacity to diminish racial/ethnic disparities in mental health is hampered by low levels of service utilization and retention in care of minority populations.1,2 For effective treatment, clients must establish a collaborative relationship with providers. However, patients rarely state their concerns during medical visits3,4 and usually refrain from asking for information.5
Minority patients are less likely than whites to have a collaborative relationship with providers,6 and might not be as informed about diagnosis and prognosis.7 Consequently, minority patients may think that they lack needed information,8 may be less compliant with treatment,9 and more likely to drop out of care than whites.10 Increasing minority patients’ activation and empowerment in mental health treatment may prevent premature termination when services do not fulfill expectations.
This article evaluates an activation and empowerment strategy for increasing minority patients’ attendance and retention in mental health care. Activation in our study is defined as developing experience with question formulation and building information-seeking skills that result in increased collaboration with the health care provider. This definition of patient activation relates closely to Hibbard and colleagues’ initial stage of activation.11 It focuses on patients being able to tell their concerns to health care providers; to manage symptoms (emotional or mental health); to get information to make decisions about treatment; to take an active role in care (such as contacting the provider if they are not feeling well); to discuss treatment options with the provider; to discuss side effects of medication; and to know how to avoid emotional triggers. However, given the short intervention, we did not include other aspects of Hibbard and colleagues’ definition of activation, such as achieving knowledge of lifestyle changes or knowledge of the nature and causes of health conditions. A more intensive chronic management program might be necessary to achieve these aims, but might be hard to provide within the limited time of our intervention.
For our definition of empowerment, we modified Staples’ definition (the ongoing capacity of individuals or groups to act on their own behalf to achieve a greater measure of control over their lives and destinies12) to make it applicable to health care and health and to view it as a capacity-building process rather than a state. We specifically focus on the capacity-building process whereby individuals increase their belief that they play an active role in their care (ie, taking action to solve their problems), participate in decision-making (seeing themselves as capable in making decisions and feeling confident of the decisions they make) and manage their care to achieve a greater measure of control over their health and their health care process (ie, being able to accomplish what they set out to do, making their plan work). This definition is consistent with previous descriptions of the process of empowerment of mental health patients discussed by Chamberlain and Schene13 and Linhorst and Eckert.14
Patient activation and empowerment interventions could benefit minorities because Latinos and other minorities avoid hostile confrontation15 due to normative expectations which value politeness even when dealing with disappointment. Minorities may also hold traditional role expectations of being passive recipients in the clinical encounter.16,17 Assessments of patients with chronic conditions18,19 indicate that greater self-perceived activation and empowerment can augment satisfaction with care,20 improve health care processes,21,22 ensure receipt of appropriate treatments,23 and enhance health outcomes.24 A review of interventions in patient-provider communication25 showed inconclusive results around the effectiveness of these interventions. Most studies of patient activation and empowerment in health care have not been conducted with minority populations or in a language other than English. This article presents the results of such an evaluation.
The basic assumption behind Right Question Project–Mental Health (RQP-MH) is that as patients practice strategies for obtaining information from providers, they become active participants in care and clarify expectations of treatment, thereby increasing patient-provider dialogue that allows for greater patient involvement and decision-making. Methods for the RQP-MH intervention include a Question Formulation Technique (QFT) and a Framework for Accountable Decision-Making (FADM). The QFT consists of asking patients to generate and revise questions to obtain more informative answers from their providers. We used this methodology because it is a culturally-supported intervention.26 Rather than present individuals with questions others might believe are important, RQP-MH developers found it more meaningful for individuals to formulate their own questions to providers. The FADM teaches participants to identify questions that will help them consider their role in a decision, reveal the decision-making process, and the reasons behind a decision (see Appendix for an example).
Emphasis on patient-provider interaction is associated with beneficial outcomes.27 Meta-analyses carried out in the past 2 decades28,29 have confirmed that the nature of patient-provider interaction is associated with termination status regardless of treatment modality. We hypothesize that increased question-asking and decision-making could improve patient-provider interaction. Enhanced interaction may signal to patients that their opinions are important to providers,23 thereby augmenting satisfaction30 and retention in care (Fig. 1).
We assessed attendance at scheduled visits and retention in care as outcomes reflective of increased patient-provider communication and collaboration. Attendance and retention differ both conceptually and empirically. Conceptually, attendance, defined as the proportion of visits attended of those scheduled,31 deals primarily with the choice that patients have of scheduling and keeping their appointments. It is also important for the health care organization, as increased attendance can reduce wasted resources and therefore provide further incentive for the organization to adopt patient activation and empowerment interventions.32 Retention is defined as remaining in treatment and ensuring proper monitoring for treatment to be effective.33 Empirically, we find that although these constructs are related, they measure different phenomena. For example, if a patient goes to the only visit he has scheduled, but has not completed treatment, he obtains a 100% attendance score and a zero for retention. We selected no fewer than 4 visits over a 6-month period for follow-up or medication monitoring as the criterion for retention (unless care had been completed according to the patient) based on evidence-based treatment guidelines which find that this is the recommended number of visits for the acute and continuation phases of depression, the most common diagnosis in our patient population.34 Given the importance of retention as a measure of quality of care35 and the distinction between these 2 outcomes, we decided to retain both as outcome measures.
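The distinction between the two outcomes can be encoded directly. The sketch below is illustrative only (the study’s analyses were conducted in SAS, and the function names are ours); it applies the worked example of a patient who attends his single scheduled visit without completing treatment:

```python
def attendance_rate(attended: int, scheduled: int) -> float:
    """Attendance: proportion of scheduled visits actually attended."""
    return attended / scheduled if scheduled else 0.0

def retained(visits_in_6_months: int, care_completed: bool = False) -> bool:
    """Retention criterion: 4 or more visits within 6 months,
    unless care was completed according to the patient."""
    return care_completed or visits_in_6_months >= 4

# One visit scheduled, one attended, treatment not completed:
# perfect attendance, yet the patient is not retained.
print(attendance_rate(1, 1))  # 1.0
print(retained(1))            # False
```

The example makes the empirical point concrete: the two measures can disagree for the same patient, so neither is redundant with the other.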
To evaluate this intervention, we used a pre/post test comparison group design with patients from 2 community mental health clinics that serve primarily Latino and other minority patients. Practical considerations made randomization of patients unfeasible, given limited resources to simultaneously offer the intervention in 2 sites. We also opted for a pre/post design because of concerns regarding patient contamination. Given the long waiting periods, patients assigned to usual care could potentially receive the intervention from intervened patients in the same settings. The risk of contamination of this type of intervention is higher if individuals, rather than facilities, are randomized. However, because recruiting the number of facilities required to adequately power a facility-randomized design was prohibitively expensive, we opted for a pre/post test comparison group design.
The intervention site, Clinic A, serves approximately 500 adult outpatients per year with 11 providers: 5 psychiatrists, 4 psychologists, and 2 social workers. The patients are primarily Spanish speaking (83%); Medicaid recipients or uninsured (65%), and have mood disorders (67%). Clinic B, the comparison site, serves over 1500 adult and child outpatients per year with 24 providers: 7 psychiatrists, 5 psychologists, and 12 social workers. Slightly less than half of patients in this clinic are Spanish speaking (45%), but most are on Medicaid or uninsured (62%), and with mood disorder diagnoses (45%). In both clinics, waiting time for an appointment ranged from 3 weeks to 4 months. See Table 1 for characteristics of study participants, which closely match those of the clinics.
The patients sampled at both clinics were predominately female, foreign-born, and unemployed. There were no significant differences across the sites in age distribution, education level, or referral source. Although many patients at both clinics were Latino, the ethnic distributions varied significantly, as did the language of interview and length of time in care before enrollment.
RQP-MH coaching for research staff and BA-level care managers (CMs) consisted of two 4-hour workshops addressing RQP’s fundamental beliefs, principles, and values, and how these relate to an individual’s participation in life decisions.30 It also included practicing with prompts to illustrate how to generate questions about important decisions and select questions which focus on the individual’s role, process, and reason. RQP developers also offered ongoing consultation, meeting approximately once a month with CMs and the CM Supervisor to observe CMs conducting the intervention.
We modified standard RQP protocol to adapt and standardize it for a mental health intervention. We increased the number of patient trainings from 1 to 3 to provide additional opportunities for practice, problem-solving, and feedback. RQP-MH trainings took place at the clinic and lasted 30 minutes each. A manual with written guidelines for each session was developed with corresponding patient materials (available from the authors).
Sessions emphasized shared patient-provider decision-making (empowerment) and preparation for appointments by formulating questions to get information (activation) about patients’ mental illnesses, treatments, and relationships with providers. Participants were scheduled for second and third trainings after they had attended at least 1 appointment with a provider after the first training.
During trainings, hypothetical scenarios were presented to elicit discussion about decision-making in care and patient-provider interactions. Participants were encouraged to identify an issue or decision related to their care to explore further with their provider and to generate potential questions that would better inform them. We included strategies of cognitive-behavioral mental health interventions, including role-plays and homework assignments, to increase participants’ comfort level with asking questions. The trainings also incorporated cultural components36 that could influence minority patients’ experiences when taking an active role in care. CMs were trained to reframe patients’ questioning or information-seeking not as a lack of respect for providers, but as a way to get answers without offending providers’ professional abilities. CMs also handled patients’ hesitance to probe providers by assuring them that asking questions is a way to understand providers’ choices, be helpful to providers, and develop mutual trust. See Appendix for a condensed script of a prototypical training session.
All RQP-MH trainings were audio-recorded. The CM Supervisor listened to randomly selected trainings to monitor whether patients understood the trainings and whether CMs followed protocol and addressed barriers to implementation, and provided feedback during weekly supervision. Adherence was formally monitored by an independent evaluator rating recordings from 15 randomly selected participants (N = 45 recordings) using a 20-point adherence checklist. Sixty percent of participants received an adherence rating of “High” (80% or more) and 40% a rating of “Medium” (60–79%). No trainings were rated as “Low” (59% or less).
Patient recruitment occurred from October 2004 through January 2006. CMs were available on-site 3 days a week in both clinics. Recruitment was done primarily by asking clinicians or administrative staff to inform patients about the study. CMs also met individually with providers to ask them to refer potential participants who met basic study eligibility (eg, were not in crisis and met age criteria). Flyers were posted at clinics so that interested patients could contact study staff directly. Finally, CMs identified weekly intake slots and met with patients before clinical appointments. After potentially eligible participants approached CMs, they were screened for eligibility and, if eligible, underwent informed consent and were enrolled in the study.
Research staff contacted 342 patients referred for participation [229 at the intervention site (Clinic A) and 113 at the comparison site (Clinic B)]. Of these, 231 were eligible, agreed to participate, and completed at least the pretest interview (intervention site: n = 141; comparison site: n = 90). Excluded from participation were those younger than 18 or older than 65; in crisis or actively psychotic; and with significant comprehension difficulties. Patients in Clinic A received 3 RQP-MH trainings and 4 assessments (1 baseline and 3 follow-up). The first training was conducted immediately after baseline assessment, whereas the second and third trainings were preceded by at least 1 appointment with a provider and a follow-up assessment. Patients at Clinic B served as the comparison group, receiving treatment as usual. The patients in Clinic B were assessed by repeated measures at baseline and after 6–8 weeks. In both settings, we tracked all mental health and medical visits. Changes in self-reported activation and empowerment were compared to evaluate intervention effects. Participants were given gift cards for participation: a total of $75 at the intervention site and $50 at the comparison site.
Four main outcomes were linked to the intervention: changes in self-reported patient activation; changes in self-reported patient empowerment; treatment attendance (defined as the number of visits attended divided by the number of visits scheduled); and retention in treatment (defined as 4 or more visits during the 6-month follow-up period). A modified version of the Empowerment Scale34 was used to evaluate changes in patient empowerment resulting from the intervention. The scale includes 10 items (α = 0.83 in Spanish and α = 0.82 in English using baseline data; representative item: “I am usually confident about the decisions I make”). Participants rated these items using a 10-point scale from 1 (“none of the time”) to 10 (“all of the time”). To create an empowerment score, we first summed each subject’s response to the 10 individual items that comprised the scale. In the baseline period, we averaged this summed response across all participants and sites (intervention and comparison sites) and calculated the standard deviation. For each subject, we then subtracted the sample mean from their summed response and divided the difference by the standard deviation. Each empowerment score could thus be interpreted as the number of standard deviations the subject’s response is above (or below) the sample baseline mean. Increasing values correspond to more patient empowerment. Changes in patient empowerment were calculated by subtracting the baseline empowerment measure from the measure at the last follow-up assessment.
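The scoring procedure just described amounts to z-standardization of summed scale responses against the pooled baseline sample. A minimal sketch, assuming hypothetical summed responses (not study data; the function name is ours):

```python
import statistics

def standardized_scores(summed_responses):
    """Center each participant's summed scale response at the pooled
    baseline mean and scale by the baseline standard deviation."""
    mean = statistics.mean(summed_responses)
    sd = statistics.stdev(summed_responses)
    return [(x - mean) / sd for x in summed_responses]

# Hypothetical summed 10-item responses (each item rated 1-10,
# so sums range from 10 to 100)
baseline = [55, 70, 62, 48, 80]
z = standardized_scores(baseline)
# Each value in z is the number of standard deviations above
# (positive) or below (negative) the sample baseline mean.
```

Because the transformation is linear, the standardized scores have mean 0 and standard deviation 1 at baseline, which is what makes statements like "a monthly increase of 0.09 standard deviations" interpretable.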
We used a modified version of the Patient Activation Scale36,37 that included a total of 9 items (α = 0.82 in Spanish and α = 0.75 in English using baseline data; representative item: “I have discussed my treatment options with my care provider”). Participants used a scale with response categories ranging from 1 (“none of the time”) to 10 (“all of the time”). The same strategy used for the empowerment scale score was used to obtain the activation scale scores. Missing item responses were minimal (only 5 cases for a few items), so we used mean case substitution for missing items. Changes in patient activation were calculated by subtracting the activation measure at enrollment from the measure at the last follow-up assessment.
Treatment attendance was assessed first by creating a binary variable indicating whether the subject had at least 1 scheduled visit within 6 months after the last follow-up assessment, and second, by a ratio of the number of visits kept over those that were scheduled for participants who had at least 1 scheduled visit. Retention was measured as a binary variable, assuming a value of 1 if the participant had 4 or more visits within 6 months or completed treatment after the post-test; 0 otherwise. The questionnaire also included questions about the participant’s age, gender, ethnicity, and education.
Administrative data were obtained from each clinic to describe outpatient populations. For those enrolled in the study, consent to retrieve mental health diagnosis, length of treatment, and scheduled and attended appointment data was obtained to assess attendance and retention.
We computed means and frequencies of outcome variables and characteristics for all participants, and then stratified by intervention status and by “novice” or “veteran” status. T tests or χ2 tests were used to test for differences between participants at the intervention and comparison sites for continuous and discrete-valued variables. All models included a binary variable indicating whether the subject was assigned the intervention and several demographic variables including age at study entry, sex, race/ethnicity, and educational status. Although language of interview differed between the intervention and comparison groups, this variable was highly correlated with participant’s race/ethnicity and was not included as an additional covariate. Our approach to inclusion of covariates was based on both clinical and statistical considerations (eg, variability in the covariate). We avoided a purely empirical approach that would include in the model only those covariates observed to have a statistically significant bivariate relationship with the outcome. Because we conjectured that effectiveness of the intervention for patients new to the mental health system (“novices”) may differ from that for patients having experience in their respective clinics (“veterans”), we determined whether there was an interaction between novice status and intervention status. Finally, we reestimated all outcome models including both diagnosis and disability status. The point and interval estimates of our main effect of interest, the intervention-time interaction, did not change. Moreover, diagnosis category was not statistically significant (when including all other covariates), with the smallest P value of 0.51. For reasons of parsimony we did not include these 2 variables in the models.
For the analysis of activation and empowerment, each subject contributed a minimum of 2 observations, 1 at baseline and 1 postintervention, permitting us to account for potential baseline floor and ceiling effects. Because of repeated measurements in these analyses, we also included the time of subject’s measurement (measured as months from baseline) and the interaction of time of measurement with the intervention indicator. To determine if difference in the rates of change between intervention and comparison participants was significantly larger or smaller between novices and veterans, we also included the interaction between novice, time of intervention, and the intervention indicator. We estimated random regression models that included participant-specific intercepts that corresponded to baseline outcome and participant-specific slopes that corresponded to monthly rates of change in outcomes to account for heterogeneity between participants. A statistically significant interaction term would indicate the intervention was associated with changes in reported empowerment or activation. Our estimate of the intervention effect was the average difference in monthly rate of change in empowerment (or activation) for intervention subjects compared with comparison subjects.
To assess effectiveness of the intervention on treatment attendance, we first estimated a logistic regression model linking the probability of having any scheduled follow-up treatment as a function of intervention status and patient demographic characteristics. For patients having at least 1 scheduled visit, we next estimated a regression model linking number of scheduled visits attended to the logarithm of the total number of scheduled visits, intervention status, and demographic characteristics. This provided a method to estimate treatment attendance rate ratios adjusted for baseline characteristics for the intervention group compared with the comparison group. We assumed the number of scheduled visits followed a Poisson distribution. For patient retention, a logistic regression model of the probability of having 4 or more visits during the 6-month period post-baseline was linked to patient characteristics and intervention status. In all analyses regarding attendance and retention, we focused on the size and statistical significance of the coefficient for the intervention indicator. We estimated adjusted odds for treatment attendance in the intervention group relative to the comparison group; difference in rates of treatment attendance between the intervention group and the comparison group for those having scheduled visits; and adjusted odds of treatment retention in the intervention group relative to the comparison group. All models were estimated using the SAS software system.37
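The adjusted odds ratios and confidence intervals reported in the Results come from exponentiating fitted logistic coefficients and their Wald limits. A sketch of that final step (the coefficient and standard error below are illustrative values chosen by us, not the study’s estimates):

```python
import math

def wald_odds_ratio(beta: float, se: float, z: float = 1.96):
    """Convert a logistic regression coefficient and its standard
    error into an odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative coefficient and standard error only
or_est, lo, hi = wald_odds_ratio(1.02, 0.375)
# or_est ~ 2.77 with CI roughly (1.33, 5.78)
```

Because the confidence limits are computed on the log-odds scale and then exponentiated, the resulting interval is asymmetric around the odds ratio, as is visible in the intervals reported below.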
At the intervention site, 94% of participants completed the first follow-up interview, 82% completed the second, and 76% completed the third. At the comparison site, 97% completed the only follow-up interview. Overall, 231 people completed the pretest, of whom 10 withdrew or were lost to follow-up before a post-test interview, resulting in 221 participants completing at least 1 post-test: 134 from Clinic A and 87 from Clinic B. The average length of time between baseline and last follow-up assessment was 74.1 days (SD = 35.2) at Clinic A and 70.9 days (SD = 35.7) at Clinic B.
Baseline raw activation (Clinic A: mean = 7.17, SE = 0.13; Clinic B: mean = 7.21, SE = 0.17) and empowerment scores (Clinic A: mean = 6.66, SE = 0.14; Clinic B: mean = 6.65, SE = 0.17; data not shown) were similar in the 2 clinics. Postintervention scores showed significant increases among intervention patients’ activation scores (Clinic A: mean = 8.03, SE = 0.13; Clinic B: mean = 7.71, SE = 0.17) but not empowerment scores (Clinic A: mean = 7.28, SE = 0.13; Clinic B: mean = 6.89, SE = 0.16). After adjusting for patient’s age, sex, race/ethnicity, educational and novice status, we found a statistically significant effect of the intervention on self-reported patient activation (P = 0.049), with an estimated monthly increase in activation of 0.09 standard deviations in intervention participants over comparison participants. There was no statistically significant interaction among novice, intervention, and time for patient empowerment. Eliminating the interaction term from the model, there was no statistically significant effect of the intervention on patient empowerment (P = 0.151) (Table 2). The estimated monthly rate of change in empowerment, although small, 0.07 of a standard deviation, was larger in the intervention group than the comparison group. African Americans had lower baseline patient activation compared with whites (P = 0.05, Table 2).
After adjusting for age, sex, race/ethnicity, education, and novice status, intervention participants were over twice as likely to be retained in treatment (adjusted OR = 2.78, 95% CI = 1.33–5.79; Table 3). In terms of attendance, intervention participants were over 3 times more likely than comparison participants (adjusted OR = 3.42, 95% CI = 1.02–11.41) to have at least 1 scheduled follow-up visit. Race/ethnicity is not included in Table 3 because all non-Latino patients, except 1, had at least 1 visit scheduled during the 6 months after final assessment. Intervention participants attended scheduled visits at a 29% higher rate than comparison participants (rate ratio = 1.29, 95% CI = 1.16–1.43). We evaluated whether adherence in the RQP-MH trainings was related to changes in activation and empowerment. Greater pre/post changes were evident in the High adherence group than the Medium group (data not shown).
Our results illustrate the promise of the RQP-MH intervention for increasing patient activation, attendance, and retention in mental health care of minorities. Studies suggest that patient communication trainings can change the patient-provider interaction.25 The fact that patient activation did not vary with novice status suggests that changing the dynamics of patient-provider interaction is possible even in established patient-provider relationships that might be typically seen as resistant to modification.
The RQP-MH training prepares patients to ask questions during appointments and get information from providers, leading to improved attendance and retention in care. These findings are consistent with work that found that a collaborative relationship is related to treatment retention for drug abuse,38,39 alcohol abuse,40 and family therapy.41 Implementing such interventions in safety net hospitals can help decrease the problem of no-shows and increase retention in care of minority populations that have been linked to service disparities. However, our assumption that most providers would welcome patient activation and empowerment did not prove entirely true. Some providers found increased questioning by patients challenging, because they did not always have answers. Some patients reported feeling discouraged when providers inquired why patients were asking questions now, after years in treatment. These findings lead us to recommend adding a provider component to the intervention to facilitate receptivity of patient activation and empowerment.
There are several limitations to the study. We did not screen patients for cognitive impairment. The intervention’s effectiveness might be enhanced by such screening. Including more visual aids of key concepts might facilitate skill retention, particularly when cognitive processing problems exist. We also did not randomize patients because of practical considerations and risk of patient contamination, although this could have imposed threats to internal validity, leading to unidentified differences across sites or patients. We undertook analysis to include observable differences across sites to minimize threats to internal validity. However, future evaluations of the RQP-MH intervention require random assignment of participants under a more resource-intense design. Finally, due to financial constraints we were unable to conduct the same number of follow-up assessments in Clinics A and B. This difference in contact with research staff may confound intervention effects. However, even with 4 assessments, patients at Clinic A dropped out, suggesting other considerations beyond contact intensity.
Our findings do not show evidence of an effect of the intervention on patient empowerment. Upon reviewing the intervention components, we saw that the intervention had a limited focus in developing patients’ confidence in decision-making and feelings of control regarding their care. Levinson and colleagues8,16 found that patient decision-making and empowerment was linked to greater educational attainment and perceived “excellent” health status. This was not the case for our patient population, which had poor health status and low educational attainment. We propose to enhance the intervention by including elements of illness self-management and more practice in decision-making to help increase patients’ confidence in managing illness and in deciding about health care.
We hypothesize that patients were unable to increase empowerment mainly due to limited health literacy. Minority patients may be at greater risk of assuming that they cannot challenge a provider’s decision. Increasing the length and/or number of sessions in the RQP-MH intervention may provide support for practicing respectful approaches that do not compromise patient-provider relationships. Asking for this affective shift in empowerment for patients who suffer from mental illness might be challenging, and less dependent on the intervention itself. Although we found that diagnosis and disability status were not significant in explaining changes in activation or empowerment, certain symptoms—such as hopelessness and feeling a lack of control over one’s circumstances—could play a role in patient cognition and assertiveness, and should be investigated further. Including behavioral measures of empowerment is recommended for future studies, as well as evaluating whether hopelessness and lack of control predict who will benefit from the intervention. Future studies focusing on the providers’ response to patient questions as part of quality care are also needed. Similarly, the value of providing incentives to restructure the clinical encounter for more collaborative interchanges should be investigated as a way to encourage patient activation for patients to better manage their care and to promote utilization and retention.
Supported by Grant No. P20 MD000537 from the National Center on Minority Health and Health Disparities (NCMHD).
The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the NCMHD.
Dr. Polo’s work in this study was carried out while in residence at the Center for Multicultural Mental Health Research, as an NIMH Postdoctoral Fellow for the Family Research Consortium-IV.