Treatments for alcohol use disorders typically have been abstinence-based, but harm reduction approaches, which encourage drinkers to alter their drinking behavior to reduce the probability of alcohol-related consequences, have gained in popularity. The current study examined the effectiveness of a contingency management procedure in reducing alcohol consumption among heavy drinkers.
Eighty non-treatment-seeking heavy drinkers (ages 21–54, M = 30.20) who did not meet diagnostic criteria for alcohol dependence participated in the study. The study had three phases: 1) an Observation phase (4 weeks), during which participants drank normally; 2) a Contingency Management phase (12 weeks), during which participants were paid $50 weekly for keeping alcohol consumption low, as verified by transdermal alcohol concentrations below 0.03 g/dl; and 3) a Follow-up phase (12 weeks), during which participants (n = 66) returned monthly for 3 months to self-report drinking after the contingencies were removed. Transdermal alcohol monitors were used to verify that contingency requirements were met; all other analyses were conducted on self-reported alcohol use.
On average, 42.3% of participants met the contingency criteria and were paid an average of $222 during the Contingency Management phase, with an average of $1998 in total compensation throughout the study. Compared to the Observation phase, the percent of any self-reported drinking days significantly decreased from 59.9% to 40.0% in the Contingency Management phase and 32.0% in the Follow-up phase. The percent of self-reported heavy drinking days also significantly decreased, from 42.4% in the Observation phase to 19.7% in the Contingency Management phase, accompanied by a significant increase in the percent of self-reported days of no drinking (from 40.1% to 60.0%) and low-level drinking (from 9.9% to 15.4%). Self-reported reductions in drinking either persisted, or became more pronounced, during the Follow-up phase.
Contingency management was associated with a reduction in self-reported episodes of heavy drinking among non-treatment-seeking heavy drinkers. These effects persisted even after incentives were removed, indicating the potential utility of contingency management as a therapeutic intervention to reduce harmful patterns of drinking.
In 2012, 3 million adult Americans received treatment for heavy drinking or alcohol use disorders (SAMHSA, 2013). Most treatments are abstinence-based, but those focusing on reducing the frequency of harmful behaviors are becoming increasingly common (Marlatt and Witkiewitz, 2002). Harm reduction approaches encourage problem drinkers to alter their drinking behavior to reduce the probability of alcohol-related consequences (Marlatt and Witkiewitz, 2002). Such non-abstinence approaches are often better regarded by both patients and clinicians (e.g., Rosenberg and Davis, 2014; Adamson et al., 2010; Skewes and Gonzalez, 2013; Adamson and Sellman, 2001). Importantly, problem drinkers receiving harm reduction treatments have similar health outcomes as those who abstain from alcohol (Dawson et al., 2007; Kline-Simon et al., 2013).
Contingency management provides positive reinforcement in the form of cash, prizes, or vouchers to patients with substance use disorder if they meet predefined treatment goals (Higgins and Petry, 1999). Usually these goals involve refraining from drug use, and contingencies are given based on objective verification of abstinence (e.g., Alessi et al., 2007; Godley et al., 2014; McDonell et al., 2012; Petry et al., 2012; Petry et al., 2000; Alessi and Petry, 2013). Most studies using contingency management have focused on drugs whose metabolites are easily measured and have a large window of detection, e.g., cannabis (Kaminer et al., 2014), nicotine (Javors et al., 2011; Ledgerwood et al., 2014; Secades-Villa et al., 2014), cocaine (Festinger et al., 2014; Weiss and Petry, 2014; Greenwald et al., 2014; Garcia-Fernandez et al., 2013), and opioids (Ghitza et al., 2008; Olmstead and Petry, 2009). The window of detection for alcohol is much shorter than other abused substances, thus frequent monitoring of breath, blood, or urine biomarkers is required (Alessi and Petry, 2013; Petry et al., 2000). Recently, contingency management treatments have used newer technologies that allow continuous monitoring of alcohol use, including transdermal alcohol monitoring. Specifically, we have used transdermal alcohol monitoring to distinguish heavy drinking episodes from lower, more moderate drinking levels (Dougherty et al., 2012; 2014a; Hill-Kapturczak et al., 2014b). Two pilot studies have demonstrated the potential utility of transdermal alcohol monitoring either for promoting abstinence (Barnett et al., 2011) or moderation (Dougherty et al., 2014).
Previously, we showed that financial contingencies to reinforce subjects for keeping their transdermal alcohol concentration below 0.03 g/dl successfully reduced drinking during a one-month period (Dougherty et al., 2014). We also found preliminary evidence that these effects persisted after the contingency was removed.
The present study expands on this earlier work by implementing a longer contingency period (3 months) and evaluating drinking for 3 months after the contingency was removed. Participants included a large group of drinkers who regularly exceeded the NIAAA’s at-risk drinking criteria (>3 drinks for women or >4 drinks for men in one day, NIAAA, 2010). There were three sequential study phases: Observation, Contingency Management, and Follow-up. In the Observation phase, participants drank normally for 4 weeks. In the Contingency Management phase, participants received reinforcement weekly for 12 weeks when drinking was reduced to lower levels (determined through transdermal alcohol monitoring). In the Follow-up phase, the contingencies were removed and drinking patterns were assessed by self-report monthly for 3 months. Interviews to characterize alcohol consumption were conducted across all phases. We hypothesized that, compared to the 4-week Observation phase, drinking would (1) be reduced during the 12-week Contingency Management phase, and (2) remain lower during the 3-month Follow-up phase.
Eighty-two healthy non-treatment-seeking drinkers (ages 21–54) were recruited from the community using television, newspaper, flyer, and internet advertisements. Potentially eligible respondents were invited to the laboratory to complete an in-depth screening. Individuals were included if they self-reported patterns of drinking that met or exceeded the NIAAA (2010) “at-risk” drinking criteria (>3 drinks for women or >4 for men at least three times in the prior 28 days) as assessed using a Timeline Followback interview (described below). Screening included a substance use history, psychiatric screening (Structured Clinical Interview for DSM-IV-TR Axis I Disorders; First et al., 2001), intelligence assessment (Wechsler Abbreviated Scale of Intelligence; Wechsler, 1999), urine drug and pregnancy tests, and a physical examination. Individuals were excluded for IQ < 70, current Axis I disorder, history of substance dependence (including alcohol dependence but not alcohol abuse), a positive urinalysis for drugs of abuse (cocaine, THC, opiates, barbiturates, benzodiazepines, and methamphetamine), pregnancy, or a medical condition that would contraindicate alcohol consumption. All participants gave written informed consent, and the protocol was approved by the Institutional Review Board at The University of Texas Health Science Center at San Antonio.
There were three sequential study phases: Observation, Contingency Management, and Follow-up (described below). On each clinic visit within each phase, participants completed a Timeline Followback interview (TLFB; Sobell and Sobell, 1992) to assess daily alcohol use. The TLFB is a semi-structured interview that uses a calendar and other memory aids (e.g., holidays) to enhance recall of daily drinking. Information about alcohol type, brand, size, and hours of drinking was recorded and then converted into standard units of alcohol based on alcohol-by-volume percentage. At study entry, the previous 28 days of self-reported drinking behavior were recorded. During the Observation and Contingency Management phases, a TLFB interview was conducted weekly, and during the Follow-up phase, monthly.
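The conversion from reported beverages to standard drink units can be sketched as follows. This is a minimal illustration, not the study's actual conversion tables: the US definition of 14 g of pure ethanol per standard drink and the helper names are assumptions.

```python
# Sketch: convert a reported beverage into standard US drink units.
# Assumes 14 g of pure ethanol per standard drink and an ethanol
# density of 0.789 g/mL; the TLFB conversion tables used in the
# study may differ in detail.
ETHANOL_DENSITY_G_PER_ML = 0.789
GRAMS_PER_STANDARD_DRINK = 14.0

def standard_drinks(volume_ml: float, abv_percent: float) -> float:
    """Return standard drink units for a beverage of given size and ABV."""
    ethanol_grams = volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML
    return ethanol_grams / GRAMS_PER_STANDARD_DRINK

# A 355 mL (12 oz) beer at 5% ABV works out to about 1 standard drink.
print(round(standard_drinks(355, 5.0), 2))
```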
Secure Continuous Remote Alcohol Monitors (SCRAM-II™, Alcohol Monitoring Systems Inc. [AMS], Highlands Ranch, CO) were used to measure transdermal alcohol concentrations (TAC) during the Observation and Contingency Management phases. These devices take readings approximately every 30 minutes. Data were downloaded from the monitor weekly using Direct Connect™ (AMS, Littleton, CO). Participants were paid $25 for each clinic visit and $80 each week for wearing the ankle monitor. Incentives were earned when participants did not have three consecutive TAC readings exceeding 0.03 g/dl on any given day (with positive TAC readings confirmed by AMS) during the previous week; this threshold generally corresponds to low-level drinking of about 1–2 beers (Dougherty et al., 2012; 2014a; Hill-Kapturczak et al., 2014b).
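Under our reading of the rule above, the weekly criterion could be checked roughly as follows. This is a sketch only; the function names are hypothetical, and the actual AMS confirmation logic for positive readings is not reproduced here.

```python
# Sketch of the weekly contingency check: a week fails if, on any day,
# three consecutive confirmed TAC readings exceed 0.03 g/dl.
TAC_CRITERION = 0.03  # g/dl

def day_fails(readings):
    """True if any run of 3 consecutive readings all exceed the criterion."""
    run = 0
    for tac in readings:
        run = run + 1 if tac > TAC_CRITERION else 0
        if run >= 3:
            return True
    return False

def week_meets_criteria(daily_readings):
    """daily_readings: 7 lists of half-hourly TAC values, one per day."""
    return not any(day_fails(day) for day in daily_readings)

# One or two elevated readings (e.g., a single drink) do not fail a day:
print(day_fails([0.0, 0.02, 0.035, 0.031, 0.01]))  # only two consecutive highs
print(day_fails([0.0, 0.04, 0.05, 0.04, 0.01]))    # three consecutive highs
```

This run-length formulation reflects why the study could reinforce moderation rather than abstinence: brief, low-level positive readings do not by themselves forfeit the incentive.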
Participation began with a 4-week Observation phase. No restrictions were placed on alcohol consumption during this phase, but alcohol consumption was monitored by weekly TLFB interviews. Beginning with this phase, participants wore a transdermal alcohol ankle monitor. Participants were instructed that they could drink as usual, and were asked to visit the clinic weekly for approximately 30 minutes. During clinic visits, data were downloaded from the ankle monitor and participants completed a TLFB interview.
After completing the Observation phase, participants then entered a 12-week Contingency Management phase where monetary incentives were provided weekly if the specified contingencies were met. During these weekly visits, participants’ ankle monitor data were downloaded. After data were evaluated, a $50 incentive was paid as long as TAC readings on each day of that week met the < 0.03 g/dl criteria (see Dougherty et al., 2014); this contingency was solely based on ankle monitoring data. At the beginning of the contingency phase, participants were told that for most individuals, a single drink would not exceed the criterion, but multiple drinks might. After participants were (or were not) paid their $50 incentive payment, TLFB interviews were conducted to assess drinking during the prior week. Contingency payments were always given before the TLFB interview to reduce reporting bias, and individuals were instructed that self-reported data would be collected without consequence. At the completion of this phase, ankle monitors were removed.
During the Follow-up phase, participants were told that no contingencies were associated with their drinking and that they could drink as usual. They were asked to return to the clinic monthly for 3 months and complete a 28-day TLFB interview at each visit. Participants were paid $85 for each follow-up visit.
Participant characteristics were summarized using descriptive statistics. Sex differences were examined using t tests or chi-squared tests for continuous and categorical variables, respectively.
TLFB interview information was converted into standard drink units. Four levels of drinking were determined for each day: a) no drinking (units = 0); b) low drinking (units >0 but <3); c) moderate drinking (units ≥3 but <4 for women; ≥3 but <5 for men); and d) heavy drinking (units ≥4 for women; ≥5 for men). For each participant, daily drinking self-reports were consolidated across each week to compute the percentage of days of drinking at each of four levels: 1) any drinking (defined as days of low, moderate, or heavy drinking, but not "no drinking"); 2) low drinking; 3) moderate drinking; and 4) heavy drinking. Data from the ankle monitor were used to determine the percentage of participants who met criteria throughout the week. No data imputation was used for participants who dropped out before the end of each phase, and all analyses were based on available data, using random regression mixed-model techniques in SAS (Version 9.3, SAS Institute, Inc., Cary, NC). A sensitivity analysis was considered by imputing the missing outcome of dropouts as "failing criteria;" however, those analyses obtained similar results and therefore were not considered further.
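The daily classification scheme described above can be expressed compactly. A minimal sketch, using the sex-specific cutoffs given in the text; the function and variable names are illustrative, not taken from the study's analysis code.

```python
# Classify a day's self-reported standard drink total into the four
# levels used in the analyses (cutoffs per the study definitions:
# heavy is >=4 units for women, >=5 for men).
def classify_day(units: float, sex: str) -> str:
    heavy_cutoff = 4 if sex == "female" else 5
    if units == 0:
        return "none"
    if units < 3:
        return "low"
    if units < heavy_cutoff:
        return "moderate"
    return "heavy"

def percent_days(week_units, sex, level):
    """Percent of days in a week of drink totals classified at `level`."""
    days = [classify_day(u, sex) for u in week_units]
    return 100.0 * days.count(level) / len(days)

week = [0, 0, 2, 3, 6, 0, 1]  # one week of daily standard-drink totals
print(classify_day(3, "female"))            # moderate for a woman
print(percent_days(week, "male", "heavy"))  # one heavy day of seven
```

Note that 3–4 units in a day counts as moderate for a woman but 4 units counts as heavy, mirroring the NIAAA-derived sex-specific thresholds.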
A repeated measures logistic regression model using the generalized estimating equations (GEE) method was fitted to examine the binary outcome of whether or not subjects met the TAC criteria on each week of each phase (Observation, Contingency Management) even though subjects were paid contingent upon these results only during Contingency Management. The percent of subjects meeting criteria was based upon only the number of subjects whose TAC data met the criterion divided by the 80 subjects considered eligible for analysis in order to provide a conservative estimate of success unaffected by drop-out. Increasing or decreasing linear trends across weeks within each phase were examined by repeated measures logistic regression treating week as a continuous variable and also by examination of study phase, week, and the interaction between phase and week. Then, for each drinking classification (i.e., heavy, moderate, low), the percentage of days under each classification was examined by mixed models (SAS PROC MIXED) with an unstructured covariance matrix evaluating the effects of phase and week nested within phase. Model-based Tukey’s multiple comparisons tests were used to examine contrasts between phases. All statistical tests were conducted with 2-sided significance levels of 0.05 using SAS.
A total of 82 people entered the Observation phase (see Figure 1). Two participants were not included in the final analyses because one dropped out before starting the Contingency Management phase and one completed only one week of the Contingency Management phase. The final analyses are based upon the 80 subjects (Mean Age = 30.20, SD = 8.57) who completed two or more weeks of the Contingency Management phase. Of these 80 participants, four each had a single week of missing data within the Contingency Management phase due to monitor failure. Notably, 62 participants provided complete data with no missing observations. Demographic and drinking characteristics of the final sample are presented in Table 1. Participants averaged 30 years of age, were height and weight proportionate, and had about 10 "at-risk" drinking days in the month prior to study entry. A higher proportion of women than men were Hispanic, and men reported slightly more drinks per drinking day than women.
Of the 80 people who entered the Contingency Management phase, 14 withdrew for various reasons including: job-related (n = 7), relocation (n = 1), vacation (n = 1), monitor noncompliance (n = 1), pregnancy (n = 1), or unknown (n = 3). The specific phases of individual drop-out are shown in Figure 1. As a result, a total of 66 participants completed the Contingency Management Phase. We compared demographics (sex, age, BMI, ethnicity, and race), average drinks per drinking day, at-risk drinking days, and total number of binges in the 28 days before the Observation phase between dropouts (n = 14) and completers (n = 66); no statistical differences were observed (see Table 2). On average, participants received $1998 in total compensation, including incentive payments earned during the 3-month Contingency Management phase.
Figure 2 shows the percent of individuals who met contingency criteria across any given week of the Observation and Contingency Management phases. Repeated measures logistic regression found a significant effect of phase [χ2(1) = 40.89, p < 0.001] and a significant effect of week within phase [χ2(14) = 29.24, p = 0.010]. The odds of meeting the contingency criteria were 7-fold greater for the Contingency Management phase compared to the Observation phase (odds ratio = 7.44, 95% CI = 4.39 – 12.60). Linear trends across weeks differed significantly by phase (p = 0.014) revealing that the odds of achieving the TAC criteria increased during the Observation phase (p < 0.001) but remained stable over the 12 weeks of Contingency Management (p = 0.710).
Of participants who dropped out, on average 36.9% met the contingency criteria during their participation, versus 42.3% of those who completed the Contingency Management phase (p = 0.39). The comparability of these numbers supports the decision to not employ any imputation analyses accounting for differential drop-out under various assumptions.
The percentage of days each week where participants self-reported any drinking can be seen in Figure 3. There was a significant main effect of phase [F(2,1882) = 189.02, p < 0.001]. Pairwise comparisons showed that compared to the Observation phase, the percent of any drinking days decreased in the Contingency Management (59.9% vs. 40.0%, p < 0.001, 95% confidence intervals (CI): 16.64% to 23.21%) and Follow-up phases (59.9% vs. 32.0%, p < 0.001, 95% CI: 24.50% to 31.22%), and compared to the Contingency Management phase, the percent of any drinking days decreased during the Follow-up phase (p < 0.001, 95% CI: 5.48% to 10.39%). There was no significant main effect of week within any phase (p = 0.22), but there was a significant effect of phase [F(2,1909) = 3.40, p = 0.03] on the linear trend of weeks within phase. The percent of any drinking days showed a slight, but non-significant (slope = −1.55% per week, p = 0.14), decreasing trend during the Observation phase, and a significant (though slight, slope = 0.52% per week, p = 0.02) increasing trend during the Contingency Management phase, followed by a slight (but not significant, slope = −0.10% per week, p = 0.66) decreasing trend during Follow-up.
The percent days per week with no drinking (see Figure 3) showed a significant effect of study phase [F(2, 1882) = 189.02, p < 0.001]. Pairwise comparisons indicated that in comparison to the Observation phase, no drinking was increased in both the Contingency Management (40.1% vs. 60.0%, p < 0.001, 95% CI: −22.67% to −17.18%) and the Follow-up phase (40.1% vs. 68.0%, p < 0.001, 95% CI: −30.67% to -25.05%). No drinking also was increased significantly in the Follow-up phase in comparison to the Contingency Management phase (p < 0.001, 95% CI: −9.99% to −5.88%). Furthermore, the effect of week within phase differed significantly by phase [F(2, 1909) = 3.40, p = 0.03]. Although no drinking showed a slight decline over the 12 weeks of the Contingency Management phase (slope = −0.52% decrease per week, p = 0.02), this trend was not observed during either the Observation or Follow-up phases (both p values > 0.10).
Figure 3 also shows the percent days each week that participants reported low drinking (> 0 units but < 3 units) during the three phases. A significant main effect of phase [F(2,1882) = 114.28, p < 0.001] indicated that low-level drinking varied by study phase. Compared to the Observation phase, low-level drinking was increased in the Contingency Management phase (9.9% vs. 15.4%, p < 0.001, 95% CI: −7.55% to −3.45%) and significantly decreased in the Follow-up phase (9.9% vs. 5.5%, p < 0.001, 95% CI: 2.72% to 6.47%). Low level drinking during the Follow-up phase was also significantly less in comparison to the Contingency Management phase (p < 0.001, 95% CI: 8.33% to 11.40%). There was no significant main effect of week within any phase [F(25, 1882) = 0.75, p = 0.81] and no significant linear trends by phase [F(2,1909) = 0.12, p = 0.89] or within each phase (all p values > 0.50).
The percent days per week that participants reported moderate drinking (≥ 3 units but < 4 units for women; ≥ 3 units but < 5 units for men) during the three phases also showed a significant main effect of phase [F(2,1882) = 9.94, p < 0.001] (see Figure 3). Compared to the Observation phase, pairwise comparisons indicated that moderate drinking was significantly decreased in both the Contingency Management phase (7.6% vs. 4.9%, p < 0.001, 95% CI: 1.18% to 4.12%) and the Follow-up phase (7.6% vs. 5.0%, p < 0.001, 95% CI: 1.07% to 4.07%). There were no differences in moderate drinking between Contingency Management and Follow-up phases (p = 0.98, 95% CI: −1.18% to 1.02%). There was no significant main effect of week within any phase [F(25, 1882) = 1.01, p = 0.44], nor were there any significant linear trends by phase [F(2, 1913) = 2.28, p = 0.10] or within each phase (all p values > 0.06).
The percentage of days that participants reported heavy drinking (≥ 4 units for women; ≥ 5 units for men) also revealed significant effects of study phase [F(2,1882) = 163.37, p < 0.001] (see Figure 3). Compared to the Observation phase, pairwise comparisons indicated significant reductions in heavy drinking in the Contingency Management (42.4% vs. 19.7%, p < 0.001, 95% CI: 19.73% to 25.83%) and Follow-up (42.4% vs. 21.5%, p < 0.001, 95% CI: 17.86% to 24.09%) phases. No differences were observed between the Contingency Management and Follow-up phases (p = 0.15, 95% CI: −4.09% to 0.47%), indicating that contingency-related reductions in heavy drinking persisted into follow-up. The effect of week within any phase was not significant [F(25, 1882) = 1.46, p = 0.07], and we found no significant differences in linear trends by phase [F(2,1909) = 2.01, p = 0.13]. Although heavy drinking slightly increased during the Contingency Management phase (slope = 0.41% increase per week, p = 0.04), this trend was not observed during either the Follow-up or Observation phases (both p values > 0.20).
During the Observation phase, where no contingency was actually applied, participants self-reported consuming a lower number of standard drinks per week when they met TAC contingency criteria (M = 8.63, SD = 1.43) compared to weeks when they failed to meet TAC contingency criteria (M = 32.60, SD = 1.12). During the Contingency Management phase, participants again self-reported consuming a lower number of standard drinks per week when they met TAC contingency criteria (M = 2.70, SD = 0.77) compared to when they failed to meet TAC contingency criteria (M = 24.21, SD = 2.84). Collectively, these data indicate that participants consistently reported consuming more standard drinks on those weeks when they failed to meet contingency and consistently reported lower levels of consumption on those weeks when they met contingency.
In this study, a contingency management procedure significantly reduced self-reported harmful drinking in a large sample of non-treatment-seeking heavy drinkers. Compared to the 4-week Observation phase, moderate and high levels of self-reported problematic drinking (1) were reduced during the 12-week Contingency Management phase and (2) remained lower during the 3-month Follow-up phase. Though our primary analyses of classified drinking level were based upon self-reported drinking derived from validated TLFB interviews, those findings were entirely consistent with the TAC readings used to document whether participants met the contingency criteria. As observed in our previous pilot study (Dougherty et al., 2014), a substantial proportion (39.5% across the 12 weeks) of participants met the contingency criteria during the Contingency Management phase, and were successful at maintaining TAC levels consistently below 0.03 g/dl. More specifically, we demonstrated that in comparison to the Observation phase, self-reported heavy and moderate drinking was reduced and no drinking and low drinking was increased during the Contingency Management phase. Importantly, we found that such effects were maintained throughout the 3-month Follow-up phase, even though no financial contingencies were offered and no TAC monitoring devices were worn.
The current study extends previous research using contingency management procedures by focusing on reductions in heavy or "at-risk" drinking—i.e., a "harm reduction" rather than abstinence approach. Like previous studies in alcohol and drug addiction, our contingency management procedure effectively reduced overall drinking (e.g., Alessi et al., 2007; Alessi and Petry, 2013; Barnett et al., 2011; Dougherty et al., 2014; Hagedorn et al., 2013; Hunt and Azrin, 1973; McDonell et al., 2012; Miller et al., 1974; Petry et al., 2005; Petry et al., 2000; Skipper et al., 2014), and as a result, individuals are less likely to experience negative outcomes (e.g., negative health and social problems, accidents, etc.) associated with heavy drinking (Kline-Simon et al., 2013). Notably, achievement of the contingency criteria was assessed solely through the objective monitoring of TAC levels. This alcohol monitoring system is an objective and validated measure of high-level (i.e., ≥ 5 drinks) consumption used by the court system for monitoring drinking in DUI offenders (Marques and McKnight, 2007).
The use of transdermal alcohol monitors in an intervention is a relatively new idea. Most contingency management studies have required abstinence verifiable only by testing for alcohol or metabolite levels in breath, blood, or urine samples. Because the window of detection for alcohol consumption using these biomarker methods is often brief, these methods require frequent clinic visits, which tend to be burdensome. In contrast, the SCRAM device measures TAC levels continuously 24 hours per day, simplifying the objective verification of contingency requirements.
A few recent studies have sought to address issues revolving around the timing and burden of alcohol monitoring through technology. For example, Alessi and Petry (2013) used video cellular phone technology in a study that randomly assigned non-treatment-seeking participants (n = 30) to either of two conditions: 1) monitoring alone, whereby compensation was only provided for submitting breath alcohol concentration (BrAC) videos regardless of results; and 2) a contingency management procedure, whereby participants received escalating vouchers for BrAC test results <0.02 g/dl. In both conditions, participants were required to give random breath samples three times a day for four weeks. Participants in the contingency management condition had a greater percentage of negative breath alcohol tests (87.1%) and longer durations of abstinence (16.8 days) compared to monitoring alone (66.9% and 5.9 days, respectively). Limitations of that procedure include the ability to collect breath samples only periodically and between 7 a.m. and 11 p.m., potentially missing the detection of off-hour alcohol consumption. In another report, Skipper et al. (2014) conducted a pilot study to examine the effectiveness and tolerability of using cellular photo-digital breathalyzer devices to monitor alcohol use. Such devices transmit results over a cellular network, and capture facial images to identify the donor mid-exhalation each time breath alcohol is sampled. Twelve social drinkers were monitored for 5 weeks and results were compared with weekly urine tests of ethyl glucuronide, an ethanol metabolite with a detection window of 1–2 days (Wojcik and Hawthorne, 2007). Participants also provided breath samples at predetermined times (after breakfast, lunch, dinner, and before bedtime). Participants preferred the cellular photo-digital breathalyzer device because of convenience and the minimal time required for testing.
Compliance rates were high (96% for breath testing and 92% for urinary ethyl glucuronide). Cellular photo digital breathalyzer devices detected 98.8% of the 84 self-reported drinking episodes. Though this technology appears to be a promising new way of monitoring alcohol use, it is still relatively intrusive, requiring frequent breath samples collected by a hand-held device. In addition, undetected drinking can still occur after the hours of prescribed monitoring.
In the current study, we used the TAC ankle monitors as a noninvasive, passive monitoring system. Transdermal alcohol monitoring devices detect alcohol excreted through the skin (Swift, 2003) and provide a continuous measure of transdermal alcohol concentration over time (Swift, 2000, 2003). These monitors are a passive, less burdensome, method of detecting alcohol consumption (although their visibility can be undesirable for some wearers), and their use is increasingly more common in clinical research (e.g., Ayala et al., 2009; Barnett et al., 2011; Dougherty et al., 2012; 2014a, Hill-Kapturczak et al., 2014b). Two previous studies (Barnett et al., 2011; Dougherty et al., 2014), one of which was our pilot study, used transdermal devices to monitor alcohol consumption during contingency management procedures. The study by Barnett and colleagues (2011) used a small sample (n = 13) of heavy drinkers; most had either a lifetime diagnosis of alcohol dependence or alcohol abuse, and all were interested in reducing or stopping drinking. Participants were paid (on an escalating scale) daily for two weeks if they met “abstinence criteria” each day. Compared to a 1-week observation phase, assessed by both self-report and transdermal alcohol monitoring, abstinence increased, but when participants drank, they did not reduce the number of drinks they consumed.
In contrast to the Barnett study, the present study included a large sample of at-risk or heavy drinkers with no declared interest in reducing their drinking, who were monitored for a one-month baseline followed by three months of contingency management. We found that during the contingency period, participants drank fewer days per week and had fewer instances of heavy or moderate risky drinking, accompanied by more days per week of low-level or safe drinking. Our participants drank more drinks per drinking day (8.0 for men and 6.5 for women in the current study, versus the 5.9 drinks per drinking day that Barnett's study reported for men and women collectively), though they also had fewer days per month of risky drinking (about 36.8% among men and 36.3% among women in the current study, versus about 70.3% of days in the past month for men and women collectively in Barnett's study). Furthermore, we used a lower incentive ($50) paid less frequently (weekly), compared to the daily payments on an escalating scale ($77 maximum per week) used by Barnett and colleagues (2011).
These results are consistent with and extend our earlier pilot study (Dougherty et al., 2014). In that study, 26 non-treatment-seeking heavy drinkers were randomly assigned to one of two contingency exposure sequences. We compared effects of $25 and $50 contingency conditions for one month with each other and with a non-contingency control condition. The larger incentive ($50) was more effective in reducing heavy drinking during the weekend. Under both conditions, participants not only decreased the number of heavy and moderate levels of drinking days and increased the number of days of low levels of drinking, they also had more days of abstinence. Among participants who began in the $25 contingency condition, improvements persisted in the non-contingency condition. The current study extends this research by examining a larger number of participants, for a longer contingency management period, and included a 3-month follow-up period to demonstrate the persistence of lowered drinking even after removal of the contingency. Similar to our previous research (Dougherty et al., 2014), we found that the effects of contingency management persisted into a follow-up period (in comparison to an observation period).
We acknowledge some limitations in the current study. First, our participants were not seeking treatment and not dependent on alcohol. Although we demonstrated that contingencies are associated with a reduction in heavy drinking even after incentives were removed, this effect may be less robust in a more severely affected population. Our study should be replicated with a sample whose drinking problems are greater. Second, because this was not a clinical trial, we did not randomize participants to treatment groups. That is, everyone who entered the study was given the opportunity to receive financial incentives to reduce their drinking and maintain it at no or low levels of drinking. Future studies should compare contingency management to a control group to further distinguish drinking outcomes. Third, our follow-up data were self-reported and we cannot be certain that the data were reliable once TAC monitors were removed. Future studies should use transdermal alcohol monitoring to assess drinking behavior after the removal of contingency management, as we did in our pilot study. Finally, participants received a relatively high level of reinforcement ($50 per week over a 12-week intervention period), and although this rate of reinforcement is lower than many contingency management studies (e.g., Alessi & Petry, 2013; Barnett et al., 2011; Higgins et al., 2000; 2006), the magnitude of this reinforcement is likely greater than feasibly could be implemented in clinical practice. We have, however, shown that lower levels of reinforcement ($25 per week) can effectively reduce heavy drinking in a similar population (Dougherty et al., 2014). 
Future research should aim to systematically investigate the magnitude of reinforcement and duration of intervention required to effectively reduce alcohol consumption in different populations (i.e., at-risk, dependent), as well as examine the potential for other non-monetary forms of reinforcement that could be adopted in clinical practice. We believe that the relatively low drop-out rate (18%) in the present study may have been due to the level of compensation participants received in addition to the potential reinforcements that they could earn and may not be representative of drop-out rates expected in other settings. Nonetheless, the results of this study highlight the potential cost/benefit utility of employing reinforcement contingencies to promote healthier behaviors in heavy drinkers that could endure after interventions are completed.
Research reported in this publication was supported by the National Institute on Alcohol Abuse and Alcoholism [R01AA14988] and the National Institute of Drug Abuse [T32DA031115] of the National Institutes of Health.
The authors appreciate the supportive functions performed by our valued colleagues, Sharon Cates, Cameron Hunt, Krystal Shilling, Phillip Brink, and Norma Ketchum. The National Institute of Drug Abuse [T32DA031115] supported postdoctoral training for Drs. Lake and Karns. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Dr. Dougherty also gratefully acknowledges support from a research endowment, the William and Marguerite Wurzbach Distinguished Professorship. None of the authors has conflicting interests concerning this manuscript.