Altered impulse control has been implicated in the shaping of habitual alcohol use and eventual alcohol dependence. We sought to identify the neural correlates of altered impulse control in 24 abstinent patients with alcohol dependence (PAD), as compared to 24 demographically matched healthy control subjects (HC). In particular, we examined risk taking and deficits in cognitive control as neural endophenotypes of alcohol dependence. To this end, functional magnetic resonance imaging (fMRI) was conducted during a stop signal task (SST), in which a tracking procedure was used to elicit errors in the participants. The paradigm allowed trial-by-trial evaluation of response inhibition, error processing, and post-error behavioral adjustment. Furthermore, by requiring subjects to be both fast and accurate, the SST also introduced a distinct element of risk, which participants may or may not avert during the task. Brain imaging data were analyzed with Statistical Parametric Mapping in covariance analyses accounting for group disparity in general performance. The results showed that, compared to HC, PAD demonstrated longer go trial reaction time (RT) and a higher stop success rate (SS%). HC and PAD were indistinguishable in stop signal reaction time (SSRT) and post-error slowing (PES). In a covariance analysis accounting for go trial RT and SS%, HC showed greater activity than PAD in the left dorsolateral prefrontal cortex when subjects with short and long SSRT were contrasted. By comparing PAD and HC directly during stop errors (SE), as contrasted with SS, we observed greater activity in PAD in bilateral visual and frontal cortices. Compared to HC, PAD showed less activation of the right dorsolateral prefrontal cortex during PES, an index of post-error behavioral adjustment. Furthermore, PAD who reported higher alcohol urge at the time of the fMRI were particularly impaired in dorsolateral prefrontal activation, as compared to those with lower alcohol urge.
Finally, compared to HC subjects, PAD showed less activity in cortical and subcortical structures including putamen, insula and amygdala during risk-taking decisions in the SST. These preliminary results provided evidence for altered neural processing during impulse control in PAD. These findings may provide a useful neural signature in the evaluation of treatment outcomes and development of novel pharmacotherapy for alcohol dependence.
Alcohol dependence involves a wide range of serious medical and non-medical conditions such as alcohol-related liver disease, violence and traffic accidents. Individuals as well as society as a whole suffer a great deal from this serious mental illness. Understanding the psychological and neural processes leading to heavy, habitual and eventually uncontrollable use of alcohol is thus an important public health issue and poses great challenges to addiction neuroscience.
A number of investigators have hypothesized a critical association between drug and alcohol addiction and deficits in impulse control (Ernst and Paulus, 2005; Everitt and Robbins, 2005; Goldstein and Volkow, 2002; Kalivas and Volkow, 2005; Moeller et al., 2001; Volkow and Li, 2005). Broadly defined in the literature, impulse control comprises two distinguishable psychological dimensions. On one hand, impulse control implies the ability to avoid risk and to curb excessive desire to seek sensation (Finn, 2002; Kelley et al., 2004; Kreek et al., 2005; Verdejo-García et al., 2008). On the other hand, impulse control implies cognitive operations that allow individuals to change behaviors in a dynamic fashion on the basis of advance information or feedback derived from monitoring ongoing behavior (Botvinick et al., 2001; Carter et al., 1999; Kok et al., 2006; Ridderinkhof et al., 2004). This latter capability has specifically been referred to as cognitive control. By setting goals, inhibiting habitual acts, and monitoring performance, cognitive control affords the behavioral flexibility to maneuver in a changing environment and optimize goal-directed actions (Dalley et al., 2004). Cognitive control thus serves to maintain homeostasis by a process that accommodates changing states of the decision maker (Paulus, 2007). It has been hypothesized that disrupted impulse control, along with heightened salience attributed to alcohol, could lead to a vicious cycle of withdrawal, craving, bingeing and intoxication (Goldstein and Volkow, 2002).
In this study we employed the stop signal task (SST) as a behavioral proxy to explore whether neural processes associated with impulse control are altered in patients with alcohol dependence (PAD). The SST is widely used in the cognitive and imaging neuroscience literature (Logan, 1994; Logan and Cowan, 1984). In a “tracking” SST, in which the difficulty of the stop trials was adjusted according to participants’ performance, we delineated the neural correlates of response inhibition, error processing, and post-error behavioral adjustment, which are key component processes of cognitive control (Li et al., 2006a; Li et al., 2008a; Li et al., 2008b; Li et al., 2008c). Furthermore, by requiring participants to be both fast and accurate, the SST introduced a component of risk, which participants may avert by slowing down, or ignore by responding “as usual,” during go trials. We have observed greater activity in a number of cortical and subcortical structures including the amygdala when participants took risk as compared to when they avoided risk (Li et al., In Press). Thus, with an SST that allowed us to examine the neural processes of cognitive control and risk taking, we sought to establish a neural signature of impaired impulse control in PAD.
Twenty-four abstinent patients with alcohol dependence (PAD, 6 women) and 24 age- and education-matched healthy control subjects (HC, 6 women) participated in the study (Table 1). PAD met criteria for current alcohol dependence, as diagnosed by the Structured Clinical Interview for DSM-IV (First et al., 1995). PAD did not meet current DSM-IV criteria for dependence on other psychoactive substances, other than nicotine, and were also excluded if they met current criteria for any DSM-IV Axis I psychiatric disorder. Recent use of other illicit substances was ruled out by urine toxicology screens upon admission. Women were excluded from the study if they were using any form of birth control or were peri- or post-menopausal. In addition, individuals with current depressive or anxiety symptoms requiring treatment, or currently being treated for these symptoms, were excluded as well. PAD were drug-free while staying in an inpatient treatment unit prior to the current fMRI study. All subjects were physically healthy with no major medical illnesses or current use of prescription medications. None of them reported a history of head injury or neurological illness. The Human Investigation Committee at Yale University School of Medicine approved all study procedures, and all subjects signed an informed consent prior to study participation.
PAD were assessed for their alcohol urge with the Alcohol Urge Questionnaire (AUQ; Bohn et al., 1995; Fox et al., 2008). The AUQ measures current alcohol urge on a Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree), with a total of 8 items addressing desire for a drink, expectation of positive affect from drinking, and inability to avoid drinking if alcohol were available. PAD were assessed with the AUQ every 3−4 days during their inpatient stay. PAD participated in the fMRI study between 11 and 17 days (average ≈ 2 weeks) after admission.
We employed a simple reaction time (RT) task in this stop-signal paradigm (Fig. 1). There were two trial types: “go” and “stop,” randomly intermixed. A small dot appeared on the screen to engage attention and eye fixation at the beginning of a go trial. After a randomized time interval (fore-period) anywhere between 1 and 5 s, the dot turned into a circle, prompting the subjects to quickly press a button. The circle vanished at button press or after 1 s had elapsed, whichever came first, and the trial terminated. A premature button press prior to the appearance of the circle also terminated the trial. Three quarters of all trials were go trials. In a stop trial, an additional “X,” the “stop” signal, appeared after the go signal. The subjects were told to withhold button press upon seeing the stop signal. Likewise, a trial terminated at button press or when 1 s had elapsed since the appearance of the stop signal. The stop trials constituted the remaining one quarter of the trials. There was an inter-trial-interval of 2 s.
The time interval between the stop and the go signals (the stop-signal delay, SSD) started at 200 ms and varied from one stop trial to the next according to a staircase procedure: if the subject succeeded in withholding the response, the SSD increased by 64 ms, making it more difficult to succeed in the next stop trial; conversely, if they failed, the SSD decreased by 64 ms, making the next stop trial easier. With the staircase procedure, a “critical” SSD could be computed that represents the time delay required for the subject to succeed in withholding a response in half of the stop trials (Levitt, 1970). One way to understand the stop signal task is in terms of a horse race model, with a go process and a stop process racing toward a finishing line (Logan, 1994). The go process prepares and generates the movement while the stop process inhibits movement initiation: whichever process finishes first determines whether a response will be initiated or not. Importantly, the go and stop processes race toward the activation threshold independently. Thus, the time required for the stop signal to be processed so that a response is withheld (i.e., the stop signal reaction time, SSRT) can be computed on the basis of the go trial RT distribution and the probability of successful inhibition at different delays between the go and stop signals. This is done by estimating the critical SSD at which a response can be correctly stopped in approximately 50% of the stop trials. Under the assumptions of this horse-race model, the SSRT can then be computed for each individual subject by subtracting the critical SSD from the median go trial RT. Generally speaking, the SSRT is the time required for a subject to cancel the movement after seeing the stop signal; a long SSRT indicates poor response inhibition.
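The staircase update and the horse-race SSRT estimate described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the function names are our own, and approximating the critical SSD by the mean SSD visited by the staircase is one common convention, assumed here for simplicity.

```python
from statistics import mean, median

def update_ssd(ssd, stop_success, step=64):
    """One staircase step (ms): a successful stop lengthens the next
    stop-signal delay by 64 ms (harder); a failed stop shortens it
    by 64 ms (easier)."""
    return ssd + step if stop_success else ssd - step

def estimate_ssrt(go_rts, staircase_ssds):
    """Horse-race estimate: SSRT = median go RT - critical SSD.
    The critical SSD (the delay yielding ~50% successful stops) is
    approximated here by the mean SSD visited by the staircase --
    an illustrative choice."""
    return median(go_rts) - mean(staircase_ssds)

# Hypothetical numbers (ms): staircase starting at 200 ms
ssd = 200
ssd = update_ssd(ssd, stop_success=True)   # 264: harder next stop trial
ssd = update_ssd(ssd, stop_success=False)  # 200: easier again
ssrt = estimate_ssrt([450, 480, 500], [200, 264, 200])
```

Under this convention, a subject with a slower go RT but the same critical SSD would show a longer SSRT, i.e., poorer response inhibition.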
Subjects were instructed to respond to the go signal quickly while keeping in mind that a stop signal could come up in a small number of trials. Prior to the fMRI study each subject had a practice session outside the scanner. Each subject completed four 10-min runs of the task with the SSD updated manually across runs. Depending on the actual stimulus timing (e.g., trials varied in fore-period duration) and speed of response, the total number of trials varied slightly across subjects in an experiment. With the staircase procedure we anticipated that the subjects would succeed in withholding their response in approximately 50% of the stop trials. This was thus an event-related fMRI study, with the go and stop trials randomly jittered to improve the efficiency of the study design.
We computed the fore-period effect as an index of motor preparedness during the SST (Li et al., 2005; Li et al., 2006a; Tseng and Li, 2008). Briefly, a longer fore-period is associated with a faster response (Bertelson and Tisseyre, 1968; Woodrow, 1914). Thus, the RT of go trials with a fore-period between 3 and 5 s was compared to that of go trials with a fore-period between 1 and 3 s, and the effect size of the RT difference was defined as the fore-period effect. It is also known that in a reaction time (RT) task the RT of a correct response is prolonged following an error, compared to other correct responses, and this prolonged RT is thought to reflect cognitive processes involved in error monitoring (Rabbitt, 1966). We thus computed the RT difference between the go trials that followed a stop error and those that followed another go trial, and termed this RT difference “post-error slowing” (Hajcak et al., 2003; Li et al., 2008a).
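The post-error slowing index just defined can be computed from a trial sequence as below. The trial encoding, with an outcome code and an RT per trial, is a hypothetical layout of ours, not the study's data format.

```python
from statistics import mean

def post_error_slowing(trials):
    """Mean RT of go trials following a stop error (SE) minus mean RT
    of go trials following another go (G) trial.
    `trials`: sequence of (outcome, rt) pairs with outcome in
    {'G', 'F', 'SS', 'SE'}; rt is None when no response was made."""
    post_se, post_g = [], []
    for (prev, _), (cur, rt) in zip(trials, trials[1:]):
        if cur != 'G' or rt is None:
            continue  # only correct go trials enter the comparison
        if prev == 'SE':
            post_se.append(rt)
        elif prev == 'G':
            post_g.append(rt)
    return mean(post_se) - mean(post_g)

# Hypothetical session fragment (RTs in seconds): the go trial after
# the stop error (0.60 s) is slower than the post-go go trial (0.52 s)
seq = [('G', 0.50), ('SE', 0.45), ('G', 0.60), ('G', 0.52)]
pes = post_error_slowing(seq)  # 0.08 s of post-error slowing
```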
Conventional T1-weighted spin echo sagittal anatomical images were acquired for slice localization using a 3T scanner (Siemens Trio, Erlangen, Germany). Anatomical images of the functional slice locations were next obtained with spin echo imaging in the axial plane parallel to the AC-PC line with TR = 300 ms, TE = 2.5 ms, bandwidth = 300 Hz/pixel, flip angle = 60°, field of view = 220 × 220 mm, matrix = 256 × 256, 32 slices with slice thickness = 4mm and no gap. Functional, blood oxygenation level dependent (BOLD) signals were then acquired with a single-shot gradient echo echo-planar imaging (EPI) sequence. Thirty-two axial slices parallel to the AC-PC line covering the whole brain were acquired with TR = 2,000 ms, TE = 25 ms, bandwidth = 2004 Hz/pixel, flip angle = 85°, field of view = 220 × 220 mm, matrix = 64 × 64, 32 slices with slice thickness = 4mm and no gap. Three hundred images were acquired in each run for a total of 4 runs.
Data were analyzed with Statistical Parametric Mapping version 2 (SPM2, Wellcome Department of Imaging Neuroscience, University College London, U.K.). Images from the first five TRs at the beginning of each run were discarded to allow the signal to reach steady-state equilibrium between RF pulsing and relaxation. Images of each individual subject were first corrected for slice timing and realigned (motion-corrected). A mean functional image volume was constructed for each subject for each run from the realigned image volumes. These mean images were normalized to an MNI (Montreal Neurological Institute) EPI template with affine registration followed by nonlinear transformation (Ashburner and Friston, 1999; Friston et al., 1995a). The normalization parameters determined for the mean functional volume were then applied to the corresponding functional image volumes for each subject. Finally, images were smoothed with a Gaussian kernel of 10 mm at Full Width at Half Maximum. The data were high-pass filtered (1/128 Hz cutoff) to remove low-frequency signal drifts.
Four main types of trial outcome were distinguished: go success (G), go error (F), stop success (SS), and stop error (SE) trial (Fig. 1). A statistical analytical design was constructed for each individual subject, using the general linear model (GLM) with the onsets of go signal in each of these trial types convolved with a canonical hemodynamic response function (HRF) and with the temporal derivative of the canonical HRF and entered as regressors in the model (Friston et al., 1995b). Realignment parameters in all 6 dimensions were also entered in the model. Serial autocorrelation was corrected by a first-degree autoregressive or AR(1) model. The GLM estimated the component of variance that could be explained by each of the regressors. We constructed for each individual subject statistical contrasts: SS > SE and SE > SS.
In a second GLM, G, F, SS, and SE trials were first distinguished. G trials were divided into those that followed a G (pG), SS (pSS) and SE (pSE) trial. Furthermore, pSE trials were divided into those that increased in RT (pSEi) and those that did not increase in RT (pSEni), to allow the isolation of neural processes involved in post-error behavioral adjustment (Li et al., 2008a). To determine whether a pSE trial increased or did not increase in RT, it was compared to the pG trials that preceded it in time during each session. The pG trials that followed the pSE trial were not included for comparison because the neural/cognitive processes associated with these pG trials occurred subsequent to and thus could not have a causal effect on the pSE trial (Li et al., 2008a). We constructed for each individual subject two contrasts: SS > SE, to compare with the first GLM and verify the model; and pSEi vs. pSEni, to identify activations associated with post-error slowing.
In this second GLM, pG trials were also divided into those that increased in RT (pGi) and those that did not increase in RT (pGni, Li et al., In Press). Similarly, to determine whether a pG trial increased or did not increase in RT, it was compared to the pG trials that preceded it in time during each session. We contrasted pGni>pGi (i.e., post-go speeding > post-go slowing) for each individual subject to identify activations associated with risk taking decisions in the SST.
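The trial classification used in this second GLM can be sketched as follows. The comparison rule here, in which each post-error or post-go go trial is compared against the mean RT of the pG trials that preceded it in the session, is one plausible reading of the procedure described above, not necessarily the authors' exact implementation.

```python
def classify_go_trials(trials):
    """Label each go trial by its predecessor ('pG', 'pSS', 'pSE'),
    splitting pSE into 'pSEi' (RT increased relative to preceding pG
    trials) vs 'pSEni', and pG into 'pGi' vs 'pGni'. Only pG trials
    occurring earlier in the session enter the comparison, since later
    trials cannot causally affect the current one.
    `trials`: sequence of (outcome, rt) pairs; returns one label (or
    None) per trial from the second trial onward."""
    labels, pg_rts = [], []
    for (prev, _), (cur, rt) in zip(trials, trials[1:]):
        if cur != 'G' or rt is None:
            labels.append(None)
            continue
        baseline = sum(pg_rts) / len(pg_rts) if pg_rts else None
        if prev == 'G':
            if baseline is None:
                labels.append('pG')  # no earlier pG trial to compare to
            else:
                labels.append('pGi' if rt > baseline else 'pGni')
            pg_rts.append(rt)
        elif prev == 'SS':
            labels.append('pSS')
        elif prev == 'SE':
            if baseline is None:
                labels.append('pSE')
            else:
                labels.append('pSEi' if rt > baseline else 'pSEni')
        else:  # prev == 'F' (go error): not classified here
            labels.append(None)
    return labels
```

In this sketch, pGni trials (post-go speeding) would index risk taking and pGi trials (post-go slowing) risk aversion, matching the contrast pGni > pGi described above.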
The SS and SE trials were identical in stimulus condition, with SS trials involving inhibition success and SE trials involving inhibition failure. The contrast SS > SE thus engaged processes related to response inhibition and was used in the random effect analysis (Li et al., 2006a). With the SSRT to index response inhibition, PAD and HC were each grouped into those with short (n=12) and long (n=12) SSRT on the basis of a median split, following the rationale of the race model (Li et al., 2006a; Logan, 1994). In a 2×2 analysis of variance (ANOVA), the contrasts HC > PAD (short > long SSRT) allowed us to identify structures showing greater activation in HC as compared to PAD during response inhibition and vice versa.
We compared PAD and HC using the contrast SE > SS in a covariance analysis accounting for go trial RT and SS%.
We compared PAD and HC using the contrast pSEi > pSEni in a covariance analysis accounting for go trial RT and SS%.
We compared PAD and HC using the contrast pGni > pGi in a covariance analysis accounting for go trial RT and SS%.
In addition to voxelwise whole brain exploration, we also performed region of interest (ROI) analyses based on our previous findings (Li et al., 2006a; Li et al., 2008a; Li et al., 2008b; Li et al., In Press). We used MarsBaR (Brett et al., 2002; http://marsbar.sourceforge.net/) to compute for each individual subject the effect size (t-statistic) of activity change for functional ROIs derived from our published studies. The effect size, rather than the mean difference in brain activity, was used in order to account for individual differences in the variance of the mean. These ROIs included two cortical regions related to response inhibition (Li et al., 2006a), the dorsal medial frontal cortex (dmFC, x=−4, y=32, z=51) and the rostral anterior cingulate cortex (rACC, x=−8, y=35, z=19), as well as the left caudate head (Li et al., 2008c). Seven regions related to error processing were also designated as ROIs (Li et al., 2008b): dorsal anterior cingulate cortex (dACC, x=−4, y=16, z=44) extending to the supplementary motor area (SMA); cuneus including retrosplenial cortex (x=16, y=−64, z=8); thalamus (x=−12, y=−16, z=8); left insula probably including inferior frontal cortex (x=−48, y=8, z=−8); superior frontal and precentral gyri (x=−36, y=−8, z=52); superior temporal gyrus (x=−48, y=−28, z=24); and right insula (x=44, y=16, z=0). One region related to post-error slowing was identified as an ROI (Li et al., 2008a): ventrolateral prefrontal cortex (VLPFC, x=44, y=24, z=−4; BA 47). Finally, two regions related to risk taking were identified as ROIs (Li et al., In Press): amygdala (x=−16, y=−4, z=−16) and the posterior cingulate cortex (PCC, x=−4, y=−40, z=44).
Table 2 summarizes the stop signal performance for PAD and HC. Compared to HC, PAD were significantly slower in median go trial RT, suggesting that they adopted a more conservative response strategy. PAD also showed a higher stop success rate, compared to HC. These results suggested that, compared to HC, PAD exercised greater attention in monitoring for the stop signal. This discrepancy in attentional monitoring thus needed to be accounted for when regional brain activities were compared for response inhibition and post-error slowing. The two groups otherwise did not differ in stop signal performance. Furthermore, PAD demonstrated an average of −78 ± 20 ms of post-go speeding in RT and 93 ± 18 ms of post-go slowing in RT, which did not differ from HC (−82 ± 26 ms and 99 ± 25 ms, respectively).
Table 3 summarizes the stop signal performance separately for short and long SSRT group each for PAD and HC. PAD and HC showed near-trend and trend difference in median go trial RT and stop success rate.
Other analyses indicated that performance of both PAD and HC was well tracked by the staircase procedure: both PAD and HC subjects succeeded in approximately half of the stop trials, and both showed a significant linear correlation between the RT of stop error trials and the stop signal delay (p's<0.005, 0.446<R's<0.941, Pearson correlation; Logan and Cowan, 1984).
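Both tracking checks reported here can be reproduced from per-trial records of the stop trials. The data layout below is hypothetical, and the Pearson correlation is computed by hand from its definition to keep the sketch dependency-free.

```python
from statistics import mean, stdev

def tracking_checks(stop_trials):
    """`stop_trials`: (ssd, rt) pairs, one per stop trial, with rt None
    when the response was successfully withheld. Returns the stop
    success rate and the Pearson r between stop-error RT and SSD."""
    successes = sum(1 for _, rt in stop_trials if rt is None)
    rate = successes / len(stop_trials)
    # Stop-error trials only: responses that escaped inhibition
    errors = [(ssd, rt) for ssd, rt in stop_trials if rt is not None]
    ssds, rts = zip(*errors)
    mx, my = mean(ssds), mean(rts)
    cov = sum((x - mx) * (y - my) for x, y in zip(ssds, rts))
    r = cov / ((len(ssds) - 1) * stdev(ssds) * stdev(rts))
    return rate, r

# Hypothetical stop trials (SSD in ms, RT in ms): longer delays
# should yield slower stop-error responses under the race model
trials = [(200, None), (264, 500), (200, None), (264, 520), (328, 600)]
rate, r = tracking_checks(trials)  # rate near 0.5, r strongly positive
```

A stop success rate near 50% and a positive SSD-to-error-RT correlation are exactly the signatures that the staircase is tracking performance as the race model predicts.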
We applied the same threshold of p<0.001, uncorrected, with an extent of 5 voxels, to all second-level whole brain analyses of imaging data. In ROI analyses, a threshold of p<0.05, uncorrected, was first set for whole brain exploration, and the results were reported with small volume correction at p<0.001, uncorrected.
PAD and HC were compared for the contrast SS>SE, in a group (PAD vs. HC) by SSRT (short vs. long) ANOVA. The results showed greater activation in the short as contrasted to long SSRT group in HC compared to PAD in the left dorsolateral prefrontal cortex (DLPFC, x=−48, y=24, z=36, Z=3.70, 8 voxels, Fig. 2). Since the two SSRT groups showed near-trend or trend differences in go trial RT and SS%, we performed another ANOVA covaried for these two variables. The results were essentially identical: HC showed greater left DLPFC activity (x=−48, y=24, z=36, Z=3.57, 6 voxels). PAD did not show greater regional brain activity than HC for the same contrast. In ROI analyses, PAD and HC did not differ in the dmFC, rACC, left caudate head masks in the same ANOVA.
Contrasting stop error (SE) with stop success (SS) trials, PAD showed increased activity compared to HC in a number of brain regions including bilateral visual and frontal cortices in a 2-sample t test (Fig. 3; Table 4). Conversely, no brain regions showed greater activity in HC, as compared to PAD, in the same analysis. In the ROI analysis for error processing, PAD and HC did not show differential activity in any of the seven masks identified from our earlier studies.
Compared to HC, PAD showed less activation of the right dorsolateral prefrontal cortex (x=44, y=32, z=40; Z=3.62, 14 voxels; Fig. 4) during post-SE go trials with RT increase (pSEi) contrasted with post-SE go trials without RT increase (pSEni); i.e., post-error slowing (PES). No brain regions showed greater activity during PES in PAD as compared to HC. In ROI analysis, we compared PAD and HC for pSEi>pSEni on the basis of small volume correction for the right VLPFC mask. The results showed that PAD and HC did not differ in activation for this contrast in right VLPFC.
PAD and HC were compared for the contrast: post-go go trials without RT increase (pGni) > post-go go trials with RT increase (pGi). The results showed decreased activation in a number of cortical and subcortical structures including the putamen and insula in PAD, compared to HC (Fig. 5; Table 5). In ROI analyses, we compared PAD and HC for pGni>pGi for a mask of the amygdala and the posterior cingulate cortex (PCC). The results showed that, compared to HC, PAD demonstrated decreased activation in the amygdala (p<0.025, corrected for FWE, Z=2.64, x=−20, y=−4, z=−16) and in the PCC (p<0.030, corrected for FWE, Z=2.62, x=−8, y=−36, z=44).
Linear regression analyses indicated that, in PAD, none of the regional activities (left DLPFC during response inhibition, right DLPFC during post-error slowing, amygdala or PCC/precuneus during risk taking, or any of the regions identified during error processing) correlated with years of alcohol use or the total amount of alcohol used in the 90 days prior to admission (−0.122<r's<0.025; 0.122<p's<0.571).
PAD showed a rating of 20.3 ± 15.0 (mean ± standard deviation; range: 8−50) on alcohol use urge based on the Alcohol Urge Questionnaire on the first day of admission. Alcohol urge decreased over a course of 4−5 weeks of inpatient stay. Because of varying alcohol urge rating across individuals, we normalized the ratings to individual means. Functional MR scans were performed between day 11 and 17 after admission, a period when the average relative urge was under 1.0. However, individual PAD varied in alcohol urge, with 5 PAD showing a relative urge greater than 1.0 (high urge group) and 19 PAD showing a relative urge less than 1.0 (low urge group) at the time when fMRI was conducted (1.18 ± 0.16 versus 0.86 ± 0.14, p<0.001, 2-sample t test).
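The normalization to individual means described here amounts to the following. The grouping threshold of 1.0 comes from the text; the example ratings and the function names are hypothetical.

```python
from statistics import mean

def relative_urge(auq_ratings, rating_at_scan):
    """Normalize a subject's AUQ rating at the time of the fMRI scan
    to that subject's own mean rating across the inpatient stay."""
    return rating_at_scan / mean(auq_ratings)

def urge_group(rel_urge, threshold=1.0):
    """Assign the 'high' vs 'low' urge group at the threshold used in
    the text (relative urge of 1.0)."""
    return 'high' if rel_urge > threshold else 'low'

# Hypothetical subject: ratings decline over the stay; scan-time
# rating of 18 against a personal mean of 20 gives a relative urge
# of 0.9, placing the subject in the low urge group
rel = relative_urge([30, 24, 20, 18, 12, 16], 18)  # 0.9
group = urge_group(rel)                            # 'low'
```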
We compared these two groups of PAD for the effect size of activity changes for the left DLPFC (response inhibition), right DLPFC (post-error slowing), amygdala and PCC (risk taking) that showed differential activity between PAD and HC. Because of multiple comparisons, we guarded against false positive results at an alpha of 0.012. The results showed that, compared to the low urge group, the high urge group showed significantly less activity in the right DLPFC during post-error slowing (effect size: −1.54 ± 0.91 vs. 0.45 ± 1.37, p<0.006; 2-sample t test).
Chronic and heavy alcohol use is known to be associated with a wide range of altered cognitive and affective states (Sullivan and Pfefferbaum, 2005; Sher, 2006). A number of functional imaging studies have examined the neural processes underlying motor dysfunction (Parks et al., 2003), working memory dysfunction (Akine et al., 2007; Caldwell et al., 2005; Schweinsburg et al., 2005; Tapert et al., 2004), cue-elicited craving (Filbey et al., In Press; Grüsser et al., 2004; Heinz et al., 2004; Myrick et al., 2004; Park et al., 2007; Tapert et al., 2004a; Tapert et al., 2003; Tapert et al., 2004b; Wrase et al., 2002; Wrase et al., 2007), altered perceptual detection (Hermann et al., 2007) and affective processing (Heinz et al., 2007; Salloum et al., 2007) in alcohol dependent patients. Some have specifically addressed altered inhibitory control in these patients (Anderson et al., 2005; Karch et al., In Press; Karch et al., 2008; Schweinsburg et al., 2004). For instance, Anderson and colleagues showed that greater blood oxygenation level-dependent (BOLD) response to inhibition during a go/no-go task predicted greater expectancies of cognitive and motor impairment from alcohol in adolescents (Anderson et al., 2005). These results suggested that decreased inhibitory control may contribute to more positive and less negative expectancies, which could eventually lead to problem drinking (Anderson et al., 2005).
The current study assessed impulse control in patients with alcohol dependence (PAD) using the stop signal task. In particular, we attempted to examine cognitive control independent of general task performance. On the basis of our previous findings from the same behavioral paradigm in healthy individuals, we sought to identify altered cerebral processes in PAD during component processes of cognitive control (Li and Sinha, 2008). Overall, our results suggested that PAD showed altered activity in a number of cortical structures during response inhibition, error processing, and post-error behavioral adjustment. In particular, the findings of decreased dorsolateral prefrontal cortical activation during response inhibition and post-error slowing are broadly in accord with previous studies demonstrating altered prefrontal cortical activity in patients with alcohol misuse (Akine et al., 2007; Bowden-Jones et al., 2005; Chanraud et al., 2007; Clark et al., 2007; Dao-Castellana et al., 1998; De Bellis et al., 2005; de Greck et al., In press; Fein et al., 2006; Goldstein et al., 2004; Heinz et al., 2007; Pfefferbaum et al., 2001; Rupp et al., 2006; Schecklmann et al., 2007; Verdejo-García et al., 2006; see also Kopelman, 2008; Sinha and Li, 2007; Scheurich, 2005; Uekermann et al., 2008 for a review). For instance, the finding of decreased left DLPFC activation during response inhibition in PAD as compared to healthy controls (HC) is consistent with an earlier report showing diminished activity in the same brain region in chronic alcoholics during performance of the Stroop test (Dao-Castellana et al., 1998). Our finding of decreased right DLPFC activity during post-error slowing is also consistent with an earlier study of alcoholic patients showing decreased bilateral DLPFC activation during a working memory task (Pfefferbaum et al., 2001).
Altered prefrontal activity has been reported in alcoholic patients in response to alcohol cues and craving (George et al., 2001; Olbrich et al., 2006; Wilson et al., 2004). For instance, PAD showed increased activity in the DLPFC and anterior thalamus in response to alcohol cues as compared to control visual cues (George et al., 2001). Furthermore, transcranial electrical stimulation of the prefrontal cortices appeared to ameliorate alcohol craving in these patients (Boggio et al., 2008). Thus, the current findings of greater changes in prefrontal cortical activity in PAD with higher alcohol urge are broadly consistent with the idea of prefrontal regulation of alcohol craving (Sinha and Li, 2007). On the other hand, craving and cognitive control represent opposing constructs in theories of the regulation of alcohol use behavior. One possibility is that cue-induced prefrontal activation represents an inflow of subcortical activity from the reward and emotion circuits, rather than a signal of top-down executive control, during cue induction paradigms (Boggio et al., 2008). Thus, although the current finding of prefrontal functional changes associated with alcohol urge highlighted a crucial aspect of prefrontal functions, further studies are required to clarify the specific roles of the prefrontal cortices in (the regulation of) alcohol craving.
Previous evoked potential and fMRI studies have reported disrupted error-related brain activity and connectivity associated with alcohol consumption (Holroyd and Yeung, 2003; Meda et al., In Press; Ridderinkhof et al., 2002). For instance, moderate alcohol intake is associated with diminished error-related anterior cingulate activity and failure to initiate post-error behavioral adjustment in young adult social drinkers (Ridderinkhof et al., 2002). The present findings of greater error-related activity thus appeared to characterize a complementary profile of cerebral responses in PAD during a stage of early abstinence. Taken together, a contrasting pattern of decreased prefrontal activity during response inhibition and post-error behavioral adjustment and increased frontal including anterior cingulate activity during errors described our cohort of PAD during the stop signal task. These results add to the evidence that prefrontal cortical function may play a critical role in the shaping of alcohol dependence.
Compared to HC subjects, PAD showed less activation during risk-taking decisions in a number of cortical and subcortical structures. For instance, PAD showed less activation of the medial orbitofrontal cortex (mOFC), an area implicated in prediction error signaling and the detection of contingency change (Blair, 2007). Interestingly, the mOFC (in contrast to the lateral OFC) has also been shown to play a role in processing positive and rewarding information (Liu et al., 2007; Nieuwenhuis et al., 2005; O'Doherty et al., 2001). Thus, compared to HC subjects, PAD might experience post-go speeding, in contrast to post-go slowing, as a less rewarding event. These findings suggest that alcohol dependence is associated with a down-regulation of circuit activity that is “normally” engaged when individuals partake in a risk-taking decision.
Risk taking also activates the parietal cortex and the rostral anterior cingulate cortex (rACC), according to a meta-analysis of imaging studies involving decision making (Krain et al., 2006). Compared to HC subjects, PAD showed less activation of the bilateral parietal cortices and the rACC, suggesting that risk taking is a less salient event for the patients. Furthermore, our finding of decreased amygdala activity during risk-taking decisions in the stop signal task is also consistent with the literature implicating this subcortical structure in alcohol misuse. For instance, alcohol abuse has been associated with impaired amygdala processes during aversive learning (Stephens et al., In Press). Postmortem studies showed altered serotonergic neurotransmission in the amygdala, suggesting dysfunctional affect regulation in chronic alcoholics (Storvik et al., 2007). Taken together, these findings suggest that risk taking, as a distinct dimension of impulse control, does not evoke in PAD the cerebral activity documented in healthy individuals. The stop signal task thus appears to be a useful proxy to examine risk-related behavior as well as cognitive control.
It is important to note a few limitations of the current study. Firstly, although they did not meet criteria for another substance use disorder, many of our PAD had used cocaine or other illicit substances. Given the moderate sample size of the current study, we did not attempt to account for this or other clinical factors, such as a history of trauma and/or mood disorders, that may affect neural measures of impulse control. Secondly, impulse control can be assessed with a number of behavioral tasks other than the stop signal paradigm. In particular, behavioral tasks incorporating an explicit component of reward, such as the delay discounting task, would be of tremendous value in elucidating other aspects of impulse control impairment in alcohol dependence (Bjork et al., 2004; Field et al., 2007; Mitchell et al., 2005; Petry, 2001; Petry et al., 2002; Richards et al., 1999; Takahashi et al., 2007; Vuchinich and Simpson, 1998; see also Bickel et al., 2007 for a review). Thirdly, we conducted the fMRI study during a relatively early stage of abstinence in the PAD. Although the patients were free of symptoms and signs of acute alcohol withdrawal, they might continue to experience evolving alcohol-related mood states such as anxiety. Studies at a later stage of abstinence would thus be warranted to confirm the present findings. Fourthly, because of the small number of women recruited for the study, we did not compare women and men in the current study. However, we have previously noted gender differences in cognitive control during the stop signal task (Li et al., 2006b). It would be important to further explore whether gender differences in brain activation during the stop signal task manifest in relation to alcohol dependence. Finally, PAD overall did not differ from HC in behavioral measures of impulse control; the current results therefore did not establish behavioral deficits in impulse control in PAD.
Future studies with larger samples are required to further pursue this issue.
Despite these limitations, the current findings are, to our knowledge, the first to dissect the component processes of impulse control altered in alcohol dependence within a single behavioral paradigm. We confirmed prefrontal cortical deficits during cognitive control, highlighted a contrasting pattern of error-related activations, and elucidated risk taking as a distinct dimension of impulse control, findings that may serve as useful neural markers of alcohol dependence.
This study was supported by the Yale Interdisciplinary Women's Health Research Scholar Program on Women and Drug Abuse (Mazure), funded by the NIH Office of Research on Women's Health, the Alcoholic Beverage Medical Research Foundation (Li), a Clinician Scientist K12 award in substance abuse research (Rounsaville), and NIH grants P50-DA16556 (Sinha) and R01-DA023248 (Li) to Yale University. This project was also funded in part by the State of Connecticut, Department of Mental Health and Addictions Services.