Real-time fMRI feedback (RTfMRIf) is a developing technique, with unanswered methodological questions. Given a delay of seconds between neural activity and the measurable hemodynamic response, one issue is the optimal method for presentation of neurofeedback to subjects. The primary objective of this preliminary study was to compare the methods of continuous and intermittent presentation of neural feedback on targeted brain activity.
Thirteen participants performed a motor imagery task and were instructed to increase activation in an individually defined region of left premotor cortex using RTfMRIf. The fMRI signal change was compared between real and false feedback for scans with either continuous or intermittent feedback presentation.
More individuals were able to increase their fMRI signal with intermittent feedback, while some individuals had decreased signal with continuous feedback. The evaluation of feedback itself activated an extensive amount of brain regions, and false feedback resulted in brain activation outside of the individually defined region of interest.
As implemented in this study, intermittent presentation of feedback is more effective than continuous presentation in promoting self-modulation of brain activity. Furthermore, it appears that the process of evaluating feedback involves many brain regions and can be isolated using intermittent presentation.
“Real-time” functional MRI (RTfMRI) is used to describe the analysis of data while scans are being acquired, as opposed to the more common approach of analyzing data at some time following scanning. It has been proposed that such real-time analysis may be useful for quality monitoring, for brain-computer interfaces, and for neurofeedback [1-5]. RTfMRI feedback (RTfMRIf) provides individuals with neurofeedback regarding their own brain function, thus theoretically allowing a subject or patient to dynamically self-manipulate brain activity during mental processes. There are a number of proposed research and clinical applications of RTfMRIf [4, 6], yet fundamental questions surrounding the optimal procedures for RTfMRIf have not been systematically explored. Such questions include how to account for scanner signal drift and physiologic noise over time during a session, how best to select and quantify the signal to feed back, and, perhaps most important, how best to provide the feedback to the subject [1-4, 7-9].
A variety of approaches have been used to present RTfMRI feedback, such as display of whole brain activity, verbal feedback [7, 11], a scrolling graph display [8, 12], visual scales [13-14], and combinations of feedback display approaches [6, 15]. The first published report of RTfMRIf used intermittent feedback, updating a functional map after each rest-task block. Following EEG feedback findings [8, 16], many RTfMRIf studies have used continuous feedback, in which the visual display is updated after each acquired volume [6, 8, 12-13, 15]. It is important to note that there are temporal differences between EEG and fMRI measurements of brain activity. The sampling rate of EEG (~100 samples/second) is orders of magnitude faster than that of fMRI (~0.5 samples/second). Also, the EEG signal is tightly linked in time to neural activity, while fMRI measures a hemodynamic response that follows seconds after neural activity. The aim of this study was to directly compare an intermittent versus a continuous approach to providing feedback with RTfMRI, to test whether the mode of feedback presentation matters and to aid our group and others in future RTfMRIf study design.
Continuous feedback theoretically may have some advantages. The more feedback that is given, the more opportunities there are to modify thoughts and brain activity to best manipulate brain function. Also, continuous feedback may provide greater interest or engagement in the feedback paradigm and ensure greater attention. However, there may be some disadvantages to continuous feedback. Given the slow and variable hemodynamic response measured by fMRI, it may be challenging to link feedback with thoughts that occurred several seconds prior. Instructions and training about the delay, along with scrolling graphs, have been employed to deal with this challenge [6, 12-13]. Additionally, noise in the fMRI signal is typically handled by filtering and signal averaging over many time points; continuous feedback must therefore employ non-traditional approaches to prevent noise from impacting each update [2-3]. Perhaps most importantly, the visual attention and cognitive load of evaluating feedback while simultaneously engaged in the experimental paradigm may be confounding: too much feedback may distract from the task under primary study.
Because of these considerations, intermittent feedback may have some advantages over continuous feedback in RTfMRI neurofeedback procedures. By providing feedback at the end of a block of time, the participant does not need to be aware of any hemodynamic delay and more time points are available for filtering and signal averaging. Furthermore, experimental task performance and the evaluation of feedback are separable in time (and can be more concretely isolated for further whole-brain analysis).
In this study, we directly compared a continuous and an intermittent approach to providing RTfMRI feedback in a movement imagery task. Our primary hypothesis was that intermittent RTfMRIf would be more effective for increasing brain function in a defined region of interest (ROI) than would continuous feedback. We further aimed to explore whole brain differences evaluating feedback continuously versus intermittently, and we used the intermittent paradigm to characterize brain regions involved in evaluating feedback.
Healthy non-smoking, right-handed volunteers, age 18 - 60, were eligible to participate in this study. After providing informed consent as approved by the Institutional Review Board of the Medical University of South Carolina, participants were screened for conditions contraindicated to MRI scanning, current DSM-IV Axis 1 psychiatric disorders, substance dependence, substance abuse within the past 30 days, and significant medical problems or medications that would interfere with the hemodynamic response.
Study subjects participated in six fMRI scans on the same day. Each scan involved a block-design “imagine movement” task. Participants were instructed to imagine moving their right hand when the word “IMAGINE” was visually displayed (imagined activities such as writing, playing a musical instrument, or completing a sports-related movement were suggested), and to engage in non-movement thoughts when the word “REST” was displayed. A tight, molded foam wrist/hand brace was placed on the participant's right hand, wrist and forearm to limit movement during scanning. Following all scanning, participants were asked to describe the mental strategies that they used during the motor imagery task, were asked to rate their confidence in performing the task, and were asked to rank order the four scans with feedback based on their perceived performance.
The scanning was divided into two sessions for comfort, to allow the participant a break from lying in the scanner (see Table 1). Each of the two scan sessions began with a motor imagery functional scan with no feedback, which was used to individually localize a region of interest (ROI) for generating RTfMRIf in the next two scans. Subjects had four motor imagery scans with feedback (intermittent or continuous feedback, with either real feedback or false feedback; IR, IF, CR, CF). Scan order was randomized with either continuous or intermittent pairs of scans first. Within each pair, scan order was also randomized for real or false feedback. Using this cross-over design to control for order effects, the “no feedback” ROI localizer scan for each participant was followed by two continuous-feedback scans in one session and two intermittent-feedback scans in the other session. One of the two feedback scans within each session used “real feedback” (based on actual fMRI signal) and the other used “false feedback” (fixed randomized feedback not based on actual fMRI signal, used as a control condition). Participants were aware that scans would have different kinds of feedback, but they were not aware that some would be false feedback.
All scans lasted for 280 image volumes (616 seconds). The first 60 volumes were “REST”, allowing time for the operator to configure the real-time software and for drift of MR signal intensities to stabilize. Next “IMAGINE” and “REST” alternated for blocks of 10 volumes. For the scans used to functionally localize the ROI, no feedback was presented (although an inactive thermometer was displayed to orient participants). Feedback was provided to the participants as a thermometer (see Supplemental Figure 1) with 5 increments above baseline and 5 increments below baseline (each increment was equal to 0.4% signal change for real feedback). As activation changed, the thermometer readings moved incrementally both up and down. During feedback scans, participants were instructed to attempt to maximally increase a thermometer display (i.e. switch imagined activities if little or no positive activity; increase imagined activity if some positive activity). For continuous-feedback scans, an active thermometer was shown throughout the “IMAGINE” condition (an inactive thermometer was shown with “REST”), updated every volume. Participants were instructed that there was a delay in the feedback, and it was suggested that a strategy be maintained for several seconds in order to receive relevant feedback. For intermittent-feedback scans, no thermometer was displayed during the “IMAGINE” and “REST” conditions. The display for the last volume of the IMAGINE condition and the first volume of the REST condition were replaced with a thermometer (thus the block design had 3 conditions: 9 volumes “IMAGINE”, 2 volumes “FEEDBACK”, and 9 volumes “REST”).
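The thermometer mapping described above can be sketched as a few lines of Python (a minimal illustration under our own assumptions; the function name and the clipping behavior are ours, not the study's actual display code):

```python
def thermometer_level(percent_signal_change, step=0.4, max_steps=5):
    """Map a percent-signal-change value to a thermometer increment.

    Each increment represents 0.4% signal change (as in the real-feedback
    scans); values beyond the 5-increment display range are clipped.
    Returns an integer in [-5, +5].
    """
    level = round(percent_signal_change / step)
    return max(-max_steps, min(max_steps, level))

print(thermometer_level(0.9))   # 0.9% signal change -> 2 increments above baseline
print(thermometer_level(-3.0))  # beyond the display range -> clipped to -5
```

With this mapping, the full thermometer range corresponds to the 2% maximum signal change configured in the feedback software.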
Scanning was performed using a 3T MRI Trio (Siemens Medical, Erlangen, Germany). Each fMRI scan was acquired using a standard multislice single-shot gradient echo EPI sequence with the following parameters: TR = 2.2 s, TE = 35 ms, 64×64 matrix, parallel imaging factor of 2, 3×3×3 mm voxels, 280 volumes, 36 ascending transverse slices with approximate AC-PC alignment. After each volume was acquired, it was automatically exported in DICOM format from the MRI scanner computer to a separate computer for in-scan processing.
Turbo-BrainVoyager (TBV) 2.0 software (Maastricht, The Netherlands) was used to perform in-scan processing. Real-time pre-statistical processing included motion correction and spatial smoothing using a Gaussian kernel of 8.0 mm FWHM. The no-feedback motor imagery fMRI scans, acquired using the block-design paradigm, provided participant-specific activation to guide ROI placement for the following RTfMRIf acquisitions. Towards the end of the no-feedback motor imagery scan, a 7-slice ROI was selected for activation in the left hemisphere (using a t-value threshold of 3, with a cluster size threshold of 4), visually approximated to be premotor cortex. The following settings were used for generating neurofeedback: number of timepoints averaged to calculate the feedback value = 2 for the continuous feedback paradigm and 6 for the intermittent feedback paradigm; maximum percent signal change of the feedback bar = 2; ROI-GLM baseline enabled for stable baseline estimation; and dynamic ROI enabled using best voxel selection of the top 33% (this effectively creates a sub-ROI that gives better signal extraction from a coarse anatomical ROI selection and tolerates small alignment errors within and between scans).
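The "best voxel selection" option can be illustrated with a short sketch (our own reconstruction of the idea, not Turbo-BrainVoyager code): rank the ROI voxels by their task statistic and keep the top third as a sub-ROI.

```python
import numpy as np

def best_voxel_subroi(t_values, fraction=0.33):
    """Return a boolean mask keeping the highest-t fraction of ROI voxels."""
    n_keep = max(1, int(round(len(t_values) * fraction)))
    threshold = np.sort(t_values)[-n_keep]  # t-value of the n_keep-th best voxel
    return t_values >= threshold

# Hypothetical per-voxel t-values for a six-voxel ROI
t = np.array([1.0, 4.2, 3.1, 0.5, 5.0, 2.2])
mask = best_voxel_subroi(t)  # keeps the top ~33%: the voxels with t = 5.0 and 4.2
```

Restricting feedback to the best-responding voxels makes the fed-back signal less sensitive to imprecise manual ROI drawing.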
The experimental paradigm and feedback were presented with a mirrored-projector system, using EPrime 2.0 software (Psychology Software Tools, Pittsburgh, PA). Thermometer bar images were exported from the analysis computer (running Turbo-BrainVoyager software) to the presentation computer (running EPrime software) for the real feedback conditions, or thermometer bar images were taken from a pre-created folder on the presentation computer (full-range set of thermometer images selected by fixed randomization) for the false feedback conditions.
Data were excluded if motion exceeded 3 mm or if no activation was seen in the no-feedback ROI localizer scan during post-hoc fMRI analysis.
As Turbo-BrainVoyager is operational software with limited capacity for post-hoc analysis, timeseries extraction was performed using FSL 4.1.5 (Oxford Centre for Functional MRI of the Brain, Oxford, United Kingdom). Two approaches were used to extract timeseries: one using parameters approximating the Turbo-BrainVoyager settings, and one characterizing data from all unfiltered voxels in the ROI.
To characterize data from all voxels in an ROI without temporal filtering, 4D fMRI scans were motion corrected using FSL MCFLIRT (using the first scan of the volume as the reference scan for alignment) and spatially smoothed (using a Gaussian kernel of 8.0 mm FWHM). These volumes were then masked by the individual ROI created in TBV, and a timecourse of mean intensities from all voxels in the ROI was extracted.
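In numpy terms, the masking-and-averaging step reduces the 4D dataset to a single ROI timecourse (a sketch of the operation, not the actual FSL commands):

```python
import numpy as np

def roi_mean_timecourse(data_4d, roi_mask):
    """Average all ROI voxels at each timepoint.

    data_4d  : array of shape (x, y, z, t), motion corrected and smoothed
    roi_mask : boolean array of shape (x, y, z) marking ROI voxels
    Returns an array of shape (t,): the mean intensity per volume.
    """
    return data_4d[roi_mask].mean(axis=0)

# Tiny synthetic example: a 2x2x2 volume over 3 timepoints,
# with a single-voxel ROI
data = np.zeros((2, 2, 2, 3))
mask = np.zeros((2, 2, 2), dtype=bool)
mask[0, 0, 0] = True
data[0, 0, 0] = [1.0, 2.0, 3.0]
tc = roi_mean_timecourse(data, mask)
```

Indexing the 4D array with a 3D boolean mask yields a (voxels, time) matrix, so averaging over axis 0 gives one value per volume.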
To characterize data using parameters approximate to the Turbo-BrainVoyager settings, the 4D fMRI scans were motion corrected using FSL MCFLIRT (using the first scan of the volume as the reference scan for alignment). These volumes were then masked by the individual ROI created in TBV. An FSL FEAT analysis was then run on the masked data using preprocessing (spatial smoothing using a Gaussian kernel of 8.0 mm FWHM and highpass temporal filtering with 44s cutoff) and statistical analysis (GLM with temporal derivative). A timecourse of signal intensities was created from the voxel with the highest z-score.
For both timeseries extraction approaches, intensity values were converted to percent signal change (PSC) using baseline defined as the average of volumes 51-60 (end of first REST period). The hemodynamic response to the “IMAGINE” period was temporally defined by the average timeseries (from the voxel with the highest z-score) of the no feedback ROI localizer scans (positive PSC values, less one volume as the intermittent imagine period was one volume shorter). For each condition of feedback type (continuous or intermittent), the average PSC per block was compared pairwise for each participant between real feedback and false feedback. Slopes for each scan were calculated as the change in PSC over the 11 blocks, and slopes were compared pairwise between real feedback and false feedback for feedback methods (continuous or intermittent).
For each scan, a standard FSL FEAT analysis was performed using preprocessing (motion correction, BET brain extraction, spatial smoothing using a Gaussian kernel of 8.0 mm FWHM, and highpass temporal filtering with a 44 s cutoff) and statistical analysis (FILM prewhitening, motion parameters added to the model, and GLM with temporal derivative). Two conditions were defined for the no-feedback ROI localizer and continuous scans (Rest and Imagine), and three conditions were defined for the intermittent scans (Rest, Imagine, and Feedback). Higher level analyses were performed in FSL using fixed effects for within-subject comparisons and mixed effects (FLAME 1+2) for between-subject comparisons. All statistical results were thresholded using clusters determined by Z > 2.3 and a corrected cluster significance of p = 0.05.
Fifteen participants (8 men and 7 women) enrolled in the study, but scanning was not completed for 1 male (due to claustrophobia) and 1 female (nausea during scanning). The average age of the 13 included participants was 31.6 years (SD = 10.7 years). All participants were high school graduates and the majority had college degrees (2 some college, 7 college degrees, and 1 post-graduate degree). Commonly reported strategies employed during the imagined movement periods included: typing (n = 6), sports activity such as bouncing a ball, swimming or karate (n = 6), playing a musical instrument (n = 5), and writing (n = 2). On a scale from 1-10 (1 = not at all confident, 10 = extremely confident), participants rated their ability to do the task at an average of 7.2 (SD = 1.3). In ranking the four feedback scans based on the participants' perception of their own performance (1 = best, 4 = worst), intermittent real feedback had the best average ranking of 2.0 (SD = 1.1), continuous real feedback followed with an average of 2.5 (SD = 1.1), and both continuous false and intermittent false feedback had the worst averages of 2.8 (SD = 1.2). The intermittent real feedback rankings were significantly better than the continuous real feedback rankings, and the continuous real feedback rankings were significantly better than both the intermittent and continuous false feedback rankings (Wilcoxon signed-rank tests, p = 0.05).
Of 26 total comparative sessions (each comprising a no-feedback ROI localizer scan, a real feedback scan, and a false feedback scan), three were excluded due to at least one scan with motion greater than 3 mm. Five comparative sessions were excluded due to lack of activation in the no-feedback ROI localizer scan. This yielded 10 usable continuous feedback sessions and 8 usable intermittent feedback sessions (see Table 1).
Figure 1 shows the mean PSC from all voxels in the individually selected regions of interest, without temporal filtering. With timeseries extracted from all voxels, the mean PSC (SD) were: continuous no feedback = 0.25 (0.52), continuous real feedback = 0.48 (0.54), continuous false feedback = 0.44 (0.45), intermittent no feedback = 0.38 (0.20), intermittent real feedback = 0.76 (0.31), and intermittent false feedback = 0.22 (0.93). With continuous feedback (comparing real feedback to false feedback), 3 participants performed significantly better with real feedback, 3 participants had no significant difference with real feedback, and 4 participants performed significantly worse with real feedback (significance levels of p = 0.05). With intermittent feedback (comparing real feedback to false feedback), 4 participants performed significantly better with real feedback, 4 participants had no significant difference with real feedback, and no participants performed significantly worse with real feedback (significance levels of p = 0.05).
Relative to timeseries extracted from all voxels without temporal filtering, there was less signal drift over time for the analysis approximating Turbo-BrainVoyager settings. With timeseries extracted from the voxels of highest z-score, the mean PSC (SD) were: continuous no feedback = 0.40 (0.36), continuous real feedback = 0.18 (0.31), continuous false feedback = 0.29 (0.27), intermittent no feedback = 0.30 (0.14), intermittent real feedback = 0.48 (0.16), and intermittent false feedback = 0.31 (0.19). With continuous feedback (comparing real feedback to false feedback), 2 participants performed significantly better with real feedback, 4 participants had no significant difference with real feedback, and 4 participants performed significantly worse with real feedback (significance levels of p = 0.05). With intermittent feedback (comparing real feedback to false feedback), 4 participants performed significantly better with real feedback, 4 participants had no significant difference with real feedback, and no participants performed significantly worse with real feedback (significance levels of p = 0.05).
With timeseries extracted from all voxels, the mean slopes (SD) were: continuous no feedback = -0.033 (0.069), continuous real feedback = 0.053 (0.090), continuous false feedback = 0.028 (0.054), intermittent no feedback = -0.005 (0.042), intermittent real feedback = 0.060 (0.061), and intermittent false feedback = -0.010 (0.129). With timeseries extracted from the voxels of highest z-score, the mean slopes (SD) were: continuous no feedback = -0.015 (0.024), continuous real feedback = 0.005 (0.039), continuous false feedback = -0.014 (0.015), intermittent no feedback = -0.010 (0.012), intermittent real feedback = 0.003 (0.025), and intermittent false feedback = -0.009 (0.022). Paired t-test failed to find any significant differences (p = 0.05) between real and false feedback, for either feedback type in either analysis approach.
The whole brain activation pattern of the no-feedback ROI localizer scans for the contrast of “Imagine Movement - Rest” is shown in Figure 2. The analysis included 11 individuals with 1 or 2 scans, for a total of 18 scans, analyzed using a multi-session (fixed effects) and multi-subject (mixed effects) three-level analysis. Brain regions with significant activation include bilateral middle frontal gyrus, left parietal cortex, left frontal regions, and right frontal and insula regions (clusters and local maxima of activation are listed in Supplemental Table 1).
For continuous feedback, contrasts of “real feedback > no feedback”, “real feedback > false feedback”, and “false feedback > real feedback” are shown in Figure 3 (from the lower level contrast of “Imagine Movement - Rest”). The analysis included 10 scan sessions (30 total scans), analyzed using the FSL tripled two-group difference analysis (mixed effects). Results include a relatively small cluster of activation in right frontal regions for “real feedback > no feedback”, no significant activation for “real feedback > false feedback”, and relatively extensive activation with maxima in right frontal regions for “false feedback > real feedback” (clusters and local maxima are listed in Supplemental Table 2).
For intermittent feedback, contrasts of “real feedback > no feedback”, “real feedback > false feedback”, and “false feedback > real feedback” are shown in Figure 4 (from the lower level contrast of “Imagine Movement - Rest”). The analysis included 8 scan sessions (24 total scans), analyzed using the FSL tripled two-group difference analysis (mixed effects). Results include a relatively small cluster of activation in right visual regions for “real feedback > no feedback”, no significant activation for “real feedback > false feedback”, and relatively extensive activation with maxima in right visual, right caudate, and left putamen regions for “false feedback > real feedback” (clusters and local maxima are listed in Supplemental Table 3).
With intermittent feedback scans only, the lower level contrast of “Feedback (2 volume blocks) - Rest (9 volume blocks)” is shown in Figure 5 for higher level contrasts of “all intermittent scans”, “real feedback > false feedback”, and “false feedback > real feedback”. The analysis included 8 scan sessions (16 scans total), analyzed using a multi-session (fixed effects) and multi-subject (mixed effects) three-level analysis for “all intermittent scans” (top), and a two-sample paired t-test (mixed effects) for the “real feedback > false feedback” and “false feedback > real feedback” contrasts. Results include a relatively extensive cluster of activation for all intermittent scans, no significant activation for “real feedback > false feedback”, and activation with maxima in right cingulate, right frontal, right temporal, and right parietal regions for “false feedback > real feedback” (clusters and local maxima are listed in Supplemental Table 4).
Our main hypothesis was that participants would generate greater activation in premotor cortex when given intermittent feedback than when given continuous feedback. Using a post-hoc analysis similar to the real-time processing, 4 of 8 participants had significantly higher PSC with intermittent feedback (real feedback compared to the false feedback control condition). This compares to only 2 of 10 participants having higher PSC with continuous feedback, and additionally 4 of 10 participants having significantly worse PSC with continuous feedback. For continuous feedback, the significant decreases in PSC with real feedback relative to false feedback may be due in part to incorrect interpretation of feedback. The false feedback may have provided useful feedback at times by random chance, whereas real feedback could be consistently unhelpful if the hemodynamic delay is not properly accounted for by the participant.
Another advantage of the intermittent approach is that the brain regions involved in evaluating feedback can be uniquely separated in time from task performance (see Figure 5). Given the extensive brain activation implicated in evaluating feedback, continuous feedback during task performance could be confounding and interfere with RTfMRIf objectives. The phenomenon of evaluating feedback itself may be a worthwhile research area. Notably, false feedback generated much brain activation relative to real feedback, potentially related to task switching and feedback appraisal. Task and appraisal processes occur simultaneously with continuous feedback paradigms, whereas these features may be separable with intermittent feedback paradigms. Future work focused on feedback processing, correlating factors of accuracy (when feedback matches brain activity, whether from real data or randomly generated data) and direction (positive feedback versus negative feedback), could also aid in isolating feedback components.
We did not provide feedback during rest periods, to keep the task simple for participants and to allow contrasts of “task - rest” to include feedback components. While analyzing the data without temporal filtering did not change our primary findings, there were some trends worth considering in future work. Baseline rest values, specifically for real feedback, tended to drift up throughout the scan (Figure 1). Providing feedback during rest to reduce such drift could produce greater “task - rest” contrast values. Practice and learning effects may also be important, as task signal trended upward, particularly during the real feedback scans.
There are many limitations of this pilot study. A considerable number of scans were excluded based on quality checks, and future RTfMRIf studies relying on functionally-defined ROIs may be limited if such ROIs are not reliably found. These exclusions also altered our counterbalanced design, so our study may be susceptible to order effects; however, we did not note obvious order effects in our limited sample. We did not use EMG recordings to verify that participants were performing motor imagery rather than actual movements. However, we took steps to minimize the possibility of actual movements (immobilization and instructions), blinded participants to false feedback conditions, and failed to find significant differences in primary motor cortex in real versus false feedback fMRI contrasts. It should also be noted that there are other ways to provide feedback, such as a continuous timeline that cues participants to the relationship between what they are doing in the moment and the sluggish 3-6 second hemodynamic delay [8, 12]. Such approaches may require extensive training not required for intermittent feedback. However, we tested only two specific feedback strategies in our study and did not examine training effects.
In summary, we have shown that participants can use intermittent feedback to modulate premotor cortex activity during an imagined movement task. Feedback displayed intermittently may be superior to feedback that is constantly updated and continuously shown, at least for some tasks. As we only tested motor imagery using a single ROI, it is difficult to know whether these findings generalize to other RTfMRIf applications. This pilot study provides interesting, albeit preliminary, data to guide future studies using RTfMRIf. Further methods work is needed to refine and develop this promising new tool.
Funding was provided in part by NIH/NIDA 1R21DA026085-01 (Brady, George).
The authors wish to thank Brian Dale at Siemens Medical for support in enabling the real-time MR image export, facilitated by a master research agreement between Siemens Medical and the Medical University of South Carolina.
The authors also wish to thank Rainer Goebel for technical assistance with Turbo-BrainVoyager.
Conflicts of Interest: The authors have no conflicts of interest to disclose.