Consistent evidence suggests that pitch height may be represented in a spatial format, with both a vertical and a horizontal representation. The spatial representation of pitch height results in response compatibility effects whereby high-pitched tones are preferentially associated with up/right responses and low-pitched tones with down/left responses (i.e., the Spatial-Musical Association of Response Codes (SMARC) effect), with the strength of these associations depending on individuals’ musical skills. In this study we investigated whether listening to tones of different pitch affects the representation of external space, as assessed in a visual and haptic line bisection paradigm, in musicians and non-musicians. Low- and high-pitched tones affected bisection performance in musicians differently, both when pitch was relevant and when it was irrelevant to the task, and in both the visual and the haptic modality. No effect of pitch height was observed on the bisection performance of non-musicians. Moreover, our data show that musicians present a (supramodal) rightward bisection bias in both the visual and the haptic modality, extending previous findings limited to the visual modality and consistent with the idea that intense practice with musical notation and bimanual instrument training affects hemispheric lateralization.
musicians; pitch; space perception; line bisection; pseudoneglect
Congenital amusia (amusia, hereafter) is a developmental
disorder that impacts negatively on the perception of music. Psychophysical
testing suggests that individuals with amusia have above-average thresholds for
detection of pitch change and pitch direction discrimination; however, a
low-level auditory perceptual problem cannot completely explain the disorder,
since discrimination of melodies is also impaired when the constituent intervals
are suprathreshold for perception. The aim of the present study was to test
pitch memory as a function of (a) time and (b) tonal interference, in order to
determine whether pitch traces are inherently weaker in amusic individuals.
Memory for the pitch of single tones was compared using two versions of a
paradigm developed by Deutsch (1970a). In
both tasks, participants compared the pitch of a standard (S) versus a
comparison (C) tone. In the time task, the S and C tones were presented,
separated in time by 0, 1, 5, 10, and 15 s (blocked presentation). In the
interference task, the S and C tones were presented with a fixed time interval
(5 s) but with a variable number of irrelevant tones in between (0, 2, 4, 6, or
8 tones; blocked presentation). In the time task, control performance remained
high for all time intervals, but amusics showed a performance decrement over
time. In the interference task, controls and amusics showed a similar
performance decrement with increasing number of irrelevant tones. Overall, the
results suggest that the pitch representations of amusic individuals are less
stable and more prone to decay than those of matched non-amusic individuals.
congenital amusia; short-term memory; delay; tonal interference
The purpose of this study was to elucidate the role of auditory feedback derived from one keystroke in the control of the rhythmicity and velocity of successive keystrokes during piano playing. We examined the effects of transient auditory perturbations with respect to the pitch, loudness, and timing of one tone on subsequent keystrokes while six pianists played short excerpts from three simple musical pieces having different tempi (“event rates”). Immediately after a delay in tone production, the inter-keystroke interval became shorter. This compensatory action depended on the tempo, being most prominent at the medium tempo. This indicates that temporal information provided by auditory feedback is utilized to regulate the timing of movement elements produced in a sequence. We also found that the keystroke velocity changed after the timing, pitch, or loudness of a tone was altered, although the response differed depending on the type of perturbation. While delaying the timing or altering the pitch led to an increase in the velocity, altering the loudness changed the velocity in an inconsistent manner. Furthermore, perturbing a tone elicited by the right hand also affected the rhythmicity and velocity of keystrokes with the left hand, indicating that bimanual coordination of tone production was maintained. Finally, altering the pitch sometimes resulted in striking an incorrect key, mostly in the slow piece, emphasizing the importance of pitch information for accurate planning and execution of sequential piano keystrokes.
Feedback control; Auditory motor integration; Sequential movements; Bimanual control; Musicians; Pianists; Music
Humans are able to extract regularities from complex auditory scenes in order to form perceptually meaningful elements. It has been shown previously that this process depends critically on both the temporal integration of the sensory input over time and the degree of frequency separation between concurrent sound sources. Our goal was to examine the relationship between these two aspects by means of magnetoencephalography (MEG). To achieve this aim, we combined time-frequency analysis on a sensor space level with source analysis. Our paradigm consisted of asymmetric ABA-tone triplets wherein the B-tones were presented temporally closer to the first A-tones, providing different tempi within the same sequence. Participants attended to the slowest B-rhythm whilst the frequency separation between tones was manipulated (0-, 2-, 4- and 10-semitones).
The results revealed that the asymmetric ABA-triplets spontaneously elicited periodic sustained responses corresponding to the temporal distribution of the A-B and B-A tone intervals in all conditions. Moreover, when attending to the B-tones, the neural representations of the A- and B-streams were both detectable in the scenarios that allow perceptual streaming (2-, 4- and 10-semitones). Alongside this, the steady-state responses tuned to the presentation of the B-tones increased significantly with increasing frequency separation between tones. Indeed, the strength of the B-tone-related steady-state responses exceeded that of the A-tone responses in the 10-semitone condition. Conversely, the representation of the A-tones dominated that of the B-tones in the 2- and 4-semitone conditions, in which greater effort was required to complete the task. Additionally, the P1 evoked-field component following the B-tones increased in magnitude with increasing inter-tonal frequency difference.
The enhancement of the evoked fields in source space, along with the B-tone-related activity in the time-frequency results, likely reflects the selective enhancement of the attended B-stream. The results also suggest that the efficiency of the temporal integration of separate streams differs depending on the degree of frequency separation between the sounds. Overall, the present findings suggest that the neural effects of auditory streaming can be captured directly in the time-frequency spectrum at the sensor-space level.
MEG; Time-frequency spectrum; Auditory scene analysis; Task-driven entrainment
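The asymmetric ABA-triplet sequence described in the abstract above can be sketched computationally: the B-tone frequency follows from the semitone separation (f_B = f_A · 2^(semitones/12)), and the B tone is placed closer in time to the first A tone. All timing values and the base frequency below are illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

def aba_triplet_sequence(f_a=500.0, semitones=4, n_triplets=5,
                         triplet_period=0.6, b_offset=0.1, a_gap=0.3):
    """Onset times (s) and frequencies (Hz) for an asymmetric ABA- sequence.

    Each triplet is A..B..A with the B tone placed closer to the first A
    (b_offset < a_gap / 2), giving the B tones their own, slower rhythm.
    All timing and frequency values here are illustrative assumptions.
    """
    f_b = f_a * 2 ** (semitones / 12)               # semitone separation
    onsets, freqs = [], []
    for k in range(n_triplets):
        t0 = k * triplet_period
        onsets += [t0, t0 + b_offset, t0 + a_gap]   # A, B (early), A
        freqs += [f_a, f_b, f_a]
    return np.array(onsets), np.array(freqs)
```

Manipulating the `semitones` argument (0, 2, 4, 10) reproduces the frequency-separation conditions while the asymmetric timing stays fixed.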
Goal-directed, coordinated movements in humans emerge from a variety of constraints that range from 'high-level' cognitive strategies based on perception of the task to 'low-level' neuromuscular-skeletal factors such as differential contributions to coordination from flexor and extensor muscles. There has been a tendency in the literature to dichotomize these sources of constraint, favouring one or the other rather than recognizing and understanding their mutual interplay. In this experiment, subjects were required to coordinate rhythmic flexion and extension movements with an auditory metronome, the rate of which was systematically increased. When subjects started in extension on the beat of the metronome, there was a small tendency to switch to flexion at higher rates, but not vice versa. When subjects were asked to contact a physical stop, the location of which was either coincident with or counterphase to the auditory stimulus, two effects occurred. When haptic contact was coincident with sound, coordination was stabilized for both flexion and extension. When haptic contact was counterphase to the metronome, coordination was actually destabilized, with transitions occurring from both extension to flexion on the beat and from flexion to extension on the beat. These results reveal the complementary nature of strategic and neuromuscular factors in sensorimotor coordination. They also suggest the presence of a multimodal neural integration process - which is parametrizable by rate and context - in which intentional movement, touch and sound are bound into a single, coherent unit.
It has been demonstrated that neural encoding of pitch in the auditory brainstem is shaped by long-term experience with language. To date, however, all stimuli have exhibited a high degree of pitch saliency. The experimental design herein permits us to determine whether experience-dependent pitch representation in the brainstem is less susceptible to progressive degradation of the temporal regularity of iterated rippled noise (IRN). Brainstem responses were recorded from Chinese and English participants in response to IRN homologues of Mandarin Tone 2 (T2IRN). Six different iteration steps were utilized to systematically vary the degree of temporal regularity in the fine structure of the IRN stimuli in order to produce a pitch salience continuum ranging from low to high. Pitch-tracking accuracy and pitch strength were computed from the brainstem responses using autocorrelation algorithms. Analysis of variance of brainstem responses to T2IRN revealed that pitch-tracking accuracy was higher in the native tone language group (Chinese) than in the non-tone language group (English) except for the three lowest steps along the continuum, and moreover, that pitch strength was greater in the Chinese group even for severely degraded stimuli in two of the six 40-ms sections of T2IRN that exhibit rapid changes in pitch. For these same two sections, exponential time constants for the stimulus continuum revealed that pitch strength emerges 2–3 times faster in the tone language group than in the non-tone language group as a function of increasing pitch salience. Together, these findings suggest that experience-dependent brainstem mechanisms for pitch are especially sensitive to those dimensions of tonal contours that provide cues of high perceptual saliency in degraded as well as normal listening conditions.
auditory; language; pitch; iterated rippled noise (IRN); fundamental frequency following response (FFR); experience-dependent plasticity
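The autocorrelation-based pitch strength measure used in the abstract above can be illustrated with a minimal sketch: pitch strength is commonly taken as the height of the largest normalized autocorrelation peak within a candidate F0 range, and the corresponding lag gives the F0 estimate. The F0 range and normalization details below are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def pitch_strength(x, fs, fmin=80.0, fmax=400.0):
    """Pitch strength and F0 estimate from the normalized autocorrelation.

    Pitch strength = height of the largest normalized autocorrelation peak
    within the candidate F0 range [fmin, fmax] (a common choice for FFR
    analyses; an illustrative sketch, not the study's exact pipeline).
    """
    x = x - x.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]
    ac = ac / ac[0]                      # normalize so lag 0 == 1
    lo = int(fs / fmax)                  # shortest candidate period (samples)
    hi = int(fs / fmin)                  # longest candidate period
    lag = lo + np.argmax(ac[lo:hi + 1])
    return ac[lag], fs / lag             # (strength, F0 in Hz)
```

Pitch tracking over a dynamic contour would apply this to short (e.g. 40-ms) sliding windows and compare the F0 track against the stimulus F0.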
We have proposed that the stability of bimanual coordination is influenced by the complexity of the representation of the task goals. Here we present two experiments to explore this hypothesis. First, we examined whether a temporal event structure is present in continuous movements by having participants vocalize while producing bimanual circling movements. Participants tended to vocalize once per movement cycle when moving in-phase. In contrast, vocalizations were not synchronized with anti-phase movements. While the in-phase result suggests that continuous movements can carry an event structure, the latter result suggests that anti-phase continuous movements lack one. Second, we examined the event structure of movements marked by salient turn-around points. Participants made bimanual wrist flexion movements and were instructed to move ‘in synchrony’ with a metronome, without specifying how they should couple the movements to the metronome. During in-phase movements, participants synchronized one hand cycle with every metronome beat; during anti-phase movements, participants synchronized flexion of one hand with one metronome beat and extension of the other hand with the next beat. The results are consistent with the hypothesis that the instability of anti-phase movements is related to their more complex (or absent) event representation relative to that associated with in-phase movements.
coordination; constraints; in-phase; anti-phase; stability
Behavioral and neurophysiological transfer effects from music experience to language processing are well established, but it is currently unclear whether linguistic expertise (e.g., speaking a tone language) benefits the processing and perception of music. Here, we compare brainstem responses of English-speaking musicians/non-musicians and native speakers of Mandarin Chinese elicited by tuned and detuned musical chords, to determine whether enhancements in subcortical processing translate to improvements in the perceptual discrimination of musical pitch. Relative to non-musicians, both musicians and Chinese speakers had stronger brainstem representation of the defining pitches of musical sequences. In contrast, two behavioral pitch discrimination tasks revealed that neither Chinese speakers nor non-musicians were able to discriminate subtle changes in musical pitch with the same accuracy as musicians. Pooled across all listeners, brainstem magnitudes predicted behavioral pitch discrimination performance, but considering each group individually, only musicians showed connections between neural and behavioral measures. No brain-behavior correlations were found for tone language speakers or non-musicians. These findings point to a dissociation between subcortical neurophysiological processing and behavioral measures of pitch perception in Chinese listeners. We infer that sensory-level enhancement of musical pitch information yields cognitive-level perceptual benefits only when that information is behaviorally relevant to the listener.
Pitch discrimination; music perception; tone language; auditory evoked potentials; fundamental frequency-following response (FFR); experience-dependent plasticity
Introduction: Musical performance is thought to rely predominantly on event-based timing involving a clock-like neural process and an explicit internal representation of the time interval. Some aspects of musical performance may rely on emergent timing, which is established through the optimization of movement kinematics, and can be maintained without reference to any explicit representation of the time interval. We predicted that musical training would have its largest effect on event-based timing, supporting the dissociability of these timing processes and the dominance of event-based timing in musical performance.
Materials and Methods: We compared 22 musicians and 17 non-musicians on the prototypical event-based timing task of finger tapping and on the typically emergently timed task of circle drawing. For each task, participants first responded in synchrony with a metronome (Paced) and then responded at the same rate without the metronome (Unpaced).
Results: Analyses of the Unpaced phase revealed that non-musicians were more variable in their inter-response intervals for finger tapping than for circle drawing, whereas musicians did not differ between the two tasks. Between groups, non-musicians were more variable than musicians for tapping but not for drawing. Further analyses showed that this difference was due to lower timer variability in musicians on the tapping task. Correlational analyses of movement jerk and inter-response interval variability revealed a negative association for tapping and a positive association for drawing in non-musicians only.
Discussion: These results suggest that musical training affects temporal variability in tapping but not drawing. Additionally, musicians and non-musicians may be employing different movement strategies to maintain accurate timing in the two tasks. These findings add to our understanding of how musical training affects timing and support the dissociability of event-based and emergent timing modes.
music; timing; finger tapping; circle drawing; emergent timing; event-based timing
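The "timer variability" analysis referred to in the abstract above is typically performed with the Wing–Kristofferson two-level timing model, which partitions inter-response-interval variance into central timer and motor implementation components via the lag-1 autocovariance. A minimal sketch, assuming that model is the one intended (the paper's exact method may differ):

```python
import numpy as np

def wing_kristofferson(intervals):
    """Partition inter-response-interval variance into timer and motor parts.

    Under the Wing-Kristofferson model, independent motor delays induce a
    negative lag-1 autocovariance in the produced intervals:
        var_motor = -autocov(1)
        var_timer = var_total - 2 * var_motor
    """
    x = np.asarray(intervals, dtype=float)
    d = x - x.mean()
    var_total = d.var()
    autocov1 = np.mean(d[:-1] * d[1:])
    var_motor = max(-autocov1, 0.0)   # clamp: model predicts autocov1 <= 0
    var_timer = max(var_total - 2 * var_motor, 0.0)
    return var_timer, var_motor
```

Lower estimated timer variance in musicians for tapping, with comparable motor variance, is the pattern consistent with training acting on the central event-based timer.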
In the normal auditory system, the perceived pitch of a tone is closely linked to the cochlear place of vibration. It has generally been assumed that high-rate electrical stimulation by a cochlear implant electrode also evokes a pitch sensation corresponding to the electrode’s cochlear place (“place” code) and stimulation rate (“temporal” code). However, other factors may affect electric pitch sensation, such as a substantial loss of nearby nerve fibers or even higher-level perceptual changes due to experience. The goals of this study were to measure electric pitch sensations in hybrid (short-electrode) cochlear implant patients and to examine which factors might contribute to the perceived pitch. To look at effects of experience, electric pitch sensations were compared with acoustic tone references presented to the non-implanted ear at various stages of implant use, ranging from hookup to 5 years. Here, we show that electric pitch perception often shifts in frequency, sometimes by as much as two octaves, during the first few years of implant use. Additional pitch measurements in more recently implanted patients at shorter time intervals up to 1 year of implant use suggest two likely contributions to these observed pitch shifts: intersession variability (up to one octave) and slow, systematic changes over time. We also found that the early pitch sensations for a constant electrode location can vary greatly across subjects and that these variations are strongly correlated with speech reception performance. Specifically, patients with an early low-pitch sensation tend to perform poorly with the implant compared to those with an early high-pitch sensation, which may be linked to less nerve survival in the basal end of the cochlea in the low-pitch patients. In contrast, late pitch sensations show no correlation with speech perception. 
These results together suggest that early pitch sensations may more closely reflect peripheral innervation patterns, while later pitch sensations may reflect higher-level, experience-dependent changes. These pitch shifts over time not only raise questions for strict place-based theories of pitch perception, but also imply that experience may have a greater influence on cochlear implant perception than previously thought.
cochlear implant; hybrid; frequency; tonotopy; speech; plasticity
Experience-dependent enhancement of neural encoding of pitch in the auditory brainstem has been observed only for specific portions of native pitch contours exhibiting high rates of pitch acceleration, irrespective of speech or nonspeech context. This experiment allows us to determine whether this language-dependent advantage transfers to acceleration rates that extend beyond the pitch range of natural speech. Brainstem frequency following responses (FFRs) were recorded from Chinese and English participants in response to four 250-ms dynamic click-train stimuli with different rates of pitch acceleration. The maximum pitch acceleration rates in a given stimulus ranged from low (0.3 Hz/ms; Mandarin Tone 2) to high (2.7 Hz/ms; 2 octaves). Pitch strength measurements were computed from the FFRs using autocorrelation algorithms with an analysis window centered at the point of maximum pitch acceleration in each stimulus. Between-group comparisons of pitch strength revealed that the Chinese group exhibited more robust pitch representation than the English group across all four acceleration rates. Regardless of language group, pitch strength was greater in response to acceleration rates within or proximal to the range of natural speech than to those beyond it. Though both groups showed decreasing pitch strength with increasing acceleration rates, the pitch representations of the Chinese group were more resistant to degradation. FFR spectral data were complementary across acceleration rates. These findings demonstrate that perceptually salient pitch cues associated with lexical tone influence brainstem pitch extraction not only in the speech domain, but also in auditory signals that fall clearly outside the range of dynamic pitch to which a native listener is exposed.
auditory; human; brainstem; pitch; language; frequency following response (FFR); click trains; Mandarin Chinese; experience-dependent plasticity; speech perception
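A dynamic click train with a prescribed pitch acceleration, as used in the study above, can be generated by integrating the instantaneous repetition rate and placing a click at each whole-cycle crossing. The linear F0 ramp, starting frequency, and sampling rate below are illustrative assumptions; the study's actual stimuli were derived from the Mandarin Tone 2 contour.

```python
import numpy as np

def accelerating_click_train(f_start=100.0, accel=0.3, dur=0.25, fs=16000):
    """Click times (s) for a train whose instantaneous rate rises linearly.

    accel is the pitch acceleration in Hz/ms (0.3 Hz/ms is the low end
    cited for Mandarin Tone 2). A click is placed whenever the integrated
    instantaneous frequency crosses the next whole cycle.
    """
    a = accel * 1000.0                         # Hz/ms -> Hz/s
    t = np.arange(0, dur, 1 / fs)
    phase = f_start * t + 0.5 * a * t ** 2     # cycles elapsed = integral of f(t)
    click_times = []
    next_cycle = 0.0
    for ti, ph in zip(t, phase):
        if ph >= next_cycle:
            click_times.append(ti)
            next_cycle += 1.0
    return np.array(click_times)
```

The inter-click interval shrinks over the 250-ms train, so the perceived pitch glides upward at the chosen acceleration rate.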
The diagnosis of tinnitus relies on self-report. Psychoacoustic measurements of tinnitus pitch and loudness are essential for assessing claims and discriminating true ones from false ones. For this reason, the quantification of tinnitus remains a challenging research goal. We aimed to: (1) assess the precision of a new tinnitus likeness rating procedure with a continuous-pitch presentation method, controlling for music training, and (2) test whether tinnitus psychoacoustic measurements have the sensitivity and specificity required to detect people faking tinnitus. Musicians and non-musicians with tinnitus, as well as simulated malingerers without tinnitus, were tested. Most were retested several weeks later. Tinnitus pitch matching was first assessed using the likeness rating method: pure tones from 0.25 to 16 kHz were presented in random order, and participants rated the likeness of each tone to their tinnitus and adjusted its level from 0 to 100 dB SPL. Tinnitus pitch matching was then assessed with a continuous-pitch method: participants matched the pitch of their tinnitus to an external tone by moving a finger across a touch-sensitive strip, which generated a continuous pure tone from 0.5 to 20 kHz in 1-Hz steps. The predominant tinnitus pitch was consistent across both methods for both musicians and non-musicians, although musicians displayed better pitch matching of external tones. Simulated malingerers rated loudness much higher than did the other groups, a measure with a high degree of specificity (94.4%), and were unreliable in loudness (but not pitch) matching from one session to the next. Retest data showed similar pitch-matching responses for both methods for all participants. In conclusion, tinnitus pitch and loudness matches reliably correspond to the tinnitus percept, and psychoacoustic loudness matches are sensitive and specific to the presence of tinnitus.
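The continuous-pitch matching strip described above maps finger position to a pure-tone frequency between 0.5 and 20 kHz in 1-Hz steps. A minimal sketch of such a mapping, assuming (hypothetically) a logarithmic position-to-frequency relation, which keeps equal strip distances perceptually comparable:

```python
import numpy as np

def strip_position_to_frequency(pos, f_lo=500.0, f_hi=20000.0):
    """Map a normalized touch-strip position (0..1) to a tone frequency (Hz).

    A logarithmic mapping is assumed here (equal strip distance = equal
    musical interval); the output is quantized to 1-Hz steps as in the
    procedure described. The log mapping itself is an illustrative
    assumption, not necessarily the device's actual transfer function.
    """
    f = f_lo * (f_hi / f_lo) ** np.clip(pos, 0.0, 1.0)
    return np.round(f)   # 1-Hz resolution
```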
Tone languages such as Thai and Mandarin Chinese use differences in fundamental frequency (F0, pitch) to distinguish lexical meaning. Previous behavioral studies have shown that native speakers of a non-tone language have difficulty discriminating among tone contrasts and are sensitive to different F0 dimensions than speakers of a tone language. The aim of the present ERP study was to investigate the effect of language background and training on the non-attentive processing of lexical tones. EEG was recorded from 12 adult native speakers of Mandarin Chinese, 12 native speakers of American English, and 11 Thai speakers while they were watching a movie and were presented with multiple tokens of low-falling, mid-level and high-rising Thai lexical tones. High-rising or low-falling tokens were presented as deviants among mid-level standard tokens, and vice versa. EEG data and data from a behavioral discrimination task were collected before and after a two-day perceptual categorization training task.
Behavioral discrimination improved after training in both the Chinese and the English groups. Low-falling tone deviants versus standards elicited a mismatch negativity (MMN) in all language groups. Before, but not after, training, the English speakers showed a larger MMN than the Chinese speakers, even though the English speakers performed worst in the behavioral tasks. The MMN was followed by a late negativity, which became smaller with improved discrimination. High-rising deviants versus standards elicited a late negativity, which was left-lateralized only in the English and Chinese groups.
Results showed that native speakers of English, Chinese and Thai recruited largely similar mechanisms when non-attentively processing Thai lexical tones. However, native Thai speakers differed from the Chinese and English speakers with respect to the processing of late F0 contour differences (high-rising versus mid-level tones). In addition, native speakers of a non-tone language (English) were initially more sensitive to F0 onset differences (low-falling versus mid-level contrast), which was suppressed as a result of training. This result converges with results from previous behavioral studies and supports the view that attentive as well as non-attentive processing of F0 contrasts is affected by language background, but is malleable even in adult learners.
Background: This study investigates the effect of altered auditory feedback (AAF) in musician's dystonia (MD) and discusses whether AAF can be considered as a sensory trick in MD. Furthermore, the effect of AAF is compared with altered tactile feedback, which can serve as a sensory trick in several other forms of focal dystonia.
Methods: The method is based on scale analysis (Jabusch et al., 2004). Experiment 1 employed a synchronization paradigm: 12 MD patients and 25 healthy pianists repeatedly played C-major scales in synchrony with a metronome on a MIDI piano under three auditory feedback conditions: (1) normal feedback; (2) no feedback; (3) constantly delayed feedback. Experiment 2 employed a synchronization–continuation paradigm: 12 MD patients and 12 healthy pianists repeatedly played C-major scales in two phases, first in synchrony with a metronome and then continuing at the established tempo without it. There were four experimental conditions: three with the same AAF manipulations as in Experiment 1 and one with altered tactile sensory input. The coefficient of variation of the inter-onset intervals of key depressions was calculated to evaluate fine motor control.
Results: In both experiments, the healthy controls and the patients behaved very similarly. There was no difference in the regularity of playing between the two groups under any condition, and neither AAF nor altered tactile feedback had a beneficial effect on patients' fine motor control.
Conclusions: The results of the two experiments suggest that, in the context of our experimental designs, AAF and altered tactile feedback play a minor role in motor coordination in patients with musician's dystonia. We propose that altered auditory and tactile feedback do not serve as effective sensory tricks and may not temporarily reduce the symptoms of patients suffering from MD in this experimental context.
musician's dystonia; altered auditory feedback; glove effect; sensory trick; scale paradigm; sensorimotor integration
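The fine-motor-control measure used in the scale paradigm above, the coefficient of variation (CV) of inter-onset intervals, is straightforward to compute from MIDI note-on times. The sample onsets in the test below are invented for illustration:

```python
import numpy as np

def cv_of_inter_onset_intervals(onset_times_ms):
    """Coefficient of variation (SD / mean) of inter-onset intervals.

    A lower CV means more evenly timed keystrokes; this is the regularity
    measure used in scale-playing paradigms for fine motor control.
    """
    ioi = np.diff(np.asarray(onset_times_ms, dtype=float))
    return ioi.std() / ioi.mean()
```

Perfectly regular playing gives a CV of 0; timing jitter in the keystrokes raises it.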
In normal listeners, acoustic reflex decay (ARD) typically occurs for high- but not for low-frequency tones. In patients with acoustic neuromas, decay can be obtained at all frequencies, presumably due to poor neural synchrony. These observations have led us to hypothesize that resistance to decay is due to robust encoding of the temporal fine structure of the eliciting stimulus. For a 4-kHz stimulus, ARD is reduced by sinusoidal amplitude modulation (SAM), a result attributed to the low-frequency pattern of SAM providing the temporal characteristics necessary to maintain the reflex. If this interpretation is correct, then further reductions in ARD should be seen for stimuli having temporal characteristics that even more closely resemble the neural response to low-frequency stimulus fine structure. On the other hand, if other perceptual qualities of a SAM tone are responsible for the effect (e.g., rate pitch), then manipulations of perceived sound quality, rather than temporal characteristics per se, should produce similar effects. The experiment reported here included a reference condition, (1) a 5-kHz pure tone, and three "temporal" manipulations composed of the 5-kHz tone multiplied by (2) a raised 100-Hz sinusoid, (3) a noise sample lowpass filtered at 100 Hz, and (4) a half-wave rectified 100-Hz sinusoid. Additional conditions manipulated perceived pitch; these stimuli spanned 4.5–8 kHz and included a reference condition, (5) Gaussian noise, and a stimulus associated with a 100-Hz pitch, (6) iterated rippled noise. Results show the greatest reductions in ARD with the half-wave rectified stimulus, thought to most closely mimic the temporal characteristics of a low-frequency tone. Little or no reduction in ARD was associated with the iterated rippled noise, suggesting that perceived pitch does not play an important role in maintaining the acoustic reflex.
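The three "temporal" manipulations described above, a 5-kHz carrier multiplied by a raised sinusoid, lowpass noise, or a half-wave rectified sinusoid, can be sketched as follows. The sampling rate, duration, and the crude moving-average lowpass (plus the rectification that keeps the noise envelope non-negative) are assumptions standing in for the study's actual signal generation:

```python
import numpy as np

def ard_stimuli(fs=48000, dur=1.0, fc=5000.0, fm=100.0, seed=0):
    """Three low-frequency envelope manipulations of a 5-kHz carrier.

    Each multiplies the carrier by a non-negative ~100-Hz envelope: a
    raised sinusoid, smoothed noise, or a half-wave rectified sinusoid.
    The moving-average filter is a crude stand-in for a proper 100-Hz
    lowpass design; all parameters here are illustrative.
    """
    t = np.arange(int(fs * dur)) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    raised = 0.5 * (1 + np.sin(2 * np.pi * fm * t))          # raised sinusoid
    halfwave = np.maximum(np.sin(2 * np.pi * fm * t), 0.0)   # half-wave rectified
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(t.size)
    win = int(fs / fm)                                       # ~100-Hz smoothing
    lp_noise = np.abs(np.convolve(noise, np.ones(win) / win, mode='same'))
    return {name: carrier * env for name, env in
            [('raised', raised), ('lp_noise', lp_noise), ('halfwave', halfwave)]}
```

The half-wave rectified envelope is the one thought to most closely mimic the neural response to low-frequency fine structure, matching the condition that produced the largest ARD reduction.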
The human central auditory system can automatically extract abstract regularities from a variant auditory input. To this end, temporally separated events need to be related. This study tested whether the timing between events, falling either within or outside the temporal window of integration (~350 ms), impacts the extraction of abstract feature relations. We utilized tone pairs for which tones within but not across pairs revealed a constant pitch relation (e.g., pitch of second tone of a pair higher than pitch of first tone, while absolute pitch values varied across pairs). We measured the mismatch negativity (MMN; the brain’s error signal to auditory regularity violations) to second tones that rarely violated the pitch relation (e.g., pitch of second tone lower). A Short condition, in which tone duration (90 ms) and the stimulus onset asynchrony between the tones of a pair were short (110 ms), was compared to two conditions where this onset asynchrony was long (510 ms). In the Long Gap condition, the tone durations were identical to Short (90 ms), but the silent interval was prolonged by 400 ms. In Long Tone, the duration of the first tone was prolonged by 400 ms, while the silent interval was comparable to Short (20 ms). Results show a frontocentral MMN of comparable amplitude in all conditions. Thus, abstract pitch relations can be extracted even when the within-pair timing exceeds the integration period. Source analyses indicate MMN generators in the supratemporal cortex. Interestingly, they were located more anteriorly in Long Gap than in Short and Long Tone. Moreover, frontal generator activity was found for Long Gap and Long Tone. Thus, the way in which the system automatically registers irregular abstract pitch relations depends on the timing of the events to be linked. Provided that the current MMN data mirror established abstract rule representations coding the regular pitch relation, the neural processes building these templates vary with timing.
abstract regularities; automatic processing; frontal generators; mismatch negativity; supratemporal generators; temporal window of integration
High-frequency pure tones (>6 kHz), which alone do not produce salient melodic pitch information, provide melodic pitch information when they form part of a harmonic complex tone with a lower fundamental frequency (F0). We explored this phenomenon in normal-hearing listeners by measuring F0 difference limens (F0DLs) for harmonic complex tones and pure-tone frequency difference limens (FDLs) for each of the tones within the harmonic complexes. Two spectral regions were tested. The low- and high-frequency band-pass regions comprised harmonics 6–11 of a 280- or 1,400-Hz F0, respectively; thus, for the high-frequency region, audible frequencies present were all above 7 kHz. Frequency discrimination of inharmonic log-spaced tone complexes was also tested in control conditions. All tones were presented in a background of noise to limit the detection of distortion products. As found in previous studies, F0DLs in the low region were typically no better than the FDL for each of the constituent pure tones. In contrast, F0DLs for the high-region complex were considerably better than the FDLs found for most of the constituent (high-frequency) pure tones. The data were compared with models of optimal spectral integration of information, to assess the relative influence of peripheral and more central noise in limiting performance. The results demonstrate a dissociation in the way pitch information is integrated at low and high frequencies and provide new challenges and constraints in the search for the underlying neural mechanisms of pitch.
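The optimal spectral integration benchmark mentioned above can be made concrete: harmonic n sits at n·F0, so a frequency difference limen of FDL_n on that harmonic constrains F0 with standard deviation FDL_n/n, and independent cues combine by inverse-variance weighting. The numeric FDL values in the test are invented for illustration:

```python
import numpy as np

def predicted_f0dl(fdls_hz, harmonic_numbers):
    """Optimal-integration prediction of the F0 difference limen (Hz).

    Harmonic n lies at n*F0, so an FDL of fdl_n Hz on that harmonic
    constrains F0 with sigma_n = fdl_n / n. Combining independent cues by
    inverse-variance weighting gives
        F0DL = ( sum_n (n / fdl_n)^2 )^(-1/2)
    This is a standard benchmark model, not necessarily the exact model
    variant fitted in the study.
    """
    f = np.asarray(fdls_hz, dtype=float)
    n = np.asarray(harmonic_numbers, dtype=float)
    return 1.0 / np.sqrt(np.sum((n / f) ** 2))
```

By construction the prediction is at or below the best single-harmonic constraint, min_n(FDL_n/n); measured F0DLs better than this bound (as found here at high frequencies relative to low) are what motivates comparing peripheral versus central noise accounts.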
The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effect(s) would also be present in the auditory modality. With hands placed either near or away from the audio sources, participants performed an auditory-spatial discrimination (Experiment 1: left or right side), pitch discrimination (Experiment 2: high, med, or low tone), and spatial-plus-pitch (Experiment 3: left or right; high, med, or low) discrimination task. In Experiment 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right hand advantage, however, disappeared in the hands-near condition because of a significant improvement in the left hand's reaction time (RT). No effect of hand proximity was found in Experiments 2 or 3, where a choice RT task requiring pitch discrimination was used. Together, these results suggest that the perceptual and attentional effect of hand proximity is not limited to one specific modality, but applicable to the entire “space” near the hands, including stimuli of different modalities (at least visual and auditory) within that space. While these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. (2006), we also discuss the possibility of a dual mechanism hypothesis to reconcile findings from the multimodal and magno/parvocellular account.
embodied cognition; hand-altered vision; peripersonal space
High doses (10 nM) of epothilone B, a microtubule stabilizer, inhibit the development of human tumor-derived angiogenesis following short (14 day) drug exposure times. Metronomic dosing regimens use lower drug doses and prolonged drug exposure times in an attempt to decrease toxicity compared to standard dosing schedules.
We hypothesized that epothilone B would be an effective antiangiogenic agent when administered at very low doses over an extended period of time.
Fragments of four fresh human tumors were cultured in a fibrin-thrombin matrix and maintained in nutrient media plus 20% fetal bovine serum for 56 days. Tumor fragments (n=40–60 per group) were exposed to weekly doses of epothilone B at concentrations of 10, 5, 1, 0.5, or 0.1 nM. All of these concentrations are clinically achievable. Tumor angiogenesis was assessed weekly from day 14 to day 56 using a validated visual grading system. This system rates neovessel growth, density, and length on a 0–16 scale [angiogenic index (AI)]. The average change in AI between days 14 and 56 was calculated for all samples and used to evaluate the metronomic response.
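The metronomic response measure described above reduces to simple arithmetic: the per-fragment change in angiogenic index between day 14 and day 56, averaged across fragments. A minimal illustrative sketch (function and variable names are assumptions, not from the study):

```python
def mean_delta_ai(ai_day14, ai_day56):
    """Average per-fragment change in angiogenic index (AI, 0-16 scale)
    between day 14 and day 56; a negative value indicates an
    antiangiogenic (metronomic) response."""
    deltas = [late - early for early, late in zip(ai_day14, ai_day56)]
    return sum(deltas) / len(deltas)
```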
Epothilone B produced a dose-dependent antiangiogenic response in all tumors. Two of the four tumors demonstrated a clear and significant metronomic antiangiogenic effect over time.
Epothilone B, when dosed on a metronomic schedule, may have a significant antiangiogenic effect on human solid tumors. This study provides evidence for the potential use of epothilone B on a metronomic dosing schedule.
Angiogenesis; Metronomic; Human; Epothilone B; Cancer
"Natural" crossmodal correspondences, such as the spontaneous tendency to associate high pitches with high spatial locations, are often hypothesized to occur preattentively and independently of task instructions (top-down attention). Here, we investigate bottom-up attentional engagement by using emotional scenes that are known to naturally and reflexively engage attentional resources. We presented emotional (pleasant and unpleasant) or neutral pictures either below or above a fixation cross, while participants were required to discriminate between a high or a low pitch tone (experiment 1). Results showed that despite a robust crossmodal attentional capture of task-irrelevant emotional pictures, the general advantage in classifying the tones for congruent over incongruent visual-auditory stimuli was similar for emotional and neutral pictures. On the other hand, when picture position was task-relevant (experiment 2), task-irrelevant tones did not interact with pictures with regard to their combination of pitch and visual vertical spatial position, but instead they were effective in minimizing the interference effect of emotional picture processing on the ongoing task. These results provide constraints on our current understanding of natural crossmodal correspondences.
Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Although a variety of behavioral studies have provided strong evidence for perceptual timing deficits in schizophrenia, no study to date has directly examined overt temporal performance in schizophrenia using a task that differentially engages perceptual and motor-based timing processes. The present study aimed to isolate perceptual and motor-based temporal performance in individuals diagnosed with schizophrenia using a repetitive finger-tapping task that has previously been shown to differentially engage brain regions associated with perceptual and motor-related timing behavior. Thirty-two individuals with schizophrenia and 31 non-psychiatric control participants completed the repetitive finger-tapping task, which required participants to first tap in time with computer-generated tones separated by a fixed intertone interval (tone-paced tapping), after which the tones were discontinued and participants were required to continue tapping at the established pace (self-paced tapping). Participants with schizophrenia displayed significantly faster tapping rates for both tone- and self-paced portions of the task compared to the non-psychiatric group. Individuals diagnosed with schizophrenia also displayed greater tapping variability during both tone- and self-paced portions of the task. The application of a mathematical timing model further indicated that group differences were primarily attributable to increased timing difficulties, as opposed to task-implementation difficulties, in the schizophrenia group, which is noteworthy given the broad range of impairments typically associated with the disorder.
These findings support the contention that schizophrenia is associated with a broad range of timing difficulties, including those associated with time perception as well as time production.
schizophrenia; timing; temporal processing; finger tapping; variability
We validated heart rate (HR) and six time-domain and six frequency-domain measures of heart rate variability (HRV) as estimators of autonomic outflow in 44 young healthy male subjects. Gold standards for autonomic outflow were the Rosenblueth-Simeone factors m (sympathetic tone) and n (vagal tone), and the sympathovagal balance m·n, determined by two-stage complete autonomic blockade.
Rank correlations were computed between HR and the HRV measures obtained before autonomic blockade, and m, n, and m·n. Also, the maximal mean performance (the average of sensitivity and specificity) of HR and each HRV measure as a discriminator between low and high values of m, n, or m·n was computed.
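The "maximal mean performance" criterion can be sketched as follows: for a candidate discriminator (HR or an HRV measure), sweep candidate cut-off values and take the one that maximizes the average of sensitivity and specificity against the binary gold standard (low vs. high m, n, or m·n). This is an illustrative reconstruction under assumed conventions (the study's exact thresholding procedure is not specified here), with hypothetical names:

```python
def max_mean_performance(values, is_high):
    """Maximal averaged sensitivity/specificity over all cut-offs.

    values  -- discriminator value per subject (e.g., an HRV measure)
    is_high -- gold-standard label per subject (True = high vagal tone)
    Assumes higher values predict the 'high' class and both classes occur."""
    best = 0.0
    for cut in sorted(set(values)):
        predicted = [v >= cut for v in values]
        tp = sum(p and t for p, t in zip(predicted, is_high))
        tn = sum((not p) and (not t) for p, t in zip(predicted, is_high))
        pos = sum(is_high)
        neg = len(is_high) - pos
        best = max(best, (tp / pos + tn / neg) / 2)
    return best
```

A performance of 0.82, as reported below for respiratory sinus arrhythmia, thus means the best single cut-off correctly classifies, on average, 82% of the low- and high-tone subjects.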
The spectral HRV measures showed weaker correlations and poorer performance than the time-domain HRV measures. Correlations with sympathetic tone were all below 0.31. Respiratory sinus arrhythmia during 15 cycles/min metronome breathing was superior in estimating vagal tone and sympathovagal balance (correlations -0.71/-0.73; both performances 0.82); heart rate scored similarly in assessing the sympathovagal balance (correlation 0.71; performance 0.82).
It does not appear justified to evaluate HR or HRV in terms of sympathetic tone, vagal tone, or sympathovagal balance. HR and HRV are specifically weak in assessing sympathetic tone. Respiratory sinus arrhythmia during 15 cycles/min metronome breathing is superior in assessing vagal tone. Current HRV analysis techniques offer no advantages compared with HR in assessing the sympathovagal balance.
atropine; metoprolol; heart rate variability; Rosenblueth-Simeone model; sympathovagal balance
Recent studies have documented robust and intriguing associations between affect and performance in cognitive tasks. The present two experiments sought to extend this line of work with reference to potential cross-modal effects. Specifically, the present studies examined whether word evaluations would bias subsequent judgments of low- and high-pitch tones. Because affective metaphors and related associations consistently indicate that positive is high and negative is low, we predicted and found that positive evaluations biased tone judgment in the direction of high-pitch tones, whereas the opposite was true of negative evaluations. Effects were found on accuracy rates, response biases, and reaction times. These effects occurred despite the irrelevance of prime evaluations to the tone judgment task. In addition to clarifying the nature of these cross-modal associations, the present results further the idea that affective evaluations exert large effects on perceptual judgments related to verticality.
Musicians and musically untrained individuals have been shown to differ in a variety of functional brain processes such as auditory analysis and sensorimotor interaction. At the same time, internally operating forward models are assumed to enable the organism to discriminate the sensory outcomes of self-initiated actions from other sensory events by deriving predictions from efference copies of motor commands about forthcoming sensory consequences. As a consequence, sensory responses to stimuli that are triggered by a self-initiated motor act are suppressed relative to the same but externally initiated stimuli, a phenomenon referred to as motor-induced suppression (MIS) of sensory cortical feedback. Moreover, MIS in the auditory domain has been shown to be modulated by the predictability of certain properties such as frequency or stimulus onset. The present study compares auditory processing of predictable and unpredictable self-initiated 0-delay speech sounds and piano tones between musicians and musical laymen by means of an event-related potential (ERP) and topographic pattern analysis (TPA) [microstate analysis or evoked potential (EP) mapping] approach. As in previous research on the topic of MIS, the amplitudes of the auditory event-related potential (AEP) N1 component were significantly attenuated for predictable and unpredictable speech sounds in both experimental groups to a comparable extent. On the other hand, AEP N1 amplitudes were enhanced for unpredictable self-initiated piano tones in both experimental groups similarly and MIS did not develop for predictable self-initiated piano tones at all. The more refined EP mapping revealed that the microstate exhibiting a typical auditory N1-like topography was significantly shorter in musicians when speech sounds and piano tones were self-initiated and predictable. In contrast, non-musicians only exhibited shorter auditory N1-like microstate durations in response to self-initiated and predictable piano tones. 
Taken together, our findings suggest that besides the known effect of MIS, internally operating forward models also facilitate early acoustic analysis of complex tones by means of faster processing time as indicated by shorter auditory N1-like microstate durations in the first ~200 ms after stimulus onset. In addition, musicians seem to profit from this facilitation also during the analysis of speech sounds as indicated by comparable auditory N1-like microstate duration patterns between speech and piano conditions. In contrast, non-musicians did not show such an effect.
motor-induced suppression; musicians; speech; plasticity; internal forward model
One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus these tones formed a pitch ‘border’ between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision.
auditory sensory memory; auditory stream segregation; event-related potentials; implicit memory; spectro-temporal processing