visual spatial frequency; auditory amplitude-modulation rate; auditory-visual interactions
When attention is directed to the local or global level of a hierarchical stimulus, attending to that same scale of information is subsequently facilitated. This effect is called level-priming, and in its pure form, it has been dissociated from stimulus- or response-repetition priming. In previous studies, pure level-priming has been demonstrated using hierarchical stimuli composed of alphanumeric forms consisting of lines. Here, we test whether pure level-priming extends to hierarchical configurations of generic geometric forms composed of elements that can be depicted either outlined or filled-in. Interestingly, whereas hierarchical stimuli composed of outlined elements benefited from pure level-priming, for both local and global targets, those composed of filled-in elements did not. The results are not readily attributable to differences in spatial frequency content, suggesting that forms composed of outlined and filled-in elements are treated differently by attention and/or priming mechanisms. Because our results present a surprising limit on attentional persistence to scale, we propose that other findings in the attention and priming literature be evaluated for their generalizability across a broad range of stimulus classes, including outlined and filled-in depictions.
priming; local; global; attention; hierarchical stimuli
Synchrony judgments involve deciding whether cues to an event are in synch or out of synch, while temporal order judgments involve deciding which of the cues came first. When the cues come from different sensory modalities, these judgments can be used to investigate multisensory integration in the temporal domain. However, evidence indicates that these two tasks should not be used interchangeably, as it is unlikely that they measure the same perceptual mechanism. The current experiment further explores this issue across a variety of different audiovisual stimulus types.
Participants were presented with 5 audiovisual stimulus types, each at 11 parametrically manipulated levels of cue asynchrony. During separate blocks, participants made synchrony judgments or temporal order judgments. For some stimulus types, many participants were unable to successfully make temporal order judgments, but they were able to make synchrony judgments. The mean points of subjective simultaneity for synchrony judgments were all video-leading, while those for temporal order judgments were all audio-leading. In the within-participants analyses, no correlation was found across the two tasks for either the point of subjective simultaneity or the temporal integration window.
Stimulus type influenced how the two tasks differed; nevertheless, consistent differences were found between the two tasks regardless of stimulus type. Therefore, in line with previous work, we conclude that synchrony and temporal order judgments are supported by different perceptual mechanisms and should not be interpreted as being representative of the same perceptual process.
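The point of subjective simultaneity (PSS) reported above is conventionally estimated by fitting a psychometric function to judgment data. The sketch below is a minimal illustration, not the authors' analysis: the Gaussian model shape, the SOA levels, and the response proportions are all assumed placeholder values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, pss, sigma, amp):
    """Synchrony-judgment model: the proportion of 'synchronous' responses
    peaks at the point of subjective simultaneity (PSS)."""
    return amp * np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

# Illustrative data: 11 stimulus-onset asynchronies in ms
# (negative = audio leads) and hypothetical response proportions.
soas = np.array([-300, -240, -180, -120, -60, 0, 60, 120, 180, 240, 300])
p_sync = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.95, 0.98, 0.90, 0.65, 0.35, 0.10])

# Fit the model; the PSS is the SOA at which 'synchronous' responses peak.
(pss, sigma, amp), _ = curve_fit(gaussian, soas, p_sync, p0=(0, 100, 1))
print(f"PSS = {pss:.1f} ms (positive = video-leading)")
```

With these illustrative data the fitted PSS is video-leading, matching the qualitative pattern the abstract reports for synchrony judgments.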
A single glance at your crowded desk is enough to locate your favorite cup. But finding an unfamiliar object requires more effort. This superiority in recognition performance for learned objects has at least two possible sources. For familiar objects observers might: 1) select more informative image locations upon which to fixate their eyes, or 2) extract more information from a given eye fixation. To test these possibilities, we had observers localize fragmented objects embedded in dense displays of random contour fragments. Eight participants searched for objects in 600 images while their eye movements were recorded in three daily sessions. Performance improved as subjects trained with the objects: The number of fixations required to find an object decreased by 64% across the 3 sessions. An ideal observer model that included measures of fragment confusability was used to calculate the information available from a single fixation. Comparing human performance to the model suggested that across sessions information extraction at each eye fixation increased markedly, by an amount roughly equal to the extra information that would be extracted following a 100% increase in functional field of view. Selection of fixation locations, on the other hand, did not improve with practice.
How do the characteristics of sounds influence the allocation of visual-spatial attention? Natural sounds typically change in frequency. Here we demonstrate that the direction of frequency change guides visual-spatial attention more strongly than the average or ending frequency, and provide evidence suggesting that this cross-modal effect may be mediated by perceptual experience. We used a Go/No-Go color-matching task to avoid response compatibility confounds. Participants performed the task either with their heads upright or tilted by 90°, misaligning the head-centered and environmental axes. The first of two colored circles was presented at fixation and the second was presented in one of four surrounding positions in a cardinal or diagonal direction. Either an ascending or descending auditory-frequency sweep was presented coincident with the first circle. Participants were instructed to respond to the color match between the two circles and to ignore the uninformative sounds. Ascending frequency sweeps facilitated performance (response time and/or sensitivity) when the second circle was presented at the cardinal top position and descending sweeps facilitated performance when the second circle was presented at the cardinal bottom position; there were no effects of the average or ending frequency. The sweeps had no effects when circles were presented at diagonal locations, and head tilt entirely eliminated the effect. Thus, visual-spatial cueing by pitch change is narrowly tuned to vertical directions and dominates any effect of average or ending frequency. Because this cross-modal cueing is dependent on the alignment of head-centered and environmental axes, it may develop through associative learning during waking upright experience.
cross-modal perception; auditory-visual interactions; visual-spatial attention; implicit attentional processing; multi-modal cognition
During binocular rivalry, perception alternates between two different images presented one to each eye. At any moment, one image is visible (dominant) while the other is invisible (suppressed). Alternations in perception during rivalry could involve competition between eyes (eye rivalry), competition between images (image rivalry), or both. We measured response criteria, sensitivities, and thresholds to brief contrast increments to one of the rival stimuli in conventional rivalry displays and in a display in which the rival stimuli swapped between the eyes every 333 ms (swap rivalry), a condition that necessarily involves image rivalry. We compared the sensitivity and threshold measures in dominance and suppression to assess the strength of suppression. We found that response criteria are essentially the same during dominance and suppression for the two sorts of rivalry. Critically, we found that swap-rivalry suppression is weak after a swap and strengthens throughout the swap interval. We propose that image rivalry is responsible for the weak initial suppression immediately after a swap and that eye rivalry is responsible for the stronger suppression that comes later.
Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here, using human subjects, we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas, using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were highly correlated with decoded tuning function changes only in V3A, which is known to be highly responsive to global motion in human subjects. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area.
Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding.
awareness; pattern adaptation; visual perception
Several studies have reported that task instructions influence eye-movement behavior during static image observation. In contrast, during dynamic scene observation we show that while the specificity of the goal of a task influences observers' beliefs about where they look, the goal does not in turn influence eye-movement patterns. In our study, observers watched short video clips of a single tennis match and were asked to make subjective judgments about the allocation of visual attention to the items presented in the clip (e.g., ball, players, court lines, and umpire). However, before attending to the clips, observers were either told to simply watch the clips (non-specific goal) or told to watch the clips with a view to judging which of the two tennis players was awarded the point (specific goal). The results of the subjective reports suggest that observers believed that they allocated their attention more to goal-related items (e.g., court lines) if they performed the goal-specific task. However, we did not find an effect of goal specificity on major eye-movement parameters (i.e., saccadic amplitudes, inter-saccadic intervals, and gaze coherence). We conclude that the specificity of a task goal can alter observers' beliefs about their attention allocation strategy, but such task-driven meta-attentional modulation does not necessarily correlate with eye-movement behavior.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential-association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. 
Thus, object-based auditory-visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.
One possible strategy to evaluate whether signals in different modalities originate from a common external event or object is to form associations between inputs from different senses. This strategy would be quite effective because signals in different modalities from a common external event would then be aligned spatially and temporally. Indeed, it has been demonstrated that after adaptation to visual apparent motion paired with alternating auditory tones, the tones begin to trigger illusory motion perception of a static visual stimulus, where the perceived direction of visual lateral motion depends on the order in which the tones are replayed. The mechanisms underlying this phenomenon remain unclear. One important approach to understanding the mechanisms is to examine whether the effect has some selectivity in auditory processing. However, it has not yet been determined whether this aftereffect can be transferred across sound frequencies and between ears.
Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers for illusory motion perception. However, the aftereffect was observed only when the adapter and test tones were presented at the same frequency and to the same ear.
These findings suggest that the auditory processing underlying the establishment of novel audiovisual associations is selective, potentially but not necessarily indicating that this processing occurs at an early stage.
People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits, including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary in attentional load. We find no differences in contextual cueing when spatial contexts are letter-like objects or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p < .05]. These findings suggest that perception of or memory for low-spatial-frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school.
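Low-pass filtering of scene images, retaining only the low-spatial-frequency components discussed above, is typically done by masking the image spectrum in the Fourier domain. A minimal sketch follows; the synthetic "scene", the hard circular mask, and the 8-cycle cutoff are illustrative assumptions, not the stimuli used in the study.

```python
import numpy as np

def lowpass_filter(image, cutoff_cycles):
    """Keep spatial frequencies below `cutoff_cycles` (cycles per image)
    using a hard circular mask in the Fourier domain."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # vertical frequency, cycles per image
    fx = np.fft.fftfreq(w) * w          # horizontal frequency, cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff_cycles
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

# Synthetic 'scene': a low-frequency luminance gradient plus
# high-frequency pixel noise (purely illustrative).
rng = np.random.default_rng(1)
img = np.linspace(0, 1, 128)[None, :] * np.ones((128, 1)) + rng.normal(0, 0.3, (128, 128))
filtered = lowpass_filter(img, cutoff_cycles=8)
print(filtered.std() < img.std())  # high-frequency noise is removed
```

In practice a smooth (e.g. Gaussian) roll-off is often preferred over a hard cutoff to avoid ringing artifacts; the hard mask is used here only for brevity.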
Attention plays a fundamental role in visual learning and memory. One well-established principle of visual attention is that the harder a central task is, the more attentional resources are devoted to it and, because attentional capacity is limited, the less attention is allocated to peripheral processing. Here we show that this principle holds true in a dual-task setting but not in a paradigm of task-irrelevant perceptual learning. In Experiment 1, eight participants were asked to identify either bright or dim number targets at the screen center and to remember concurrently presented scene backgrounds. Their recognition performance for scenes paired with dim/hard targets was worse than that for scenes paired with bright/easy targets. In Experiment 2, eight participants were asked to identify either bright or dim letter targets at the screen center while task-irrelevant coherent motion was concurrently presented in the background. After five days of training on letter identification, participants' motion sensitivity improved for the direction paired with hard/dim targets but not for the direction paired with easy/bright targets. Taken together, these results suggest that task-irrelevant stimuli are not subject to the attentional control mechanisms that task-relevant stimuli abide by.
The rise of systems biology and the availability of highly curated gene and molecular information resources have promoted a comprehensive approach that studies disease as the cumulative deleterious function of a collection of individual genes and networks of molecules acting in concert. These "human disease networks" (HDNs) have revealed novel candidate genes and pharmaceutical targets for many diseases and identified fundamental HDN features conserved across diseases. A network-based analysis is particularly vital for the study of polygenic diseases, where many interactions between molecules must be simultaneously examined and elucidated. We employ a new knowledge-driven HDN gene and molecular database systems approach to analyze inflammatory bowel disease (IBD), whose pathogenesis remains largely unknown.
Methods and Results
Based on drug indications for IBD, we determined sibling diseases of the mild and severe states of IBD. Approximately 1,000 genes associated with the sibling diseases were retrieved from four databases. After ranking the genes by the frequency of records in the databases, we obtained 250 and 253 genes highly associated with the mild and severe IBD states, respectively. We then calculated the functional similarities of these genes with known drug targets, and examined and presented their interactions as protein-protein interaction (PPI) networks.
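The ranking step above, scoring each gene by how many of the source databases record it, can be sketched as a simple frequency count. The database names and gene lists below are hypothetical placeholders, not the actual resources or IBD gene sets used in the study.

```python
from collections import Counter

# Hypothetical gene lists retrieved from four databases (placeholder names).
database_hits = {
    "db1": ["TNF", "IL6", "NOD2", "IL10"],
    "db2": ["TNF", "IL6", "ATG16L1"],
    "db3": ["TNF", "NOD2", "IL23R"],
    "db4": ["TNF", "IL6", "NOD2"],
}

# Count how many databases mention each gene, then rank by that frequency
# (ties broken alphabetically for a stable ordering).
counts = Counter(gene for genes in database_hits.values() for gene in genes)
ranked = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
print(ranked)  # a gene recorded in all four databases ranks first
```

A frequency threshold over `ranked` would then yield the subset of genes "highly associated" with each disease state.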
The results demonstrate that this knowledge-based systems approach, predicated on functionally similar genes important to sibling diseases, is an effective method to identify important components of the IBD human disease network. Our approach elucidates a previously unknown biological distinction between the mild and severe IBD states.
Inflammatory bowel disease (IBD); Disease related genes; Protein-protein interaction networks; GO based functional score; Interpretation of pathogenesis
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced visual perception of facial expressions. We simultaneously presented laughter with a happy, neutral, or sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distracter faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a re-examination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may similarly be context dependent.
Crossmodal interaction; emotion; facial expressions; laughter
Southeast Asia has become the center of rapid industrial development and economic growth. However, this growth has far outpaced investment in public infrastructure, leading to the unregulated release of many pollutants, including wastewater-related contaminants such as antibiotics. Antibiotics are of major concern because they can easily be released into the environment from numerous sources, and can subsequently induce development of antibiotic-resistant bacteria. Recent studies have shown that for some categories of drugs this source-to-environment antibiotic resistance relationship is more complex. This review summarizes current understanding regarding the presence of quinolones, sulfonamides, and tetracyclines in aquatic environments of Indochina and the prevalence of bacteria resistant to them. Several noteworthy findings are discussed: (1) quinolone contamination and the occurrence of quinolone resistance are not correlated; (2) occurrence of the sul sulfonamide resistance gene varies geographically; and (3) microbial diversity might be related to the rate of oxytetracycline resistance.
Indochina; environment; quinolone; sulfonamide; tetracycline; resistance gene; bacteria
Frequency-following and frequency-doubling neurons are ubiquitous in both striate and extrastriate visual areas. However, responses from these two types of neural populations have not been effectively compared in humans because previous EEG studies have not successfully dissociated responses from these populations. We devised a light–dark flicker stimulus that unambiguously distinguished these responses as reflected in the first and second harmonics in the steady-state visual evoked potentials. These harmonics revealed the spatial and functional segregation of frequency-following (the first harmonic) and frequency-doubling (the second harmonic) neural populations. Spatially, the first and second harmonics in steady-state visual evoked potentials exhibited divergent posterior scalp topographies for a broad range of EEG frequencies. The scalp maximum was medial for the first harmonic and contralateral for the second harmonic, a divergence not attributable to absolute response frequency. Functionally, voluntary visual–spatial attention strongly modulated the second harmonic but had negligible effects on the simultaneously elicited first harmonic. These dissociations suggest an intriguing possibility that frequency-following and frequency-doubling neural populations may contribute complementary functions to resolve the conflicting demands of attentional enhancement and signal fidelity—the frequency-doubling population may mediate substantial top–down signal modulation for attentional selection, whereas the frequency-following population may simultaneously preserve relatively undistorted sensory qualities regardless of the observer’s cognitive state.
Similar to other systems, the endocrine system is affected by aging. Thyroid hormone, the action of which is affected by many factors, has been shown to be associated with longevity. The most useful marker for the assessment of thyroid hormone action is TSH level. Although age and gender are believed to modify the pituitary set point or response to free thyroid hormone concentration, the precise age- and gender-dependent responses to thyroid hormone have yet to be reported.
We analyzed the results of 3,564 thyroid function tests obtained from patients who received medication at the outpatient and inpatient clinics of Shinshu University Hospital. Subjects were limited to those with thyroid function test results in the normal or mildly abnormal range. Based on the log-linear relationship between the concentrations of free thyroid hormones (FHs) and TSH, we established a putative resistance index to assess the relation between serum FH and TSH levels.
Free thyroid hormone and TSH concentrations showed an inverse log-linear relation. In males, there was a negative relationship between the free T3 resistance index and age. In females, although there were no relationships between age and FHs, the indices were positively related to age.
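The inverse log-linear relation above (log TSH falling linearly as free hormone rises) can be characterized by an ordinary least-squares line. The sketch below uses simulated values; the slope, intercept, noise level, and concentration range are assumptions for illustration only, not the study's clinical data or its resistance-index definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated free T3 concentrations (pg/mL) and TSH values following an
# assumed inverse log-linear relation: log10(TSH) = 1.5 - 0.4 * FT3 + noise.
ft3 = rng.uniform(2.0, 4.5, 200)
log_tsh = 1.5 - 0.4 * ft3 + rng.normal(0, 0.1, 200)

# Ordinary least-squares fit of log10(TSH) against FT3; a negative slope
# expresses the inverse relation, and per-group fits (by age and sex)
# would expose the age- and gender-dependent differences.
slope, intercept = np.polyfit(ft3, log_tsh, 1)
print(f"log10(TSH) ≈ {intercept:.2f} + ({slope:.2f}) * FT3")
```

Fitting such a line separately within age and sex strata is one straightforward way to quantify how the pituitary response to free hormone shifts with age.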
These findings indicated that there is a gender-specific response to thyroid hormone with aging. Although the TSH level is a useful marker for the assessment of peripheral thyroid hormone action, the values should be interpreted carefully, especially with regard to age- and gender-related differences.
thyroid hormone; TSH; aging
Visual spatial attention can be exogenously captured by a salient stimulus or can be endogenously allocated by voluntary effort. Whether these two attention modes serve distinctive functions is debated, but for processing of single targets the literature suggests superiority of exogenous attention (it is faster acting and serves more functions). We report that endogenous attention uniquely contributes to processing of multiple targets. For speeded visual discrimination, response times are faster for multiple redundant targets than for single targets due to probability summation and/or signal integration. This redundancy gain was unaffected when attention was exogenously diverted from the targets, but was completely eliminated when attention was endogenously diverted. This was not due to weaker manipulation of exogenous attention because our exogenous and endogenous cues similarly affected overall response times. Thus, whereas exogenous attention is superior for processing single targets, endogenous attention plays a unique role in allocating resources crucial for rapid concurrent processing of multiple targets.
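The probability-summation account of the redundancy gain mentioned above follows independent-race logic: with two targets, a response is produced as soon as either channel finishes, so detection probability exceeds that of either channel alone. The channel probabilities below are illustrative assumptions.

```python
# Independent-race (probability summation) account of the redundancy gain:
# with two independent channels, detection occurs if either channel
# succeeds, so P(detect) = 1 - (1 - p1) * (1 - p2).
def race_probability(p1, p2):
    return 1 - (1 - p1) * (1 - p2)

p_single = 0.6                          # illustrative single-target probability
p_redundant = race_probability(0.6, 0.6)
print(p_redundant)  # 0.84 — higher than either channel alone
```

The elimination of the redundancy gain under endogenous diversion of attention then amounts to the redundant-target condition no longer exceeding this single-channel baseline.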
When you are looking for an object, does hearing its characteristic sound make you find it more quickly? Our recent results supported this possibility by demonstrating that when a cat target, for example, was presented among other objects, a simultaneously presented “meow” sound (containing no spatial information) reduced the manual response time for visual localization of the target. To extend these results, we determined how rapidly an object-specific auditory signal can facilitate target detection in visual search. On each trial, participants fixated a specified target object as quickly as possible. The target’s characteristic sound speeded the saccadic search time within 215–220 ms and also guided the initial saccade toward the target, compared to presentation of a distractor’s sound or to no sound. These results suggest that object-based auditory-visual interactions rapidly increase the target object’s salience in visual search.
Although local interactions involving orientation and spatial frequency are well understood, less is known about spatial interactions involving higher-level pattern features. We examined interactive coding of aspect ratio, a prevalent two-dimensional feature. We measured perception of two simultaneously flashed ellipses by randomly post-cueing one of them and having observers indicate its aspect ratio. Aspect ratios interacted in two ways. One manifested as an aspect-ratio repulsion effect. For example, when a slightly tall ellipse and a taller ellipse were simultaneously flashed, the less tall ellipse appeared flatter and the taller ellipse appeared even taller. This repulsive interaction was long range, occurring even when the ellipses were presented in different visual hemifields. The other interaction manifested as a global assimilation effect. An ellipse appeared taller when it was part of a global vertical organization than when it was part of a global horizontal organization. The repulsion and assimilation effects dissociated over time: the former slightly strengthened, whereas the latter disappeared, when the ellipse-to-mask stimulus onset asynchrony was increased from 40 to 140 ms. These results are consistent with the idea that shape perception emerges from rapid lateral and hierarchical neural interactions.
aspect ratio; repulsion; assimilation; lateral interaction; hierarchical interaction; shape perception
We previously reported that the administration of bevacizumab for pancreatic neuroendocrine tumors inhibited angiogenesis in the host, resulting in inhibition of tumor growth. In light of these results, we compared the effect of bevacizumab/gemcitabine/S-1 combination therapy with that of bevacizumab monotherapy. The QGP-1 pancreatic neuroendocrine carcinoma cell line and the BxPC-3 ductal cell carcinoma cell line were transplanted into the subcutaneous tissue of mice, and the mice were treated for 3 weeks with bevacizumab [50 mg/kg intraperitoneally (i.p.) twice weekly], gemcitabine (240 mg/kg i.p. once weekly) and S-1 (10 mg/kg orally five times weekly). The antitumor effect and side effects were evaluated by measuring tumor volume and weight and by changes in body weight, respectively. The tumor volume decreased from its maximum in the group treated with bevacizumab, gemcitabine and S-1 (BGS) and in the group treated with bevacizumab and gemcitabine (BG). A significant difference in tumor weight was noted between the BG group and the group treated with bevacizumab alone. A relatively large decrease in body weight was observed in the BGS and BG groups. We conclude that gemcitabine is an appropriate drug for use in combination with bevacizumab for pancreatic neuroendocrine tumors.
neuroendocrine carcinoma; pancreas; vascular endothelial growth factor; antibody; gemcitabine; S-1
Cross-cultural studies have played a pivotal role in elucidating the extent to which behavioral and mental characteristics depend on specific environmental influences. Surprisingly, little field research has been carried out on a fundamentally important perceptual ability, namely the perception of biological motion. In this report, we present details of studies carried out with the help of volunteers from the Mundurucu, an indigenous group native to Amazonian territories in Brazil. We employed standard biological motion perception tasks inspired by over 30 years of laboratory research, in which observers attempt to decipher the walking direction of point-light (PL) humans and animals. Do our effortless skills at perceiving biological activity from PL animations, as revealed in laboratory settings, generalize to people who have never before seen representational depictions of human and animal activity? The results of our studies provide a clear answer to this important, previously unanswered question. Mundurucu observers readily perceived the coherent, global shape depicted in PL walkers, and experienced the classic inversion effects that are typically found when such stimuli are turned upside down. In addition, their performance accorded with important recent findings in the literature in the ease with which they extracted direction information from local motion invariants alone. We conclude that the effortless, veridical perception of PL biological motion is a spontaneous and universal perceptual ability, occurring both inside and outside traditional laboratory environments.
A crucial ability for an organism is to orient toward important objects and to ignore temporarily irrelevant objects. Attention provides the perceptual selectivity necessary to filter an overwhelming input of sensory information to allow for efficient object detection. Although much research has examined visual search and the 'template' of attentional set that allows for target detection, the behavior of individual subjects often reveals the limits of experimental control of attention. Few studies have examined important aspects such as individual differences and metacognitive strategies. The present study analyzes the data from two visual search experiments for a conjunctively defined target (Proulx, 2007). The data revealed attentional capture blindness, individual differences in search strategies, and a significant rate of metacognitive errors in the assessment of the strategies employed. These results highlight a challenge for visual attention studies: to account for individual differences in search behavior and distractibility, and for participants who do not (or cannot) follow instructions.
The present study investigated the minimum amount of auditory stimulation that allows differentiation of spoken voices, instrumental music, and environmental sounds. Three new findings are reported. 1) All stimuli were categorized above chance level with 50-ms segments. 2) When peak-level normalization was applied, music and voices began to be accurately categorized with 20-ms segments; when the root-mean-square (RMS) energy of the stimuli was equalized, voice stimuli were better recognized than music and environmental sounds. 3) Further psychoacoustical analyses suggest that the categorization of extremely brief auditory stimuli depends on the variability of their spectral envelopes within the stimulus set used. These last two findings challenge the interpretation of the voice-superiority effect reported in previously published studies and suggest a more parsimonious interpretation in terms of an emerging property of auditory categorization processes.
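The two level-equalization schemes contrasted above, peak-level normalization and RMS-energy equalization, can be sketched as simple waveform rescalings. The target levels, sampling rate, and the synthetic test segment below are arbitrary assumptions for illustration.

```python
import numpy as np

def peak_normalize(x, peak=1.0):
    """Scale a waveform so its maximum absolute sample equals `peak`."""
    return x * (peak / np.max(np.abs(x)))

def rms_equalize(x, target_rms=0.1):
    """Scale a waveform so its root-mean-square energy equals `target_rms`."""
    return x * (target_rms / np.sqrt(np.mean(x ** 2)))

# Illustrative 50 ms segment at 44.1 kHz: a decaying 440 Hz tone.
sr = 44100
t = np.arange(int(0.05 * sr)) / sr
segment = np.exp(-t * 30) * np.sin(2 * np.pi * 440 * t)

peak_seg = peak_normalize(segment)
rms_seg = rms_equalize(segment)
print(np.max(np.abs(peak_seg)), np.sqrt(np.mean(rms_seg ** 2)))
```

The two schemes can yield quite different loudness relations across stimulus categories, which is why the categorization results above depend on which equalization was applied.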