The Korean written language is composed of ideograms (Hanja) and phonograms (Hangul), just as Japanese consists of Kanji (ideograms) and Kana (phonograms). Dissociation between ideogram and phonogram impairments after brain injury has been reported in Japanese, but rarely in Korean. We report a 64-yr-old right-handed man who showed alexia with agraphia in Hanja but preserved Hangul reading and writing after a left posterior inferior temporal lobe infarction. Interestingly, the patient was an expert in Hanja; he had been a Hanja calligrapher for over 40 yr. However, when presented with 65 basic Chinese characters that are taught in elementary school, his responses were slow in both reading (6.3 sec/letter) and writing (8.8 sec/letter). The rate of correct responses was 81.5% (53 out of 65 letters) in both reading and writing. The patient's performance fell beyond the mean − 2 SD of that of six age-, sex-, and education-matched controls, who correctly read 64.7 out of 65 and wrote 62.5 out of 65 letters with much shorter reaction times (1.3 sec/letter for reading and 4.0 sec/letter for writing). These findings support the notion that ideograms and phonograms can be mediated by different brain regions and that Hanja alexia with agraphia in Korean patients can be associated with a left posterior inferior temporal lesion.
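The mean − 2 SD criterion used above can be sketched in a few lines of Python. This is an illustrative sketch only: the abstract reports the controls' group mean (64.7/65 for reading), so the six individual control scores below are hypothetical values chosen to match that mean.

```python
# Illustrative sketch of the mean - 2 SD impairment criterion.
# control_reading holds HYPOTHETICAL per-control scores (only the group
# mean of 64.7/65 is reported in the abstract).
from statistics import mean, stdev

control_reading = [65, 65, 65, 64, 65, 64]   # hypothetical individual scores
cutoff = mean(control_reading) - 2 * stdev(control_reading)

patient_reading = 53                          # reported: 53/65 correct
impaired = patient_reading < cutoff           # falls beyond mean - 2 SD
```

With these example scores the cutoff is roughly 63.6 letters, so the patient's score of 53 falls well beyond it, consistent with the abstract's conclusion.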
The Japanese language represents numbers in kana digit words (a syllabic notation), kanji numbers and Arabic numbers (logographic notations). Kanji and Arabic numbers have previously shown similar patterns of numerical processing, and because of their shared logographic properties may exhibit similar brain areas of numerical representation. Kana digit words require a larger phonetic component, and therefore may show different areas of numerical representation as compared to kanji or Arabic numbers. The present study investigated behavioral reaction times and brain activation with fMRI during the numerical processing of kana digit words, kanji numbers and Arabic numbers. No differences in behavioral reaction time were found between kanji and Arabic numbers. In contrast, kana digit words produced a longer reaction time as compared to the other two notations. The imaging data showed that kana activated the posterior cingulate cortex when compared to kanji and Arabic numbers. It is suggested that this posterior cingulate activation reflects an additional attentional demand in this script which may be related to the infrequent use of kana digit words, or may reflect an extra step of phonological mediation in converting the visual word form to the verbal word form. Overall, the data suggest that number reading is processed differently in these three notations.
number processing; Japanese kana and kanji; symbolic notation; fMRI
The case of a right-handed young Japanese woman with alexia with agraphia of kanji (the Japanese morphograms) due to a small circumscribed haematoma in the left posterior inferior temporal gyrus is described. Her chief complaint was the inability to read and write kanji. Detailed examination showed that her alexia with agraphia was much more predominant for kanji than kana (the Japanese syllabograms). These facts suggest that the processing of kanji and kana involves different intrahemispheric mechanisms.
Behavioral and neuroimaging studies have provided evidence that reading is strongly left lateralized, and the degree of this pattern of functional lateralization can be indicative of reading competence. However, it remains unclear whether functional lateralization differs between the first (L1) and second (L2) languages in bilingual L2 readers. This question is particularly important when the particular script, or orthography, learned by the L2 readers is markedly different from their L1 script. In this study, we quantified functional lateralization in brain regions involved in visual word recognition for participants' L1 and L2 scripts, with a particular focus on the effects of L1–L2 script differences in the visual complexity and orthographic depth of the script. Two different groups of late L2 learners participated in an fMRI experiment using a visual one-back matching task: L1 readers of Japanese who learnt to read alphabetic English and L1 readers of English who learnt to read both Japanese syllabic Kana and logographic Kanji. The results showed weaker leftward lateralization in the posterior lateral occipital complex (pLOC) for logographic Kanji compared with syllabic and alphabetic scripts in both L1 and L2 readers of Kanji. When both L1 and L2 scripts were non-logographic, where symbols are mapped onto sounds, functional lateralization did not significantly differ between L1 and L2 scripts in any region, in any group. Our findings indicate that weaker leftward lateralization for logographic reading reflects greater requirement of the right hemisphere for processing visually complex logographic Kanji symbols, irrespective of whether Kanji is the readers' L1 or L2, rather than characterizing additional cognitive efforts of L2 readers. Finally, brain-behavior analysis revealed that functional lateralization for L2 visual word processing predicted L2 reading competency.
visual complexity; orthographic depth; second language reading; logographic; functional lateralization
A case is described of a 56 year old Japanese male with pure agraphia of kanji (the Japanese morphograms) due to haemorrhagic infarction of the left temporal lobe caused by the rare condition of cortical vein thrombosis of Labbé. Writing kanji was severely impaired without disturbed consciousness, aphasia or apraxia. On the other hand, writing kana (the Japanese syllabograms), and reading kanji and kana were almost intact. This suggests that the process of writing kanji involves a different pathway from that of reading kanji in the left temporal lobe. Pure agraphia of kanji is considered to be similar to lexical agraphia in Indo-European languages, in that the writing system with a poor or irregular phoneme-grapheme transformation is impaired by the left temporal lesion. This case indicates the necessity for considering thrombosis of the Labbé vein when a subcortical haematoma is detected in a temporal lobe on computed tomography of the brain.
Slowly progressive cognitive decline is the most frequent initial manifestation of MM2-cortical-type sporadic Creutzfeldt-Jakob disease. Agraphia has never been noted in patients with this type of sporadic Creutzfeldt-Jakob disease; however, we report the case of a Japanese patient with sporadic Creutzfeldt-Jakob disease in whom agraphia of Kanji was an initial cardinal symptom.
A 59-year-old right-handed Japanese woman complained of agraphia of Kanji (Chinese characters) as an initial symptom. A neurological examination revealed mild word-finding difficulty, constructive disturbance, hyperreflexia in her jaw and lower limbs, and bilateral extensor plantar reflexes. An examination of her cerebrospinal fluid revealed increased levels of 14-3-3 and total tau proteins, and an abnormal conformation of the proteinase K-resistant prion protein. Diffusion-weighted magnetic resonance imaging showed diffuse hyperintensity in the bilateral cerebral cortices. Single-photon emission computed tomography scans revealed hypoperfusion in the left temporal lobe and the bilateral parietal and occipital lobes. An analysis of the prion protein gene demonstrated no mutation, with homozygosity for methionine at codon 129. We diagnosed our patient with sporadic Creutzfeldt-Jakob disease. Although a histological examination was not performed, it was assumed that our patient could be the MM2-cortical type according to the clinical findings and the elevated levels of 14-3-3 protein in her cerebrospinal fluid. The left posterior inferior temporal area, which was affected in our patient as a hypoperfusion area, is associated with selecting and recalling Kanji characters.
Focal signs as early symptoms and hypoperfusion areas in sporadic Creutzfeldt-Jakob disease are critical for recognizing the initial brain lesions damaged by accumulation of the proteinase K-resistant prion protein.
Agraphia; Creutzfeldt-Jakob disease; Kana (Japanese syllabary); Kanji (Chinese characters); Magnetic resonance imaging
The effect of the spatial location of faces in the visual field during brief, free-viewing encoding on subsequent face recognition is not known. This study addressed this question by tagging three groups of faces with cheating, cooperating or neutral behaviours and presenting them for encoding in two visual hemifields (upper vs. lower or left vs. right). Participants then had to indicate if a centrally presented face had been seen before or not. Head and eye movements were free in all phases. Findings showed that the overall recognition of cooperators was significantly better than cheaters, and it was better for faces encoded in the upper hemifield than in the lower hemifield, both in terms of a higher d′ and faster reaction time (RT). The d′ for any given behaviour in the left and right hemifields was similar. The RT in the left hemifield did not vary with tagged behaviour, whereas the RT in the right hemifield was longer for cheaters than for cooperators. The results showed that memory biases in contextual face recognition were modulated by the spatial location of briefly encoded faces and are discussed in terms of scanning reading habits, top-left bias in lighting preference and peripersonal space.
face recognition; visual anisotropy; memory biases; cheater detection; cooperation
The decoding of visually presented line segments into letters, and letters into words, is critical to fluent reading abilities. Here we investigate the temporal dynamics of visual orthographic processes, focusing specifically on right hemisphere contributions and interactions between the hemispheres involved in the implicit processing of visually presented words, consonants, false fonts, and symbolic strings. High-density EEG was recorded while participants detected infrequent, simple, perceptual targets (dot strings) embedded amongst a stream of character strings. Beginning at 130 ms, orthographic and non-orthographic stimuli were distinguished by a sequence of ERP effects over occipital recording sites. These early latency occipital effects were dominated by enhanced right-sided negative-polarity activation for non-orthographic stimuli that peaked at around 180 ms. This right-sided effect was followed by bilateral positive occipital activity for false fonts, but not symbol strings. Moreover, the size of components of this later positive occipital wave was inversely correlated with the right-sided ROcc180 wave, suggesting that subjects who had larger early right-sided activation for non-orthographic stimuli had less need for more extended bilateral (e.g., interhemispheric) processing of those stimuli shortly afterwards. Additional early (130–150 ms) negative-polarity activity over left occipital cortex and longer-latency centrally distributed responses (>300 ms) were present, likely reflecting implicit activation of the previously reported ‘visual-word-form’ area and N400-related responses, respectively. Collectively, these results provide a close look at some relatively unexplored portions of the temporal flow of information processing in the brain related to the implicit processing of potentially linguistic information and provide valuable information about the interactions between hemispheres supporting visual orthographic processing.
word reading; ERPs; visual cortex; visual orthography
A clinicopathological study is presented of a case of Marchiafava-Bignami disease with a hemispheric disconnection syndrome, an association that does not appear to have been reported previously. Gross and microscopic examination of the brain revealed necrosis of the corpus callosum (sparing a small area in front of the splenium) and of the anterior commissure, cortical and subcortical infarction of the right lingual gyrus, diffuse cortical lesions of the laminar sclerosis type, and lacunae in the basal ganglia and the pons. The patient was unable to grasp objects presented to the right visual half-field with the left hand, or to respond to contralateral somaesthetic stimuli with either of the upper limbs. This motor inhibition, with the associated extended posture, is described as a "crossed avoiding reaction", and attributed to the inability of one hemisphere to respond to visual or somaesthetic stimuli projected to the other hemisphere. Clinicopathological correlations and visuomotor coordination mechanisms are discussed in the light of previous clinical and experimental studies. Anomia to pictures projected tachistoscopically to the left visual field, disturbances in the transfer of somaesthetic information, left sided ideomotor apraxia with agraphia, right sided dyscopia, and ideational apraxia especially marked in the right visual field were observed.
Visual-span profiles are plots of letter-recognition accuracy as a function of letter position left or right of the midline. Previously, we have shown that contraction of these profiles in peripheral vision can account for slow reading speed in peripheral vision. In this study, we asked two questions: (1) can we modify visual-span profiles through training on letter-recognition, and if so, (2) are these changes accompanied by changes in reading speed? Eighteen normally sighted observers were randomly assigned to one of three groups: training at 10° in the upper visual field, training at 10° in the lower visual field and a no-training control group. We compared observers’ characteristics of reading (maximum reading speed and critical print size) and visual-span profiles (peak amplitude and bits of information transmitted) before and after training, and at trained and untrained retinal locations (10° upper and lower visual fields). Reading speeds were measured for six print sizes at each retinal location, using the rapid serial visual presentation paradigm. Visual-span profiles were measured using a trigram letter-recognition task, for a letter size equivalent to 1.4 × the critical print size for reading. Training consisted of the repeated measurement of 20 visual-span profiles (over four consecutive days) in either the upper or lower visual field. We also tracked the changes in performance in a sub-group of observers for up to three months following training. We found that the visual-span profiles can be expanded (bits of information transmitted increased by 6 bits) through training with a letter-recognition task, and that there is an accompanying increase (41%) in the maximum reading speed. These improvements transferred, to a large extent, from the trained to an untrained retinal location, and were retained, to a large extent, for at least three months following training. 
Our results are consistent with the view that the visual span is a bottleneck on reading speed, but a bottleneck that can be increased with practice.
Reading; Letter-recognition; Peripheral vision; Perceptual learning; Low vision; Visual rehabilitation
Prior research has shown that the two writing systems of the Japanese orthography are processed differently: kana (syllabic symbols) are processed like other phonetic languages such as English, while kanji (a logographic writing system) are processed like other logographic languages like Chinese. Previous work done with the Stroop task in Japanese has shown that these differences in processing strategies create differences in Stroop effects. This study investigated the Stroop effect in kanji and kana using functional magnetic resonance imaging (fMRI) to examine the similarities and differences in brain processing between logographic and phonetic languages. Nine native Japanese speakers performed the Stroop task both in kana and kanji scripts during fMRI. Both scripts individually produced significant Stroop effects as measured by the behavioral reaction time data. The imaging data for both scripts showed brain activation in the anterior cingulate gyrus, an area involved in inhibiting automatic processing. Though behavioral data showed no significant differences between the Stroop effects in kana and kanji, there were differential areas of activation in fMRI found for each writing system. In fMRI, the Stroop task activated an area in the left inferior parietal lobule during the kana task and the left inferior frontal gyrus during the kanji task. The results of the present study suggest that the Stroop task in Japanese kana and kanji elicits differential activation in brain regions involved in conflict detection and resolution for syllabic and logographic writing systems.
Stroop task; Japanese kana and kanji; fMRI
Patients with blindsight are not consciously aware of visual stimuli in the affected field of vision but retain nonconscious perception. This disability can be resolved if nonconsciously perceived information can be brought to their conscious awareness, which can be accomplished by manipulating the neural network of visual awareness. To understand this network, we studied the pattern of cortical activity elicited during the processing of visual stimuli with or without conscious awareness. The analysis indicated that a re-entrant signaling loop between area V3A (located in the extrastriate cortex) and the frontal cortex is critical for processing conscious awareness. The loop is activated by visual signals relayed in the primary visual cortex, which is damaged in blindsight patients. Because of the damage, the V3A-frontal loop is not activated and the signals are not processed for conscious awareness. These patients, however, continue to receive visual signals through the lateral geniculate nucleus. Since these signals do not activate the V3A-frontal loop, the stimuli are not consciously perceived. If visual input from the lateral geniculate nucleus is appropriately manipulated and made to activate the V3A-frontal loop, blindsight patients can regain conscious vision.
Thyroid-associated orbitopathy is commonly associated with Graves' disease, with lid retraction, exophthalmos, and periorbital swelling, but rarely with autoimmune thyroiditis or a euthyroid state. We reviewed 3 cases from our hospital in whom antibodies to the TSH receptor were normal.
Case 1: A 60-year-old non-diabetic woman with bilateral glaucoma under treatment, recurrent otitis media, and euthyroidism presented with acute onset of painless diplopia and lid ptosis in the left eye. Orbital MRI showed enlargement of the right third cranial nerve, and high levels of thyroid autoantibodies (Tab) were found: anti-thyroglobulin (ATG) 115.1, anti-thyroid peroxidase (ATPO) 1751 U/mL. She started oral deflazacort 30 mg every 3 days. Sixty days later, complete remission of the eye symptoms correlated with lower autoantibody levels (ATG 19, ATPO 117). Case 2: A 10-year-old girl. At age 8, she had diplopia, lid ptosis, and limitation of upward gaze in the left eye. The neurological work-up ruled out ocular myasthenia; with thyroid goiter and hypothyroidism, she started oral levothyroxine. At age 10, with a normal MRI, botulinum toxin was injected, without change. High levels of Tab were found: ATG 2723, ATPO 10.7. She started oral deflazacort 30 mg every 3 days and azathioprine 100 mg daily. Currently, her Tab levels are almost normal, but her ocular alterations persist. Case 3: A 56-year-old woman with Graves' disease and exophthalmos in 1990, treated with I131 and immunosuppression with good outcome; obesity, hypertension, and bilateral glaucoma under treatment. She suddenly presented diplopia and fourth cranial nerve paresis of the right eye. A year later, Tab were found slightly elevated (ATG 100, ATPO 227); despite treatment with prednisone 50 mg every 3 days and azathioprine 150 mg daily, a surgical procedure was required to relieve the ocular symptoms.
We found only 3 cases previously reported with this type of thyroid eye disease. It is important to raise awareness of this atypical form of orbitopathy: early recognition facilitates successful treatment (Case 1), whereas delayed diagnosis leads to persistent disease (Cases 2 and 3).
It remains controversial and hotly debated whether foveal information is double-projected to both hemispheres or split at the midline between the two hemispheres. We investigated this issue in a unique patient with lesions in the splenium of the corpus callosum and the left medial occipitotemporal region, through a series of neuropsychological tests and multimodal MRI scans. Behavioral experiments showed that (1) the patient had difficulties in reading simple and compound Chinese characters when they were presented in the foveal field but to the left of fixation, (2) he failed to recognize the left component of compound characters when the compound characters were presented in the central foveal field, (3) his judgments of the gender of centrally presented chimeric faces were exclusively based on the left half-face and he was unaware that the faces were chimeric. Functional MRI data showed that Chinese characters, only when presented in the right foveal field but not in the left foveal field, activated a region in the left occipitotemporal sulcus in the mid-fusiform, which is recognized as the visual word form area. Together with existing evidence in the literature, the results of the current study suggest that the representation of foveal stimuli is functionally split at object processing levels.
Observations of the visual form agnosic patient DF have been highly influential in establishing the hypothesis that separate processing streams deal with vision for perception (ventral stream) and vision for action (dorsal stream). In this context, DF's preserved ability to perform visually-guided actions has been contrasted with the selective impairment of visuomotor performance in optic ataxia patients suffering from damage to dorsal stream areas. However, the recent finding that DF shows a thinning of the grey matter in the dorsal stream regions of both hemispheres in combination with the observation that her right-handed movements are impaired when they are performed in visual periphery has opened up the possibility that patient DF may potentially also be suffering from optic ataxia. If lesions to the posterior parietal cortex (dorsal stream) are bilateral, pointing and reaching deficits should be observed in both visual hemifields and for both hands when targets are viewed in visual periphery. Here, we tested DF's visuomotor performance when pointing with her left and her right hand toward targets presented in the left and the right visual field at three different visual eccentricities. Our results indicate that DF shows large and consistent impairments in all conditions. These findings imply that DF's dorsal stream atrophies are functionally relevant and hence challenge the idea that patient DF's seemingly normal visuomotor behaviour can be attributed to her intact dorsal stream. Instead, DF seems to be a patient who suffers from combined ventral and dorsal stream damage meaning that a new account is needed to explain why she shows such remarkably normal visuomotor behaviour in a number of tasks and conditions.
The efficacy of executive functions is critically modulated by information processing in earlier cognitive stages. For example, initial processing of verbal stimuli in the language-dominant left-hemisphere leads to more efficient response inhibition than initial processing of verbal stimuli in the non-dominant right hemisphere. However, it is unclear whether this organizational principle is specific for the language system, or a general principle that also applies to other types of lateralized cognition. To answer this question, we investigated the neurophysiological correlates of early attentional processes, facial expression perception and response inhibition during tachistoscopic presentation of facial “Go” and “Nogo” stimuli in the left and the right visual field (RVF). Participants committed fewer false alarms after Nogo-stimulus presentation in the left compared to the RVF. This right-hemispheric asymmetry on the behavioral level was also reflected in the neurophysiological correlates of face perception, specifically in a right-sided asymmetry in the N170 amplitude. Moreover, the right-hemispheric dominance for facial expression processing also affected event-related potentials typically related to response inhibition, namely the Nogo-N2 and Nogo-P3. These findings show that an effect of hemispheric asymmetries in early information processing on the efficacy of higher cognitive functions is not limited to left-hemispheric language functions, but can be generalized to predominantly right-hemispheric functions.
executive functions; Go/Nogo task; EEG; ERP; laterality; lateralization; Nogo-N2; Nogo-P3
A dissociation between visual awareness and visual discrimination is referred to as “blindsight”. Blindsight results from loss of function of the primary visual cortex (V1), which can occur due to cerebrovascular accidents (i.e. stroke-related lesions). There are also numerous reports of similar, though reversible, effects on vision induced by transcranial magnetic stimulation (TMS) to early visual cortex. These effects point to V1 as the “gate” of visual awareness and have strong implications for understanding the neurological underpinnings of consciousness. It has been argued that evidence for the dissociation between awareness of, and responses to, visual stimuli can be a measurement artifact of the use of a high response criterion under yes-no measures of visual awareness when compared with criterion-free forced-choice responses. This difference between yes-no and forced-choice measures suggests that evidence for a dissociation may actually be normal near-threshold conscious vision. Here we describe three experiments that tested visual performance in normal subjects when their visual awareness was suppressed by applying TMS to the occipital pole. The nature of subjects’ performance whilst undergoing occipital TMS was then verified by use of a psychophysical measure (d′) that is independent of response criteria. This showed that there was no genuine dissociation in visual sensitivity measured by yes-no and forced-choice responses. These results highlight that evidence for visual sensitivity in the absence of awareness must be analysed using a bias-free psychophysical measure, such as d′, in order to confirm whether or not visual performance is truly unconscious.
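The bias-free measure referred to above can be made concrete with a short sketch. This is not the authors' analysis code, only the standard signal-detection definition d′ = z(hit rate) − z(false-alarm rate), which is unaffected by where an observer places their yes-no criterion; the example rates are made up for illustration.

```python
# Minimal sketch of the criterion-independent sensitivity measure d'.
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf   # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Two hypothetical observers with different response criteria but
# nearly identical sensitivity (both d' close to 2):
liberal = d_prime(0.84, 0.16)        # says "yes" often
conservative = d_prime(0.69, 0.07)   # says "yes" rarely
```

The point of the example is that raw "yes" rates differ markedly between the two observers, yet d′ is nearly the same, which is why a high yes-no criterion alone can masquerade as unawareness.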
A number of recent studies have demonstrated superior visual processing when the information is distributed across the left and right visual fields than if the information is presented in a single hemifield (the bilateral field advantage). This effect is thought to reflect independent attentional resources in the two hemifields and the capacity of the neural responses to the left and right hemifields to process visual information in parallel. Here, we examined whether a bilateral field advantage can also be observed in a high-level visual task that requires the information from both hemifields to be combined. To this end, we used a visual enumeration task—a task that requires the assimilation of separate visual items into a single quantity—where the to-be-enumerated items were either presented in one hemifield or distributed between the two visual fields. We found that enumerating large numbers (>4 items), but not small numbers (<4 items), exhibited the bilateral field advantage: enumeration was more accurate when the visual items were split between the left and right hemifields than when they were all presented within the same hemifield. Control experiments further showed that this effect could not be attributed to a horizontal alignment advantage of the items in the visual field, or to a retinal stimulation difference between the unilateral and bilateral displays. These results suggest that a bilateral field advantage can arise when the visual task involves inter-hemispheric integration. This is in line with previous research and theory indicating that, when the visual task is attentionally demanding, parallel processing by the neural responses to the left and right hemifields can expand the capacity of visual information processing.
Consistent with longstanding findings from behavioral studies, neuroimaging investigations have identified a region of the inferior temporal cortex that, in adults, shows greater face-selectivity in the right than left hemisphere and, conversely, a region that shows greater word-selectivity in the left than right hemisphere. What has not been determined is how this pattern of mature hemispheric specialization emerges over the course of development. The current study examines the hemispheric superiority for faces and words in children, young adolescents and adults in a discrimination task in which stimuli are presented briefly in either hemifield. Whereas adults showed the expected left and right visual field superiority for face and word discrimination, respectively, the young adolescents demonstrated only the right field superiority for words and no field superiority for faces. Although the children's overall accuracy was lower than that of the older groups, like the young adolescents, they exhibited a right visual field superiority for words but no field superiority for faces. Interestingly, the emergence of face lateralization was correlated with reading competence, measured on an independent standardized test, after regressing out age, quantitative reasoning scores, and face discrimination accuracy. Taken together, these findings suggest that the hemispheric organization of face and word recognition do not develop independently, and that word lateralization, which emerges earlier, may drive later face lateralization. A theoretical account in which competition for visual representations unfolds over the course of development is proposed to account for the findings.
Hemispheric specialization; lateralization; face processing; word processing
The visual span for reading is the number of letters, arranged horizontally as in text, that can be recognized reliably without moving the eyes. The visual-span hypothesis states that the size of the visual span is an important factor that limits reading speed. From this hypothesis, we predict that changes in reading speed as a function of character size or contrast are determined by corresponding changes in the size of the visual span. We tested this prediction in two experiments in which we measured the size of the visual span and reading speed on groups of five subjects as a function of either character size or character contrast. We used a “trigram method” for characterizing the visual span as a profile of letter-recognition accuracy as a function of distance left and right of the midline (G. E. Legge, J. S. Mansfield, & S. T. L. Chung, 2001). The area under this profile was taken as an operational measure of the size of the visual span. Reading speed was measured with the Rapid Serial Visual Presentation (RSVP) method. We found that the size of the visual span and reading speed showed the same qualitative dependence on character size and contrast, reached maximum values at the same critical points, and exhibited high correlations at the level of individual subjects. Additional analysis of data from four studies provides evidence for an invariant relationship between the size of the visual span and RSVP reading speed; an increase in the visual span by one letter is associated with a 39% increase in reading speed. Our results confirm the visual-span hypothesis and provide a theoretical framework for understanding the impact of stimulus attributes, such as contrast and character size, on reading speed. Evidence for the visual span as a determinant of reading speed implies the existence of a bottom–up, sensory limitation on reading, distinct from attentional, motor, or linguistic influences.
vision; contrast; character size; visual span; low vision; reading; reading speed
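The invariant relationship reported in the abstract above lends itself to a one-line model. This is a hedged sketch of that claim, not the authors' fitting procedure: each one-letter increase in visual span is associated with a 39% increase in RSVP reading speed, modeled here multiplicatively; the base speed is a made-up example value.

```python
# Sketch of the reported invariant: +1 letter of visual span ~ +39% RSVP
# reading speed, treated as a multiplicative relationship. The 100 wpm
# base speed is a hypothetical example, not a value from the study.
def predicted_speed(base_speed_wpm: float, extra_span_letters: float) -> float:
    return base_speed_wpm * 1.39 ** extra_span_letters

speed_after_gain = predicted_speed(100.0, 1)   # one extra letter of span
```

Under this reading, a reader gaining one letter of visual span would go from 100 to about 139 words per minute, and gains compound across multiple letters.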
Recent functional magnetic resonance imaging research has demonstrated that letters and numbers are preferentially processed in distinct regions and hemispheres in the visual cortex. In particular, the left visual cortex preferentially processes letters compared to numbers, while the right visual cortex preferentially processes numbers compared to letters. Because letters and numbers are cultural inventions and are otherwise physically arbitrary, such a double dissociation is strong evidence for experiential effects on neural architecture. Here, we use the high temporal resolution of event-related potentials (ERPs) to investigate the temporal dynamics of the neural dissociation between letters and numbers. We show that the divergence between ERP traces to letters and numbers emerges very early in processing. Letters evoked greater N1 waves (latencies 140–170 ms) than did numbers over left occipital channels, while numbers evoked greater N1s than letters over the right, suggesting letters and numbers are preferentially processed in opposite hemispheres early in visual encoding. Moreover, strings of letters, but not single letters, elicited greater P2 ERP waves (starting around 250 ms) than numbers did over the left hemisphere, suggesting that the visual cortex is tuned to selectively process combinations of letters, but not numbers, further along in the visual processing stream. Additionally, the processing of both of these culturally defined stimulus types differentiated from similar but unfamiliar visual stimulus forms (false fonts) even earlier in the processing stream (the P1 at 100 ms). These findings imply major cortical specialization processes within the visual system driven by experience with reading and mathematics.
Letter processing; number processing; ERP; hemispheric specialization
Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry. While the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for the target identification task, and a similar effect in the left visual field for the target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, the target detection or discrimination performance for the second target is generally lower than for the first target; this deficit is known as the attentional blink. Our results indicate that auditory stimuli improved target identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left visual field. In contrast, auditory stimuli improved second target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was observed in perceptual processing, depending on the hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.
Unlike most languages that are written using a single script, Japanese uses multiple scripts including morphographic Kanji and syllabographic Hiragana and Katakana. Here, we used functional magnetic resonance imaging with dynamic causal modeling to investigate competing theories regarding the neural processing of Kanji and Hiragana during a visual lexical decision task. First, a bilateral model investigated interhemispheric connectivity between ventral occipito–temporal (vOT) cortex and Broca's area (“pars opercularis”). We found that Kanji significantly increased the connection strength from right-to-left vOT. This is interpreted in terms of increased right vOT activity for visually complex Kanji being integrated into the left (i.e. language dominant) hemisphere. Second, we used a unilateral left hemisphere model to test whether Kanji and Hiragana rely preferentially on ventral and dorsal paths, respectively, that is, whether they have different intrahemispheric functional connectivity profiles. Consistent with this hypothesis, we found that Kanji increased connectivity within the ventral path (V1 ↔ vOT ↔ Broca's area), and that Hiragana increased connectivity within the dorsal path (V1 ↔ supramarginal gyrus ↔ Broca's area). Overall, the results illustrate how the differential processing demands of Kanji and Hiragana influence both inter- and intrahemispheric interactions.
dynamic causal modeling; functional connectivity; logograph; reading; visual word recognition
We present a 56-year-old, right-handed, congenitally deaf woman who exhibited a partial Balint's syndrome accompanied by positive visual phenomena restricted to her lower right visual quadrant (e.g., a color band, transient unformed visual hallucinations). Balint's syndrome is characterized by a triad of visuo-ocular symptoms that typically occur following bilateral parieto-occipital lobe lesions. These symptoms include the inability to perceive simultaneous events in one's visual field (simultanagnosia), an inability to fixate and follow an object with one's eyes (optic apraxia), and an impairment of target pointing under visual guidance (optic ataxia). Our patient exhibited simultanagnosia, optic ataxia, left visual-field neglect, and impaired performance on all complex visual-spatial tasks, yet demonstrated normal visual acuity, intact visual fields, and an otherwise normal neurocognitive profile. The patient's visuo-ocular symptoms were noticed while she was participating in rehabilitation for a small right pontine stroke. White matter changes involving both occipital lobes had been incidentally noted on the CT scan revealing the pontine infarction. As the patient relied upon sign language and reading ability for communication, these visuo-perceptual limitations hindered her ability to interact with others and gave the appearance of aphasia. We discuss the technical challenges of assessing a patient with significant barriers to communication (e.g., the need for a non-standardized approach, a lack of normative data for such special populations), while pointing out the substantial contributions that can be made by going beyond the standard neuropsychological test batteries.
Balint's syndrome; deafness; simultanagnosia; optic apraxia; optic ataxia
The spatial character of our reaching movements is extremely sensitive to potential obstacles in the workspace. We recently found that this sensitivity was retained by most patients with left visual neglect when reaching between two objects, despite the fact that they tended to ignore the leftward object when asked to bisect the space between them. This raises the possibility that obstacle avoidance does not require a conscious awareness of the obstacle avoided. We have now tested this hypothesis in a patient with visual extinction following right temporoparietal damage. Extinction is an attentional disorder in which patients fail to report stimuli on the side of space opposite a brain lesion under conditions of bilateral stimulation. Our patient avoided obstacles during reaching, to exactly the same degree, regardless of whether he was able to report their presence. This implicit processing of object location, which may depend on spared superior parietal-lobe pathways, demonstrates that conscious awareness is not necessary for normal obstacle avoidance.