The decoding of visually presented line segments into letters, and letters into words, is critical to fluent reading. Here we investigate the temporal dynamics of visual orthographic processes, focusing specifically on right hemisphere contributions and interactions between the hemispheres involved in the implicit processing of visually presented words, consonants, false fonts, and symbolic strings. High-density EEG was recorded while participants detected infrequent, simple, perceptual targets (dot strings) embedded amongst a series of character strings. Beginning at 130 ms, orthographic and non-orthographic stimuli were distinguished by a sequence of ERP effects over occipital recording sites. These early-latency occipital effects were dominated by enhanced right-sided negative-polarity activation for non-orthographic stimuli that peaked at around 180 ms. This right-sided effect was followed by bilateral positive occipital activity for false fonts, but not symbol strings. Moreover, the size of components of this later positive occipital wave was inversely correlated with the right-sided ROcc180 wave, suggesting that subjects who had larger early right-sided activation for non-orthographic stimuli had less need for more extended bilateral (e.g., interhemispheric) processing of those stimuli shortly thereafter. Additional early (130–150 ms) negative-polarity activity over left occipital cortex and longer-latency centrally distributed responses (>300 ms) were present, likely reflecting implicit activation of the previously reported ‘visual-word-form’ area and N400-related responses, respectively. Collectively, these results provide a close look at some relatively unexplored portions of the temporal flow of information processing in the brain related to the implicit processing of potentially linguistic information and provide valuable information about the interactions between hemispheres supporting visual orthographic processing.
word reading; ERPs; visual cortex; visual orthography
The present study employed Dynamic Causal Modeling to investigate the effective connectivity between regions of the neural network involved in top-down letter processing. We used an illusory letter detection paradigm in which participants detected letters while viewing pure noise images. When participants detected letters, the response of the right middle occipital gyrus (MOG) in the visual cortex was enhanced by increased feed-backward connectivity from the left inferior frontal gyrus (IFG). In addition, illusory letter detection increased feed-forward connectivity from the right MOG to the left inferior parietal lobules. Originating in the left IFG, this top-down letter processing network may facilitate the detection of letters by activating letter processing areas within the visual cortex. This activation, in turn, may highlight the visual features of letters and send letter information to activate the associated phonological representations in the identified parietal region.
letter processing; word processing; top-down processing; fMRI; dynamic causal modeling
Individuals learn to read by gradually recognizing repeated letter combinations. However, it is unclear how or when neural mechanisms associated with repetition of basic stimuli (i.e., strings of letters) shift to involvement of higher-order language networks. The present study investigated this question by repeatedly presenting unfamiliar letter strings in a one-back matching task during an hour-long period. Activation patterns indicated that only brain areas associated with visual processing were activated during the early period, but additional regions usually associated with semantic and phonological processing in inferior frontal gyrus were recruited after stimuli became more familiar. Changes in activation were also observed in bilateral superior temporal cortex, also suggestive of a shift toward a more language-based processing strategy. Connectivity analyses revealed two distinct networks that correspond to phonological and visual processing, which may reflect the indirect and direct routes of reading. The phonological route maintained a similar degree of connectivity throughout the experiment, whereas visual areas increased connectivity with language areas as stimuli became more familiar, suggesting early recruitment of the direct route. This study provides insight into the plasticity of the brain as individuals become familiar with unfamiliar combinations of letters (i.e., words in a new language, new acronyms) and has implications for engaging these linguistic networks during development of language remediation therapies.
letter strings; fMRI; connectivity; reading; learning; plasticity
We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, responded more strongly to Ellipse-Speech and Circle-Speech, whereas regions identified as multimodal for Ellipse-Speech always responded most strongly to Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked.
audio-visual; fMRI; multi-modal processing
The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may involve more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing or replacing two letters. We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus.
Neuroimaging studies have identified a common network of brain regions involving the prefrontal and parietal cortices across a variety of working memory (WM) tasks. However, previous studies have also reported category-specific dissociations of activation within this network. In this study, we investigated the development of category-specific activation in a WM task with digits, letters, and faces. Eight-year-old children and adults performed a 2-back WM task while their brain activity was measured using functional magnetic resonance imaging (fMRI). Overall, children were significantly slower and less accurate than adults on all three WM conditions (digits, letters, and faces); however, within each age group, behavioral performance across the three conditions was very similar. FMRI results revealed category-specific activation in adults but not children in the intraparietal sulcus for the digit condition. Likewise, during the letter condition, category-specific activation was observed in adults but not children in the left occipital–temporal cortex. In contrast, children and adults showed highly similar brain-activity patterns in the lateral fusiform gyri when solving the 2-back WM task with face stimuli. Our results suggest that 8-year-old children do not yet engage the typical brain regions that have been associated with abstract or semantic processing of numerical symbols and letters when these processes are task-irrelevant and the primary task is demanding. Nevertheless, brain activity in letter-responsive areas predicted children’s spelling performance, underscoring the relationship between abstract processing of letters and linguistic abilities. Lastly, behavioral performance on the WM task was predictive of math and language abilities, highlighting the connection between WM and other cognitive abilities in development.
Visual processing and its conscious awareness can be dissociated. To examine the extent of dissociation between the ability to read characters or words and the ability to be consciously aware of their forms, reading ability and conscious awareness for characters were examined using a tachistoscope in an alexic patient. A right-handed woman with 14 years of education presented with incomplete right hemianopia, alexia with kanji (ideogram) agraphia, anomia, and amnesia. Brain MRI disclosed cerebral infarction limited to the left lower bank of the calcarine fissure, lingual and parahippocampal gyri, and an old infarction in the right medial frontal lobe. Tachistoscopic examination disclosed that she could read characters aloud in the right lower hemifield even when she was not clearly aware of their forms and only noted their presence vaguely. Although her performance in reading kanji was better in the left than the right field, she could read kana (phonogram) characters and Arabic numerals equally well in both fields. By contrast, she claimed that she saw only a flash of light in 61% of trials and noticed vague forms of stimuli in 36% of trials. She never precisely recognised the form of a letter in the right lower field. She performed judgment tasks, in which she had to decide whether two kana characters were the same or different, better in the left than the right lower hemifield. Although dissociation between performance on visual recognition tasks and conscious awareness of the visual experience has been found in patients with blindsight or residual vision, reading (verbal identification) of characters without clear awareness of their forms has not been reported in clinical cases. Diminished awareness of forms in our patient may reflect incomplete input to the extrastriate cortex.
A growing literature has suggested that processing of visual information presented near the hands is facilitated. In this study, we investigated whether the near-hands superiority effect also occurs with the hands moving. In two experiments, participants performed a cyclical bimanual movement task requiring concurrent visual identification of briefly presented letters. For both the static and dynamic hand conditions, the results showed improved letter recognition performance with the hands closer to the stimuli. The finding that the encoding advantage for near-hand stimuli also occurred with the hands moving suggests that the effect is regulated in real time, in accordance with the concept of a bimodal neural system that dynamically updates hand position in external space.
Perception and action
Although regions of the parietal cortex have been consistently implicated in episodic memory retrieval, the functional roles of these regions remain poorly understood. The present review presents a meta-analysis of findings from event-related fMRI studies reporting the loci of retrieval effects associated with familiarity- and recollection-related recognition judgments. The results of this analysis support previous suggestions that retrieval-related activity in lateral parietal cortex dissociates between superior regions, where activity likely reflects the task relevance of different classes of recognition test items, and more inferior regions where retrieval-related activity appears closely linked to successful recollection. It is proposed that inferior lateral parietal cortex forms part of a neural network supporting the ‘episodic buffer’ [Baddeley, A.D. (2000). The episodic buffer: a new component of working memory? Trends in Cognitive Sciences, 4, 417–423.].
recollection; familiarity; episodic memory; recognition memory; meta-analysis
This fMRI study investigated top-down letter processing with an illusory letter detection task. Participants responded whether one of a number of different possible letters was present in a very noisy image. After initial training that became increasingly difficult, they continued to detect letters even though the images consisted of pure noise, which eliminated contamination from strong bottom-up input. For illusory letter detection, greater fMRI activation was observed in several cortical regions. These regions included the precuneus, an area generally involved in top-down processing of objects, and the left superior parietal lobule, an area previously identified with the processing of valid letter and word stimuli. In addition, top-down letter detection also activated the left inferior frontal gyrus, an area that may be involved in the integration of general top-down processing and letter-specific bottom-up processing. These findings suggest that these regions may play a significant role in top-down as well as bottom-up processing of letters and words, and are likely to have reciprocal functional connections to more posterior regions in the word and letter processing network.
word processing; letter processing; top-down processing; fMRI
In a post-cued letter identification task, participants were presented with 7-letter nonword target stimuli that were formed of a random string of consonants (DCMFPLR) or a pronounceable sequence of consonants and vowels (DAMOPUR). Targets were preceded by briefly presented pattern-masked primes that could be the same sequence of letters as the target, composed of seven different letters, or sharing either the first or last five letters of the target. There was some evidence for repetition priming effects that were independent of target type in an early component, the N/P150, thought to reflect the mapping of visual features onto letter representations, and that is insensitive to orthographic structure. Following this, pronounceable nonwords showed significantly greater repetition priming effects than consonant strings, in line with the behavioral results. Initial versus final overlap only started to influence target processing at around 200–250 ms post-target onset, at about the same time as the effects of target type emerged. The results are in line with a model where the initial parallel mapping of visual features onto a location-specific orthographic code is followed by the subsequent activation of location-invariant orthographic and phonological codes.
Pseudoword superiority effect; ERPs; Nonword processing; Masked priming
In alphabetic orthographies, letter identification is a critical process during the recognition of visually presented words. In the present experiment, we examined whether and when visual form influences letter processing in two very distinct alphabets (Roman and Arabic). Disentangling visual versus abstract letter representations was possible because letters in the Roman alphabet may look visually similar/dissimilar in lowercase and uppercase forms (e.g., c-C vs. r-R) and letters in the Arabic alphabet may look visually similar/dissimilar, depending on their position within a word (e.g., vs. -). We employed a masked priming same–different matching task while ERPs were measured from individuals who had learned the two alphabets at an early age. Results revealed a prime–target relatedness effect dependent on visual form in early components (P/N150) and a more abstract relatedness effect in a later component (P300). Importantly, the pattern of data was remarkably similar in the two alphabets. Thus, these data offer empirical support for a universal (i.e., across alphabets) hierarchical account of letter processing in which the time course of letter processing in different scripts follows a similar trajectory from visual features to visual form independent of abstract representations.
S. T. L. Chung (2002) has shown that rapid serial visual presentation (RSVP) reading speed varies with letter spacing, peaking near the standard letter spacing for text and decreasing for both smaller and larger spacings. In this study, we tested the hypothesis that the dependence of reading speed on letter spacing is mediated by the size of the visual span—the number of letters recognized with high accuracy without moving the eyes. If so, the size of the visual span and reading speed should show a similar dependence on letter spacing. We tested this prediction for RSVP reading and asked whether it generalizes to the reading of blocks of text requiring eye movements. We measured visual-span profiles and reading speeds as a function of letter spacing. Visual-span profiles, measured with trigrams (strings of three random letters), are plots of letter-recognition accuracy as a function of letter position left or right of fixation. Size of the visual span was quantified by a measure of the area under the visual-span profile. Reading performance was measured using two presentation methods: RSVP and flashcard (a short block of text on four lines). We found that the size of the visual span and the reading speeds measured by the two presentation methods showed a qualitatively similar dependence on letter spacing and that they were highly correlated. These results are consistent with the view that the size of the visual span is a primary visual factor that limits reading speed.
visual span; reading speed; letter spacing; visual crowding
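The area-under-the-profile measure described above can be sketched in a few lines. This is an illustrative simplification, not the study's exact quantification: the original work converts letter-recognition accuracy to information transmitted (in bits) before integrating, whereas this sketch integrates raw accuracy with a trapezoidal rule, and the function name is invented here.

```python
def visual_span_size(accuracy_by_position):
    """Quantify the size of the visual span as the area under the
    visual-span profile: letter-recognition accuracy at each letter
    position left or right of fixation.

    Sketch only: plain trapezoidal area with unit spacing between
    letter slots, omitting the accuracy-to-bits conversion used in
    the original work.
    """
    acc = list(accuracy_by_position)
    # Sum of trapezoids between adjacent letter positions.
    return sum((a + b) / 2 for a, b in zip(acc, acc[1:]))
```

A profile that falls off steeply away from fixation (e.g., under tight letter spacing and crowding) yields a smaller area, and hence a smaller visual span, than a flat, accurate profile.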
The visual word form area (VWFA) is a region of left inferior occipitotemporal cortex that is critically involved in visual word recognition. Previous studies have investigated whether and how experience shapes the functional characteristics of VWFA by comparing neural response magnitude in response to words and nonwords. Conflicting results have been obtained, however, perhaps because response magnitude can be influenced by other factors such as attention. In this study, we measured neural activity in monozygotic twins, using functional magnetic resonance imaging. This allowed us to quantify differences in unique environmental contributions to neural activation evoked by words, pseudowords, consonant strings, and false fonts in the VWFA and striate cortex. The results demonstrate significantly greater effects of unique environment in the word and pseudoword conditions compared to the consonant string and false font conditions both in VWFA and in left striate cortex. These findings provide direct evidence for environmental contributions to the neural architecture for reading, and suggest that learning phonology and/or orthographic patterns plays the biggest role in shaping that architecture.
The legibility of the letters in the Latin alphabet has been measured numerous times since the beginning of experimental psychology. To identify the theoretical mechanisms attributed to letter identification, we report a comprehensive review of the literature, spanning more than a century. This review revealed that identification accuracy has frequently been attributed to a subset of three common sources: perceivability, bias, and similarity. However, simultaneous estimates of these values have rarely (if ever) been performed. We present the results of two new experiments which allow for the simultaneous estimation of these factors, and examine how the shape of a visual mask impacts each of them, as inferred through a new statistical model. Results showed that the shape and identity of the mask impacted the inferred perceivability, bias, and similarity space of a letter set, but that there were aspects of similarity that were robust to the choice of mask. The results illustrate how the psychological concepts of perceivability, bias, and similarity can be estimated simultaneously, and how each makes powerful contributions to visual letter identification.
Letter similarity; Choice theory
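The three factors named above map naturally onto Luce's similarity-choice framework, in which the probability of reporting letter j given stimulus i is proportional to a response bias for j times the i-j similarity (perceivability then relates to the diagonal of the similarity matrix). The following is a minimal sketch of that classical model, not the new statistical model the paper fits:

```python
import numpy as np

def choice_model_probs(similarity, bias):
    """Similarity-choice (Luce) model sketch for a letter confusion
    matrix: P(respond j | stimulus i) is proportional to bias[j] *
    similarity[i][j], normalized over all candidate responses j.

    Illustrative only; the paper estimates perceivability, bias,
    and similarity simultaneously with its own statistical model.
    """
    s = np.asarray(similarity, dtype=float)
    b = np.asarray(bias, dtype=float)
    w = s * b  # broadcast: bias[j] weights response column j
    return w / w.sum(axis=1, keepdims=True)  # rows sum to 1
```

With equal biases and a symmetric similarity matrix, confusions fall out of off-diagonal similarity alone; unequal biases shift responses toward favored letters regardless of the stimulus shown.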
When the size of a letter stimulus is near the visual acuity limit of a human subject, details of the stimulus become unavailable due to ocular optical and neural filtering. In this study we tested the hypothesis that letter recognition near the acuity limit is dependent on more global features, which could be parsimoniously described by a few easy-to-visualize and perceptually meaningful low-order geometric moments (i.e., the ink area, variance, skewness, and kurtosis). We constructed confusion matrices from a large set of data (approximately 110,000 trials) for recognition of English letters and Chinese characters of various spatial complexities near their acuity limits. We found that a major portion of letter confusions reported by human subjects could be accounted for by a geometric moment model, in which letter confusions were quantified in a space defined by low-order geometric moments. This geometric moment model is universally applicable to recognition of visual patterns of various complexities near their acuity limits.
visual acuity; computational modeling; object recognition
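The low-order geometric moments named above (ink area, variance, skewness, kurtosis) are straightforward to compute from a binary letter bitmap. The sketch below computes them per axis from the ink-pixel coordinates; the paper's exact normalization and moment set may differ, and the function name is invented here.

```python
import numpy as np

def geometric_moments(img):
    """Low-order geometric moments of a binary letter image: ink
    area plus per-axis variance, skewness, and kurtosis of the ink
    distribution -- the global features proposed to drive letter
    recognition near the acuity limit.

    Illustrative sketch; the model's exact normalization may differ.
    """
    ys, xs = np.nonzero(img)   # coordinates of "ink" pixels
    feats = {"area": len(xs)}  # zeroth moment: ink area
    for name, c in (("x", xs), ("y", ys)):
        mu = c.mean()
        var = ((c - mu) ** 2).mean()
        sd = np.sqrt(var) or 1.0  # guard against zero spread
        feats[f"var_{name}"] = var
        feats[f"skew_{name}"] = (((c - mu) / sd) ** 3).mean()
        feats[f"kurt_{name}"] = (((c - mu) / sd) ** 4).mean()
    return feats
```

Letter confusions would then be modeled as proximity in this moment space: two letters whose moment vectors are close are predicted to be confusable near the acuity limit.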
The ability to identify letters and encode their position is a crucial step in the word recognition process. However, despite their word-identification problems, the ability of dyslexic children to encode letter identity and letter position within strings has not been systematically investigated. This study aimed to fill this gap and further explored how letter-identity and letter-position encoding is modulated by letter context in developmental dyslexia. For this purpose, a letter-string comparison task was administered to French dyslexic children and two chronological-age (CA) and reading-age (RA) matched control groups. Children had to judge whether two successively and briefly presented four-letter strings were identical or different. Letter position and letter identity were manipulated through the transposition (e.g., RTGM vs. RMGT) or substitution of two letters (e.g., TSHF vs. TGHD). Non-words, pseudo-words, and words were used as stimuli to investigate sub-lexical and lexical effects on letter encoding. Dyslexic children showed both substitution and transposition detection problems relative to CA controls. A substitution advantage over transpositions was found only for words in dyslexic children, whereas it extended to pseudo-words in RA controls and to all types of items in CA controls. Letters were better identified in the dyslexic group when they belonged to orthographically familiar strings. Letter-position encoding was severely impaired in dyslexic children, who did not show any word-context effect, in contrast to CA controls. Overall, the current findings point to a strong letter-identity and letter-position encoding disorder in developmental dyslexia.
letter-string processing; letter-position encoding; letter-identity encoding; letter transposition; letter substitution; reading acquisition; dyslexic children
We examined the effects of letter transposition in Hebrew in three masked-priming experiments. Hebrew, like English, has an alphabetic orthography in which sequential and contiguous letter strings represent phonemes. However, as a Semitic language, it has a non-concatenated morphology that is based on root derivations. Experiment 1 showed that transposed-letter (TL) root primes inhibited responses to targets derived from the non-transposed root letters, and that this inhibition was unrelated to relative root frequency. Experiment 2 replicated this result and showed that if the transposed letters of the root created a nonsense-root that had no lexical representation, then no inhibition and no facilitation were obtained. Finally, Experiment 3 demonstrated that in contrast to English, French, or Spanish, TL nonword primes did not facilitate recognition of targets, and when the root letters embedded in them consisted of a legal root morpheme, they produced inhibition. These results suggest that lexical space in alphabetic orthographies may be structured very differently in different languages if their morphological structure diverges qualitatively. In Hebrew, lexical space is organized according to root families rather than simple orthographic structure, so that all words derived from the same root are interconnected or clustered together, independent of overall orthographic similarity.
Morphology; Letter Transposition; Hebrew; Masked-Priming
Figures that can be seen in more than one way are invaluable tools for the study of the neural basis of visual awareness, because such stimuli permit the dissociation of the neural responses that underlie what we perceive at any given time from those forming the sensory representation of a visual pattern. To study the former type of responses, monkeys were subjected to binocular rivalry, and the response of neurons in a number of different visual areas was studied while the animals reported their alternating percepts by pulling levers. Perception-related modulations of neural activity were found to occur to different extents in different cortical visual areas. The cells that were affected by suppression were almost exclusively binocular, and their proportion was found to increase in the higher processing stages of the visual system. The strongest correlations between neural activity and perception were observed in the visual areas of the temporal lobe. A strikingly large number of neurons in the early visual areas remained active during the perceptual suppression of the stimulus, a finding suggesting that conscious visual perception might be mediated by only a subset of the cells exhibiting stimulus selective responses. These physiological findings, together with a number of recent psychophysical studies, offer a new explanation of the phenomenon of binocular rivalry. Indeed, rivalry has long been considered to be closely linked with binocular fusion and stereopsis, and the sequences of dominance and suppression have been viewed as the result of competition between the two monocular channels. The physiological data presented here are incompatible with this interpretation. Rather than reflecting interocular competition, the rivalry is most probably between the two different central neural representations generated by the dichoptically presented stimuli. The mechanisms of rivalry are probably the same as, or very similar to, those underlying multistable perception in general, and further physiological studies might reveal much about the neural mechanisms of our perceptual organization.
The visual word recognition system recruits neuronal systems originally developed for object perception, which are characterized by insensitivity to mirror reversals. It has been proposed that during reading acquisition beginning readers have to “unlearn” this natural tolerance to mirror reversals in order to efficiently discriminate letters and words. This unlearning process is thought to take place gradually, such that reading expertise should modulate mirror-letter discrimination. However, to date no supporting evidence for this has been obtained. We present data from an eye-movement study that investigated the degree of sensitivity to mirror-letters in a group of beginning readers and a group of expert readers. Participants had to decide which of two strings presented on a screen corresponded to an auditorily presented word. Visual displays always included the correct target word and one distractor word. Results showed that distractors that were the same as the target word except for the mirror lateralization of two internal letters attracted participants’ attention more than distractors created by replacement of two internal letters. Interestingly, the time course of the effects was found to be different for the two groups, with beginning readers showing a greater tolerance (decreased sensitivity) to mirror-letters than expert readers. Implications of these findings are discussed within the framework of preceding evidence showing how reading expertise modulates letter identification.
The encoding of letter position is a key aspect in all recently proposed models of visual-word recognition. We analyzed the impact of lexical frequency on letter position assignment by examining the temporal dynamics of lexical activation induced by pseudowords extracted from words of different frequencies. For each word (e.g., BRIDGE), we created two pseudowords: A transposed-letter (TL: BRIGDE) and a replaced-letter pseudoword (RL: BRITGE). ERPs were recorded while participants read words and pseudowords in two tasks: Semantic categorization (Experiment 1) and lexical decision (Experiment 2). For high-frequency stimuli, similar ERPs were obtained for words and TL-pseudowords, but the N400 component to words was reduced relative to RL-pseudowords, indicating less lexical/semantic activation. In contrast, TL- and RL-pseudowords created from low-frequency stimuli elicited similar ERPs. Behavioral responses in the lexical decision task paralleled this asymmetry. The present findings impose constraints on computational and neural models of visual-word recognition.
visual-word recognition; position coding; ERPs; word-frequency; transposed-letter effects
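The two pseudoword types used above are mechanical transformations of a base word, as the BRIDGE -> BRIGDE / BRITGE examples show. A small helper (the name and argument convention are invented here; the experimenter chooses the positions and replacement letter) makes the construction explicit:

```python
def make_pseudowords(word, i, repl):
    """Build the two pseudoword types used in transposed-letter
    studies from a base word:
      TL: swap the letters at positions i and i+1
          (BRIDGE -> BRIGDE for i=3),
      RL: replace the letter at position i with `repl`
          (BRIDGE -> BRITGE for i=3, repl="T").
    Illustrative helper; positions are 0-indexed.
    """
    letters = list(word)
    tl = letters[:]
    tl[i], tl[i + 1] = tl[i + 1], tl[i]  # transpose adjacent letters
    rl = letters[:]
    rl[i] = repl                         # substitute one letter
    return "".join(tl), "".join(rl)
```

The logic of the comparison is that TL and RL items differ from the base word at the same positions, so any behavioral or ERP advantage for TL items over RL items isolates the cost of letter-position (rather than letter-identity) mismatch.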
In the classic neurological model of language, the human inferior parietal lobule (IPL) plays an important role in visual word recognition. The region is both functionally and structurally heterogeneous, however, suggesting that subregions of IPL may differentially contribute to reading. The two main sub-divisions are the supramarginal (SMG) and angular gyri, which have been hypothesized to contribute preferentially to phonological and semantic aspects of word processing, respectively. Here we used single-pulse TMS to investigate the functional specificity and timing of SMG involvement in reading. Participants performed two reading tasks that focused attention on either the phonological or semantic relation between two simultaneously presented words. A third task focused attention on the visual relation between pairs of consonant letter strings to control for basic input and output characteristics of the paradigm using non-linguistic stimuli. TMS to SMG was delivered on every trial at 120, 180, 240 or 300 msec post-stimulus onset. Stimulation at 180 msec produced a reliable facilitation of reaction times for both the phonological and semantic tasks, but not for the control visual task. These findings demonstrate that SMG contributes to reading regardless of the specific task demands, and suggest that this may be due to automatic computation of the sound of a word even when the task does not explicitly require it.
reading; phonology; semantics; chronometric TMS; inferior parietal lobe; neurological model of reading
We report three behavioral experiments on the spatial characteristics evoking illusory face and letter detection. False detections made to pure noise images were analyzed using a modified reverse correlation method in which hundreds of observers rated a modest number of noise images (480) during a single session. This method was originally developed for brain imaging research and has been used in a number of fMRI publications, but this is the first report of behavioral classification images. In Experiment 1, illusory face detection occurred in response to scattered dark patches throughout the images, with a bias to the left visual field. This occurred despite the use of a fixation cross and expectations that faces would be centered. In contrast, illusory letter detection (Experiment 2) occurred in response to centrally positioned dark patches. Experiment 3 included an oval in all displays to spatially constrain illusory face detection. With the addition of this oval, the classification image revealed an eyes/nose/mouth pattern. These results suggest that face detection is triggered by a minimal face-like pattern even when these features are not centered in visual focus.
vision; face perception; reverse correlation; letter perception; top down; false detection
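At its core, the classification-image computation above is an average of the noise images on "detection" trials minus the average on "no detection" trials: structure surviving the subtraction is the spatial pattern that tends to trigger illusory detection. A minimal sketch (the study's modified method additionally pools ratings across many observers, which is omitted here):

```python
import numpy as np

def classification_image(noise_images, responses):
    """Reverse-correlation sketch: subtract the mean of pure-noise
    images on 'no detection' trials from the mean on 'detection'
    trials. Positive regions mark pixels whose darkness/lightness
    covaried with illusory detections.

    noise_images: array-like of shape (trials, height, width)
    responses:    boolean per trial (True = observer reported a
                  face/letter in the noise)
    """
    noise = np.asarray(noise_images, dtype=float)
    resp = np.asarray(responses, dtype=bool)
    return noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)
```

With enough trials, pixels unrelated to the observer's decision average toward zero, leaving only the decision-relevant template (e.g., the eyes/nose/mouth pattern recovered in Experiment 3).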
We used fMRI to examine functional brain abnormalities of German-speaking dyslexics who suffer from slow, effortful reading but not from a reading accuracy problem. Similar to acquired cases of letter-by-letter reading, the developmental cases exhibited an abnormally strong effect of length (i.e., number of letters) on response time for words and pseudowords. Corresponding to lesions of left occipito-temporal (OT) regions in acquired cases, we found a dysfunction of this region in our developmental cases, who failed to exhibit responsiveness of left OT regions to the length of words and pseudowords. This abnormality in the left OT cortex was accompanied by absent responsiveness to increased sublexical reading demands in phonological inferior frontal gyrus (IFG) regions. Interestingly, there was no abnormality in the left superior temporal cortex which—corresponding to the phonological deficit explanation—is considered to be the prime locus of the reading difficulties of developmental dyslexia cases. The present functional imaging results suggest that developmental dyslexia, similar to acquired letter-by-letter reading, is due to a primary dysfunction of left OT regions.
It is well established that the formation of memories for life’s experiences—episodic memory—is influenced by how we attend to those experiences, yet the neural mechanisms by which attention shapes episodic encoding are still unclear. We investigated how top-down and bottom-up attention contribute to memory encoding of visual objects in humans by manipulating both types of attention during functional magnetic resonance imaging (fMRI) of episodic memory formation. We show that dorsal parietal cortex—specifically, intraparietal sulcus (IPS)—was engaged during top-down attention and was also recruited during the successful formation of episodic memories. By contrast, bottom-up attention engaged ventral parietal cortex—specifically, temporoparietal junction (TPJ)—and was also more active during encoding failure. Functional connectivity analyses revealed further dissociations in how top-down and bottom-up attention influenced encoding: while both IPS and TPJ influenced activity in perceptual cortices thought to represent the information being encoded (fusiform/lateral occipital cortex), they each exerted opposite effects on memory encoding. Specifically, during a preparatory period preceding stimulus presentation, a stronger drive from IPS was associated with a higher likelihood that the subsequently attended stimulus would be encoded. By contrast, during stimulus processing, stronger connectivity with TPJ was associated with a lower likelihood the stimulus would be successfully encoded. These findings suggest that during encoding of visual objects into episodic memory, top-down and bottom-up attention can have opposite influences on perceptual areas that subserve visual object representation, suggesting that one manner in which attention modulates memory is by altering the perceptual processing of to-be-encoded stimuli.