J Neurosci. Author manuscript; available in PMC Aug 15, 2012.
PMCID: PMC3419470
NIHMSID: NIHMS394691
During visual word recognition phonology is accessed by 100ms and may be mediated by a speech production code: Evidence from MEG
Katherine L Wheat,1 Piers L Cornelissen,1 Stephen J Frost,2 and Peter C Hansen3
1University of York, UK
2Haskins Laboratories, US
3University of Birmingham, UK
CORRESPONDING AUTHOR: P.L. Cornelissen, Department of Psychology, University of York, York, YO10 5DD, UK. Tel: +44(0)1904432887. p.cornelissen@psych.york.ac.uk
Abstract

Debate surrounds the precise cortical location and timing of access to phonological information during visual word recognition. Therefore, using whole head magnetoencephalography (MEG), we investigated the spatiotemporal pattern of brain responses induced by a masked pseudohomophone priming task. Twenty healthy adults read target words that were preceded by one of three kinds of nonword prime: pseudohomophones (e.g., brein-BRAIN), where 4 of 5 letters are shared between prime and target, and the pronunciation is the same; matched orthographic controls (e.g., broin-BRAIN), where the same 4 of 5 letters are shared between prime and target but pronunciation differs; and unrelated controls (e.g., lopus-BRAIN), where neither letters nor pronunciation are shared between prime and target. All three priming conditions induced activation in the pars opercularis of the left inferior frontal gyrus (IFGpo) and the left precentral gyrus (PCG) within 100ms of target word onset. However, for the critical comparison, which reveals a processing difference specific to phonology, we found that the induced pseudohomophone priming response was significantly stronger than the orthographic priming response in left IFGpo/PCG at ~100ms. This spatiotemporal concurrence demonstrates early phonological influences during visual word recognition and is consistent with phonological access being mediated by a speech production code.
Introduction

Extensive research has shown that phonological processing skill is a critical predictor of reading acquisition (Bradley & Bryant, 1983) and has been identified as a source of difficulty in dyslexia (see Goswami, 2000, for review). A common technique used to probe the earliest stages of processing in visual word identification is the masked priming paradigm. Such studies demonstrate reaction time advantages when pseudohomophones (e.g., brein) prime target words like ‘BRAIN’ as compared to orthographic control primes (e.g., broin) (Lukatela & Turvey, 1994; Perfetti & Bell, 1991). This pseudohomophone priming effect is typically interpreted as indicating that the initial access code for word recognition is phonological in nature.
Although behavioral masked priming studies have suggested that phonological access occurs as quickly as 50–100ms after words are presented (Ferrand & Grainger, 1993), such studies cannot determine precisely the time course of events that comprise visual word recognition. In part this is because outcome measures like reaction time represent the output of the system as a whole. But more importantly, experimental manipulations such as varying prime duration do not necessarily provide direct information about the time course of processing. For example, Rayner et al. (2003) demonstrated that exposure to text as brief as 60ms is sufficient for lexical information to be extracted, but this was indexed by changes in eye-fixation duration ~250ms post-stimulus. Thus, observing an experimental effect with a 60ms prime does not necessarily mean that a particular processing step happens within 60ms. Rather, 60ms worth of input provides sufficient information to permit that process to occur, at whatever time point thereafter.
To elucidate when and where phonological access occurs during visual word recognition, time-sensitive neurophysiological measurements are ideal. Typically, the earliest EEG correlates of phonological priming have been found around 200–300ms following word presentation (Grainger et al., 2006; Sereno et al., 1998). An exception to this was reported by Ashby et al. (2009), who recorded EEG as participants read targets with voiced and unvoiced final consonants (e.g., fad, fat), preceded by pseudoword primes that were incongruent or congruent in voicing and vowel duration (e.g., fap, faz). Phonological feature congruency modulated ERPs by 80ms, indicating that sub-phonemic features can be activated rapidly during word recognition. This latter finding is consistent with recent MEG studies showing early responses to printed words ~100ms after stimulus onset in the left inferior frontal gyrus, pars opercularis (IFGpo) and the precentral gyrus (PCG) (Pammer et al., 2004; Cornelissen et al., 2009). Taken together, these neurophysiological data imply that phonological activation may indeed occur around ~100ms, and may be mediated by the IFGpo/PCG. However, such a conclusion is premature because neither Pammer et al. (2004) nor Cornelissen et al. (2009) specifically manipulated phonology in relation to IFGpo/PCG activity, and Ashby et al. (2009) did not localize their ERP data. Therefore, to test this idea, we used MEG to measure brain responses during a masked pseudohomophone priming task, and analyzed the data with cortical source reconstruction methods that provide high temporal resolution (milliseconds) and good spatial resolution (estimated to be 5mm for 85% of voxels; Barnes et al., 2004).
Materials and Methods

Participants
Twenty native English-speaking, strongly right-handed adults (mean age 23.2 years, SD 5.97 months; 12 female) gave informed consent to participate in the study. None had been diagnosed as reading disabled, and all read normally based on WRAT-III performance. Handedness was assessed with the Annett Hand Preference Questionnaire (Annett, 1967). The study conformed to the Code of Ethics of the World Medical Association (Declaration of Helsinki).
Stimuli
The target words were 111 English 5-letter nouns and verbs, with a mean word frequency count of 19.7 (CELEX). These were primed by pseudohomophones of the target word (PSEUD), matched orthographic nonwords (ORTH) and unrelated nonwords (UNREL). Pseudohomophone primes shared four out of five letters with their target word (brein-BRAIN) and were pronounced identically. Orthographic control primes shared the same four out of five letters with the target word but were pronounced differently (broin-BRAIN)1. Unrelated primes were pseudowords that shared no letters (in any position) with the target word (lopus-BRAIN). All three prime types were matched on bigram frequency using a positional bigram frequency count derived from the 5-letter words in CELEX, as sketched below. The mean log10 frequencies were: pseudohomophones 5.639 (SEM 0.033); orthographic primes 5.687 (SEM 0.027); and unrelated primes 5.635 (SEM 0.034). A one-way ANOVA on positional bigram frequency scores was not significant, F(2,330)=0.06, p=0.94, indicating that no condition contained primes made up of more frequently occurring letter pairs than any other condition. Catch trials were randomly interspersed with experimental trials. Target catch trials had an animal name as the target, ensuring the participant had a purpose for attending to the stimuli (i.e. to spot the animal names). Prime catch trials had an animal name as the prime, with the purpose of monitoring the visibility of the primes.
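To make the matching statistic concrete, here is one way a positional bigram frequency score can be computed from a CELEX-style lexicon. This is an illustrative Python sketch, not the authors' code: the toy lexicon, its frequency values, and the exact aggregation (summing positional bigram counts, then taking log10) are assumptions.

```python
import math
from collections import defaultdict

def positional_bigram_counts(lexicon):
    """Build positional bigram frequency counts from (word, frequency)
    pairs, e.g. the 5-letter words in CELEX. counts[(i, bg)] sums the
    corpus frequency of every word whose letters i..i+1 form bigram bg."""
    counts = defaultdict(float)
    for word, freq in lexicon:
        for i in range(len(word) - 1):
            counts[(i, word[i:i + 2])] += freq
    return counts

def log_bigram_score(item, counts):
    """Summed positional bigram frequency of a prime on a log10 scale
    (assumption: the paper's exact aggregation may differ)."""
    total = sum(counts[(i, item[i:i + 2])] for i in range(len(item) - 1))
    return math.log10(total) if total > 0 else float("-inf")

# Hypothetical mini-lexicon standing in for the CELEX 5-letter words
lexicon = [("brain", 571.0), ("bread", 320.0), ("lotus", 12.0)]
counts = positional_bigram_counts(lexicon)
for prime in ("brein", "broin", "lopus"):
    print(prime, round(log_bigram_score(prime, counts), 3))
```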
Procedure
Participants were asked to read target words rapidly and silently and to press a button only if they spotted an animal name2. The experiment consisted of 373 trials (including 40 catch trials), each lasting 1890ms and separated by a fixation cross whose duration was randomly jittered between 1200–2200ms. Each trial comprised: 300ms blank screen, 500ms forward mask ‘#####’, 66.7ms lowercase prime, 16.7ms backward mask ‘#### #’, 300ms uppercase target word and 500ms blank screen.
Stimuli were back-projected (60Hz vertical refresh) as light grey words and symbols (Arial Monospace 24pt) on a dark grey background using Presentation v12.0 (Neurobehavioral Systems, Inc.). At a viewing distance of ~75cm, stimuli subtended ~1° vertically and ~5° horizontally. Each participant saw each of the 111 target words three times, once for each priming condition (PSEUD, ORTH and UNREL), making a total of 333 trials. A pseudorandom blocked design ensured that each participant saw a unique overall target word presentation order, and across six participants, prime-target relationships were counterbalanced.
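The counterbalancing constraint can be illustrated with a simple rotation of condition orderings. The sketch below cycles the six possible orderings of the three prime conditions across participants; this Latin-square-style scheme is an assumption, since the paper states only that prime-target relationships were counterbalanced across six participants.

```python
from itertools import permutations

CONDITIONS = ("PSEUD", "ORTH", "UNREL")

def assign_conditions(targets, participant):
    """Give each participant every target once per condition, rotating
    the 3! = 6 possible condition orderings across participants and
    targets (illustrative scheme, not the authors' actual one)."""
    orders = list(permutations(CONDITIONS))
    assignment = []
    for t_idx, target in enumerate(targets):
        order = orders[(participant + t_idx) % 6]
        for repetition, cond in enumerate(order):
            assignment.append((target, cond, repetition))
    return assignment

print(assign_conditions(["BRAIN", "TABLE", "HORSE"], participant=0)[:3])
```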
Data acquisition
MEG data were collected continuously using a 4D Neuroimaging Magnes 3600 whole-head, 248-channel system, with the magnetometers arranged in a helmet-shaped array. Data were sampled at a rate of 678.17Hz (200Hz anti-alias filter). Head shape and head coil position were recorded with a 3-D digitizer (Polhemus Fastrak) and used for co-registration (Kozinska et al., 2001) with a high resolution T1-weighted anatomical volume, reconstructed to 1mm isotropic resolution, acquired on a GE 3.0T Signa Excite HDx scanner.
Source-space analysis: beamforming
Neural sources of activity were reconstructed with an in-house modified type I vectorized linearly-constrained minimum-variance (LCMV) beamformer (Van Veen et al., 1997; Huang et al., 2004). In a beamforming analysis, the neuronal signal at a location of interest in the brain is constructed as the weighted sum of the signals recorded by the MEG sensors, with the sensor weights computed for each location forming three spatial filters, one for each orthogonal current direction. The beamformer weights are determined by an optimization algorithm so that the signal from a location of interest contributes maximally to the beamformer output, whereas the signal from other locations is suppressed. For a whole brain analysis, a cubic lattice of spatial filters is defined within the brain (here with 5-mm spacing), and an independent set of weights is computed for each of them. The outputs of the three spatial filters at each location in the brain are then summed to generate the total power at each so-called ‘virtual electrode’ (VE) over a given temporal window and within a given frequency band.
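The core computation is compact. The NumPy sketch below implements the standard unit-gain LCMV solution, W = C⁻¹L(LᵀC⁻¹L)⁻¹, for a single virtual electrode, then sums the variances of the three orientation outputs to give total power. The regularization strength and the random toy data are assumptions, and the authors' in-house type I implementation will differ in detail.

```python
import numpy as np

def lcmv_weights(leadfield, cov):
    """LCMV beamformer weights for one location (Van Veen et al., 1997).
    leadfield: (n_sensors, 3) forward fields for the three orthogonal
    current directions; cov: (n_sensors, n_sensors) data covariance.
    Returns (n_sensors, 3): one spatial filter per orientation."""
    cov_inv = np.linalg.inv(cov)
    gram = leadfield.T @ cov_inv @ leadfield          # (3, 3)
    return cov_inv @ leadfield @ np.linalg.inv(gram)

def ve_power(weights, data):
    """Total 'virtual electrode' power: summed variance of the three
    orthogonal filter outputs over a band-limited time window."""
    source = weights.T @ data                          # (3, n_samples)
    return float(np.sum(np.var(source, axis=1)))

# Toy example with 248 sensors, as on the Magnes 3600
rng = np.random.default_rng(0)
data = rng.standard_normal((248, 500))
cov = np.cov(data) + 1e-3 * np.eye(248)               # regularized covariance
L = rng.standard_normal((248, 3))                     # toy leadfield
print(ve_power(lcmv_weights(L, cov), data))
```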
The localisation accuracy of spatial filtering approaches to source analysis has been found to be superior to that of alternative MEG analysis techniques such as minimum norm (Sekihara et al., 2005). However, the accuracy of spatial filtering approaches can be affected by several factors, including the length of the analysis window, the signal-to-noise level, and the signal bandwidth (Brookes et al., 2008). Simulation studies have suggested that type I spatial filters maintain localisation accuracy at adverse SNRs and are not prone to producing ‘phantom’ sources of activity (Huang et al., 2004).
The main limitation of MEG is the difficulty in detecting and localizing deep sources. However, Hillebrand and Barnes (2002) have demonstrated a ~90% detection rate for MEG signals in IFGpo/PCG, middle occipital gyrus (MOG), and indeed most of the cortical network involved in reading, which is the concern of the current study. An exception to this is the medial portion of the middle and anterior fusiform gyrus, where detection probability reduces to ~50%. In addition, there is a theoretical restriction on resolving perfectly temporally correlated sources (Van Veen et al., 1997). However, perfect correlation between distinct sources is unlikely, and beamforming has been shown to resolve even highly temporally correlated sources (Huang et al., 2004).
A major advantage of beamformer analysis relative to alternative source localisation techniques, such as equivalent current dipole modelling or minimum norm estimation, is the ability to image changes in cortical oscillatory power that do not give rise to a strong signal in the evoked-average response. Evoked signal components tend to have a stereotypical wave shape that is phase-locked to the onset of the stimulus in such a way that it can be revealed both by the evoked average in the time domain and by frequency domain analyses. In contrast, induced components are those changes in oscillatory activity which, though they may occur within a predictable time-window following stimulus onset, lack sufficient phase-locking to be revealed by averaging in the time domain. They are, however, revealed by changes in power in the frequency domain, as illustrated below.
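This evoked/induced distinction can be demonstrated numerically. In the sketch below, a 10Hz burst that is phase-locked across trials survives time-domain averaging, whereas a 40Hz burst with random phase on each trial cancels in the average but remains visible when single-trial power spectra are averaged. All simulation parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_trials, n_samp = 600.0, 100, 600
t = np.arange(n_samp) / fs

trials = []
for _ in range(n_trials):
    evoked = np.sin(2 * np.pi * 10 * t) * (t < 0.2)          # phase-locked
    induced = (np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi))
               * ((t > 0.1) & (t < 0.4)))                    # random phase
    trials.append(evoked + induced + 0.5 * rng.standard_normal(n_samp))
trials = np.asarray(trials)

freqs = np.fft.rfftfreq(n_samp, 1 / fs)
power_of_average = np.abs(np.fft.rfft(trials.mean(axis=0))) ** 2  # evoked only
average_of_power = (np.abs(np.fft.rfft(trials, axis=1)) ** 2).mean(axis=0)

for f0 in (10, 40):
    i = int(np.argmin(np.abs(freqs - f0)))
    print(f"{f0} Hz: power of average {power_of_average[i]:.1f}, "
          f"average power {average_of_power[i]:.1f}")
```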
Source-space analysis: statistics
After acquisition, the MEG data were segmented into epochs running from 900ms before target onset to 800ms after. Epochs containing artifacts, such as blinks, articulatory movements, swallows and other movements, were rejected.
Previous MEG studies of visual word recognition have revealed a complex spread of activation across the cortex with time (Tarkiainen et al., 1999; Pammer et al., 2004; Cornelissen et al., 2009). The earliest components of this pattern occur in occipital, occipito-temporal, and prefrontal cortex ~100–150ms post-stimulus. Therefore, as a compromise between being able to reveal this temporal pattern across the whole brain and being able to resolve oscillatory activity as low as 5–10Hz (a 200ms window spans exactly one cycle at 5Hz), we conducted beamforming analyses over 200ms-long windows.
At the first, within-subject level of statistical analysis, we computed a paired-sample t-statistic for each point in the VE grid. To do this, we compared the mean difference in oscillatory power (averaged across epochs) in four frequency bands (5–15Hz, 15–25Hz, 25–35Hz and 35–50Hz) between a 200ms passive window (i.e. −790 to −590ms before target onset), which was shared between all conditions, and two active time windows (0–200ms and 200–400ms following target onset). This procedure generates separate t-maps for each participant, for each contrast, at each of the frequency-band/time-window combinations. Individual participants’ t-maps were then transformed into the standardized space defined by the Montreal Neurological Institute (MNI).
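For a single virtual electrode, this first-level contrast reduces to a paired t-test over epochs between band-limited power in an active window and in the shared passive window. The NumPy/SciPy sketch below illustrates the computation for the 25–35Hz band; the filter design and toy epoch dimensions are assumptions.

```python
import numpy as np
from scipy import signal, stats

def band_power(epochs, fs, band, window):
    """Mean power per epoch within a frequency band and time window.
    epochs: (n_epochs, n_samples) virtual-electrode time series."""
    sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, epochs, axis=1)
    seg = filtered[:, window[0]:window[1]]
    return (seg ** 2).mean(axis=1)

# Toy epochs running -900 to +800 ms around target onset at 678.17 Hz,
# i.e. ~1153 samples, with target onset at sample ~610.
rng = np.random.default_rng(2)
fs = 678.17
epochs = rng.standard_normal((80, 1153))
passive = band_power(epochs, fs, (25, 35), (75, 211))    # -790 to -590 ms
active = band_power(epochs, fs, (25, 35), (610, 746))    # 0 to 200 ms
t_stat, p_val = stats.ttest_rel(active, passive)
print(t_stat, p_val)
```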
At the second, group level of statistical analysis, we used a multi-step procedure (Holmes et al., 1996) to compute the permutation distribution of the maximal statistic (by re-labelling experimental conditions), in our case the largest mean t-value (averaging across participants) from the population of VEs in standard MNI space (Nichols & Holmes, 2004). For a single VE, the null hypothesis asserts that the t-distribution would have been the same whatever the labelling of experimental conditions. At the group level, for whole brain images, we rejected the omnibus hypothesis (that all the VE hypotheses are true) at level α=0.05 if the maximal statistic for the actual labelling of the experiment was in the top 100α% of the permutation distribution for the maximal statistic. This critical value is the (c+1)th largest member of the permutation distribution, where c = ⌊αN⌋, that is, αN rounded down. This test has been formally shown to have strong control over experiment-wise Type I error (Holmes et al., 1996).
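A minimal sketch of the maximal-statistic step follows: in a paired two-condition design, re-labelling the conditions within a participant flips the sign of that participant's t-map, so the permutation distribution of the largest mean t-value can be built from random sign flips. The toy data and number of permutations are assumptions.

```python
import numpy as np

def max_stat_permutation(t_maps, n_perm=5000, alpha=0.05, seed=0):
    """Permutation test on the maximal statistic (Holmes et al., 1996;
    Nichols & Holmes, 2004). t_maps: (n_subjects, n_voxels) first-level
    t-values. Returns the alpha-level critical value and the indices of
    voxels whose observed mean t exceeds it."""
    rng = np.random.default_rng(seed)
    observed = t_maps.mean(axis=0)
    max_dist = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(t_maps.shape[0], 1))
        max_dist[i] = (signs * t_maps).mean(axis=0).max()
    c = int(alpha * n_perm)                      # c = floor(alpha * N)
    critical = np.sort(max_dist)[::-1][c]        # (c+1)th largest member
    return critical, np.where(observed > critical)[0]

# Toy: 20 participants, 1000 virtual electrodes, true effect in the first 10
rng = np.random.default_rng(3)
maps = rng.standard_normal((20, 1000))
maps[:, :10] += 1.5
critical, significant = max_stat_permutation(maps)
print(critical, significant)
```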
Time-frequency analysis
At specified ROIs, we wanted to compare the evoked and induced frequency components between experimental conditions, retaining millisecond temporal resolution. We selected ROIs based on peaks in the group level analyses, and used separate beamformers to reconstruct the time series at these sites. We used Stockwell transforms (Stockwell et al., 1996) to compute time-frequency plots for each participant for each condition, and used generalized linear mixed models (GLMM) to compare these at the group level. The GLMMs included repeated measures factors to account for the fact that each participant’s time-frequency plot is made up of multiple time-frequency tiles. Time-frequency (spatial) variability was integrated into the models by specifying a spatial correlation model for the model residuals (Littell et al., 2006).
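The Stockwell transform itself is compact enough to sketch. The version below uses the standard frequency-domain formulation: for each frequency bin, the spectrum is circularly shifted and windowed by a Gaussian whose width scales with frequency, then inverse-transformed. It omits the zero-frequency row, and the GLMM group comparison is a separate modelling step (Littell et al., 2006) not shown here.

```python
import numpy as np

def stockwell(x, fs):
    """Discrete Stockwell transform (Stockwell et al., 1996).
    Returns (freqs, S) with S of shape (n_freqs, n_samples)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    H = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                     # spectral indices
    n_freqs = N // 2
    S = np.empty((n_freqs, N), dtype=complex)
    for n in range(1, n_freqs + 1):
        gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)
        S[n - 1] = np.fft.ifft(np.roll(H, -n) * gauss)
    return np.arange(1, n_freqs + 1) * fs / N, S

# Toy check: a 40 Hz burst between 0.3 and 0.6 s should dominate |S|
fs, N = 600.0, 600
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 40 * t) * ((t > 0.3) & (t < 0.6))
freqs, S = stockwell(x, fs)
f_idx, t_idx = np.unravel_index(np.argmax(np.abs(S)), S.shape)
print(freqs[f_idx], t[t_idx])                     # ~40 Hz, within the burst
```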
Results

Behavioral
To verify our task design and stimuli, 18 participants (none of whom subsequently participated in the MEG experiment) read aloud target words primed by the PSEUD (mean vocal reaction time [VRT] 419.1ms, SEM 21.9ms), ORTH (mean VRT 439.8ms, SEM 21.1ms), and UNREL (mean VRT 467.5ms, SEM 19.3ms) conditions, thus confirming a 21ms pseudohomophone advantage over orthographic controls. A repeated measures ANOVA with post-hoc comparisons revealed significant differences between PSEUD and ORTH (t(1,34)=6.13, p<0.0001), PSEUD and UNREL (t(1,34)=14.35, p<0.0001) and ORTH and UNREL (t(1,34)=8.23, p<0.0001). In the MEG experiment, participants were very poor at correctly identifying animal words in the prime position (mean d’=0.40, SD 0.77), indicating appropriately low awareness of the primes. In comparison, participants correctly identified animal words in the target position with a mean d’ of 3.55 (SD 0.54), indicating that participants were successfully attending to the task.
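For reference, d′ is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch follows, using a standard log-linear correction to keep rates away from 0 and 1 (the paper does not state which correction, if any, was used) and purely hypothetical trial counts.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = Z(hit rate) - Z(false-alarm rate),
    with a log-linear correction for extreme proportions (assumption)."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for prime-position catch trials: near-chance detection
print(d_prime(hits=6, misses=14, false_alarms=10, correct_rejections=303))
```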
MEG
Figure 1a illustrates 3D rendered images for a representative condition (PSEUD), thresholded at p<0.05 (corrected). During the first 200ms following target onset, in all three conditions cortical activity was centred on left IFGpo, PCG, and left and right middle occipital gyri (MOG). However, the inherent uncertainty in spatial localisation of MEG beamforming analysis prevented us from clearly distinguishing the extent to which activity was localized in either IFGpo or PCG alone or whether there was functionally distinct activity in both areas. Therefore, henceforth we label this cluster of activation as IFGpo/PCG. Activation during this time window also extended inferiorly towards left and right mid fusiform gyri, and superiorly towards right superior parietal lobule.
Figure 1. (a) 3D rendered cortical representations showing significant activity above baseline for PSEUD at four frequency bands and two time windows. T-maps are thresholded at p<0.05 (corrected); (b) Sagittal section showing border (---) between left IFGpo (more ...)
Figure 1b shows substantial overlap in IFGpo/PCG activity for all three conditions in the first 200ms. During the 200–400ms following target onset, all conditions activated additional reading-related regions, including anterior middle temporal gyrus, left posterior middle temporal gyrus, angular and supramarginal gyri, and left superior temporal gyrus.
We performed region of interest analyses on the IFGpo/PCG (centred on MNI co-ordinate: −56, 4, 18) and the left and right MOG (centred on MNI co-ordinates: −26, −96, 8 and 24, −98, 10, respectively) sites visible in the 0–200ms window (see Figure 1a) to compare the strength of responses between conditions. Figure 1c shows the results of group level comparisons of time-frequency plots for the critical comparison between PSEUD and ORTH3. It demonstrates that shared phonology between prime and target, over and above shared orthography, results in significantly greater induced 30–40Hz activity (blue-aqua scale) at IFGpo/PCG ~100ms after stimulus presentation. No such differences were found in MOG.
Discussion

Within 100ms of target word onset, we observed stronger responses to pseudohomophone priming than to orthographic priming of visually presented words in a cluster that includes the pars opercularis of the left inferior frontal gyrus (IFGpo) and/or the precentral gyrus (PCG). These findings therefore demonstrate an early neurobiological response to phonological priming during visual word recognition, within a time frame that is consistent with behavioral studies and with Ashby et al.’s (2009) ERP result. Furthermore, these data provide additional confirmation of the early activation of IFGpo/PCG in response to visually presented words, as reported by Pammer et al. (2004) and Cornelissen et al. (2009).
Involvement of left posterior IFG is not unique to language and visual word recognition tasks. For example, there is fMRI evidence for a role during motor imitation (Buccino et al., 2004) and in cognitive control (Snyder et al., 2007). However, the early difference we observed between the PSEUD and ORTH conditions was obtained from an event-related paradigm in which the task, silent reading, was identical for all trials, and where participants could not predict the nature of the upcoming prime, or even detect it reliably. Therefore, we argue that it is most parsimonious to attribute our findings to a stimulus driven differential effect in phonological priming, rather than top-down alternatives such as cognitive control. Indeed, we interpret the early engagement of MOG and IFGpo/PCG as reflecting prelexical orthographic-phonological mapping between these regions. Several lines of evidence are consistent with this idea. First, abstract representations of letters/letter-clusters are available in MOG as early as ~100ms after a printed word is presented (Tarkiainen et al., 1999; Hauk et al., 2006), thus providing the necessary orthographic component of the mapping within an appropriate timeframe. Second, white matter fibre tract connections between the inferior and middle temporal cortices, MOG and IFGpo/PCG may be carried by the superior longitudinal fasciculus (Wakana et al., 2004; Bernal and Altman, 2009), and these could provide the necessary anatomical connectivity. Third, MEG evidence of functional connectivity between MOG and IFGpo/PCG from reading tasks indicates that nodes in the left occipito-temporal cortex can cause the activity observed in prefrontal nodes of the reading network (Kujala et al., 2007). Fourth, early activation of IFGpo/PCG should be observed for pronounceable letter strings not only for silent reading tasks as demonstrated in the current study (see Fig. 2b) but also for visual lexical decision and for passive viewing of words, and this has been shown by Pammer et al. (2004) and Cornelissen et al. (2009) respectively.
The difference in induced oscillatory responses between pseudohomophone priming and orthographic priming at left IFGpo/PCG occurred within 100ms of target onset, whereas differences in the evoked activity were not apparent in IFGpo until ~150–200ms. This may explain the failure of most EEG studies to identify such an early neurophysiological signature of fast phonological priming. Because analyses of EEG data are often restricted to the evoked average signal, only the brain responses that are phase-locked to target onset are routinely observed. Ashby et al. (2009), however, used short 3-letter stimuli, which may have aligned the phases of the cortical responses to individual trials sufficiently to reveal a significant effect of phonological consistency in their evoked averaged analysis, analogous to the recognition point for spoken stimuli.
Although the inferior frontal gyrus is implicated in many functions, direct recording in surgical patients (Greenlee et al., 2004) and fMRI studies (Brown et al., 2008) indicate that IFGpo/PCG in particular is strongly associated with motor control of the speech articulators. Further evidence that this region is associated with speech production codes comes from Pulvermüller et al. (2006), who found that when individuals listened to speech sounds, somatotopic representations of articulatory features were activated in PCG which were spatially consistent with the motor representations required for generating those same speech sounds. Finally, activation of this speech-motor region is consistent with findings from behavioral studies suggesting that the phonology accessed in visual word recognition is sensitive to articulatory characteristics of the words (Abramson & Goldinger, 1997; Lukatela et al., 2004). In conclusion, therefore, the early involvement of IFGpo/PCG in pseudohomophone priming supports a role for these sites in prelexical access to phonological information during visual word recognition. Moreover, these findings suggest that early word recognition may be achieved by a direct print-to-speech mapping mediated by a speech production code.
ACKNOWLEDGEMENTS
We are grateful to Andy Ellis of the University of York and Michael Simpson of the York Neuroimaging Centre, who provided advice and guidance at various stages of this project regarding the development of the experimental protocols and MEG data acquisition and analysis. We thank Vesa Kiviniemi, Department of Statistics, University of Kuopio, Finland, for advice on the statistical analyses of the time-frequency plots, and Jane Ashby of Central Michigan University for comments on drafts of the manuscript.
Footnotes
1Five observers judged whether pseudohomophones sounded like target words and whether orthographic controls sounded different from target words. Winer’s inter-rater reliability for these decisions was 0.97.
2Participants could be heard over an intercom at all times, ensuring they were not reading words aloud.
3These statistical contours are based on the estimated marginal means derived from the model parameters, and the predicted population margins were compared using tests for simple effects by partitioning the interaction effects.
References

  • Abramson M, Goldinger SD. What the reader's eye tells the mind's ear: Silent reading activates inner speech. Perception & Psychophysics. 1997;59:1059–1068.
  • Annett M. The binomial distribution of right, mixed and left handedness. The Quarterly Journal of Experimental Psychology. 1967;19(4):327–333.
  • Ashby J, Sanders LD, Kingston J. Skilled readers begin processing sub-phonemic features by 80 ms during visual word recognition: Evidence from ERPs. Biological Psychology. 2009;80:84–94.
  • Barnes GR, Hillebrand A, Fawcett IP, Singh KD. Realistic spatial sampling for MEG beamformer images. Human Brain Mapping. 2004;23:120–127.
  • Bernal B, Altman N. The connectivity of the superior longitudinal fasciculus: a tractography DTI study. Magnetic Resonance Imaging. 2009.
  • Bradley L, Bryant PE. Categorising sounds and learning to read - A causal connection. Nature. 1983;301:419–421.
  • Brookes MJ, Vrba J, Robinson SE, Stevenson C, Peters AM, et al. Optimising experimental design for MEG beamformer imaging. Neuroimage. 2008;39:1788–1802.
  • Brown S, Ngan E, Liotti M. A larynx area in the human motor cortex. Cerebral Cortex. 2008;18(4):837–845.
  • Buccino G, Vogt S, Ritzl A, Fink GR, Zilles K, Freund H, et al. Neural circuits underlying imitation learning of hand actions: an event-related fMRI study. Neuron. 2004;42(2):323–334.
  • Coltheart M, Rastle K, Perry C, Langdon R, Ziegler J. DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review. 2001;108:204–256.
  • Cornelissen PL, Kringelbach ML, Ellis AW, Whitney C, Holliday IE, Hansen PC. Activation of the left inferior frontal gyrus in the first 200 ms of reading: Evidence from magnetoencephalography (MEG). PLoS ONE. 2009.
  • Ferrand L, Grainger J. The time-course of phonological and orthographic code activation in the early phases of visual word recognition. Bulletin of the Psychonomic Society. 1993;31:119–122.
  • Goswami U. Phonological representations, reading development and dyslexia: Towards a cross-linguistic theoretical framework. Dyslexia. 2000;6:133–151.
  • Grainger J, Kiyonaga K, Holcomb PJ. The time course of orthographic and phonological code activation. Psychological Science. 2006;17:1021–1026.
  • Greenlee JDW, Oya H, Kawasaki H, Volkov IO, Kaufman OP, Kovach C, et al. A functional connection between inferior frontal gyrus and orofacial motor cortex in human. Journal of Neurophysiology. 2004;92:1153–1164.
  • Hauk O, Davis MH, Ford M, Pulvermüller F, Marslen-Wilson WD. The time course of visual word recognition as revealed by linear regression analysis of ERP data. Neuroimage. 2006;30:1383–1400.
  • Hillebrand A, Barnes GR. A quantitative assessment of the sensitivity of whole-head MEG to activity in the adult human cortex. NeuroImage. 2002;16(3):638–650.
  • Holmes AP, Blair RC, Watson G, Ford I. Nonparametric analysis of statistic images from functional mapping experiments. Journal of Cerebral Blood Flow & Metabolism. 1996;16:7–22.
  • Huang M-X, Shih JJ, Lee RR, Harrington DL, Thoma RJ, Weisend MP, et al. Commonalities and differences among vectorized beamformers in electromagnetic source imaging. Brain Topography. 2004;16(3):139–158.
  • Kozinska D, Carducci F, Nowinski K. Automatic alignment of EEG/MEG and MRI data sets. Clinical Neurophysiology. 2001;112:1553–1561.
  • Kujala J, Pammer K, Cornelissen PL, Roebroeck A, Formisano E, Salmelin R. Phase coupling in a cerebro-cerebellar network at 8–13 Hz during reading. Cerebral Cortex. 2007;17:1476–1485.
  • Littell RC, Milliken GA, Stroup WW, Wolfinger RD, Schabenberger O. SAS for Mixed Models. 2nd ed. SAS Press; 2006.
  • Lukatela G, Eaton T, Sabadini L, Turvey MT. Vowel duration affects visual word identification: evidence that the mediating phonology is phonetically informed. Journal of Experimental Psychology: Human Perception and Performance. 2004;30:151–162.
  • Lukatela G, Turvey MT. Visual lexical access is initially phonological: Evidence from phonological priming by homophones and pseudohomophones. Journal of Experimental Psychology: General. 1994;123:331–353.
  • Nichols TE, Holmes AP. Nonparametric permutation tests for functional neuroimaging. In: Frackowiak RSJ, Friston KJ, Frith CD, Dolan RJ, editors. Human Brain Function. 2nd ed. London: Elsevier; 2004. pp. 887–910.
  • Pammer K, Hansen PC, Kringelbach ML, Holliday I, Barnes G, Hillebrand A, et al. Visual word recognition: The first half second. Neuroimage. 2004;22:1819–1825.
  • Perfetti CA, Bell L. Phonemic activation during the first 40ms of word identification: evidence from backward masking and priming. Journal of Memory and Language. 1991;30:473–485.
  • Pulvermüller F, Huss M, Kherif F, del Prado Martin FM, Hauk O, Shtyrov Y. Motor cortex maps articulatory features of speech sounds. Proceedings of the National Academy of Sciences USA. 2006;103:7865–7870.
  • Rayner K, Liversedge SP, White SJ, Vergilino-Perez D. Reading disappearing text: Cognitive control of eye movements. Psychological Science. 2003;14:385–388.
  • Sekihara K, Sahani M, Nagarajan SS. Localization bias and spatial resolution of adaptive spatial filters for MEG source reconstruction. Neuroimage. 2005;25:1056–1067.
  • Sereno SC, Rayner K, Posner MI. Establishing a time-line of word recognition: Evidence from eye movements and event-related potentials. NeuroReport. 1998;9:2195–2200.
  • Snyder HR, Feigenson K, Thompson-Schill SL. Prefrontal cortical response to conflict during semantic and phonological tasks. Journal of Cognitive Neuroscience. 2007;19(5):761–775.
  • Stockwell RG, Mansinha L, Lowe RP. Localization of the complex spectrum: The S transform. IEEE Transactions on Signal Processing. 1996;44(4):998–1001.
  • Tarkiainen A, Helenius P, Hansen PC, Cornelissen PL, Salmelin R. Dynamics of letter string perception in the human occipitotemporal cortex. Brain. 1999;122:2119–2131.
  • Van Veen BD, van Drongelen W, Yuchtman M, Suzuki A. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Transactions on Biomedical Engineering. 1997;44(9):867–880.
  • Wakana S, Jiang H, Nagae-Poetscher LM, van Zijl PCM, Mori S. Fiber tract-based atlas of human white matter anatomy. Radiology. 2004;230:77–87.