Learning to read takes time and it requires explicit instruction. Three decades of research has taught us a good deal about how children learn about the links between orthography and phonology during word reading development. However, we have learned less about the links that children build between orthographic form and meaning. This is surprising given that the goal of reading development must be for children to develop an orthographic system that allows meanings to be accessed quickly, reliably and efficiently from orthography. This review considers whether meaning-related information is used when children read words aloud, and asks what we know about how and when children make connections between form and meaning during the course of reading development.
Learning to read takes time and it requires explicit instruction. Nevertheless, what young children learning to read accomplish over their first few years of reading experience is truly remarkable. Armed with a relatively small amount of code (in English, the 26 letters of our alphabet), children begin to develop the skills they need to recognize words. By middle childhood, they will have seen many thousands of printed words. From this, they are able to extract principles that generalize, allowing them to read words that they have never seen before, words such as Hogwarts, Voldemort and Quidditch. Alongside this impressive power to generalize, they are able to learn and to remember words that seem to disobey all of the rules, words such as friend, meringue and shove. They are also flexible in their learning, disambiguating words that look the same but sound different (tear in her eye versus tear in her dress) or sound the same but look different (where and wear, there and their). There can be no doubt that the goal of successful reading is for the reader to understand and interpret the content of the text they have read, nor that this is a highly complex skill (e.g. Kintsch & Rawson 2005; Perfetti et al. 2005). Equally though, there is no doubt that at the foundation of reading is our ability to process words: if a child fails to read words, if they are slow to read words or if they are unable to appreciate the meanings of words, reading comprehension will be seriously hampered. Thus, it is important to understand how children learn to read words as without this skill, text comprehension simply cannot happen.
At the very heart of learning to read words is the need for children to learn about form—that is, orthography (letters), phonology (sound) and the links between orthography and phonology. Appropriately, therefore, this has been the focus of research into how children learn to read words, and over the last three decades, the field has learned an enormous amount about how children learn to process form. I begin by reviewing some of this work. Arguably, however, we have learned less about the links that children build between form and meaning. This is surprising given that the goal of reading development must be for children to develop an orthographic system that allows meanings to be accessed quickly, reliably and efficiently from orthography. The second part of this review considers what we know about how and when children make connections between form and meaning during the course of reading development. Two topics are addressed. First, is there any evidence that children's knowledge about the meanings of words contributes to reading development, specifically to the process of reading single words out loud? Second, how and when do children begin to access meaning from orthography? Both of these topics have attracted discussion and debate in the adult literature but have received much less attention in the developmental literature.
As skilled adults, most of our reading is silent. In contrast, reading in young children is characterized by reading out loud and much of what we know about children's reading development has been learned from experiments that ask children to read aloud letter strings.1 This demands that children make explicit links between spellings and their sounds—between orthography and phonology. Before considering how this process develops, it is worth noting that in adults, orthography and phonology are tightly linked. This is clearly illustrated by the striking finding that orthographic knowledge interferes with phonological processing in literate adults: even when a task is presented entirely within the auditory domain, orthographic information exerts an influence. For example, skilled readers are faster to decide that pairs of (aurally presented) words rhyme if they also share a spelling (e.g. rope–hope), compared with pairs of words that do not share a spelling (e.g. rope–soap; Seidenberg & Tanenhaus 1979). People are also sensitive to spelling and orthography when processing speech, for example when asked to decide whether a heard item is a word or a non-word in an auditory lexical decision task (e.g. Chereau et al. 2007; Ziegler et al. 2008). Interference continues to be seen in tasks designed to minimize strategic factors influencing performance (Taft et al. 2008). In illiterate people (and in young children who are pre-readers), orthographic interference is not observed (De Santos Loureiro et al. 2004; Serniclaes et al. 2005). One way to interpret these observations is to suggest that in literate people, phonology and orthography become tightly linked such that one code activates the other very quickly and automatically, consistent with the language processing system being highly interactive. 
Alternatively, learning to read may cause orthography to become amalgamated with phonology, such that orthographic information is coded into the phonological representations of literate people (Ehri & Wilce 1980; Ziegler & Goswami 2005). Either way, it is clear that once people acquire an orthographic code, its effects are powerful, exerting an influence in the absence of any visual stimuli or visual task demands. But how is it that children develop such a system?
The first task facing the novice reader is to become familiar with the alphabet and with the alphabetic principle—the revelation that letters code phonological information and that there is a systematic relationship between printed words and their pronunciations (Byrne 1998). As soon as children begin to grasp this principle, we see evidence that their reading attempts are underpinned by phonological knowledge. Building on earlier work by Ehri & Wilce (1985), Rack et al. (1994) taught 5-year-old children to associate printed abbreviations (cues) with phonological forms. For example, children learned to associate the letter cue dbl with the word ‘table’. Rack et al. compared two types of visual cue: phonetic and control. Phonetic cues, like dbl for table, embodied something of the sound of the base word, but one letter was replaced by a letter representing a sound that differed only in voice, not in place or manner of articulation (in this example, the letter t was replaced by the letter d; ‘t’ and ‘d’ sound similar, differing only in whether the vocal cords are vibrating when the sound is released). Control cues were identical to phonetic cues except that replaced letters were phonetically more distant (for example, kbl for table). Even though the children had extremely limited reading skills (they were completely unable to read even a very simple nonsense word), they found it easier to learn to associate phonetic cues than control cues.
This phonetic cue effect shows that beginning readers are sensitive to the relationship between the phonological features of spoken words and letters, and that this sensitivity assists learning. This is considered to form the basis of the alphabetic principle, playing a crucial role in the development of decoding. Decoding is the term used to describe the active process by which children map from orthography to phonology, usually in a letter-by-letter (or grapheme-by-grapheme) manner, ‘sounding-out’ words to decipher their pronunciation. Decoding is effortful, particularly for longer words (e.g. Samuels et al. 1978). It is also error prone, especially in a language such as English where the relationship between orthography and phonology is not transparent. For example, a child may read island as ‘izland’. Notwithstanding these limitations, decoding has been hypothesized as the sine qua non of reading acquisition and learning about orthography (Share 1995) as it provides children with a means to translate a printed word into an oral language code; in turn, this offers opportunities to acquire word-specific orthographic knowledge that is gradually learned and refined over time.
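The letter-by-letter decoding process described above, and the regularization errors it produces for exception words such as island, can be caricatured in a few lines of Python. This is a toy sketch, not a model of English: the grapheme–phoneme table below is a hand-made handful of correspondences with roughly IPA-style symbols on the output side.

```python
# Toy grapheme-phoneme correspondence table (hypothetical, far from
# exhaustive); multi-letter graphemes are listed so they can be tried first.
GPC = {
    "ck": "k", "sh": "ʃ",
    "a": "æ", "d": "d", "i": "ɪ", "l": "l", "n": "n", "s": "s", "t": "t",
}

def decode(word):
    """Greedily sound out a word left to right, preferring longer graphemes,
    as a beginning decoder applying sublexical rules might."""
    phonemes, i = [], 0
    while i < len(word):
        two = word[i:i + 2]
        if two in GPC:                      # try a two-letter grapheme first
            phonemes.append(GPC[two]); i += 2
        elif word[i] in GPC:                # fall back to a single letter
            phonemes.append(GPC[word[i]]); i += 1
        else:
            phonemes.append("?"); i += 1    # no rule known for this letter
    return "".join(phonemes)

print(decode("stick"))   # → stɪk (rules succeed on a consistent word)
print(decode("island"))  # → ɪslænd (the silent 's' is wrongly sounded)
```

Because the decoder applies its rules blindly, it sounds the silent s in island, producing exactly the kind of ‘izland’ regularization error described above; word-specific orthographic knowledge, accumulated gradually, is what eventually corrects such errors.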
This developing orthographic system soon begins to show its sophistication. For example, young children are sensitive to the frequency and legality of different orthographic patterns, and even kindergarten children are able to decide that a written string such as pess is more likely to be a word than a string such as ppes (Cassar & Treiman 1997; Pacton et al. 2001). These results suggest that over time and following even quite limited exposure to written language, children become sensitive to orthographic constraints, based on the frequency of occurrence of letters in words that they have been exposed to. By 7 years of age, orthographic knowledge interferes with phonological processing such that auditory lexical decision times are slowed for items that contain inconsistent spelling–sound patterns, as they are in literate adults (Ventura et al. 2006). Interestingly, Ventura et al. found that interference from orthography was more pervasive in young children than in adults, with interference being seen for words as well as non-words (in adults, orthographic interference on phonological processing is only seen for lexical items), indicative of a highly interactive system, emphasizing mappings between orthography and phonology. By 10 years of age, children begin to show a more adult-like pattern of orthographic interference, with orthography influencing lexical (word), but not sublexical (non-word), phonological processing (Ventura et al. 2008).
As children encounter words in print, they continually expand and refine their orthographic knowledge. Ziegler & Goswami (2005) provide an eloquent description of how reading experience allows children to identify and extract units of correspondence between sound and spelling that are relevant for their language. These units operate at multiple and flexible ‘grain sizes’, determined by the constraints of the language and, importantly, the nature of the print vocabulary encountered by the child. Grain sizes may be small, corresponding to a single grapheme–phoneme correspondence (e.g. T pronounced as ‘t’ or CK pronounced as ‘k’) or may be large, corresponding to a larger unit of spelling–sound correspondence (e.g. ING pronounced ‘ing’ or STR pronounced ‘str’). Print experience, in interaction with the phonological and orthographic characteristics of the language, provides the database from which statistical regularities are abstracted. These ideas are consistent with the lexical tuning hypothesis offered by Castles et al. (2008), who used a technique known as masked priming. Here, participants make lexical decisions, that is, they are asked to decide whether a written string (the target) is a word or a non-word. Before the string appears, another string (the prime) is displayed very briefly, usually for less than 50 ms. Despite participants being unable to report the prime stimulus, it influences processing of the target. When the prime is identical to the target, the target is processed faster, an effect known as identity priming. Using data from a series of masked priming experiments, Castles et al. argued that early in development, the orthographic system can afford to be fairly broadly tuned: if a child's reading vocabulary contains relatively few words, competition from other items will not be high. With reading experience, however, written vocabulary grows and lexical competition increases, demanding that the orthographic system adapts.
Castles et al. (2008) demonstrated that early in development, the orthographic system is quite lax, with young readers showing ‘identity’ priming even when the target and prime differed by a letter (e.g. julk primed JUNK). Two years later, the same children, like adults, did not show priming in this condition, suggesting that their orthographic systems had become more precise. This demonstrates how reading development is a continuous process. Even in adulthood, new words are regularly learned, and indeed, the learning of new forms may have an effect on the processing of those that are already in existence (e.g. Bowers et al. 2005a). This view of reading development emphasizes continuous learning, characterized by interactivity and restructuring as new items enter an individual's lexicon via reading experience.
So far, our discussion has focused on the links that children form between orthography and phonology during the course of reading development, a process that provides the foundation of learning to read words. Over the last three decades, a large body of research has been concerned with examining the skills and abilities that underpin this development. This has provided clear evidence that learning to read is promoted by changes and growth in children's spoken language, particularly within the phonological domain (for review, see Goswami & Bryant 1990). Put simply, there is an intimate relationship between the children's appreciation of the sound structure of speech and their ability to learn letter correspondences that map onto units of speech (e.g. Hulme et al. 2002). Without wishing to challenge the central role of phonological skills in learning to read, we can ask what other factors also contribute to its development. For example, we know relatively little about how beginning readers code orthographic representations—this is an important question for future research (for review, see Castles & Nation 2006, 2008). In this paper, however, we ask whether there is a role to be played by other aspects of oral language, beyond phonology. The rationale here is simple: integral to a word being a lexical item is the fact that it has meaning. Thus, in addition to familiar words comprising phonological and orthographic representations, they also have semantic representations. Do these make any contribution to the development of word reading?
Before addressing this question, it is important to clarify what is meant by word reading in this context. Most of us think of reading in terms of the silent reading of text. Clearly, this demands a complex set of processes that culminate in meaning being extracted from text. However, a large body of research has centred on the much more narrowly focused question of how adults read single words out loud. The extent to which the same processes are engaged when we read continuous text remains an open question.
In adults, the question of whether semantic representations contribute to the process of reading aloud has generated some debate over recent years. While contemporary models of reading aloud (in adults) emphasize procedures that translate from orthography to phonology (e.g. via both direct lexical and non-lexical routes of the dual-route cascade (DRC) model (Coltheart et al. 2001) and via the orthography–phonology pathway in the triangle model (Plaut et al. 1996)), they also propose a role for semantics. In the DRC model, semantic factors contribute via the lexical semantic route, whereas in the triangle model, semantic representations feed into the orthography–phonology–semantics pathway. Discussion here will focus on the triangle model for two reasons. First, the orthography–phonology–semantics pathway has been partly implemented in this model—this is not the case for the lexical semantic route of the DRC. Second, the triangle model has attempted to model development as well as skilled processing, and therefore is more relevant when we consider how children learn to read words.
In brief, the triangle model sees semantic factors contributing directly to reading aloud and its development (Plaut et al. 1996). Reading aloud follows from the activation of codes from visual input, and it is accomplished via two pathways working in parallel: a phonological pathway comprising connections between orthography and phonology and a semantic pathway comprising mappings between semantic, phonological and orthographic representations. Implementations of the model involve both pathways and all types of representation for the computation of all items, regardless of their lexicality, familiarity or frequency. However, as the phonological pathway is more direct, it is thought to be faster and therefore to contribute more to reading aloud than the semantic pathway. Importantly, although the semantic pathway is always active, contributing some information to the reading process, its contribution becomes more important when the phonological pathway is compromised (for example, when reading words that have inconsistent mappings between orthography and phonology).
Turning to behavioural data, although there is some evidence consistent with semantic factors contributing to reading aloud in adults, the data are not unequivocal, making interpretation difficult. First, semantic properties such as ambiguity and imageability influence lexical processing in adults (e.g. Hino & Lupker 1996; Woollams 2005), even after lexical variables such as frequency are controlled (e.g. Balota et al. 2004). However, imageability correlates highly with other important variables known to influence lexical processing, especially age-of-acquisition (AoA; e.g. Ellis & Monaghan 2002; Monaghan & Ellis 2002). Some theorists have argued that early acquired words have different phonological structures to words that are acquired at a later age (e.g. Brown & Watson 1987), meaning that apparent differences in semantic factors (as indexed by imageability effects) might in fact be a consequence of differences in phonological factors (as indexed by AoA). A second source of evidence for semantic influences on reading aloud comes from patients with semantic dementia who show declines in reading aloud that parallel their loss of semantic knowledge (e.g. McKay et al. 2007; Woollams et al. 2007). Note however that not all patients with semantic dementia show deficits in reading aloud (e.g. Blazey et al. 2005). Finally, when adults are taught to read novel words, pre-training the meanings of the words leads to more successful orthographic learning of the same items (McKay et al. 2008). Consistent with the triangle model, semantic involvement in reading aloud was demonstrated most clearly when the words to be learned contained inconsistent or irregular mappings between spelling and sound. However, a recent experiment suggests that the semantic effect in this experiment might be a consequence of learners paying greater attention to words in the semantic pre-exposure condition (Taylor et al. submitted), rather than the meaning content contributing directly. 
Clearly, more work is needed to confirm the extent to which semantic factors influence the processes involved in word naming.
Relatively few studies have examined semantic influences on children's reading of words. There is however an association between meaning-related factors and word reading development. Beginning readers find high imageability words easier to learn (Laing & Hulme 1999), suggesting that an item's ‘semantic richness’ influences its readability. Younger children, like poorer readers, rely more on context when attempting to read words than do older children or more skilled readers (e.g. Stanovich et al. 1985). More generally, children who have weak semantic skills show relative weaknesses when reading aloud inconsistent or irregular words (Nation & Snowling 1998, 2004; Ouellette 2006; Bowey & Rutherford 2007; Ricketts et al. 2007), reminiscent of the relationship between word reading and semantic knowledge reported in semantic dementia.
Given this evidence supporting a general association between meaning-related factors and learning to read, what does the triangle model predict about the potential role of semantics as children learn to read words? At a basic level, if semantic factors contribute to the computation of sound from print, there ought to be a relationship between children's semantic skills and the progress they make in learning to read words. A more specific prediction stemming from the triangle model is that the contribution of semantics should be strongest when children are asked to read words that have inconsistent mappings between orthography and phonology. Nation & Cocksey (2009a) tested these predictions. They asked 7-year-old children to read aloud two lists of words. One comprised regular words, all containing consistent mappings between orthography and phonology, whereas the other list was composed of irregular words, all containing atypical mappings between orthography and phonology. The two lists were matched for frequency and various other factors known to influence lexical processing. Knowledge of the meaning of the words was also assessed by asking the same children to define the same words orally. This allowed Nation and Cocksey to explore the relationship between word knowledge in the oral domain and reading aloud. If, as predicted by the triangle model, semantic factors contribute to reading aloud, there ought to be a close relationship between definition knowledge and reading success, especially for the irregular words—words that depend more on semantic support because of their atypical mappings between orthography and phonology.
To some extent, the results provided evidence for this relationship. Not surprisingly, children were better able to read the regular words than the irregular ones, but consistent with the two lists being well matched for frequency, definition knowledge was equivalent for the two types of word. Across items, there was a correlation between definition knowledge and reading success, and this was closer for the irregular words. This shows that children were more likely to read correctly those words that were familiar in their oral vocabulary, consistent with semantic factors playing a role. Importantly, however, other data reported by Nation & Cocksey (2009a) suggest that this conclusion may be premature. The same children also heard the same words embedded in an auditory lexical decision task, requiring them to discriminate words from non-words. Across items, reading performance was associated with auditory lexical decision performance; further analyses revealed that lexical decision success was at least as closely associated with reading as was semantic knowledge, measured by the definitions task.
Nation and Cocksey's data support the view that there is an association between knowledge in the oral domain and reading aloud, particularly for irregular words, thus supporting the general principles of the triangle model. However, their data are less clear about whether the association between knowledge and reading is most appropriately construed as a semantic contribution to word reading. Definition knowledge—clearly a semantic variable—was no better a predictor of reading success than auditory lexical decision performance. Arguably, when young children decide whether an auditory token is a word or not, they are accessing a form of semantic knowledge. Clearly, however, it is possible to know that an item is a word without knowing much, if anything, about its meaning. Thus, we can conclude from Nation and Cocksey's data that knowledge of lexical phonology—familiarity with the phonological form of a word—shows an association with children's ability to read that word. However, it is not yet clear whether semantic knowledge contributes anything over and above the contribution made by lexical phonology. An important caveat here is that Nation and Cocksey assessed semantic knowledge using a definitions task. This is a demanding task and, arguably, the narrow scoring system may fail to capture the subtleties of semantic knowledge. Future investigations may benefit from using a semantic categorization-type task rather than definitions; as well as revealing whether the children know a word, decision latencies would provide an index of processing cost, perhaps revealing gradations in semantic knowledge that do relate to reading proficiency.
Similar conclusions were reached by McKague et al. (2001). They taught young children either the phonological forms of new words or their phonological forms plus semantic information, before introducing their orthographic forms. Both types of pre-exposure facilitated orthographic learning, but there was no advantage for the semantic condition. Importantly, however, all items were regular in terms of orthography–phonology mappings. A semantic contribution may yet emerge for items that contain less consistent spelling patterns. This pattern has been observed when adults learn to read aloud new words (McKay et al. 2008, but see Taylor et al. (submitted) for an alternative perspective), but is yet to be tested in developing readers. It is also important to note that in both Nation & Cocksey's (2009a) experiment and McKague et al.'s study, the children were young, averaging about 7 years of age. Simulations in the triangle model revealed that the relative contribution of the phonological pathway and the semantic pathway changed with reading experience (Plaut et al. 1996). Early in development, resources were devoted to establishing direct connections between orthography and phonology (the phonological pathway), akin to the early stages of learning to read. Later in training, however, the model came to depend increasingly on the semantic pathway, and this was most apparent for the computation of words with inconsistent mappings between orthography and phonology—words that are most challenging for the phonological pathway working in isolation. In keeping with the triangle model, it could be that semantic factors (beyond lexical phonology) only begin to exert an influence on children's reading aloud later in development, as the range and difficulty of words children are expected to be able to read increases. If correct, the item-by-item relationship between semantic factors and word reading should strengthen as reading skill increases.
In summary, evidence to date concerning whether semantic factors contribute to the process of children reading aloud remains equivocal. Although there is evidence consistent with there being a general association between meaning-related factors and learning to read words (e.g. Laing & Hulme 1999; Nation & Snowling 2004), attempts to specify the exact nature of this association have failed to find evidence that semantic knowledge about words influences children's reading aloud of those same words (McKague et al. 2001; Nation & Cocksey 2009a). It does seem to be the case, however, that lexical phonology—defined as familiarity with the word-level phonological form—is associated with word reading success.
Why might this be? Share's (1995, 2008b) self-teaching hypothesis offers a straightforward explanation whereby familiarity with the oral form of a word provides a strategy that can be used to support partial decoding attempts. For example, a child who reads iron as ‘i-ron’ may reason that this is not a word and so search for a familiar phonological form that is similar to the partial decoding attempt. An alternative view sees whole-word phonology playing a more active or implicit role, contributing to the reading-aloud response. Such interactivity is consistent with the general principles of the triangle model, although it is important to note that lexical phonology is not in itself represented in the model; instead, lexical knowledge emerges from distributed patterns of activation across phonological, orthographic and semantic units. From this perspective, lexical phonology, as indexed by performance on auditory lexical decision, may incorporate semantic influences (Plaut 1997). Taylor et al. (submitted) offer an alternative characterization of lexical phonology within a triangle model framework. They suggest that the semantic pathway is supported not by semantics, but by a mechanism that binds together the phonemes in known words in a qualitatively different way from common sublexical combinations of phonemes. They note that this is how semantic support operated in Plaut et al.'s (1996) implementation of the triangle model, with the so-called semantic support being provided by ‘additional input to the phonological units, pushing them towards their correct activations’ (Plaut et al. 1996, p. 95). This resonates with the implemented lexical route of the CDP+ model, a connectionist model that combines components of the triangle model with components of the DRC model and approximates aspects of skilled word reading well (Perry et al. 2007). In contrast to the triangle model, it does not aim to incorporate semantic representations. 
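The repair strategy suggested by Share's self-teaching account, described above, can be sketched as a nearest-neighbour search over the child's oral vocabulary: if a sounded-out form is not a known word, settle on the most similar familiar form. The lexicon below is hypothetical, and ordinary spellings stand in for spoken phonological forms; difflib's string similarity is an illustrative stand-in for phonological similarity.

```python
from difflib import SequenceMatcher

# Hypothetical oral vocabulary: spellings standing in for spoken forms.
ORAL_LEXICON = ["iron", "island", "apron", "ant"]

def repair(attempt):
    """Return the known spoken word most similar to a (possibly wrong)
    decoding attempt, using string similarity as a crude proxy for
    phonological similarity."""
    return max(ORAL_LEXICON,
               key=lambda w: SequenceMatcher(None, attempt, w).ratio())

print(repair("izland"))  # the regularized attempt settles on → island
print(repair("i-ron"))   # the piecewise attempt settles on → iron
```

On this sketch, familiarity with a word's whole phonological form pays off directly: the partial decoding attempt only needs to land close enough to the stored form for the search to succeed, which is one way lexical phonology could support word reading without any appeal to meaning.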
Although more recent versions of the triangle model have incorporated richer and arguably more plausible semantic representations (Harm & Seidenberg 1999, 2004), there has been no detailed discussion of how and indeed whether this influences reading aloud. Clearly, more empirical and modelling work is needed to understand the role that these levels of representation play in children's word reading development, as well as to clarify the distinction between lexical phonology and semantics more generally.
So far, we have considered reading aloud—how children generate a pronunciation from a printed letter string. We now turn to consider the question of how young readers access meaning from orthography. This question has generated considerable debate in the adult literature and is considered a classic issue in reading research. Central to this debate has been whether lexical access (defined here as accessing meaning from print) is direct from orthography (e.g. Forster 1976; Coltheart 1978) or indirect. Indirect lexical access is phonologically mediated, with orthography activating meaning via phonological recoding (e.g. Van Orden et al. 1990). Over recent years, evidence from empirical studies and computational modelling has led to a model of lexical access that incorporates both a direct orthography-to-semantics (O→S) route and an indirect orthography-to-phonology-to-semantics (O→P→S) route working in balance in skilled readers (Harm & Seidenberg 2004). Thus, rather than debate whether lexical access is direct or indirect, it is more appropriate to view lexical access as flexible in skilled readers.
Turning to developmental issues, Harm and Seidenberg noted three facts about the relationship between orthography, phonology and semantics that are important when thinking about reading development: orthography and phonology are correlated in alphabetic languages; the mappings between phonology and semantics are already known to young children (via oral language); and the mappings between orthography and semantics are difficult to learn, as the two are less well correlated, although once learned they offer a faster route to meaning. Harm & Seidenberg's (2004) connectionist model of lexical access behaved in a way consistent with these observations. Early in training, the model relied more on O→P→S to compute meaning from print. Despite the early predominance of the phonologically mediated pathway, direct O→S mappings began to develop over time, and in the fully trained model, both pathways worked together in parallel and both contributed to lexical access.
How well do Harm and Seidenberg's simulations account for behavioural evidence from children? Very few studies have addressed this issue directly, although the clear evidence that phonological decoding is at the core of reading development has led to the assumption that lexical access in children is achieved via phonological recoding, O→P→S (e.g. Wagner & Torgesen 1987). Some evidence for this is provided by Doctor & Coltheart's (1980) observation that 6–10-year-old children are heavily reliant on phonological mediation when reading for meaning. They found that younger children were far more likely to accept a sentence such as ‘she bloo up the balloon’ as meaningful than older children, leading them to argue that lexical access in children is initially mediated by phonology, but that this strategy is gradually replaced by direct links emerging between orthography and semantics over time.
More recent evidence, however, suggests that children might be making direct links between orthography and semantics from the early stages of reading development. Nation & Cocksey (2009b) asked 7-year-old children to make semantic decisions about words, using the semantic competition paradigm devised by Bowers et al. (2005b). Items contained embedded words, for example, the word hip is embedded in the word ship. In the congruent condition, neither the carrier word nor the embedded word was related to the category judgement (e.g. is ship an animal?), whereas in the incongruent condition, embedded words were related in meaning to the category (e.g. is ship a body part? Although ship is not a body part, the embedded word hip is). Children were significantly slower and less accurate at making category judgements in the incongruent condition, demonstrating that semantic information is activated very rapidly from subword orthography.
Building on this finding, two additional factors were manipulated. First, given children's reliance on phonological decoding and the assumption that lexical access is heavily dependent on O→P→S mappings (Doctor & Coltheart 1980; Wagner & Torgesen 1987), Nation & Cocksey (2009b) predicted that semantic interference would be seen only when the embedded word shared its pronunciation with the carrier word (e.g. the plum in PLUMP versus the bus in BUSH). Second—and following the same logic—more semantic interference was anticipated when embedded words overlapped at the beginning of carriers (e.g. the cat in CATCH) rather than at the end of carriers (e.g. the rice in PRICE). Rather surprisingly, however, neither the stability of pronunciation between embed and carrier nor the relative position of embeds influenced the magnitude of semantic interference in 7-year-old children: just as in adults, interference occurred in all conditions, as revealed by processing costs in the reaction time analyses and differences in accuracy.
These findings permit us to draw two notable conclusions about visual word recognition in young children. First, the fact that semantic interference operated at a subword level suggests that the semantic activation from orthography is not dependent on the identification of the whole word. Second, the finding that semantic interference was equivalent regardless of whether or not embedded words shared their pronunciation with the carrier word demonstrates that semantic activation is not dependent on phonological mediation. In turn, this shows that access to meaning from print can be direct, consistent with children using O→S connections much earlier in development than traditionally assumed. Thus, in addition to making connections between orthography and phonology, these findings suggest that beginning readers also forge links between orthography and semantics during the course of reading development.
Children learn a huge amount about words and orthography as their reading skills develop. Armed with basic decoding skills and an appreciation that letters code for phonological information, children have a means to begin to learn from their print experiences, accumulating knowledge about letters and their sounds. Two different perspectives concerning the nature of form–meaning links in children's reading development have been reviewed. First, we asked whether children's knowledge about the meanings of words contributes to the process of reading aloud. This remains a contentious issue in the adult literature, with a recent comprehensive study concluding that semantics plays a small but significant role in reading aloud (McKay et al. 2008). Although there is clear evidence of a general relationship between vocabulary knowledge and word reading (e.g. Nation & Snowling 2004; Ouellette 2006; Bowey & Rutherford 2007; Ricketts et al. 2007), it is not clear whether semantic factors play a specific role in reading aloud. Evidence to date fails to find support for semantic involvement (McKague et al. 2001; Nation & Cocksey 2009a). More work is needed to clarify these findings in children of different ages and different levels of reading experience. The second perspective considered when and how children begin to access meaning from orthography. Although few data speak on this issue, Nation & Cocksey's (2009b) experiment provides clear evidence that 7-year-old children access meaning-level information from subword orthography, without phonological mediation. This shows that despite a relatively limited amount of reading experience, direct links have been made between orthographic and semantic representations.
One way to characterize reading development is as a process of constant learning, with experience leading to a continuous re-shaping of lexical knowledge over time, as new words are encountered and old words re-encountered. This characterization fits well with attempts to model reading development using connectionist models in which orthographic knowledge emerges gradually from the processing of input, without the specification of in-built changes in representational constraint. Various versions of the triangle model (Plaut et al. 1996; Harm & Seidenberg 2004) have been used to illustrate this approach in this review. It is important, however, to note its limitations and, in particular, to reflect upon its developmental plausibility. As its limitations are discussed at length elsewhere (e.g. Coltheart 2005), we consider here its developmental plausibility.
Three issues seem particularly problematic at the present time, although as Harm & Seidenberg (2004) point out, these are issues that can be addressed in future modelling attempts. The first concerns the nature of orthographic representation. The model was not pre-trained on orthography, unlike semantics and phonology, which were both trained heavily before the onset of reading. Given the importance of letter knowledge during the early stages of word learning (e.g. Byrne & Fielding-Barnsley 1989; Muter et al. 2004), an elementary introduction to letters early in training is important if psychologically valid orthographic representations are to develop. This point is nicely demonstrated by Powell et al. (2006) who made a direct comparison between kindergarten children learning to read and the triangle model (the version reported by Plaut et al. (1996)). They noted that early in development, the original model did not generalize its knowledge to novel words very well and levels of non-word reading accuracy were much lower than they observed in beginning readers. In a revised model, they mimicked letter–sound knowledge by pre-exposing the model to grapheme–phoneme correspondences, before the introduction of words. This led to faster learning and greater generalization, bringing the model more in line with data from children. An issue for the model, more generally, is that the slot-coding scheme to represent orthography is not compatible with some behavioural data (for discussion, see Bowers et al. (2005b)).
A second issue concerns the nature of training and the training set of words the network is exposed to. In the triangle model, learning is achieved by a variant of a back-propagation algorithm. This means that learning is supervised, in the sense that the output of the network is monitored by an external ‘teacher’, and differences between the actual output and the correct output trigger changes in the network's connections. Although children learn through explicit instruction, they rarely receive feedback on each reading attempt. Instead, training is much more variable, sometimes comprising direct modelling of the appropriate response, sometimes explicit training in phonological awareness or letter–sound knowledge and often no feedback at all. Harm & Seidenberg (2004) suggest that this rich and varied learning environment may be more advantageous than providing correct feedback on each trial as it ‘may discourage the development of overly word-specific representations in favour of representations that capture structure that is shared across words, improving generalisation’ (p. 673). This is an interesting issue for future research, but at present, it is important to note that the nature of feedback provided to a network differs substantially from the experiences of children learning to read.
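The teacher-driven character of this learning scheme can be made concrete with the delta rule, the simplest relative of back-propagation. This is an illustrative sketch only, with toy, made-up input and target patterns rather than anything from the triangle model: on every trial the ‘teacher’ compares the network's actual output with the correct output, and that difference drives the weight change.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-layer sketch of supervised, error-driven learning (delta rule).
W = np.zeros((4, 6))                      # orthography -> semantics weights
orth = rng.normal(size=6)
orth /= np.linalg.norm(orth)              # toy orthographic pattern (unit length)
target = rng.normal(size=4)               # the 'correct' semantic pattern

lr = 0.1                                  # learning rate
errors = []
for _ in range(200):
    output = W @ orth                     # network's actual output
    error = target - output               # teacher signal: correct minus actual
    W += lr * np.outer(error, orth)       # adjust connections to reduce the error
    errors.append(np.sum(error ** 2))
# because corrective feedback arrives on every trial, the error shrinks steadily
```

The contrast with children is exactly the one made in the text: the network receives this precise corrective signal on every single presentation, whereas a child's feedback is intermittent, varied and often absent.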
A third and related issue concerns the learning algorithm used by Harm and Seidenberg. Very slow learning rates were employed so as to prevent the network's weights from oscillating wildly. The consequence of this is that many trials were required to learn each word, with the very large training set being presented thousands of times. This contrasts sharply with observations from children's learning where initial reading experiences tend to comprise exposure to a small vocabulary of written words that increases in size as proficiency develops (Powell et al. 2006). Despite this incremental and relatively limited training environment, very few exposures may be sufficient for a child to learn a new word (Share 2004; Nation et al. 2007).
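The reason slow rates are needed can be seen by varying the step size in the same toy delta-rule sketch (again, hypothetical patterns, not the model's training corpus): a small learning rate converges but needs many presentations of the item, while a large one overshoots the target on each trial so that the weights oscillate and the error grows.

```python
import numpy as np

rng = np.random.default_rng(2)
orth = rng.normal(size=6)
orth /= np.linalg.norm(orth)              # toy orthographic input (unit length)
target = rng.normal(size=4)               # 'correct' semantic output

def squared_errors(lr, steps=50):
    """Delta-rule training on one item; returns the squared error at each trial."""
    W = np.zeros((4, 6))
    errs = []
    for _ in range(steps):
        error = target - W @ orth
        W += lr * np.outer(error, orth)   # large lr overshoots the correction
        errs.append(np.sum(error ** 2))
    return errs

slow = squared_errors(lr=0.05)   # stable, but needs many presentations per item
fast = squared_errors(lr=2.5)    # overshoots: the weights oscillate and diverge
```

Keeping the rate small is what forces the many thousands of presentations noted above, which is precisely where the model's training regime diverges from the few exposures a child may need.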
In addition to these outstanding issues for the triangle model, more behavioural data from children are also needed. Despite a large body of work documenting the nature of orthographic representations and orthographic processing in adults (for recent review, see special issue edited by Grainger (2007)), very few studies have considered the development of these representations and processes. Similarly, the adult literature has attended to the role of morphology in word reading, providing evidence that sensitivity to morphological structure influences the early stages of visual word recognition (e.g. Rastle et al. 2004; McCormick et al. 2008). We know that children are sensitive to morphological structure in metacognitive awareness tasks and that this becomes reflected in their spelling attempts as literacy develops (e.g. Nunes et al. 1997, 2006; Kemp 2006); however, studies are only just beginning to examine in detail how morphology maps to orthography in the development of orthographic representations. For example, Burani et al. (2008) found that children were faster to read pseudowords that were made up of root and suffix morphemes (for example, womanist is a pseudoword but is made from two morphemes, namely woman+ist) than pseudowords that did not contain embedded morphemes. Interestingly, younger children (and poorer readers) also showed a processing advantage for words that contained morphological structure, suggesting that they were relying on morphological parsing to a greater extent than more skilled readers (see Reichle & Perfetti (2003) and Verhoeven & Perfetti (2003) for further preliminary work in this area).
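The parsing idea behind the Burani et al. finding can be sketched as a simple lookup: a reader who has stored a set of roots and suffixes can decompose a novel letter string into familiar morphemes, which is what gives a pseudoword like womanist its advantage over an unstructured one. The morpheme inventories below are hypothetical toy examples, not Burani et al.'s materials or procedure.

```python
# Hypothetical stored morpheme inventories for illustration only.
ROOTS = {"woman", "child", "friend"}
SUFFIXES = {"ist", "ish", "ness"}

def parse_morphemes(word):
    """Return (root, suffix) if the string decomposes into a stored
    root plus a stored suffix; otherwise return None."""
    for i in range(1, len(word)):
        root, suffix = word[:i], word[i:]
        if root in ROOTS and suffix in SUFFIXES:
            return root, suffix
    return None
```

On this sketch, womanist parses into woman+ist while a matched string with no embedded morphemes fails to parse, leaving only slower letter-by-letter decoding.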
Another area where additional data are needed concerns silent reading. As noted earlier, much of what we have learned about how children learn to read comes from studies of reading aloud rather than silently. Certainly, as children get older, reading aloud becomes less common, meaning that a thorough understanding of reading development requires researchers to explore silent reading. Indeed, recent studies following this approach suggest that word recognition is surprisingly parallel and interactive, even in very young readers (e.g. Alario et al. 2007; Nation & Cocksey 2009b).
Studies that marry an experimental methodology with a developmental perspective are likely to be fruitful, either via a longitudinal design, or via a cross-sectional design that considers carefully participants’ reading experience. In combination with the wealth of information already known about children's reading development, data of this kind have the potential to enhance our understanding of how and when children begin to forge links between form and meaning during the course of reading development.
This paper was prepared with the support of a grant from the Economic and Social Research Council. I would like to thank the two reviewers for their insightful and thoughtful comments on an earlier version of this paper.
One contribution of 11 to a Theme Issue ‘Word learning and lexical development across the lifespan’.
1This review will focus on research in English. Important insights have been learned from studying reading development in other languages. For recent reviews, see Ziegler & Goswami (2005) and Share (2008a).