A recent study by Packman, Onslow, Coombes and Goodwin (2001) employed a non-word reading paradigm to test the contribution of the lexical retrieval process to stuttering. They argue that, with this material, the lexical retrieval process could not contribute to stuttering, and that anxiety, the motor demands of reading, or both are the governing factors. In this article, we discuss the processes that may underlie non-word reading and argue that the conclusion reached by Packman et al. does not stand up to close scrutiny. In their introduction, the authors acknowledge that the lexicalization process involves both retrieval and encoding of words. In a non-word reading task, the word retrieval component is eliminated. The possibility that the encoding component of the lexicalization process leads to stuttering is, however, completely ignored by the authors when they attribute stuttering to motor demands. As the theories put forward by Postma and Kolk (the Covert Repair Hypothesis, 1993) and Howell and Au-Yeung (the EXPLAN theory, in press) argue strongly for a role of phonological encoding processes in stuttering, Packman et al.'s work does not evaluate such theories. Theoretical issues aside, Packman et al.'s arguments about reading rate and stuttering rate based on reading time are also questionable.
The term ‘lexical retrieval’ is not clearly defined by Packman et al. (2001). At the outset, they adopt Levelt's model of speech production (Levelt and Wheeldon, 1994; Levelt, Roelofs and Meyer, 1999) in which ‘morphological and phonological encoding’ is the last of the three linear stages in the lexical retrieval process (see Packman et al. p.488). Packman et al. use the Levelt model exclusively to discuss various theories of stuttering. On this basis, we take the view that Packman et al. include phonological encoding as part of lexical retrieval.
The basis of their experimental design is that in a non-word reading task ‘no lexical retrieval is involved because the words are meaningless’ (p.489). This suggests that the authors do not consider that phonological encoding is involved in non-word reading. However, non-words need encoding for output (see next section). It seems then that Packman et al. use the term ‘lexical retrieval’ to refer to the conceptualization and the word selection process (the first two stages in the Levelt model of lexical retrieval) in lexicalization.
Under the latter interpretation, eliminating the lexical retrieval process from non-word reading leaves the phonological encoding process intact. This would defeat the objective of the Packman et al. study, which aims to show that linguistic processing does not play a role in stuttering and that motor demand is the likely source of stuttering. To do this, one would need to exclude encoding as well, which they have not done, and which is the crucial component in the models of stuttering they discuss (Perkins, Kent and Curlee, 1991; Postma and Kolk, 1993; Prins, Main and Wampler, 1997; Au-Yeung, Howell and Pilgrim, 1998).
According to Packman et al. (2001), the non-word reading task removes the lexical retrieval process from speech (p.489). They state that ‘This procedure eliminates the need to access the cognitive representations of words or word meaning’. The authors constructed two English passages and two non-word variations. The non-word passages are variations of one of the English passages (76 words). Non-words in the passages matched their real-word counterparts for initial sound and syllable length. The authors did not use control speakers who do not stutter, but they claim that ‘reading non-words has never been reported to cause stuttering in normally fluent speakers’ (p.496). The last statement is counterintuitive. Professional newsreaders are from time to time disfluent when they read foreign names which they do not know. Fluent speakers may also choose to lengthen their reading time to reduce disfluency, whereas speakers who stutter may well adopt different speech rate control strategies (cf. Howell and Au-Yeung, in press). The topic of speech rate will be taken up again in a later section.
There are also different types of non-words for English readers. Whittlesea and Williams (1998) distinguish between orthographically regular (easy) and orthographically irregular (hard) non-words. Their examples of easy non-words are HENSION, FRAMBLE and BARDEN, which are easy to pronounce and ‘are similar to many natural words in orthography and phonology, but have no meanings’ (p.144). Examples of hard non-words are JUFICT, STOFWUS and LICTPUB. Under such a classification system, almost all words in the non-word passages constructed by Packman et al. (2001) fall into the hard non-word category and are difficult to pronounce even for fluent speakers, e.g. YARL, EFUM, TRUMDAG, KLUPASUG. According to Wimmer and Goswami (1994) and, more recently, Landerl (2000), such hard non-words are particularly difficult for native English readers, who rely on a direct recognition strategy, whereas native German readers experience less difficulty because they rely on grapheme-phoneme conversion for pronunciation.
According to ‘dual-route’ models of reading, there are two separate mechanisms: the lexical route and the sublexical route (Joubert and Lecours, 2000). In the lexical route, words are recognized from their holistic form. In the sublexical route, written words or non-words are converted piecemeal from their written form into a phonological form. The sublexical route is assumed to include three stages: graphemic parsing, graphophonemic conversion, and phoneme blending. Dual-route models are often used to explain reading disorders in which the grapheme-to-phoneme conversion is at fault. For example, in phonological dyslexia, non-word reading shows a deficit while word reading remains intact (Cestnick and Coltheart, 1999; Southwood and Chatterjee, 2001). In another study, Ferrand (2000) found longer latencies for naming multi-syllabic low frequency words and non-words in French than for naming their monosyllabic counterparts, but no such effect for high frequency words. Taking the arguments of these two studies together, the lexicalization of high frequency words depends largely on the lexical route, while that of low frequency words and non-words depends largely on the sublexical route. In Packman et al.'s (2001) study, the words in the English passages are high frequency words, which would be processed differently from low frequency words or non-words. Furthermore, there are studies relating word frequency and stuttering rate in reading (Schlesinger, Forte, Fried and Melkman, 1965; Soderberg, 1966) in which low frequency words are stuttered more than high frequency words, a difference that may arise from the lexical/sublexical route distinction. A better design would control word frequency and compare the reading of low frequency words with non-words.
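The division of labour between the two routes can be illustrated with a toy sketch. The mini lexicon, the one-letter grapheme parse, and the grapheme-phoneme table below are illustrative assumptions for exposition only, not a real dual-route implementation such as Coltheart's DRC model:

```python
# Toy sketch of a dual-route reader. Known words are pronounced via
# whole-word lookup (lexical route); unknown strings and non-words fall
# back on the sublexical route: graphemic parsing, grapheme-phoneme
# conversion, and phoneme blending. All tables are illustrative.

LEXICON = {"barn": "bArn"}                      # lexical route: holistic lookup
GRAPHEME_TO_PHONEME = {"b": "b", "a": "A", "r": "r",
                       "n": "n", "y": "j", "l": "l"}

def read_aloud(word: str) -> str:
    w = word.lower()
    if w in LEXICON:                            # lexical route
        return LEXICON[w]
    graphemes = list(w)                         # graphemic parsing (naive, one letter each)
    phonemes = [GRAPHEME_TO_PHONEME.get(g, "?") for g in graphemes]  # conversion
    return "".join(phonemes)                    # phoneme blending

print(read_aloud("barn"))   # found in the lexicon: lexical route
print(read_aloud("yarl"))   # a non-word: forced onto the sublexical route
```

The sketch shows why a non-word like YARL, absent from any lexicon, must be assembled piecemeal, and why damage to the conversion step (as in phonological dyslexia) would selectively impair non-word reading.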
The authors have advocated elsewhere a theory based on the role of syllabic stress and its variability on the speech motor system in stuttering (Packman, Onslow, Richard and Van Doorn, 1996). This component of stress assignment is not addressed by Packman et al. (2001) even though stress placement on non-words may affect whether words are stuttered (Wingate, 1984; Klouda and Cooper, 1988).
When constructing the non-word passages, Packman et al. (2001) tried to make the passages similar to the real word passage in terms of the properties of the initial syllable and the number of syllables in a word. The stress pattern of words and the sentential stress pattern were not, however, taken into account. In the non-word passages, there is no distinction between function and content words; the former usually carry no stress. The counterparts of the function words in the non-word passages, on the other hand, may be stressed by the readers. In Packman et al.'s study, the non-word counterparts of ONTO are ANKEE and UNLAR, and those of INTO are ANTAY and UNDOR. The structures of the four non-words resemble the structures of content words more than those of function words. Recent work on dual-route models by Rastle and Coltheart (2000) addresses the assignment of stress and vowel reduction on disyllabic non-words. The authors present rules that native readers could have used to assign stress to those non-words. The main rule of their computational model assigns stress to the final syllable if prefix-like sequences are found. The four non-words ANKEE, UNLAR, ANTAY and UNDOR begin with a prefix-like ‘AN’ or ‘UN’ and will receive word stress on the final syllable under Rastle and Coltheart's model. On the other hand, the function words ‘onto’ and ‘into’ in the passage are normally not stressed.
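The prefix rule just described can be sketched in a few lines. The prefix list below is an illustrative assumption; Rastle and Coltheart's published model uses a much richer rule set covering vowel reduction as well as stress:

```python
# Toy sketch of the main stress rule discussed above: a disyllabic
# non-word beginning with a prefix-like sequence receives final-syllable
# stress; otherwise initial stress is assumed. PREFIX_LIKE is an
# illustrative assumption, not the model's actual inventory.
PREFIX_LIKE = ("an", "un", "de", "re", "be")

def assign_stress(nonword: str) -> str:
    """Return 'final' if the non-word starts with a prefix-like
    sequence, else 'initial'."""
    return "final" if nonword.lower().startswith(PREFIX_LIKE) else "initial"

for w in ["ANKEE", "UNLAR", "ANTAY", "UNDOR"]:
    print(w, assign_stress(w))   # all four receive final-syllable stress
```

Under this rule, the non-word substitutes for the unstressed function words ‘onto’ and ‘into’ all attract stress, which is exactly the mismatch the argument above identifies.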
The non-word passages were punctuated in the same way as the real word passage, but it would be difficult to predict how readers use this to assign stress at the sentential level, if they use it at all. Word stress assignment is considered part of the phonological encoding process (Rastle and Coltheart, 2000), which Packman et al. ignore in the discussion of their results.
Next, we consider how ignoring phonological encoding affects the evaluation of theories of stuttering. Packman et al.'s (2001) experimental design focused its attention on the work of Au-Yeung et al. (1998). They quote Au-Yeung et al. (1998, p. 1028), who propose that articulatory planning of content words is slower than that of function words because of ‘their more complex semantic content, their phonetic composition, and their greater length’. In designing the non-word reading task, Packman et al. purposefully eliminated the semantic content and equated the phonetic composition and length of all words. Recent work by Howell and colleagues has further specified the source of the difficulty. Howell, Au-Yeung and Sackin (2000) quantify the difficulty of content words by their phonological properties. Howell and Au-Yeung (in press) go into further detail about the timing asynchrony between the planning (including phonological encoding) and the execution of the plans, which leads to dysfluency.
Various studies (e.g. Balota, Law and Zevin, 2000) have shown that naming latency is longer for non-words than for low frequency words, and longer for low frequency words than for high frequency words. In non-word reading, the component of semantic content retrieval is absent compared with word reading. Since semantic retrieval cannot account for the delay, the much longer naming latency suggests that the phonological encoding of non-words is considerably more demanding than that of real words. It is therefore reasonable to conclude that phonological encoding of the hard non-words used in the Packman et al. study is particularly taxing. This could be because it involves the stress assignment process of reading non-words discussed in the last section (cf. Rastle and Coltheart, 2000).
The phonological encoding process is present in both word and non-word reading and both of these reading conditions lead to stuttering in all three readers in Packman et al.'s (2001) study. It is, thus, reasonable to argue that this particular component of linguistic processing must play an important role as the source of stuttering. Instead, the authors jump straight to the conclusion that the motor demand of speech is the main reason for the stuttering events.
Packman et al. (2001) aimed, but failed, to construct a paradigm that eliminates stuttering in a particular reading condition. Winkler and Ramig (1986), on the other hand, succeeded in doing something similar with another task. When children who stutter (aged six to 12) are compared with children who do not on a sentence-imitation task, the two groups do not differ in fluency. The difference only emerges in a story-retelling task. This observation directly challenges the claim made by Packman et al. that the motor demands of speech are the main culprit in stuttering. In a sentence-imitation task, the phonological plans of words are made available to the children by the experimenter while the motor demands remain intact. The children are only required to re-execute the given plans, whereas in story retelling and spontaneous speech the phonological plan is not given.
Packman et al. (2001) discuss the reading time of each session in relation to the stuttering count. They found an unreliable relationship between speech rate and stuttering rate (p.496). One major drawback of their experimental design is that the stuttered episodes are not excluded from the reading time. A single stutter can last for any duration. For instance, a single repetition of a function word is much shorter than a long prolongation, while both produce a single stutter count. Most recent research on stuttering advocates the use of articulation rate to avoid this problem (Kelly and Conture, 1992; Kalinowski, Armson and Stuart, 1995; Logan and Conture, 1995; Yaruss and Conture, 1995; Howell, Au-Yeung and Pilgrim, 1999). Articulation rate excludes all stuttering episodes and pausing time from the rate calculation. Howell et al. (1999) further argue that a local articulation rate based on tone units is a better predictor of stuttering than a global articulation rate based on whole reading/speech sessions. They found that fast tone units (more than five syllables per second) were more likely to be stuttered than medium (between four and five syllables per second) or slow (less than four syllables per second) tone units within the same speech sample. Packman et al. use a global measure, which masks local variation in rate: a section with a globally slow rate can contain as many fast tone units as a globally fast stretch.
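The difference between the global and local measures can be made concrete with a small sketch. The tone-unit data below are invented for illustration; the fast/medium/slow thresholds follow the figures from Howell et al. (1999) quoted above, and fluent durations are assumed to already exclude stutters and pauses:

```python
# Hedged sketch: local (per-tone-unit) articulation rate vs a global rate.
# Durations are fluent speech only, with stuttered episodes and pauses
# already removed; the example data are illustrative assumptions.

def articulation_rate(n_syllables: int, fluent_seconds: float) -> float:
    """Syllables per second over fluent speech only."""
    return n_syllables / fluent_seconds

def classify_tone_unit(rate: float) -> str:
    """Thresholds from Howell et al. (1999): >5 fast, 4-5 medium, <4 slow."""
    if rate > 5.0:
        return "fast"
    if rate >= 4.0:
        return "medium"
    return "slow"

# (syllables, fluent seconds) for three hypothetical tone units
tone_units = [(12, 2.0), (6, 2.5), (10, 3.5)]

global_rate = sum(s for s, _ in tone_units) / sum(t for _, t in tone_units)
local_labels = [classify_tone_unit(articulation_rate(s, t)) for s, t in tone_units]

print(round(global_rate, 2))   # 3.5 syll/s: globally "slow"
print(local_labels)            # yet the first tone unit is locally "fast"
```

The passage is globally slow (3.5 syllables per second) yet contains a fast tone unit, which is precisely the local variation a global reading-time measure cannot detect.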
The only clear pattern in Packman et al.'s data is that the reading times for the non-word passages are longer than for the two passages with real words. The longer reading time could stem from any of the factors discussed earlier. The naming time for non-words is much longer than for real words, especially since the passages consist of hard rather than easy non-words. The phonological encoding process for such non-words is predicted to take longer than for real words. Such lengthening of planning time may lead to a slowing of the execution of speech plans. Such a slowdown may, in turn, lead to a reduction in stuttering (cf. Howell and Au-Yeung, in press). If, however, a reader chooses to speed up the articulation rate, the stuttering rate will increase. Without converting the reading time into a meaningful articulation rate, it would be impossible to establish any relationship between speech rate and stuttering rate for the data obtained by Packman et al.
Taking the speech production model of Levelt (Levelt and Wheeldon, 1994; Levelt, Roelofs and Meyer, 1999), the phonological encoding stage has been assumed by Packman et al. (2001) to be part of lexical retrieval. Non-word reading eliminates only the conceptualization and word selection stages of a normal reading or speech task; it does not eliminate the entire lexical retrieval process. As discussed in the section identifying the processes involved in translating non-words into sounds, the phonological encoding stage is paramount in non-word reading. The authors have, however, failed to consider this important process in accounting for fluency failures. They have argued instead that the motor demand of the speech output is the main problem, together with the anxiety of the readers. From the information available, the most that Packman et al.'s results can show is that conceptualization and word selection cannot be the sole trigger of stuttering. The conceptualization and word selection processes may very well have interacted in some cases with other processes, such as the phonological encoding process. The result of this interaction may increase the chance of a word being stuttered.