Dysfluencies on function words in the speech of people who stutter mainly occur when function words precede, rather than follow, content words (Au-Yeung, Howell, & Pilgrim, 1998). It is hypothesized that such function word dysfluencies occur when the plan for the subsequent content word is not ready for execution. Repetition and hesitation on the function words buy time to complete the plan for the content word. Stuttering arises when speakers abandon this delaying strategy and carry on, attempting production of the subsequent, partly prepared content word. To test these hypotheses, the relationship between dysfluency on function and content words was investigated in the spontaneous speech of 51 people who stutter and 68 people who do not stutter. These participants were subdivided into the following age groups: 2–6-year-olds, 7–9-year-olds, 10–12-year-olds, teenagers (13–18 years), and adults (20–40 years). Very few dysfluencies occurred for either fluency group on function words that occupied a position after a content word. For both fluency groups, dysfluency within each phonological word occurred predominantly on either the function word preceding the content word or on the content word itself, but not both. Fluent speakers had a higher percentage of dysfluency on initial function words than on content words. Whether dysfluency occurred on initial function words or content words changed over age groups for speakers who stutter. For the 2–6-year-old speakers who stutter, there was a higher percentage of dysfluencies on initial function words than on content words. In subsequent age groups, dysfluency decreased on function words and increased on content words. These data are interpreted as suggesting that fluent speakers use repetition of function words to delay production of the subsequent content words, whereas people who stutter carry on and attempt a content word on the basis of an incomplete plan.
stuttering; phonological words; function words; content words; speech plan
To identify changes in emergency department (ED) syndromic surveillance data by analyzing trends in chief complaint (CC) word count; to compare these changes to coding changes reported by EDs; and to examine how these changes might affect the ability of syndromic surveillance systems to identify syndromes in a consistent manner.
The New York City (NYC) Department of Health and Mental Hygiene (DOHMH) receives daily ED data from 49 of NYC’s 52 hospitals, representing approximately 95% of ED visits citywide. Chief complaint (CC) is categorized into syndrome groupings using text recognition of symptom keywords and phrases. Hospitals are not required to notify the DOHMH of any changes to procedures or health information systems (HIS). Previous work noted that CC word count varied over time within and among EDs. The variations seen in CC word count may affect the quality and type of data received by the DOHMH, thereby affecting the ability to detect syndrome visits consistently.
The daily mean number of words in the chief complaint field was examined by hospital from 2008–2011. Spectral analyses were performed on daily CC word count by hospital to explore temporal trends. Change Point Analysis (CPA) using Taylor’s method with a maximum change level of four was conducted on the CC field by hospital using 1,000 bootstrap samples. According to Taylor, a level one change is the most important change detected on the program’s first pass through the data. For this analysis, a change point was considered significant if it was level one, detected an average change of more than 0.50 words per day, and was sustained for at least 6 months before a level two change of at least 0.50 words occurred. Results of the CPA were compared to changes reported in a survey of the 49 hospitals, conducted by DOHMH staff, which collected information about their HIS and coding practices, including any recent system changes.
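The core of Taylor’s Change Point Analysis can be sketched as a CUSUM statistic plus bootstrap significance test. The sketch below is a minimal illustration, not the software used in the study: the function names are ours, and it finds only a single (level one) change, whereas Taylor’s full method recursively searches the segments on either side of each detected change.

```python
import random

def cusum_change_point(series):
    """Return (index, magnitude) of the strongest mean shift, following
    Taylor's CUSUM idea: the change point is where the cumulative sum of
    deviations from the overall mean is furthest from zero."""
    mean = sum(series) / len(series)
    s, cusum = 0.0, [0.0]
    for x in series:
        s += x - mean
        cusum.append(s)
    magnitude = max(cusum) - min(cusum)
    idx = max(range(len(cusum)), key=lambda i: abs(cusum[i]))
    return idx, magnitude

def bootstrap_confidence(series, n_boot=1000, seed=0):
    """Confidence that a change occurred: the fraction of shuffled
    (bootstrap) series whose CUSUM magnitude falls below the observed one."""
    rng = random.Random(seed)
    _, observed = cusum_change_point(series)
    below = 0
    for _ in range(n_boot):
        shuffled = series[:]
        rng.shuffle(shuffled)
        _, mag = cusum_change_point(shuffled)
        if mag < observed:
            below += 1
    return below / n_boot
```

On a daily word-count series that jumps from a mean of 3 to a mean of 4 mid-way, `cusum_change_point` places the change at the jump and `bootstrap_confidence` returns a value near 1.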
When a significant level one change was identified, time series graphs for six months before and after the change were created for five syndromes (cold, diarrhea, fever-flu, influenza-like-illness, and respiratory) and the syndrome’s constituent symptom categories (e.g. cough fever, etc.). Changes in syndrome count and composition at the level one change in word count were noted.
The mean chief complaint word count across all hospitals in NYC from 2008 to 2011 was 3.14, with a range of 0 to 18 words. CPA detected a significant level one change in 21 hospitals, with a mean change of 0.60 words: 9 increases (mean = 0.71 words) and 12 decreases (mean = 0.53 words). According to the results of a survey of 49 NYC EDs, 19 have changed coding practices or health information systems since 2008. CPA identified a coincident and significant shift in word count for 8 of these hospitals. CPA also detected significant shifts in word count for 13 hospitals that did not report any changes. Figure 1 shows the results of CPA from one ED in NYC.
We observed immediate changes in daily syndrome count after the detected change in CC word count. For example, respiratory syndrome count increased with increased word count and decreased with decreased word count for 10 of the 21 EDs with a significant change in word count. Only 2 EDs saw an opposite effect on respiratory syndrome count. Meanwhile, 9 EDs saw no obvious change in respiratory syndrome count. Furthermore, these changes in daily CC word count coincided with subsequent changes in syndrome composition, the breakdown of syndromes into constituent symptoms.
Change Point Analysis may be a useful method for prospectively detecting shifts in CC word count, which might represent a change in ED practices. In some instances, changes to CC word count had an apparent effect on both syndrome capture and syndrome composition. Further studies are required to determine how often these changes happen and how they may affect the quality of syndromic surveillance.
Chief Complaint; Word Count; Change Point Analysis
Word reading speed in peripheral vision is slower when words are in close proximity to other words (Chung, 2004). This word crowding effect could arise as a consequence of interaction of low-level letter features between words, or the interaction between high-level holistic representations of words. We evaluated these two hypotheses by examining how word crowding changes for five configurations of flanking words: the control condition — flanking words were oriented upright; scrambled — letters in each flanking word were scrambled in order; horizontal-flip — each flanking word was the left-right mirror-image of the original; letter-flip — each letter of the flanking word was the left-right mirror-image of the original; and vertical-flip — each flanking word was the up-down mirror-image of the original. The low-level letter feature interaction hypothesis predicts a similar word crowding effect for all the different flanker configurations, while the high-level holistic representation hypothesis predicts a weaker word crowding effect for all the alternative flanker conditions, compared with the control condition. We found that oral reading speed for words flanked above and below by other words, measured at 10° eccentricity in the nasal field, showed the same dependence on the vertical separation between the target and its flanking words, for the various flanker configurations. The result was also similar when we rotated the flanking words by 90° to disrupt the periodic vertical pattern, which presumably is the main structure in words. The remarkably similar word crowding effect irrespective of the flanker configurations suggests that word crowding arises as a consequence of interactions of low-level letter features.
crowding; word recognition; peripheral vision; features; holistic representation
The goal of this study was to jointly examine the effects of word class, word class ambiguity, and semantic ambiguity on the brain response to words in syntactically specified contexts. Four types of words were used: (1) word class ambiguous words with a high degree of semantic ambiguity (e.g., ‘duck’); (2) word class ambiguous words with little or no semantic ambiguity (e.g., ‘vote’); (3) word class unambiguous nouns (e.g., ‘sofa’); and (4) word class unambiguous verbs (e.g., ‘eat’). These words were embedded in minimal phrases that explicitly specified their word class: “the” for nouns (and ambiguous words used as nouns) and “to” for verbs (and ambiguous words used as verbs). Our results replicate the basic word class effects found in prior work (Federmeier, K.D., Segal, J.B., Lombrozo, T., Kutas, M., 2000. Brain responses to nouns, verbs and class ambiguous words in context. Brain, 123 (12), 2552–2566), including an enhanced N400 (250–450 ms) to nouns compared with verbs and an enhanced frontal positivity (300–700 ms) to unambiguous verbs in relation to unambiguous nouns. A sustained frontal negativity (250–900 ms) that was previously linked to word class ambiguity also appeared in this study but was specific to word class ambiguous items that also had a high level of semantic ambiguity; word class ambiguous items without semantic ambiguity, in contrast, were more positive than class unambiguous words in the early part of this time window (250–500 ms). Thus, this frontal negative effect seems to be driven by the need to resolve the semantic ambiguity that is sometimes associated with different grammatical uses of a word class ambiguous homograph rather than by the class ambiguity per se.
Language; Word class; Word class ambiguity; Noun–verb homonymy; ERP
Theories of verbal rehearsal usually assume that whole words are being rehearsed. However, words can be conceptualized as letter sequences, syllables, or onset-vowel-coda structures, among many other units of word structure. A more general term is the ‘grain size’ of word units (Ziegler and Goswami, 2005). In the current study, a new method measured the percentage of correctly remembered word structure. The number of letters in the correct letter sequence as a percentage of word length was calculated, disregarding missing or added letters. Forced rehearsal was tested by repeating each memory list four times. We tested low frequency (LF) English words versus geographical (UK) town names to control for content. We also tested unfamiliar international (INT) non-words and names of international (INT) European towns to control for familiarity. Immediate versus distributed repetition was tested with a between-subject design. Participants responded with word fragments in their written recall, especially when they had to remember unfamiliar words. While memory of whole words was sensitive to content, presentation distribution, and individual sex and language differences, recall of word fragments was not. There was no trade-off between memory for word fragments and whole-word recall during repetition; instead, word fragments also increased significantly. Moreover, while whole word responses correlated with each other during repetition, and word fragment responses correlated with each other during repetition, these two types of word recall responses were not correlated with each other. Thus, there may be a lower layer consisting of free, sparse word fragments and an upper layer that consists of language-specific, orthographically and semantically constrained words.
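One way to compute such a percentage measure (the exact algorithm is not specified here) is longest-common-subsequence matching between target and response, which naturally disregards missing or added letters while preserving letter order. A hedged sketch, with function names of our own choosing:

```python
def lcs_length(a, b):
    """Longest common subsequence length via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def percent_structure_recalled(target, response):
    """Letters of the response that appear in the target's order,
    as a percentage of the target's length; missing or added
    letters are simply ignored."""
    return 100.0 * lcs_length(target.lower(), response.lower()) / len(target)
```

For example, the response "casle" for the target "castle" preserves five of six letters in order, scoring about 83%.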
word fragments; word rehearsal; working memory; visual cache; inner scribe; word form; orthographic pattern
Recent research into stuttering in English has shown that function word disfluency decreases with age whereas content word disfluency increases. Also, function words that precede a content word are significantly more likely to be stuttered than those that follow content words (Au-Yeung, Howell and Pilgrim, 1998; Howell, Au-Yeung and Sackin, 1999). These studies have used the concept of the phonological word (PW) as a means of investigating these phenomena. Phonological words help to determine the position of function words relative to content words and to establish the origin of the patterns of disfluency with respect to these two word classes. The current investigation analysed German speech for similar patterns. German contains many long compound nouns; on this basis, German content words are more complex than English ones. Thus, the patterns of disfluency within phonological words may differ between German and English. Results indicated three main findings. First, function words that occupy an early position in a PW have higher rates of disfluency than those that occur later in a PW, this being most apparent for the youngest speakers. Second, function words that precede the content word in a PW have higher rates of disfluency than those that follow the content word. Third, young speakers exhibit high rates of disfluency on function words, but this drops off with age and, correspondingly, disfluency rate on content words increases. The patterns within phonological words may be general to German and English and can be accounted for by the EXPLAN model, assuming lexical class operates equivalently across these languages or that lexical categories contain some common characteristic that is associated with fluency across the languages.
Stuttering; German; function and content words
The field of molecular evolution provides many examples of the principle that molecular differences between species contain information about evolutionary history. One surprising case can be found in the frequency of short words in DNA: more closely related species have more similar word compositions. Interest in this has often focused on its utility in deducing phylogenetic relationships. However, it is also of interest because of the opportunity it provides for studying the evolution of genome function. Word-frequency differences between species change too slowly to be purely the result of random mutational drift. Rather, their slow pattern of change reflects the direct or indirect action of purifying selection and the presence of functional constraints. Many such constraints are likely to exist, and an important challenge is to distinguish them. Here we develop a method to do so by isolating the effects acting at different word sizes. We apply our method to 2-, 4-, and 8-base-pair (bp) words across several classes of noncoding sequence. Our major result is that similarities in 8-bp word frequencies scale with evolutionary time for regions immediately upstream of genes. This association is present although weaker in intronic sequence, but cannot be detected in intergenic sequence using our method. In contrast, 2-bp and 4-bp word frequencies scale with time in all classes of noncoding sequence. These results suggest that different genomic processes are involved at different word sizes. The pattern in 2-bp and 4-bp words may be due to evolutionary changes in processes such as DNA replication and repair, as has been suggested before. The pattern in 8-bp words may reflect evolutionary changes in gene-regulatory machinery, such as changes in the frequencies of transcription-factor binding sites, or in the affinity of transcription factors for particular sequences.
One of the foundations of molecular evolution is the idea that more closely related species are more similar on the molecular level. One example that has been known for several years is the genomic composition of short words (i.e., short segments) of DNA. Given a sample of genome sequence, one can count the occurrences of all words of a certain length. It turns out that closely related species have more similar word frequencies. The pattern of how these frequencies change over evolutionary time is likely to be influenced by the many functions of the genome (coding for proteins, controlling gene expression, etc.). Bush and Lahn investigated the influence of genomic function on word-frequency variation in 13 animal genomes. Using a method designed to isolate the effects acting at particular word sizes, the authors examined how word frequencies vary in different categories of noncoding sequence. They found that interspecies patterns of word-frequency variation change depending on word size and sequence category. These results suggest that noncoding sequence is subject to different functional constraints depending on its location in the genome. An especially interesting possibility is that the patterns in longer words may reflect evolutionary changes in gene regulatory machinery.
The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, the delay of complex word reading in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being maximum in DW-2, medium in CW-2, and minimum in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms by which sign language structure affects written word processing and delays it in deaf adolescents relative to hearing peers of the same age.
Genome sequences can be conceptualized as arrangements of motifs or words. The frequencies and positional distributions of these words within particular non-coding genomic segments provide important insights into how the words function in processes such as mRNA stability and regulation of gene expression.
Using an enumerative word discovery approach, we investigated the frequencies and positional distributions of all 65,536 different 8-letter words in the genome of Arabidopsis thaliana. Focusing on promoter regions, introns, and 3' and 5' untranslated regions (3'UTRs and 5'UTRs), we compared word frequencies in these segments to genome-wide frequencies. The statistically interesting words in each segment were clustered with similar words to generate motif logos. We investigated whether words were clustered at particular locations or were distributed randomly within each genomic segment, and we classified the words using gene expression information from public repositories. Finally, we investigated whether particular sets of words appeared together more frequently than others.
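As a rough illustration of the enumerative word-counting step, the sketch below counts all overlapping k-letter words in a sequence and compares segment frequencies to genome-wide (background) frequencies. It is a toy version of the analysis only — no clustering, motif logos, or statistical testing — and the function names are hypothetical.

```python
from collections import Counter

def word_counts(seq, k=8):
    """Count all overlapping k-letter words in a sequence."""
    seq = seq.upper()
    return Counter(seq[i:i+k] for i in range(len(seq) - k + 1))

def enrichment(segment, background, k=8):
    """Ratio of each word's relative frequency in a segment to its
    genome-wide (background) relative frequency. Words absent from
    the background are reported as None ('unword' candidates)."""
    seg, bg = word_counts(segment, k), word_counts(background, k)
    seg_total = sum(seg.values())
    bg_total = sum(bg.values())
    out = {}
    for w, c in seg.items():
        if w in bg:
            out[w] = (c / seg_total) / (bg[w] / bg_total)
        else:
            out[w] = None
    return out
```

With k = 8 over the full Arabidopsis genome, `word_counts` would enumerate up to all 65,536 possible 8-letter words (more, counting ambiguity codes), so a production implementation would stream over chromosome files rather than hold sequences in memory.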
Our studies provide a detailed view of the word composition of several segments of the non-coding portion of the Arabidopsis genome. Each segment contains a unique word-based signature. The respective signatures consist of the sets of enriched words, 'unwords', and word pairs within a segment, as well as the preferential locations and functional classifications for the signature words. Additionally, the positional distributions of enriched words within the segments highlight possible functional elements, and the co-associations of words in promoter regions likely represent the formation of higher order regulatory modules. This work is an important step toward fully cataloguing the functional elements of the Arabidopsis genome.
Phonological words (PWs) are defined as having a single word that acts as a nucleus and an optional number of function words, preceding and following it, that act as satellites. Content and function words are one way of specifying the nucleus and satellites of a PW. PWs, defined in this way, have been found useful in the characterization of patterns of disfluency over ages for both English and Spanish speakers who stutter. Since content words carry stress in English, PWs segmented using content words as the nucleus would correspond to a large extent with PWs segmented using a stressed word as the nucleus. This correlation between word type and stress does not apply to the same extent in Spanish. Samples of Spanish from speakers of different ages were segmented into PWs using a stressed, rather than a content, word as the nucleus and unstressed, rather than function, words as satellites. PWs were partitioned into those that were common to the two segmentation methods (common set) and those that differed (different set). There were two separate segmentations when PWs differed, those appropriate to content word nuclei, and those appropriate to stressed word nuclei. The two types of segmentation on the different set were analyzed separately to see whether one, both or neither method led to similar patterns of disfluency to those reported when content words were used as nuclei in English and Spanish. Generally speaking, the patterns of stuttering in PWs found in English applied to all three analyses (common and the two on the different set) in Spanish. Thus, neither segmentation method showed a marked superiority in predicting the patterns of disfluency over age groups for the different set of Spanish data. It is argued that stressed or content word status can lead to a word being a nucleus and that there may be other factors (e.g. speech rate) that underlie stressed words and content words that affect the words around these PW nuclei in a similar way.
Developmental stuttering; Spanish; metrical influences on disfluency; lexical influences on disfluency; EXPLAN theory
Stuttering on function words was examined in 51 people who stutter. The people who stutter were subdivided into young (2 to 6 years), middle (6 to 9 years), and older (9 to 12 years) child groups; teenagers (13 to 18 years); and adults (20 to 40 years). As reported by previous researchers, children up to about age 9 stuttered more on function words (pronouns, articles, prepositions, conjunctions, auxiliary verbs), whereas older people tended to stutter more on content words (nouns, main verbs, adverbs, adjectives). Function words in early positions in utterances, again as reported elsewhere, were more likely to be stuttered than function words at later positions in an utterance. This was most apparent for the younger groups of speakers. For the remaining analyses, utterances were segmented into phonological words on the basis of Selkirk’s work (1984). Stuttering rate was higher when function words occurred in early phonological word positions than other phonological word positions whether the phonological word appeared in initial position in an utterance or not. Stuttering rate was highly dependent on whether the function word occurred before or after the single content word allowed in Selkirk’s (1984) phonological words. This applied, once again, whether the phonological word was utterance-initial or not. It is argued that stuttering of function words before their content word in phonological words in young speakers is used as a delaying tactic when the forthcoming content word is not prepared for articulation.
stuttering; phonological words; function words; speech plan
A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition.
Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming.
The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing impaired populations of children and adults.
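A simplified version of the similarity-neighborhood computation and a Luce-style probability rule can be sketched as follows. This is a sketch under the common one-phoneme-difference (substitution, insertion, or deletion) neighbor definition; it omits the stimulus intelligibility and neighbor confusability terms of the full neighborhood activation model, and the function names are ours.

```python
def one_phoneme_apart(a, b):
    """True if b differs from a by exactly one substitution,
    insertion, or deletion (a common neighbor definition)."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        return sum(x != y for x, y in zip(a, b)) == 1
    if la > lb:
        a, b = b, a
    # a is the shorter string: check deletion of one symbol from b
    return any(b[:i] + b[i+1:] == a for i in range(len(b)))

def neighborhood(word, lexicon):
    """All lexicon entries one phoneme away from the target."""
    return [w for w in lexicon if one_phoneme_apart(word, w)]

def neighborhood_probability(word, lexicon, freq):
    """Simplified Luce-style choice rule: the target's frequency
    divided by the summed frequency of the target plus its
    similarity neighbors. Higher values predict easier recognition."""
    total = freq[word] + sum(freq[n] for n in neighborhood(word, lexicon))
    return freq[word] / total
```

On phonemic transcriptions, a high-frequency word in a sparse neighborhood scores near 1 (lexically “easy”), while a low-frequency word surrounded by frequent neighbors scores near 0 (lexically “hard”).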
The purposes of this study were 1) to examine the effect of lexical characteristics on the spoken word recognition performance of children who use a multichannel cochlear implant (CI), and 2) to compare their performance on lexically controlled word lists with their performance on a traditional test of word recognition, the PB-K.
In two different experiments, 14 to 19 pediatric CI users who demonstrated at least some open-set speech recognition served as subjects. Based on computational analyses, word lists were constructed to allow systematic examination of the effects of word frequency, lexical density (i.e., the number of phonemically similar words, or neighbors), and word length. The subjects’ performance on these new tests and the PB-K also was compared.
The percentage of words correctly identified was significantly higher for lexically “easy” words (high frequency words with few neighbors) than for “hard” words (low frequency words with many neighbors), but there was no lexical effect on phoneme recognition scores. Word recognition performance was consistently higher on the lexically controlled lists than on the PB-K. In addition, word recognition was better for multisyllabic than for monosyllabic stimuli.
These results demonstrate that pediatric cochlear implant users are sensitive to the acoustic-phonetic similarities among words, that they organize words into similarity neighborhoods in long-term memory, and that they use this structural information in recognizing isolated words. The results further suggest that the PB-K underestimates these subjects’ spoken word recognition.
It has been suggested that the variability among studies in the onset of lexical effects may be due to a series of methodological differences. In this study we investigated the role of orthographic familiarity, phonological legality and number of orthographic neighbours of words in determining the onset of word/non-word discriminative responses.
ERPs were recorded from 128 sites in 16 Italian University students engaged in a lexical decision task. Stimuli were 100 words, 100 quasi-words (obtained by the replacement of a single letter), 100 pseudo-words (non-derived) and 100 illegal letter strings. All stimuli were balanced for length; words and quasi-words were also balanced for frequency of use, domain of semantic category and imageability. SwLORETA source reconstruction was performed on ERP difference waves of interest.
Overall, the data provided evidence that the latency of lexical effects (word/non-word discrimination) varied as a function of the number of a word's orthographic neighbours, being shorter to non-derived than to derived pseudo-words. This suggests some caveats about the use in lexical decision paradigms of quasi-words obtained by transposing or replacing only 1 or 2 letters. Our findings also showed that the left occipito-temporal area, reflecting the activity of the left fusiform gyrus (BA37) of the temporal lobe, was affected by the visual familiarity of words, thus explaining its lexical sensitivity (word vs. non-word discrimination). The temporo-parietal area was markedly sensitive to phonological legality, exhibiting a clear-cut discriminative response between illegal and legal strings as early as 250 ms.
The onset of lexical effects in a lexical decision paradigm depends on a series of factors, including orthographic familiarity, degree of global lexical activity, and phonologic legality of non-words.
The problem of finding the shortest absent words in DNA data has recently been addressed, and algorithms for its solution have been described. It has been noted that longer absent words might also be of interest, but the existing algorithms only provide generic absent words by trivially extending the shortest ones.
We show how absent words relate to the repetitions and structure of the data, and define a new and larger class of absent words, called minimal absent words, that still captures the essential properties of the shortest absent words introduced in recent works. The words of this new class are minimal in the sense that if their leftmost or rightmost character is removed, then the resulting word is no longer an absent word. We describe an algorithm for generating minimal absent words that, in practice, runs in approximately linear time. An implementation of this algorithm is publicly available at .
Because the set of minimal absent words that we propose is much larger than the set of the shortest absent words, it is potentially more useful for applications that require a richer variety of absent words. Nevertheless, the number of minimal absent words is still manageable since it grows at most linearly with the string size, unlike generic absent words that grow exponentially. Both the algorithm and the concepts upon which it depends shed additional light on the structure of absent words and complement the existing studies on the topic.
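The defining property of minimal absent words — removing the leftmost or rightmost character yields a word that does occur in the sequence — can be checked naively as below. This brute-force sketch is for illustration of the definition only: it enumerates all candidate words up to a given length and so is exponential in word length, unlike the approximately linear-time algorithm described above; the function names are hypothetical.

```python
from itertools import product

def factors(s, k):
    """All distinct length-k substrings (factors) of s."""
    return {s[i:i+k] for i in range(len(s) - k + 1)}

def minimal_absent_words(s, alphabet, max_len):
    """Enumerate minimal absent words of s up to max_len: words that do
    not occur in s but whose leftmost- and rightmost-truncated forms
    both occur (the empty word occurs trivially)."""
    present = {k: factors(s, k) for k in range(1, max_len + 1)}
    maws = []
    for k in range(1, max_len + 1):
        for tup in product(alphabet, repeat=k):
            w = "".join(tup)
            if w in present[k]:
                continue  # w occurs, so it is not absent
            left, right = w[1:], w[:-1]
            left_ok = left == "" or left in present[k - 1]
            right_ok = right == "" or right in present[k - 1]
            if left_ok and right_ok:
                maws.append(w)
    return maws
```

For example, over the sequence ACGT the 2-mer AA is a minimal absent word: it is absent, yet both of its one-character truncations (A and A) occur.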
Adults with aphasia often try mightily to produce specific words, but their word-finding attempts are frequently unsuccessful. However, the word retrieval process may contain rich information that communicates a desired message regardless of word-finding success.
The original article reprinted here reports an investigation that assessed whether patient-generated self cues inherent in the word retrieval process could be interpreted by listener/observers and improve communicative effectiveness for adults with aphasia. The newly added commentary identifies and reports tentative conclusions from 18 investigations of self-generated cues in aphasia since the 1982 paper. It further provides a rationale for increasing research on self-generated cueing and notes a surprising lack of attention to the questions investigated in the original article. The original research is also connected with more recent qualitative investigations of interactional, as opposed to transactional, communicative exchange.
Methods & Procedures
While performing single-word production tasks, 10 adults with aphasia produced 107 utterances that contained spontaneous word retrieval behaviours. To determine the “communicative value” of these behaviours, herein designated self cues or self-generated cues, the utterance-final (potential target) word was edited out and the edited utterances were dubbed onto a videotape. Six naïve observers, three of whom received some context about the nature of word retrieval in aphasia and possible topics for the utterances, and three of whom received no information, predicted the target word of each utterance from the word-finding behaviours alone. The communicative value of the self-generated cues was determined for each individual with aphasia by summing percent correct word retrieval and percent correct observer prediction of target words, based on word retrieval behaviours. The newly added commentary describes some challenges of investigating a “communicative value” outcome, and indicates what would and would not change about the methods, if we did the study today.
Outcomes & Results
The observer group that was given some context information appeared to be more successful at predicting target words than the group without any such information. Self-generated cues enhanced communication for the majority of individuals with aphasia, with some cues (e.g., descriptions/gestures of action or function) appearing to carry more communicative value than others (e.g., semantic associates). The commentary again indicates how and why we would change this portion of the investigation if conducting the study at this time.
The results are consistent with Holland’s (1977) premise that people with aphasia do well at communication, regardless of the words they produce. The finding that minimal context information may assist observers in understanding the communicative intent of people with aphasia has important implications for training family members to interpret self-generated cues. The new commentary reinforces these conclusions, highlights potential differences between self cues that improve word-finding success and those that enhance message transmission, and points to some additional research needs.
Patterns of word use both reflect and influence a myriad of human activities and interactions. Like other entities that are reproduced and evolve, words rise or decline depending upon a complex interplay between their intrinsic properties and the environments in which they function. Using Internet discussion communities as model systems, we define the concept of a word niche as the relationship between the word and the characteristic features of the environments in which it is used. We develop a method to quantify two important aspects of the size of the word niche: the range of individuals using the word and the range of topics it is used to discuss. Controlling for word frequency, we show that these aspects of the word niche are strong determinants of changes in word frequency. Previous studies have already indicated that word frequency itself is a correlate of word success at historical time scales. Our analysis of changes in word frequencies over time reveals that the relative sizes of word niches are far more important than word frequencies in the dynamics of the entire vocabulary at shorter time scales, as the language adapts to new concepts and social groupings. We also distinguish endogenous versus exogenous factors as additional contributors to the fates of words, and demonstrate the force of this distinction in the rise of novel words. Our results indicate that short-term nonstationarity in word statistics is strongly driven by individual proclivities, including inclinations to provide novel information and to project a distinctive social identity.
Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: associative linking models and short-term memory buffer models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data account for the so-called "effortful hypothesis", where distorted input has a detrimental impact on prior information stored in short-term memory.
modeling; simulations; recall; word lists; associations
A gating technique was used in two studies of spoken word identification that investigated the relationship between the available acoustic–phonetic information in the speech signal and the context provided by meaningful and semantically anomalous sentences. The duration of intact spoken segments of target words and the location of these segments at the beginnings or endings of words in sentences were varied. The amount of signal duration required for word identification and the distribution of incorrect word responses were examined. Subjects were able to identify words in spoken sentences with only word-initial or only word-final acoustic–phonetic information. In meaningful sentences, less word-initial information was required to identify words than word-final information. Error analyses indicated that both acoustic–phonetic information and syntactic contextual knowledge interacted to generate the set of hypothesized word candidates used in identification. The results provide evidence that word identification is qualitatively different in meaningful sentences than in anomalous sentences or when words are presented in isolation: That is, word identification in sentences is an interactive process that makes use of several knowledge sources. In the presence of normal sentence context, the acoustic–phonetic information in the beginnings of words is particularly effective in facilitating rapid identification of words.
We conducted a preliminary study to examine whether Chinese readers’ spontaneous word segmentation processing is consistent with the national standard rules of word segmentation based on the Contemporary Chinese language word segmentation specification for information processing (CCLWSSIP). Participants were asked to segment Chinese sentences into individual words according to their prior knowledge of words. The results showed that Chinese readers did not follow the segmentation rules of the CCLWSSIP, and their word segmentation processing was influenced by the syntactic categories of consecutive words. In many cases, the participants did not consider the auxiliary words, adverbs, adjectives, nouns, verbs, numerals and quantifiers as single word units. Generally, Chinese readers tended to combine function words with content words to form single word units, indicating they were inclined to chunk single words into large information units during word segmentation. Additionally, the “overextension of monosyllable words” hypothesis was tested and it might need to be corrected to some degree, implying that word length has an implicit influence on Chinese readers’ segmentation processing. Implications of these results for models of word recognition and eye movement control are discussed.
Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the ‘visual word form area’. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. 
In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy.
progressive alexia; letter-by-letter reading; posterior cortical atrophy; logopenic primary progressive aphasia; visual word form system
When young children answer questions, they do so more slowly than adults and appear to have difficulty finding the appropriate words. Because children leave gaps before they respond, it is possible that they could answer faster with gestures than with words. In this study, we compare gestural and verbal responses from one child between the ages of 1;4 and 3;5, to adult Where and Which questions, which can be answered with gestures and/or words. After extracting all adult Where and Which questions and child answers from longitudinal videotaped sessions, we examined the timing from the end of each question to the start of the response, and compared the timing for gestures and words. Child responses could take the form of a gesture or word(s); the latter could be words repeated from the adult question or new words retrieved by the child. Or responses could be complex: a gesture + word repeat, gesture + new word, or word repeat + new word. Gestures were the fastest overall, followed successively by word-repeats, then new-word responses. This ordering, with gestures ahead of words, suggests that the child knows what to answer but needs more time to retrieve any relevant words. In short, word retrieval and articulation appear to be bottlenecks in the timing of responses: both add to the planning required in answering a question.
where and which questions; answers; gestures; words; timing
In the present study, neurophysiological correlates of mismatching information in lexical access were investigated with a fragment priming paradigm. Event-related brain potentials were recorded for written words following spoken word onsets that either matched (e.g., kan – Kante [Engl. edge]), partially mismatched (e.g., kan – Konto [Engl. account]), or were unrelated (e.g., kan – Zunge [Engl. tongue]). Previous psycholinguistic research postulated the activation of multiple words in the listeners' mental lexicon which compete for recognition. Accordingly, matching words were assumed to be strongly activated competitors, which inhibit less strongly activated partially mismatching words.
ERPs for matching and unrelated control words differed between 300 and 400 ms. Difference waves (unrelated control words – matching words) replicate a left-hemispheric P350 effect in this time window. Although smaller than for matching words, a P350 effect and behavioural facilitation were also found for partially mismatching words. Minimum norm solutions point to a left hemispheric centro-temporal source of the P350 effect in both conditions. The P350 is interpreted as a neurophysiological index for the activation of matching words in the listeners' mental lexicon. In contrast to the P350 and the behavioural responses, a brain potential ranging between 350 and 500 ms (N400) was found to be equally reduced for matching and partially mismatching words as compared to unrelated control words. This latter effect might be related to strategic mechanisms in the priming situation.
A left-hemispheric neuronal network engaged in lexical access appears to be gradually activated by matching and partially mismatching words. Results suggest that neural processing of matching words does not inhibit processing of partially mismatching words during early stages of lexical identification. Furthermore, the present results indicate that neurophysiological correlates observed in fragment priming reflect different aspects of target processing that are cumulated in behavioural responses. In particular, the left-hemispheric P350 difference potential appears to be closely related to fine-grained activation differences of modality-independent representations in the listeners' mental lexicon. This neurophysiological index might guide future studies aimed at investigating neural aspects of lexical access.
The occipito-temporal N170 component represents the first step at which face, object and word processing are discriminated along the ventral stream of the brain. The leftward N170 asymmetry observed during reading has often been associated with prelexical orthographic visual word form activation. However, some studies have reported a lexical frequency effect for this component, particularly during word repetition, which appears to contradict this prelexical orthographic step. Here, we tested the hypothesis that under word repetition conditions, discrimination between words would operate on a visual rather than an orthographic basis. In this case, N170 activity may correspond to logographic processing, in which a word is processed as a whole.
To test this assumption, frequent words, infrequent words and pseudowords were presented to subjects who had to complete a visual lexical decision task. Three repetition conditions were defined: (1) weak repetition, (2) massive repetition, and (3) massive repetition with font alternation. This last condition was designed to change visual word shape during repetition and therefore to interfere with a possible visual strategy during word recognition.
Behavioral data showed a substantial frequency effect for the weak repetition condition, a smaller but significant frequency effect for massive repetition, and no frequency effect for the changing-font repetition. Moreover, alternating-font repetitions slowed subjects' responses in comparison to "simple" massive repetition.
ERP results showed larger N170 amplitude in the left hemisphere for frequent words than for both infrequent words and pseudowords during massive repetition. Moreover, when words were repeated in different fonts this N170 effect was absent, suggesting a visual locus for this N170 frequency effect.
The N170 represents an important step in visual word recognition, probably consisting of prelexical orthographic processing. During the reading of very frequent words, however, or after massive repetition of a word, it could reflect a more holistic process in which words are processed as a global visual pattern.
Crowding, the adverse spatial interaction due to proximity of adjacent targets, has been suggested as an explanation for slow reading in peripheral vision. The purposes of this study were to (1) demonstrate that crowding exists at the word level and (2) examine whether or not reading speed in central and peripheral vision can be enhanced with increased vertical word spacing.
Five normal observers read aloud sequences of six unrelated four-letter words presented on a computer monitor, one word at a time, using rapid serial visual presentation (RSVP). Reading speeds were calculated based on the RSVP exposure durations yielding 80% correct. Testing was conducted at the fovea and at 5° and 10° in the inferior visual field. Critical print size (CPS) for each observer and at each eccentricity was first determined by measuring reading speeds for four print sizes using unflanked words. We then presented words at 0.8× or 1.4× CPS, with each target word flanked by two other words, one above and one below the target word. Reading speeds were determined for vertical word spacings (baseline-to-baseline separation between two vertically separated words) ranging from 0.8× to 2× the standard single-spacing, as well as the unflanked condition.
At the fovea, reading speed increased with vertical word spacing up to about 1.2× to 1.5× the standard spacing and remained constant and similar to the unflanked reading speed at larger vertical word spacings. In the periphery, reading speed also increased with vertical word spacing, but it remained below the unflanked reading speed for all spacings tested. At 2× the standard spacing, peripheral reading speed was still about 25% lower than the unflanked reading speed for both eccentricities and print sizes. Results from a control experiment showed that the greater reliance of peripheral reading speed on vertical word spacing was also found in the right visual field.
Increased vertical word spacing, which presumably decreases the adverse effect of crowding between adjacent lines of text, benefits reading speed. This benefit is greater in peripheral than central vision.
crowding; reading; peripheral vision; low vision