Dysfluencies on function words in the speech of people who stutter mainly occur when function words precede, rather than follow, content words (Au-Yeung, Howell, & Pilgrim, 1998). It is hypothesized that such function word dysfluencies occur when the plan for the subsequent content word is not ready for execution. Repetition and hesitation on the function words buy time to complete the plan for the content word. Stuttering arises when speakers abandon this delaying strategy and carry on, attempting production of the subsequent, partly prepared content word. To test these hypotheses, the relationship between dysfluency on function and content words was investigated in the spontaneous speech of 51 people who stutter and 68 people who do not stutter. These participants were subdivided into the following age groups: 2–6-year-olds, 7–9-year-olds, 10–12-year-olds, teenagers (13–18 years), and adults (20–40 years). Very few dysfluencies occurred for either fluency group on function words that occupied a position after a content word. For both fluency groups, dysfluency within each phonological word occurred predominantly on either the function word preceding the content word or on the content word itself, but not both. Fluent speakers had a higher percentage of dysfluency on initial function words than on content words. Whether dysfluency occurred on initial function words or content words changed over age groups for speakers who stutter. For the 2–6-year-old speakers who stutter, there was a higher percentage of dysfluencies on initial function words than on content words. In subsequent age groups, dysfluency decreased on function words and increased on content words. These data are interpreted as suggesting that fluent speakers use repetition of function words to delay production of the subsequent content words, whereas people who stutter carry on and attempt a content word on the basis of an incomplete plan.
stuttering; phonological words; function words; content words; speech plan
To identify changes in emergency department (ED) syndromic surveillance data by analyzing trends in chief complaint (CC) word count; to compare these changes to coding changes reported by EDs; and to examine how these changes might affect the ability of syndromic surveillance systems to identify syndromes in a consistent manner.
The New York City (NYC) Department of Health and Mental Hygiene (DOHMH) receives daily ED data from 49 of NYC’s 52 hospitals, representing approximately 95% of ED visits citywide. Chief complaint (CC) is categorized into syndrome groupings using text recognition of symptom key-words and phrases. Hospitals are not required to notify the DOHMH of any changes to procedures or health information systems (HIS). Previous work has shown that CC word count varies over time within and among EDs. These variations in CC word count may affect the quality and type of data received by the DOHMH, thereby affecting the ability to detect syndrome visits consistently.
The daily mean number of words in the chief complaint field was examined by hospital from 2008–2011. Spectral analyses were performed on daily CC word count by hospital to explore temporal trends. Change Point Analysis (CPA) using Taylor’s Method with a maximum change level of four was conducted on the CC field by hospital using 1,000 bootstrap samples. According to Taylor, a level one change is the most important change detected on the program’s first pass through the data. For this analysis, a change point was considered significant if it was level one, represented an average change of more than 0.50 words per day, and was sustained for at least 6 months before a level two change of at least 0.50 words occurred. Results of the CPA were compared to changes reported in a survey of 49 hospitals, conducted by DOHMH staff, which collected information about their HIS and coding practices, including any recent system changes.
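The change-point step described above can be sketched in code. The following is a minimal illustration of Taylor’s CUSUM-plus-bootstrap approach applied to a single hospital’s daily word-count series; it is not the DOHMH implementation, and the function and variable names are ours.

```python
import numpy as np

def cusum_change_point(series, n_boot=1000, seed=0):
    """Detect a single (level one) change point in a daily word-count
    series using Taylor's CUSUM + bootstrap method.

    Returns (index, mean shift in words/day, bootstrap confidence)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(series, dtype=float)
    s = np.cumsum(x - x.mean())          # CUSUM of deviations from the mean
    s_range = s.max() - s.min()          # magnitude of the observed excursion
    # Bootstrap: how often does a randomly reordered series show as
    # large an excursion as the observed one?
    exceed = 0
    for _ in range(n_boot):
        xb = rng.permutation(x)
        sb = np.cumsum(xb - xb.mean())
        if sb.max() - sb.min() >= s_range:
            exceed += 1
    confidence = 1.0 - exceed / n_boot
    cp = int(np.argmax(np.abs(s)))       # CUSUM estimate of the change point
    shift = x[cp + 1:].mean() - x[:cp + 1].mean()  # average change, words/day
    return cp, shift, confidence
```

A detected change would then be screened against the abstract’s criteria (shift above 0.50 words per day, sustained for at least 6 months).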
When a significant level one change was identified, time series graphs for six months before and after the change were created for five syndromes (cold, diarrhea, fever-flu, influenza-like illness, and respiratory) and each syndrome’s constituent symptom categories (e.g., cough, fever). Changes in syndrome count and composition at the level one change in word count were noted.
The mean chief complaint word count across all hospitals in NYC from 2008–2011 was 3.14, with a range of 0 to 18 words. CPA detected a significant level one change in 21 hospitals, with a mean change of 0.60 words: 9 increases (mean = 0.71 words) and 12 decreases (mean = 0.53 words). According to the results of a survey of 49 NYC EDs, 19 have changed coding practices or health information systems since 2008. CPA identified a coincident and significant shift in word count for 8 of these hospitals. CPA also detected significant shifts in word count for 13 hospitals that did not report any changes. Figure 1 shows the results of CPA from one ED in NYC.
We observed immediate changes in daily syndrome count after the detected change in CC word count. For example, respiratory syndrome count increased with increased word count and decreased with decreased word count for 10 of the 21 EDs with a significant change in word count. Only 2 EDs saw an opposite effect on respiratory syndrome count. Meanwhile, 9 EDs saw no obvious change in respiratory syndrome count. Furthermore, these changes in daily CC word count coincided with subsequent changes in syndrome composition, the breakdown of syndromes into constituent symptoms.
Change Point Analysis may be a useful method for prospectively detecting shifts in CC word count, which might represent a change in ED practices. In some instances, changes to CC word count had an apparent effect on both syndrome capture and syndrome composition. Further studies are required to determine how often these changes occur and how they may affect the quality of syndromic surveillance.
Chief Complaint; Word Count; Change Point Analysis
Word reading speed in peripheral vision is slower when words are in close proximity to other words (Chung, 2004). This word crowding effect could arise as a consequence of the interaction of low-level letter features between words, or of the interaction between high-level holistic representations of words. We evaluated these two hypotheses by examining how word crowding changes for five configurations of flanking words: control (flanking words oriented upright); scrambled (letters in each flanking word scrambled in order); horizontal-flip (each flanking word the left-right mirror-image of the original); letter-flip (each letter of the flanking word the left-right mirror-image of the original); and vertical-flip (each flanking word the up-down mirror-image of the original). The low-level letter feature interaction hypothesis predicts a similar word crowding effect for all flanker configurations, while the high-level holistic representation hypothesis predicts a weaker word crowding effect for all the alternative flanker conditions compared with the control condition. We found that oral reading speed for words flanked above and below by other words, measured at 10° eccentricity in the nasal field, showed the same dependence on the vertical separation between the target and its flanking words for the various flanker configurations. The result was also similar when we rotated the flanking words by 90° to disrupt the periodic vertical pattern, which presumably is the main structure in words. The remarkably similar word crowding effect irrespective of flanker configuration suggests that word crowding arises as a consequence of interactions of low-level letter features.
crowding; word recognition; peripheral vision; features; holistic representation
Theories of verbal rehearsal usually assume that whole words are being rehearsed. However, words consist of letter sequences, or syllables, or word onset-vowel-coda, among many other conceptualizations of word structure. A more general term is the ‘grain size’ of word units (Ziegler and Goswami, 2005). In the current study, a new method quantified the percentage of word structure that was correctly remembered: the number of letters recalled in the correct sequence was calculated as a percentage of word length, disregarding missing or added letters. Forced rehearsal was tested by repeating each memory list four times. We tested low-frequency (LF) English words versus geographical (UK) town names to control for content. We also tested unfamiliar international (INT) non-words and names of international (INT) European towns to control for familiarity. Immediate versus distributed repetition was tested with a between-subjects design. Participants responded with word fragments in their written recall, especially when they had to remember unfamiliar words. While memory of whole words was sensitive to content, presentation distribution, and individual sex and language differences, recall of word fragments was not. There was no trade-off between memory of word fragments and whole-word recall during repetition; instead, word fragments also increased significantly. Moreover, while whole-word responses correlated with each other during repetition, and word-fragment responses correlated with each other during repetition, these two types of word recall responses were not correlated with each other. Thus, there may be a lower layer consisting of free, sparse word fragments and an upper layer consisting of language-specific, orthographically and semantically constrained words.
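One way to operationalize the scoring described above (letters in the correct sequence, disregarding missing or added letters) is a longest-common-subsequence measure. This sketch is our hedged reading of the abstract’s description, not necessarily the authors’ exact algorithm.

```python
from functools import lru_cache

def fragment_score(target, response):
    """Percentage of the target word's letters recovered in the correct
    order, disregarding missing or added letters (longest common
    subsequence between target and written response)."""
    @lru_cache(maxsize=None)
    def lcs(i, j):
        if i == len(target) or j == len(response):
            return 0
        if target[i] == response[j]:
            return 1 + lcs(i + 1, j + 1)
        return max(lcs(i + 1, j), lcs(i, j + 1))
    return 100.0 * lcs(0, 0) / len(target)
```

For example, a response of "harbr" to the target "harbour" preserves five of seven letters in order, scoring roughly 71%.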
word fragments; word rehearsal; working memory; visual cache; inner scribe; word form; orthographic pattern
The goal of this study was to jointly examine the effects of word class, word class ambiguity, and semantic ambiguity on the brain response to words in syntactically specified contexts. Four types of words were used: (1) word class ambiguous words with a high degree of semantic ambiguity (e.g., ‘duck’); (2) word class ambiguous words with little or no semantic ambiguity (e.g., ‘vote’); (3) word class unambiguous nouns (e.g., ‘sofa’); and (4) word class unambiguous verbs (e.g., ‘eat’). These words were embedded in minimal phrases that explicitly specified their word class: “the” for nouns (and ambiguous words used as nouns) and “to” for verbs (and ambiguous words used as verbs). Our results replicate the basic word class effects found in prior work (Federmeier, K.D., Segal, J.B., Lombrozo, T., Kutas, M., 2000. Brain responses to nouns, verbs and class ambiguous words in context. Brain, 123 (12), 2552–2566), including an enhanced N400 (250–450 ms) to nouns compared with verbs and an enhanced frontal positivity (300–700 ms) to unambiguous verbs in relation to unambiguous nouns. A sustained frontal negativity (250–900 ms) that was previously linked to word class ambiguity also appeared in this study but was specific to word class ambiguous items that also had a high level of semantic ambiguity; word class ambiguous items without semantic ambiguity, in contrast, were more positive than class unambiguous words in the early part of this time window (250–500 ms). Thus, this frontal negative effect seems to be driven by the need to resolve the semantic ambiguity that is sometimes associated with different grammatical uses of a word class ambiguous homograph rather than by the class ambiguity per se.
Language; Word class; Word class ambiguity; Noun–verb homonymy; ERP
To assess the impact of glaucoma-related vision loss on measures of out-loud reading, including time to say individual words, interval time between consecutive words, lexical errors, skipped words, and repetitions.
Glaucoma subjects (n = 63) with bilateral visual field loss and glaucoma suspect controls (n = 57) were recorded while reading a standardized passage out loud. A masked evaluator determined the start and end of each recorded word and identified reading errors.
Glaucoma subjects demonstrated longer durations to recite individual words (265 vs. 243 ms, P < 0.001), longer intervals between words (154 vs. 124 ms, P < 0.001), and longer word/post-word interval complexes (the time spanned by the word and the interval following the word; 419 vs. 367 ms, P < 0.001) than controls. In multivariable analyses, each 0.1 decrement in log contrast sensitivity (logCS) was associated with a 15.0 ms longer word/post-word interval complex (95% confidence interval [CI] = 9.6–20.4; P < 0.001). Contrast sensitivity significantly interacted with word length, word frequency, and word location at the end of a line with regard to word/post-word interval complex duration (P < 0.05 for all). Glaucoma severity was also associated with more lexical errors (odds ratio = 1.20 for every 0.1 logCS decrement; 95% CI = 1.02–1.39, P < 0.05), but not with more skipped or repeated words.
Glaucoma patients with greater vision loss make more lexical errors, are slower in reciting longer and less frequently used words, and more slowly transition to new lines of text. These problem areas may require special attention when designing methods to rehabilitate reading in patients with glaucoma.
Glaucoma patients with reduced contrast sensitivity have particular difficulty reading longer words, less frequently used words, and transitioning to new lines of text.
glaucoma; reading; contrast sensitivity
The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, in the delayed reading of complex words by deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being greatest in DW-2, intermediate in CW-2, and smallest in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms by which sign language structure affects written word processing and delays it relative to that of hearing peers of the same age.
Recent research into stuttering in English has shown that function word disfluency decreases with age whereas content word disfluency increases. Also, function words that precede a content word are significantly more likely to be stuttered than those that follow content words (Au-Yeung, Howell and Pilgrim, 1998; Howell, Au-Yeung and Sackin, 1999). These studies have used the concept of the phonological word (PW) as a means of investigating these phenomena. Phonological words help to determine the position of function words relative to content words and to establish the origin of the patterns of disfluency with respect to these two word classes. The current investigation analysed German speech for similar patterns. German contains many long compound nouns; on this basis, German content words are more complex than English ones. Thus, the patterns of disfluency within phonological words may differ between German and English. Results indicated three main findings. First, function words that occupy an early position in a PW have higher rates of disfluency than those that occur later in a PW, this being most apparent for the youngest speakers. Second, function words that precede the content word in a PW have higher rates of disfluency than those that follow the content word. Third, young speakers exhibit high rates of disfluency on function words, but this drops off with age and, correspondingly, disfluency rate on content words increases. The patterns within phonological words may be general to German and English and can be accounted for by the EXPLAN model, assuming lexical class operates equivalently across these languages or that lexical categories contain some common characteristic that is associated with fluency across the languages.
Stuttering; German; function and content words
The field of molecular evolution provides many examples of the principle that molecular differences between species contain information about evolutionary history. One surprising case can be found in the frequency of short words in DNA: more closely related species have more similar word compositions. Interest in this has often focused on its utility in deducing phylogenetic relationships. However, it is also of interest because of the opportunity it provides for studying the evolution of genome function. Word-frequency differences between species change too slowly to be purely the result of random mutational drift. Rather, their slow pattern of change reflects the direct or indirect action of purifying selection and the presence of functional constraints. Many such constraints are likely to exist, and an important challenge is to distinguish them. Here we develop a method to do so by isolating the effects acting at different word sizes. We apply our method to 2-, 4-, and 8-base-pair (bp) words across several classes of noncoding sequence. Our major result is that similarities in 8-bp word frequencies scale with evolutionary time for regions immediately upstream of genes. This association is present although weaker in intronic sequence, but cannot be detected in intergenic sequence using our method. In contrast, 2-bp and 4-bp word frequencies scale with time in all classes of noncoding sequence. These results suggest that different genomic processes are involved at different word sizes. The pattern in 2-bp and 4-bp words may be due to evolutionary changes in processes such as DNA replication and repair, as has been suggested before. The pattern in 8-bp words may reflect evolutionary changes in gene-regulatory machinery, such as changes in the frequencies of transcription-factor binding sites, or in the affinity of transcription factors for particular sequences.
One of the foundations of molecular evolution is the idea that more closely related species are more similar on the molecular level. One example that has been known for several years is the genomic composition of short words (i.e., short segments) of DNA. Given a sample of genome sequence, one can count the occurrences of all words of a certain length. It turns out that closely related species have more similar word frequencies. The pattern of how these frequencies change over evolutionary time is likely to be influenced by the many functions of the genome (coding for proteins, controlling gene expression, etc.). Bush and Lahn investigated the influence of genomic function on word-frequency variation in 13 animal genomes. Using a method designed to isolate the effects acting at particular word sizes, the authors examined how word frequencies vary in different categories of noncoding sequence. They found that interspecies patterns of word-frequency variation change depending on word size and sequence category. These results suggest that noncoding sequence is subject to different functional constraints depending on its location in the genome. An especially interesting possibility is that the patterns in longer words may reflect evolutionary changes in gene regulatory machinery.
Genome sequences can be conceptualized as arrangements of motifs or words. The frequencies and positional distributions of these words within particular non-coding genomic segments provide important insights into how the words function in processes such as mRNA stability and regulation of gene expression.
Using an enumerative word discovery approach, we investigated the frequencies and positional distributions of all 65,536 different 8-letter words in the genome of Arabidopsis thaliana. Focusing on promoter regions, introns, and 3' and 5' untranslated regions (3'UTRs and 5'UTRs), we compared word frequencies in these segments to genome-wide frequencies. The statistically interesting words in each segment were clustered with similar words to generate motif logos. We investigated whether words were clustered at particular locations or were distributed randomly within each genomic segment, and we classified the words using gene expression information from public repositories. Finally, we investigated whether particular sets of words appeared together more frequently than others.
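The enumerative word-count step above can be illustrated with a short sketch. Here `word_frequencies` counts every overlapping k-letter word in a sequence, and `enrichment` is a simplified stand-in for the statistical comparison of segment frequencies to genome-wide frequencies; the function names are ours.

```python
from collections import Counter

def word_frequencies(seq, k=8):
    """Count every overlapping k-letter DNA word in a sequence,
    keeping only words over the A/C/G/T alphabet (windows containing
    ambiguity codes such as N are skipped)."""
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return {w: c for w, c in counts.items() if set(w) <= set("ACGT")}

def enrichment(segment_counts, genome_counts, word):
    """Ratio of a word's relative frequency in a genomic segment to its
    genome-wide relative frequency. A value well above 1 marks a
    candidate enriched word; 0 marks an 'unword' absent from the
    segment."""
    seg_f = segment_counts.get(word, 0) / sum(segment_counts.values())
    gen_f = genome_counts.get(word, 0) / sum(genome_counts.values())
    return seg_f / gen_f if gen_f else float("inf")
```

With k = 8 the counter ranges over the same 65,536 possible words enumerated in the study.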
Our studies provide a detailed view of the word composition of several segments of the non-coding portion of the Arabidopsis genome. Each segment contains a unique word-based signature. The respective signatures consist of the sets of enriched words, 'unwords', and word pairs within a segment, as well as the preferential locations and functional classifications for the signature words. Additionally, the positional distributions of enriched words within the segments highlight possible functional elements, and the co-associations of words in promoter regions likely represent the formation of higher order regulatory modules. This work is an important step toward fully cataloguing the functional elements of the Arabidopsis genome.
Phonological words (PWs) are defined as a single word that acts as a nucleus plus any number of optional function words, preceding and following, that act as satellites. Content and function words are one way of specifying the nucleus and satellites of a PW. PWs, defined in this way, have been found useful in the characterization of patterns of disfluency over ages for both English and Spanish speakers who stutter. Since content words carry stress in English, PWs segmented using content words as the nucleus would correspond to a large extent with PWs segmented using a stressed word as the nucleus. This correlation between word type and stress does not apply to the same extent in Spanish. Samples of Spanish from speakers of different ages were segmented into PWs using a stressed, rather than a content, word as the nucleus and unstressed, rather than function, words as satellites. PWs were partitioned into those that were common to the two segmentation methods (common set) and those that differed (different set). When PWs differed, there were two separate segmentations: one appropriate to content word nuclei and one appropriate to stressed word nuclei. The two types of segmentation on the different set were analyzed separately to see whether one, both or neither method led to patterns of disfluency similar to those reported when content words were used as nuclei in English and Spanish. Generally speaking, the patterns of stuttering in PWs found in English applied to all three analyses (common set and the two on the different set) in Spanish. Thus, neither segmentation method showed a marked superiority in predicting the patterns of disfluency over age groups for the different set of Spanish data. It is argued that either stressed or content word status can lead to a word being a nucleus and that there may be other factors (e.g. speech rate) that underlie stressed words and content words and that affect the words around these PW nuclei in a similar way.
Developmental stuttering; Spanish; metrical influences on disfluency; lexical influences on disfluency; EXPLAN theory
Stuttering on function words was examined in 51 people who stutter. The people who stutter were subdivided into young (2 to 6 years), middle (6 to 9 years), and older (9 to 12 years) child groups; teenagers (13 to 18 years); and adults (20 to 40 years). As reported by previous researchers, children up to about age 9 stuttered more on function words (pronouns, articles, prepositions, conjunctions, auxiliary verbs), whereas older people tended to stutter more on content words (nouns, main verbs, adverbs, adjectives). Function words in early positions in utterances, again as reported elsewhere, were more likely to be stuttered than function words at later positions in an utterance. This was most apparent for the younger groups of speakers. For the remaining analyses, utterances were segmented into phonological words on the basis of Selkirk’s work (1984). Stuttering rate was higher when function words occurred in early phonological word positions than other phonological word positions whether the phonological word appeared in initial position in an utterance or not. Stuttering rate was highly dependent on whether the function word occurred before or after the single content word allowed in Selkirk’s (1984) phonological words. This applied, once again, whether the phonological word was utterance-initial or not. It is argued that stuttering of function words before their content word in phonological words in young speakers is used as a delaying tactic when the forthcoming content word is not prepared for articulation.
stuttering; phonological words; function words; speech plan
A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition.
Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming.
The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing impaired populations of children and adults.
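The similarity-neighborhood computation and a Luce-style probability rule can be sketched as follows. The one-segment neighbor definition (single substitution, deletion, or addition) is standard in this literature, but the simplified rule below keeps only the frequency terms and omits the stimulus-intelligibility and confusability terms of the full neighborhood probability rule; names are ours.

```python
def is_neighbor(a, b):
    """True if b differs from a by exactly one segment substitution,
    deletion, or addition."""
    if a == b or abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):                     # substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):                      # make a the shorter string
        a, b = b, a
    i = 0                                    # deletion/addition: a equals b
    while i < len(a) and a[i] == b[i]:       # with one segment removed
        i += 1
    return a[i:] == b[i + 1:]

def neighbors(word, lexicon):
    """Similarity neighborhood: all lexicon entries one segment away."""
    return {w for w in lexicon if is_neighbor(word, w)}

def neighborhood_probability(target_freq, neighbor_freqs):
    """Simplified frequency-weighted choice rule (after Luce, 1959):
    support for the target divided by total support for the target
    plus its neighbors. Lower values predict slower, less accurate
    recognition."""
    return target_freq / (target_freq + sum(neighbor_freqs))
```

A word from a dense, high-frequency neighborhood thus receives a low probability, matching the competitive dynamics of the neighborhood activation model.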
The purposes of this study were 1) to examine the effect of lexical characteristics on the spoken word recognition performance of children who use a multichannel cochlear implant (CI), and 2) to compare their performance on lexically controlled word lists with their performance on a traditional test of word recognition, the PB-K.
In two different experiments, 14 to 19 pediatric CI users who demonstrated at least some open-set speech recognition served as subjects. Based on computational analyses, word lists were constructed to allow systematic examination of the effects of word frequency, lexical density (i.e., the number of phonemically similar words, or neighbors), and word length. The subjects’ performance on these new tests and the PB-K also was compared.
The percentage of words correctly identified was significantly higher for lexically “easy” words (high frequency words with few neighbors) than for “hard” words (low frequency words with many neighbors), but there was no lexical effect on phoneme recognition scores. Word recognition performance was consistently higher on the lexically controlled lists than on the PB-K. In addition, word recognition was better for multisyllabic than for monosyllabic stimuli.
These results demonstrate that pediatric cochlear implant users are sensitive to the acoustic-phonetic similarities among words, that they organize words into similarity neighborhoods in long-term memory, and that they use this structural information in recognizing isolated words. The results further suggest that the PB-K underestimates these subjects’ spoken word recognition.
Substantial evidence indicates that where readers fixate within a word affects the efficiency with which that word is recognized. Indeed, words in alphabetic languages (e.g., English, French) are recognized most efficiently when fixated at their optimal viewing position (OVP), which is near the word center. However, little is known about the effects of fixation location on word recognition in non-alphabetic languages, such as Chinese. Moreover, studies to date have not investigated whether effects of fixation location vary across adult age groups, although it is well established that older readers experience greater difficulty recognizing words due to visual and cognitive declines. Accordingly, the present research examined OVP effects in young and older adult readers recognizing Chinese words presented in isolation. Most words in Chinese are formed from two or more logograms called characters, and so the present experiment investigated the influence of fixation location on the recognition of 2-, 3-, and 4-character words (and nonwords). The older adults experienced generally greater word recognition difficulty. But whereas the young adults recognized words most efficiently when initially fixating the first character of 2-character words and the second character of 3- and 4-character words, the older adults recognized words most efficiently when initially fixating the first character for words of each length. The findings therefore reveal subtle but potentially important adult age differences in the effects of fixation location on Chinese word recognition. Moreover, the similarity in effects for words and nonwords implies a more general age-related change in oculomotor strategy when processing Chinese character-strings.
viewing position; eye movements; Chinese words; aging; word recognition
Top-down contextual influences play a major part in speech understanding, especially in hearing-impaired patients with deteriorated auditory input. These influences are most obvious in difficult listening situations, such as listening to sentences in noise, but can also be observed at the word level under more favorable conditions, as in one of the most commonly used tasks in audiology, i.e., repeating isolated words in silence. This study aimed to explore the role of top-down contextual influences and their dependence on lexical factors and patient-specific factors using standard clinical linguistic material. Spondaic word perception was tested in 160 hearing-impaired patients aged 23–88 years with a four-frequency average pure-tone threshold ranging from 21 to 88 dB HL. Sixty spondaic words were randomly presented at a level adjusted to correspond to a speech perception score ranging between 40 and 70% of the performance intensity function obtained using monosyllabic words. Phoneme and whole-word recognition scores were used to calculate two context-influence indices (the j factor and the ratio of word scores to phonemic scores) and were correlated with linguistic factors, such as phonological neighborhood density and several indices of word occurrence frequency. Contextual influence was greater for spondaic words than in similar studies using monosyllabic words, with an overall j factor of 2.07 (SD = 0.5). For both indices, context use decreased with increasing hearing loss once the average hearing loss exceeded 55 dB HL. In right-handed patients, significantly greater context influence was observed for words presented to the right ear than to the left, especially in patients with many years of education.
The correlations between raw word scores (and context-influence indices) and word occurrence frequencies showed a significant age-dependent effect, with a stronger correlation between perception scores and word occurrence frequencies when the occurrence frequencies were based on the years corresponding to the patients' youth, revealing a “historic” word frequency effect. This effect was still observed for patients with few years of formal education, but recent occurrence frequencies based on current word exposure had a stronger influence for those patients, especially for younger ones.
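The two context-influence indices described above can be sketched numerically. The following is a minimal illustration, assuming the standard Boothroyd–Nittrouer definition of the j factor (j = log p_word / log p_phoneme); the function names are illustrative, not taken from the study:

```python
import math

def j_factor(word_score, phoneme_score):
    """Boothroyd-Nittrouer j factor: the number of effectively
    independent perceptual units implied by whole-word vs phoneme
    recognition scores (proportions strictly between 0 and 1).
    Lower j means greater contextual (top-down) influence."""
    if not (0 < word_score < 1 and 0 < phoneme_score < 1):
        raise ValueError("scores must be strictly between 0 and 1")
    return math.log(word_score) / math.log(phoneme_score)

def word_phoneme_ratio(word_score, phoneme_score):
    """Second index: ratio of whole-word score to phoneme score.
    Higher values indicate greater context use."""
    return word_score / phoneme_score

# If phonemes were recognized fully independently, j would equal the
# number of phonemes in the word; the overall j of 2.07 reported for
# spondees therefore indicates strong context use. For example, a 50%
# word score paired with a ~71% phoneme score yields j close to 2:
j = j_factor(0.50, 0.5 ** 0.5)
```

The hypothetical score pair is chosen so that the word score equals the phoneme score squared, which by construction gives j near 2.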
speech perception; lexical influences; word occurrence frequency; hearing loss; aging; spoken word recognition; spondaic words; laterality
Previous research has shown that word frequency affects judgments of learning (JOLs). Specifically, people give higher JOLs for high-frequency (HF) words than for low-frequency (LF) words. However, the exact mechanism underlying this effect is largely unknown. The present study replicated and extended previous work by exploring the contributions of processing fluency and beliefs to the word frequency effect. In Experiment 1, participants studied HF and LF words and made immediate JOLs. The findings showed that participants gave higher JOLs for HF words than for LF ones, reflecting the word frequency effect. In Experiment 2a (measuring encoding fluency via self-paced study time) and Experiment 2b (disrupting perceptual fluency by presenting words in an easy or difficult font style), we evaluated the contribution of processing fluency. The findings of Experiment 2a revealed no significant difference in self-paced study time between HF and LF words. The findings of Experiment 2b showed that the size of the word frequency effect did not decrease or disappear even when words were presented in a difficult font style. In Experiment 3a (a questionnaire-based study) and Experiment 3b (making pre-study JOLs), we evaluated the role of beliefs in the word frequency effect. The results of Experiment 3a showed that participants gave higher estimates for HF than for LF words. That is, they estimated that hypothetical participants would better remember the HF words. The results of Experiment 3b showed that participants gave higher pre-study JOLs for HF than for LF words. Across experiments, these results suggest that people’s beliefs, not processing fluency, contribute substantially to the word frequency effect on JOLs. However, given questions about the validity of the indices used to measure processing fluency in the current study, we cannot entirely rule out a possible contribution of processing fluency.
The relative contributions of processing fluency and beliefs to the word frequency effect, along with their theoretical implications, are discussed.
word frequency; judgments of learning; processing fluency; beliefs; cue-utilization framework
It has been suggested that the variability among studies in the onset of lexical effects may be due to a series of methodological differences. In this study we investigated the role of orthographic familiarity, phonological legality and number of orthographic neighbours of words in determining the onset of word/non-word discriminative responses.
ERPs were recorded from 128 sites in 16 Italian University students engaged in a lexical decision task. Stimuli were 100 words, 100 quasi-words (obtained by the replacement of a single letter), 100 pseudo-words (non-derived) and 100 illegal letter strings. All stimuli were balanced for length; words and quasi-words were also balanced for frequency of use, domain of semantic category and imageability. SwLORETA source reconstruction was performed on ERP difference waves of interest.
Overall, the data provided evidence that the latency of lexical effects (word/non-word discrimination) varied as a function of the number of a word's orthographic neighbours, being shorter for non-derived than for derived pseudo-words. This suggests some caveats about the use, in lexical decision paradigms, of quasi-words obtained by transposing or replacing only one or two letters. Our findings also showed that the left occipito-temporal area, reflecting the activity of the left fusiform gyrus (BA37) of the temporal lobe, was affected by the visual familiarity of words, thus explaining its lexical sensitivity (word vs. non-word discrimination). The temporo-parietal area was markedly sensitive to phonological legality, exhibiting a clear-cut discriminative response between illegal and legal strings as early as 250 ms.
The onset of lexical effects in a lexical decision paradigm depends on a series of factors, including orthographic familiarity, degree of global lexical activity, and phonological legality of non-words.
The problem of finding the shortest absent words in DNA data has been recently addressed, and algorithms for its solution have been described. It has been noted that longer absent words might also be of interest, but the existing algorithms only provide generic absent words by trivially extending the shortest ones.
We show how absent words relate to the repetitions and structure of the data, and define a new and larger class of absent words, called minimal absent words, that still captures the essential properties of the shortest absent words introduced in recent works. The words of this new class are minimal in the sense that if their leftmost or rightmost character is removed, then the resulting word is no longer an absent word. We describe an algorithm for generating minimal absent words that, in practice, runs in approximately linear time. An implementation of this algorithm is publicly available at .
Because the set of minimal absent words that we propose is much larger than the set of the shortest absent words, it is potentially more useful for applications that require a richer variety of absent words. Nevertheless, the number of minimal absent words is still manageable since it grows at most linearly with the string size, unlike generic absent words that grow exponentially. Both the algorithm and the concepts upon which it depends shed additional light on the structure of absent words and complement the existing studies on the topic.
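The defining property of minimal absent words lends itself to a direct, if inefficient, sketch: a word w = a·u·b (with a and b single characters) is a minimal absent word of a string s exactly when a·u and u·b both occur in s but a·u·b does not. The code below is only an illustration of that definition, storing all substrings explicitly; it is not the paper's approximately linear-time algorithm:

```python
from itertools import product

def minimal_absent_words(s):
    """Enumerate the minimal absent words of s directly from the
    definition: w = a + u + b is a minimal absent word when a+u and
    u+b occur in s but a+u+b does not. Naive sketch that stores all
    O(n^2) substrings; practical implementations use indexing
    structures to achieve near-linear running time."""
    alphabet = sorted(set(s))
    n = len(s)
    # Every substring of s, including the empty string (i == j).
    factors = {s[i:j] for i in range(n + 1) for j in range(i, n + 1)}
    maws = set()
    for u in factors:
        for a, b in product(alphabet, repeat=2):
            if (a + u in factors and u + b in factors
                    and a + u + b not in factors):
                maws.add(a + u + b)
    return sorted(maws, key=lambda w: (len(w), w))
```

For example, minimal_absent_words("abaab") yields ['bb', 'aaa', 'bab', 'aaba']: each word is absent from the text, yet removing either its leftmost or rightmost character leaves a substring of the text, illustrating the minimality property described above.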
Adults with aphasia often try mightily to produce specific words, but their word-finding attempts are frequently unsuccessful. However, the word retrieval process may contain rich information that communicates a desired message regardless of word-finding success.
The original article reprinted here reports an investigation that assessed whether patient-generated self cues inherent in the word retrieval process could be interpreted by listener/observers and improve communicative effectiveness for adults with aphasia. The newly added commentary identifies and reports tentative conclusions from 18 investigations of self-generated cues in aphasia since the 1982 paper. It further provides a rationale for increasing research on self-generated cueing and notes a surprising lack of attention to the questions investigated in the original article. The original research is also connected with more recent qualitative investigations of interactional, as opposed to transactional, communicative exchange.
Methods & Procedures
While performing single-word production tasks, 10 adults with aphasia produced 107 utterances that contained spontaneous word retrieval behaviours. To determine the “communicative value” of these behaviours, herein designated self cues or self-generated cues, the utterance-final (potential target) word was edited out and the edited utterances were dubbed onto a videotape. Six naïve observers, three of whom received some context about the nature of word retrieval in aphasia and possible topics for the utterances, and three of whom received no information, predicted the target word of each utterance from the word-finding behaviours alone. The communicative value of the self-generated cues was determined for each individual with aphasia by summing percent correct word retrieval and percent correct observer prediction of target words, based on word retrieval behaviours. The newly added commentary describes some challenges of investigating a “communicative value” outcome, and indicates what would and would not change about the methods, if we did the study today.
Outcomes & Results
The observer group that was given some context information appeared to be more successful at predicting target words than the group without any such information. Self-generated cues enhanced communication for the majority of individuals with aphasia, with some cues (e.g., descriptions/gestures of action or function) appearing to carry more communicative value than others (e.g., semantic associates). The commentary again indicates how and why we would change this portion of the investigation if conducting the study at this time.
The results are consistent with Holland’s (1977) premise that people with aphasia do well at communication, regardless of the words they produce. The finding that minimal context information may assist observers in understanding the communicative intent of people with aphasia has important implications for training family members to interpret self-generated cues. The new commentary reinforces these conclusions, highlights potential differences between self cues that improve word-finding success and those that enhance message transmission, and points to some additional research needs.
Patterns of word use both reflect and influence a myriad of human activities and interactions. Like other entities that are reproduced and evolve, words rise or decline depending upon a complex interplay between their intrinsic properties and the environments in which they function. Using Internet discussion communities as model systems, we define the concept of a word niche as the relationship between the word and the characteristic features of the environments in which it is used. We develop a method to quantify two important aspects of the size of the word niche: the range of individuals using the word and the range of topics it is used to discuss. Controlling for word frequency, we show that these aspects of the word niche are strong determinants of changes in word frequency. Previous studies have already indicated that word frequency itself is a correlate of word success at historical time scales. Our analysis of changes in word frequencies over time reveals that the relative sizes of word niches are far more important than word frequencies in the dynamics of the entire vocabulary at shorter time scales, as the language adapts to new concepts and social groupings. We also distinguish endogenous versus exogenous factors as additional contributors to the fates of words, and demonstrate the force of this distinction in the rise of novel words. Our results indicate that short-term nonstationarity in word statistics is strongly driven by individual proclivities, including inclinations to provide novel information and to project a distinctive social identity.
Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws on resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces recall of that word and of the words prior to it, as well as weakening the associative links between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect, we conducted computational simulations testing two classes of models: associative linking models and short-term memory buffer models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer model, the masked word disrupts a short-term memory buffer, causing the associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data support the so-called "effortful hypothesis", whereby distorted input has a detrimental impact on prior information stored in short-term memory.
modeling; simulations; recall; word lists; associations
A gating technique was used in two studies of spoken word identification that investigated the relationship between the available acoustic–phonetic information in the speech signal and the context provided by meaningful and semantically anomalous sentences. The duration of intact spoken segments of target words and the location of these segments at the beginnings or endings of words in sentences were varied. The amount of signal duration required for word identification and the distribution of incorrect word responses were examined. Subjects were able to identify words in spoken sentences with only word-initial or only word-final acoustic–phonetic information. In meaningful sentences, less word-initial information was required to identify words than word-final information. Error analyses indicated that both acoustic–phonetic information and syntactic contextual knowledge interacted to generate the set of hypothesized word candidates used in identification. The results provide evidence that word identification is qualitatively different in meaningful sentences than in anomalous sentences or when words are presented in isolation: That is, word identification in sentences is an interactive process that makes use of several knowledge sources. In the presence of normal sentence context, the acoustic–phonetic information in the beginnings of words is particularly effective in facilitating rapid identification of words.
We conducted a preliminary study to examine whether Chinese readers’ spontaneous word segmentation is consistent with the national standard rules of word segmentation based on the Contemporary Chinese language word segmentation specification for information processing (CCLWSSIP). Participants were asked to segment Chinese sentences into individual words according to their prior knowledge of words. The results showed that Chinese readers did not follow the segmentation rules of the CCLWSSIP, and their word segmentation was influenced by the syntactic categories of consecutive words. In many cases, the participants did not treat auxiliary words, adverbs, adjectives, nouns, verbs, numerals and quantifiers as single word units. Generally, Chinese readers tended to combine function words with content words to form single word units, indicating they were inclined to chunk single words into larger information units during word segmentation. Additionally, the “overextension of monosyllable words” hypothesis was tested; the results suggest it may need to be revised to some degree, implying that word length has an implicit influence on Chinese readers’ segmentation processing. Implications of these results for models of word recognition and eye movement control are discussed.
Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the ‘visual word form area’. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. 
In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy.
progressive alexia; letter-by-letter reading; posterior cortical atrophy; logopenic primary progressive aphasia; visual word form system