This paper describes an initial study of the effect of focused attention on phonological speech errors. In three experiments, participants recited four-word tongue-twisters and focused attention on one (or none) of the words. The attended word was singled out differently in each experiment: participants were instructed to avoid errors on the attended word, to stress it, or to say it silently. The experiments showed that all methods of attending to a word decreased errors on that word while increasing errors on the surrounding words. However, this error increase did not result from a relative increase in phonemic migrations originating from the attended word. This pattern is inconsistent with conceptualizing attention either as higher activation of the attended word or as greater inhibition of the unattended words throughout the production of the sequence. Instead, it is consistent with a model in which attention exerts its effect at the time of production of the attended word, without lingering effects on the past or the future.
Speech errors; Attention; Phoneme migration; Cognitive control
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients' error-detection ability and the model's characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system.
Speech monitoring; Speech errors; Error detection; Aphasia; Computational models
Many research questions in aphasia can only be answered through access to substantial numbers of patients and to their responses on individual test items. Since such data are often unavailable to individual researchers and institutions, we have developed and made available the Moss Aphasia Psycholinguistics Project Database: a large, searchable, web-based database of patient performance on psycholinguistic and neuropsychological tests. The database contains data from over 170 patients covering a wide range of aphasia subtypes and severity, some of whom were tested multiple times. The core of the archive consists of a detailed record of individual-trial performance on the Philadelphia (picture) Naming Test. The database also contains basic demographic information about the patients and patients' overall performance on neuropsychological assessments as well as tests of speech perception, semantics, short-term memory, and sentence comprehension. The database is available at http://www.mappd.org/.
aphasia; database; picture naming; language
Case series methodology involves the systematic assessment of a sample of related patients, with the goal of understanding how and why they differ from one another. This method has become increasingly important in cognitive neuropsychology, which has long been identified with single-subject research. We review case series studies dealing with impaired semantic memory, reading, and language production, and draw attention to the affinity of this methodology for testing theories that are expressed as computational models and for addressing questions about neuroanatomy. It is concluded that case series methods usefully complement single-subject techniques.
case series; single-subject; cognitive neuropsychology; computational models; lexical access; semantic dementia; aphasia; semantic memory
Semantic errors in aphasia (e.g., naming a horse as “dog”) frequently arise from faulty mapping of concepts onto lexical items. A recent study by our group used voxel-based lesion-symptom mapping (VLSM) methods with 64 patients with chronic aphasia to identify voxels that carry an association with semantic errors. The strongest associations were found in the left anterior temporal lobe (L-ATL), in the mid- to anterior MTG region. The absence of findings in Wernicke’s area was surprising, as were indications that ATL voxels made an essential contribution to the post-semantic stage of lexical access. In this follow-up study, we sought to validate these results by re-defining semantic errors in a manner that was less theory dependent and more consistent with prior lesion studies. As this change also increased the robustness of the dependent variable, it made it possible to perform additional statistical analyses that further refined the interpretation. The results strengthen the evidence for a causal relationship between ATL damage and lexically-based semantic errors in naming and lend confidence to the conclusion that chronic lesions in Wernicke’s area are not causally implicated in semantic error production.
aphasia; voxel-based lesion-symptom mapping; naming; semantic; errors
This paper investigates the cognitive processes underlying picture naming and auditory word repetition. In the 2-step model of lexical access, both the semantic and phonological steps are involved in naming, but the former has no role in repetition. Assuming recognition of the to-be-repeated word, repetition could consist of retrieving the word’s output phonemes from the lexicon (the lexical-route model), retrieving the output phonology directly from input phonology (the nonlexical-route model) or employing both routes together (the summation dual-route model). We tested these accounts by comparing the size of the word frequency effect (an index of lexical retrieval) in naming and repetition data from 59 aphasic patients with simulations of naming and repetition models. The magnitude of the frequency effect (and the influence of other lexical variables) was found to be comparable in naming and repetition, and equally large for both the lexical and summation dual-route models. However, only the dual-route model was fully consistent with data from patients, suggesting that nonlexical input is added on top of a fully-utilized lexical route.
Lexical access; Aphasia; Repetition; Picture naming; Computational models; Case-series; Word frequency
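The summation dual-route idea described above can be illustrated with a toy sketch. This is not the published model; all activation values, the Luce-choice selection rule, and the function names are hypothetical, chosen only to show how adding nonlexical input on top of a fully utilized lexical route predicts better repetition than naming.

```python
# Illustrative sketch (hypothetical values, not the published simulations):
# in a summation dual-route account of repetition, activation from the
# nonlexical (input-to-output phonology) route is added on top of the
# lexical route's activation of the output phonemes.

def phoneme_activation(lexical_strength, nonlexical_strength):
    """Summation dual-route: the two routes' contributions simply add."""
    return lexical_strength + nonlexical_strength

def retrieval_probability(activation, competitor_activation):
    """A simple Luce-choice rule (a common modeling assumption): the target
    phoneme wins in proportion to its share of total activation."""
    return activation / (activation + competitor_activation)

# Naming engages only the lexical route; repetition gets both routes.
naming_act = phoneme_activation(lexical_strength=0.6, nonlexical_strength=0.0)
repetition_act = phoneme_activation(lexical_strength=0.6, nonlexical_strength=0.3)

p_naming = retrieval_probability(naming_act, competitor_activation=0.2)
p_repetition = retrieval_probability(repetition_act, competitor_activation=0.2)
assert p_repetition > p_naming  # extra nonlexical input aids repetition
```

The point of the sketch is the additivity: because the lexical route is fully utilized in both tasks, the lexical frequency effect is equally large in naming and repetition, while the summed nonlexical input raises repetition accuracy overall.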
Inner speech is typically characterized as either the activation of abstract linguistic representations or a detailed articulatory simulation that lacks only the production of sound. We present a study of the 'speech errors' that occur during the inner recitation of tongue-twister-like phrases. Two forms of inner speech were tested: inner speech without articulatory movements and articulated (mouthed) inner speech. While mouthing one's inner speech could reasonably be assumed to require more articulatory planning, prominent theories assume that such planning should not affect the experience of inner speech and consequently the errors that are 'heard' during its production. The errors occurring in articulated inner speech exhibited the phonemic similarity effect and the lexical bias effect, two speech-error phenomena that, in overt speech, have been localized to an articulatory-feature processing level and a lexical-phonological level, respectively. In contrast, errors in unarticulated inner speech did not exhibit the phonemic similarity effect—just the lexical bias effect. The results are interpreted as support for a flexible abstraction account of inner speech. This conclusion has ramifications for the embodiment of language and speech and for theories of speech production.
Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have claimed that the findings are only understandable by positing a competitive mechanism for lexical selection. We present a simple model of lexical retrieval in speech production that applies error-driven learning to its lexical activation network. This model naturally produces repetition priming and semantic interference effects. It predicts the major findings from several published experiments, demonstrating that these effects may arise from incremental learning. Furthermore, analysis of the model suggests that competition during lexical selection is not necessary for semantic interference if the learning process is itself competitive.
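The incremental-learning mechanism described above can be sketched in a few lines. This is a minimal illustration with hypothetical features, weights, and learning rate (not the published model): delta-rule learning over semantic-feature-to-word connections strengthens the just-named word (repetition priming) while pushing the shared semantic feature away from its competitor (semantic interference).

```python
# Minimal sketch (hypothetical parameters, not the published model) of how
# error-driven (delta-rule) learning over semantic-feature -> word
# connections yields both repetition priming and semantic interference.

# Semantic features: [animal, barks, meows]; "dog" and "cat" share "animal".
features = {"dog": [1.0, 1.0, 0.0], "cat": [1.0, 0.0, 1.0]}
# weights[word][feature]: connection strengths, initially equal (assumed).
weights = {w: [0.3, 0.3, 0.3] for w in features}
RATE = 0.1  # learning rate (assumed)

def activation(word, input_feats):
    return sum(w * f for w, f in zip(weights[word], input_feats))

def name_picture(target):
    """Name the picture, then apply delta-rule learning: the target word's
    activation is pushed toward 1, every other word's toward 0."""
    feats = features[target]
    for word in weights:
        desired = 1.0 if word == target else 0.0
        err = desired - activation(word, feats)
        for i, f in enumerate(feats):
            weights[word][i] += RATE * err * f  # only active features learn

before_dog = activation("dog", features["dog"])
before_cat = activation("cat", features["cat"])
name_picture("dog")
after_dog = activation("dog", features["dog"])
after_cat = activation("cat", features["cat"])

assert after_dog > before_dog  # repetition priming: "dog" got easier
assert after_cat < before_cat  # semantic interference: the shared "animal"
                               # feature now points away from "cat"
```

Note that nothing in this sketch is a competitive selection rule: the interference falls out of the learning step alone, which is the abstract's point that competition during lexical selection is unnecessary if the learning process is itself competitive.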
Analysis of error types provides useful information about the stages and processes involved in normal and aphasic word production. In picture naming, semantic errors (horse for goat) generally result from something having gone awry in lexical access such that the right concept was mapped to the wrong word. This study used the new lesion analysis technique known as voxel-based lesion-symptom mapping to investigate the locus of lesions that give rise to semantic naming errors. Semantic errors were obtained from 64 individuals with post-stroke aphasia, who also underwent high-resolution structural brain scans. Whole brain voxel-based lesion-symptom mapping was carried out to determine where lesion status predicted semantic error rate. The strongest associations were found in the left anterior to mid middle temporal gyrus. This area also showed strong and significant effects in further analyses that statistically controlled for deficits in pre-lexical, conceptualization processes that might have contributed to semantic error production. This study is the first to demonstrate a specific and necessary role for the left anterior temporal lobe in mapping concepts to words in production. We hypothesize that this role consists in the conveyance of fine-grained semantic distinctions to the lexical system. Our results line up with evidence from semantic dementia, the convergence zone framework and meta-analyses of neuroimaging studies on word production. At the same time, they cast doubt on the classical linkage of semantic error production to lesions in and around Wernicke's area.
aphasia; voxel-based lesion-symptom mapping; naming; semantic; errors
Lexical access in language production, and particularly pathologies of lexical access, are often investigated by examining errors in picture naming and word repetition. In this article, we test a computational approach to lexical access, the two-step interactive model, by examining whether the model can quantitatively predict the repetition-error patterns of 65 aphasic subjects from their naming errors. The model's characterizations of the subjects' naming errors were taken from the companion paper to this one (Schwartz, Dell, N. Martin, Gahl & Sobel, 2006), and their repetition was predicted from the model on the assumption that naming involves two error-prone steps, word and phonological retrieval, whereas repetition only creates errors in the second of these steps. A version of the model in which lexical-semantic and lexical-phonological connections could be independently lesioned was generally successful in predicting repetition for the aphasic subjects. An analysis of the few cases in which model predictions were inaccurate revealed the role of input phonology in the repetition task.
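The prediction logic of the two-step account can be shown with a toy calculation. The numbers and function names below are hypothetical, not fit parameters from the study: naming must survive both the word-retrieval and phonological-retrieval steps, whereas repetition bypasses word retrieval, so repetition accuracy is predicted from the phonological step alone.

```python
# Toy illustration (hypothetical step accuracies, not the study's fits):
# in the two-step model, a correct naming response requires success at
# both word retrieval and phonological retrieval; repetition only risks
# failure at the phonological step.

def naming_accuracy(p_word, p_phon):
    # Both steps must succeed for a correct naming response.
    return p_word * p_phon

def predicted_repetition_accuracy(p_word, p_phon):
    # The heard word supplies word retrieval, so only the phonological
    # step can fail.
    return p_phon

# Step accuracies of a hypothetical patient, as if fit from naming errors.
p_word, p_phon = 0.8, 0.9
assert naming_accuracy(p_word, p_phon) < predicted_repetition_accuracy(p_word, p_phon)
```

On this logic, repetition is predicted to be at least as accurate as naming for every patient, and deviations from the prediction (as the abstract notes) point to additional input, such as nonlexical input phonology.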
Adults can learn new artificial phonotactic constraints by producing syllables that exhibit the constraints. The experiments presented here tested the limits of phonotactic learning in production using speech errors as an implicit measure of learning. Experiment 1 tested a constraint in which the placement of a consonant as an onset or coda depended on the identity of a nonadjacent consonant. Participants’ speech errors reflected knowledge of the constraint, but not until the second day of testing. Experiment 2 tested a constraint in which consonant placement depended on an extralinguistic factor, the speech rate. Participants were not able to learn this constraint. Together, these experiments suggest that phonotactic-like constraints are acquired when mutually constraining elements reside within the phonological system.
phonotactic learning; speech errors
Some theories of lexical access in production locate the effect of lexical frequency at the retrieval of a word’s phonological characteristics, as opposed to the prior retrieval of a holistic representation of the word from its meaning. Yet there is evidence from both normal and aphasic individuals that frequency may influence both of these retrieval processes. This inconsistency is especially relevant in light of recent attempts to determine the representation of another lexical property, age of acquisition or AoA, whose effect is similar to that of frequency. To further explore the representations of these lexical variables in the word retrieval system, we performed hierarchical, multinomial logistic regression analyses of 50 aphasic patients’ picture-naming responses. While both log frequency and AoA had a significant influence on patient accuracy and led to fewer phonologically related errors and omissions, only log frequency had an effect on semantically related errors. These results provide evidence for a lexical access process sensitive to frequency at all stages, but with AoA having a more limited effect.
Retrieving a word in a sentence requires speakers to overcome syntagmatic as well as paradigmatic interference. When accessing cat in "The cat chased the string," not only are similar competitors such as dog and cap activated, but also other words in the planned sentence, such as chase and string. We hypothesize that both types of interference impact the same stage of lexical access, and review connectionist models of production that use an error-driven learning algorithm to overcome that interference. This learning algorithm creates a mechanism that limits syntagmatic interference, the syntactic "traffic cop," a configuration of excitatory and inhibitory connections from syntactic-sequential states to lexical units. We relate the models to word and sentence production data from both normal and aphasic speakers.
Adults rapidly learn phonotactic constraints from brief production or perception experience. Three experiments asked whether this learning is modality-specific, occurring separately in production and perception, or whether perception transfers to production. Participant pairs took turns repeating syllables in which particular consonants were restricted to particular syllable positions. Speakers' errors reflected learning of the constraints present in the sequences they produced, regardless of whether their partner produced syllables with the same constraints, or opposing constraints. Although partial transfer could be induced (Experiment 3), simply hearing and encoding syllables produced by others did not affect speech production to the extent that error patterns were altered. Learning of new phonotactic constraints was predominantly restricted to the modality in which those constraints were experienced.
phonotactic learning; speech errors; production; perception
When unimpaired participants name pictures quickly, they produce many perseverations that bear a semantic relation to the target, especially when the pictures are blocked by semantic category. These “semantic perseverations” have not shown the steep decay over lags (distance from prior occurrence) that typify the perseverations produced by people with aphasia on standard naming tasks (Cohen & Dehaene, 1998). To reconcile the discrepant findings, we studied semantic perseverations generated by participants with aphasia on a naming task that featured semantic blocking [Schnur, T. T., Schwartz, M. F., Brecher, A., & Hodgson, C. (2006). Semantic interference during blocked-cyclic naming: Evidence from aphasia. Journal of Memory and Language, 54, 199–227]. The temporal properties of these perseverations were investigated by analyzing their lag function and the influence of time (response-stimulus interval) on this function. To separate out the influence of chance on the observed lag distributions, chance data sets were created for individual participants by reshuffling whole trials (i.e., stimulus-response pairs) in a manner that preserved unique features of the blocking design. Analyses of chance-corrected lag functions revealed the expected recency bias, i.e., higher perseveration frequencies at short lags. Importantly, there was no difference between the lag functions for perseverations generated with a 5 s, compared to 1 s, response-stimulus interval. This combination of recency and insensitivity to elapsed time indicates that the perseveratory impetus in a named response does not passively decay with time but rather is diminished by interference from related trials. We offer an incremental learning account of these findings.
Perseveration; semantic blocking; aphasia; naming; priming; incremental learning
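The chance-correction procedure described above (reshuffling whole trials to derive a chance lag distribution) can be sketched as a simple permutation analysis. The data below are synthetic and the helper names are invented for illustration; the sketch ignores the blocking-design constraints that the actual reshuffling preserved.

```python
# Sketch (synthetic data, not the study's trials) of chance-correcting a
# perseveration lag function: reshuffle whole trials (stimulus-response
# pairs), recount repeats at each lag, and compare observed counts with
# the average over reshufflings.

import random
from collections import Counter

def lag_counts(responses):
    """Count, for each lag, how often a response repeats a prior one."""
    counts = Counter()
    for i, r in enumerate(responses):
        for j in range(i):
            if responses[j] == r:
                counts[i - j] += 1
    return counts

def chance_lag_counts(responses, n_shuffles=1000, seed=0):
    """Average lag counts over random reshufflings of the trial sequence."""
    rng = random.Random(seed)
    totals = Counter()
    for _ in range(n_shuffles):
        shuffled = responses[:]
        rng.shuffle(shuffled)
        totals.update(lag_counts(shuffled))
    return {lag: n / n_shuffles for lag, n in totals.items()}

# A perseveration-heavy synthetic sequence: repeats cluster at short lags.
observed = ["dog", "dog", "cat", "cat", "horse", "pig", "goat", "cow"]
obs = lag_counts(observed)
chance = chance_lag_counts(observed)
# Recency bias: observed short-lag repeats exceed the chance rate.
assert obs[1] > chance.get(1, 0.0)
```

Reshuffling whole stimulus-response pairs, rather than responses alone, is what lets the chance distribution inherit the design's structure; the real analysis additionally preserved features of the semantic-blocking design.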
The lexical bias effect (the tendency for phonological speech errors to create words more often than nonwords) has been debated for over 30 years. One account attributes the effect to a lexical editor, a strategic component of the production system that examines each planned phonological string, and suppresses it if it is a nonword. The alternative explanation is that the effect occurs automatically as a result of phonological-lexical feedback. Using a new paradigm, we explicitly asked participants to do lexical editing on their planned speech and compared performance on this inner lexical decision task to results obtained from the standard lexical decision task in three subsequent experiments. Our experimentally created “lexical editor” needed 300 ms to recognize and suppress nonwords, as determined by comparing reaction times when editing was and was not required. Therefore, we concluded that even though strategic lexical editing can be done, any such editing that occurs in daily speech occurs sporadically, if at all.
lexical bias; self-monitoring; speech errors; feedback
Inner speech, that little voice that people often hear inside their heads while thinking, is a form of mental imagery. The properties of inner speech errors can be used to investigate the nature of inner speech, just as overt slips are informative about overt speech production. Overt slips tend to create words (lexical bias) and to involve exchanges between similar phonemes (phonemic similarity effect). We examined these effects in inner and overt speech via a tongue-twister recitation task. While lexical bias was present in both inner and overt speech errors, the phonemic similarity effect was evident only for overt errors, producing a significant overtness by similarity interaction. We propose that inner speech is impoverished at lower (featural) levels, but robust at higher (phonemic) levels.