Achieving a paradigm shift in neuropsychology that capitalizes on these developments will require decades of commitment, but we already possess multiple actionable options to accelerate change and prepare for the future.
Formalizing Neuropsychological Concepts and Measurements
Increasing shared knowledge about neuropsychology and enabling its use across disciplines requires operational definitions of key concepts and their inter-relations. Formal descriptions of content domains, or ontologies, are rapidly revolutionizing other biomedical disciplines; more than 2,000 bioinformatics resources are now available online. Neuropsychology requires similar developments for its concepts to be represented, mined, and connected to the structure and function of underlying neural circuits, cellular systems, signaling pathways, molecular biology, and genomics.
Challenges in the creation of neuropsychological ontologies include fuzzy concepts, the semantic disambiguation of terms, and instability and lack of consensus about concept labels. One large advantage is that abstract neuropsychological constructs are measured by objective test scores, just as latent constructs are validated with respect to observable indicators in structural equation modeling. By linking neuropsychological concepts to specific measurement methods, it is possible to define families of tests and objectively evaluate the degree to which these measure overlapping or non-overlapping constructs (for further discussion see Bilder, Sabb, Parker et al., 2009).
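To make this concrete, consider a minimal sketch in Python of how tests might be linked to the constructs they are asserted to measure, so that overlap between tests becomes computable. All test and construct names here are hypothetical placeholders; a real ontology would attach provenance (citations, effect sizes) to each link:

    # Hypothetical mapping from tests to the latent constructs they are
    # asserted to measure. A production ontology would store these links
    # with provenance rather than as bare names.
    test_constructs = {
        "stroop":      {"response inhibition", "processing speed"},
        "stop_signal": {"response inhibition"},
        "digit_span":  {"working memory"},
    }

    def construct_overlap(test_a, test_b):
        """Jaccard overlap of the construct sets measured by two tests."""
        a, b = test_constructs[test_a], test_constructs[test_b]
        return len(a & b) / len(a | b)

    print(construct_overlap("stroop", "stop_signal"))  # 0.5: partial overlap
    print(construct_overlap("stroop", "digit_span"))   # 0.0: no shared constructs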
The figure below illustrates how it is possible to begin formalizing hypotheses about complex neuropsychological concepts and the evidence used to support or refute them. Starting with an assertion, we can identify evidence that includes cognitive task indicators linked to specific functional processes (cognitive constructs), and measurements of brain function and structure that converge on neuroanatomic circuits. The cellular elements of this circuit model can be linked to other bioinformatics resources (including signaling pathways, molecular expression data, and gene networks; not shown).
[Figure: Schematic Representation of a Neuropsychological Hypothesis]
The same figure also illustrates how conflicting hypotheses can be represented. For example, Poldrack and Chambers disagree about how best to describe the functions of the hyperdirect and indirect pathways; the model can be augmented by evidence to resolve these conflicting interpretations.
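As a purely illustrative sketch (the claims and reference labels below are placeholders, not the actual positions in that debate), competing assertions could be stored with explicit supporting and refuting evidence, so that conflicts remain visible and, in principle, resolvable as annotations accumulate:

    # Hypothetical store of competing assertions about one pathway, each
    # annotated with evidence that supports or refutes it.
    assertions = [
        {"claim": "hyperdirect pathway implements function F1",
         "support": ["ref_01", "ref_02"], "refute": []},
        {"claim": "hyperdirect pathway implements function F2",
         "support": ["ref_03"], "refute": ["ref_01"]},
    ]

    for a in assertions:
        balance = len(a["support"]) - len(a["refute"])
        print(f"{a['claim']}: evidence balance = {balance:+d}")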
Further, quantitative annotation can enable automated meta-analysis. This strategy was used to estimate the heritability of “cognitive control” even though no study had assessed this directly; it was nevertheless possible to define cognitive control through other associated concepts and draw conclusions using indirect evidence (Sabb et al., 2008
). Methods for meta-analytic structural equation modeling can be applied to these data, enabling tests of goodness of fit for competing hypotheses (Furlow & Beretvas, 2005
; Riley, Simmonds, & Look, 2007).
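As a minimal sketch of the first stage of such a meta-analysis (the study correlations and sample sizes below are hypothetical), correlations between two task indicators can be pooled across studies using Fisher's z-transform with inverse-variance weighting; in the second stage, the pooled correlation matrix would be fit with a structural model:

    import math

    # Hypothetical per-study correlations between two task indicators,
    # with their sample sizes: (r, n).
    studies = [(0.42, 120), (0.35, 80), (0.51, 200)]

    num = den = 0.0
    for r, n in studies:
        z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z-transform
        w = n - 3                              # inverse-variance weight: var(z) = 1/(n-3)
        num += w * z
        den += w

    z_pooled = num / den
    se = math.sqrt(1 / den)                    # standard error of the pooled z
    r_pooled = math.tanh(z_pooled)             # back-transform to r
    print(f"pooled r = {r_pooled:.3f}, 95% CI on z = "
          f"[{z_pooled - 1.96 * se:.3f}, {z_pooled + 1.96 * se:.3f}]")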
No integrated resource addresses all of these issues, but some relevant applications are under development. The Consortium for Neuropsychiatric Phenomics (www.phenomics.ucla.edu
) includes a Hypothesis Web project offering free resources for designing multilevel graphical hypotheses, searching relevant literature, and recording qualitative and quantitative annotations particularly about cognitive concepts and measurements (see: PubGraph, PubAtlas, PubBrain, Phenomining, and Phenowiki). An affiliated project focuses specifically on cognitive concepts, cognitive tasks, and their inter-relations (see www.cognitiveatlas.org
Further development of these tools can help investigators represent and work with neuropsychological concepts and link them to other repositories of biomedical knowledge, thereby enabling evidence-based science. Similar tools can serve evidence-based practice by formalizing hypotheses about the assessments necessary to optimize differential diagnosis or to select among treatment options.
Collaborative Knowledge Building for Neuropsychology
Shared definitions of neuropsychological constructs and measurements enable systematic aggregation of neuropsychological knowledge. So far there are no large repositories of neuropsychological data, despite relatively high consistency in data types and substantial homogeneity in the specific variables collected. Neuropsychological evidence comprises primarily group data and individual case data. Group data exist primarily in research publications or in proprietary manuals from test publishers; these sources are intrinsically static (once published, results do not change). Group data dissemination in clinical neuropsychology typically involves two stages: (1) a test is released by its publisher, with a manual including normative and validity data from selected clinical studies; and (2) subsequent publications describe results of studies applying the test in new samples. Updating of tests occurs only in stage 1, and a typical cycle time for revision is ten years. Test interpretation often relies solely on data from the original manual. Some users complement this with information from subsequent publications, but absent organized repositories, this is left to the initiative of the researcher or clinician. Individual case data today sit mostly in private computer databases or file cabinets and are not accessible outside the locations where they were collected.
Dramatic improvements in open access to both group and individual case data are feasible using existing technology. The neuropsychology community can immediately assemble databases that summarize results of published studies. Just as meta-analytic results are compiled by authors of systematic review papers, we can collaboratively assemble published data about specific tests for online access. An example can be found at www.neuropsychnorms.com
, which enables users to input individual test scores and receive immediate reports comparing these to published findings (Mitrushina, Boone, Razani, & D'Elia, 2005
). With community engagement the scope of this work could be expanded greatly, probably covering most of the relevant published literature within a few years. Since many papers include healthy comparison groups, meta-analytic normative databases could rapidly rival many standardization samples, and accrued data on new clinical samples, treatment effects, and predictive validity could grow dynamically, as fast as studies are published.
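A minimal sketch of the comparison such a resource could offer (the normative values are hypothetical): published means, SDs, and sample sizes are pooled with sample-size weights, and an individual's score is expressed relative to the pooled distribution:

    # Hypothetical normative results aggregated from published studies:
    # (mean, sd, n) for one test in demographically similar samples.
    norms = [(52.1, 9.8, 150), (49.7, 10.5, 90), (51.0, 9.2, 210)]

    total_n = sum(n for _, _, n in norms)
    pooled_mean = sum(m * n for m, _, n in norms) / total_n
    pooled_var = sum(sd ** 2 * n for _, sd, n in norms) / total_n
    pooled_sd = pooled_var ** 0.5

    score = 38.0                            # an individual's raw score
    z = (score - pooled_mean) / pooled_sd   # position relative to pooled norms
    print(f"z = {z:.2f} against {total_n} pooled normative cases")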
It is assumed that individual case data can never be released without careful consideration of informed consent and privacy protections; these issues are extremely important and complex, but space does not permit elaborating them here. The primary sources of individual case data are the original test publishers, independent researchers, and clinics. Publishers tend to maintain individual case data as proprietary but release such data under certain circumstances. Researchers tend to keep data secure at least until they have published findings, and often longer, but might release data if there were a national repository that appropriately credited contributions.
An exciting possibility is that clinics and clinicians could contribute data from every examined patient in real time. If this were done, the clinical validity data for major neuropsychological tests would grow very rapidly, providing opportunities to compare any individual patient to customized reference groups stratified by demographic characteristics or by scores on other cognitive tests. Users could be given tools to filter on diagnostic characteristics and, given likely variability in the credibility of different sources, to filter further on the characteristics of the clinics providing data. A national bank for neuropsychological data could revolutionize both research and assessment practices, enabling rapid aggregation of information about under-studied populations and supporting the evidence-based effectiveness studies that will be critical for research and public healthcare decision making.
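To illustrate how such customized comparisons might work (the records and filter criteria are hypothetical), a reference group can be assembled on the fly by filtering contributed cases on demographic and diagnostic characteristics:

    # Hypothetical case records contributed by clinics.
    cases = [
        {"age": 71, "edu": 12, "dx": "none", "score": 44},
        {"age": 68, "edu": 16, "dx": "none", "score": 52},
        {"age": 74, "edu": 10, "dx": "MCI",  "score": 35},
        {"age": 70, "edu": 12, "dx": "none", "score": 47},
    ]

    # Build a custom reference group: ages 65-75, no diagnosis.
    ref = [c["score"] for c in cases
           if 65 <= c["age"] <= 75 and c["dx"] == "none"]
    mean = sum(ref) / len(ref)
    sd = (sum((s - mean) ** 2 for s in ref) / (len(ref) - 1)) ** 0.5
    print(f"custom reference group: n={len(ref)}, mean={mean:.1f}, sd={sd:.1f}")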
If individual case data are assembled at the item level, it will be possible to analyze them using modern psychometric theory, leading to new and improved assessment methods. Community consortia could conduct not-for-profit normative and validation studies. Assuming there are roughly 5,000 neuropsychologists in the United States (based on memberships in INS, APA Division 40, the American Academy of Clinical Neuropsychology, and the National Academy of Neuropsychology), it is exciting to imagine the progress that could be made if each examined even one person per year as part of a national consortium.
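As one concrete example of what item-level data enable, here is a minimal sketch of the two-parameter logistic (2PL) model from item response theory; the item parameters below are hypothetical, and a real application would estimate them from pooled case-level responses:

    import math

    def p_correct(theta, a, b):
        """2PL model: probability of a correct response given ability
        theta, item discrimination a, and item difficulty b."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        """Fisher information the item contributes about theta."""
        p = p_correct(theta, a, b)
        return a * a * p * (1.0 - p)

    # Hypothetical items calibrated from a shared item bank: (a, b) pairs.
    bank = [(1.2, -0.5), (0.8, 0.0), (1.5, 1.0)]
    theta = 0.3
    for a, b in bank:
        print(f"a={a}, b={b}: P(correct)={p_correct(theta, a, b):.2f}, "
              f"info={item_information(theta, a, b):.2f}")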
The most widely used assessment strategies in neuropsychology have undergone little fundamental change over the last century, despite breakthroughs in cognitive neuroscience, neuroimaging, psychometric theory, and human-machine interfaces. Test revisions using traditional print publishing can also have unintended consequences. For example, the WAIS-IV/WMS-IV revisions have been criticized for failing to consider back-compatibility issues that may invalidate clinical interpretations (Loring & Bauer, 2010
). Promising experimental paradigms typically languish for decades in the lab prior to use in clinics. Meanwhile, web-based acquisition strategies enable rapid collection of data from widely distributed populations using adaptive testing strategies likely to at least double efficiency in construct measurement, and when constructs are correlated (as is true of most cognitive constructs), efficiency gains may be higher. One study found a 95% average reduction in items administered using a computerized adaptive test relative to administering all items on the original scales (Gibbons et al., 2008
). Further, use of modern psychometric theory enables preservation of robust back-compatibility with prior test versions while simultaneously enabling the introduction of new content and new constructs after these are validated (addressing the primary critique of Loring & Bauer, 2010).
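A minimal sketch of the adaptive logic behind such efficiency gains (the item parameters, ability grid, and stopping rule are all hypothetical choices, and the 2PL helpers from the previous sketch are re-defined here so this one runs standalone): administer the unasked item that is most informative at the current ability estimate, re-estimate ability after each response, and stop once the estimate is precise enough.

    import math, random

    def p_correct(theta, a, b):            # 2PL response probability
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def item_information(theta, a, b):     # Fisher information at theta
        p = p_correct(theta, a, b)
        return a * a * p * (1.0 - p)

    def run_cat(bank, get_response, se_target=0.45):
        """Adaptive loop: maximum-information item selection with a
        grid-based maximum-likelihood ability update."""
        theta, asked, resp = 0.0, [], []
        grid = [g / 10.0 for g in range(-40, 41)]   # abilities -4.0 to 4.0
        while True:
            info = sum(item_information(theta, a, b) for a, b in asked)
            if asked and (1.0 / info) ** 0.5 < se_target:
                break                               # precise enough; stop early
            remaining = [i for i in bank if i not in asked]
            if not remaining:
                break                               # item bank exhausted
            item = max(remaining, key=lambda i: item_information(theta, *i))
            asked.append(item)
            resp.append(get_response(item))
            def loglik(t):
                return sum(math.log(p_correct(t, a, b) if x
                                    else 1.0 - p_correct(t, a, b))
                           for (a, b), x in zip(asked, resp))
            theta = max(grid, key=loglik)           # ML estimate on the grid
        return theta, len(asked)

    # Demo: a simulated examinee answers a hypothetical 20-item bank.
    random.seed(1)
    bank = [(1.0 + 0.05 * i, -2.0 + 0.2 * i) for i in range(20)]
    simulate = lambda item: random.random() < p_correct(0.8, *item)
    theta_hat, n_used = run_cat(bank, simulate)
    print(f"theta ~= {theta_hat:.1f} after {n_used} of {len(bank)} items")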
Neuropsychological test development can move forward rapidly if we embrace modern technology, adopt modern psychometric theory, and collaborate. First, neuropsychology needs to embrace computerized assessment. Some fear that computerized tests will somehow replace clinicians or miss important observations. But the computer is just a tool for presenting certain stimuli and collecting certain responses, and, properly used, it can clearly outperform a human examiner in precision and in the rapid implementation of adaptive algorithms. One clear advantage of computer timing precision is that it enables implementation of methods from cognitive neuroscience that rely on subtler task manipulations and trial-by-trial analyses, which can be more sensitive and specific to individual differences in neural system function. To the extent that future computer logic may provide prompts for differential diagnosis, test selection, or test interpretation, this would only supplement and enhance clinical decision-making.
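For instance (the trial data below are hypothetical), millisecond-resolution trial logs make it trivial to compute indices such as post-error slowing, which stopwatch-timed paper tests cannot capture:

    # Hypothetical trial-level log: (reaction time in ms, correct?).
    trials = [(412, True), (388, True), (455, False), (530, True),
              (402, True), (470, False), (512, True), (398, True)]

    # Post-error slowing: mean RT after errors minus mean RT after
    # correct responses, a trial-by-trial index of cognitive control.
    post_err, post_cor = [], []
    for (_, prev_ok), (rt, _) in zip(trials, trials[1:]):
        (post_cor if prev_ok else post_err).append(rt)
    pes = sum(post_err) / len(post_err) - sum(post_cor) / len(post_cor)
    print(f"post-error slowing = {pes:.1f} ms")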
A second, bolder step will involve web-based assessment. This idea often triggers the same anxieties raised about computerized assessment, plus concerns that examiners cannot adequately control the conditions of testing, be confident that test-takers are performing tasks as instructed, or even be sure of the identities of test-takers. There are further concerns about individual differences in computer literacy and the “digital divide” that prevents equal access to the internet. The first class of problems has technological solutions, including embedded validity indicators, online video monitoring, and biometric identifiers. But elaborate surveillance strategies are not necessary for some research and even select clinical applications: many people will try their best, follow instructions, and generate valid results without such interventions. This point is particularly important for psychometric test development and for specific research questions, particularly genetic studies that require large samples. In contrast to conventional test development efforts involving hundreds of participants over years, web-based protocols can acquire hundreds of thousands of participants in months. Given algorithms for item-level response monitoring and automated consistency checks, there is much greater opportunity than with most current tests to detect outlying response patterns of uncertain validity. Because “brain testing” and “brain training” applications are already proliferating without quality control, there is a pressing need for neuropsychologists to participate, establish guidelines, and assure the responsible use of such applications.
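A minimal sketch of such an automated screen (the thresholds are hypothetical and would need empirical calibration): flag response records whose latencies are implausibly fast or implausibly uniform.

    # Hypothetical validity screen for one web-administered record.
    def validity_flags(rts_ms, min_rt=150, max_fast=0.10, min_sd=30):
        fast = sum(rt < min_rt for rt in rts_ms) / len(rts_ms)
        mean = sum(rts_ms) / len(rts_ms)
        sd = (sum((rt - mean) ** 2 for rt in rts_ms) / len(rts_ms)) ** 0.5
        flags = []
        if fast > max_fast:
            flags.append(f"{fast:.0%} of responses faster than {min_rt} ms")
        if sd < min_sd:
            flags.append(f"RT variability ({sd:.0f} ms) implausibly low")
        return flags

    print(validity_flags([95, 110, 102, 99, 105, 101]))    # both flags fire
    print(validity_flags([412, 388, 455, 530, 402, 470]))  # clean record: []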
Soon many individuals will complete web-based tests of brain function in the privacy of their homes using a wide range of web-enabled devices. Rich longitudinal behavioral data will be stored in repositories, along with electronic medical records, complete genome sequences, and automatically aggregated information about environmental exposures based on individual life history. Clinicians will need to develop competencies in the use of data-mining tools to manage and interpret these torrents of information effectively. The neuropsychologist of the future will synthesize these data, determine what needs to be done in the lab, office, or clinic, and direct patients toward optimal therapeutic options.
These rosy visions of the future depend on multiple changes, some of which are fundamental to both neuropsychological research and clinical practice. The most critical current bottleneck is achieving consensus frameworks for describing neuropsychological concepts and their measurement. Agreement on terms may seem difficult, but platforms to achieve this aim already exist (see www.cognitiveatlas.org
), and engagement in such collaborative efforts may be an achievable goal for neuropsychological membership organizations. Even after we agree on terms, we will still face obstacles in knowledge aggregation, because existing data vary widely in how they are maintained and in the quality with which they were originally acquired. In the longer term, publication of research findings is likely to become increasingly structured, with data “deposited” in a case-wise fashion, fostering capacity for group analysis but raising additional challenges and possible threats to academic innovation (i.e., will scientists be supported to pursue directions that deviate markedly from “standardized” data frameworks?). In the shorter term, there are opportunities for aggregation of clinical data, but standards for quality control need to be developed, implemented, and monitored. This aggregation of an adequate knowledge base is critical to foster acceptance of new assessment methods, because the responsible researcher or clinician understandably wants to use the best-validated methods available. The final stage, development of novel methods, may appear the most daunting, but it is facilitated by the rapid development of relevant technologies; indeed, a true revolution in current assessment methods is achievable with existing technology. The greater obstacles may be financial, given that current funding for test development depends largely on a relatively small “niche” print-publishing market. To overcome this, we may need to encourage broader public interest in brain function while simultaneously developing frameworks to assure the responsible deployment of widely disseminated methods.
In summary, dramatic changes in science, technology, and society now offer us great opportunities and grand challenges to advance our shared mission as neuropsychologists. It is hoped that, by working collaboratively, we can make Neuropsychology 3.0 a ground-breaking success in biomedicine and pave the road to Neuropsychology 4.0.