1.  Using quantile regression to create baseline norms for neuropsychological tests 
Introduction
The Uniform Data Set (UDS) contains neuropsychological test scores and demographic information for participants at Alzheimer's disease centers across the United States funded by the National Institute on Aging. Mean regression analysis of neuropsychological tests has been proposed to detect cognitive decline, but the approach requires stringent assumptions.
Methods
We propose using quantile regression to directly model conditional percentiles of neuropsychological test scores. An online application allows users to easily implement the proposed method.
Results
Scores from 13 different neuropsychological tests were analyzed for 5413 cognitively normal participants in the UDS. Quantile and mean regression models were fit using age, gender, and years of education. Differences between the mean and quantile regression estimates were found on the individual measures.
Discussion
Quantile regression provides more robust estimates of baseline percentiles for cognitively normal adults. This can then serve as standards against which to detect individual cognitive decline.
doi:10.1016/j.dadm.2015.11.005
PMCID: PMC4879644  PMID: 27239531
Alzheimer's disease; Neuropsychological assessment; Cognitive decline; Early detection; Quantile regression
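The core of the method in the abstract above can be illustrated with the "check" (pinball) loss: for a constant model, minimizing the average pinball loss at level τ recovers the empirical τ-quantile, and quantile regression extends this by letting the minimizer depend on covariates such as age, gender, and education. A minimal sketch with invented scores (not UDS data):

```python
def pinball_loss(residual, tau):
    """Asymmetric 'check' loss: tau * r for r >= 0, (tau - 1) * r for r < 0."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

def best_constant(scores, tau):
    """Candidate value (searched over the data points) minimizing the summed
    pinball loss; this recovers the empirical tau-quantile."""
    return min(scores, key=lambda c: sum(pinball_loss(s - c, tau) for s in scores))

scores = [10, 12, 13, 15, 18, 20, 25, 30, 31]  # toy neuropsychological scores
median_fit = best_constant(scores, 0.5)   # 18, the sample median
upper_fit = best_constant(scores, 0.9)    # 31, near the 90th percentile
```

Replacing the constant with a linear predictor (e.g., intercept plus age, gender, and education terms) and minimizing the same loss over the coefficients yields conditional percentile curves of the kind the paper's online application produces.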
2.  Reliable change on neuropsychological tests in the Uniform Data Set 
Objective
Longitudinal normative data obtained from a robust elderly sample (i.e., believed to be free from neurodegenerative disease) are sparse. The purpose of the present study was to develop reliable change indices (RCIs) that can assist with interpretation of test score changes relative to a healthy sample of older adults (ages 50+).
Method
Participants were 4217 individuals who completed at least 3 annual evaluations at one of 34 past and present Alzheimer’s Disease Centers throughout the United States. All participants were diagnosed as cognitively normal at every study visit; the number of visits ranged from three to nine approximately annual evaluations. One-year RCIs were calculated for 11 neuropsychological variables in the Uniform Data Set by regressing follow-up test scores onto baseline test scores, age, education, visit number, post-baseline assessment interval, race, and sex in a linear mixed-effects regression framework. In addition, the cumulative frequency distributions of raw score changes were examined to describe the base rates of test score changes.
Results
Baseline test score, age, education, and race were robust predictors of follow-up test scores across most tests. The effects of maturation (aging) were more pronounced on tests related to attention and executive functioning, whereas practice effects were more pronounced on tests of episodic and semantic memory. Interpretation of longitudinal changes on 11 cognitive test variables can be facilitated through the use of reliable change intervals and base rates of score changes in this robust sample of older adults. A web-based calculator is provided to assist neuropsychologists with interpretation of longitudinal change.
doi:10.1017/S1355617715000582
PMCID: PMC4860819  PMID: 26234918
Reliability of Results; Longitudinal Studies; Cognition; Cognitive Symptoms; Aging; Dementia
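The reliable-change logic described above can be sketched in a simplified form: ordinary least squares on one predictor (baseline score), rather than the paper's linear mixed-effects model with demographic covariates. An observed follow-up is flagged as reliable change when its standardized residual exceeds the chosen critical value. The normative data below are illustrative:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rci(baseline, followup, norm_base, norm_follow, crit=1.645):
    """Standardized residual of an observed follow-up score relative to the
    score predicted from baseline; |z| > crit flags reliable change (90% level)."""
    a, b = fit_line(norm_base, norm_follow)
    resid = [y - (a + b * x) for x, y in zip(norm_base, norm_follow)]
    see = (sum(r * r for r in resid) / (len(resid) - 2)) ** 0.5  # SE of estimate
    z = (followup - (a + b * baseline)) / see
    return z, abs(z) > crit

norm_base = [10, 12, 14, 16, 18, 20]     # toy normative baseline scores
norm_follow = [11, 12, 15, 15, 19, 20]   # same people one year later
z, changed = rci(15, 12, norm_base, norm_follow)   # a 3-point drop
```

A web calculator like the one the study provides would wrap exactly this prediction-and-residual arithmetic around the published regression coefficients.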
3.  The Alzheimer’s Disease Centers’ Uniform Data Set (UDS): The Neuropsychological Test Battery 
The neuropsychological test battery from the Uniform Data Set (UDS) of the Alzheimer’s Disease Centers (ADC) program of the National Institute on Aging (NIA) consists of brief measures of attention, processing speed, executive function, episodic memory and language. This paper describes development of the battery and preliminary data from the initial UDS evaluation of 3,268 clinically cognitively normal men and women collected over the first 24 months of utilization. The subjects represent a sample of community-dwelling individuals who volunteer for studies of cognitive aging. Subjects were considered “clinically cognitively normal” based on clinical assessment, including the Clinical Dementia Rating scale and the Functional Assessment Questionnaire. The results demonstrate performance on tests sensitive to cognitive aging and to the early stages of Alzheimer disease (AD) in a relatively well-educated sample. Regression models investigating the impact of age, education, and gender on test scores indicate that these variables will need to be incorporated in subsequent normative studies. Future plans include: 1) determining the psychometric properties of the battery; 2) establishing normative data, including norms for different ethnic minority groups; and 3) conducting longitudinal studies on cognitively normal subjects, individuals with mild cognitive impairment, and individuals with AD and other forms of dementia.
doi:10.1097/WAD.0b013e318191c7dd
PMCID: PMC2743984  PMID: 19474567
4.  Considering the base rates of low performance in cognitively healthy older adults improves the accuracy to identify neurocognitive impairment with the Consortium to Establish a Registry for Alzheimer’s Disease-Neuropsychological Assessment Battery (CERAD-NAB) 
It is common for some healthy older adults to obtain low test scores when a battery of neuropsychological tests is administered, which increases the risk of the clinician misdiagnosing cognitive impairment. Thus, base rates of healthy individuals’ low scores are required to more accurately interpret neuropsychological results. At present, this information is not available for the German version of the Consortium to Establish a Registry for Alzheimer’s Disease-Neuropsychological Assessment Battery (CERAD-NAB), a frequently used battery in the USA and in German-speaking Europe. This study aimed to determine the base rates of low scores for the CERAD-NAB and to tabulate a summary figure of cut-off scores and numbers of low scores to aid in clinical decision making. The base rates of low scores on the ten German CERAD-NAB subscores were calculated from the German CERAD-NAB normative sample (N = 1,081) using six different cut-off scores (i.e., 1st, 2.5th, 7th, 10th, 16th, and 25th percentile). Results indicate that high percentages of one or more “abnormal” scores were obtained, irrespective of the cut-off criterion. For example, 60.6 % of the normative sample obtained one or more scores at or below the 10th percentile. These findings illustrate the importance of considering the prevalence of low scores in healthy individuals. The summary figure of CERAD-NAB base rates is an important supplement for test interpretation and can be used to improve the diagnostic accuracy of neurocognitive disorders.
Electronic supplementary material
The online version of this article (doi:10.1007/s00406-014-0571-z) contains supplementary material, which is available to authorized users.
doi:10.1007/s00406-014-0571-z
PMCID: PMC4464368  PMID: 25555899
Neuropsychology; Normal aging; Diagnosis; Neurocognitive disorders; Dementia; Mild cognitive impairment
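The base-rate phenomenon described above has a simple back-of-envelope check: if the ten CERAD-NAB subscores were statistically independent, the probability that a healthy examinee obtains at least one score at or below the 10th percentile would be 1 − 0.9¹⁰ ≈ 65%. The observed 60.6% sits just below this, consistent with positively correlated subscores:

```python
def p_at_least_one_low(n_tests, cutoff):
    """P(>= 1 score at/below the given percentile cutoff), assuming the
    n_tests scores are statistically independent."""
    return 1 - (1 - cutoff) ** n_tests

# 10 subscores, 10th-percentile cutoff -> roughly 0.65 under independence
independent_rate = p_at_least_one_low(10, 0.10)
```

Because real subscores are correlated, empirical base-rate tables such as the study's summary figure remain necessary; the independence formula only shows why "one abnormal score" is expected rather than alarming.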
5.  HIV-associated neurocognitive disorder in Australia: a case of a high-functioning and optimally treated cohort and implications for international neuro HIV research 
Journal of neurovirology  2014;20(3):258-268.
The Australian HIV-infected (HIV+) population is largely comprised of high-functioning men who have sex with men (MSM). Like other English-speaking countries, Australia mostly relies on US neuropsychological normative standards to detect and determine the prevalence of neurological disorders. Whether the US neuropsychological (NP) normative standards are appropriate in Australian HIV+ MSM has not been established. Ninety virally suppressed HIV+ and 49 HIV-uninfected (HIV−) men (respectively 86% and 85% self-reported MSM; mean age 54 and 56 years; mean premorbid verbal IQ estimate 110 and 111) undertook standard NP testing. The raw neuropsychological data were transformed using the following: (1) US standards as uncorrected scaled scores and demographically corrected T scores (US norms); and (2) z scores (without demographic corrections) derived from Australian comparison group scaled scores (local norms). To determine HIV-associated neurocognitive disorder prevalence, we used a standard definition of impairment based upon a battery-wide summary score: the global deficit score (GDS). Impairment classification (GDS ≥ 0.5) based on the local norms was best at discriminating between the two groups (HIV− = 14.3% vs. HIV+ = 53.3%; p < 0.0001). This definition was significantly associated with age. Impairment classification based on the US norms yielded a much lower impairment rate regardless of HIV status (HIV− = 4.1% vs. HIV+ = 14.7%; p = 0.05), but was associated with historical AIDS, and not age. Both types of summary scores were associated with reduced independence in activities of daily living (p ≤ 0.03). Accurate neuropsychological classification of high- (or low-) functioning individuals may need country-specific norms that correct for performance-based (e.g., reading) estimates of premorbid cognition in addition to the traditional demographic factors.
doi:10.1007/s13365-014-0242-x
PMCID: PMC4268870  PMID: 24696363
HIV-associated neurocognitive disorder; Neuropsychological functions; Normative data; HIV/AIDS
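The battery-wide summary used above, the global deficit score (GDS), can be sketched as follows. The banding here (T ≥ 40 contributes 0 deficit points; each 5-point band further below adds one point, capped at 5) follows the commonly published GDS convention and is an assumption about this study's exact implementation:

```python
def deficit_points(t_score):
    """Map a demographically corrected T score (integer) to deficit points:
    0 for T >= 40, then one point per 5-point band below 40, capped at 5."""
    if t_score >= 40:
        return 0
    return min(5, (39 - t_score) // 5 + 1)

def global_deficit_score(t_scores):
    """Average deficit points across the battery; GDS >= 0.5 flags impairment."""
    return sum(deficit_points(t) for t in t_scores) / len(t_scores)

battery = [45, 38, 52, 33, 41, 29]          # toy T scores for six measures
impaired = global_deficit_score(battery) >= 0.5
```

Because the GDS averages deficits rather than counting any single low score, it emphasizes the number and severity of impaired performances, which is why the choice of norms (US vs. local) shifts the resulting impairment rates so strongly.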
6.  Prediction of Driving Ability with Neuropsychological Tests: Demographic Adjustments Diminish Accuracy 
Demographically-adjusted norms are used to enhance accuracy of inferences based on neuropsychological assessment. However, we hypothesized that predictive accuracy regarding complex real-world activities is diminished by demographic corrections. Driving performance was assessed with a standardized on-road test in participants aged 65+ (24 healthy elderly, 26 Alzheimer’s disease, 33 Parkinson’s disease). Neuropsychological measures included Trailmaking A and B, Complex Figure, Benton Visual Retention, and Block Design tests. A multiple regression model with raw neuropsychological scores was significantly predictive of driving errors (R2 = .199, p < .005); a model with demographically-adjusted scores was not (R2 = .113, p > .10). Raw scores also correlated more highly with driving errors than adjusted scores did, both for each individual neuropsychological measure and within the healthy elderly and Parkinson’s subgroups. Demographic corrections diminished predictive accuracy for driving performance, extending findings of Silverberg and Millis (2009) that competency in complex real-world activities depends on ability levels, regardless of demographic considerations.
doi:10.1017/S1355617710000470
PMCID: PMC3152745  PMID: 20441682
aged; age factors; automobile driving; geriatric assessment; Parkinson’s; Alzheimer disease
7.  A Novel Study Paradigm for Long-term Prevention Trials in Alzheimer Disease: The Placebo Group Simulation Approach (PGSA) 
INTRODUCTION
The PGSA (Placebo Group Simulation Approach) aims at avoiding problems of sample representativeness and ethical issues typical of placebo-controlled secondary prevention trials with MCI patients. The PGSA uses mathematical modeling to forecast the distribution of quantified outcomes of MCI patient groups based on their own baseline data established at the outset of clinical trials. These forecasted distributions are then compared with the distribution of actual outcomes observed on candidate treatments, thus substituting for a concomitant placebo group. Here we investigate whether a PGSA algorithm that was developed from the MCI population of ADNI 1 can reliably simulate the distribution of composite neuropsychological outcomes from a larger, independently selected MCI subject sample.
METHODS
Data available from the National Alzheimer’s Coordinating Center (NACC) were used. We included 1523 patients with single or multiple domain amnestic mild cognitive impairment (aMCI) and at least two follow-ups after baseline. In order to strengthen the analysis and to verify whether there was a drift over time in the neuropsychological outcomes, the NACC subject sample was split into 3 subsamples of similar size. The previously described PGSA algorithm for the trajectory of a composite neuropsychological test battery (NTB) score was adapted to the test battery used in NACC. Nine demographic, clinical, biological and neuropsychological candidate predictors were included in a mixed model; this model and its error terms were used to simulate trajectories of the adapted NTB.
RESULTS
The distributions of empirically observed and simulated data after 1, 2 and 3 years were very similar, with some over-estimation of decline in all 3 subgroups. The by far most important predictor of the NTB trajectories is the baseline NTB score. Other significant predictors are the MMSE baseline score and the interactions of time with ApoE4 and FAQ (functional abilities). These are essentially the same predictors as determined for the original NTB score.
CONCLUSION
An algorithm comprising a small number of baseline variables, notably cognitive performance at baseline, forecasts the group trajectory of cognitive decline in subsequent years with high accuracy. The current analysis of 3 independent subgroups of aMCI patients from the NACC database supports the validity of the PGSA longitudinal algorithm for a NTB. Use of the PGSA in long-term secondary AD prevention trials deserves consideration.
PMCID: PMC4268776  PMID: 25530953
Placebo Group Simulation Approach (PGSA); clinical AD trials; phase 3 clinical trials; MCI; modelling AD trajectories; prodromal AD
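The forecasting step described above amounts to: fit a longitudinal model on historical data, then simulate outcome trajectories (fixed effects plus model error) for a new baseline cohort, and use the simulated year-k distribution in place of a concurrent placebo arm. A toy sketch with purely illustrative coefficients (not the published algorithm, which also carries MMSE, ApoE4, and FAQ terms):

```python
import random

def simulate_trajectories(baselines, slope_per_year, resid_sd, years, rng):
    """Forecast each subject's score at years 1..years, adding model error."""
    return [[b + slope_per_year * t + rng.gauss(0, resid_sd)
             for t in range(1, years + 1)]
            for b in baselines]

rng = random.Random(42)
baselines = [rng.gauss(50, 10) for _ in range(500)]       # toy NTB-like scores
sims = simulate_trajectories(baselines, slope_per_year=-1.5,
                             resid_sd=3.0, years=3, rng=rng)

# Simulated "placebo" distribution at year 3; its mean decline should sit
# near 3 * (-1.5) = -4.5 points.
year3 = [traj[2] for traj in sims]
mean_decline = sum(year3) / len(year3) - sum(baselines) / len(baselines)
```

In an actual trial analysis, the observed treated-arm outcome distribution would be compared against the simulated `year3` distribution rather than against a concurrently randomized placebo group.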
8.  MANUSCRIPT IN PRESS: DEMENTIA & GERIATRIC COGNITIVE DISORDERS 
Background
Prior work on the link between blood-based biomarkers and cognitive status has largely been based on dichotomous classifications rather than detailed neuropsychological functioning. The current project was designed to create serum-based biomarker algorithms that predict neuropsychological test performance.
Methods
A battery of neuropsychological measures was administered. Random forest analyses were utilized to create neuropsychological test-specific biomarker risk scores in a training set that were entered into linear regression models predicting the respective test scores in the test set. Serum multiplex biomarker data were analyzed on 108 proteins from 395 participants (197 AD cases and 198 controls) from the Texas Alzheimer’s Research and Care Consortium.
Results
The biomarker risk scores were significant predictors (p<0.05) of scores on all neuropsychological tests. With the exception of premorbid intellectual status (6.6%), the biomarker risk scores alone accounted for a minimum of 12.9% of the variance in neuropsychological scores. Biomarker algorithms (biomarker risk scores + demographics) accounted for substantially more variance in scores. Review of the variable importance plots indicated differential patterns of biomarker significance for each test, suggesting the possibility of domain-specific biomarker algorithms.
Conclusions
Our findings provide proof-of-concept for a novel area of scientific discovery, which we term “molecular neuropsychology.”
doi:10.1159/000345605
PMCID: PMC4400831  PMID: 24107792
Neuropsychology; Biomarkers; Algorithms; Molecular; Psychology
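The two-stage design above (build a biomarker risk score on a training split, then enter that single score as a predictor of each neuropsychological test in a held-out split) can be sketched with synthetic data. Simple covariance weighting stands in here for the paper's random-forest step:

```python
import random

def cov(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def r_squared(xs, ys):
    """R^2 of the simple regression of ys on xs."""
    return cov(xs, ys) ** 2 / (cov(xs, xs) * cov(ys, ys))

rng = random.Random(1)
# Synthetic cohort: two serum markers and a test score driven by both.
m1 = [rng.gauss(0, 1) for _ in range(400)]
m2 = [rng.gauss(0, 1) for _ in range(400)]
score = [2 * a - b + rng.gauss(0, 1) for a, b in zip(m1, m2)]

train, test = slice(0, 200), slice(200, 400)
# Stage 1 (training split): weight each marker by its covariance with the
# outcome -- a crude linear stand-in for the random-forest risk score.
w1, w2 = cov(m1[train], score[train]), cov(m2[train], score[train])
risk = [w1 * a + w2 * b for a, b in zip(m1, m2)]
# Stage 2 (held-out split): variance in the test score explained by the
# risk score alone, analogous to the paper's 12.9%+ figures.
variance_explained = r_squared(risk[test], score[test])
```

The paper's variable-importance plots correspond, in this sketch, to inspecting how much each marker contributes to the risk score for each neuropsychological test.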
9.  Age and education effects and norms on a cognitive test battery from a population-based cohort: The Monongahela-Youghiogheny Healthy Aging Team (MYHAT) 
Aging & mental health  2010;14(1):100-107.
Objectives
Performance on cognitive tests can be affected by age, education, and also selection bias. We examined the distribution of scores on several cognitive screening tests by age and educational level in a population-based cohort.
Method
An age-stratified random sample of individuals aged 65+ years was drawn from the electoral rolls of an urban U.S. community. Those obtaining age- and education-corrected scores ≥ 21/30 on the Mini-Mental State Examination were designated as cognitively normal or only mildly impaired, and underwent a full assessment including a battery of neuropsychological tests. Participants were also rated on the Clinical Dementia Rating (CDR) scale. The distribution of neuropsychological test scores within demographic strata, among those receiving a CDR of 0 (no dementia), is reported here as cognitive test norms. After combining individual test scores into cognitive domain composite scores, multiple linear regression models were used to examine associations of cognitive test performance with age and education.
Results
In this cognitively normal sample of older adults, younger age and higher education were associated with better performance in all cognitive domains. Age and education together explained 22% of the variation of memory, and less of executive function, language, attention, and visuospatial function.
Conclusion
Older age and lesser education are differentially associated with worse neuropsychological test performance in cognitively normal older adults representative of the community at large. The distribution of scores in these participants can serve as population-based norms for these tests, and be especially useful to clinicians and researchers assessing older adults outside specialty clinic settings.
doi:10.1080/13607860903071014
PMCID: PMC2828360  PMID: 20155526
Neuropsychological tests; epidemiology; normative; community
10.  Demographically Corrected Normative Standards for the English Version of the NIH Toolbox Cognition Battery 
Demographic factors impact neuropsychological test performances and accounting for them may help to better elucidate current brain functioning. The NIH Toolbox Cognition Battery (NIHTB-CB) is a novel neuropsychological tool, yet the original norms developed for the battery did not adequately account for important demographic/cultural factors known to impact test performances. We developed norms fully adjusting for all demographic variables within each language group (English and Spanish) separately. The current study describes the standards for individuals tested in English. Neurologically healthy adults (n = 1038) and children (n = 2917) who completed the NIH Toolbox norming project in English were included. We created uncorrected scores weighted to the 2010 Census demographics, and applied polynomial regression models to develop age-corrected and fully demographically adjusted (age, education, sex, race/ethnicity) scores for each NIHTB-CB test and composite (i.e., Fluid, Crystallized, and Total Composites). On uncorrected NIHTB-CB scores, age and education demonstrated significant, medium-to-large associations, while sex showed smaller, but statistically significant effects. In terms of race/ethnicity, a significant stair-step effect on uncorrected NIHTB-CB scores was observed (African American
doi:10.1017/S1355617715000351
PMCID: PMC4490030  PMID: 26030001
Neuropsychological test; Norms; Psychometrics; Assessment; Cross-cultural; Cognition
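The norming arithmetic described above can be illustrated in miniature: regress raw scores on demographics in the normative sample (here a single linear age term; the NIHTB-CB work uses polynomial models with education, sex, and race/ethnicity, plus census weighting), then re-express a new examinee's score as a standardized residual on the T-score metric (mean 50, SD 10). The normative data are invented:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def adjusted_t(raw, age, norm_ages, norm_raws):
    """T score of the residual after removing the normative age trend."""
    a, b = fit_line(norm_ages, norm_raws)
    resid = [y - (a + b * x) for x, y in zip(norm_ages, norm_raws)]
    sd = (sum(r * r for r in resid) / (len(resid) - 2)) ** 0.5
    return 50 + 10 * (raw - (a + b * age)) / sd

norm_ages = [60, 65, 70, 75, 80]
norm_raws = [31, 27, 27, 23, 22]        # toy normative raw scores
t_same_raw_young = adjusted_t(25, 62, norm_ages, norm_raws)
t_same_raw_old = adjusted_t(25, 78, norm_ages, norm_raws)
```

The same raw score of 25 maps to a low T for a 62-year-old but a high T for a 78-year-old, which is precisely the comparison-to-demographic-peers that adjusted norms are meant to provide.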
Epilepsy & behavior : E&B  2015;47:45-54.
Reliable change index scores (RCIs) and standardized regression-based change score norms (SRBs) permit evaluation of meaningful changes in test scores following treatment interventions, like epilepsy surgery, while accounting for test-retest reliability, practice effects, score fluctuations due to error, and relevant clinical and demographic factors. Although these methods are frequently used to assess cognitive change after epilepsy surgery in adults, they have not been widely applied to examine cognitive change in children with epilepsy. The goal of the current study was to develop RCIs and SRBs for use in children with epilepsy. Sixty-three children with epilepsy (age range 6–16; M=10.19, SD=2.58) underwent comprehensive neuropsychological evaluations at two time points an average of 12 months apart. Practice-adjusted RCIs and SRBs were calculated for all cognitive measures in the battery. Practice effects were quite variable across the neuropsychological measures, with the greatest differences observed among older children, particularly on the Children’s Memory Scale and Wisconsin Card Sorting Test. There was also notable variability in test-retest reliabilities across measures in the battery, with coefficients ranging from 0.14 to 0.92. RCIs and SRBs for use in assessing meaningful cognitive change in children following epilepsy surgery are provided for measures with reliability coefficients above 0.50. This is the first study to provide RCIs and SRBs for a comprehensive neuropsychological battery based on a large sample of children with epilepsy. Tables to aid in evaluating cognitive changes in children who have undergone epilepsy surgery are provided for clinical use. An Excel sheet to perform all relevant calculations is also available to interested clinicians or researchers.
doi:10.1016/j.yebeh.2015.04.052
PMCID: PMC4475419  PMID: 26043163
epilepsy; reliable change indices; standardized regression-based change score norms; children; neuropsychology
Arthritis care & research  2013;65(3):481-486.
OBJECTIVE
Research shows a gap between perceived cognitive dysfunction and objective neuropsychological performance in persons with chronic diseases. We explored this relationship in persons with rheumatoid arthritis (RA).
METHODS
Individuals from a longitudinal cohort study of RA participated in a study visit that included physical, psychosocial, and biological metrics. Subjective cognitive dysfunction was assessed using the Perceived Deficits Questionnaire (PDQ; 0–20, higher scores = greater perceived impairment). Objective cognitive impairment was assessed using a battery of 12 standardized neuropsychological measures yielding 16 indices. On each test, subjects were classified as ‘impaired’ if they performed 1 SD below age-based population norms. Total cognitive function scores were calculated by summing the transformed scores (0–16, higher scores = greater impairment). Multiple linear regression analyses determined the relationship of total cognitive function score with PDQ score, controlling for gender, race, marital status, income, education, disease duration, disease severity, depression, and fatigue.
RESULTS
120 subjects (mean ± SD age: 58.5 ± 11.0 years) were included. Mean ± SD scores of total cognitive function and PDQ were 2.5 ± 2.2 (0–10) and 5.8 ± 3.8 (0–16), respectively. In multivariate analysis, there was no significant relationship between total cognitive function score and PDQ score. However, depression and fatigue (β = 0.31, p < 0.001; β = 0.31, p = 0.001) were significantly associated with PDQ score.
CONCLUSION
The findings emphasize the gap between subjective and objective measures of cognitive impairment and the importance of considering psychological factors within the context of cognitive complaints in clinical settings.
doi:10.1002/acr.21814
PMCID: PMC3786333  PMID: 22899659
Current aging science  2012;5(2):131-135.
Evidence links diabetes mellitus to cognitive impairment and increased risk of Alzheimer's disease (AD) and suggests that insulin therapy improves cognition. With an increasing percentage of the US elderly population at high risk for diabetes and AD, the evidence of an association between diabetes and poor cognition in non-demented elderly may have implications for diagnosis, prevention and treatment of cognitive decline including AD.
In our study, we hypothesized that diabetic elders with normal cognition would demonstrate poorer cognitive outcomes than non-diabetic elders and that diabetic elders receiving diabetes treatment would demonstrate better outcomes than those not receiving treatment.
Data were evaluated from the National Alzheimer's Coordinating Center's Uniform Data Set (UDS). The UDS consists of clinical and neuropsychological assessments of a sample of elderly research subjects recruited from thirty-one Alzheimer's Disease Centers nationwide. The UDS provides a unique opportunity to study cognition in a nationally recruited sample with structured neuropsychological tests.
We examined the impact of diabetes and diabetes treatment on cognitive measures in 3421 elderly research subjects with normal cognition assessed from 2005-2007. We performed linear regression analyses to compare cognitive scores between diabetic subjects and non-diabetic subjects. Diabetic subjects had lower scores than non-diabetic subjects on measures of attention, psychomotor function, and executive function, but showed no differences in memory or semantic memory/language. There was no association between diabetes treatment and cognitive scores.
These subtle but significant cognitive deficits in diabetic subjects compared to non-diabetic subjects may contribute to difficulty with compliance with complex diabetes medication regimens. A specific role of diabetes as a risk for cognitive impairment will require longitudinal study.
PMCID: PMC3659164  PMID: 22023096
Diabetes; Cognition; Alzheimer's; Elderly
The CLOX test is a neuropsychological measure intended to aid in the assessment and detection of dementia in elderly populations. Few studies have provided normative data for this measure, with even less research available regarding the impact of socio-demographic factors on test scores. This study presents normative data for the CLOX in a sample of English- and Spanish-speaking Hispanic and non-Hispanic Whites. The total sample included 445 cognitively healthy older adults seen as part of an ongoing study of rural cognitive aging, Project FRONTIER. Unlike previous studies, criteria for “normality” (i.e., unimpaired) for CLOX1 and CLOX2 were based not merely on global impairment, but also on domain-specific impairment of executive functioning on the EXIT25 and/or Trail Making Test B (Trails B), or visuospatial/constructional impairment on the Line Orientation and Figure Copy subtests of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), respectively. Hierarchical regression analyses revealed that CLOX1 scores require adjustment by Age across ethnicities, while Education and Gender are necessary stratification markers for CLOX1 performance only in non-Hispanic Whites. None of the demographic variables were valid predictors of CLOX2 performance, negating the need for such adjustments. In addition to being the first study to provide separate normative data for CLOX performance in Hispanic and non-Hispanic White samples, the current study offers a novel approach to defining “normal” by cognitive domain. We also highlight the need to directly examine the impact of socio-demographic factors before applying normative corrections based on factors that have negligible impact on test scores.
doi:10.1002/gps.2810
PMCID: PMC4142441  PMID: 22052628
Executive functioning; visuospatial skills; norms; geriatrics; cognition
PLoS Medicine  2014;11(4):e1001633.
Radek Bukowski and colleagues conducted a case-control study in 59 US hospitals to determine the relationship between fetal growth and stillbirth, and found that both restrictive and excessive growth could play a role.
Please see later in the article for the Editors' Summary
Background
Stillbirth is strongly related to impaired fetal growth. However, the relationship between fetal growth and stillbirth is difficult to determine because of uncertainty in the timing of death and confounding characteristics affecting normal fetal growth.
Methods and Findings
We conducted a population-based case–control study of all stillbirths and a representative sample of live births in 59 hospitals in five geographic areas in the US. Fetal growth abnormalities were categorized as small for gestational age (SGA) (<10th percentile) or large for gestational age (LGA) (>90th percentile) at death (stillbirth) or delivery (live birth) using population, ultrasound, and individualized norms. Gestational age at death was determined using an algorithm that considered the time-of-death interval, postmortem examination, and reliability of the gestational age estimate. Data were weighted to account for the sampling design and differential participation rates in various subgroups. Among 527 singleton stillbirths and 1,821 singleton live births studied, stillbirth was associated with SGA based on population, ultrasound, and individualized norms (odds ratio [OR] [95% CI]: 3.0 [2.2 to 4.0]; 4.7 [3.7 to 5.9]; 4.6 [3.6 to 5.9], respectively). LGA was also associated with increased risk of stillbirth using ultrasound and individualized norms (OR [95% CI]: 3.5 [2.4 to 5.0]; 2.3 [1.7 to 3.1], respectively), but not population norms (OR [95% CI]: 0.6 [0.4 to 1.0]). The associations were stronger with more severe SGA and LGA (<5th and >95th percentile). Analyses adjusted for stillbirth risk factors, subset analyses excluding potential confounders, and analyses in preterm and term pregnancies showed similar patterns of association. In this study 70% of cases and 63% of controls agreed to participate. Analysis weights accounted for differences between consenting and non-consenting women. Some of the characteristics used for individualized fetal growth estimates were missing and were replaced with reference values. However, a sensitivity analysis using individualized norms based on the subset of stillbirths and live births with non-missing variables showed similar findings.
Conclusions
Stillbirth is associated with both growth restriction and excessive fetal growth. These findings suggest that, contrary to current practices and recommendations, stillbirth prevention strategies should focus on both severe SGA and severe LGA pregnancies.
Editors' Summary
Background
Pregnancy is usually a happy time, when the parents-to-be anticipate the arrival of a new baby. But, sadly, about 20% of pregnancies end in miscarriage—the early loss of a fetus (developing baby) that is unable to survive independently. Other pregnancies end in stillbirth—fetal death after 20 weeks of pregnancy (in the US; after 24 weeks in the UK). Stillbirths, like miscarriages, are common. In the US, for example, one in every 160 pregnancies ends in stillbirth. How women discover that their unborn baby has died varies. Some women simply know something is wrong and go to hospital to have their fears confirmed. Others find out when a routine check-up detects no fetal heartbeat. Most women give birth naturally after their baby has died, but if the mother's health is at risk, labor may be induced. Common causes of stillbirth include birth defects and infections. Risk factors for stillbirth include being overweight and smoking during pregnancy.
Why Was This Study Done?
Stillbirths are often associated with having a “small for gestational age” (SGA) fetus. Gestation is the period during which a baby develops in its mother's womb. Gestational age is estimated from the date of the woman's last menstrual period and/or from ultrasound scans. An SGA fetus is lighter than expected for its age based on observed distributions (norms) of fetal weights for gestational age. Although stillbirth is clearly associated with impaired fetal growth, the exact relationship between fetal growth and stillbirth remains unclear for two reasons. First, studies investigating this relationship have used gestational age at delivery rather than gestational age at death as an estimate of fetal age, which overestimates the gestational age of stillbirths and leads to errors in estimates of the proportions of SGA and “large for gestational age” (LGA) stillbirths. Second, many characteristics that affect normal fetal growth are also associated with the risk of stillbirth, and this has not been allowed for in previous studies. In this population-based case–control study, the researchers investigate the fetal growth abnormalities associated with stillbirth using a new approach to estimate gestational age and accounting for the effect of characteristics that affect both fetal growth and stillbirth. A population-based case–control study compares the characteristics of patients with a condition in a population with those of unaffected people in the same population.
What Did the Researchers Do and Find?
The researchers investigated all the stillbirths and a sample of live births that occurred over 2.5 years at 59 hospitals in five US regions. They used a formula developed by the Stillbirth Collaborative Research Network to calculate the gestational age at death of the stillbirths. They categorized fetuses as SGA if they had a weight for gestational age within the bottom 10% (below the 10th percentile) of the population and as LGA if they had a weight for gestational age above the 90th percentile at death (stillbirth) or delivery (live birth) using population, ultrasound, and individualized norms of fetal weight for gestational age. Population norms incorporate weights for gestational age from normal pregnancies and from pregnancies complicated by growth abnormalities, whereas the other two norms include weights for gestational age from normal pregnancies only. Having an SGA fetus was associated with a 3- to 4-fold increased risk of stillbirth compared to having a fetus with “appropriate” weight for gestational age based on all three norms. LGA was associated with an increased risk of stillbirth based on the ultrasound and individualized norms but not the population norms. Being more severely SGA or LGA (below the 5th percentile or above the 95th percentile) was associated with an increased risk of stillbirth.
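The percentile-based SGA/LGA classification described above can be sketched as a small function. The cutoff weights below are invented placeholders, not values from any published norm; a real analysis would substitute population, ultrasound, or individualized norms of fetal weight for gestational age.

```python
# Hypothetical 10th/90th percentile fetal weights (grams) by
# gestational week -- placeholder values for illustration only.
NORMS = {
    36: (2100, 3150),
    37: (2350, 3390),
    38: (2550, 3600),
}

def classify_growth(weight_g, gest_week, norms=NORMS):
    """Return 'SGA' below the 10th percentile, 'LGA' above the 90th,
    otherwise 'AGA' (appropriate for gestational age)."""
    p10, p90 = norms[gest_week]
    if weight_g < p10:
        return "SGA"
    if weight_g > p90:
        return "LGA"
    return "AGA"
```

For stillbirths, the gestational age passed in would be the estimated age at death rather than at delivery, which is the correction at the heart of the study.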
What Do These Findings Mean?
These findings indicate that, when the time of death is accounted for and norms for weight for gestational age only from uncomplicated pregnancies are used, stillbirth is associated with both restricted and excessive fetal growth. Overall, abnormal fetal growth was identified in 25% of stillbirths using population norms and in about 50% of stillbirths using ultrasound or individualized norms. Although the accuracy of these findings is likely to be affected by aspects of the study design, these findings suggest that, contrary to current practices, strategies designed to prevent stillbirth should focus on identifying both severely SGA and severely LGA fetuses and should use norms for the calculation of weight for gestational age based on normal pregnancies only. Such an approach has the potential to identify almost half of the pregnancies likely to result in stillbirth.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001633.
The March of Dimes, a nonprofit organization for pregnancy and baby health, provides information on stillbirth
Tommy's, a UK nonprofit organization that funds research into stillbirth, premature birth, and miscarriage and provides information for parents-to-be, also provides information on stillbirth (including personal stories)
The UK National Health Service Choices website provides information about stillbirth (including a video about dealing with grief after a stillbirth)
MedlinePlus provides links to other resources about stillbirth (in English and Spanish)
Information about the Stillbirth Collaborative Research Network is available
doi:10.1371/journal.pmed.1001633
PMCID: PMC3995658  PMID: 24755550
PLoS ONE  2015;10(9):e0138095.
Background
Trials in Alzheimer’s disease are increasingly focusing on prevention in asymptomatic individuals. This poses a challenge for examining treatment effects, since currently available approaches are often unable to detect cognitive and functional changes in asymptomatic individuals. The resulting small effect sizes force randomized controlled trials (RCTs) to rely on biomarkers or secondary measures and to enroll large samples. Better assessment approaches and outcomes capable of capturing subtle changes during asymptomatic disease stages are needed.
Objective
We aimed to develop a new approach to track changes in functional outcomes by using individual-specific distributions (as opposed to group-norms) of unobtrusive continuously monitored in-home data. Our objective was to compare sample sizes required to achieve sufficient power to detect prevention trial effects in trajectories of outcomes in two scenarios: (1) annually assessed neuropsychological test scores (a conventional approach), and (2) the likelihood of having subject-specific low performance thresholds, both modeled as a function of time.
Methods
One hundred nineteen cognitively intact subjects were enrolled and followed over 3 years in the Intelligent Systems for Assessing Aging Change (ISAAC) study. Using the difference in empirically identified time slopes between those who remained cognitively intact during follow-up (normal controls, NC) and those who transitioned to mild cognitive impairment (MCI), we estimated the sample sizes required to achieve up to 80% statistical power, over a range of effect sizes, for detecting reductions in the difference in time slopes between the NC group and incident MCI cases before their transition.
Results
Sample size estimates indicated that approximately 2000 subjects followed for 4 years would be needed to detect a 30% effect size when the outcome is an annually assessed memory test score. When the outcome is the likelihood of low walking speed, defined using the individual-specific distributions of walking speed collected at baseline, 262 subjects are required; for computer use, 26 subjects are required.
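A crude way to see how the detectable slope difference drives the required sample is the standard two-group z-approximation for comparing mean slopes. The variance values in the example are illustrative, not the estimates used in the study.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Subjects per group needed to detect a difference `delta` in mean
    time slopes between two groups, given between-subject standard
    deviation `sd` of the slope estimates (two-sided z-approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)
```

Halving the detectable slope difference roughly quadruples the required sample, which is why outcomes with larger standardized slope separation, such as the in-home measures above, need far fewer subjects.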
Conclusions
Individual-specific thresholds of low functional performance based on high-frequency in-home monitoring data distinguish trajectories of MCI from NC and could substantially reduce sample sizes needed in dementia prevention RCTs.
doi:10.1371/journal.pone.0138095
PMCID: PMC4574479  PMID: 26379170
Neurology  2009;73(5):342-348.
Objective:
To rigorously evaluate the time course of cognitive change in a cohort of individuals with HIV-associated neurocognitive disorders (HAND) initiating combination antiretroviral therapy (CART), and to investigate which demographic, laboratory, and treatment factors are associated with neuropsychological (NP) outcome (or “any NP improvement”).
Methods:
Study participants included 37 HIV+ individuals with mild to moderate NP impairment who initiated CART and underwent NP testing at 12, 24, 36, and 48 weeks thereafter. NP change was assessed using a regression-based change score that was normed on a separate NP-stable group thereby controlling for regression toward the mean and practice effect. Mixed-effect regression models adjusting for loss to follow-up were used to evaluate the time course of cognitive change and its association with baseline and time-varying predictors.
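A regression-based change score of this kind can be sketched as follows. The toy normative sample and plain least-squares fit are illustrative stand-ins for the study's actual norming procedure on its NP-stable group.

```python
import numpy as np

def make_change_score(norm_baseline, norm_followup):
    """Fit follow-up ~ baseline in a normative (NP-stable) group, then
    score an individual's observed follow-up as a z-score relative to
    the predicted value. This controls for practice effects and
    regression toward the mean."""
    slope, intercept = np.polyfit(norm_baseline, norm_followup, 1)
    resid = norm_followup - (intercept + slope * norm_baseline)
    see = resid.std(ddof=2)  # standard error of estimate
    def z(baseline_i, followup_i):
        return (followup_i - (intercept + slope * baseline_i)) / see
    return z
```

An individual whose follow-up score lands well above the prediction (a large positive z) would count as improved beyond practice effects.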
Results:
In persons with HAND initiating CART, cognitive improvement can occur as early as 12 weeks after initiation (13% of participants), but is more common at 24, 36, and 48 weeks (up to 41%), with fewer than 5% demonstrating significant worsening. In multivariate analyses, unique predictors of NP improvement included more severe baseline NP impairment and a higher CART CNS penetration index. Greater viral load decrease was associated with NP improvement only in univariate analyses.
Conclusion:
Clinically meaningful neuropsychological improvement seemed to peak around 24–36 weeks after combination antiretroviral therapy initiation and was sustained over the 1-year study period. This study also provides new evidence that benefit may be maximized by choosing antiretroviral medications that reach therapeutic concentrations in the CNS.
GLOSSARY
ANI = asymptomatic neurocognitive impairment;
CART = combination antiretroviral therapy;
CI = confidence interval;
CIT = Cognitive Intervention Trial;
CPE = CNS penetration effectiveness;
GDS = Global Deficit Score;
IQR = interquartile range;
HAD = HIV-associated dementia;
HAND = HIV-associated neurocognitive disorders;
MND = mild neurocognitive disorder;
= mean scaled score regression-based change score;
NP = neuropsychological.
doi:10.1212/WNL.0b013e3181ab2b3b
PMCID: PMC2725933  PMID: 19474412
Use of neuropsychological tests to identify HIV-associated neurocognitive dysfunction must involve normative standards that are well suited to the population of interest. Norms should be based on a population of HIV-uninfected individuals as closely matched to the HIV-infected group as possible, and must include examination of the potential effects of demographic factors on test performance. This is the first study to determine the normal range of scores on measures of psychomotor speed and executive function among a large group of ethnically and educationally diverse HIV-uninfected, high-risk women, as well as their HIV-infected counterparts. Participants (n = 1653) were administered the Trail Making Test Parts A and B (Trails A and Trails B), the Symbol Digit Modalities Test (SDMT), and the Wide Range Achievement Test-3 (WRAT-3). Among HIV-uninfected women, race/ethnicity accounted for almost 5% of the variance in cognitive test performance. The proportions of variance in cognitive test performance accounted for by age (13.8%), years of school (4.1%), and WRAT-3 score (11.5%) were each significant, but did not completely account for the effect of race (3%). HIV-infected women obtained lower scores than HIV-uninfected women on time to complete Trails A and B, SDMT total correct, and SDMT incidental recall, but after adjustment for age, years of education, racial/ethnic classification, and reading level, only the difference on SDMT total correct remained significant. Results highlight the need to adjust for demographic variables when diagnosing cognitive impairment in HIV-infected women. Advantages of demographically adjusted regression equations developed using data from HIV-uninfected women are discussed.
doi:10.1080/13803395.2010.547662
PMCID: PMC3383771  PMID: 21950512
Objective
There is a need to identify a cognitive composite that is sensitive to tracking preclinical AD decline to be used as a primary endpoint in treatment trials.
Method
We capitalized on longitudinal data, collected from 1995 to 2010, from cognitively unimpaired presenilin 1 (PSEN1) E280A mutation carriers in the world’s largest known early-onset autosomal dominant AD (ADAD) kindred to identify a composite cognitive test with the greatest statistical power to track preclinical AD decline, and to estimate the number of carriers age 30 and older needed to detect a treatment effect in the Alzheimer’s Prevention Initiative’s (API) preclinical AD treatment trial. Mean-to-standard-deviation ratios (MSDRs) of change over time were calculated in a search for the optimal combination of one to seven cognitive tests/sub-tests drawn from the neuropsychological test battery in cognitively unimpaired mutation carriers over two- and five-year follow-up periods, using data from non-carriers during the same time period to correct for aging and practice effects. Combinations that performed well were then evaluated for robustness across follow-up years, occurrence of selected items within top-performing combinations, and representation of relevant cognitive domains.
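The MSDR search described above can be sketched as a brute-force scan over test combinations. The arrays below are toy stand-ins for z-scored annual change, and the sketch omits the aging/practice-effect correction from non-carriers.

```python
import numpy as np
from itertools import combinations

def msdr(change):
    """Mean-to-standard-deviation ratio of a vector of change scores."""
    return change.mean() / change.std(ddof=1)

def best_composite(changes, max_size=3):
    """Scan all combinations of up to `max_size` tests and return the
    combination whose averaged change score has the largest |MSDR|."""
    best_score, best_combo = 0.0, None
    names = sorted(changes)
    for k in range(1, max_size + 1):
        for combo in combinations(names, k):
            comp = np.mean([changes[n] for n in combo], axis=0)
            score = abs(msdr(comp))
            if score > best_score:
                best_score, best_combo = score, combo
    return best_combo, best_score
```

Larger MSDRs translate directly into smaller required sample sizes, which is why the composite search targets this ratio.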
Results
This optimal test combination included CERAD Word List Recall, CERAD Boston Naming Test (high-frequency items), MMSE Orientation to Time, CERAD Constructional Praxis, and Raven’s Progressive Matrices (Set A), with an MSDR of 1.62. This composite is more sensitive than either the CERAD Word List Recall alone (MSDR = 0.38) or the entire CERAD-Col battery (MSDR = 0.76). A sample size of 75 cognitively normal PSEN1 E280A mutation carriers age 30 and older per treatment arm allows for a detectable treatment effect of 29% in a 60-month trial (80% power, p = 0.05).
Conclusions
We have identified a composite cognitive test score, representing multiple cognitive domains, that has improved power compared with the most sensitive single test item to track preclinical AD decline in ADAD mutation carriers and to evaluate preclinical AD treatments. This API composite cognitive test score will be used as the primary endpoint in the first API trial in cognitively unimpaired ADAD carriers within 15 years of their estimated age at clinical onset. We have independently confirmed our findings in a separate cohort of cognitively healthy older adults who progressed to the clinical stages of late-onset AD, described in a separate report, and we continue to refine the composite in independent cohorts and to compare it with other analytical approaches.
doi:10.4088/JCP.13m08927
PMCID: PMC4331113  PMID: 24816373
composite cognitive score; API; Alzheimer’s Prevention Initiative; E280A; PSEN1; presenilin1; sample size; preclinical; cognitively unimpaired; autosomal dominant; ADAD
Introduction
Electroencephalography (EEG) microstates and brain networks are altered in patients with Alzheimer’s disease (AD) and are discussed as potential biomarkers for AD. Microstates correspond to defined states of brain activity, and their connectivity patterns may change accordingly. Little is known about alteration of connectivity within microstates, especially in patients with amnestic mild cognitive impairment (aMCI) whose cognition remains stable or improves over 30 months.
Methods
Thirty-five outpatients with aMCI or mild dementia (mean age 77 ± 7 years, 47% male, Mini-Mental State Examination score ≥24) had comprehensive neuropsychological and clinical examinations. Subjects with cognitive decline over 30 months were allocated to the AD group; subjects with stable or improving cognition, to the MCI-stable group. Results of neuropsychological testing at baseline were summarized in six domain scores. Resting-state EEG was recorded with 256 electrodes and analyzed using TAPEEG. Five microstates were defined and fitted to the individual data. After phase transformation, the phase lag index (PLI) was calculated for the five microstates in every subject. Networks were reduced to 22 nodes for statistical analysis.
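The phase lag index can be sketched with a numpy-only analytic-signal computation. This is a generic PLI implementation, not the TAPEEG pipeline, and it omits the band-pass filtering into the theta band that a real analysis would apply first.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero out negative frequencies and
    double positive ones to obtain the complex analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def phase_lag_index(x, y):
    """PLI: absolute mean sign of the instantaneous phase difference.
    0 means no consistent lag; 1 means a perfectly consistent lag."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return abs(np.mean(np.sign(np.sin(dphi))))
```

Because the PLI discards zero-lag phase differences, it is relatively insensitive to volume conduction, which is why it is favored for scalp EEG connectivity.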
Results
The domain score for verbal learning and memory and the microstate-segmented PLI between the left centro-lateral and parieto-occipital regions in the theta band at baseline differentiated significantly between the groups. In the present sample, a logistic regression model combining them separated AD from MCI-stable with 100% positive predictive value, 60% negative predictive value, 100% specificity, and 77% sensitivity.
Conclusions
Combining neuropsychological and quantitative EEG test results allows differentiation between subjects with aMCI remaining stable and subjects with aMCI deteriorating over 30 months.
Electronic supplementary material
The online version of this article (doi:10.1186/s13195-015-0163-9) contains supplementary material, which is available to authorized users.
doi:10.1186/s13195-015-0163-9
PMCID: PMC4697314  PMID: 26718102
Background
There is growing interest in the evaluation of preclinical Alzheimer’s disease (AD) treatments. As a result, there is a need to identify a cognitive composite that is sensitive to tracking preclinical AD decline to be used as a primary endpoint in treatment trials.
Methods
Longitudinal data from initially cognitively normal participants aged 70–85 years in three cohort studies of aging and dementia from the Rush Alzheimer’s Disease Center were examined to empirically define a composite cognitive endpoint that is sensitive to detecting and tracking cognitive decline prior to the onset of cognitive impairment. Mean-to-standard-deviation ratios (MSDRs) of change over time were calculated in a search for the optimal combination of cognitive tests/sub-tests drawn from the neuropsychological battery in cognitively normal participants who subsequently progressed to clinical stages of AD over two- and five-year periods, using data from those who remained unimpaired during the same time period to correct for aging and practice effects. Combinations that performed well were then evaluated for representation of relevant cognitive domains, robustness across individual years prior to diagnosis, and occurrence of selected items within top-performing combinations.
Results
The optimal composite cognitive test score comprises 7 cognitive tests/sub-tests, with an MSDR of 0.964. By comparison, the most sensitive individual test score, Logical Memory – Delayed Recall, has an MSDR of 0.64.
Conclusions
We have identified a composite cognitive test score representing multiple cognitive domains that has improved power compared to the most sensitive single test item to track preclinical AD decline and evaluate preclinical AD treatments. We are confirming the power of the composite in independent cohorts, and with other analytical approaches, which may result in refinements, and have designated it as the primary endpoint in the Alzheimer’s Prevention Initiative’s preclinical treatment trials for individuals at high imminent risk for developing symptoms due to late-onset AD.
doi:10.1016/j.jalz.2014.02.002
PMCID: PMC4201904  PMID: 24751827
Introduction
Novel compounds with the potential to slow or stop the progression of Alzheimer's disease (AD) from its presymptomatic stage to dementia are being tested in humans. The study design commonly used is the long-term randomized, placebo-controlled trial (RPCT), meaning that many patients will receive placebo for 18 months or longer. It is ethically problematic to expose presymptomatic AD patients, who by definition are at risk of developing dementia, to prolonged placebo treatment. As an alternative to long-term RPCTs we propose a novel clinical study design, termed the placebo group simulation approach (PGSA), which uses mathematical models to forecast outcomes of presymptomatic AD patients from their own baseline data. Forecasted outcomes are compared with outcomes observed on candidate drugs, thus replacing a concomitant placebo group.
Methods
Initial models were constructed using mild cognitive impairment (MCI) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. One outcome is the Alzheimer's Disease Assessment Scale - cognitive subscale (ADAScog) score after 24 months, predicted in a linear regression model; the other is the trajectory over 36 months of a composite neuropsychological test score (Neuro-Psychological Battery, NP-Batt), modeled with a mixed model. Demographic, clinical, biological, and neuropsychological baseline values were tested as potential predictors in both models.
Results
ADAScog scores after 24 months are predicted from gender, obesity, Functional Assessment Questionnaire (FAQ) and baseline scores of Mini-Mental State Examination, ADAScog and NP-Batt with an R2 of 0.63 and a residual standard deviation of 0.67, allowing reasonably precise estimates of sample means. The model of the NP-Batt trajectory has random intercepts and slopes and fixed effects for body mass index, time, apolipoprotein E4, age, FAQ, baseline scores of ADAScog and NP-Batt, and four interaction terms. Estimates of the residual standard deviation range from 0.3 to 0.5 on a standard normal scale. If novel drug candidates are expected to diminish the negative slope of scores with time, a change of 0.04 per year could be detected in samples of 400 with a power of about 80%.
Conclusions
First PGSA models derived from ADNI MCI data allow prediction of cognitive endpoints and trajectories that correspond well with real observed values. Corroboration of these models with data from other observational studies is ongoing. It is suggested that the PGSA may complement RPCT designs in forthcoming long-term drug studies with presymptomatic AD individuals.
doi:10.1186/alzrt68
PMCID: PMC3226271  PMID: 21418632
Schizophrenia Bulletin  2006;32(4):679-691.
Current methods for statistical analysis of neuropsychological test data in schizophrenia are inherently insufficient for revealing valid cognitive impairment profiles. While neuropsychological tests aim to selectively sample discrete cognitive domains, test performance often requires several cognitive operations or “attributes.” Conventional statistical approaches assign each neuropsychological score of interest to a single attribute or “domain” (e.g., attention, executive, etc.), and scores are calculated for each. This can yield misleading information about underlying cognitive impairments. We report findings applying a new method for examining neuropsychological test data in schizophrenia, based on finite partially ordered sets (posets) as classification models.
A total of 220 schizophrenia outpatients were administered the Positive and Negative Symptom Scale (PANSS) and a neuropsychological test battery. Selected tests were submitted to cognitive attribute analysis a priori by two neuropsychologists. Applying Bayesian classification methods (posets), each patient was classified with respect to proficiency on the underlying attributes, based upon his or her individual test performance pattern.
Twelve cognitive “classes” are described in the sample. Resulting classification models provided detailed “diagnoses” into “attribute-based” profiles of cognitive strength/weakness, mimicking expert clinician judgment. Classification was efficient, requiring few measures to achieve accurate classification. Attributes were associated with PANSS factors in the expected manner (only the negative and cognition factors were associated with the attributes), and a double dissociation was observed in which divergent thinking was selectively associated with negative symptoms, possibly reflecting a manifestation of Kraepelin's hypothesis regarding the impact of volitional disturbances on thought.
Using posets for extracting more precise cognitive information from neuropsychological data may reveal more valid cognitive endophenotypes, while dramatically reducing the amount of testing required.
doi:10.1093/schbul/sbj038
PMCID: PMC2632274  PMID: 16424379
schizophrenia; neurocognitive deficits; neuropsychological test domains; neuropsychological test data reduction; clustering techniques; Bayesian methods
Neurology  2006;67(6):1006-1010.
Objective
To evaluate the performance of nondemented subjects 85 years and older on the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) neuropsychological battery, and to assess its relationship with sociodemographic variables.
Methods
We studied 196 subjects enrolled in an Alzheimer’s Disease Research Center study who had a complete CERAD neuropsychological assessment. We used multiple regression analysis to predict performance on the neuropsychological tests from age, education, and sex. Eight representative hypothetical individuals were created (for example, an 87-year-old man with high education). For each test, estimates of performance at the 10th, 25th, 50th, and 75th percentiles were reported for the eight representative hypothetical individuals.
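Percentile estimates for a representative hypothetical individual can be derived from a regression prediction by assuming normally distributed residuals. The `pred` and `see` values below are placeholders, not coefficients from the paper.

```python
from statistics import NormalDist

def conditional_percentile(pred, see, p):
    """Estimated p-th percentile of a test score for a demographic
    profile whose regression-predicted score is `pred`, assuming
    normal residuals with standard error of estimate `see`."""
    return pred + NormalDist().inv_cdf(p) * see

# e.g., 10th/25th/50th/75th percentile estimates for one profile
estimates = [conditional_percentile(20.0, 4.0, p)
             for p in (0.10, 0.25, 0.50, 0.75)]
```

The 50th percentile is just the regression prediction itself; lower and upper percentiles fan out symmetrically around it under the normality assumption.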
Results
Mean age was 89.2 years (SD = 3.2), mean years of education was 14.9 (SD = 3.2), and 66% of the sample were women. For 11 of the 14 neuropsychological tests, there was a significant multiple regression model using education, age, and sex as predictors. Neither the models nor the predictors used individually were significant for Delayed Recall, Savings, or correct Recognition. Among the significant results, seven had education as the strongest predictor. Lower age and higher education were associated with better performance. Women performed better than men in three of four tests with significant results for sex.
Conclusions
In a sample of oldest old whose primary language is English, neuropsychological testing is influenced mainly by education and age. Cutoff scores based on younger populations and applied to the oldest old might lead to increased false-positive misclassifications.
doi:10.1212/01.wnl.0000237548.15734.cd
PMCID: PMC3163090  PMID: 17000969
Objectives
Hispanics are the fastest growing ethnicity in the United States, yet there are limited well-validated neuropsychological tools in Spanish, and an even greater paucity of normative standards representing this population. The Spanish NIH Toolbox Cognition Battery (NIHTB-CB) is a novel neurocognitive screener; however, the original norms were developed combining Spanish- and English-versions of the battery. We developed normative standards for the Spanish NIHTB-CB, fully adjusting for demographic variables and based entirely on a Spanish-speaking sample.
Methods
A total of 408 Spanish-speaking neurologically healthy adults (ages 18–85 years) and 496 children (ages 3–7 years) completed the NIH Toolbox norming project. We developed three types of scores: uncorrected scores based on the entire Spanish-speaking cohort, age-corrected scores, and fully demographically corrected (age, education, sex) scores for each of the seven NIHTB-CB tests and three composites (Fluid, Crystallized, and Total). Corrected scores were developed using polynomial regression models. Demographic factors demonstrated medium-to-large effects on uncorrected NIHTB-CB scores, in a pattern that differed from that observed on the English NIHTB-CB: for example, education was more strongly associated with Fluid scores among Spanish-speaking adults, but with Crystallized scores among English-speaking adults.
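The fully corrected scores can be sketched as residual-based z-scores from a least-squares fit on age, age squared, education, and sex. This is a simplified stand-in for the polynomial regression models used to build the actual norms.

```python
import numpy as np

def demographically_corrected_z(age, educ, sex, score):
    """Regress raw scores on an intercept, age, age^2, education, and
    sex, then return standardized residuals, which are uncorrelated
    with the demographic predictors by construction."""
    X = np.column_stack([np.ones_like(age), age, age**2, educ, sex])
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    resid = score - X @ beta
    return resid / resid.std(ddof=X.shape[1])
```

Checking that the corrected scores are uncorrelated with each demographic variable is exactly the validation step the paper reports for its fully corrected norms.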
Results
Demographic factors were no longer associated with the fully corrected scores. The original norms did not eliminate demographic effects on the Spanish NIHTB-CB: they overestimated children’s performances and underestimated adults’ performances.
Conclusions
The disparate pattern of demographic associations on the Spanish versus English NIHTB-CB supports the need for distinct normative standards developed separately for each population. Fully adjusted scores presented here will aid in more accurately characterizing acquired brain dysfunction among U.S. Spanish-speakers.
doi:10.1017/S135561771500137X
PMCID: PMC5107311  PMID: 26817924
Neuropsychological test; Norms; Psychometrics; Assessment; Cross-cultural; Cognition