If smoking is a risk factor for Alzheimer’s disease (AD) but a smoker dies of another cause before developing or manifesting AD, smoking-related mortality may mask the relationship between smoking and AD. This phenomenon, referred to as competing risk, complicates efforts to model the effect of smoking on AD. Typical survival regression models assume that censoring is unrelated to an individual’s probability of developing AD (i.e., that censoring is noninformative). However, if individuals who die before developing AD are younger than those who survive long enough to develop AD, and if they include a higher percentage of smokers than nonsmokers, the incidence of AD will appear to be higher in older individuals and in nonsmokers. Further, age-specific mortality rates are higher in smokers because they die earlier than nonsmokers. Therefore, if we fail to take into account the competing risk of death when we estimate the effect of smoking on AD, we bias the results and are really only comparing the incidence of AD in nonsmokers with that in the healthiest smokers. In this study, we demonstrate that the effect of smoking on AD differs in models that are and are not adjusted for competing risks.
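The bias can be illustrated with a small simulation (the hazard rates below are hypothetical, not estimates from this study): treating death as noninformative censoring recovers the latent risk of AD in a world where no one dies first, which overstates the cumulative incidence of AD that can actually be observed in the presence of death.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Hypothetical constant hazards (per year) for AD onset and for death
t_ad = rng.exponential(1 / 0.02, n)     # latent time to AD
t_death = rng.exponential(1 / 0.05, n)  # latent time to death
t_first = np.minimum(t_ad, t_death)
ad_first = t_ad < t_death
horizon = 10.0

# Naive analysis: death treated as noninformative censoring; Kaplan-Meier
# then targets the latent risk P(AD by year 10), as if no one could die first
naive_risk = np.mean(t_ad <= horizon)

# Competing-risk analysis: cumulative incidence of AD, i.e. developing AD
# by year 10 *before* dying, which is what is actually observable
cum_incidence = np.mean((t_first <= horizon) & ad_first)

print(round(naive_risk, 3), round(cum_incidence, 3))  # naive estimate is larger
```

Under these hypothetical rates, the censoring-based estimate exceeds the competing-risk cumulative incidence, mirroring the bias described above.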
Alzheimer disease; competing risks; elderly; mortality; smoking
Subjective cognitive complaints (SCCs) are increasingly a focus in studies of prodromal Alzheimer disease (AD) and risk for dementia. Little is known about the optimal approach to measure SCCs. We used item response theory (IRT) to examine characteristics of 24 SCC items in a sample of 3,495 older adults pooled from four community-based studies. We investigated the potential advantages of IRT scoring over conventional scoring, based on participants' item response patterns. Items most likely endorsed by individuals low in SCC severity relate to word retrieval and general subjective memory decline. Items likely endorsed only by individuals high in SCC severity relate to non-episodic memory changes, such as decline in comprehension, judgment and executive functions, praxis and procedural memory, and social behavior changes. IRT scoring of SCCs was associated with objective cognitive test performance above and beyond total SCC scores, and was associated with objective cognitive test performance among participants endorsing only one SCC item. Thus, IRT scoring captures additional information beyond a simple sum of SCC symptoms. Modern psychometric approaches including IRT may be useful in developing 1) brief community screening questionnaires, and 2) more sensitive measures of very subtle subjective decline for use in prodromal AD research.
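Why IRT scoring can outperform a simple symptom count is easier to see from the two-parameter logistic (2PL) model on which such scoring is commonly based: each item carries its own severity threshold, so *which* item is endorsed conveys information that a sum score discards. A minimal sketch with hypothetical item parameters (not estimates from this study):

```python
import math

def item_response_prob(theta, discrimination, difficulty):
    """2PL IRT model: probability of endorsing an item given latent
    SCC severity theta; difficulty is the item's severity threshold."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# Hypothetical parameters: a word-retrieval complaint is endorsed even at
# low severity (low difficulty); an executive-change complaint is endorsed
# only at high severity (high difficulty)
p_word_retrieval = item_response_prob(0.0, 1.5, -1.0)
p_executive = item_response_prob(0.0, 1.5, 2.0)
# A respondent of average severity is far more likely to endorse the first,
# so endorsing only the executive item implies higher latent severity
```

Two respondents each endorsing one item would receive identical sum scores, but different IRT severity estimates.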
Subjective memory; item response theory; dementia; neuropsychological tests; subjective cognitive impairment
A key component of successful aging in old age is the ability to independently perform instrumental activities of daily living (IADLs). We examined the ability to perform multiple IADL tasks in relation to mild cognitive impairment (MCI) defined on purely neuropsychological grounds.
Population-based cohort in Southwestern Pennsylvania.
1,737 community-dwelling adults aged 65 years and older.
Classification of MCI based on performance with reference to norms in the cognitive domains of memory, language, attention, executive and visuospatial function. The ability to perform seven IADL tasks (travel, shopping, meal preparation, housework, taking medications, handling personal finances, and telephone use) as assessed by the Older Americans Resources and Services (OARS) scale.
Those with cognitively defined MCI were more likely to be dependent in at least one IADL task, and in each individual IADL task, than cognitively normal participants. Better memory and executive functioning were associated with lower odds of IADL dependence in MCI. Across the subtypes of MCI, those with the multiple-domain amnestic subtype were the most likely to be dependent in all IADL tasks, with better executive functioning associated with lower risk of dependence in select IADL tasks in this group.
Mild impairment in cognition is associated with difficulty performing IADL tasks at the population level. Understanding these associations may help improve prediction of the outcomes of MCI. It may also allow appropriate targeting of cognitive interventions in MCI to potentially help preserve functional independence.
cognition; mild cognitive impairment; everyday functioning; instrumental activities of daily living; epidemiology; community; population
Concern regarding wide variations in spending and intensive care unit use for patients at the end of life hinges on the assumption that such treatment offers little or no survival benefit.
To explore the relationship between hospital “end-of-life” (EOL) treatment intensity and postadmission survival.
Retrospective cohort analysis of Pennsylvania Health Care Cost Containment Council discharge data April 2001 to March 2005 linked to vital statistics data through September 2005 using hospital-level correlation, admission-level marginal structural logistic regression, and pooled logistic regression to approximate a Cox survival model.
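Pooled logistic regression approximates a Cox model by expanding each admission's follow-up into short discrete person-period rows, with the event indicator set only in the final interval. A minimal sketch of that data expansion (the 30-day interval width is an illustrative assumption, not a detail taken from this study):

```python
def person_period_rows(follow_up_days, died, interval=30):
    """Expand one admission into discrete person-period rows; fitting
    logistic regression to rows like these approximates a Cox model
    when intervals are short relative to the event rate."""
    rows, t = [], 0
    while t < follow_up_days:
        end = min(t + interval, follow_up_days)
        rows.append({"start": t, "stop": end,
                     "event": int(died and end == follow_up_days)})
        t = end
    return rows

# An admission followed 75 days ending in death yields three rows:
# (0,30], (30,60], (60,75]; only the last row carries the event
rows = person_period_rows(75, died=True)
```

Time-varying covariates (such as elapsed time since admission) can then be attached to each row before fitting.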
A total of 1,021,909 patients ≥65 years old, incurring 2,216,815 admissions in 169 Pennsylvania acute care hospitals.
EOL treatment intensity (a summed index of standardized intensive care unit and life-sustaining treatment use among patients with a high predicted probability of dying [PPD] at admission) and 30- and 180-day postadmission mortality.
There was a nonlinear negative relationship between hospital EOL treatment intensity and 30-day mortality among all admissions, although patients with higher PPD derived the greatest benefit. Compared with admission at an average intensity hospital, admission to a hospital 1 standard deviation below versus 1 standard deviation above average intensity resulted in an adjusted odds ratio of mortality for admissions at low PPD of 1.06 (1.04–1.08) versus 0.97 (0.96–0.99); average PPD: 1.06 (1.04–1.09) versus 0.97 (0.96–0.99); and high PPD: 1.09 (1.07–1.11) versus 0.97 (0.95–0.99), respectively. By 180 days, the benefits of intensity attenuated (low PPD: 1.03 [1.01–1.04] vs. 1.00 [0.98–1.01]; average PPD: 1.03 [1.02–1.05] vs. 1.00 [0.98–1.01]; and high PPD: 1.06 [1.04–1.09] vs. 1.00 [0.98–1.02]).
Admission to higher EOL treatment intensity hospitals is associated with small gains in postadmission survival. The marginal returns to intensity diminish for admission to hospitals above average EOL treatment intensity and wane with time.
intensive care; terminal care; mortality; hospitals; efficiency; quality
Over 4 billion people worldwide are exposed to dietary aflatoxins, which cause liver cancer (hepatocellular carcinoma, HCC) in humans. However, the population attributable risk (PAR) of aflatoxin-related HCC remains unclear.
In our systematic review and meta-analysis of epidemiological studies, summary odds ratios (ORs) of aflatoxin-related HCC with 95% confidence intervals were calculated in HBV+ and HBV− individuals, as well as the general population. We calculated the PAR of aflatoxin-related HCC for each study as well as the combined studies, accounting for HBV status.
17 studies with 1680 HCC cases and 3052 controls were identified from 479 articles. All eligible studies were conducted in China, Taiwan, or sub-Saharan Africa. The PAR of aflatoxin-related HCC was estimated at 17% (14–19%) overall, and was higher in HBV+ (21%) than HBV− (8.8%) populations. If the one study that contributed most to heterogeneity in the analysis is excluded, the summary OR of HCC with 95% CI is 73.0 (36.0–148.3) from the combined effects of aflatoxin and HBV, 11.3 (6.75–18.9) from HBV only, and 6.37 (3.74–10.86) from aflatoxin only. The PAR of aflatoxin-related HCC increases to 23% (21–24%). The PAR has decreased over time in certain Taiwanese and Chinese populations.
In high exposure areas, aflatoxin multiplicatively interacts with HBV to induce HCC; reducing aflatoxin exposure to non-detectable levels could reduce HCC cases in high-risk areas by about 23%. The decreasing PAR of aflatoxin-related HCC reflects the benefits of public health interventions to reduce aflatoxin and HBV.
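A quick arithmetic check shows why the joint effect is described as multiplicative: under a multiplicative model, the joint OR should approximate the product of the two single-exposure ORs, and the figures reported above bear this out.

```python
# ORs reported above (with the high-heterogeneity study excluded)
or_hbv_only = 11.3
or_aflatoxin_only = 6.37
or_joint_observed = 73.0

# Under a multiplicative joint-effect model:
or_joint_multiplicative = or_hbv_only * or_aflatoxin_only  # ≈ 72.0
# The observed joint OR (73.0) lies close to the multiplicative prediction,
# consistent with aflatoxin and HBV interacting multiplicatively on HCC risk
```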
Aflatoxin; hepatocellular carcinoma; hepatitis B virus; population attributable risk; systematic review; meta-analysis
Long-term medication therapy for post myocardial infarction (MI) patients can prolong life. However, recent data on long-term adherence are limited, particularly among some subpopulations. We compared medication adherence among Medicare MI survivors, by disability status, race/ethnicity, and income.
We examined 100% of Medicare fee-for-service beneficiaries discharged post-MI in 2008. The outcomes were adherence to β-blockers, statins, and angiotensin-converting enzyme inhibitors (ACEIs)/angiotensin II receptor blockers (ARBs), for 1-year and 6-month post-discharge. Adherence was defined as having prescriptions in possession for ≥75% of days.
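The ≥75%-of-days criterion is a proportion-of-days-covered (PDC) style calculation from pharmacy refill records. A minimal sketch (the fill dates are hypothetical; overlapping early refills are handled only by de-duplicating covered days):

```python
from datetime import date, timedelta

def proportion_days_covered(fills, start, end):
    """Fraction of days in [start, end] with medication in possession.
    fills: iterable of (fill_date, days_supply) pairs."""
    covered = set()
    for fill_date, days_supply in fills:
        for k in range(days_supply):
            day = fill_date + timedelta(days=k)
            if start <= day <= end:
                covered.add(day)
    total_days = (end - start).days + 1
    return len(covered) / total_days

# Hypothetical 91-day window with a 5-day gap between two 30-day fills
fills = [(date(2008, 1, 1), 30), (date(2008, 2, 5), 30)]
pdc = proportion_days_covered(fills, date(2008, 1, 1), date(2008, 3, 31))
adherent = pdc >= 0.75  # the adherence threshold used in this study
```

Here 60 of 91 window days are covered, so this hypothetical patient falls below the 75% threshold.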
Among aged beneficiaries who survived 1 year, 1-year adherence to β-blockers was 68%, 66%, 61%, 58%, and 57% for Whites, Asians, Hispanics, Native Americans, and Blacks, respectively; among the disabled, 1-year adherence was worse for each group: 59%, 54%, 52%, 47%, and 43%, respectively. The racial/ethnic difference persisted after adjustment for age, gender, income, drug coverage, location, and health status. Patterns of adherence to statins and ACEIs/ARBs were similar. Among beneficiaries with close-to-full drug coverage, minorities were still less likely to adhere relative to Whites, OR=0.70 (95% CI 0.65-0.75) for Blacks and OR=0.70 (95% CI 0.55-0.90) for Native Americans.
Although prescribing of β-blockers at discharge has improved since the National Committee for Quality Assurance implemented quality measures, long-term adherence remains problematic, especially among disabled and minority beneficiaries. Quality measures for long-term adherence should be created to improve outcomes in post-MI patients. Even among those with close-to-full drug coverage, racial differences remain, suggesting that policies relying simply on cost reduction cannot eliminate racial differences.
Approximately 15% of HIV-infected individuals have comorbid diabetes. Studies suggest that HIV and diabetes have an additive effect on chronic kidney disease (CKD) progression; however, this observation may be confounded by differences in traditional CKD risk factors.
We studied a national cohort of HIV-infected and matched HIV-uninfected individuals who received care through the Veterans Healthcare Administration. Subjects were divided into four groups based on baseline HIV and diabetes status, and the rate of progression to an estimated glomerular filtration rate (eGFR) < 45ml/min/1.73m2 was compared using Cox-proportional hazards modeling to adjust for CKD risk factors.
31,072 veterans with baseline eGFR ≥ 45ml/min/1.73m2 (10,626 with HIV only, 5,088 with diabetes only, and 1,796 with both) were followed for a median of 5 years. Mean baseline eGFR was 94ml/min/1.73m2, and 7% progressed to an eGFR < 45ml/min/1.73m2. Compared to those without HIV or diabetes, the relative rate of progression was increased in individuals with diabetes only [adjusted hazard ratio (HR) 2.48; 95% confidence interval (CI) 2.19–2.80], HIV only [HR 2.80, 95% CI 2.50–3.15], and both HIV and diabetes [HR 4.47, 95% CI 3.87–5.17].
Compared to patients with only HIV or diabetes, patients with both diagnoses are at significantly increased risk of progressive CKD even after adjusting for traditional CKD risk factors. Future studies should evaluate the relative contribution of complex comorbidities and accompanying polypharmacy to the risk of CKD in HIV-infected individuals, and prospectively investigate the use of cART, glycemic control, and adjunctive therapy to delay CKD progression.
non-AIDS complications; HIV; chronic kidney disease; diabetes; risk factors
Population-based studies face challenges in measuring brain structure relative to cognitive aging. We examined the feasibility of acquiring state-of-the-art brain MRI images at a community hospital, and attempted to cross-validate two independent approaches to image analysis.
Participants were 49 older adults (29 cognitively normal and 20 with mild cognitive impairment, MCI) drawn from an ongoing cohort study, with annual clinical assessments within one month of scan, without overt cerebrovascular disease, and without dementia (Clinical Dementia Rating (CDR) <1). Brain MRI images, acquired at the local hospital using the Alzheimer's Disease Neuroimaging Initiative protocol, were analyzed using (1) a visual atrophy rating scale and (2) a semi-automated voxel-level morphometric method. Atrophy and volume measures were examined in relation to cognitive classification (any MCI and amnestic MCI vs. normal cognition), CDR (0.5 vs. 0), and presumed etiology.
Measures indicating greater atrophy or smaller volume of the hippocampal formation and medial temporal lobe, and greater dilation of the ventricular space, were significantly associated with cognitive classification, CDR=0.5, and presumed neurodegenerative etiology, independent of the image analytic method. Statistically significant correlations were also found between the visual ratings of medial temporal lobe atrophy and the semiautomated ratings of brain structural integrity.
High quality MRI data can be acquired and analyzed from older adults in population studies, enhancing their capacity to examine imaging biomarkers in relation to cognitive aging and dementia.
Transplantation is often the only viable treatment for pediatric patients with end-stage liver disease. Making well-informed decisions on when to proceed with transplantation requires accurate predictors of transplant survival. The standard Cox proportional hazards (PH) model assumes that covariate effects on right-censored failure time are time-invariant; however, this assumption may not always hold. Gray's piecewise constant time-varying coefficients (PC-TVC) model offers greater flexibility to capture temporal changes in covariate effects without losing the mathematical simplicity of the Cox PH model. In the present work, we examined the Cox PH and Gray PC-TVC models on the posttransplant survival analysis of 288 pediatric liver transplant patients diagnosed with cancer. We obtained potential predictors through univariable (P < 0.15) and multivariable models with forward selection (P < 0.05); the selected predictors coincided for the Cox PH and Gray PC-TVC models. While the Cox PH model provided reasonable average results in estimating covariate effects on posttransplant survival, the Gray model using piecewise constant penalized splines showed more details of how those effects change over time.
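A piecewise constant time-varying coefficient model can be fit with standard Cox software after splitting each patient's follow-up at the coefficient change points, so that a separate coefficient applies within each interval. A minimal sketch of that data expansion (the change points here are hypothetical, not those of the study):

```python
def split_at_changepoints(start, stop, event, cuts):
    """Split one subject's (start, stop] follow-up at piecewise-constant
    change points; the event is attributed only to the final episode."""
    episodes = []
    bounds = [start] + [c for c in cuts if start < c < stop] + [stop]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        episodes.append((lo, hi, event and hi == stop))
    return episodes

# A subject followed 5 years with an event, split at hypothetical
# change points of 1 and 3 years posttransplant
eps = split_at_changepoints(0.0, 5.0, True, [1.0, 3.0])
```

Interacting a covariate with interval indicators in the expanded data then yields one coefficient per interval.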
We examine the impact of menopausal status, beyond menopausal symptoms, on health-related quality of life (HRQoL).
Seven hundred thirty-two women aged 40–65, regardless of health condition or menopausal status, were enrolled from a single general internal medicine practice. Women completed annual questionnaires covering HRQoL, menopausal status, and menopausal symptoms.
The physical health composite of the RAND-36 is lower in late peri (45.6, P<.05), early post (45.4, P<.05), and late postmenopausal women (44.6, P<.01), and in those who report a hysterectomy (44.2, P<.01), compared to premenopausal women (47.1), with effect sizes of Cohen’s d = .12–.23. The mental health composite of the RAND-36 is lower in late peri (44.7, P<.01), early post (44.9, P<.01), and late postmenopausal women (45.0, P<.05), and in those who report a hysterectomy (44.2, P<.01), compared to premenopausal women (46.8), with effect sizes of Cohen’s d = .15–.20. Findings are comparable after adjustment for menopausal symptom frequency and bother.
Over a 5-year follow-up period, we found a negative impact of menopause on some domains of HRQoL, regardless of menopausal symptoms. Clinicians should be aware of this relationship and work to improve HRQoL, rather than expect it to improve spontaneously when menopausal symptoms resolve.
Menopause; Health-related quality of life; Hot flashes; Vaginal dryness; Women’s health
Although end-stage kidney disease (ESKD) in African Americans (AAs) is four times greater than in Whites, AAs are less than half as likely to undergo kidney transplantation (KT). This racial disparity has been found even after controlling for clinical factors such as co-morbid conditions, dialysis vintage and type, and availability of potential living donors. Therefore, studying non-medical factors is critical to understanding disparities in KT.
Design, Setting, and Participants
We conducted a longitudinal cohort study with 127 AA and White patients with ESKD undergoing evaluation for KT (12/06 – 7/07) to determine whether, after controlling for medical factors, differences in time to acceptance for transplant are explained by patients’ cultural factors (e.g., perceived racism and discrimination, medical mistrust, religious objections to living donor KT), psychosocial characteristics (e.g., social support, anxiety, depression), or transplant knowledge. Participants completed 2 telephone interviews (shortly after initiation of transplant evaluation and after being accepted or found ineligible for transplant).
Results indicated that AA patients reported higher levels of the cultural factors than did Whites. We found no differences in co-morbidity or availability of potential living donors. AAs took significantly longer to get accepted for transplant than did Whites (HR=1.49, p=0.005). After adjustment for demographic, psychosocial, and cultural factors, the association of race with longer time for listing was no longer significant.
We suggest that interventions to address racial disparities in KT incorporate key non-medical risk factors in patients.
kidney transplantation; disparities; discrimination
Delay in seeking care for sexually transmitted diseases (STDs) has adverse consequences for both the individual and population. We sought to identify factors associated with delay in seeking care for STDs.
Subjects included 300 young men and women (aged 15-24) attending an urban STD clinic for a new STD-related problem due to symptoms or referral for an STD screening. Subjects completed a structured interview that evaluated STD history, attitudes and beliefs about STDs, depression, substance use, and other factors possibly associated with delay. Delay was defined as waiting > 7 days to seek and obtain care for STDs.
Nearly one-third of participants delayed seeking care for > 7 days. Significant predictors for delay included self-referral for symptoms as the reason for visit (OR 5.3, 95% CI: 2.58 – 10.98), and the beliefs “my partner would blame me if I had an STD” (OR 2.44, 95% CI: 1.30 – 4.60) and “it’s hard to find time to get checked for STDs” (OR 3.62, 95% CI: 1.95 – 6.69), after adjusting for age, race, sex, and other factors. Agreeing with the statement “would use a STD test at home if one were available” was associated with a decrease in delay (OR 0.24, 95% CI: 0.09 – 0.60).
Many young persons delay seeking care for STDs for a number of reasons. Strategies to improve STD care-seeking include encouragement of symptomatic persons to seek medical care more rapidly, reduction of social stigmas, and improved access to testing options.
Delay; healthcare-seeking behavior; men; sexually transmitted diseases; symptoms; women.
We aimed to determine adherence, virological, and immunological outcomes one year after starting a first combination antiretroviral therapy (ART) regimen.
Observational; synthesis of administrative, laboratory, and pharmacy data. Antiretroviral regimens were divided into efavirenz, nevirapine, boosted protease inhibitor (PI), and single PI categories. Propensity scores were used to control for confounding by treatment assignment. Adherence was estimated from pharmacy refill records.
Veterans Affairs Healthcare System, all sites.
HIV-infected individuals starting combination ART with a low likelihood of previous antiretroviral exposure.
The proportion of antiretroviral prescriptions filled as prescribed, a change in log HIV-RNA, the proportion with log HIV-RNA viral suppression, a change in CD4 cell count.
A total of 6394 individuals unlikely to have previous antiretroviral exposure started combination ART between 1996 and 2004, and were eligible for analysis. Adherence overall was low (63% of prescriptions filled as prescribed), and adherence with efavirenz (67%) and nevirapine (65%) regimens was significantly greater than adherence with boosted PI (59%) or single PI (61%) regimens (P < 0.001). Efavirenz regimens were more likely to suppress HIV-RNA at one year (74%) compared with nevirapine (62%), boosted PI (63%), or single PI (53%) regimens (all P < 0.001), and this superiority was maintained when analyses were adjusted for baseline clinical characteristics and propensity for treatment assignment. Efavirenz also yielded more favorable immunological outcomes.
HIV-infected individuals initiating their first combination ART using an efavirenz-based regimen had improved virological and immunological outcomes and greater adherence levels.
Adherence; resistance; ART; Veterans Affairs Healthcare System
To track cognitive change over time in dementia-free older adults and to examine terminal cognitive decline.
A total of 1,230 subjects who remained free from dementia over 14 years of follow-up were included in a population-based epidemiologic cohort study. First, we compared survivors and decedents on their trajectories of 5 cognitive functions (learning, memory, language, psychomotor speed, executive functions), dissociating practice effects, which can mask clinically significant decline, from age-associated cognitive decline. We used longitudinal mixed-effects models with a penalized linear spline. Second, limiting the sample to 613 subjects who died during follow-up, we identified the inflection points at which the rate of cognitive decline accelerated, in relation to time of death, controlling for practice effects. We used a mixed-effects model with a change point.
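The change-point specification models the expected score as linear in time before an inflection point, with an additional slope applying afterward. A sketch with hypothetical coefficients (time coded as negative years before death; values are illustrative, not estimates from the study):

```python
def changepoint_trajectory(t, intercept, slope_pre, slope_added, knot):
    """Expected cognitive score at time t (negative years before death):
    linear decline before the change point, with an extra slope added after."""
    return intercept + slope_pre * t + slope_added * max(t - knot, 0.0)

# Hypothetical coefficients: mild age-associated decline, then steeper
# terminal decline beginning 9 years before death (knot = -9)
score_12y_before = changepoint_trajectory(-12.0, 50.0, -0.2, -1.0, -9.0)
score_2y_before = changepoint_trajectory(-2.0, 50.0, -0.2, -1.0, -9.0)
# Decline accelerates after the change point, so the later score is lower
```

In the full model the intercept, slopes, and practice-effect terms would carry subject-level random effects; this sketch shows only the fixed-effect mean trajectory.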
Age-associated cognitive trajectories were similar between decedents and survivors without dementia. However, substantial differences were observed between the trajectories of practice effects of survivors and decedents, resembling those usually observed between normal and mildly cognitively impaired elderly. Executive and language functions showed the earliest terminal declines, more than 9 years prior to death, independent of practice effects.
Terminal cognitive decline in older adults without dementia may reflect presymptomatic disease which does not cross the clinical threshold during life. Alternatively, cognitive decline attributed to normal aging may itself represent underlying neurodegenerative or vascular pathology. Although we cannot conclude definitively from this study, the separation of practice effects from age-associated decline could help identify preclinical dementia. Neurology® 2011;77:722–730
To understand the association between sense of purpose in life and sexual well-being in a cohort of midlife women.
Participation in partnered sexual activities and indicators of sexual well-being (the engagement in, and enjoyment of, sexually intimate activities) were measured in a longitudinal cohort of 677 eligible women aged 40–65. At a single time point, women completed the Life Engagement Test (LET), a measure of life purpose. Univariable and multivariable mixed models were used to assess the association between LET and longitudinal sexual well-being.
A higher sense of purpose in life was associated with higher levels of enjoyment (coefficient=2.89, p<0.001) but not with participation in partnered sexual activity (coefficient=0.49, p=0.63) or engagement in partnered sexually intimate activities (coefficient=1.0, p=0.30). Participation was associated with younger age, lower body mass index, being married, reporting any vaginal dryness, and better emotional well-being. Hormone therapy (HT) use approached, but did not reach, significance in association with participation (p=0.05). Engagement in sexually intimate activities was associated with younger age, more social support, and better emotional well-being. Higher levels of enjoyment were associated with more social support, better emotional well-being, and less vaginal dryness. Menopausal status was not associated with engagement or enjoyment, and only being 5 years or more post-menopausal was related to decreased participation.
Higher sense of purpose in life is associated with more enjoyment of sexually intimate activities, adjusting for other known factors that influence sexual well-being, and independent of demographic factors and menopausal or HT status.
menopause; female sexual dysfunction; life purpose; psychosocial factors; women’s health
Whether hepatitis C virus (HCV) confers additional coronary heart disease (CHD) risk among Human Immunodeficiency Virus (HIV)-infected individuals is unclear. Without appropriate adjustment for antiretroviral therapy, CD4 count, and HIV-1 RNA, and without accounting for the substantially different mortality rates among those with and without HIV and HCV infection, the association between HIV, HCV, and CHD may be obscured.
Methods and Results
We analyzed data on 8579 participants (28% HIV+, 9% HIV+HCV+) from the Veterans Aging Cohort Study Virtual Cohort who participated in the 1999 Large Health Study of Veteran Enrollees, including HIV and HCV status, risk factors for and the incidence of CHD, and mortality from 1/2000–7/2007. We compared models to assess CHD risk when death was treated as a censoring event and as a competing risk. During the median 7.3 years of follow-up, there were 194 CHD events and 1186 deaths. Compared with HIV−HCV− Veterans, HIV+HCV+ Veterans had a significantly higher risk of CHD regardless of whether death was adjusted for as a censoring event (adjusted hazard ratio (HR)=2.03, 95% CI=1.28–3.21) or a competing risk (adjusted HR=2.45, 95% CI=1.83–3.27). Compared with HIV+HCV− Veterans, HIV+HCV+ Veterans also had a significantly higher adjusted risk of CHD regardless of whether death was treated as a censoring event (adjusted HR=1.93, 95% CI=1.02–3.62) or a competing risk (adjusted HR=1.46, 95% CI=1.03–2.07).
HIV+HCV+ Veterans have an increased risk of CHD compared to HIV+HCV−, and HIV−HCV− Veterans.
viruses; coronary disease; mortality; multi morbidity
Rationale: Studies demonstrating an association between chronic obstructive pulmonary disease and low bone mineral density (BMD) implicate factors distinct from treatments and severity of lung disease in the pathogenesis of osteoporosis. Whereas emphysema has been independently associated with vascular disease and other comorbidities, its association with BMD has not been well studied.
Objectives: We explored the associations of BMD with computed tomography (CT) measures of emphysema and other risk factors in current and former smokers.
Methods: One hundred ninety subjects completed a CT scan, pulmonary function testing, questionnaires, and dual x-ray absorptiometry measurements of hip and lumbar spine BMD. Subjects were classified as having normal BMD, osteopenia, or osteoporosis. Demographic, physiologic, and radiographic characteristics were compared and the association of BMD with radiographic emphysema, airflow obstruction, and osteoporosis risk factors was assessed.
Measurements and Main Results: No difference existed in age, tobacco exposure, oral steroid use, or physical activity across BMD categories. Both osteopenia and osteoporosis were associated with the presence of airflow obstruction, inhaled corticosteroid use, and female sex, and demonstrated a significant relationship with the presence of visual emphysema (P = 0.0003). Quantitative emphysema, but not CT-measured indices of airway wall thickness, was inversely associated with BMD. Visual emphysema alone was a significant predictor of osteopenia/osteoporosis (odds ratio = 2.55; 95% confidence interval, 1.24–5.25) in a model including obstruction severity, age, sex, and inhaled and oral steroid use.
Conclusions: Radiographic emphysema is a strong, independent predictor of low BMD in current and former smokers. This relationship suggests a common mechanistic link between emphysema and osteopenia/osteoporosis.
pulmonary disease, chronic obstructive; emphysema; osteoporosis
Although Alzheimer's disease (AD) is a neurodegenerative disorder, there is growing interest in the influence of vascular factors on its incidence.
In a population-based longitudinal epidemiological study, we fit Cox proportional hazard models to examine the risk of incident dementia and AD associated with self-reported vascular disease. The population-attributable risk percent (percent of the incidence of dementia and AD in the population that would be eliminated if vascular disease was eliminated) was calculated using the adjusted hazard ratios (HR).
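A population-attributable risk of this form is conventionally computed with Levin's formula from the exposure prevalence and the relative risk (here, the adjusted HR). A minimal sketch (the 7.6% stroke/TIA prevalence below is a hypothetical value chosen only to illustrate the arithmetic with the reported HR of 2.6; it is not a figure from this study):

```python
def levin_par(exposure_prevalence, relative_risk):
    """Levin's population-attributable risk fraction: the share of
    incidence that would be removed if the exposure were eliminated."""
    excess = exposure_prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical prevalence of 7.6% combined with the reported HR of 2.6
par_dementia = levin_par(0.076, 2.6)  # ≈ 0.108, i.e. roughly 10.8%
```

The fraction rises with either the exposure prevalence or the strength of the association, which is why even a modest prevalence can yield a double-digit PAR% when the HR is large.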
Of 822 eligible participants, 94 individuals developed incident dementia, with 79 having AD (probable/possible AD) during the follow-up period of on average 8 years. Stroke/transient ischemic attack history was associated with incident dementia (HR = 2.6) as well as AD (HR = 2.4) among non-apolipoprotein E ε4 carriers.
At the community level, the risk of dementia could be potentially reduced by 10.8% by eliminating overt cerebrovascular disease, and the risk of AD by 9.1% for non-apolipoprotein E ε4 carriers.
Population-Attributable Risk % (PAR%); Epidemiology; Vascular disease; AD; APOEε4
In May 2009, one of the earliest outbreaks of 2009 pandemic influenza A virus (pH1N1) infection resulted in the closure of a semi-rural Pennsylvania elementary school. Two sequential telephone surveys were administered to 1345 students (85% of the students enrolled in the school) and household members in 313 households to collect data on influenza-like illness (ILI). A total of 167 persons (12.4%) among those in the surveyed households, including 93 (24.0%) of the School A students, reported ILI. Students were 3.1 times more likely than were other household members to develop ILI (95% confidence interval [CI], 2.3–4.1). Fourth-grade students were more likely to be affected than were students in other grades (relative risk, 2.2; 95% CI, 1.2–3.9). pH1N1 was confirmed in 26 (72.2%) of the individuals tested by real-time reverse-transcriptase polymerase chain reaction. The outbreak did not resume upon the reopening of the school after the 7-day closure. This investigation found that pH1N1 outbreaks at schools can have substantial attack rates; however, grades and classrooms are affected variably. Additional study is warranted to determine the effectiveness of school closure during outbreaks.
In the community at large, many older adults with minimal cognitive and functional impairment remain stable or improve over time, unlike patients in clinical research settings, who typically progress to dementia. Within a prospective population-based study, we identified neuropsychological tests predicting improvement or worsening over one year in cognitively-driven everyday functioning as measured by Clinical Dementia Rating (CDR). Participants were 1682 adults aged 65+ and dementia-free at baseline. CDR change was modeled as a function of baseline test scores, adjusting for demographics. Among those with baseline CDR=0.5, 29.8% improved to CDR=0; they had significantly better baseline scores on most tests. In a stepwise multiple logistic regression model, tests which remained independently associated with subsequent CDR improvement were Category Fluency, a modified Token Test, and the sum of learning trials on Object Memory Evaluation. In contrast, only 7.1% with baseline CDR=0 worsened to CDR=0.5. They had significantly lower baseline scores on most tests. In multiple regression analyses, only the Mini-Mental State Exam, delayed memory for visual reproduction, and recall susceptible to proactive interference, were independently associated with CDR worsening. At the population level, changes in both directions are observable in functional status, with different neuropsychological measures predicting the direction of change.
Epidemiology; community; aging; Clinical Dementia Rating; cognition; prediction
To examine older adults' beliefs about osteoporosis and osteoporosis screening to identify barriers to screening.
Cross-sectional mailed survey.
Surveys were mailed to 1830 women and men aged 60 years and older. The survey assessed sociodemographic characteristics, osteoporosis and general health-related characteristics, and beliefs about osteoporosis severity, susceptibility, screening self-efficacy, and screening response efficacy. Analyses included Wilcoxon rank-sum tests to compare belief dimension scores, and multivariable ordinal logistic regression analyses to evaluate associations between osteoporosis beliefs and potential explanatory variables.
Surveys were completed by 1268 individuals (69.3 per cent). Mean age of respondents was 73.3 years, and most were female (58.7 per cent). Individuals demonstrated greatest belief in the severity of osteoporosis and least belief in personal susceptibility (P <.001). Older individuals believed less strongly than younger individuals in osteoporosis severity (OR, 0.95 per 1-year increase in age; 95 per cent CI, 0.92-0.97) and response efficacy (OR, 0.97 per 1-year increase in age; 95 per cent CI, 0.95-0.99). Women believed more strongly than men in osteoporosis susceptibility (OR, 1.87; 95 per cent CI, 1.38-2.53) and screening self-efficacy (OR, 2.87; 95 per cent CI, 1.17-7.07). Individuals with high self-rated health status had greater belief than those with low self-rated health status in screening self-efficacy (OR, 3.59; 95 per cent CI, 1.89-6.83).
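Per-year odds ratios such as those above compound multiplicatively over larger age differences. A quick illustrative sketch using only the reported point estimates (confidence intervals ignored):

```python
# Odds ratios reported per 1-year increase in age compound multiplicatively,
# so a 10-year age difference implies OR ** 10 (point estimates only).
or_per_year = {
    "severity": 0.95,
    "response efficacy": 0.97,
}

or_per_decade = {belief: or_1yr ** 10 for belief, or_1yr in or_per_year.items()}
for belief, or_10yr in or_per_decade.items():
    print(f"{belief}: OR per 10 years of age = {or_10yr:.2f}")
```

So a seemingly modest OR of 0.95 per year corresponds to roughly 40% lower odds of strong severity beliefs across a 10-year age gap, which is why per-unit odds ratios near 1 can still matter.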
Older adults demonstrate several beliefs that may be barriers to osteoporosis screening, including low belief in susceptibility to osteoporosis. These beliefs should be targeted with patient education to improve screening rates.
geriatrics; patient education; osteoporosis; screening; survey research
There is increasing interest in identifying novel cognitive paradigms to help detect preclinical dementia. Promising results have been found in clinical settings using the Semantic Interference Test (SIT), a modification of an existing episodic memory test (Fuld Object Memory Evaluation) that exploits vulnerability to semantic interference in Alzheimer’s disease. It is not yet known how broadly this work will generalize to the community at large.
Participants aged ≥65 years from the Monongahela-Youghiogheny Healthy Aging Team (MYHAT) were administered the SIT at study entry. Independent of neuropsychological assessment, participants were rated on the Clinical Dementia Rating (CDR) scale, based on reported loss of cognitively-driven everyday functioning. In individuals free of dementia (CDR <1), the concurrent validity of the SIT was assessed by determining its association with CDR using multiple logistic regression models, with CDR 0 (no dementia) vs. 0.5 (possible dementia) as the outcome and the SIT test variables as predictors.
Poorer performance on all SIT variables but one was associated with higher CDR reflecting possible dementia (Odds Ratios 2.24 to 4.79). Younger age and female gender also conferred a performance advantage. Years of education and reading ability (a proxy for quality of education) evidenced a very weak association with SIT performance.
The SIT shows promise as a valid, novel measure to identify early preclinical dementia in a community setting. It has potential utility for assessment of persons who may be illiterate or of low education. Finally, we provide normative SIT data stratified by age which may be utilized by clinicians or researchers in future investigations.
neuropsychological tests; norms; cognitive aging; Semantic Interference Test; Alzheimer’s disease
To estimate and compare the frequency and prevalence of mild cognitive impairment (MCI) and related entities using different classification approaches at the population level.
Cross-sectional epidemiologic study of population-based cohort recruited by age-stratified random sampling from electoral rolls.
Small-town communities in western Pennsylvania, USA.
Of 2036 individuals aged 65 years and older, 1982 participants with normal or mildly impaired cognition (age-education-corrected Mini-Mental State scores ≥ 21).
Demographics, neuropsychological assessment expressed as cognitive domains, functional ability, subjective reports of cognitive difficulties; based on these measurements, operational criteria for the Clinical Dementia Rating (CDR) scale, the 1999 criteria for Amnestic MCI, the 2004 Expanded criteria for MCI, and new, purely cognitive criteria for MCI.
A CDR rating of 0.5 (questionable dementia) was obtained by 27.6% of participants, while 1.2% had CDR ≥ 1 (mild or moderate dementia). Among those with CDR <1, 2.27% had Amnestic MCI and 17.61% had Expanded MCI, while 34.6% had MCI by purely cognitive classification. Isolated executive function impairment was the least common, while impairment in multiple domains including executive function was the most common. Prevalence estimates weighted against the US Census are also provided.
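Census-weighted prevalence estimates of the kind mentioned above are typically obtained by post-stratification: stratum-specific prevalences are averaged using the population (census) share of each stratum as the weight. A minimal sketch with hypothetical strata (none of these numbers are from the study):

```python
# Post-stratification: weight stratum-specific prevalence by each stratum's
# share of the target (census) population. All numbers below are hypothetical.
strata = [
    {"ages": "65-74", "prevalence": 0.15, "census_share": 0.55},
    {"ages": "75-84", "prevalence": 0.25, "census_share": 0.30},
    {"ages": "85+",   "prevalence": 0.40, "census_share": 0.15},
]

weighted_prevalence = sum(s["prevalence"] * s["census_share"] for s in strata)
print(weighted_prevalence)
```

This corrects for the age-stratified random sampling design: strata oversampled relative to the census are weighted down, and undersampled strata are weighted up.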
The manner in which criteria for MCI are operationalized determines the proportion of individuals who are thus classified, and the degree of overlap with other criteria. Prospective follow-up is needed to determine progression from MCI to dementia, and thus empirically develop improved MCI criteria with good predictive value.
MCI criteria; subjective memory; epidemiology
To examine whether there is an association between engagement in reading and hobbies and dementia risk in late life.
942 members of a population-based, prospective cohort study were followed biennially to identify incident dementia cases. Cox proportional hazards models were used to estimate the risk of dementia in relation to baseline total number of activities and time commitment to reading and hobbies.
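The Cox model estimates hazard ratios by maximizing a partial likelihood over the risk set at each event time. For a single binary covariate it can be sketched in a few lines (toy data, not from the study; a real analysis would use a survival package and handle ties and censoring weights properly):

```python
import math

def cox_partial_loglik(beta, data):
    """Cox partial log-likelihood for one binary covariate.
    data: (time, event, x) tuples; assumes no tied event times."""
    ll = 0.0
    for t_i, event_i, x_i in data:
        if not event_i:
            continue  # censored records contribute only through risk sets
        risk_set = sum(math.exp(beta * x) for t, _, x in data if t >= t_i)
        ll += beta * x_i - math.log(risk_set)
    return ll

# Toy data: subjects with x=1 (hypothetically, low activity) tend to fail earlier.
toy = [(1, 1, 1), (2, 1, 0), (3, 1, 1), (4, 1, 0)]

# Crude grid search for the beta maximizing the partial likelihood.
best_beta = max((b / 100 for b in range(-300, 301)),
                key=lambda b: cox_partial_loglik(b, toy))
hazard_ratio = math.exp(best_beta)
```

Each event contributes the log-probability that the subject who failed was the one to fail, given everyone still at risk; covariates that shift failures earlier get positive coefficients (hazard ratio above 1).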
A lower risk for dementia was found for a greater number of activities, and for a high (about 1 hour each day) compared with low (less than 30 minutes each day) daily time commitment to hobbies, independent of covariates. Only the protective effect of hobbies remained after methods were used to minimize bias due to potential preclinical dementia.
Engaging in hobbies for one or more hours every day might be protective against dementia in late life.
Dementia; Cognitive Activity; Leisure Activity; Hobbies
The optimal threshold for initiating HIV treatment is unclear.
To compare different thresholds for initiating HIV treatment.
We used our validated computer simulation to weigh important harms from earlier initiation of antiretroviral therapy (toxicity, side effects, and resistance accumulation) against important benefits (decreased HIV-related mortality).
Veterans Aging Cohort Study (5742 HIV-infected patients and 11 484 matched uninfected controls) and published reports.
Individuals with newly diagnosed chronic HIV infection and varying viral loads (10 000, 30 000, 100 000, and 300 000 copies/mL) and ages (30, 40, and 50 years).
Alternative thresholds for initiating antiretroviral therapy (CD4 counts of 200, 350, and 500 cells/mm3).
Life-years and quality-adjusted life-years (QALYs).
Results of Base-Case Analysis
Although the simulation was biased against earlier treatment initiation because it used an upper-bound assumption for therapy-related toxicity, earlier treatment increased life expectancy and QALYs at age 30 years regardless of viral load (life expectancies with CD4 initiation thresholds of 500, 350, and 200 cells/mm3 were 18.2 years, 17.6 years, and 17.2 years, respectively, for a viral load of 10 000 copies/mL and 17.3 years, 15.9 years, and 14.5 years, respectively, for a viral load of 300 000 copies/mL), and increased life expectancies at age 40 years if viral loads were greater than 30 000 copies/mL (life expectancies were 12.5 years, 12.0 years, and 11.4 years, respectively, for a viral load of 300 000 copies/mL).
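Taking the reported base-case numbers at face value, the implied life-expectancy gain from initiating at a CD4 count of 500 rather than 200 cells/mm3 at age 30 can be tabulated directly:

```python
# Reported base-case life expectancies (years) at age 30, keyed by viral load
# (copies/mL) and CD4 initiation threshold (cells/mm3).
life_expectancy = {
    10_000:  {500: 18.2, 350: 17.6, 200: 17.2},
    300_000: {500: 17.3, 350: 15.9, 200: 14.5},
}

for viral_load, by_threshold in life_expectancy.items():
    gain = by_threshold[500] - by_threshold[200]
    print(f"VL {viral_load}: initiating at CD4 500 vs 200 adds {gain:.1f} life-years")
```

The gain grows with viral load (about 1.0 versus 2.8 life-years here), consistent with the finding that earlier initiation matters most for patients with high viral loads.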
Results of Sensitivity Analysis
Findings favoring early treatment were generally robust.
Results favoring later treatment may not be valid. The findings may not be generalizable to women.
This simulation suggests that earlier initiation of combination antiretroviral therapy is often favored compared with current recommendations.