The immuno-restorative effects of Highly Active Antiretroviral Therapy have allowed individuals with HIV to increase their lifespan. Despite this treatment, HIV can still produce central nervous system manifestations that can result in subtle or profound cognitive impairment. The characteristics of HIV-related cognitive impairment mirror the declines found in normal cognitive aging. These impairments are observed in memory, speed of processing, attention, inductive reasoning, and psychomotor speed (Hinkin et al., 2004; Hardy & Vance, 2009). Hardy et al. (1999) examined the neurocognitive functioning of HIV-positive adults of varying ages against an HIV-negative control group; within the HIV-positive group, increased age and greater disease severity were associated with poorer cognitive performance.
Such cognitive manifestations may be complicated by other factors that can also affect cognitive functioning, such as psychoactive drug use, depression, education level, hardiness, and years diagnosed with HIV (Rabkin, McElhiney, Ferrando, van Gorp, & Lin, 2004). Examining such variables could reveal underlying risk and protective factors that may predict cognitive performance among HIV-positive adults. Furthermore, identifying such predictors of cognition is particularly relevant, as it may help health care providers prevent future cognitive and functional decline.
The current study examines a subsample from a larger study that explored the effects of HIV status on cognition (Vance, Wadley, Crowe, Raper, & Ball, under review). The results from the initial analysis indicated, as expected, that HIV-positive adults exhibited poorer cognition as compared to the HIV-negative group. These results are similar to what Hardy et al. (1999) found in their comparison of HIV-positive individuals to an HIV-negative control group. Therefore, the purpose of this study was to further examine potential predictors of cognition among this sample of HIV-positive adults. Identifying predictors of cognition in this at-risk population will allow for the development of prevention and intervention techniques to help HIV-positive adults maintain their optimal cognitive functioning.
The 98 HIV-positive adults in this study were recruited from the Birmingham, Alabama, metropolitan area as part of a larger study on the effect of HIV status on cognition. Recruitment was conducted through an HIV outpatient clinic, university newspaper advertisements, brochures, flyers, and word-of-mouth. Only participants who had known about their HIV-positive diagnosis for at least one year were included. This one-year criterion was used to ensure that all participants had at least one full year of HIV exposure to their nervous system. Also, an initial diagnosis of HIV can be characterized by extreme reactive depression, which negatively affects cognitive functioning (Vance, Childs, Moneyham, & McKie-Bell, 2009); therefore, the effects of a diagnosis-induced reactive depression on cognition were minimized. Exclusion criteria included being homeless, blind or deaf, pregnant, having a developmental disability, currently being treated with radiation or chemotherapy, being a non-English speaker, having a severe psychiatric or neurological disorder (e.g., schizophrenia, bipolar disorder), or having a history of traumatic brain injury. Such exclusion criteria are similar to those of other HIV cognitive studies, as they exclude those with conditions or situations that may impair cognition or interfere with neuropsychological testing (e.g., Hinkin, Castellon, Atkinson, & Goodkin, 2001). All participants signed a consent form approved by the University of Alabama at Birmingham’s Institutional Review Board and received fifty dollars for their time. The sample comprised adults with a mean age of 41.47 years (range 24 – 67; SD = 10.23). As seen in Table 1, 68 participants were African American and 28 were women. Seventy-five participants provided their last known CD4+ lymphocyte cell count (M = 491.32, SD = 355.91).
In this cross-sectional study, participants were seen at a university research facility for approximately 2½ hours. During this one-to-one visit, trained interviewers administered a battery of demographic, psychosocial, and cognitive measures to the participants. The interviewers were professionals who also tested for other research protocols at the university. They were trained and monitored to avoid drift, and weekly staff meetings provided the opportunity to discuss problems with administering the protocol.
Predictor variables were derived from the below measures. The variables of interest included age, gender, socio-economic status (SES), reading score, mood disturbance score, medical problems composite, CD4+ lymphocyte cell count, years with HIV, HIV medication usage, social networks, hardiness, and psychoactive drug use. Descriptive statistics for all predictor variables are provided in Table 1.
This questionnaire was used to obtain background information, which included age, gender (0 = female; 1 = male), race, educational level, and household income before taxes (1 = $0 – $10K; 2 = $10,001 – $20K; etc.). An SES variable was created by conducting z-score transformations on educational level and income and then combining them to form a single composite.
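The composite construction described above can be sketched as follows. The data values here are hypothetical, and the choice to sum (rather than average) the two z-scores is an illustrative assumption consistent with the description:

```python
import statistics

def zscores(values):
    """Standardize a list of values to mean 0, SD 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical data: years of education, and income as the 1-12 bracket code.
education = [12, 16, 10, 14, 12]
income = [2, 5, 1, 3, 2]

# SES composite = sum of the two z-scores for each participant.
ses = [e + i for e, i in zip(zscores(education), zscores(income))]
```

Because each z-scored variable has mean zero, the resulting composite is also centered at zero, with positive values indicating above-average SES within the sample.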
As a proxy measure of the quality of education that one received, the WRAT-3 Reading subtest was used (Wilkinson, 1993). To measure reading ability, this test asked participants to name letters and/or pronounce words of increasing difficulty. As per WRAT-3 standardized administration, participants were asked to read the letters if fewer than five words from the list were pronounced correctly, and testing was discontinued once ten consecutive errors were made. Possible scores ranged from 0 – 57, with higher scores indicating better reading abilities.
Mood disturbance was measured by the POMS, which measures mood or affective state (McNair, Lorr, & Droppleman, 1992). Participants were asked to rate the extent to which each of 65 descriptive items (e.g., energetic, bitter) reflected how they had been feeling during the past week, using a 5-point Likert scale (0 = not at all; 4 = extremely). From this, a mood disturbance score was calculated for this study; higher scores indicate more negative affect. The standardized scoring protocol permits negative and positive values. In this study, internal consistency was high for the POMS total score (Cronbach’s alpha = .94).
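The internal-consistency statistic reported here and for the measures below can be computed as in this minimal sketch; the three-item, five-participant data set is purely hypothetical:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list of scores per item)."""
    k = len(items)
    sum_item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]   # per-participant totals
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical 0-4 Likert responses: three items, five participants.
items = [[0, 1, 2, 3, 4],
         [1, 1, 2, 3, 3],
         [0, 2, 2, 2, 4]]
alpha = cronbach_alpha(items)
```

Alpha approaches 1 as items covary strongly relative to their individual variances, which is why values in the .80s and .90s, such as those reported here, are read as high internal consistency.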
Adapted from the Cardiovascular Health Study (1989), this self-report questionnaire identifies the presence of medical conditions such as heart disease and diabetes. It was modified to also include questions regarding HIV, such as most recent CD4+ lymphocyte cell count and year diagnosed. From this measure, four variables were created for the analyses – a medical problems composite, a dichotomized CD4+ lymphocyte cell count, HIV medication usage, and HIV chronicity. First, a “medical problems composite” was formed by combining: 1) the number of medical conditions, and 2) the number of medications prescribed. To equate both on the same scale, z-scores for each item were combined to form the composite; a higher value indicates poorer health. Second, CD4+ lymphocyte cell count was used to create a categorical variable, distinguishing those with counts less than 200 cells/mm³ (an indicator of AIDS) from those with counts of 200 cells/mm³ or greater (0 = CD4+ lymphocyte cell count less than 200; 1 = CD4+ lymphocyte cell count 200 and above). This cutoff is significant because the literature suggests that a value below 200 CD4+ lymphocyte cells/mm³ exerts a distinct impact on neurological and cognitive functioning (Hardy & Vance, 2009). Third, HIV medication usage was assessed by determining whether participants were prescribed HIV medications (0 = no; 1 = yes). Finally, the number of years diagnosed with HIV was calculated by subtracting participants’ reported date of initial HIV diagnosis from the interview date; this variable is referred to as HIV chronicity. Self-report was the only available means of documenting the duration of HIV exposure in this sample.
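The four derivations above can be expressed in one short function. This is a sketch under stated assumptions: the z-scored inputs are taken as given, the function and parameter names are hypothetical, and 365.25 is used as the mean year length:

```python
from datetime import date

def derive_hiv_variables(conditions_z, meds_z, cd4_count,
                         on_hiv_meds, diagnosis_date, interview_date):
    """Derive the four study variables from raw questionnaire responses."""
    # 1) Medical problems composite: sum of two z-scores; higher = poorer health.
    medical_composite = conditions_z + meds_z
    # 2) CD4+ group: 0 = below 200 (AIDS-range), 1 = 200 and above.
    cd4_group = 1 if cd4_count >= 200 else 0
    # 3) HIV medication usage: 0 = no, 1 = yes.
    hiv_meds = 1 if on_hiv_meds else 0
    # 4) HIV chronicity: years from diagnosis date to interview date.
    chronicity_years = (interview_date - diagnosis_date).days / 365.25
    return medical_composite, cd4_group, hiv_meds, chronicity_years

# Hypothetical participant: diagnosed 1 Jan 2000, interviewed 1 Jan 2005.
result = derive_hiv_variables(0.5, -0.2, 150, True,
                              date(2000, 1, 1), date(2005, 1, 1))
```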
Social networks were measured by the LSNS, which measures how much perceived social support one has (Giranda, Lubben, & Atchison, 1999; Lubben, 1988; Rubinstein, Lubben, & Mintzer, 1994). The questionnaire consists of 10 items and is administered in an interview format, asking participants to answer questions regarding friends and family. Each question is coded on a 6-point Likert scale ranging from 0 – 5; responses are summed to create a total score ranging from 0 – 50, with higher scores indicating less social isolation. In this study, internal consistency was adequate for the LSNS (Cronbach’s alpha = .77).
Hardiness was measured by the PCS, which measures trait hardiness in coping related to goal setting and attainment (Greenglass, 2001). The measure includes 14 items; participants indicate how much they agree with each statement on a 4-point Likert scale (1 = not true at all; 4 = completely true), and responses are summed to create a total score ranging from 14 – 56. Higher scores reflect greater identification with these hardiness characteristics. In this study, internal consistency was high for the PCS (Cronbach’s alpha = .82).
Psychoactive drug use was measured by asking participants to indicate if they used (0 = no; 1 = yes) a variety of substances during the past week, similar to other substance use measures (e.g., Berquier & Ashton, 1992). These substances included tobacco, alcohol, cannabis, stimulants, amphetamines, benzodiazepines, sedatives, heroin, methadone, hallucinogens, and inhalants. A tally of the number of substances used served as the psychoactive drug use score.
Several cognitive domains were measured including speed of processing (Useful Field of View, Complex Reaction Time), psychomotor speed and visuomotor coordination (Trails A, Finger Tapping Test, Digit Symbol Copy and Substitution), attention and working memory (Digit Span, Spatial Span), reasoning (Letter Comparison and Pattern Comparison), and executive functioning (Trails B, CLOX). The means and standard deviations of the cognitive measures are presented in Table 2.
The Useful Field of View (UFOV®) test was included as a measure of visual speed of processing and attention (Edwards et al., 2005). It is a computerized test consisting of four increasingly complex subtests, where several presentations (17 – 500 ms long) of stimuli are used to determine the quickest speed at which the participants can accurately process visual information. The test is administered on a computer monitor with touch-screen technology following standardized administration instructions. The scores are reported in ms, representing the optimal presentation speed at which visual information can be correctly perceived. Lower scores (i.e., fewer ms) indicate faster speed of processing. Test-retest reliability is high (r = .81) (Edwards et al., 2005).
Another computer-administered test, the complex reaction time test, was used as a test of everyday visual search skills and reaction time (Ball & Owsley, 2000). The test involves a detailed set of instructions and practice tests to acclimate the participants to the test and the signs being presented (bicycle, pedestrian, left arrow, right arrow). Participants are then asked to complete 24 trials, making responses using the computer mouse in reaction to various target signs being presented on the computer screen. These targets are presented randomly and among non-target signs, requiring the participants to visually scan the signs and react when one of the target signs appears. The final scores for this test include an average reaction time (sec) for the three and six sign trials. Lower scores (i.e., fewer sec) indicate better cognitive functioning. Test-retest reliability is adequate (r = .56) (Ball & Owsley, 2000).
Trails A (Trail Making Test) is a widely used cognitive measure of visuospatial tracking, attention, and perceptual motor speed (Reitan, 1958; Reitan, 1979). It is a reliable measure (Spreen & Strauss, 1998) with good sensitivity to age-related declines in performance (Lezak, 1995). Specifically, Trails A is a measure of attention and visuomotor tracking in which participants connect numbers in sequence (e.g., 1-2-3-4 etc.). Performance is measured by the time (sec) taken to complete each task, with lower scores indicating better cognitive functioning.
The Finger Tapping Test is a measure of fine motor speed, in which participants are instructed to quickly tap their finger on a button for ten sec at a time. Five trials of tapping are completed for both left and right hands and scores are recorded via an electronic counting device attached to the button. These trials are averaged across both hands to create a mean score. Higher scores indicate better psychomotor ability as evidenced by faster fine motor speeds. This test is highly reliable for women and men (r = .86 and r = .94, respectively) (Lezak, 1995; Spreen & Strauss, 1991).
The WAIS Digit Symbol Substitution and Copy test was used as a measure of psychomotor speed and visual orientation (Lezak, 1995). Two parts of the Digit Symbol subtest were administered in this study. The Digit Symbol Substitution is a measure of visuomotor coordination, while Digit Symbol Copy is a measure of psychomotor speed. In the substitution test, 9 digits are paired with distinct symbols, which are randomly presented in 93 boxes. Participants are given 90 sec to write symbols that correspond to the digits in each box. The score consists of the number of correctly written symbols, with higher scores indicating better cognitive functioning. The copy test presents 93 symbols and the participants are asked to copy the symbol in the adjacent boxes provided. Participants are timed and their completion time is recorded. Less time to complete the task indicates better cognitive functioning.
The Digit Span subtest of the Wechsler Memory Scale-III (WMS-III) was utilized as a measure of verbal working memory and attention (Wechsler, 1981). For the test, participants are given a string of numbers, spoken to them by an interviewer, that increase in length across trials placing more demand on short-term memory. The first part of the subtest requires participants to repeat the string of numbers verbatim. The second part instructs participants to repeat the sequence in reverse order, placing more demand on working memory as the information has to be mentally manipulated. The final score is the total number of correctly repeated trials, with higher scores indicating better verbal attention and working memory.
The WMS-III Spatial Span subtest (Wechsler, 1981) is a measure of nonverbal attention and working memory, requiring participants to touch a series of blocks in sequence, both forwards and backwards. Similar to digit span, the number of boxes in the series increases, placing greater demands on short-term memory. The final score is the total number of trials in which the boxes were correctly touched in sequence (forwards and backwards), with higher scores indicating better nonverbal attention and working memory.
Two separate tests, Letter Comparison and Pattern Comparison, were utilized to assess reasoning (Salthouse, 1991). Letter Comparison presents participants with three sets of 32 pairs of letter strings containing 3, 6, or 9 letters. The participants are instructed to decide whether the members of each pair are the same or different within each set, with a time limit of 20 sec per set. Pattern Comparison similarly presents three sets of 32 pairs of patterns with 3, 6, or 9 line segments, and participants are instructed to decide whether the patterns are the same or different within the 20 sec time limit. For each measure, the total score is the number of correct answers from all three sets. Larger scores indicate better reasoning and cognitive functioning.
CLOX is a clock drawing task designed to discriminate visuospatial functions from executive functions (Royall, Cordes, & Polk, 1998). The first part of the test, CLOX 1, requires participants to spontaneously construct a clock on a blank sheet of paper. They are instructed to: “Draw me a clock that says 1:45. Set the hands and numbers on the face so that a child could read them.” As it requires planning in addition to the actual construction of the clock, it encompasses an executive component. The second part, CLOX 2, instructs participants to copy the clock as drawn by the interviewer. This is a purer measure of visuospatial functions. The format in which the clock is drawn is standardized, with the interviewer beginning by drawing in the numbers 12, 6, 3, and 9 first, then placing the hands set to “1:45”. On both parts, standardized scoring criteria are used, resulting in scores ranging from 0 – 15, with lower scores reflecting greater impairment in each domain of functioning (i.e., visuospatial and executive functions). The scores for CLOX 1 and 2 were averaged to create an overall measure of visuospatial and executive functions. Inter-rater reliability for the CLOX measure has been shown to be very high (CLOX 1, r = .94; CLOX 2, r = .93) (Royall et al., 1998).
Trails B (Trail Making Test) is a measure of executive functioning (Reitan, 1958; Reitan, 1979). As with Trails A, it has been shown to be reliable (Spreen & Strauss, 1998) and has sensitivity to age-related declines (Lezak, 1995). For Trails B, participants are asked to connect numbers and letters in alternating sequence (e.g., 1-A-2-B-3-C etc.). Performance is measured by the time (sec) taken to complete the task, with lower scores indicating better executive functioning.
Correlations and a series of hierarchical regressions were conducted to determine predictors of cognitive test performance. Hierarchical regression models were created with each of the cognitive measures serving as the dependent variable (Tabachnick & Fidell, 2006). The independent variables were the potential predictors, including age, gender, SES, WRAT-3 Reading score, mood disturbance score, medical problems composite, CD4+ lymphocyte cell count grouping (< 200; ≥ 200), HIV chronicity, HIV medication usage (no/yes), social networks score, hardiness score, and psychoactive drug use score. Of these variables, only CD4+ lymphocyte cell count had missing data; pairwise deletion was utilized since imputation was not appropriate given the nature of this variable. Model 1 statistically controlled for basic demographic variables including age, gender, WRAT-3 Reading score, and SES. The variables for model 2 were predictors that were otherwise correlated with the cognitive measure at p < .10 (Table 3). Parametric correlations (i.e., Pearson’s r) were conducted for all continuous variables and nonparametric correlations (i.e., Spearman’s rho) were used for categorical and dichotomous variables (e.g., HIV medication usage).
As mentioned, each hierarchical multiple regression analysis included two models. Specifically, at model 1, age, gender, SES, and WRAT-3 Reading score (a proxy for quality of education) were forced into the regression model. This was done for each cognitive measure. Model 2 allowed all other variables that were significantly correlated (see Table 3) with each respective measure to be entered into the model. Analyses for Spatial Span and CLOX stopped at model 1, as they were not correlated with any other potential predictors. The results of these analyses are presented with respect to the cognitive domains each test is thought to measure.
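The two-model procedure can be illustrated with a minimal ordinary-least-squares sketch: fit the demographic block, add the model 2 predictors, and compare R². The variable names and simulated data are hypothetical stand-ins, not the study's data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
n = 98                                   # the study's sample size
age = rng.normal(41, 10, n)              # model 1 (demographic) predictors
reading = rng.normal(45, 8, n)
mood = rng.normal(20, 15, n)             # predictor added at model 2
score = 0.5 * age + 0.3 * mood + rng.normal(0, 10, n)   # simulated outcome

r2_model1 = r_squared(np.column_stack([age, reading]), score)
r2_model2 = r_squared(np.column_stack([age, reading, mood]), score)
delta_r2 = r2_model2 - r2_model1         # variance uniquely added by model 2
```

The ΔR² value is what the results sections below report when they say a model 2 variable "significantly improved the model"; in practice an F-test on ΔR² determines significance.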
Two measures assessed visual speed of processing and reaction time, the UFOV® and the Complex Reaction Time. The regression analysis for UFOV® revealed age and WRAT-3 Reading score (model 1) as significant predictors; those who were older and had poorer reading scores performed worse on this measure. Model 1 explained approximately 20% of the variance (Table 4). Adding in other variables in model 2 significantly improved the model, where mood disturbance score was found to also be a significant predictor of performance; those with poorer mood performed worse on this measure. Overall, the model explained 31% of the variance in speed of processing as measured by the UFOV®. The regression analysis for Complex Reaction Time revealed that the only significant demographic predictor (model 1) was WRAT-3 Reading score; those who had poorer reading scores performed worse on this measure. Model 1 explained 19% of the variance in reaction time (Table 4). As with UFOV®, the only significant predictor in model 2 was mood disturbance score; those with poorer mood performed worse on this measure. The overall model for Complex Reaction Time explained 31% of the variance in reaction time.
The regression analysis for Trails A revealed age as the only significant demographic factor (model 1); those who were older performed worse on this measure. Model 1 accounted for 21% of the variance in this measure (Table 5). In model 2, mood disturbance score added a significant proportion of variance to the model; those with poorer mood performed worse on this measure. The overall model accounted for 36% of the variance in performance on Trails A. The regression analysis for the Finger Tapping Test revealed that gender was the only significant demographic predictor of performance; women performed worse on this measure. Model 1 explained 14% of the variance (Table 5). In model 2, the variable of HIV medication usage significantly improved the model; those not using HIV medication performed worse on this measure. The overall model for Finger Tapping Test explained 21% of the variability in this measure. The regression analysis for Digit Symbol Copy revealed that of the demographic factors, only age was significantly predictive; those who were older performed worse on this measure. Model 1 accounted for 11% of the variance (Table 5). However, in the overall model, age was no longer significant, while HIV medication usage, mood disturbance score, and psychoactive drug use were significantly predictive; those not using HIV medication, those with a poorer mood, and those who used more psychoactive drugs performed worse on this measure. The overall model for Digit Symbol Copy explained 47% of the variability in this measure. Finally, on the more complex psychomotor speed/visuomotor coordination task, the regression analysis for Digit Symbol Substitution revealed that age and WRAT-3 Reading score (model 1) were significant predictors; those who were older and had poorer reading scores performed worse on this measure. Model 1 explained 23% of the variance (Table 5).
For this measure, there were no significant model 2 predictors beyond age and WRAT-3 Reading score; however, the overall model did improve the prediction of the outcome measure, now explaining 34% of the variance.
The hierarchical regression analysis for Digit Span revealed that only WRAT-3 Reading score was a significant predictor of performance (model 1); those who had poorer reading scores performed worse on this measure. Model 1 explained 20% of the variance (Table 6). The variables entered at model 2 failed to explain a significant portion of variance; thus, there was not a significant improvement in the overall model (22% of the variance explained). The regression analysis for Spatial Span, which only entered model 1 demographics, revealed that the only significant predictor was age; those who were older performed worse on this measure. This model explained only 8% of the variability (Table 6).
Several variables were found to be significant predictors for Letter and Pattern Comparison (Table 7). The regression analysis for Letter Comparison revealed that age (model 1) alone accounted for 29% of the variance; those who were older performed worse on this measure. Significant predictors for model 2 included CD4+ group, HIV medication usage, and mood disturbance score; those with a lower CD4+ count, those not using HIV medication, and those with a poorer mood score performed worse on this measure. The overall model explained 48% of the variance in this measure. The regression analysis for Pattern Comparison revealed that the model 1 variables of age and WRAT-3 Reading score were significant demographic predictors; those who were older and had poorer reading scores performed worse on this measure. Model 1 accounted for 27% of the variance (Table 7). The model 2 variable of CD4+ group significantly improved the model; those with a lower CD4+ count performed worse on this measure. The overall model explained approximately 39% of the variance in performance on Pattern Comparison.
The regression analysis for Trails B revealed several demographic factors as significant predictors, including age, gender, and WRAT-3 Reading score; those who were older, women, and had a poorer reading score performed worse on this measure. Model 1 accounted for 39% of the variance in Trails B performance (Table 8). In model 2, only the CD4+ group variable added to the predictive utility; those with a lower CD4+ count performed worse on this measure. The overall model explained 43% of the variability. For CLOX, with only model 1 variables entered, the regression analysis revealed only WRAT-3 Reading score as a significant predictor; those who had a poorer reading score performed worse on this measure. This model explained 10% of the variance in performance on this measure (Table 8).
The purpose of the study was to identify risk factors underlying cognitive functioning among adults with HIV. Determining such correlates may be beneficial in detecting precursors to future cognitive impairments and target areas for interventions. The present study suggests that HIV chronicity, medical problems, social support network, and degree of hardiness were not predictive of cognitive performance. While SES was not uniquely predictive of performance on any measure, educational quality as measured by WRAT-3 Reading score was predictive of performance on several measures, suggesting that quality of education may have attenuated the combined effect of income and educational level. Other predictive factors were found, including age, gender, CD4+ lymphocyte cell count, HIV medication usage, mood disturbance score, and psychoactive drug use.
The individual hierarchical regression models for each measure explained 8 – 48% of the variability in cognitive performance; some of which explained a high degree of variance. Predictors of the speed of processing domain included age, WRAT-3 Reading score, and mood disturbance score. Similarly, predictors of reasoning reflected those of speed of processing, but also included CD4+ lymphocyte cell count, along with whether or not they were using HIV medication. Attention and working memory had the fewest predictors, with only the demographics of age and WRAT-3 Reading score being significant predictors. Executive functioning predictors included age, gender, WRAT-3 Reading score, and CD4+ lymphocyte cell count. Finally, psychomotor speed and visuomotor coordination measures had the largest number of predictors, which included age, gender, WRAT-3 Reading score, mood disturbance score, psychoactive drug use composite score, and whether or not they were using HIV medication.
Several factors were consistently predictive of performance across many of the cognitive domains. Age and WRAT-3 Reading score were the most consistent demographic predictors across all five cognitive domains. Of the other factors included, only CD4+ lymphocyte cell count, HIV medication usage, and mood disturbance score emerged as predictors, but were not as consistently predictive across domains. One possible explanation for this finding is the high percentage of the sample that was using HIV medication; HIV medication has been shown in previous studies to be neuroprotective (Vance & Burrage, 2006). In addition, a majority of the sample had a CD4+ lymphocyte cell count of 200 and above. Similarly, psychoactive drug use was only predictive in one cognitive domain (psychomotor speed and visuomotor coordination). One explanation for this finding is the fact that participants were not recruited based on having a history of substance use, thus making it difficult to determine the correlation between psychoactive drug use and cognitive performance. A second explanation is that the psychoactive drug use composite measure used may not be sensitive enough to detect the contribution of psychoactive drug use on differences in cognitive performance.
Surprisingly, there was no significant relationship between HIV chronicity and cognitive performance. Vance, Woodley, and Burrage (2007) found years living with HIV to be predictive of performance on several measures, with those being diagnosed longer exhibiting better cognitive performance, possibly as a result of a greater degree of hardiness (e.g., better coping strategies) throughout the course of the disease. Although this study did not find a positive correlation between HIV chronicity and cognitive performance, the fact that a negative correlation (longer diagnosis predictive of poorer performance) was not detected is promising because it suggests that those living longer with HIV may not necessarily be subject to cognitive decline as a function of chronicity. Instead, age and HIV severity as indicated by CD4+ lymphocyte cell count may serve as more significant predictors.
Several strengths may be noted about this study. First, this study is one of few that required participants to have been diagnosed with HIV for at least 1 year; this was done to control for the effect on cognition of the reactive depression that can follow an HIV diagnosis. Second, this study used standardized and accepted measures. Third, this study examined the effects on cognition of a variety of unique psychosocial predictors (i.e., hardiness).
This study is not without limitations. One limitation is the restricted age range of our sample. Although the range was 24 – 67 years, few participants were over age 65, making it difficult to examine the effect of much older age on performance. As a result, the magnitude of age as a predictor was small in comparison to other studies (Hardy & Vance, 2009); however, the fact that effects of advancing age were still observed in this sample may attest to the concern that HIV may represent a form of accelerated aging (Vance et al., 2009). Another limitation is that self-report of one’s most recent CD4+ lymphocyte cell count was used. This could potentially lead to reporting bias, in that those with more memory problems may not be able to recall their most recent CD4+ lymphocyte cell count. Another limitation is that a measure of medication adherence was not included, so there is no way to discern the effect of consistent HIV medication use on cognitive performance. A final limitation is that the psychoactive drug score is based on self-report; because of the illicit nature of this variable, participants may have underreported psychoactive drug use based on social desirability (Polit & Beck, 2008).
Given their proximity to patients, nurses are in a key position to observe cognitive changes and problems in their patients with HIV. From their assessment of these changes, nurses and nurse practitioners can act promptly on the patient’s behalf to address and treat these adverse effects. This study has shown that patients who are older, engage in psychoactive drug use, experience more negative affect, do not use HIV medications, have a lower CD4+ lymphocyte cell count, and have a poorer quality of education may exhibit problems on several cognitive tests. By attending to modifiable factors such as limiting psychoactive drug use, maintaining a steady medication schedule in order to improve CD4+ lymphocyte cell count, and reducing depression, nurses can provide interventions that cognitively benefit patients.
Vance and Burrage (2006) posited several ways nurses can improve cognitive functioning in adults with HIV. Focusing on HIV treatment to avoid any further immunological and neurological decline is the first step. This requires educating patients about the importance of medication adherence and the perils of viral mutation for not just immunological health, but neurological health as well. The second step is to promote general health and well-being through physical exercise, mood stabilization, good sleep hygiene, proper nutrition, and avoiding or curbing substance use. These approaches have been shown in the literature to be neuroprotective. In addition, nurses can educate patients on the possible interactions among recreational drugs, psychotropic medications, and antiretroviral medications, which may require adjustments in current dosages. Furthermore, nurses can discuss dietary restrictions that may affect drug absorption. Finally, nurses can encourage positive neuroplasticity in patients. Positive neuroplasticity refers to the brain’s ability to form new connections between neurons. Strengthening such connections occurs in response to challenging and novel stimuli (Vance & Burrage, 2006). Therefore, activities that involve learning something new should be encouraged.
Nurse researchers have a unique opportunity to study cognitive aging in this clinical population. With the aging of the population and the increased lifespan of those with HIV, there will be increased opportunity to examine the complex interaction between older age and HIV on cognition. The results of this study indicate a need for studies that explicitly recruit much older adult samples, in whom declines may be more readily detected by cognitive measures. In addition, future studies should utilize a longitudinal design in order to examine the effects of HIV and aging over time. Likewise, studies investigating strategies to prevent or intervene against such cognitive changes with HIV should be undertaken. In an ongoing study, Vance, Marceaux, Fazeli, McKie-Bell, and Ball (2009) have used specially designed computer exercises, referred to as speed of processing training and similar to gaming technology, to improve the Useful Field of View of adults with HIV. This approach is important because other studies show that declines in Useful Field of View are related to at-fault crashes and poorer performance of instrumental activities of daily living (Ball, Edwards, & Ross, 2007). Other cognitive remediation techniques may also prove important in improving everyday functioning in adults with HIV.
Neurological insults due to HIV can produce subtle cognitive declines in patients, and there is much concern that these declines may worsen with age. Therefore, it is important to maintain adequate CD4+ lymphocyte cell counts, which may be done through the use of HIV medications, to prevent further neurological insults. Nurses should be frank with their patients about the possibility of slow, but progressive, cognitive decline. Likewise, they should also stress the factors that may accelerate such declines, such as psychoactive drug use and poor mood.
This study was funded by the University of Alabama at Birmingham (UAB) Center for AIDS Research and supported by the UAB Edward R. Roybal Center for Translational Research in Aging and Mobility and the UAB Center for Aging.
Predictors of Cognition in Adults with HIV: Implications for Nursing Practice and Research
Pariya L. Fazeli, Department of Psychology & Center for Research in Applied Gerontology, Holly Mears Building, Room 130, 924 19th Street South, University of Alabama at Birmingham (UAB), Birmingham, AL 35294, Office: 205-934-2551, Email: plfazeli@uab.edu.
Janice C. Marceaux, Medical/Clinical Psychology Doctorate Program, University of Alabama at Birmingham, Campbell Hall 201, 1530 3rd Ave. S., Birmingham, AL 35294-1170, Phone: 205-249-4782.
David E. Vance, School of Nursing, Room 456, 1701 University Boulevard, University of Alabama at Birmingham (UAB), Birmingham, AL 35294- 1210, Office: 205-934-7589.
Larry Slater, School of Nursing, Room 456, 1701 University Boulevard, University of Alabama at Birmingham (UAB), Birmingham, AL 35294-1210, Office: 205-934-7589, Fax: 205-996-7183, Email: lzslater@uab.edu.
C. Ann Long, School of Nursing, Nursing Building, Room 548, 1530 3rd Ave S., University of Alabama at Birmingham (UAB), Birmingham, AL 35294, Office: 205-996-5547, Fax: 205-034-7589, Email: calong@uab.edu.