Medication non-adherence is a key clinical concern in bipolar disorder across the lifespan. Cognitive deficits in older adults with bipolar disorder may hinder medication management ability, which, in turn, may lead to non-adherence. Using an innovative performance-based measure of medication management ability, the Medication Management Ability Assessment (MMAA), we compared the performance of 29 middle-aged and older community-dwelling outpatients with bipolar disorder who were clinically stable (mean age=61, sd=11 years; range 45 to 86) to that of 59 normal control subjects (NCs) and 219 outpatients with schizophrenia. The MMAA is a role-play task that simulates a medication regimen likely to be encountered by older adults. Within the bipolar group, we examined the relationships of MMAA scores to demographic variables, psychiatric symptom severity, and Dementia Rating Scale (DRS) scores. The bipolar group made 2.8 times as many errors on the MMAA as NCs (bipolar group=6.2 [sd=5.5] vs. NCs=2.2 [sd=2.5]) and did not differ significantly from the schizophrenia group in MMAA errors. Errors in the bipolar group were more likely to involve taking too few medications than taking too many. Within the bipolar group, a significant correlation was seen between MMAA scores and DRS Total Score, but not with age, education, BPRS, HAM-D, number of psychiatric medications, or medical conditions. Among DRS subscales, the Memory Subscale correlated most strongly with MMAA errors. This small cross-sectional study suggests that deficits in medication management ability may be present in later-life bipolar disorder. Neurocognitive deficits may be important in understanding problems with unintentional non-adherence.
Bipolar disorder; aging; older adults; medication management ability; medication adherence; memory
The benefits of aspirin as an anti-platelet agent are well established; however, there has been much debate about the lack of uniformity in the efficacy of aspirin to inhibit platelet function. In some patients, aspirin fails to inhibit platelets even where compliance has been verified, a phenomenon which has been termed “aspirin resistance”. These patients may in turn be at a higher risk of future vascular events. The proportion of “resistant” patients identified depends on the type of platelet function test. Therefore, the aim of this systematic review is to determine which, if any, platelet function test has utility in terms of identifying patients with a high risk of vascular events. The review has been registered with PROSPERO (CRD42012002151).
Relevant studies will be sought from bibliographic databases. Trials registers will be searched for ongoing studies. Reference lists will be checked and subject experts contacted. There will be no date or language restrictions. Standard reviewing methodology to minimise bias will be employed. Any prospective studies in patients on aspirin therapy and assessing platelet function in relation to relevant clinical outcomes will be included, as will studies reporting prognostic models. Risk of bias assessment will be based on the Quality Assessment of Diagnostic Accuracy Studies guidelines, and suitable criteria for assessing quality of prognostic studies. Data on test accuracy measures, relative risks, odds or hazard ratios will be extracted and meta-analysed, where possible, using a random-effects model to account for between-study heterogeneity. Where appropriate, the causes of heterogeneity will be explored through meta-regression and sub-group or sensitivity analyses. If platelet function testing is demonstrated to have diagnostic/predictive utility in a specific population, the potential for a cost-effectiveness analysis will be considered and, if possible, an economic model constructed. This will be supported by a systematic review of existing economic evaluation studies.
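A random-effects pooling step of the kind planned can be sketched with the DerSimonian-Laird estimator; this is a minimal illustration, not the review's actual analysis, and the effect sizes below are invented:

```python
import math

def random_effects_pool(log_effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes
    (e.g. log odds ratios) with their within-study variances."""
    k = len(log_effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, log_effects)) / sum(w)
    # Cochran's Q and the DerSimonian-Laird between-study variance estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # truncated at zero
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, se, tau2, ci

# Invented log odds ratios (vascular events in "aspirin-resistant" vs
# responsive patients) from three hypothetical heterogeneous studies:
pooled, se, tau2, ci = random_effects_pool([0.2, 0.9, 1.5], [0.04, 0.05, 0.06])
```

The between-study variance tau-squared widens the pooled confidence interval relative to a fixed-effect analysis, which is why a random-effects model is the conservative choice when heterogeneity is expected.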
The results of the review could indicate if platelet function test(s) could lead to a reliable prediction of the risk of clinically important events in a defined population, and thus support investigations into adjustments to therapy in order to compensate for a predicted poor response to standard aspirin.
Aspirin resistance; Cardiovascular disease; Cerebrovascular disease; Diabetes; Diagnosis; Meta-analysis; Platelet function test; Prediction; Prognosis; Test accuracy
Difficulties with sustained attention have been found among both persons with HIV infection (HIV+) and bipolar disorder (BD). The authors examined sustained attention among 39 HIV+ individuals with BD (HIV+/BD+) and 33 HIV-infected individuals without BD (HIV+/BD−), using the Conners’ Continuous Performance Test–II (CPT–II). A Global Assessment of Functioning (GAF) score was also assigned to each participant as an overall indicator of daily functioning abilities. HIV+/BD+ participants had significantly worse performance on CPT–II omission errors, hit reaction time SE (Hit RT SE), variability of SE, and perseverations than HIV+/BD− participants. When examining CPT–II performance over the six study blocks, both HIV+/BD+ and HIV+/BD− participants evidenced worse performance on scores of commission errors and reaction times as the test progressed. The authors also examined the effect of current mood state (i.e., manic, depressive, euthymic) on CPT–II performance, but no significant differences were observed across the various mood states. HIV+/BD+ participants had significantly worse GAF scores than HIV+/BD− participants, which indicates poorer overall functioning in the dually-affected group; among HIV+/BD+ persons, significant negative correlations were found between GAF scores and CPT–II omission and commission errors, detectability, and perseverations, indicating a possible relationship between decrements in sustained attention and worse daily-functioning outcomes.
Leg exercise hemodynamics during single-leg knee extensions were compared among healthy groups of early perimenopausal (n = 15), late perimenopausal (n = 12), and early postmenopausal (n = 11) women. Femoral blood flow (FBF) and vascular conductance (FVC) at rest and during very light work rates (0 and 5 W) were similar among all three menopause stage groups. Vascular responses at 10 W (FBF) and 20 W (FBF and FVC) were significantly higher (P < 0.05) in early perimenopausal compared with late perimenopausal women. At 15 and 25 W, FBF and FVC were similar between late perimenopausal and early postmenopausal groups but higher (P < 0.05) in early perimenopausal women as compared with the other two menopausal groups. In the combined sample of all three menopause stage groups, follicle-stimulating hormone was significantly correlated with vascular conductance during submaximal (15 W) exercise (R = −0.56, P < 0.001), even after adjustment for age, fitness, LDL cholesterol, and abdominal fat (R = −0.46, P = 0.005). Collectively, these findings suggest that in middle-aged women, there is an association between menopause stage and leg vascular responsiveness during exercise.
exercise hyperemia; menopause transition; reproductive hormones
Estimates of the prevalence of lifetime suicidal ideation and attempt, and risks for new-onset suicidality, among HIV-infected (HIV+) individuals are not widely available in the era of modern combined antiretroviral treatment (cART).
Participants (n=1560) were evaluated with a comprehensive battery of tests that included the depression and substance use modules of the Composite International Diagnostic Interview (CIDI) and the Beck Depression Inventory-II (BDI-II) as part of a large prospective cohort study at six U.S. academic medical centers. Participants with possible lifetime depression (n=981) were classified into five categories: 1) no thoughts of death or suicide (n=352); 2) thoughts of death (n=224); 3) thoughts of suicide (n=99); 4) made a suicide plan (n=102); and 5) attempted suicide (n=204).
Twenty-six percent (405/1560) of participants reported lifetime suicidal ideation and 13% (204/1560) reported lifetime suicide attempt. Participants who reported suicidal thoughts or plans, or attempted suicide, reported higher scores on the BDI-II (p<0.0001), and higher rates of current major depressive disorder (p=0.01), than those who did not. Attempters reported higher rates of lifetime substance abuse (p=0.02) and current use of psychotropic medications (p=0.01) than non-attempters.
Study assessments focused on lifetime, rather than current, suicidality. Data were not collected on the timing of ideation or attempts, or on the frequency or nature of suicide attempts.
The high rates of lifetime suicidal ideation and attempt, and the relationship of past report with current depressed mood, suggest that mood disruption is still prevalent in HIV. Findings emphasize the importance of properly diagnosing and treating psychiatric comorbidities among HIV-infected persons in the cART era.
HIV; depression; suicide
A major limitation in the identification of novel antichlamydial compounds is the paucity of effective methods for large-scale compound screening. The immunofluorescence assay is the preferred approach for accurate quantification of the intracellular growth of Chlamydia. In this study, an immunofluorescence image-based method (termed image-based automated chlamydial identification and enumeration [iBAChIE]) was customized for fully automated quantification of Chlamydia infection using the freely available open-source image analysis software program CellProfiler and the complementary data exploration software program CellProfiler Analyst. The method yielded enumeration of different species and strains of Chlamydia highly comparably to the conventional manual methods while drastically reducing the analysis time. The inhibitory capability of established antichlamydial activity was also evaluated. Overall, these data support that iBAChIE is a highly effective tool for automated quantification of Chlamydia infection and assessment of antichlamydial activities of molecules. Furthermore, iBAChIE is expected to be amenable to high-throughput screening studies for inhibitory compounds and fluorescently labeled molecules to study host-pathogen interactions.
Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequentially, either at the same or different ears. The frequency of the tones was also either the same or different (by at least two critical bands). When the task required same/different frequency judgments, presentation at the same ear significantly speeded responses and reduced errors. A same/different ear (location) judgment was likewise facilitated by keeping tone frequency constant. Perception was thus influenced by involuntary orienting of attention along the task-irrelevant dimension. When information in the two stimulus dimensions was congruent (same-frequency same-ear, or different-frequency different-ear), response was faster and more accurate than when it was incongruent (same-frequency different-ear, or different-frequency same-ear), suggesting the involvement of executive control to resolve conflicts. In total, the TAIL yielded five independent outcome measures: (1) baseline reaction time, indicating information processing efficiency, (2) involuntary orienting of attention to frequency and (3) location, and (4) conflict resolution for frequency and (5) location. Processing efficiency and conflict resolution accounted for up to 45% of individual variances in the low- and high-threshold variants of three psychoacoustic tasks assessing temporal and spectral processing. Involuntary orienting of attention to the irrelevant dimension did not correlate with perceptual performance on these tasks. Given that TAIL measures are unlikely to be limited by perceptual sensitivity, we suggest that the correlations reflect modulation of perceptual performance by attention.
The TAIL thus has the power to identify and separate contributions of different components of attention to auditory perception.
Nanotechnology is the design and assembly of submicroscopic devices called nanoparticles, which are 1–100 nm in diameter. Nanomedicine is the application of nanotechnology for the diagnosis and treatment of human disease. Disease-specific receptors on the surface of cells provide useful targets for nanoparticles. Because nanoparticles can be engineered from components that (1) recognize disease at the cellular level, (2) are visible on imaging studies, and (3) deliver therapeutic compounds, nanotechnology is well suited for the diagnosis and treatment of a variety of diseases. Nanotechnology will enable earlier detection and treatment of diseases that are best treated in their initial stages, such as cancer. Advances in nanotechnology will also spur the discovery of new methods for delivery of therapeutic compounds, including genes and proteins, to diseased tissue. A myriad of nanostructured drugs with effective site-targeting can be developed by combining a diverse selection of targeting, diagnostic, and therapeutic components. Incorporating immune target specificity with nanostructures introduces a new type of treatment modality, nano-immunochemotherapy, for patients with cancer. In this review, we will discuss the development and potential applications of nanoscale platforms in medical diagnosis and treatment. To impact the care of patients with neurological diseases, advances in nanotechnology will require accelerated translation to the fields of brain mapping, CNS imaging, and nanoneurosurgery. Advances in nanoplatform, nano-imaging, and nano-drug delivery will drive the future development of nanomedicine, personalized medicine, and targeted therapy. 
We believe that the formation of a science, technology, medicine law–healthcare policy (STML) hub/center, which encourages collaboration among universities, medical centers, US government, industry, patient advocacy groups, charitable foundations, and philanthropists, could significantly facilitate such advancements and contribute to the translation of nanotechnology across medical disciplines.
Nanoplatforms; Nanotechnology; Image-guided therapy; Nanomedicine; Nanoneurosurgery; Nanostructures; Contrast agents; Nanoparticles; Nanotechnology policy; Nano-radiology; Nano-neuroscience; Nano-neurology
Beijing family strains of Mycobacterium tuberculosis have attracted worldwide attention because of their wide geographical distribution and global emergence. Peru, which has a historical relationship with East Asia, is considered to be a hotspot for Beijing family strains in South America. We aimed to unveil the genetic diversity and transmission characteristics of the Beijing strains in Peru. A total of 200 Beijing family strains were identified from 2140 M. tuberculosis isolates obtained in Lima, Peru, between December 2008 and January 2010. Of them, 198 strains were classified into sublineages on the basis of 10 sets of single nucleotide polymorphisms (SNPs). They were also subjected to variable number tandem-repeat (VNTR) typing using an international standard set of 15 loci (15-MIRU-VNTR) plus 9 additional loci optimized for Beijing strains. An additional 70 Beijing family strains, isolated between 1999 and 2006 in Lima, were also analyzed in order to make a longitudinal comparison. The Beijing family was the third largest spoligotyping clade in Peru. Its population structure, by SNP typing, was characterized by a high frequency of Sequence Type 10 (ST10), which belongs to a modern subfamily of Beijing strains (178/198, 89.9%). Twelve strains belonged to the ancient subfamily (ST3 [n = 3], ST25 [n = 1], ST19 [n = 8]). Overall, the polymorphic information content values for the 24 loci were low. The 24-locus VNTR typing showed a high clustering rate (80.3%) and a high recent transmission index (RTIn−1 = 0.707). These findings strongly suggest active, ongoing transmission of Beijing family strains in the survey area. Notably, one VNTR genotype accounted for 43.9% of the strains. Comparisons with data from East Asia suggested the genotype emerged as a uniquely endemic clone in Peru. A longitudinal comparison revealed that the genotype was present in Lima by 1999.
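The clustering rate and the "n−1" recent transmission index reported above follow standard definitions; a minimal sketch, with hypothetical counts chosen to be consistent with the reported figures (80.3% clustering, RTIn−1 = 0.707, 198 genotyped strains):

```python
def clustering_rate(n_clustered, n_total):
    # Proportion of isolates that share a genotype with at least one other isolate.
    return n_clustered / n_total

def recent_transmission_index(n_clustered, n_clusters, n_total):
    # "n-1" method: each cluster of size n is assumed to reflect one index
    # case plus (n - 1) recently transmitted cases.
    return (n_clustered - n_clusters) / n_total

# Hypothetical counts (159 clustered strains falling into 19 clusters,
# 198 strains genotyped in total) consistent with the reported values:
rate = clustering_rate(159, 198)
rti = recent_transmission_index(159, 19, 198)
```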
Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioral thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers upon which computational modeling of phase locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.
perceptual learning; transfer of learning; frequency discrimination; internal noise; phase locking; integration time; auditory; modeling
During the last decade considerable attention has been focussed upon the development of new technologies and methodologies for the detection of drug resistance in Mycobacterium tuberculosis. There is a growing acknowledgement that testing a full panel of first-line drugs at baseline is a redundancy and an unaffordable indulgence: since only baseline resistance to either (or both) of the two most potent agents, isoniazid (H) and rifampicin (R), would usually prompt therapeutic modification, there is a shift towards initial RH drug susceptibility testing (DST), followed, if necessary, by further extended first- and second-line (currently phenotypic) DST. Most of the new drug susceptibility tests endorsed by the World Health Organization since 2007 deliver rapid RH DST (or R-alone DST for selected genotypic technologies). Targeting of patient groups with risk factors for drug resistance increases the proportion of tests that identify drug resistance, but in many settings at least as many patients with drug-resistant disease will have no identifiable risk factors; equity of care demands that universal RH DST at baseline should be the goal. We review the bewildering array of choices facing TB program directors and attempt to provide objective information to help in deciding what tools may be best suited to different environments.
HIV-associated neurocognitive disorders (HAND) remain prevalent despite improved antiretroviral treatment (ART), and it is essential to have a sensitive and specific HAND screening tool.
Participants were 200 HIV-infected US military beneficiaries who were managed early in the course of HIV infection, had few comorbidities, and had open access to ART. Participants completed a comprehensive, seven-domain (16-test) neuropsychological battery (∼120 min); neurocognitive impairment (NCI) was determined using a standardized score derived from demographically adjusted T-scores (global deficit score ≥0.5). Restricting the estimated administration time of the screening battery to ≤20 minutes, we examined the sensitivity and specificity of detecting NCI for all possible 2-, 3-, and 4-test combinations from the comprehensive battery.
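The exhaustive combination search described above can be sketched as follows; the test names, administration times, impairment calls, and ranking rule below are all invented for illustration, and real analyses would use the study's T-score cutoffs rather than binary calls:

```python
from itertools import combinations

def screen_positive(subset, scores, participant):
    # A participant screens positive if impaired on any test in the subset.
    return any(scores[t][participant] for t in subset)

def sens_spec(subset, scores, impaired, n):
    tp = sum(1 for i in range(n) if impaired[i] and screen_positive(subset, scores, i))
    tn = sum(1 for i in range(n) if not impaired[i] and not screen_positive(subset, scores, i))
    return tp / sum(impaired), tn / (n - sum(impaired))

def best_screeners(times, scores, impaired, n, budget=20, sizes=(2, 3, 4)):
    results = []
    for k in sizes:
        for subset in combinations(times, k):
            if sum(times[t] for t in subset) <= budget:   # time constraint
                sens, spec = sens_spec(subset, scores, impaired, n)
                results.append((subset, sens, spec))
    # Rank by sensitivity, then specificity (one of several reasonable rules).
    return sorted(results, key=lambda r: (r[1], r[2]), reverse=True)

# Toy data: per-test times (min) and per-participant impairment calls.
times = {"Stroop": 5, "HVLT-R": 6, "PASAT": 5, "ActionFluency": 2}
scores = {"Stroop":        [1, 0, 0, 0, 0, 0],
          "HVLT-R":        [0, 1, 0, 1, 0, 0],
          "PASAT":         [1, 1, 0, 0, 0, 0],
          "ActionFluency": [0, 0, 0, 0, 1, 0]}
impaired = [1, 1, 0, 0, 0, 0]
ranked = best_screeners(times, scores, impaired, n=6)
```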
Participants were relatively healthy (median CD4 count: 546 cells/mm3) with 64% receiving ART. Prevalence of NCI was low (19%). The best 2-test screener included the Stroop Color Test and the Hopkins Verbal Learning Test-Revised (11 min; sensitivity = 73%; specificity = 83%); the best 3-test screener included the above measures plus the Paced Auditory Serial Addition Test (PASAT; 16 min; sensitivity = 86%; specificity = 75%). The addition of Action Fluency to the above three tests improved specificity (18 min; sensitivity = 86%; specificity = 87%).
Combinations of widely accepted neuropsychological tests with brief implementation time demonstrated good sensitivity and specificity compared to a time-intensive neuropsychological test battery. Tests of verbal learning, attention/working memory, and processing speed are particularly useful in detecting NCI. Utilizing validated, easy-to-administer, traditional neuropsychological tests with established normative data may represent an excellent approach to screening for NCI in HIV.
The circadian clock has been shown to regulate metabolic homeostasis. Mice with a deletion of Bmal1, a key component of the core molecular clock, develop hyperglycemia and hypoinsulinemia suggesting β-cell dysfunction. However, the underlying mechanisms are not fully known. In this study, we investigated the mechanisms underlying the regulation of β-cell function by Bmal1. We studied β-cell function in global Bmal1-/- mice, in vivo and in isolated islets ex vivo, as well as in rat insulinoma cell lines with shRNA-mediated Bmal1 knockdown. Global Bmal1-/- mice develop diabetes secondary to a significant impairment in glucose-stimulated insulin secretion (GSIS). There is a blunting of GSIS in both isolated Bmal1-/- islets and in Bmal1 knockdown cells, as compared with controls, suggesting that this is secondary to a loss of a cell-autonomous effect of Bmal1. In contrast to previous studies, in these Bmal1-/- mice on a C57Bl/6 background, stimulated insulin secretion is, interestingly, lost in response to glucose but not to other depolarizing secretagogues, suggesting that events downstream of membrane depolarization are largely normal in Bmal1-/- islets. This defect in GSIS occurs as a result of increased mitochondrial uncoupling, with consequent impairment of glucose-induced mitochondrial potential generation and ATP synthesis, due to an upregulation of Ucp2. Inhibition of Ucp2 in isolated islets leads to a rescue of glucose-induced ATP production and insulin secretion in Bmal1-/- islets. Thus, Bmal1 regulates mitochondrial energy metabolism to maintain normal GSIS, and its disruption leads to diabetes due to a loss of GSIS.
Bmal1; circadian clock; diabetes; insulin secretion; mitochondria; β-cells
For drug-compliant patients, poor responses to tuberculosis (TB) treatment might be attributable to subtherapeutic drug concentrations. Impaired absorption of rifampin was previously reported for patients with diabetes mellitus (DM) or HIV. The objective of this study was to determine whether TB drug pharmacokinetics differed in Peruvian TB patients with DM or HIV. In this cross-sectional study, TB patients, recruited from health centers in Lima, Peru, had blood samples taken at 2 and 6 h after directly observed TB drug ingestion to determine plasma concentrations of rifampin. Of 105 patients, 50 had TB without a comorbidity, 26 had coexistent DM, and 29 had coexistent HIV. Unexpectedly, the overall median 2- and 6-h levels of rifampin were 1.6 and 3.2 mg/liter, respectively, and for 61 patients (62.2%) the time to peak concentration was 6 h (slow absorbers) rather than 2 h (fast absorbers). The geometric mean peak concentration of drug in serum (Cmax) was significantly higher in fast absorbers than in slow absorbers (5.0 versus 3.8 mg/liter; P = 0.05). The rifampin Cmax was significantly lower in male patients than in female patients (3.3 versus 6.3 mg/liter; P < 0.001). Neither slow nor fast absorbers with comorbidities (DM or HIV) had significantly different Cmax results compared to those of TB patients without comorbidities. A regression analysis showed that female gender (P < 0.001) and a time to maximum concentration of drug in serum (Tmax) of 2 h (P = 0.012) were independently correlated with increased exposure to rifampin. Most of this Peruvian study population exhibited rifampin pharmacokinetics different from those conventionally reported, with delayed absorption and low plasma concentrations, independent of the presence of an HIV or DM comorbidity.
Carrion's disease affects small Andean communities in Peru, Colombia and Ecuador and is characterized by two distinct disease manifestations: an abrupt acute bacteraemic illness (Oroya fever) and an indolent cutaneous eruptive condition (verruga peruana). Case fatality rates of untreated acute disease can exceed 80% during outbreaks. Despite being an ancient disease that has affected populations since pre-Inca times, research in this area has been limited, and diagnostic and treatment guidelines are based on very low-level evidence. The apparently limited geographical distribution and ecology of Bartonella bacilliformis may present an opportunity for disease elimination if a clear understanding of the epidemiology and optimal case and outbreak management can be gained.
All available databases were searched for English and Spanish language articles on Carrion's disease. In addition, experts in the field were consulted for recent un-published work and conference papers. The highest level evidence studies in the fields of diagnostics, treatment, vector control and epidemiology were critically reviewed and allocated a level of evidence, using the Oxford Centre for Evidence-Based Medicine (CEBM) guidelines.
A total of 44 studies were considered to be of sufficient quality to be included in the analysis. The majority of these were level 4 or 5 (low quality) evidence and based on small sample sizes. Few studies had been carried out in endemic areas.
Current approaches to the diagnosis and management of Carrion's disease are based on small retrospective or observational studies and expert opinion. Few studies take a public health perspective or examine vector control and prevention. High quality studies performed in endemic areas are required to define optimal diagnostic and treatment strategies.
Carrion's disease is one of the truly neglected tropical diseases. It predominantly affects children in small Andean communities in Peru, Colombia and Ecuador. Case fatality rates of untreated acute disease can exceed 80% during outbreaks. Diagnostic and treatment guidelines are based on very low-level evidence, and public health and prevention programs have been limited. This paper presents the first systematic review of Carrion's disease in Peru and encompasses a detailed analysis of all the highest-level evidence regarding not only diagnosis and management but also vector control and prevention. In the review, we highlight the considerable knowledge gaps in this field and suggest a strategy for a renewed effort in its investigation. We hope that through this work we will be able to develop a better understanding of the epidemiology, natural history and optimal approaches to case and outbreak management.
Multiple nuclear receptors, including hepatocyte nuclear factor 4α (HNF4α), retinoid X receptor α (RXRα) plus peroxisome proliferator-activated receptor α (PPARα), RXRα plus farnesoid X receptor α (FXRα), liver receptor homolog 1 (LRH1), and estrogen-related receptors (ERRs), have been shown to support efficient viral biosynthesis in nonhepatoma cells in the absence of additional liver-enriched transcription factors. Although HNF4α has been shown to be critical for the developmental expression of hepatitis B virus (HBV) biosynthesis in the liver, the relative importance of the various nuclear receptors capable of supporting viral transcription and replication in the adult in vivo has not been clearly established. To investigate the role of the nuclear receptor FXR and the corepressor small heterodimer partner (SHP) in viral biosynthesis in vivo, SHP-expressing and SHP-null HBV transgenic mice were fed a bile acid-supplemented diet. The increased FXR activity and SHP expression levels resulting from bile acid treatment did not greatly modulate HBV RNA and DNA synthesis. Therefore, FXR and SHP appear to play a limited role in modulating HBV biosynthesis, suggesting that alternative nuclear receptors are more critical determinants of viral transcription in the HBV transgenic mouse model of chronic viral infection. These observations suggest that hepatic bile acid levels or therapeutic agents targeting FXR may not greatly modulate viremia during natural infection.
The design of β-peptide foldamers targeting the transmembrane (TM) domains of complex natural membrane proteins has been a formidable challenge. A series of β-peptides was designed to stably insert in TM orientations in phospholipid bilayers. Their secondary structures and orientation in the phospholipid bilayer was characterized using biophysical methods. Computational methods were then devised to design a β-peptide that targeted a TM helix of the integrin αIIbβ3. The designed peptide (β-CHAMP) interacts with the isolated target TM domain of the protein, and activates the intact integrin in vitro.
2',3-Dimethyl-4-aminoazobenzene (ortho-aminoazotoluene, OAT) is an azo dye and a rodent carcinogen that has been evaluated by the International Agency for Research on Cancer (IARC) as a possible (Group 2B) human carcinogen. Its mechanism of action remains unclear. We examined the role of the xenobiotic receptor Constitutive Androstane Receptor (CAR, NR1I3) as a mediator of the effects of OAT. We found that OAT increases mouse CAR (mCAR) transactivation in a dose-dependent manner. This effect is specific because another closely related azo dye, 3'-methyl-4-dimethylaminoazobenzene (3'MeDAB), did not activate mCAR. Real-time Q-PCR analysis in wild-type C57BL/6 mice revealed that OAT induces the hepatic mRNA expression of the following CAR target genes: Cyp2b10, Cyp2c29, Cyp3a11, Ugt1a1, Mrp4, Mrp2 and c-Myc. CAR-null (Car−/−) mice showed no increased expression of these genes following OAT treatment, demonstrating that CAR is required for their OAT-dependent induction. The OAT-induced, CAR-dependent increase of Cyp2b10 and c-Myc expression was confirmed by Western blotting. Immunohistochemistry analysis of wild-type and Car−/− livers showed that OAT did not acutely induce hepatocyte proliferation, but at much later time points it showed an unexpected CAR-dependent proliferative response. These studies demonstrate that mCAR is an OAT xenosensor, and indicate that at least some of the biological effects of this compound are mediated by this nuclear receptor.
Ortho-Aminoazotoluene (OAT); Constitutive Androstane Receptor (CAR); CYP450s; c-Myc; hepatocyte proliferation
Mild forms of HIV-associated neurocognitive disorders (HAND) remain prevalent in the era of combination antiretroviral therapy (cART). Although elevated LPS and immune activation are implicated in HAND pathogenesis, relationships of LPS and inflammatory markers to mild forms of HAND or impairment in specific cognitive domains are unknown. To examine these relationships, we compared plasma soluble CD14 (sCD14), CCL2, and LPS levels to neurocognitive test scores in a cART era cohort.
We analyzed plasma from HIV+ subjects (n=97) with nadir CD4 counts <300 and high frequency of HCV co-infection and illicit drug use for relationships between sCD14, CCL2, and LPS levels and neurocognitive test scores.
Plasma sCD14 levels were higher in subjects with test scores indicating global impairment (p=0.007), particularly in the attention and learning domains (p=0.015 and p=0.03, respectively), regardless of HAND diagnosis. Plasma sCD14 levels correlated inversely with global, attention, and learning T scores (p=0.036, 0.047, and 0.007, respectively) and yielded higher AUROC values for predicting impaired scores (AUROC 0.71, 0.81, and 0.71, respectively) than single-marker models based on plasma or CSF viral load or CD4 count; likewise, four-marker models combining plasma sCD14 with the three conventional markers outperformed the corresponding three-marker models.
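The single-marker AUROC comparisons above can be illustrated with a minimal, self-contained AUROC computation via the rank-sum (Mann-Whitney U) identity. This is a generic sketch with hypothetical data, not the study's actual analysis pipeline or dataset:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity.

    labels: 1 = impaired, 0 = unimpaired (hypothetical outcome coding)
    scores: biomarker values (e.g., plasma sCD14 concentrations)
    """
    pairs = sorted(zip(scores, labels))
    # Assign average ranks so tied scores are handled correctly.
    ranks = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):
        j = i
        while j + 1 < len(pairs) and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    n_pos = sum(lab for _, lab in pairs)
    n_neg = len(pairs) - n_pos
    rank_sum_pos = sum(r for r, (_, lab) in zip(ranks, pairs) if lab == 1)
    # AUC = (U statistic for the positive group) / (n_pos * n_neg)
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical example: higher marker values in impaired subjects
print(auroc([0, 1, 0, 1], [1.0, 2.0, 3.0, 4.0]))  # → 0.75
```

An AUROC of 0.5 indicates a marker with no discriminative value; comparing AUROCs across candidate markers (as the study did for sCD14 versus viral load and CD4 count) ranks their predictive utility.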
Plasma sCD14 is a biomarker associated with impaired neurocognitive testing in attention and learning domains in HIV-infected individuals with advanced disease, suggesting involvement of cortical and limbic pathways by inflammatory processes in the cART era. Plasma sCD14 is a potential biomarker to monitor HAND progression and therapeutic responses.
HIV; AIDS; HIV-associated neurocognitive disorders; HCV; biomarkers; drug abuse
Memory and executive functioning are two important components of clinical neuropsychological (NP) practice and research. Multiple demographic factors are known to affect performance differentially on most NP tests, but adequate normative corrections, inclusive of race/ethnicity, are not available for many widely used instruments. This study compared demographic contributions for widely used tests of verbal and visual learning and memory (Brief Visuospatial Memory Test-Revised, Hopkins Verbal Learning Test-Revised) and executive functioning (Stroop Color and Word Test, Wisconsin Card Sorting Test-64) in groups of healthy Caucasians (n = 143) and African-Americans (n = 103). The demographic factors of age, education, gender, and race/ethnicity were significant on some indices of all four tests. The magnitude of demographic contributions (especially age) was greater for African-Americans than Caucasians on most measures. New, demographically corrected T-score formulas were calculated for each race/ethnicity. Previously published normative standards significantly overestimated rates of NP impairment in African-Americans. Using the new demographic corrections developed and presented herein, NP impairment rates were comparable between the two race/ethnicities and unrelated to the other demographic characteristics (age, education, gender) in either group. Findings support the need to consider extended demographic contributions to neuropsychological test performance in clinical and research settings.
The targeted delivery of therapeutics to the tumor site is highly desirable in cancer treatment, because it is capable of minimizing collateral damage. Herein, we report the synthesis of a nanoplatform composed of core/shell Fe/Fe3O4 magnetic nanoparticles (MNPs; 15 ± 1 nm diameter) and the topoisomerase I blocker SN38 bound to the MNP surface via a carboxylesterase-cleavable linker. This nanoplatform demonstrated high heating ability (specific absorption rate, SAR = 522 ± 40 W/g) in an AC magnetic field. For the purpose of targeted delivery, the nanoplatform was loaded into tumor-homing double-stable RAW264.7 cells (mouse monocyte/macrophage-like cells (Mo/Ma)), which have been engineered to express intracellular carboxylesterase (InCE) upon addition of doxycycline by a Tet-On Advanced system. The nanoplatform was taken up efficiently by these tumor-homing cells, which showed low toxicity even at high nanoplatform concentrations. SN38 was released successfully by switching on the Tet-On Advanced system. We have demonstrated that this nanoplatform can potentially be used for thermochemotherapy, achieving the following goals: (1) specific delivery of the SN38 prodrug and magnetic nanoparticles to the cancer site as the payload of tumor-homing double-stable RAW264.7 cells; (2) release of chemotherapeutic SN38 at the cancer site by means of the self-contained Tet-On Advanced system; and (3) localized magnetic hyperthermia to enhance the cancer treatment, both by killing cancer cells through magnetic heating and by activating the immune system.
cell-based delivery; chemotherapeutic prodrug; magnetic Fe/Fe3O4 nanoparticles; SN38
The time course and outcome of perceptual learning can be affected by the length and distribution of practice, but the training regimen parameters that govern these effects have received little systematic study in the auditory domain. We asked whether there was a minimum requirement on the number of trials within a training session for learning to occur, whether there was a maximum limit beyond which additional trials became ineffective, and whether multiple training sessions provided benefit over a single session.
We investigated the efficacy of different regimens that varied in the distribution of practice across training sessions and in the overall amount of practice received on a frequency discrimination task. While learning was relatively robust to variations in regimen, the group with the shortest training sessions (∼8 min) learned significantly faster in the early stages of training than groups with longer sessions. In later stages, the group with the longest training sessions (>1 hr) showed slower learning than the other groups, suggesting overtraining. Between-session improvements were inversely correlated with performance; they were largest at the start of training and decreased as training progressed. In a second experiment, we found no additional longer-term improvement in performance, retention, or transfer of learning for a group that trained over 4 sessions (∼4 hr in total) relative to a group that trained for a single session (∼1 hr). However, the mechanisms of learning differed; the single-session group continued to improve in the days following cessation of training, whereas the multi-session group showed no further improvement once training had ceased.
Shorter training sessions were advantageous because they allowed more latent, between-session and post-training learning to emerge. These findings suggest that efficient regimens should use short training sessions with optimized spacing between them.
Tuberculosis (TB) disease remains one of the highest causes of mortality in HIV-infected individuals, and HIV–TB coinfection continues to grow at alarming rates, especially in sub-Saharan Africa. Surprisingly, a number of important areas regarding coinfection remain unclear. For example, increased risk of TB disease begins early in the course of HIV infection; however, the mechanism by which HIV increases this risk is not well understood. In addition, there is lack of consensus on the optimal way to diagnose latent TB infection and to manage active disease in those who are HIV infected. Furthermore, effective point-of-care testing for TB disease remains elusive. This review discusses key areas in the epidemiology, pathogenesis, diagnosis, and management of active and latent TB in those infected with HIV, focusing attention on issues related to high- and low-burden areas. Particular emphasis is placed on controversial areas where there are gaps in knowledge and on future directions of study.
tuberculosis; HIV; diagnosis; management; epidemiology
According to the multi-process theory of prospective memory (ProM), time-based tasks rely more heavily on strategic processes dependent on prefrontal systems than do event-based tasks. Given the prominent frontostriatal pathophysiology of HIV infection, one would expect HIV-infected individuals to demonstrate greater deficits in time-based versus event-based ProM. However, the two prior studies examining this question have produced variable results. We evaluated this hypothesis in 143 individuals with HIV infection and 43 demographically similar seronegative adults (HIV−) who completed the research version of the Memory for Intentions Screening Test, which yields parallel subscales of time- and event-based ProM. Results showed main effects of HIV serostatus and cue type, but no interaction between serostatus and cue. Planned pair-wise comparisons showed a significant effect of HIV on time-based ProM and a trend-level effect on event-based ProM that was driven primarily by the subset of participants with HIV-associated neurocognitive disorders. Nevertheless, time-based ProM was more strongly correlated with measures of executive functions, attention/working memory, and verbal fluency in HIV-infected persons. Although HIV-associated deficits in time- and event-based ProM appear to be of comparable severity, the cognitive architecture of time-based ProM may be more strongly influenced by strategic monitoring and retrieval processes.
AIDS dementia complex; Episodic memory; Executive functions; Neuropsychological assessment