Electronic medical records (EMR) provide a unique opportunity for efficient, large-scale clinical investigation in psychiatry. However, such studies will require development of tools to define treatment outcome.
Natural language processing (NLP) was applied to classify notes from 127 504 patients with a billing diagnosis of major depressive disorder, drawn from out-patient psychiatry practices affiliated with multiple, large New England hospitals. Classifications were compared with those derived from billing data (ICD-9 codes) alone and with a clinical gold standard based on chart review by a panel of senior clinicians. These cross-sectional classifications were then used to define longitudinal treatment outcomes, which were compared with a clinician-rated gold standard.
Models incorporating NLP were superior to those relying on billing data alone for classifying current mood state (area under receiver operating characteristic curve of 0.85–0.88 v. 0.54–0.55). When these cross-sectional visits were integrated to define longitudinal outcomes and incorporate treatment data, 15% of the cohort remitted with a single antidepressant treatment, while 13% were identified as failing to remit despite at least two antidepressant trials. Non-remitting patients were more likely to be non-Caucasian (p<0.001).
The application of bioinformatics tools such as NLP should enable accurate and efficient determination of longitudinal outcomes, enabling existing EMR data to be applied to clinical research, including biomarker investigations. Continued development will be required to better address moderators of outcome such as adherence and co-morbidity.
The analysis of electronic medical records (EMR) has been proposed as an efficient means of characterizing outcomes and rapidly identifying subpopulations within disorders in very large patient populations (Simon & Perlis, 2010). In addition to allowing collection of effectiveness outcomes or pharmacovigilance studies, such a tool could rapidly identify subgroups for biomarkers studies or participation in targeted clinical trials. This approach has advantages in ecological validity, as by definition it reflects clinical practice. It also offers far greater efficiency and feasibility than traditional clinical trials, as the data have already been collected and coded.
On the other hand, billing data typically offer little precision regarding diagnosis or outcome, particularly for psychiatric disorders. To overcome these limitations, computational methods have been developed to extract clinical data from narrative notes in the EMR. Natural language processing (NLP) automates chart review by parsing text into meaningful concepts based on a set of rules. Outside of medicine, the recent success of a computer contestant on a television game show represents perhaps the most prominent recent example of an NLP application (Ferrucci et al. 2010). NLP has been applied in a limited number of biomedical settings – mandatory reporting of notifiable diseases (Effler et al. 1999; Klompas et al. 2008; Lazarus et al. 2009), definition of co-morbid conditions (Meystre & Haug, 2006a, b; Solti et al. 2008) and medications (Turchin et al. 2006; Levin et al. 2007) and identification of adverse events (Bates et al. 2003; Penz et al. 2007).
To our knowledge, these approaches have received little attention in psychiatric disorders. In particular, given the complexity of phenotypic assessment in these illnesses, a crucial but unresolved question is how well outcomes may be defined based solely on EMR data. If such an approach could be validated, extremely efficient descriptive studies could be conducted and a means of facilitating future prospective studies established. To explore the potential utility of NLP, we examined outcomes of antidepressant treatment in major depressive disorder (MDD). Specifically, we attempted to develop, compare and validate alternative methods of characterizing two key outcomes in the treatment of MDD episodes, symptomatic remission (Rush et al. 2003a) and treatment resistance. Treatment resistant depression (TRD), typically defined as a failure to respond to at least two adequate trials of medication or other somatic therapies (Fava & Rush, 2006; Rush et al. 2006), contributes substantially to the disability and associated costs of MDD (Gibson et al. 2010) and may also be associated with elevated risk for suicide (Papakostas et al. 2003). The ability to identify individuals at greater risk for TRD might allow clinicians to risk stratify patients and treat or triage them more appropriately.
A particular challenge in defining outcomes is the need to integrate information across multiple visits – that is, a single assessment may be insufficient to establish an individual’s treatment course for a disorder in which symptoms may fluctuate over time. Therefore, we first compared ICD-9 codes to a gold standard for clinical status at each visit, established by consensus review among a panel of experienced clinicians. Then, we developed a novel and broadly applicable tool using NLP to classify cross-sectional clinical status from narrative notes and compared it with the gold standard and with ICD-9 codes alone. Finally, we extended these cross-sectional data to define longitudinal outcomes and again validated these outcomes against those generated by consensus of clinical expert reviewers.
The Partners HealthCare EMR incorporates socio-demographic data, billing codes, laboratory results, problem lists, medications, vital signs and narrative notes from Massachusetts General Hospital (MGH) and Brigham and Women’s Hospital (BWH), as well as community and specialty hospitals that are part of the Partners HealthCare system in Boston (MA, USA). Altogether these records comprise about three million unique patients.
Patients with at least one diagnosis of MDD (ICD-9 296.2x, 296.3x) in the billing data or out-patient medical record at MGH or BWH were selected from the EMR for inclusion in a dataset (referred to as a data ‘mart’). The data mart consists of all electronic records (psychiatric and non-psychiatric) from 127 504 patients using the i2b2 Workbench software (i2b2 v. 1.4; USA) (Murphy et al. 2007). The i2b2 system is a scalable computational framework for managing human health data and the Workbench facilitates analysis and visualization of such data. Billing data were available for all public and private payors. The Partners Institutional Review Board approved all aspects of this study and the usual safeguards for human subjects’ data were applied, including data encryption and password protection and elimination of patient identifiers from derived datasets.
From the MDD data mart, 5198 patients with at least one billing code indicating a diagnosis of MDD and a psychiatric narrative note were selected for inclusion in the study (Fig. 1). Patients with billing codes for bipolar disorder, schizophrenia or dementia/delirium were excluded, as were those with other depressive disorders, such as dysthymia.
To determine the ‘clinical gold standard’ for patient status, a panel of three experienced board-certified clinical psychiatrists (J.W.S., D.V.I., R.H.P.) reviewed 724 randomly selected out-patient provider narratives and arrived at a consensus about the clinical status of the patient at the time of the visit. This status was assigned based upon the reported clinical status – that is, based upon clinician characterization of the patient’s current mood state. Where this report was ambiguous or absent, DSM-IV mood state criteria were applied – that is, the clinical raters examined presence or absence of individual depression criteria and degree of severity, if present. Raters explicitly did not consider symptoms of co-morbidity such as anxiety or pain – thus, it was possible for subjects to be classified as remitted even with persistence of syndromal anxiety. Each note was classified as well (euthymic/remitted), defined as absence or virtual absence of depressive symptoms; depressed, defined as likely to meet criteria for a current major depressive episode; or intermediate/subthreshold. The definitions for these states were drawn from prior task force reports on terminology (Frank et al. 1991; Rush et al. 2006). Raters voted individually but were required to achieve consensus for each note. The confidence level for each assignment was further rated as good, fair or low, recognizing that the quality of notes precluded accurate characterization in some cases. During the classification process, the clinicians also identified words or phrases that were likely to be useful for classification. These terms were subsequently extracted from each narrative note with NLP using the HITEx platform (USA) (Zeng et al. 2006). The platform identifies terms using regular expressions (flexible matching) and applies negation and context algorithms to filter inappropriate matches.
The presence or absence of a term then becomes a feature of each note, which can be utilized in classification algorithms. Fig. 1 provides a schematic of the study selection procedure for identifying and classifying patient groups.
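The term-to-feature step can be illustrated with a minimal sketch. The actual HITEx term list and negation/context rules are not reproduced here, so the terms, the negation cues and the 20-character look-back window below are purely illustrative assumptions:

```python
import re

# Hypothetical term patterns -- the study's actual clinician-derived
# term list is in its supplementary material, not reproduced here.
TERMS = {
    "euthymic": re.compile(r"\beuthymic\b", re.IGNORECASE),
    "depressed": re.compile(r"\bdepress(ed|ion)\b", re.IGNORECASE),
    "much_better": re.compile(r"\bmuch better\b", re.IGNORECASE),
}
# Simple negation filter: a negating word within ~20 word/space
# characters immediately before the match.
NEGATION = re.compile(r"\b(no|not|denies|without)\b[\w\s]{0,20}$", re.IGNORECASE)

def extract_features(note: str) -> dict:
    """Map a narrative note to binary term-presence features,
    skipping matches preceded by a negation cue."""
    features = {}
    for name, pattern in TERMS.items():
        present = 0
        for m in pattern.finditer(note):
            if not NEGATION.search(note[:m.start()]):
                present = 1
                break
        features[name] = present
    return features
```

Each resulting 0/1 feature vector then feeds the classification models described below; real clinical NLP systems add context handling (e.g. family history, hypotheticals) beyond this sketch.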
We used the clinician-reviewed classifications to train models to predict the probability of being depressed or well (at the single-visit level) based on a logistic regression classifier with the adaptive least absolute shrinkage and selection operator (LASSO) procedure. We found that optimal fit was provided by fitting two separate models: depressed versus other and well versus other. The adaptive LASSO procedure simultaneously identifies important features and provides stable estimates of the model parameters (Zou, 2006). It is often applied to high-dimensional datasets to select the most useful subset of features for modeling, because it shrinks the coefficients of other features (covariates) to zero (for a review and comparison to alternative approaches, see Bunea et al. (2011)). The optimal penalty parameter was determined based on the Bayesian Information Criterion. ICD-9 depression billing codes include a digit intended to indicate current severity (e.g. 296.3x), but we anticipated that such digits might not be used consistently in claims data. We therefore developed and compared three sets of models using: (i) billing codes only; (ii) narrative terms only (NLP); (iii) all available data (billing codes+NLP).
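As a sketch of the modelling step, adaptive LASSO can be approximated in two stages: an initial fit supplies per-feature weights, and a weighted L1 fit shrinks uninformative coefficients exactly to zero (Zou, 2006). This scikit-learn rendering is our assumption, not the study's implementation; the ridge initializer and the fixed penalty `C` stand in for the BIC-tuned penalty described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adaptive_lasso_logistic(X, y, C=1.0):
    """Two-stage adaptive LASSO approximation for a binary outcome
    (e.g. 'depressed vs. other'). Returns coefficients on the
    original feature scale plus the intercept."""
    # Stage 1: initial estimate (ridge-penalized here) gives
    # per-feature adaptive weights 1/|beta_init|.
    init = LogisticRegression(penalty="l2", C=10.0, max_iter=1000).fit(X, y)
    w = 1.0 / (np.abs(init.coef_.ravel()) + 1e-8)
    # Stage 2: L1 fit on rescaled features X/w, so that features with
    # small initial coefficients are penalized more heavily.
    lasso = LogisticRegression(penalty="l1", solver="liblinear",
                               C=C, max_iter=1000).fit(X / w, y)
    beta = lasso.coef_.ravel() / w  # map back to original scale
    return beta, float(lasso.intercept_[0])
```

Features with coefficients shrunk to zero drop out of the model, which is how the procedure selects the term subsets reported in the Results.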
For each clinical state, we selected the threshold probability value for classifying patients as being in that state by setting the specificity level at 95%. Importantly, this rigorous threshold was selected to minimize the false positive rate, as might be optimal for biomarker studies of extreme phenotypes, for example. Patients whose predicted probability exceeded the threshold value for either state were classified as depressed or well, denoted by D+ and W+, respectively. We allowed a third state representing intermediate/subthreshold status, capturing those classified as neither D+ nor W+, in recognition of the prevalence of subthreshold depression in clinical practice. The sensitivity, precision and area under the receiver operating characteristic curve (AUC) were estimated for D+ and W+ to compare prediction performance across all three models against the gold standard established by the clinician ratings. At this phase of investigation, data on treatment, if any, were not considered.
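A minimal sketch of this thresholding rule, assuming the cut-off is taken as the 95th percentile of predicted probabilities among truly-negative visits; the tie-break when a visit exceeds both thresholds is our assumption, not specified in the text:

```python
import numpy as np

def threshold_at_specificity(probs, labels, specificity=0.95):
    """Cut-off such that ~95% of truly-negative visits score below it,
    pinning the false positive rate near 5%."""
    return float(np.quantile(probs[labels == 0], specificity))

def classify_visit(p_dep, p_well, t_dep, t_well):
    """Three-way single-visit label; visits exceeding neither
    threshold fall into the intermediate/subthreshold state."""
    if p_dep > t_dep:      # depressed checked first (an assumption)
        return "D+"
    if p_well > t_well:
        return "W+"
    return "intermediate"
```

The D+ and W+ thresholds come from their respective models ('depressed vs. other' and 'well vs. other'), each calibrated against its own gold-standard labels.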
A small subset of clinicians in the Partners system routinely ask patients to complete a validated self-report measure of depression severity, the 16-item Quick Inventory of Depressive Symptomatology-Self-Rated (QIDS-SR; Trivedi et al. 2004), at every visit. This subset includes clinicians within a specialized major depression treatment program. The availability of these scores provided an opportunity to further examine the cross-sectional classifications in an exploratory fashion: the scores were extracted from the narrative notes and compared between visit classifications (well, depressed, intermediate). Because of their relative paucity in the dataset, these scores were not used directly to train the classification algorithms.
Using the single visit classifications, we developed a rule-based algorithm to classify patients as TRD (case) or treatment-responsive (control) based upon standard definitions of outcome (Rush et al. 2006) and treatment-resistance (Rush et al. 2003a). The algorithm was defined by a panel of experienced clinicians to maximize face validity within the limitations of a sparse database.
TRD was defined as meeting all of the following criteria: two or more D+ visits within a 12-month period following an initial antidepressant prescription; no visits classified as W+; a majority of all visits classified as D+; exposure to at least two antidepressants during this period. Patients with at least two consecutive D+ visits following an antidepressant prescription were classified as TRD. Treatment-responsive was defined as two or more W+ visits within a 12-month period following initial antidepressant prescription, no visits classified as D+ and exposure to only one antidepressant during this period. Observations preceding antidepressant prescription were not considered.
As the intention was to identify more extreme phenotypes for future study, patients who otherwise met criteria for responsiveness but received multiple types of different antidepressants during this period were excluded from the responsive group, since requiring multiple antidepressants would typically represent failure of monotherapy. Thus, the treatment-responsive group might be further characterized as ‘single treatment responsive’.
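The decision rule above can be sketched as follows. This is a simplified rendering: the 12-month windowing and the consecutive-visit clause are omitted, the state labels are our shorthand, and the function name is hypothetical:

```python
def classify_course(states, n_antidepressants):
    """Longitudinal label from single-visit classifications made during
    the 12 months after the first antidepressant prescription.
    `states` uses "D+" (depressed), "W+" (well) or "INT" (intermediate);
    `n_antidepressants` counts distinct antidepressants prescribed."""
    n_dep = states.count("D+")
    n_well = states.count("W+")
    # TRD: >=2 depressed visits, none well, depressed visits in the
    # majority, and at least two antidepressants tried.
    if (n_dep >= 2 and n_well == 0
            and n_dep > len(states) / 2
            and n_antidepressants >= 2):
        return "TRD"
    # Responsive: >=2 well visits, none depressed, and only a single
    # antidepressant (multiple trials imply failed monotherapy).
    if n_well >= 2 and n_dep == 0 and n_antidepressants == 1:
        return "responsive"
    return "intermediate"
```

Everything that satisfies neither extreme falls into the intermediate/partially responsive group, consistent with the specificity-first design described above.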
In order to validate the rule, a board-certified psychiatrist (R.H.P.), blinded to the rule classifications, reviewed all of the notes for a random sample of 55 patients and assigned them a classification of either TRD or treatment-responsive using the same approach as in cross-sectional analysis and after reviewing standard outcome definitions noted above.
Finally, after deriving these longitudinal phenotypes, we compared patient demographics, visit frequency and medication prescriptions in the derived longitudinal TRD, treatment-responsive and intermediate/partially responsive groups. Co-morbid conditions were also assessed using the previously validated Age-adjusted Charlson Comorbidity Index (Charlson et al. 1987, 1994).
To assess the overall concordance of the single visit algorithms with the training data and to estimate the threshold value for D+ and W+, we used three-fold cross-validation repeated 50 times to correct for potential over-fitting bias. Bootstrapping was used to assess the standard error and obtain confidence intervals (CI) for the accuracy estimates. TRD, responsive and intermediate/partially responsive group demographics were compared using analysis of variance and χ2 test. Visit frequency, co-morbidity score and medication prescriptions were compared using the Kruskal–Wallis non-parametric test.
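The validation machinery can be sketched as below, assuming scikit-learn; a plain L2 logistic model stands in for the adaptive-LASSO classifier, and the bootstrap shown is the simple percentile variant (the study's exact procedure is not detailed):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RepeatedStratifiedKFold

def cv_auc(X, y, n_splits=3, n_repeats=50, seed=0):
    """Mean out-of-fold AUC from 3-fold CV repeated 50 times,
    guarding against over-fitting bias in the accuracy estimate."""
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=seed)
    aucs = []
    for train, test in cv.split(X, y):
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test],
                                  model.predict_proba(X[test])[:, 1]))
    return float(np.mean(aucs))

def bootstrap_ci(values, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for an accuracy
    statistic, by resampling observations with replacement."""
    rng = np.random.default_rng(seed)
    boots = [stat(rng.choice(values, size=len(values), replace=True))
             for _ in range(n_boot)]
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])
```

The repeated-CV estimate also determines the D+ and W+ probability thresholds out-of-fold, so the 95% specificity target is not over-fit to the training notes.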
After manual review and classification of 724 narrative notes, 34 NLP terms were identified from the clinician annotations as potentially useful for predicting cross-sectional (visitwise) clinical status. The adaptive LASSO procedure was then used to build three sets of models in a training dataset: one with billing data only (that is, the ICD-9 296.3x or 296.2x severity codes), one with NLP terms and one with both. The combination model selected 23 of the NLP terms and one billing code for the depressed classification and 15 NLP terms and three billing codes for the well classification. Supplementary Fig. S1 depicts the model selection for prediction of single-visit clinical status using NLP and billing codes, while the resulting models are shown in Supplementary Table S1. The initial terms positively associated in the depressed model included ‘depressed’ and ‘mood anxious’ and those positively associated in the well model were ‘euthymic affect’, ‘stable’ and ‘much better’. Some unexpected terms, such as ‘energy’, also associated positively with depressed status, likely because clinicians described neurovegetative symptoms in both depressed and well patients. Two single visit classifiers were developed, to categorize visits as ‘depressed’, ‘well’ or ‘intermediate’.
Receiver operating characteristic curves for these sets of models are shown in Fig. 2. Models incorporating NLP were markedly more accurate than those incorporating billing data alone: for prediction of ‘depressed’, AUC was 0.88 v. 0.54, while for prediction of ‘well’, AUC was 0.85 v. 0.55. Models with and without billing codes performed similarly. Fig. 2 also indicates sensitivity for each model, with specificity constrained to be 0.95. For the full (NLP+billing) models, when ‘wellness’ or ‘depression’ were classified with a 5% false positive rate, sensitivity was 0.39, i.e. 39% of ‘well’ or ‘depressed’ visits were identified.
To further characterize the performance of the mood state classifiers, we examined the classification of notes that included a validated self-report measure of depressive symptom severity, the QIDS-SR (Trivedi et al. 2004). Such notes were available for only a subset of individuals from the full cohort (~20%). Where these measures were available for multiple visits for a single patient, one visit was randomly selected. For the 874 notes classified as depressed, mean QIDS-SR was 12.6 (95% CI 12.0–13.3); for the 479 notes classified as remitted, mean QIDS-SR was 7.9 (95% CI 7.2–8.6); and for the 1606 notes classified as intermediate, mean QIDS-SR was 13.3 (95% CI 12.8–13.7).
Single-visit classifications based on the NLP+billing codes models for both D+ and W+ were then used to construct longitudinal outcomes. Supplementary Fig. S2 shows an example of visualization of longitudinal course using the i2b2 Workbench (Murphy et al. 2007). In all, 840 of 5198 patients (16%) met criteria for a period of remission and 574 patients (11%) for TRD; the remaining 3784 (73%) were intermediate/partially responsive or required multiple treatment trials. Concordance with the clinician gold standard was 0.764.
Table 1 provides group demographic, visit frequency and medication prescription comparisons of each of the study cohorts. Notably, the TRD group had significantly greater proportions of non-Caucasian patients (p<0.001) and patients covered by public insurance plans such as Medicare and Medicaid (p<0.001). As expected, TRD patients had a significantly greater frequency of depressed visits, number of antidepressant prescription refills and different types of antidepressants prescribed (p<0.001). There was no difference between age-adjusted comorbidity index scores between the TRD and responsive groups (p=0.245) but both groups had significantly higher scores than patients with partial response (p<0.001).
Our results demonstrate the feasibility as well as the challenges of assessing clinical outcomes in EMR using NLP of clinicians’ narrative notes. Using a simple set of empirically defined terms readily extracted from free text, 23% of narrative notes could be accurately classified as depressed, 22% as euthymic and the remainder as intermediate or subthreshold. We emphasize that a large number of patients and notes remain in this third group by design: criteria were selected a priori to maximize specificity for the two outcome categories (TRD and single-treatment responder), anticipating their use in future biomarker studies. Selection of more liberal thresholds would of course greatly increase the proportion of subjects classified to the extreme groups, and might be desirable for other types of investigations, such as effectiveness studies seeking to characterize TRD risk.
The intermediate group also reflects the limitations both of the diagnostic system and clinical documentation. That is, many patients will experience only partial improvement and this may not be well captured in the narrative text. Of note, even for those individuals classified as euthymic based on the narrative note, mean QIDS-SR is in the mildly depressed range. One contributor to this discordance might be the specific guidance given to the raters to not score anxiety or other symptoms, while patients might score anxiety symptoms as (for example) agitation or poor concentration – a challenge any time a self-report and clinician-rated assessment are compared. Given the relative paucity and lack of systematic administration of QIDS-SR, these exploratory analyses should be interpreted with caution. This finding underscores the prevalence of residual mood symptoms in clinical practice, as well as the potential utility of using self-report measures in this context (Nierenberg et al. 2010).
The superiority of using clinician- or even patient-reported measures to determine symptom severity should be apparent, which might lead one to question the utility of NLP-based approaches. Indeed, these results should highlight the limitations of the narrative text as well as the potential utility of standardized assessments (and their inclusion in EMR systems). On the other hand, progress toward this goal has been remarkably slow even in academic mental health systems and, once implemented, it will be many years until large datasets with these measures accumulate. During this transition, the value of using existing large datasets, with millions of patients and years of data collection, should also be clear.
Our report is one of the first to examine large-scale use of NLP approaches for classification in psychiatry, although this application was suggested two decades ago (for a review, see Garfield et al. 1992). One previous study described a pilot effort to classify suicide notes according to intention (Pestian et al. 2008). Outside of psychiatry, modern NLP techniques have demonstrated success in such areas as detecting disease requiring notification of public health officials (Effler et al. 1999; Klompas et al. 2008; Lazarus et al. 2009) and identifying unexpected adverse events (Bates et al. 2003; Penz et al. 2007), as well as determining co-morbid medical conditions (Meystre & Haug, 2006a, b; Solti et al. 2008) and medications (Turchin et al. 2006; Levin et al. 2007). With growing interest in the use of large clinical databases for conducting effectiveness research, the development of the toolset necessary to define outcomes in psychiatry may be critical.
Our findings strongly suggest that billing data alone, such as ICD-9 codes, are unlikely to be adequate for establishing outcomes. This likely reflects clinicians’ lack of concern for accuracy in such codes, as they do not impact reimbursement and are often used primarily to reflect the diagnosis of the patient rather than current clinical status. Indeed, prior reports suggest that such codes may not reliably distinguish individuals by diagnosis, as was illustrated in a cohort of mood disorder patients undergoing electroconvulsive therapy (Jakobsen et al. 2008).
We note several caveats in interpreting our work. First, the portability of these classification models remains to be determined. Different healthcare systems may have different standards or formats for narrative notes, which would be expected to influence the performance of our classifiers. However, we emphasize that MGH and BWH, the two major hospitals within the Partners Health Care system, include two distinct departments of psychiatry with different medical record systems and approaches to documentation, which should improve portability to other systems. The vast majority of clinical notes derive not from the in-patient units, but from affiliated out-patient clinics in the region, most of which are not primarily academic in orientation.
Second, as we have noted, these classifiers should not be construed as a substitute for systematic and quantitative assessment. Manual review of notes identified a remarkable disparity in quality and nature of documentation and consequent ambiguity in description of clinical states. For example, a common notation was ‘depression is stable’, which might refer to a patient who continues to be depressed (as in unchanged), or one whose illness is successfully managed (as in remaining in remission). Likewise, it was not uncommon to encounter documentation of details of recent stressors or events, in the absence of mood symptoms. As more health care systems move to EMR, there is a unique opportunity to better quantify outcomes. For example, the 16-item patient-rated QIDS-SR has been shown to be highly correlated with clinician-rated measures and sensitive to treatment effects (Rush et al. 2003b); another well-validated alternative is the PHQ-9 (Kroenke et al. 2001). Their incorporation in EMR systems would greatly improve their capacity to support future outcome studies. At minimum, EMR systems that utilize templates could require clinicians to record a clinical status [for example, using the 7-point Clinical Global Impression scale (Guy, 1976), or even recording remission status].
Third, in defining longitudinal outcomes, multiple assumptions are required about treatment status. As the Partners HealthCare system is not a ‘closed’ one, there is documentation of a prescription being given but not of it being filled or re-filled. Therefore, there is some risk for misclassification in both directions. Individuals labeled ‘responsive’ may have remitted in spite of not adhering to treatment, as might be expected given the sizeable rates of placebo response in MDD (Fournier et al. 2010). Conversely, individuals labeled as having TRD may actually be non-adherent, or partially adherent, or receive inadequate medication dosage or duration, a phenomenon sometimes referred to as ‘pseudoresistance’. This limitation underscores the value of integrating clinical data with pharmacy billing data whenever possible. A related challenge is determining tolerability; some individuals classified as resistant may actually be intolerant to multiple medications and thus unable to achieve therapeutic doses necessary for symptomatic improvement. Whether tolerability can itself be accurately determined with NLP approaches merits further investigation. Incorporating tolerability data is further complicated by its partial correlation with efficacy: individuals may be more likely to tolerate medications that they perceive as being helpful to them, and vice versa. In addition to adherence and tolerability, psychiatric and medical co-morbidity are also important moderators of treatment response to which NLP approaches may be applicable.
It should be emphasized that TRD was selected for this study precisely because it is a difficult problem for NLP. Many outcomes within psychiatry should be substantially easier to define, particularly those, such as hospitalization, that are likely to be available from billing data. Given the chronicity of many psychiatric disorders, however, the ability to parse less ‘hard’ outcomes such as remission among out-patients will clearly be important in facilitating future studies.
Classification based upon narrative notes provides an opportunity to take advantage of existing EMR systems for highly efficient clinical investigation. In the Partners HealthCare system, there are ~4 years of psychiatry out-patient notes, which, even in the absence of detailed rating scales, yield some perspective on clinical outcomes on a very large scale. With appropriate protection of patients’ privacy, this resource could be applied to efficiently identify risk factors for treatment resistance. It can facilitate investigations of effectiveness, for example, by comparing outcomes across different clinics or payor types within a health care system to highlight potential disparities. (We note the importance of considering confounding in these sorts of population-based investigations, and also the well-established methodologies for addressing these concerns.) Finally, it might allow for efficient recruitment of specific clinical populations; for example, investigations of novel interventions specifically for patients with TRD, or pharmacogenomic investigations of TRD. By comparison, in the largest TRD study to date, >4000 patients were enrolled in order to yield fewer than 100 patients per arm in the most treatment-resistant phase (Trivedi et al. 2006). If personalized medicine is to become a reality in psychiatry, multiple large datasets will be required to build and validate models for treatment outcome. Our results suggest that applying NLP tools to existing EMR data may help accelerate this process.
The project described was supported by Award # U54LM008748 from the National Library of Medicine (to ISK) and R01MH086026 and R01MH085542 from the National Institute of Mental Health (to R.H.P. and J.W.S., respectively). The content is solely the responsibility of the authors and does not necessarily represent the official view of the National Library of Medicine or the National Institutes of Health.
Note Supplementary material accompanies this paper on the Journal’s website (http://journals.cambridge.org/psm).
Declaration of Interest Roy Perlis has received consulting fees from Proteus Biomedical, Concordant Rater Systems, Genomind, and RID Ventures.
Dan Iosifescu has received grant support from Aspect Medical Systems, Forest Laboratories, Janssen Pharmaceuticals, NARSAD, and NIH. He has received speaker honoraria from Eli Lilly & Co., Pfizer, Inc., Forest Laboratories, and Reed Medical Education.
Maurizio Fava – Lifetime Disclosures
Research Support: Abbott Laboratories; Alkermes, Inc.; Aspect Medical Systems; AstraZeneca; Bio-Research; BrainCells Inc.; Bristol-Myers Squibb; Cephalon, Inc.; Clinical Trials Solutions, LLC; Covidien; Eli Lilly and Company; EnVivo Pharmaceuticals, Inc.; Forest Pharmaceuticals, Inc.; Ganeden Biotech, Inc.; GlaxoSmithKline; Johnson & Johnson Pharmaceutical Research & Development; Lichtwer Pharma GmbH; Lorex Pharmaceuticals; Novartis AG; Organon Pharmaceuticals; PamLab, LLC.; Pfizer Inc.; Pharmavite® LLC; Roche; RCT Logic, LLC; Sanofi-Aventis US LLC; Shire; Solvay Pharmaceuticals, Inc.; Synthelabo; Wyeth-Ayerst Laboratories.
Advisory/Consulting: Abbott Laboratories; Affectis Pharmaceuticals AG; Amarin Pharma Inc.; Aspect Medical Systems; AstraZeneca; Auspex Pharmaceuticals; Bayer AG; Best Practice Project Management, Inc.; BioMarin Pharmaceuticals, Inc.; Biovail Corporation; BrainCells Inc; Bristol-Myers Squibb; CeNeRx BioPharma; Cephalon, Inc.; Clinical Trials Solutions, LLC; CNS Response, Inc.; Compellis Pharmaceuticals; Cypress Pharmaceutical, Inc.; Dov Pharmaceuticals, Inc.; Eisai Inc.; Eli Lilly and Company; EPIX Pharmaceuticals, Inc.; Euthymics Bioscience, Inc.; Fabre-Kramer Pharmaceuticals, Inc.; Forest Pharmaceuticals, Inc.; GenOmind, LLC; GlaxoSmithKline; Gruenthal GmbH; Janssen Pharmaceutica; Jazz Pharmaceuticals, Inc.; Johnson & Johnson Pharmaceutical Research & Development, LLC.; Knoll Pharmaceuticals Corp.; Labopharm Inc.; Lorex Pharmaceuticals; Lundbeck Inc.; MedAvante, Inc.; Merck & Co., Inc.; Methylation Sciences; Neuronetics, Inc.; Novartis AG; Nutrition 21; Organon Pharmaceuticals; PamLab, LLC.; Pfizer Inc.; PharmaStar; Pharmavite® LLC.; Precision Human Biolaboratory; Prexa Pharmaceuticals, Inc.; Psycho-Genics; Psylin Neurosciences, Inc.; Ridge Diagnostics, Inc.; Roche; RCT Logic, LLC; Sanofi-Aventis US LLC.; Sepracor Inc.; Schering-Plough Corporation; Solvay Pharmaceuticals, Inc.; Somaxon Pharmaceuticals, Inc.; Somerset Pharmaceuticals, Inc.; Synthelabo; Takeda Pharmaceutical Company Limited; Tetragenex Pharmaceuticals, Inc.; TransForm Pharmaceuticals, Inc.; Transcept Pharmaceuticals, Inc.; Vanda Pharmaceuticals, Inc.; Wyeth-Ayerst Laboratories.
Speaking/Publishing: Adamed, Co.; Advanced Meeting Partners; American Psychiatric Association; American Society of Clinical Psychopharmacology; AstraZeneca; Belvoir Media Group; Boehringer Ingelheim GmbH; Bristol-Myers Squibb; Cephalon, Inc.; Eli Lilly and Company; Forest Pharmaceuticals, Inc.; GlaxoSmithKline; Imedex, LLC; MGH Psychiatry Academy/Primedia; MGH Psychiatry Academy/Reed Elsevier; Novartis AG; Organon Pharmaceuticals; Pfizer Inc.; PharmaStar; United BioSource, Corp.; Wyeth-Ayerst Laboratories.
Equity Holdings: Compellis.
Royalty/patent, other income: Patent for SPCD and patent application for a combination of azapirones and bupropion in MDD, copyright royalties for the MGH CPFQ, SFI, ATRQ, DESS, and SAFER.