1.  The Health of the American Slave Examined by Means of Union Army Medical Statistics 
The health status of the American slave in the 19th century remains unclear despite extensive historical research. Better knowledge of slave health would provide a clearer picture of the life of the slave, a better understanding of 19th-century medicine, and possibly even clues to the health problems of modern blacks. This article aims to contribute to the literature by examining another source of data. Slaves entering the Union Army joined an organization with standardized medical care that generated extensive statistical information. A review of these statistics answers questions about the health of young black males at the time American slavery ended.
PMCID: PMC2561819  PMID: 3881595
2.  Risk Exposures in Early Life and Mortality at Older Ages: Evidence from Union Army Veterans 
Population and development review  2009;35(2):275-295.
This study examines the relation between risk exposures in early life and hazard of mortality among 11,978 Union Army veterans aged 50 and over in 1900. Veterans' risk exposures prior to enlistment, as approximated by birth season, country of origin, residential region, city size, and height at enlistment, significantly influence their chance of survival after 1900. These effects are robust whether or not socioeconomic well-being circa 1900 is taken into account; however, they are sensitive to the particular age periods selected for survival analysis. Whereas some effects, such as being born in Ireland or coming from a big city, unfolded fully in the first decade after 1900 and then dissipated over time, the effects of birth season, being born in Germany, residential region in the U.S., and height at enlistment were more salient in the post-1910 periods. Height at enlistment shows a positive association with risk of mortality in the post-1910 periods. Compared with corresponding findings from more recent cohorts, the exceptional rigidity of the effects of pre-enlistment risk exposures on old-age mortality among the veterans highlights the harshness of living conditions early in their lives.
doi:10.1111/j.1728-4457.2009.00276.x
PMCID: PMC2832117  PMID: 20209063
3.  Health, Information, and Migration: Geographic Mobility of Union Army Veterans, 1860–1880 
The journal of economic history  2008;68(3):862-899.
This article explores how injuries, sickness, and the geographic mobility of Union Army veterans while in service affected their postservice migrations. Wartime wounds and illnesses significantly diminished the geographic mobility of veterans after the war. Geographic moves while carrying out military missions had strong positive effects on their postservice geographic mobility. Geographic moves while in service also influenced the choice of destination among the migrants. I discuss some implications of the results for the elements of self-selection in migration, the roles of different types of information in migration decisions, and the overall impact of the Civil War on geographic mobility.
doi:10.1017/S0022050708000661
PMCID: PMC2838394  PMID: 20234796
4.  Military positions and post-service occupational mobility of Union Army veterans, 1861–1880 
Explorations in economic history  2007;44(4):680-698.
Although the Civil War has attracted a great deal of scholarly attention, little is known about how different wartime experiences of soldiers influenced their civilian lives after the war. This paper examines how military rank and duty of Union Army soldiers while in service affected their post-service occupational mobility. Higher ranks and non-infantry duties appear to have provided more opportunities for developing skills, especially those required for white-collar jobs. Among the recruits who were unskilled workers at the time of enlistment, commissioned and non-commissioned officers were much more likely to move up to a white-collar job by 1880. Similarly, unskilled recruits assigned to white-collar military duties were more likely to enter a white-collar occupation by 1880. The higher occupational mobility of higher-ranking soldiers is likely to have resulted from disparate human capital accumulations offered by their military positions rather than from their superior abilities.
PMCID: PMC2838376  PMID: 20234792
Military service; Civil War; Rank; Duty; Human capital; Training; Occupational mobility; Union Army; Veteran
5.  Wealth Accumulation and the Health of Union Army Veterans, 1860–1870 
The journal of economic history  2005;65(2):352-385.
How did the wartime health of Union Army recruits affect their wealth accumulation through 1870? Wounds and exposure to combat had strong negative effects on subsequent savings, as did illnesses while in the service. The impact of poor health was particularly strong for unskilled workers. Health was a powerful determinant of nineteenth-century economic mobility. Infectious diseases’ influences on wealth accumulation suggest that the economic gains from the improvement of the disease environment should be enormous. The direct economic costs of the Civil War were probably much greater than previously thought, given the persistent adverse health effects of wartime experiences.
doi:10.1017/S0022050705000124
PMCID: PMC2840618  PMID: 20300440
6.  Occupational Career and Risk of Mortality among Union Army Veterans 
Social science & medicine (1982)  2009;69(3):460-468.
Previous studies have extended the traditional framework on occupational disparities in health by examining mortality differentials from a career perspective. Few studies, however, have examined the relation between career and mortality in a historical U.S. population. This study explores the relation between occupational career and risk of mortality in old age among 7,096 Union Army veterans who fought in the American Civil War in the 1860s. Occupational mobility was commonplace among the veterans in the postbellum period, with 54 percent of them changing occupations between enlistment and 1900. Among veterans who were farmers at enlistment, 46 percent had changed to a non-farming occupation by 1900. Results from Cox proportional hazards analysis suggest that, relative to the average mortality risk of the sample, being a farmer either at enlistment or circa 1900 is associated with a lower risk of mortality in old age, although the effect is more salient for veterans who were farmers at enlistment. Occupational immobility for manual laborers poses a serious threat to the chance of survival in old age. These findings still hold after adjusting for the effects of selected variables characterizing risk exposures during early life, wartime, and old age. The robustness of the survival advantage associated with being a farmer at enlistment highlights the importance of socioeconomic conditions early in life for the chance of survival at older ages.
doi:10.1016/j.socscimed.2009.05.027
PMCID: PMC2852134  PMID: 19552993
Career; Occupational Mobility; Mortality; Union Army Veterans; USA
7.  Testing the influenza–tuberculosis selective mortality hypothesis with Union Army data 
Social science & medicine (1982)  2009;68(9):1599-1608.
Using Cox regression, this paper shows a weak association between having tuberculosis and dying from influenza among Union Army veterans in late nineteenth-century America. It has been suggested elsewhere [Noymer, A. and M. Garenne (2000). The 1918 influenza epidemic’s effects on sex differentials in mortality in the United States. Population and Development Review 26(3), 565–581.] that the 1918 influenza pandemic accelerated the decline of tuberculosis, by killing many people with tuberculosis. The question remains whether individuals with tuberculosis were at greater risk of influenza death, or if the 1918/post-1918 phenomenon arose from the sheer number of deaths in the influenza pandemic. The present findings, from microdata, cautiously point toward an explanation of Noymer and Garenne’s selection effect in terms of age-overlap of the 1918 pandemic mortality and tuberculosis morbidity, a phenomenon I term “passive selection”. Another way to think of this is selection at the cohort, as opposed to individual, level.
doi:10.1016/j.socscimed.2009.02.021
PMCID: PMC2677170  PMID: 19304361
USA; Influenza; Tuberculosis; Selection; Mortality; Historical demography; Historical epidemiology; Union Army veterans
8.  Pensions and Retirement Among Black Union Army Veterans 
The journal of economic history  2010;70(3):567-592.
I examine the effects of an unearned income transfer on the retirement rates and living arrangements of black Union Army veterans. I find that blacks were more than twice as responsive as whites to income transfers in their retirement decisions and 6 to 8 times as responsive in their choice of independent living arrangements. My findings have implications for understanding racial differences in rates of retirement and independent living at the beginning of the twentieth century, the rise in retirement prior to 1930, and the subsequent convergence in black-white retirement rates and living arrangements.
doi:10.1017/S0022050710000549
PMCID: PMC3004158  PMID: 21179379
9.  Health, Wartime Stress, and Unit Cohesion: Evidence From Union Army Veterans 
Demography  2010;47(1):45-66.
We find that Union Army veterans of the American Civil War who faced greater wartime stress (as measured by higher battlefield mortality rates) experienced higher mortality rates at older ages, but that men who were from more cohesive companies were statistically significantly less likely to be affected by wartime stress. Our results hold for overall mortality, mortality from ischemic heart disease and stroke, and new diagnoses of arteriosclerosis. Our findings represent one of the first long-run health follow-ups of the interaction between stress and social networks in a human population in which both stress and social networks are arguably exogenous.
PMCID: PMC3000013  PMID: 20355683
10.  Socioeconomic Differences in the Health of Black Union Army Soldiers 
Social science history  2009;33(4):427-457.
This paper investigates patterns of socioeconomic difference in the wartime morbidity and mortality of black Union Army soldiers. Among the factors that contributed to a lower probability of contracting and dying from diseases were (1) lighter skin color, (2) a non-field occupation, (3) residence on a large plantation, and (4) residence in a rural area prior to enlistment. Patterns of disease-specific mortality and timing of death suggest that the differences in the development of immunity against diseases and in nutritional status prior to enlistment were responsible for the observed socioeconomic differences in wartime health. For example, the advantages of light-skinned soldiers over dark-skinned and of enlisted men formerly engaged in non-field occupations over field hands resulted from differences in nutritional status. The lower wartime mortality of ex-slaves from large plantations can be explained by their better-developed immunity as well as superior nutritional status. The results of this paper suggest that there were substantial disparities in the health of the slave population on the eve of the Civil War.
doi:10.1215/01455532-2009-007
PMCID: PMC3427919  PMID: 22933827
11.  An Occupational Health Nursing Computer Application in Medical Care: An Army Approach 
Occupational health nursing has become an increasingly important specialty in the field of nursing during this century. In the broadest sense, occupational health is concerned with all factors that influence the health of people at work. Nurses, as well as other health care professionals, are attempting to apply evolving computer technology to direct client care in the workplace. One such use of the computer has been targeted disease surveillance in an occupational health setting. This paper addresses the process used by community health nurses to assess, plan, implement, and evaluate a computerized disease surveillance program in an occupational health setting. The program was a joint effort between the United States Army Medical Department Activity, Fort Irwin, California, and the Epidemiology Consultant Service of the Division of Preventive Medicine, Walter Reed Army Institute of Research (WRAIR), Washington, DC.
PMCID: PMC2578424
12.  Rationale and design of a multicenter randomized controlled trial on a 'minimal intervention' in Dutch army personnel with nonspecific low back pain [ISRCTN19334317] 
Background
Researchers from the Royal Netherlands Army are studying the potential of isolated lumbar extensor training in low back pain in their working population. Currently, a randomized controlled trial is carried out in five military health centers in The Netherlands and Germany, in which a 10-week program of not more than 2 training sessions (10–15 minutes) per week is studied in soldiers with nonspecific low back pain for more than 4 weeks. The purpose of the study is to investigate the efficacy of this 'minimal intervention program', compared to usual care. Moreover, attempts are made to identify subgroups of different responders to the intervention.
Methods
Besides a baseline measurement, follow-up data are gathered at two short-term intervals (5 and 10 weeks after randomization) and two long-term intervals (6 months and one year after the end of the intervention), respectively. At every test moment, participants fill out a compound questionnaire on a stand-alone PC, and they undergo an isometric back strength measurement on a lower back machine.
Primary outcome measures in this study are self-assessed degree of complaints and degree of handicap in daily activities due to back pain. In addition, our secondary measurements focus on fear of movement/(re-)injury, mental and social health perception, individual back extension strength, and the patient's satisfaction with the treatment received. Finally, we assess a number of potential prognostic factors: demographic and job characteristics, overall health, degree of physical activity, and the attitudes and beliefs of the physiotherapist towards chronic low back pain.
Discussion
Although a substantial number of trials have included lumbar extension training in low back pain patients, hardly any study has emphasized a minimal intervention approach comparable to ours. For reasons of time efficiency and patient preference, this minimal sports-medicine approach to low back pain management is interesting for the population under study, and possibly for comparable working populations with physically demanding job activities.
doi:10.1186/1471-2474-5-40
PMCID: PMC533884  PMID: 15535881
13.  Number of Patients Studied Prior to Approval of New Medicines: A Database Analysis 
PLoS Medicine  2013;10(3):e1001407.
In an evaluation of medicines approved by the European Medicines Agency 2000 to 2010, Ruben Duijnhoven and colleagues find that the number of patients evaluated for medicines approved for chronic use are inadequate for evaluation of safety or long-term efficacy.
Background
At the time of approval of a new medicine, there are few long-term data on the medicine's benefit–risk balance. Clinical trials are designed to demonstrate efficacy, but have major limitations with regard to safety in terms of patient exposure and length of follow-up. This study of the number of patients who had been administered medicines at the time of medicine approval by the European Medicines Agency aimed to determine the total number of patients studied, as well as the number of patients studied long term for chronic medication use, compared with the International Conference on Harmonisation's E1 guideline recommendations.
Methods and Findings
All medicines containing new molecular entities approved between 2000 and 2010 were included in the study, including orphan medicines as a separate category. The total number of patients studied before approval was extracted (main outcome). In addition, the number of patients with long-term use (6 or 12 mo) was determined for chronic medication. 200 unique new medicines were identified: 161 standard and 39 orphan medicines. The median total number of patients studied before approval was 1,708 (interquartile range [IQR] 968–3,195) for standard medicines and 438 (IQR 132–915) for orphan medicines. On average, chronic medication was studied in a larger number of patients (median 2,338, IQR 1,462–4,135) than medication for intermediate (878, IQR 513–1,559) or short-term use (1,315, IQR 609–2,420). Safety and efficacy of chronic use was studied in fewer than 1,000 patients for at least 6 and 12 mo in 46.4% and 58.3% of new medicines, respectively. Among the 84 medicines intended for chronic use, 68 (82.1%) met the guideline recommendations for 6-mo use (at least 300 participants studied for 6 mo and at least 1,000 participants studied for any length of time), whereas 67 (79.8%) of the medicines met the criteria for 12-mo patient exposure (at least 100 participants studied for 12 mo).
Conclusions
For medicines intended for chronic use, the number of patients studied before marketing is insufficient to evaluate safety and long-term efficacy. Both safety and efficacy require continued study after approval. New epidemiologic tools and legislative actions necessitate a review of the requirements for the number of patients studied prior to approval, particularly for chronic use, and adequate use of post-marketing studies.
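The summary statistics and guideline checks described above are straightforward to reproduce. A minimal sketch in Python, using small made-up exposure counts (the study's actual values came from EMA public assessment reports) and the ICH E1 thresholds as stated in this article:

```python
from statistics import median, quantiles

# Hypothetical pre-approval patient counts for five medicines
# (illustrative only; not values from the study's dataset).
patients_studied = [968, 1250, 1708, 2600, 3195]

med = median(patients_studied)
q1, _, q3 = quantiles(patients_studied, n=4, method="inclusive")
print(f"median = {med}, IQR = {q1:g}-{q3:g}")

def meets_ich_e1(n_total, n_6mo, n_12mo):
    """ICH E1 thresholds for chronic-use medicines, as described in the text:
    >= 1,000 patients overall, >= 300 for six months, >= 100 for twelve."""
    return n_total >= 1000 and n_6mo >= 300 and n_12mo >= 100

print(meets_ich_e1(2338, 350, 120))  # a chronic-use medicine meeting both criteria
print(meets_ich_e1(438, 90, 40))     # smaller programs fall short
```

The `meets_ich_e1` helper is a hypothetical name for illustration; the article's own compliance tally applied these same thresholds per medicine.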
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Before any new medicine is marketed for the treatment of a human disease, it has to go through extensive laboratory and clinical research. In the laboratory, scientists investigate the causes of diseases, identify potential new treatments, and test these interventions in disease models, some of which involve animals. The safety and efficacy of potential new interventions is then investigated in a series of clinical trials—studies in which the new treatment is tested in selected groups of patients under strictly controlled conditions, first to determine whether the drug is tolerated by humans and then to assess its efficacy. Finally, the results of these trials are reviewed by the government body responsible for drug approval; in the US, this body is the Food and Drug Administration, and in the European Union, the European Medicines Agency (EMA) is responsible for the scientific evaluation and approval of new medicines.
Why Was This Study Done?
Clinical trials are primarily designed to test the efficacy—the ability to produce the desired therapeutic effect—of new medicines. The number of patients needed to establish efficacy determines the size of a clinical trial, and the indications for which efficacy must be shown determine the trial's duration. However, identifying adverse effects of drugs generally requires the drug to be taken by more patients than are required to show efficacy, so the information about adverse effects is often relatively limited at the end of clinical testing. Consequently, when new medicines are approved, their benefit–risk ratios are often poorly defined, even though physicians need this information to decide which treatment to recommend to their patients. For the evaluation of risk or adverse effects of medicines being developed for chronic (long-term) treatment of non-life-threatening diseases, current guidelines recommend that at least 1,000–1,500 patients are exposed to the new drug and that 300 and 100 patients use the drug for six and twelve months, respectively, before approval. But are these guidelines being followed? In this database analysis, the researchers use data collected by the EMA to determine how many patients are exposed to new medicines before approval in the European Union and how many are exposed for extended periods of time to medicines intended for chronic use.
What Did the Researchers Do and Find?
Using the European Commission's Community Register of Medicinal Products, the researchers identified 161 standard medicines and 39 orphan medicines (medicines to treat or prevent rare life-threatening diseases) that contained new active substances and that were approved in the European Union between 2000 and 2010. They extracted information on the total number of patients studied and on the number exposed to the medicines for six months and twelve months before approval of each medicine from EMA's European public assessment reports. The average number of patients studied before approval was 1,708 for standard medicines and 438 for orphan medicines (marketing approval is easier to obtain for orphan medicines than for standard medicines to encourage drug companies to develop medicines that might otherwise be unprofitable). On average, medicines for chronic use (for example, asthma medications) were studied in more patients (2,338) than those for intermediate use such as anticancer drugs (878), or short-term use such as antibiotics (1,315). The safety and efficacy of chronic use was studied in fewer than 1,000 patients for at least six and twelve months in 46.4% and 58.4% of new medicines, respectively. Finally, among the 84 medicines intended for chronic use, 72 were studied in at least 300 patients for six months, and 70 were studied in at least 100 patients for twelve months.
What Do These Findings Mean?
These findings suggest that although the number of patients studied before approval is sufficient to determine the short-term efficacy of new medicines, it is insufficient to determine safety or long-term efficacy. Any move by drug approval bodies to require pharmaceutical companies to increase the total number of patients exposed to a drug, or the number exposed for extended periods of time to drugs intended for chronic use, would inevitably delay the entry of new products into the market, which likely would be unacceptable to patients and healthcare providers. Nevertheless, the researchers suggest that a reevaluation of the study size and long-term data requirements that need to be met for the approval of new medicines, particularly those designed for long-term use, is merited. They also stress the need for continued study of both the safety and efficacy of new medicines after approval and the importance of post-marketing studies that actively examine safety issues.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001407.
The European Medicines Agency (EMA) provides information about all aspects of the scientific evaluation and approval of new medicines in the European Union; its European public assessment reports are publicly available
The European Commission's Community Register of Medicinal Products is a publicly searchable database of medicinal products approved for human use in the European Union
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health professionals
The US National Institutes of Health provides information (including personal stories) about clinical trials
doi:10.1371/journal.pmed.1001407
PMCID: PMC3601954  PMID: 23526887
14.  The Reliability and Validity of the Self-Reported Drinking Measures in the Army’s Health Risk Appraisal Survey 
Background
The reliability and validity of self-reported drinking behaviors from the Army Health Risk Appraisal (HRA) survey are unknown.
Methods
We compared demographics and health experiences of those who completed the HRA with those who did not (1991–1998). We also evaluated the reliability and validity of eight HRA alcohol-related items, including the CAGE, weekly drinking quantity, and drinking-and-driving measures. We used Cohen’s κ and Pearson’s r to assess reliability and convergent validity. To assess criterion (predictive) validity, we used proportional hazards and logistic regression models predicting alcohol-related hospitalizations and alcohol-related separations from the Army, respectively.
Results
A total of 404,966 soldiers completed an HRA. No particular demographic group appears to be over- or underrepresented. Although few respondents skipped alcohol items, those who did tended to be older and of minority race. The alcohol items demonstrate a reasonable degree of reliability, with Cronbach’s α = 0.69 and test-retest reliability associations in the 0.75–0.80 range for most items over 2- to 30-day intervals between surveys. The alcohol measures showed good criterion-related validity: those consuming more than 21 drinks per week were at 6 times the risk of subsequent alcohol-related hospitalization compared with those who abstained from drinking (hazard ratio, 6.36; 95% confidence interval, 5.79–6.99). Those who said their friends worried about their drinking were almost 5 times more likely to be discharged due to alcoholism (risk ratio, 4.9; 95% confidence interval, 4.00–6.04) and 6 times more likely to experience an alcohol-related hospitalization (hazard ratio, 6.24; 95% confidence interval, 5.74–6.77).
Conclusions
The Army’s HRA alcohol items seem to elicit reliable and valid responses. Because HRAs contain identifiers, alcohol use can be linked with subsequent health and occupational outcomes, making the HRA a useful epidemiological research tool. Associations between perceived peer opinions of drinking and subsequent problems deserve further exploration.
doi:10.1097/01.ALC.0000067978.27660.73
PMCID: PMC2141695  PMID: 12766628
Alcohol; Military; Reliability; Validity; Survey
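The confidence intervals attached to hazard ratios like those above follow the usual large-sample construction on the log scale. A minimal sketch, assuming a simple two-group comparison with illustrative event counts rather than this study's full regression models:

```python
import math

def hr_confidence_interval(hr, d1, d0, z=1.96):
    """Approximate 95% CI for a hazard ratio from two-group event counts,
    using the standard approximation SE(log HR) ~= sqrt(1/d1 + 1/d0).
    All inputs here are illustrative, not values from the study."""
    se = math.sqrt(1.0 / d1 + 1.0 / d0)
    return hr * math.exp(-z * se), hr * math.exp(z * se)

# e.g. an estimated HR of 2.0 with 100 events in each group
lo, hi = hr_confidence_interval(2.0, 100, 100)
print(f"95% CI: {lo:.2f}-{hi:.2f}")
```

More events shrink the standard error of log HR, which is why intervals from a cohort of 404,966 soldiers can be as tight as 5.79–6.99 around 6.36.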
15.  Suicide and the United States Army: Perspectives from the Former Psychiatry Consultant to the Army Surgeon General 
Editor’s note:
The suicide rate of active-duty soldiers doubled between 2003 and 2010. In response, the Department of Defense and the United States Army improved their data collection methods to better understand the causes of military suicides. As retired colonel Dr. Elspeth Cameron Ritchie writes, unit history and the accumulation of stressors—from relationship problems to chronic pain—are significant suicide risk factors among soldiers. But, she argues, Army officials must use this knowledge to design more-effective strategies for suicide reduction, including limiting access to weapons, especially post-deployment, and better connecting soldiers with their communities.
PMCID: PMC3574805  PMID: 23447787
16.  The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) 
Psychiatry  2014;77(2):107-119.
Importance/Objective
Although the suicide rate in the U.S. Army has traditionally been below age-gender matched civilian rates, it has climbed steadily since the beginning of the Iraq and Afghanistan conflicts and since 2008 has exceeded the demographically matched civilian rate. The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) is a multicomponent epidemiological and neurobiological study designed to generate actionable evidence-based recommendations to reduce Army suicides and increase knowledge about risk and resilience factors for suicidality and its psychopathological correlates. This paper presents an overview of the Army STARRS component study designs and of recent findings.
Design/Setting/Participants/Intervention
Army STARRS includes six main component studies: (1) the Historical Administrative Data Study (HADS) of Army and Department of Defense (DoD) administrative data systems (including records of suicidal behaviors) for all soldiers on active duty 2004–2009, aimed at finding administrative record predictors of suicides; (2) retrospective case-control studies of fatal and nonfatal suicidal behaviors (each planned to have n = 150 cases and n = 300 controls); (3) a study of new soldiers (n = 50,765 completed surveys) assessed just before beginning basic combat training (BCT) with self-administered questionnaires (SAQ), neurocognitive tests, and blood samples; (4) a cross-sectional study of approximately 35,000 soldiers (completed SAQs) representative of all other (i.e., exclusive of BCT) active duty soldiers; (5) a pre-post deployment study (with blood samples) of soldiers in brigade combat teams about to deploy to Afghanistan (n = 9,421 completed baseline surveys), with sub-samples assessed again one, three, and nine months after returning from deployment; and (6) a pilot study to follow up SAQ respondents transitioning to civilian life. Army/DoD administrative data are being linked prospectively to the large-scale survey samples to examine predictors of subsequent suicidality and related mental health outcomes.
Main outcome measures
Measures (self-report and administratively recorded) of suicidal behaviors and their psychopathological correlates.
Results
Component study cooperation rates are comparatively high. Sample biases are relatively small. Inefficiencies introduced into parameter estimates by using nonresponse adjustment weights and time-space clustering are small. Initial findings show that the suicide death rate, which rose over 2004–2009, increased for those deployed, those never deployed, and those previously deployed. Analyses of administrative records show that those deployed or previously deployed were at greater suicide risk. Receiving a waiver to enter the Army was not associated with increased risk. However, being demoted in the past two years was associated with increased risk. Time in current deployment, length of time since return from most recent deployment, total number of deployments, and time interval between most recent deployments (known as dwell time) were not associated with suicide risk. Initial analyses of survey data show that 13.9% of currently active non-deployed regular Army soldiers considered suicide at some point in their lifetime, while 5.3% had made a suicide plan, and 2.4% had attempted suicide. Importantly, 47–60% of these outcomes first occurred prior to enlistment. Prior mental disorders, in particular major depression and intermittent explosive disorder, were the strongest predictors of these self-reported suicidal behaviors. Most onsets of plans-attempts among ideators (58.3–63.3%) occurred within the year of onset of ideation. About 25.1% of non-deployed U.S. Army personnel met 30-day criteria for a DSM-IV anxiety, mood, disruptive behavior, or substance disorder (15.0% an internalizing disorder; 18.4% an externalizing disorder) and 11.1% for multiple disorders. Importantly, three-fourths of these disorders had pre-enlistment onsets.
Conclusions
Integration across component studies creates strengths going well beyond those in conventional applications of the same individual study designs. These design features create a strong methodological foundation from which Army STARRS can pursue its substantive research goals. The early findings reported here illustrate the importance of the study and its approach as a model of studying rare events particularly of national security concern. Continuing analyses of the data will inform suicide prevention for the U.S. Army.
doi:10.1521/psyc.2014.77.2.107
PMCID: PMC4075436  PMID: 24865195
17.  Response bias, weighting adjustments, and design effects in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) 
The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) is a multi-component epidemiological and neurobiological study designed to generate actionable recommendations to reduce U.S. Army suicides and increase knowledge about determinants of suicidality. Three Army STARRS component studies are large-scale surveys: one of new soldiers prior to beginning Basic Combat Training (BCT; n = 50,765 completed self-administered questionnaires); another of other soldiers exclusive of those in BCT (n = 35,372); and a third of three Brigade Combat Teams about to deploy to Afghanistan who are being followed multiple times after returning from deployment (n = 9,421). Although the response rates in these surveys are quite good (72.0–90.8%), questions can be raised about sample biases in estimating the prevalence of mental disorders and suicidality, the main outcomes of the surveys, based on evidence that people in the general population with mental disorders are under-represented in community surveys. This paper presents the results of analyses designed to determine whether such bias exists in the Army STARRS surveys and, if so, to develop weights to correct for it. Data are also presented on sample inefficiencies introduced by weighting and sample clustering, and on analyses of the trade-off between bias and efficiency in weight trimming.
doi:10.1002/mpr.1399
PMCID: PMC3992816  PMID: 24318218
Suicide; mental disorders; U.S. Army; epidemiologic research design; design effects; sample bias; sample weights; survey design efficiency; survey sampling
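The bias-efficiency trade-off in weight trimming mentioned in the abstract above can be sketched numerically. The snippet below uses the standard Kish approximation for the design effect of unequal weighting (deff = 1 + CV² of the weights) and a simple cap-and-rescale trimming rule; the weights are purely illustrative, not Army STARRS data:

```python
# Sketch of the bias-efficiency trade-off in weight trimming
# (illustrative weights only; not Army STARRS data).

def design_effect(weights):
    """Kish approximation: deff = 1 + CV^2 of the weights."""
    n = len(weights)
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / n
    return 1.0 + var / mean ** 2

def trim_weights(weights, cap):
    """Cap extreme weights, then rescale so the weight total is preserved.

    Trimming reduces variance (smaller deff, larger effective sample size)
    at the cost of some bias, since capped units no longer fully represent
    their sampling probabilities.
    """
    trimmed = [min(w, cap) for w in weights]
    scale = sum(weights) / sum(trimmed)
    return [w * scale for w in trimmed]

weights = [0.5] * 80 + [1.0] * 15 + [8.0] * 5  # a few extreme weights
print(round(design_effect(weights), 2))                    # before trimming
print(round(design_effect(trim_weights(weights, 3.0)), 2)) # after trimming
```

Effective sample size is n / deff, so the drop in deff after trimming translates directly into more statistical efficiency, which is the gain that must be balanced against the bias the trimming introduces.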
18.  Design of the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) 
The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) is a multi-component epidemiological and neurobiological study designed to generate actionable evidence-based recommendations to reduce U.S. Army suicides and increase basic knowledge about the determinants of suicidality. This report presents an overview of the designs of the six component Army STARRS studies. These include: an integrated study of historical administrative data systems (HADS) designed to provide data on significant administrative predictors of suicides among the more than 1.6 million soldiers on active duty in 2004–2009; retrospective case-control studies of suicide attempts and fatalities; separate large-scale cross-sectional studies of new soldiers (i.e., those just beginning Basic Combat Training [BCT], who completed self-administered questionnaires [SAQ] and neurocognitive tests and provided blood samples) and soldiers exclusive of those in BCT (who completed SAQs); a pre-post deployment study of soldiers in three Brigade Combat Teams about to deploy to Afghanistan (who completed SAQs and provided blood samples) followed multiple times after returning from deployment; and a platform for following up Army STARRS participants who have returned to civilian life. DoD/Army administrative data records are linked with SAQ data to examine prospective associations between self-reports and subsequent suicidality. The presentation closes with a discussion of the methodological advantages of cross-component coordination.
doi:10.1002/mpr.1401
PMCID: PMC3992857  PMID: 24318217
Suicide; mental disorders; U.S. Army; epidemiologic research design; design effects; sample bias; sample weights; survey design efficiency; survey sampling
19.  Field procedures in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) 
The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) is a multi-component epidemiological and neurobiological study of unprecedented size and complexity designed to generate actionable evidence-based recommendations to reduce U.S. Army suicides and increase basic knowledge about determinants of suicidality by carrying out coordinated component studies. A number of major logistical challenges were faced in implementing these studies. The current report presents an overview of the approaches taken to meet these challenges, with a special focus on the field procedures used to implement the component studies. As detailed in the paper, these challenges were addressed at the onset of the initiative by establishing an Executive Committee, a Data Coordination Center (the Survey Research Center [SRC] at the University of Michigan), and study-specific design and analysis teams that worked with staff on instrumentation and field procedures. SRC staff, in turn, worked with the Office of the Deputy Under Secretary of the Army (ODUSA) and local Army Points of Contact (POCs) to address logistical issues and facilitate data collection. These structures, coupled with careful fieldworker training, supervision, and piloting contributed to the major Army STARRS data collection efforts having higher response rates than previous large-scale studies of comparable military samples.
doi:10.1002/mpr.1400
PMCID: PMC3992884  PMID: 24038395
Suicide; mental disorders; U.S. Army; epidemiologic research design; design effects; sample bias; sample weights; survey design efficiency; survey sampling
20.  Clinical reappraisal of the Composite International Diagnostic Interview Screening Scales (CIDI-SC) in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) 
A clinical reappraisal study was carried out in conjunction with the Army STARRS All-Army Study (AAS) to evaluate concordance of DSM-IV diagnoses based on the Composite International Diagnostic Interview screening scales (CIDI-SC) and PTSD Checklist (PCL) with diagnoses based on independent clinical reappraisal interviews (Structured Clinical Interview for DSM-IV [SCID]). Diagnoses included: lifetime mania/hypomania, panic disorder, and intermittent explosive disorder; 6-month adult attention-deficit/hyperactivity disorder; and 30-day major depressive episode, generalized anxiety disorder, PTSD, and substance (alcohol or drug) use disorder (abuse or dependence). The sample (n=460) was weighted to adjust for over-sampling of CIDI-SC/PCL screened positives. Diagnostic thresholds were set to equalize false positives and false negatives. Good individual-level concordance was found between CIDI-SC/PCL and SCID diagnoses at these thresholds (AUC = .69–.79). AUC was considerably higher for continuous than for dichotomous screening scale scores (AUC = .80–.90), arguing for substantive analyses using not only dichotomous case designations but also continuous measures of predicted probabilities of clinical diagnoses.
doi:10.1002/mpr.1398
PMCID: PMC4027964  PMID: 24318219
Composite International Diagnostic Interview (CIDI); CIDI Screening Scales (CIDI-SC); diagnostic concordance; PTSD Checklist (PCL); screening scales; validity
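The concordance logic described in the abstract above can be illustrated with a small sketch: AUC for a continuous screening score against a clinical "gold standard" diagnosis, plus a cut-point chosen to equalize false positives and false negatives. The data below are toy values, not CIDI-SC/SCID results:

```python
# Sketch of screening-scale concordance: Mann-Whitney AUC plus a
# threshold balancing false positives and false negatives.
# (Toy data; not CIDI-SC/SCID results.)

def auc(scores, labels):
    """Probability that a random case scores higher than a random
    non-case (Mann-Whitney formulation; ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def balanced_threshold(scores, labels):
    """Cut-point minimizing |false positives - false negatives|,
    mirroring the threshold-setting rule described in the abstract."""
    best_t, best_gap = None, float("inf")
    for t in sorted(set(scores)):
        fp = sum(s >= t and y == 0 for s, y in zip(scores, labels))
        fn = sum(s < t and y == 1 for s, y in zip(scores, labels))
        if abs(fp - fn) < best_gap:
            best_t, best_gap = t, abs(fp - fn)
    return best_t

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]  # screening scores
labels = [1, 1, 0, 1, 0, 1, 0, 0]                  # clinical diagnoses
print(auc(scores, labels))
print(balanced_threshold(scores, labels))
```

The AUC computed on the continuous score uses every pairwise ordering of cases and non-cases, which is why it typically exceeds the AUC of any single dichotomized cut-point, the pattern the abstract reports (.80–.90 continuous vs. .69–.79 dichotomous).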
24.  THE ARMY MEDICAL SERVICE AND THE ARMY COUNCIL 
British Medical Journal  1905;2(2340):1211-1213.
PMCID: PMC2322683  PMID: 20762362
