Results 1-25 (1111500)

1.  Causes of Death of Prisoners of War during the Korean War (1950-1953) 
Yonsei Medical Journal  2013;54(2):480-488.
Purpose
This study aimed to analyze the causes of death of prisoners of war (POWs) who fought for the Communist side (North Korea and the People's Republic of China) during the Korean War (1950-1953). In 1998, the United States Department of Defense released new information about these prisoners, including records of 7,614 POW deaths during the war. These data on the causes of death of POWs during the Korean War provide valuable information on both the public health and the history of the conflict.
Materials and Methods
To analyze the causes of death of the POWs, we first classified the clinical diagnoses and findings for the 7,614 deaths into the 22 chapters of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10). Second, we traced changes over time, from August 1950 to September 1953, in monthly POW death totals as well as in deaths caused by common infectious diseases and by external causes including injury (see the sketch following this record).
Results
The most common category of causes of death among POWs was infectious disease, accounting for 5,013 (65.8%) of the 7,614 deaths, followed by external causes including injury, with 817 (10.7%). Overall, tuberculosis and dysentery/diarrhea were the most common causes of death. Deaths caused by acute infection, chronic infection, and external causes showed different patterns of increase and decline over the course of the Korean War.
Conclusion
The information and data on POW deaths during the Korean War reflect the critical impact of the POWs' living conditions and the effect of the public health measures implemented in POW camps during the war.
doi:10.3349/ymj.2013.54.2.480
PMCID: PMC3575971  PMID: 23364985
Prisoners of war (POW); Korean War; causes of deaths; infectious diseases
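A minimal sketch of the grouping described in the Methods of this record: mapping recorded causes of death to ICD-10 chapters and tallying monthly totals. The column names, the toy three-row dataset, and the drastically simplified chapter mapping are all illustrative assumptions; the actual Department of Defense data fields are not described in the abstract.

```python
import pandas as pd

# Hypothetical records; the real DoD release fields are not specified in the abstract.
deaths = pd.DataFrame({
    "death_date": ["1950-12-03", "1951-02-17", "1951-02-20"],
    "icd10_code": ["A09", "A15.0", "X95"],  # recorded cause of death (illustrative codes)
})

def icd10_chapter(code: str) -> str:
    """Drastically simplified chapter lookup; a real mapping covers all 22 chapters."""
    letter = code[0]
    if letter in "AB":
        return "I. Certain infectious and parasitic diseases"
    if letter in "VWXY":
        return "XX. External causes of morbidity and mortality"
    return "Other chapters"

deaths["chapter"] = deaths["icd10_code"].map(icd10_chapter)
deaths["month"] = pd.to_datetime(deaths["death_date"]).dt.to_period("M")

# Monthly death totals overall and by chapter, as in the time-trend analysis.
print(deaths.groupby("month").size())
print(deaths.groupby(["month", "chapter"]).size())
```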
2.  Exposure to the Holocaust and World War II Concentration Camps during Late Adolescence and Adulthood is not Associated with Increased Risk for Dementia at Old Age 
Holocaust and Nazi concentration camp survivors were subjected to prolonged and multi-dimensional trauma and stress. The aim of the present study was to assess the association between exposure to such trauma during late adolescence and adulthood and dementia at old age. In 1963, approximately 10,000 male civil servants aged 40–71 participated in the Israel Ischemic Heart Disease (IIHD) study. Of them, 691 reported having survived Nazi concentration camps [Concentration Camp Survivors (CCS)]. An additional 2316 participants were Holocaust survivors but not concentration camp survivors (HSNCC), and 1688 were born in European countries but not exposed to the Holocaust (NH). Dementia was assessed in 1999–2000, over three decades later, in 1889 survivors of the original IIHD cohort, 139 of whom were CCS, 435 HSNCC, and 236 NH. Dementia prevalence was 11.5% in CCS, 12.6% in HSNCC, and 15.7% in NH. The odds ratio for dementia in CCS compared with HSNCC, estimated by age-adjusted logistic regression, was 0.97 (95% CI: 0.53–1.77; approximate Z = −0.10; p = 0.92); an illustrative calculation is sketched after this record. Further adjustment for socioeconomic status, diabetes mellitus, other co-morbidity at midlife (coronary heart disease, lung, and kidney disease), and height did not change the results substantially. Thus, in subjects who survived until old age, late-adolescence and adulthood exposure to extreme stress, as reflected by experiencing the Holocaust and Nazi concentration camps, was not associated with an increased prevalence of dementia. Individuals who survived concentration camps and then lived into old age may carry survival advantages that are associated with protection from dementia and mortality.
doi:10.3233/JAD-2010-101327
PMCID: PMC3157888  PMID: 21157030
Adolescence; adulthood; dementia; Holocaust; Nazi concentration camps
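A minimal sketch of the age-adjusted odds-ratio estimation reported in the record above, using synthetic data rather than the IIHD cohort; the group sizes echo the abstract, but the variable names, simulated ages, and outcome model are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic follow-up sample: 139 CCS and 435 HSNCC participants, as in the abstract.
df = pd.DataFrame({
    "ccs": np.r_[np.ones(139), np.zeros(435)],   # 1 = concentration camp survivor
    "age": rng.normal(76, 6, 574).round(),
})
# Simulated dementia outcome with no true CCS effect (odds ratio near 1).
p = 1 / (1 + np.exp(-(-2.0 + 0.02 * (df["age"] - 76))))
df["dementia"] = rng.binomial(1, p)

# Age-adjusted logistic regression; the CCS odds ratio is exp(coefficient).
fit = smf.logit("dementia ~ ccs + age", data=df).fit(disp=False)
or_ccs = np.exp(fit.params["ccs"])
ci_low, ci_high = np.exp(fit.conf_int().loc["ccs"])
print(f"OR (CCS vs HSNCC) = {or_ccs:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```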
3.  The impact of change in a doctor's job position: a five-year cohort study of job satisfaction among Norwegian doctors 
Background
Job satisfaction among physicians may be of importance to their individual careers and their work with patients. We lack prospective studies on whether a change in a doctor's job position influences their job satisfaction over a five-year period if we control for other workload factors.
Methods
A longitudinal national cohort of all physicians who graduated in Norway in 1993 and 1994 was surveyed by postal questionnaire in 2003 (T1) and 2008 (T2). Outcomes were measured with a 10-item job satisfaction scale. Predictor variables in a multiple regression model were: change in job position, reduction in work-home interface stress, reduction in work hours, age, and gender.
Results
A total of 59% of subjects (306/522) responded at both time points. The mean job satisfaction score in the total sample increased from 51.6 (SD = 9.0) at T1 to 53.4 (SD = 8.2) at T2 (paired t test, t = 3.8, p < 0.001; see the sketch following this record). The major groups or positions at T1 were senior house officers (SHOs, 45%), chief specialists in hospitals (23%), and general practitioners (17%), with the latter showing the highest levels of job satisfaction. Physicians who changed position during the period (n = 176) experienced an increase in job satisfaction from 49.5 (SD = 8.4) in 2003 to 52.9 (SD = 7.5) in 2008 (paired t test, t = 5.2, p < 0.001). Job satisfaction remained unchanged for physicians who stayed in the same position. There was also an increase in satisfaction among those who changed from positions other than SHO at T1 (p < 0.01). The significant adjusted predictor variables in the multiple regression model were a change in position from SHO at T1 to any other position (β = 2.83, p < 0.001), any change in job position from any position other than SHO at T1 (β = 4.18, p < 0.01), and a reduction in work-home interface stress (β = 1.04, p < 0.001).
Conclusions
The physicians experienced an increase in job satisfaction over a five-year period, which was predicted by a change in job position and a reduction in work-home stress. This study has implications with respect to career advice for young doctors.
doi:10.1186/1472-6963-12-41
PMCID: PMC3342917  PMID: 22340521
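A minimal sketch of the paired t test used for the change in job satisfaction in the record above. The scores are simulated with roughly the reported means and are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated T1/T2 job-satisfaction scores for the same 306 physicians (illustrative only).
t1 = rng.normal(51.6, 9.0, 306)
t2 = t1 + rng.normal(1.8, 5.0, 306)   # modest average increase over the five years

t_stat, p_value = stats.ttest_rel(t2, t1)   # paired t test on the two time points
print(f"mean T1 = {t1.mean():.1f}, mean T2 = {t2.mean():.1f}")
print(f"paired t = {t_stat:.1f}, p = {p_value:.3g}")
```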
4.  Low prevalence of byssinotic symptoms in 12 flax scutching mills in Normandy, France. 
The concentrations of airborne dust and bacteria were determined in 12 flax scutching mills and in two milk processing plants in Normandy, France. A total of 308 of 340 flax workers and 111 of 113 milk processors volunteered to answer a respiratory questionnaire. Personal exposure to airborne dust in the scutching mills varied from 22.2 mg/m³ to 144 mg/m³ and area concentrations from 8.92 mg/m³ to 47.1 mg/m³. The concentration of Gram negative bacteria ranged from 3970 colony forming units (cfu)/m³ to 67,900 cfu/m³ and that of total bacteria from 12,900 cfu/m³ to more than 600,000 cfu/m³. In all, 20% of the flax scutchers were found, on the basis of the questionnaire, to suffer from persistent cough and 25% from chronic phlegm production. The corresponding figures among milk processors were 3.6% and 4.5%. Unexpectedly, only 12.5% of the scutchers appeared to suffer from byssinotic symptoms even though they were heavily exposed to airborne dust and bacteria. The low prevalence of byssinosis might be due to self-selection of the workforce or a relatively low concentration of the causative agent despite high airborne contamination.
PMCID: PMC1008003  PMID: 3378012
5.  Origins of the Spanish Influenza pandemic (1918–1920) and its relation to the First World War 
The virus responsible for the first, benign wave of the Spanish Influenza in the spring of 1918, which became extremely virulent by the end of the summer of 1918, was inextricably associated with the soldiers who fought in the First World War. The millions of young men who occupied the military camps and trenches were the substrate on which the influenza virus developed and spread. Many factors contributed to this, including the mixing on French soil of soldiers and workers from the five continents, the very poor quality of life of the soldiers, overcrowding, stress, fear, war gases used for the first time in history in a massive and indiscriminate manner, life exposed to the elements, cold weather, humidity, and contact with birds, pigs and other animals, both wild and domestic. Today, this combination of circumstances is not present, and so it seems unlikely that new pandemics, such as those associated with avian or swine influenza, will emerge with the virulence that characterized the Spanish Influenza during the autumn of 1918.
PMCID: PMC2805838  PMID: 20076789
Pandemic influenza; Spanish influenza; World War I; influenza A; H1N1
6.  The association of state per capita income and military service deaths in the Vietnam and Iraq wars 
Background
In the United States, social burdens including war casualties are often distributed unequally across groups of individuals, communities, and states. The purpose of this report was to examine the association between war deaths and per capita income in the 50 states and District of Columbia during the Vietnam and Iraq wars.
Methods
The numbers of deaths by home state of record for each conflict were obtained from Department of Defense records available on the Internet, as were key variables including age at death, gender, race, branch of service, rank, circumstances of death, and the ratio of wounded to dead. In addition, we obtained state per capita income and state population for the relevant periods.
Results
Characteristics of decedents in the two conflicts were very similar, with young, white enlisted men accounting for the majority of deaths. However, in the Iraq war, women accounted for 2.4% of casualties. Also of note was the higher ratio of wounded to dead in Iraq. At the state level, the correlation between deaths per 100,000 population and per capita income was -0.51 (p < 0.0001) for Vietnam and -0.52 (p < 0.0001) for Iraq (see the sketch following this record). In both eras, states with lower per capita income tended to have higher ratios of deaths per population.
Conclusion
Among military service members serving in the Vietnam and Iraq conflicts, many more women died in the latter war. Whether war deaths resulted in lower per capita income cannot be determined from these cross-sectional data; we simply note a strong association between per capita income and war casualty rates for both wars.
doi:10.1186/1478-7954-7-1
PMCID: PMC2621124  PMID: 19126218
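A minimal sketch of the state-level correlation described in the record above: deaths per 100,000 population against per capita income across 51 jurisdictions. The populations, incomes, and death counts are simulated to mimic the reported negative association; they are not the actual DoD or Census figures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative data for 50 states plus the District of Columbia.
population = rng.uniform(0.5e6, 35e6, 51)
per_capita_income = rng.normal(30_000, 5_000, 51)
# Simulate somewhat higher death counts where income is lower.
expected = population / 1e5 * (2.5 - (per_capita_income - 30_000) / 20_000)
deaths = rng.poisson(expected)

deaths_per_100k = deaths / population * 1e5
r, p = stats.pearsonr(deaths_per_100k, per_capita_income)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")   # the study reports about -0.5 for both wars
```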
7.  Activity of daily living and its associated factors in war survivors with no visual acuity 
BACKGROUND:
War is a known cause of severe physical injury to different body organs, and the eyes are no exception. War-related no visual acuity (NVA) affects both the victim and the family. Activity of daily living (ADL) reflects personal independence and is considered a morbidity index. This study was designed to investigate the ADL profile of war survivors with NVA.
METHODS:
This cross-sectional study was conducted in 2007 in Iran. In this study, 500 Iranian people with war-related NVA were invited to take part in a camp in Mashhad. ADL was evaluated using the Barthel Index, and demographic data were collected using a data sheet. Stepwise linear regression was used to determine the factors associated with ADL.
RESULTS:
The overall response rate to the invitation was 50%. Of the 250 participants, 96.5% were male, with a mean age of 43 ± 8 years. Only 8.3% had no dependency in ADL; the remaining 91.7% had some degree of dependency in at least one daily living activity. ADL scores were higher in more highly educated participants, those younger than 50 years, those with fewer comorbid physical problems (e.g., hearing loss), and those who exercised regularly. In the regression analysis, age and duration of war-related NVA were significant predictors of ADL.
CONCLUSIONS:
According to the results, both age and the time elapsed since war-related NVA increase the dependency of people with war-related NVA.
PMCID: PMC3082812  PMID: 21526082
War; No Visual Acuity; Activity of Daily Living
8.  Changes in Immune Parameters Seen in Gulf War Veterans but Not in Civilians with Chronic Fatigue Syndrome 
The purpose of this study was to evaluate immune function through the assessment of lymphocyte subpopulations (total T cells, major histocompatibility complex [MHC] I- and II-restricted T cells, B cells, NK cells, MHC II-restricted T-cell-derived naive and memory cells, and several MHC I-restricted T-cell activation markers) and the measurement of cytokine gene expression (interleukin 2 [IL-2], IL-4, IL-6, IL-10, IL-12, gamma interferon [IFN-γ], and tumor necrosis factor alpha [TNF-α]) from peripheral blood lymphocytes. Subjects included two groups of patients meeting published case definitions for chronic fatigue syndrome (CFS)—a group of veterans who developed their illness following their return home from participating in the Gulf War and a group of nonveterans who developed the illness sporadically. Case control comparison groups consisted of healthy Gulf War veterans and nonveterans, respectively. We found no significant difference for any of the immune variables in the nonveteran population. In contrast, veterans with CFS had significantly more total T cells and MHC II+ T cells and a significantly higher percentage of these lymphocyte subpopulations, as well as a significantly lower percentage of NK cells, than the respective controls. In addition, veterans with CFS had significantly higher levels of IL-2, IL-10, IFN-γ, and TNF-α than the controls. These data do not support the hypothesis of immune dysfunction in the genesis of CFS for sporadic cases of CFS but do suggest that service in the Persian Gulf is associated with an altered immune status in veterans who returned with severe fatiguing illness.
PMCID: PMC95652  PMID: 9874656
9.  Impact of civil war on emotion recognition: the denial of sadness in Sierra Leone 
Studies of children with atypical emotional experience demonstrate that childhood exposure to high levels of hostility and threat biases emotion perception. This study investigates emotion processing in former child soldiers and non-combatant civilians, all of whom experienced prolonged violence exposure during childhood. The study, carried out in Sierra Leone, aimed to examine the effects of exposure to, and forced participation in, acts of extreme violence on the emotion processing of young adult war survivors. A total of 76 young male adults (38 former child soldier survivors and 38 civilian survivors) were tested to assess their ability to identify four different facial emotion expressions from photographs and movies. Both groups were able to recognize facial expressions of emotion. However, despite their general ability to correctly identify facial emotions, participants showed a significant response bias in their recognition of sadness. Both former soldiers and civilians made more errors in identifying expressions of sadness than of the other three emotions, and when mislabeling sadness they most often described it as anger. Conversely, when making erroneous identifications of other emotions, participants were most likely to label the expressed emotion as sadness. In addition, while for three of the four emotions participants were better able to make a correct identification the greater the intensity of the expression, this pattern was not observed for sadness. During the movie presentations, recognition of sadness was significantly worse for the former soldiers. While both former child soldiers and civilians were able to identify facial emotions, a significant response bias in their attribution of negative emotions was observed, and this bias was particularly pronounced in former child soldiers. These findings point to a pervasive, long-lasting effect of childhood exposure to violence on emotion processing in later life.
doi:10.3389/fpsyg.2013.00523
PMCID: PMC3760028  PMID: 24027541
childhood; violence exposure; emotion recognition; young adults; war survivors; denial of sadness; Sierra Leone
10.  Protective Factors and Risk Modification of Violence in Iraq and Afghanistan War Veterans 
The Journal of Clinical Psychiatry  2012;73(6):e767-e773.
Objective
After returning home, a subset of Iraq and Afghanistan War Veterans report engaging in aggression toward others. This study is the first to identify variables empirically related to decreased risk of community violence among Veterans.
Method
The authors conducted a national survey from July 2009 to April 2010 in which participants were randomly drawn from over one million U.S. military service members who served after September 11, 2001. Data were collected from a total of 1388 Iraq and Afghanistan War era and theater veterans. The final sample included veterans from all 50 states and all military branches.
Results
One-third of survey respondents reported committing an act of aggression toward others during the past year, mostly involving minor aggressive behavior. Younger age, criminal arrest record, combat exposure, probable posttraumatic stress disorder, and alcohol misuse were positively related to violence toward others. Multivariate analyses showed that a stable living situation and the perception of having control over one's life were associated with reduced odds of severe violence. Greater resilience, perceived positive social support, and having money to cover basic needs were linked to reduced odds of other physical aggression.
Conclusion
The study identifies aggression as a problem for a subset of Iraq and Afghanistan War Veterans who endorsed few protective factors. Data revealed that protective factors added incremental value to statistical modeling of violence, even when controlling for robust risk factors. The data indicate that, in addition to clinical interventions directed at treating mental health and substance abuse problems, psychosocial rehabilitation approaches aimed at improving domains of basic functioning and psychological well-being may also be effective in modifying risk and reducing violence among veterans.
doi:10.4088/JCP.11m07593
PMCID: PMC3399731  PMID: 22795217
11.  Cutaneous Melioidosis in a Man Who Was Taken as a Prisoner of War by the Japanese during World War II 
Journal of Clinical Microbiology  2005;43(2):970-972.
Melioidosis, an infection caused by the gram-negative bacillus Burkholderia pseudomallei, is endemic to Southeast Asia and Northern Australia. Human infection is acquired through contact with contaminated water via percutaneous inoculation. Clinical manifestations range from skin and soft tissue infection to pneumonia with sepsis. We report a case of a man who was taken as a prisoner of war by the Japanese during World War II who presented with a nonhealing ulcer on his right hand 62 years after the initial exposure.
doi:10.1128/JCM.43.2.970-972.2005
PMCID: PMC548040  PMID: 15695721
12.  Ross E. Baker, DC: A Canadian chiropractic survivor 
This paper is an historical biography of a fortunate man. It begins with a glimpse of Ross E. Baker’s origins in south-western Ontario, watches him going to school and working in Hamilton before joining the Canadian Army and shipping off to Europe to fight in the Second World War. At War’s end, the article picks up Dr. Baker as he comes home, starts a family, becomes a chiropractor and sustains a viable practice. Now in the twilight of life, the good doctor is last seen content with his retirement, spending days at his cottage property, reviewing his memoirs and reflecting on the tumult, terror and eventual triumph of the D-Day landing at Normandy.
PMCID: PMC3924507  PMID: 24587499
Baker; history; chiropractor; Baker; histoire; chiropraticien
13.  Global Comparison of Warring Groups in 2002–2007: Fatalities from Targeting Civilians vs. Fighting Battles 
PLoS ONE  2011;6(9):e23976.
Background
Warring groups that compete to dominate a civilian population confront contending behavioral options: target civilians or battle the enemy. We aimed to describe degrees to which combatant groups concentrated lethal behavior into intentionally targeting civilians as opposed to engaging in battle with opponents in contemporary armed conflict.
Methodology/Principal Findings
We identified all 226 formally organized state and non-state groups (i.e., actors) that engaged in lethal armed conflict during 2002–2007: 43 state and 183 non-state. For each actor, we summed the civilians killed by the actor's intentional targeting together with the civilians and combatants killed in battles in which the actor was involved, giving the total fatalities associated with that actor as an indicator of the overall scale of armed conflict. We used a Civilian Targeting Index (CTI), defined as the proportion of total fatalities caused by intentional targeting of civilians, to measure the concentration of lethal behavior into civilian targeting (see the sketch following this record). We report actor-specific findings and four significant trends: (1) 61% of all 226 actors (95% CI 55% to 67%) refrained from targeting civilians. (2) Logistic regression showed that actors were more likely to have targeted civilians if conflict duration was three or more years rather than one year. (3) In the 88 actors that targeted civilians, multiple regressions showed an inverse correlation between CTI values and the total number of fatalities; conflict duration of three or more years was associated with lower CTI values than conflict duration of one year. (4) When conflict scale and duration were accounted for, state and non-state actors did not differ. We describe civilian targeting by actors in prolonged conflict and discuss comparable patterns found in nature and in interdisciplinary research.
Conclusions/Significance
Most warring groups in 2002–2007 did not target civilians. Warring groups that targeted civilians in small-scale, brief conflict concentrated more lethal behavior into targeting civilians, and less into battles, than groups in larger-scale, longer conflict.
doi:10.1371/journal.pone.0023976
PMCID: PMC3167835  PMID: 21915272
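A minimal sketch of the Civilian Targeting Index defined in the record above: the proportion of an actor's total associated fatalities that came from intentional targeting of civilians. The two example groups and their counts are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    civilians_targeted: int   # civilians killed by the actor's intentional targeting
    battle_fatalities: int    # civilians and combatants killed in battles involving the actor

    @property
    def total_fatalities(self) -> int:
        return self.civilians_targeted + self.battle_fatalities

    @property
    def cti(self) -> float:
        # Civilian Targeting Index: share of total fatalities from civilian targeting.
        return self.civilians_targeted / self.total_fatalities if self.total_fatalities else 0.0

# Invented example actors, not figures from the study.
for actor in [Actor("Group A", 120, 80), Actor("Group B", 0, 950)]:
    print(f"{actor.name}: total fatalities = {actor.total_fatalities}, CTI = {actor.cti:.2f}")
```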
14.  Cross-sectional Biomonitoring of Metals in Adult Populations in Post-war Eastern Croatia: Differences Between Areas of Moderate and Heavy Combat 
Croatian Medical Journal  2010;51(5):451-460.
Aim
To determine differences in metal and metalloid exposure between residents of areas in eastern Croatia exposed to heavy fighting during the war in Croatia and residents of areas exposed to moderate fighting.
Methods
Concentrations of aluminum (Al), arsenic (As), barium (Ba), cadmium (Cd), chromium (Cr), copper (Cu), iron (Fe), nickel (Ni), lead (Pb), uranium (U), vanadium (V), and zinc (Zn), reported to be associated with military operations, were determined in hair, serum, and urine samples using inductively coupled plasma mass spectrometry. A total of 127 and 46 participants from areas of heavy and moderate fighting, respectively, were included.
Results
Compared with participants from areas exposed to moderate fighting, participants from areas exposed to heavy fighting had significantly higher serum concentrations of Al (87.61 vs 42.75 μg/L, P = 0.007), As (5.05 ± 1.79 vs 4.16 ± 1.55 μg/L, P = 0.003), Ba (7.12 vs 6.01 μg/L, P = 0.044), and V (17.98 vs 16.84 μg/L, P = 0.008); significantly higher urine concentrations of As (43.90 vs 11.51 μg/L, P < 0.001) and Cd (0.67 vs 0.50 μg/L, P = 0.031); and significantly higher hair concentrations of Al (12.61 vs 7.33 μg/L, P < 0.001), As (0.32 vs 0.05 μg/L, P < 0.001), Cd (0.03 vs 0.02 μg/L, P = 0.002), Fe (22.58 vs 12.68 μg/L, P = 0.001), Pb (1.04 vs 0.69 μg/L, P = 0.006), and V (0.07 vs 0.03 μg/L, P < 0.001).
Conclusion
Differences between populations from eastern Croatian areas exposed to heavy fighting and those exposed to moderate fighting point to the need for extensive monitoring of metal and metalloid exposure, emphasizing the role of biomonitoring in ecologic and preventive activities.
doi:10.3325/cmj.2010.51.451
PMCID: PMC2969140  PMID: 20960595
15.  The risk to the United Kingdom population of zinc cadmium sulfide dispersion by the Ministry of Defence during the "cold war" 
Objectives: To estimate exposures to cadmium (Cd) received by the United Kingdom population as a result of the dispersion of zinc Cd sulfide (ZnCdS) by the Ministry of Defence between 1953 and 1964, as a simulator of biological warfare agents.
Methods: A retrospective risk assessment study was carried out on the United Kingdom population during the period 1953–64. This determined land and air dispersion of ZnCdS over most of the United Kingdom, inhalation exposure of the United Kingdom population, soil contamination, and risks to personnel operating equipment that dispersed ZnCdS.
Results: About 4600 kg ZnCdS were dispersed from aircraft and ships, at times when the prevailing winds would allow large areas of the country to be covered. Cadmium released from 44 long range trials for which data are available, and extrapolated to a total of 76 trials to allow for trials with incomplete information, is about 1.2% of the estimated total release of Cd into the atmosphere over the same period. "Worst case" estimates are 10 µg Cd inhaled over 8 years, equivalent to Cd inhaled in an urban environment in 12–100 days, or from smoking 100 cigarettes. A further 250 kg ZnCdS was dispersed from the land based sites, but significant soil contamination occurred only in limited areas, which were and have remained uninhabited. Of the four personnel involved in the dispersion procedures (who were probably exposed to much higher concentrations of Cd than people on the ground), none are suspected of having related illnesses.
Conclusion: Exposure to Cd from dissemination of ZnCdS during the "cold war" should not have resulted in adverse health effects in the United Kingdom population.
doi:10.1136/oem.59.1.13
PMCID: PMC1740210  PMID: 11836463
16.  Management of war-related vascular injuries: experience from the second gulf war 
Aim
To study the biomechanism, pattern of injury, management, and outcome of major vascular injuries treated at Mubarak Al-Kabeer Teaching Hospital, Kuwait during the Second Gulf War.
Methods
This is a descriptive retrospective study. War-related injured patients who had major vascular injuries and were treated at Mubarak Al-Kabeer Teaching Hospital from August 1990 to September 1991 were studied. Studied variables included age, gender, anatomical site of vascular injury, mechanism of injury, associated injuries, type of vascular repair, and clinical outcome.
Results
Thirty-six patients with a mean (SD) age of 29.8 (10.2) years were studied; 32 (89%) were male and 21 (58%) were civilians. The majority of injuries were caused by bullets (47.2%) and blast (47.2%). Eight patients (22%) presented with shock.
There were 31 arterial injuries; injuries of the common and superficial femoral arteries were the most frequent (10/31). Arterial repair consisted of an interposition saphenous vein graft in seven patients, thrombectomy with end-to-end or lateral repair in twelve, a vein patch in two, and arterial ligation in four. Six patients had arterial ligation as part of a primary amputation. Three of 21 patients (14.3%) had a secondary amputation after attempted arterial repair of an extremity. There were 17 venous injuries in total, 13 managed by lateral suture repair and 4 by ligation. The median (range) hospital stay was 8 (1–76) days. Five patients (14%) died.
Conclusions
Major vascular injuries occurred in 10% of hospitalized war-related injured patients. Our secondary amputation rate for extremities was 14%. The presence of a vascular surgeon within a military surgical team is highly recommended. Basic principles and techniques of vascular repair remain an essential part of the training of general surgeons because they may be needed in unexpected wars.
doi:10.1186/1749-7922-8-22
PMCID: PMC3700839  PMID: 23816260
Injury; Management; Trauma; Vascular; War
18.  In the face of war: examining sexual vulnerabilities of Acholi adolescent girls living in displacement camps in conflict-affected Northern Uganda 
Background
Adolescent girls are an overlooked group within conflict-affected populations, and their sexual health needs are often neglected. Girls are disproportionately at risk of HIV and other STIs in times of conflict; however, the lack of recognition of their unique sexual health needs has resulted in a dearth of distinctive HIV protection and prevention responses. Recognizing the paucity of literature on the distinct vulnerabilities of girls in times of conflict, this study sought to deepen the knowledge base on this issue by qualitatively exploring the sexual vulnerabilities of adolescent girls surviving abduction and displacement in Northern Uganda.
Methods
Over a ten-month period in 2004–2005, at the height of the Lord's Resistance Army insurgency in Northern Uganda, 116 in-depth interviews and 16 focus group discussions were held with adolescent girls and adult women living in three displacement camps in Gulu district, Northern Uganda. The data were transcribed, and key themes and common issues were identified. Once all data were coded, the ethnographic software programme ATLAS was used to compare and contrast themes and categories generated in the in-depth interviews and focus group discussions.
Results
Our results demonstrated the erosion of traditional Acholi mentoring and belief systems that had previously served to protect adolescent girls' sexuality. This disintegration, combined with the collapse of livelihoods; being left in camps unsupervised and idle during the day; commuting within camp perimeters at night, away from the family hut, to sleep in more central locations because of privacy and insecurity concerns; and inadequate access to appropriate sexual health information and services, all contributed to adolescent girls' heightened sexual vulnerability and subsequent enhanced risk of HIV/AIDS in times of conflict.
Conclusions
Conflict prevention planners, resettlement programme developers, and policy-makers need to recognize adolescent girls affected by armed conflict as having distinctive needs, which require distinctive responses. More adaptive and sustainable gender-sensitive reproductive health strategies and HIV prevention initiatives for displaced adolescent girls in conflict settings must be developed.
doi:10.1186/1472-698X-12-38
PMCID: PMC3536565  PMID: 23270488
Adolescent girls; Conflict; Sexual vulnerability; Displacement camps; Northern Uganda; Acholi; Qualitative; HIV/AIDS
19.  Caenorhabditis elegans: a model to monitor bacterial air quality 
BMC Research Notes  2011;4:503.
Background
Low environmental air quality is a significant cause of mortality and morbidity, and this question is now emerging as a main concern of governmental authorities. Airborne pollution results from the combination of chemicals, fine particles, and micro-organisms that are quantitatively or qualitatively dangerous for health or for the environment. Increasing regulations and limits for outdoor air quality have been decreed with regard to chemicals and particles, but not for micro-organisms. Indeed, pertinent and reliable tests to evaluate this biohazard are scarce. In this work, our purpose was to evaluate the Caenorhabditis elegans killing test, a model considered equivalent to the mouse acute toxicity test in the pharmaceutical industry, for monitoring air bacterial quality.
Findings
The present study investigates the bacterial population in dust clouds generated during crop ship loading in harbor installations (Rouen harbor, Normandy, France). With a biocollector, airborne bacteria were impacted onto the surface of an agar medium. After incubation, a replica of the colonies was made on fresh agar medium using a velvet. All the replicated colonies were pooled, creating the "Total Air Sample". Meanwhile, all the colonies on the original plate were isolated, and five representative bacterial strains were chosen among them. The virulence of these representatives was compared with that of the "Total Air Sample" using the Caenorhabditis elegans killing test. The survival kinetics of nematodes fed the "Total Air Sample" were consistent with the kinetics obtained using the five representative strains.
Conclusions
Bacterial air quality can now be monitored in a one-shot test using the Caenorhabditis elegans killing test.
doi:10.1186/1756-0500-4-503
PMCID: PMC3279514  PMID: 22099854
20.  A mating method accounting for inbreeding and multi-trait selection in dairy cattle populations 
Selection in dairy cattle populations usually takes into account both the breed profiles for many traits and their overall estimated breeding values (EBV). This can result in effective contributions of breeding animals that depart substantially from contributions optimised for preserving future genetic variability. In this work, we propose a mating method that considers not only inbreeding but also the detailed EBV of progeny, or the EBV of sires, in reference to acceptance thresholds. Penalties were defined for inbreeding and for inadequate EBV profiles. The relative reductions of penalties yielded by any mating design were expressed on a scale ranging from 0 to 1, where 0 represented the average performance of random matings and 1 represented the maximal reduction allowed by a specialized, single-penalty mating design. The core of the method was an adaptive simulated annealing algorithm (a simplified sketch follows this record), in which the maximized function was the average of both ratios, under the constraints that the two relative penalty reductions be equal and that the within-herd concentration criterion equal a predefined reasonable value. The method was tested on two French dairy cattle populations originating from the same AI organization. The optimised mating design allowed substantial reductions of penalty: 70% and 64% for the Holstein and Normandy populations, respectively. Thus, this mating method decreased inbreeding and met various demands from breeders.
doi:10.1186/1297-9686-41-7
PMCID: PMC3225904  PMID: 19284690
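A heavily simplified sketch of a simulated-annealing mating design in the spirit of the record above. It only minimizes a plain weighted sum of two penalties, an inbreeding (kinship) penalty and an EBV-threshold penalty, and omits the paper's relative-penalty scaling, equality constraint, and within-herd concentration constraint. All inputs below are invented.

```python
import math
import random

random.seed(0)
n_dams, n_sires = 20, 5

# Invented inputs: pairwise kinship between each dam and sire, and EBVs on one trait.
kinship = [[random.uniform(0.0, 0.25) for _ in range(n_sires)] for _ in range(n_dams)]
dam_ebv = [random.gauss(0, 1) for _ in range(n_dams)]
sire_ebv = [random.gauss(0, 1) for _ in range(n_sires)]
EBV_THRESHOLD = 0.0   # hypothetical acceptance threshold for expected progeny EBV

def penalty(assign):
    """Total penalty of a mating plan: progeny inbreeding plus EBV shortfalls."""
    inbreeding = sum(kinship[d][s] for d, s in enumerate(assign))
    shortfall = sum(max(0.0, EBV_THRESHOLD - 0.5 * (dam_ebv[d] + sire_ebv[s]))
                    for d, s in enumerate(assign))
    return inbreeding + shortfall

assign = [random.randrange(n_sires) for _ in range(n_dams)]   # random initial matings
current = penalty(assign)
temperature = 1.0
for _ in range(20_000):
    d = random.randrange(n_dams)              # propose a new sire for one dam
    old_sire = assign[d]
    assign[d] = random.randrange(n_sires)
    candidate = penalty(assign)
    if candidate < current or random.random() < math.exp(-(candidate - current) / temperature):
        current = candidate                   # accept improvements and some uphill moves
    else:
        assign[d] = old_sire                  # otherwise restore the previous mating
    temperature *= 0.9997                     # geometric cooling

print(f"final penalty = {current:.3f}")
```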
21.  Prevalence and association of perceived stress, substance use and behavioral addictions: a cross-sectional study among university students in France, 2009–2011 
BMC Public Health  2013;13:724.
Background
University students face multiple stressors such as academic overload, constant pressure to succeed, competition with peers, and concerns about the future. Stress should not be considered in isolation but in relation to potential risk behaviors leading to the onset of substance use and related problems, which are heightened during the university period. The aim of this study was to determine the prevalence of the main substance uses and behavioral addictions among students in higher education in France and to examine their relationship with perceived stress.
Methods
A self-administered questionnaire was completed by university student volunteers from Upper Normandy (France), either anonymously online or on paper. Data collected included socio-economic characteristics, the Perceived Stress Scale (PSS), substance use (tobacco, alcohol, and cannabis), and hazardous behaviors: alcohol abuse problems, smoking, consumption of cannabis, eating disorders, and cyber addiction.
Results
A total of 1876 students were included. The mean PSS score was 15.9 (standard deviation = 7.2). Highly stressed students (4th quartile) were compared with less stressed students (1st quartile). A positive relation was observed between increasing PSS score and female gender, regular smoking, alcohol abuse problems, risk of cyber addiction, and especially eating disorders (AOR = 5.45, 95% CI = 3.42-8.69). PSS score, however, was not significantly related to the curriculum, regular alcohol use, drunkenness, or binge drinking, even after additional adjustment for use of other substances. We found a significant negative association between stress and the practice of sport: students with the most physical activity were less likely to report perceived stress (4th quartile: AOR = 0.57, 95% CI = 0.39-0.80).
Conclusions
This cross-sectional study among university students in France revealed that perceived stress was associated not only with known risks such as alcohol misuse, but also with new risks such as eating disorders and cyber addiction. These results could help to develop preventive interventions focussing on these risk behaviors and subsequently improving stress coping capacity in this high-risk population.
doi:10.1186/1471-2458-13-724
PMCID: PMC3750571  PMID: 23919651
22.  Geographical movement of doctors from education to training and eventual career post: UK cohort studies 
Objective
To investigate the geographical mobility of UK-trained doctors.
Design
Cohort studies conducted by postal questionnaires.
Setting
UK.
Participants
A total of 31,353 UK-trained doctors in 11 cohorts defined by year of qualification, from 1974 to 2008.
Main outcome measures
Location of family home prior to medical school, location of medical school, region of first training post, and region of first career post. Analyses divided the UK into 17 standard geographical regions.
Results
The response rate was 81.2% (31,353/45,061; denominators, below, depended on how far the doctors’ careers had progressed). Of all respondents, 36% (11,381/31,353) attended a medical school in their home region and 48% (10,370/21,740) undertook specialty training in the same region as their medical school.
Of respondents who had reached the grade of consultant or principal in general practice in the UK, 34% (4169/12,119) settled in the same region as their home before entering medical school. Of those in the UK, 70% (7643/10,887) held their first career post in the same region as their home before medical school, their medical school, or their location of training. For 18% (1938/10,887), all four locations – family home, medical school, place of training, and place of first career post – were within the same region. A higher percentage of doctors from the more recent cohorts than from the older cohorts settled in the region of their family home.
Conclusion
Many doctors do not change geographical region in their successive career moves, and recent cohorts appear less inclined to do so.
doi:10.1177/0141076812472617
PMCID: PMC3595409  PMID: 23481431
23.  Short-term training of upper gastrointestinal endoscopy for resident doctors in Sotogahama Central Hospital in Aomori, Japan 
It is essential for young physicians in municipal hospitals to be familiar with the technique of upper gastrointestinal (GI) endoscopy. Endoscopy is an exciting subspecialty in primary care medicine. Endoscopic procedures are primarily performed by general physicians in Japan. However, a standardized strategy for teaching diagnostic GI endoscopy is still lacking, and there is not sufficient time for young physicians to effectively learn the upper GI endoscopy technique. To elucidate how young physicians can be trained in the skills of GI endoscopy in a short time period, we initiated a 12-week training course. Two young physicians performed upper GI endoscopies for outpatients and inpatients 2 or 3 days a week from April 2010 to March 2012. The total number of cases undergoing GI endoscopy during the training course in each year was 117 and 111, respectively. The young physicians were trained in this technique by the attending physician. The short-term training course included four phases. During these phases, the young physicians learned how to insert the endoscope through the nasal cavity or oral cavity into the esophageal inlet, how to pass the endoscope from the esophageal inlet into the duodenum, how to take pictures with the endoscope, and how to stain the gastric and duodenal mucosa and take mucosal biopsy samples. The young physicians experienced 20–30 cases in each phase. In week five, they performed endoscope insertion into the duodenum along the folds of the greater curvature of the stomach. They viewed the entire stomach and took pictures until week ten of the course. The pictures taken in week ten were of a better quality for examining the disease lesions than those taken in week six. In the last 2 weeks of the training course, the young physicians stained the gastric and duodenal mucosa and took mucosal biopsy samples. The short-term training course of 100–120 cases in 12 weeks was effective for teaching young physicians how to perform GI endoscopies independently.
doi:10.2147/AMEP.S43476
PMCID: PMC3746972  PMID: 23976870
endoscopy; gastroenterology; general medicine; medical education; young physicians
24.  Mortality in adults aged 26-54 years related to socioeconomic conditions in childhood and adulthood: post war birth cohort study 
BMJ : British Medical Journal  2002;325(7372):1076-1080.
Objective
To examine premature mortality in adults in relation to socioeconomic conditions in childhood and adulthood.
Design
Nationally representative birth cohort study with prospective information on socioeconomic conditions.
Setting
England, Scotland, and Wales.
Study members
2132 women and 2322 men born in March 1946 and followed until age 55 years.
Main outcome measures
Deaths between 26 and 54 years of age notified by the NHS central register.
Results
Study members whose father's occupation was manual when they were aged 4, or who lived in the worst housing, or who received the poorest care in childhood had double the death rate during adulthood of those living in the best socioeconomic conditions. All indicators of socioeconomic disadvantage at age 26 years, particularly lack of home ownership, were associated with a higher death rate. Manual origins and poor care in childhood remained associated with mortality even after adjusting for social class in adulthood or home ownership. The hazard ratio was 2.6 (95% confidence interval 1.5 to 4.4) for those living in manual households as children and as adults compared with those living in non-manual households at both life stages (see the sketch following this record). The hazard ratio for those from manual origins who did not own their own home at age 26 years was 4.9 (2.3 to 10.5) compared with those from non-manual origins who were home owners.
Conclusions
Socioeconomic conditions in childhood as well as early adulthood have strongly influenced the survival of British people born in the immediate post war era.
What is already known on this topic
Associations between socioeconomic conditions in childhood and mortality in adulthood suggest that risks to survival begin in early life
Studies have been generally retrospective, been unrepresentative, used only one marker of childhood conditions, controlled inadequately for adult conditions, or not included women
What this study adds
The death rate for women and men between 26 and 54 years living in poor socioeconomic conditions in childhood was double that of those living in the best conditions
Those for whom socioeconomic disadvantage continued into early adulthood were between three and five times more likely to die than those in the most advantageous conditions
PMCID: PMC131184  PMID: 12424168
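A minimal sketch of hazard-ratio estimation for the childhood/adulthood social-class contrast in the record above, assuming a Cox proportional hazards model (implied by the reported hazard ratios but not named in the abstract) and using simulated follow-up data, not the 1946 cohort.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 4454   # 2132 women + 2322 men, echoing the cohort size

# Simulated exposure: manual household in both childhood and adulthood (illustrative prevalence).
manual_both = rng.binomial(1, 0.3, n)
baseline_hazard = 0.002                                        # deaths per person-year, invented
hazard = baseline_hazard * np.exp(np.log(2.6) * manual_both)   # build in a true HR of about 2.6
time = rng.exponential(1 / hazard)                             # years of follow-up after age 26
death = (time <= 28).astype(int)                               # deaths between ages 26 and 54
time = np.minimum(time, 28)                                    # censor survivors at age 54

df = pd.DataFrame({"duration": time, "death": death, "manual_both": manual_both})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="death")
print(cph.hazard_ratios_)   # hazard ratio for manual_both, expected near 2.6
```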
25.  Vulnerability to high risk sexual behaviour (HRSB) following exposure to war trauma as seen in post-conflict communities in eastern Uganda: a qualitative study
Conflict and Health  2011;5:22.
Background
Much of the literature on the relationship between conflict-related trauma and high risk sexual behaviour (HRSB) focuses on refugees rather than on people displaced in large numbers within their own country by armed conflict. There is a paucity of research on the contexts underlying HRSB and HIV/AIDS in conflict and post-conflict communities in Uganda. Understanding the factors that underpin vulnerability to HRSB in post-conflict communities is vital in designing HIV/AIDS prevention interventions. We explored the socio-cultural factors, social interactions, socio-cultural practices, social norms, and social network structures that underlie war trauma and vulnerability to HRSB in a post-conflict population.
Methods
We conducted a cross-sectional qualitative study of 3 sub-counties in Katakwi district and 1 in Amuria district, Uganda, between March and May 2009. We collected data using 8 focus group discussions (FGDs), 32 key informant interviews, and 16 in-depth interviews. The interviews were tape-recorded and transcribed. We followed thematic analysis principles to manage, analyse, and interpret the data, constantly identifying and comparing themes and sub-themes in the dataset as we read the transcripts. We used illuminating verbatim quotations to illustrate major findings.
Results
The commonly identified forms of HRSB included transactional sex, sexual predation, multiple partners, early marriage, and forced marriage. Breakdown of the social structure due to the conflict had resulted in economic destruction and a perceived surge in the number of vulnerable people with a high propensity for HRSB. Dishonouring of sexual sanctity through transactional sex and practices such as incest mirrored the consequences of exposure to conflict. HRSB was associated with the concentration of people in camps where idleness and unemployment were the norm. Reports of girls and women who had been victims of rape and defilement by men with guns were common. Many people were known to have started to display persistent worries, hopelessness, and suicidal ideas, and to abuse alcohol.
Conclusions
The study demonstrated that conflicts disrupt the socio-cultural set-up of communities and destroy sources of people's livelihood. Post-conflict socio-economic reconstruction needs to encompass programmes that restructure people's morals and values through counselling. HIV/AIDS prevention programming in post-conflict communities should deal with the socio-cultural disruptions that emerged during the conflict. Some of these disruptions, if not dealt with, could become normalized even though they are predisposing factors for HRSB. Socio-economic vulnerability as a consequence of conflict appeared to be associated with HRSB through alterations in sexual morality. To pursue safer sexual health choices, people in post-conflict communities need life skills.
doi:10.1186/1752-1505-5-22
PMCID: PMC3213062  PMID: 22011647
