1.  Surgical treatment of Duchenne muscular dystrophy patients in Germany: the present situation 
Acta Myologica  2012;31(1):21-23.
In 1988, we familiarised ourselves at Poitiers with the concept of operative treatment of the lower limbs and the spine in Duchenne muscular dystrophy (DMD) patients which Yves Rideau and his collaborators (1, 2) had developed there in the early 1980s. Thereupon, we immediately established the techniques at our home universities, first at the Technische Universität Aachen and, from 1999 on, at the Universitätsklinikum Erlangen, Germany. Since then, we have applied the technique to more than 500 DMD patients in total by performing more than 800 operations on the lower limbs and/or spine. In support of findings reported by Professor Rideau in this issue (3), we observed that, where patients are still ambulatory at the time of operation, the operation delays the point at which patients become wheelchair-bound by about two years. Likewise, patients receiving this treatment were/are also able to perform the Gowers' manoeuvre for around two years longer (4-6).
PMCID: PMC3440800  PMID: 22655513
Duchenne muscular dystrophy; prophylactic surgery; prevention of scoliosis
2.  The Reversal of Fortunes: Trends in County Mortality and Cross-County Mortality Disparities in the United States  
PLoS Medicine  2008;5(4):e66.
Background
Counties are the smallest unit for which mortality data are routinely available, allowing consistent and comparable long-term analysis of trends in health disparities. Average life expectancy has steadily increased in the United States, but there is limited information on long-term mortality trends in US counties. This study aimed to investigate trends in county mortality and cross-county mortality disparities, including the contributions of specific diseases to county-level mortality trends.
Methods and Findings
We used mortality statistics (from the National Center for Health Statistics [NCHS]) and population data (from the US Census) to estimate sex-specific life expectancy for US counties for every year between 1961 and 1999. Data for analyses in subsequent years were not provided to us by the NCHS. We calculated different metrics of cross-county mortality disparity, and also grouped counties on the basis of whether their mortality changed favorably or unfavorably relative to the national average. We estimated the probability of death from specific diseases for counties with above- or below-average mortality performance. We simulated the effect of cross-county migration on each county's life expectancy using a time-based simulation model. Between 1961 and 1999, the standard deviation (SD) of life expectancy across US counties was at its lowest in 1983, at 1.9 and 1.4 y for men and women, respectively. Cross-county life expectancy SD increased to 2.3 and 1.7 y in 1999. Between 1961 and 1983 no counties had a statistically significant increase in mortality; the major cause of mortality decline for both sexes was reduction in cardiovascular mortality. From 1983 to 1999, life expectancy declined significantly in 11 counties for men (by 1.3 y) and in 180 counties for women (by 1.3 y); another 48 (men) and 783 (women) counties had nonsignificant life expectancy declines. Life expectancy decline in both sexes was caused by increased mortality from lung cancer, chronic obstructive pulmonary disease (COPD), diabetes, and a range of other noncommunicable diseases, which were no longer compensated for by the decline in cardiovascular mortality. Higher HIV/AIDS and homicide deaths also contributed substantially to life expectancy decline for men, but not for women. Alternative specifications of the effects of migration showed that the rise in cross-county life expectancy SD was unlikely to be caused by migration.
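As a rough illustration of the cross-county disparity metric reported above, the short Python sketch below computes the standard deviation of life expectancy across a handful of hypothetical counties. The county values and the unweighted SD are assumptions for illustration only; the study estimated life expectancies from NCHS death counts and Census populations, which this toy example does not reproduce.

    # Toy illustration (hypothetical data): the cross-county disparity metric
    # discussed above is the standard deviation (SD) of life expectancy
    # across counties, computed separately for men and women.
    from statistics import pstdev

    life_expectancy_years = {
        "men":   [71.2, 73.5, 69.8, 74.1, 72.0],   # hypothetical county values
        "women": [78.0, 80.1, 76.5, 79.4, 78.8],
    }

    for sex, values in life_expectancy_years.items():
        print(f"{sex}: cross-county SD = {pstdev(values):.2f} y")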
Conclusions
There was a steady increase in mortality inequality across the US counties between 1983 and 1999, resulting from stagnation or increase in mortality among the worst-off segment of the population. Female mortality increased in a large number of counties, primarily because of chronic diseases related to smoking, overweight and obesity, and high blood pressure.
Majid Ezzati and colleagues analyze US county-level mortality data for 1961 to 1999, and find a steady increase in mortality inequality across counties between 1983 and 1999.
Editors' Summary
Background.
It has long been recognized that the number of years that distinct groups of people in the United States would be expected to live based on their current mortality patterns (“life expectancy”) varies enormously. For example, white Americans tend to live longer than black Americans, the poor tend to have shorter life expectancies than the wealthy, and women tend to outlive men. Where people live might also be a factor that determines their life expectancy, because of differences in social conditions and health programs in different parts of the country.
Why Was the Study Done?
While life expectancies have generally been rising across the United States over time, there is little information, especially over the long term, on the differences in life expectancies across different counties. The researchers therefore set out to examine whether there were different life expectancies across different US counties over the last four decades. The researchers chose to look at counties—the smallest geographic units for which data on death rates are collected in the US—because it allowed them to make comparisons between small subgroups of people that share the same administrative structure.
What Did the Researchers Do and Find?
The researchers looked at differences in death rates between all counties in US states plus the District of Columbia over four decades, from 1961 to 1999. They obtained the data on number of deaths from the National Center for Health Statistics, and they obtained data on the number of people living in each county from the US Census. The NCHS did not provide death data after 2001. They broke the death rates down by sex and by disease to assess trends over time for women and men, and for different causes of death.
Over these four decades, the researchers found that the overall US life expectancy increased from 67 to 74 years of age for men and from 74 to 80 years for women. Between 1961 and 1983 the death rate fell in both men and women, largely due to reductions in deaths from cardiovascular disease (heart disease and stroke). During this same period, 1961–1983, the differences in death rates across counties fell. However, beginning in the early 1980s the differences in death rates across counties began to increase. The worst-off counties no longer experienced a fall in death rates, and in a substantial number of counties, mortality actually increased, especially for women, a shift that the researchers call “the reversal of fortunes.” This stagnation in the worst-off counties was primarily caused by a slowdown or halt in the reduction of deaths from cardiovascular disease coupled with a moderate rise in a number of other diseases, such as lung cancer, chronic lung disease, and diabetes, in both men and women, and a rise in HIV/AIDS and homicide in men. The researchers' key finding, therefore, was that the differences in life expectancy across counties initially narrowed and then widened.
What Do these Findings Mean?
The findings suggest that, beginning in the early 1980s and continuing through 1999, those who were already disadvantaged did not benefit from the gains in life expectancy experienced by the advantaged, and some became even worse off. The study emphasizes how important it is to monitor health inequalities between different groups, in order to ensure that everyone—and not just the well-off—can experience gains in life expectancy. Although the “reversal of fortune” that the researchers found applied to only a minority of the population, the authors argue that their study results are troubling because an oft-stated aim of the US health system is the improvement of the health of “all people, and especially those at greater risk of health disparities” (see, for example http://www.cdc.gov/osi/goals/SIHPGPostcard.pdf).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050066.
A study by Nancy Krieger and colleagues, published in PLoS Medicine in February 2008, documented a similar “fall and rise” in health inequities. Krieger and colleagues reported that the difference in health between rich and poor and between different racial/ethnic groups, as measured by rates of dying young and of infant deaths, shrank in the US from 1966 to 1980 and then widened from 1980 to 2002.
Murray and colleagues, in a 2006 PLoS Medicine article, calculated US mortality rates according to “race-county” units, divided the population into the “eight Americas,” and found disparities in life expectancy across them
The US Centers for Disease Control and Prevention (CDC) has an Office of Minority Health and Health Disparities. The office “aims to accelerate CDC's health impact in the US population and to eliminate health disparities for vulnerable populations as defined by race/ethnicity, socioeconomic status, geography, gender, age, disability status, risk status related to sex and gender, and among other populations identified to be at-risk for health disparities”
Wikipedia has a page on health disparities (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
In 2001 the US Agency for Healthcare Research and Quality sponsored a workshop on “strategies to reduce health disparities”
doi:10.1371/journal.pmed.0050066
PMCID: PMC2323303  PMID: 18433290
3.  Retention in HIV Care between Testing and Treatment in Sub-Saharan Africa: A Systematic Review 
PLoS Medicine  2011;8(7):e1001056.
In this systematic review, Sydney Rosen and Matthew Fox find that less than one-third of patients who tested positive for HIV, but were not eligible for antiretroviral therapy (ART) when diagnosed, were retained in pre-ART care continuously.
Background
Improving the outcomes of HIV/AIDS treatment programs in resource-limited settings requires successful linkage of patients testing positive for HIV to pre–antiretroviral therapy (ART) care and retention in pre-ART care until ART initiation. We conducted a systematic review of pre-ART retention in care in Africa.
Methods and Findings
We searched PubMed, ISI Web of Knowledge, conference abstracts, and reference lists for reports on the proportion of adult patients retained between any two points between testing positive for HIV and initiating ART in sub-Saharan African HIV/AIDS care programs. Results were categorized as Stage 1 (from HIV testing to receipt of CD4 count results or clinical staging), Stage 2 (from staging to ART eligibility), or Stage 3 (from ART eligibility to ART initiation). Medians (ranges) were reported for the proportions of patients retained in each stage. We identified 28 eligible studies. The median proportion retained in Stage 1 was 59% (35%–88%); Stage 2, 46% (31%–95%); and Stage 3, 68% (14%–84%). Most studies reported on only one stage; none followed a cohort of patients through all three stages. Enrollment criteria, terminology, end points, follow-up, and outcomes varied widely and were often poorly defined, making aggregation of results difficult. Synthesis of findings from multiple studies suggests that fewer than one-third of patients testing positive for HIV and not yet eligible for ART when diagnosed are retained continuously in care, though this estimate should be regarded with caution because of review limitations.
Conclusions
Studies of retention in pre-ART care report substantial loss of patients at every step, starting with patients who do not return for their initial CD4 count results and ending with those who do not initiate ART despite eligibility. Better health information systems that allow patients to be tracked between service delivery points are needed to properly evaluate pre-ART loss to care, and researchers should attempt to standardize the terminology, definitions, and time periods reported.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Since 1981, AIDS has killed more than 25 million people, and about 33 million people (mostly living in low- and middle-income countries) are now infected with HIV, the virus that causes AIDS. HIV gradually destroys immune system cells (including CD4 cells, a type of lymphocyte), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, most HIV-infected people died within ten years of infection. Then, in 1996, highly active antiretroviral therapy (ART) became available, and, for people living in developed countries, HIV infection became a chronic condition. Unfortunately, ART was extremely expensive, and HIV/AIDS remained a fatal illness for people living in developing countries. In 2003, governments, international agencies, and funding bodies began to implement plans to increase ART coverage in resource-limited countries. By the end of 2009, about a third of the people in these countries who needed ART (HIV-positive people whose CD4 count had dropped so low that they could not fight other infections) were receiving treatment.
Why Was This Study Done?
Unfortunately, many HIV-positive people in resource-limited countries who receive ART still do not have a normal life expectancy, often because they start ART when they have a very low CD4 count. ART is more successful if it is started before the CD4 count falls far below 350 cells/mm3 of blood, the threshold recommended by the World Health Organization for ART initiation. Thus, if the outcomes of HIV/AIDS programs in resource-limited settings are to be improved, all individuals testing positive for HIV must receive continuous pre-ART care that includes regular CD4 counts to ensure that ART is initiated as soon as they become eligible for treatment. Before interventions can be developed to achieve this aim, it is necessary to understand where and when patients are lost to pre-ART care. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the researchers investigate the retention of HIV-positive adults in pre-ART care in sub-Saharan Africa.
What Did the Researchers Do and Find?
The researchers identified 28 studies that included data on the proportion of adult patients retained between any two time points between testing positive for HIV and starting ART in HIV/AIDS care programs in sub-Saharan Africa. They defined three stages of pre-ART care: Stage 1, the interval between testing positive for HIV and receiving CD4 count results or being clinically assessed; Stage 2, the interval between enrollment in pre-ART care and the determination of eligibility for ART; and Stage 3, the interval between being deemed eligible for ART and treatment initiation. A median of 59% of patients were retained in Stage 1 of pre-ART care, 46% were retained in Stage 2, and 68% were retained in Stage 3. Retention rates in each stage differed greatly between studies—between 14% and 84% for Stage 3 pre-ART care, for example. Because the enrollment criteria and other characteristics of the identified studies varied widely and were often poorly defined, it was hard to combine study results. Nevertheless, the researchers estimate that, taking all the studies together, less than one-third of patients testing positive for HIV but not eligible for ART when diagnosed were retained in pre-ART care continuously.
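As a back-of-the-envelope illustration of how losses at each stage compound, one can multiply the median retention proportions reported above for the three stages; the figures below are those medians, and the multiplication is a simplification for illustration only, not the review's formal synthesis.

    # Illustrative only: multiplying the median retention proportions of the
    # three pre-ART stages shows how sequential losses compound.
    stage_median_retention = {
        "Stage 1 (HIV test to CD4/staging result)": 0.59,
        "Stage 2 (staging to ART eligibility)":     0.46,
        "Stage 3 (eligibility to ART initiation)":  0.68,
    }

    cumulative = 1.0
    for stage, retained in stage_median_retention.items():
        cumulative *= retained
        print(f"{stage}: {retained:.0%} retained (cumulative {cumulative:.0%})")

    # Under this simplification roughly 18% are retained across all three
    # stages, consistent with the "fewer than one-third" estimate above.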
What Do These Findings Mean?
These findings suggest that there is a substantial loss of HIV-positive patients at every stage of pre-ART care in sub-Saharan Africa. Thus, some patients receiving a positive HIV test never return for the results of their initial CD4 count, some disappear between having an initial CD4 count and becoming eligible for ART, and others fail to initiate ART after having been found eligible for treatment. Because only a few studies were identified (half of which were undertaken in South Africa) and because the quality and design of some of these studies were suboptimal, the findings of this systematic review must be treated with caution. In particular, the estimate of the overall loss of patients during pre-ART care is likely to be imprecise. The researchers call, therefore, for the implementation of better health information systems that would allow patients to be tracked between service delivery points as a way to improve the evaluation and understanding of the loss of HIV-positive patients to pre-ART care in resource-limited countries.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001056.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
HIV InSite has comprehensive information on all aspects of HIV/AIDS
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on HIV/AIDS treatment and care, on HIV and AIDS in Africa, and on universal access to AIDS treatment (in English and Spanish)
The World Health Organization provides information about universal access to AIDS treatment, including the 2010 progress report (in English, French and Spanish); its 2010 ART guidelines can be downloaded (in several languages)
The International AIDS Economics Network posts information about economic, social, and behavioral aspects of HIV care and treatment
Up-to-date research findings about HIV care and treatment are summarized by NAM/aidsmap
doi:10.1371/journal.pmed.1001056
PMCID: PMC3139665  PMID: 21811403
4.  Cost-Effectiveness of Pooled Nucleic Acid Amplification Testing for Acute HIV Infection after Third-Generation HIV Antibody Screening and Rapid Testing in the United States: A Comparison of Three Public Health Settings 
PLoS Medicine  2010;7(9):e1000342.
Angela Hutchinson and colleagues conducted a cost-effectiveness analysis of pooled nucleic acid amplification testing following HIV testing and show that it is not cost-effective at recommended antibody testing intervals for high-risk persons except in very high-incidence settings.
Background
Detection of acute HIV infection (AHI) with pooled nucleic acid amplification testing (NAAT) following HIV testing is feasible. However, cost-effectiveness analyses to guide policy around AHI screening are lacking, particularly after more sensitive third-generation antibody screening and rapid testing.
Methods and Findings
We conducted a cost-effectiveness analysis of pooled NAAT screening that assessed the prevention benefits of identification and notification of persons with AHI and cases averted compared with repeat antibody testing at different intervals. Effectiveness data were derived from a Centers for Disease Control and Prevention AHI study conducted in three settings: municipal sexually transmitted disease (STD) clinics, a community clinic serving a population of men who have sex with men, and HIV counseling and testing sites. Our analysis included a micro-costing study of NAAT and a mathematical model of HIV transmission. Cost-effectiveness ratios are reported as costs per quality-adjusted life year (QALY) gained in US dollars from the societal perspective. Sensitivity analyses were conducted on key variables, including AHI positivity rates, antibody testing frequency, symptomatic detection of AHI, and costs. Pooled NAAT for AHI screening following annual antibody testing had cost-effectiveness ratios exceeding US$200,000 per QALY gained for the municipal STD clinics and HIV counseling and testing sites and was cost saving for the community clinic. Cost-effectiveness ratios increased substantially if the antibody testing interval decreased to every 6 months and decreased to cost-saving if the testing interval increased to every 5 years. NAAT was cost saving in the community clinic in all situations. Results were particularly sensitive to AHI screening yield.
Conclusions
Pooled NAAT screening for AHI following negative third-generation antibody or rapid tests is not cost-effective at recommended antibody testing intervals for high-risk persons except in very high-incidence settings.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Since 1981, acquired immunodeficiency syndrome (AIDS) has killed about 25 million people and about 30 million people are now infected with the human immunodeficiency virus (HIV), which causes AIDS. HIV, which is most often transmitted through unprotected sex with an infected partner or injection drug use, infects and kills immune system cells, leaving infected individuals susceptible to other infectious diseases. The first, often undiagnosed stage of HIV infection—acute HIV infection (AHI)—lasts a few weeks and sometimes involves a flu-like illness. During AHI, the immune system responds to HIV by beginning to make antibodies that recognize the virus but seroconversion—the appearance of detectable amounts of antibody in the blood—takes 6–12 weeks. During the second, symptom-free stage of HIV infection, which can last many years, the virus gradually destroys the immune system so that by the third stage of infection unusual infections (for example, persistent yeast infections) begin to occur. The final stage of infection (AIDS) is characterized by multiple severe infections and by the development of unusual cancers.
Why Was This Study Done?
Antiretroviral drugs control HIV infections but don't cure them. It is very important, therefore, to prevent HIV transmission by avoiding HIV risk behaviors that increase the risk of HIV infection such as having sex without a condom or with many partners. Individuals with AHI in particular need to avoid high-risk behaviors because these people are extremely infectious. However, routine tests for HIV infection that measure antibodies in the blood often give false-negative results in people with AHI because of the time lag between infection and seroconversion. Nucleic acid amplification testing (NAAT), which detects HIV genetic material in the blood, is a more accurate way to diagnose AHI but is expensive. In this study, the researchers investigate whether pooled NAAT screening (specimens are pooled before testing to reduce costs) for AHI in clinic settings after third-generation antibody testing is a cost-effective HIV prevention strategy. That is, does the gain in quality-adjusted life years (QALY, a measure of the quantity and quality of life generated by healthcare interventions) achieved by averting new HIV infections outweigh the costs of pooled NAAT screening?
What Did the Researchers Do and Find?
The researchers combined effectiveness data from a US study in which AHI was detected using pooled NAAT in three settings (sexually transmitted disease [STD] clinics, a community clinic serving men who have sex with men [MSM], and HIV counseling/testing sites) with a “micro-costing” study of NAAT and a mathematical model of HIV transmission. They then calculated the costs per QALY gained (the cost-effectiveness ratio) as a result of HIV prevention by identification and notification of people with AHI through pooled NAAT screening compared with repeat antibody testing. Pooled NAAT for AHI screening following annual antibody testing (the recommended testing interval for high-risk individuals), they estimate, would cost US$372,300 and US$484,400 per QALY gained for the counseling/testing sites and STD clinics, respectively, whereas pooled NAAT for AHI screening was cost-saving for the community clinic serving MSM. The cost-effectiveness ratio increased for the counseling/testing sites and STD clinics when the antibody testing interval was decreased to 6 months but remained cost-saving for the community clinic. With an antibody testing interval of 5 years, pooled NAAT was cost-saving in all three settings.
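The cost-effectiveness ratios quoted above are, in essence, incremental costs divided by incremental QALYs gained relative to the comparator strategy (repeat antibody testing without NAAT). The sketch below shows that arithmetic with made-up numbers; it does not reproduce the study's transmission model or micro-costing.

    # Hypothetical numbers only: an incremental cost-effectiveness ratio (ICER)
    # is the extra cost of a strategy divided by the extra QALYs it gains
    # relative to the comparator.
    def icer(cost_new, qaly_new, cost_old, qaly_old):
        """Cost per QALY gained of the new strategy versus the comparator."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # Made-up example: adding pooled NAAT screening to annual antibody testing.
    cost_per_qaly = icer(cost_new=1_250_000, qaly_new=102.5,
                         cost_old=1_000_000, qaly_old=100.0)
    print(f"ICER: ${cost_per_qaly:,.0f} per QALY gained")  # $100,000 per QALY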
What Do These Findings Mean?
Cost-effectiveness ratios of US$100,000–US$200,000 per QALY gained are considered acceptable in the US. These results suggest, therefore, that the cost of pooled NAAT screening for AHI following negative third-generation antibody testing is not acceptable at the recommended testing interval for high-risk individuals except in settings where HIV infection is very common, such as clinics serving MSM. The researchers reach a similar conclusion in a separate cost-effectiveness analysis of pooled NAAT screening following a negative rapid HIV test. Although the accuracy of these results depends on numerous assumptions made in the cost-effectiveness analyses (for example, the degree to which awareness of HIV status affects the behavior of people with AHI), sensitivity analyses (investigations of the effect of altering key assumptions) show that these findings are not greatly affected by changes in many of these assumptions. Thus, the researchers conclude, NAAT screening should be reserved for settings that serve populations in which there are very high levels of new HIV infection.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000342.
The US Centers for Disease Control and Prevention provides information on HIV infection and AIDS and on HIV testing and diagnosis
HIV InSite has information on all aspects of HIV/AIDS
Information is available from Avert, an international AIDS nonprofit organization, on many aspects of HIV/AIDS, including HIV testing (in English and Spanish)
MedlinePlus has links to further resources on AIDS (in English and Spanish)
The UK National Institute for Health and Clinical Excellence has a page on measuring effectiveness and cost-effectiveness
doi:10.1371/journal.pmed.1000342
PMCID: PMC2946951  PMID: 20927354
5.  Effects of shared medical appointments on quality of life and cost-effectiveness for patients with a chronic neuromuscular disease. Study protocol of a randomized controlled trial 
BMC Neurology  2011;11:106.
Background
Shared medical appointments are a series of one-to-one doctor-patient contacts in the presence of a group of 6-10 fellow patients. These group visits substitute for patients' annual follow-up visits with the neurologist, and the same items attended to in a one-to-one appointment are addressed. The possible advantages of a shared medical appointment could add value to the present management of neuromuscular patients: the current problem-focused one-to-one outpatient visits often leave little time for the patient's psychosocial needs, patient education, and patient empowerment.
Methods/design
A randomized, prospective controlled trial (RCT) with a follow-up of 6 months will be conducted to evaluate the clinical and cost-effectiveness of shared medical appointments compared with usual care for 300 neuromuscular patients and their partners at the Radboud University Nijmegen Medical Center. Every included patient will be randomly allocated to one of the two study arms. This study has been reviewed and approved by the medical ethics committee of the region Arnhem-Nijmegen, the Netherlands. The primary outcome measure is quality of life as measured by the EQ-5D, SF-36 and the Individualized Neuromuscular Quality of Life Questionnaire. The primary analysis will be an intention-to-treat analysis of the area under the curve of the quality of life scores. A linear mixed model will be used with group as a random factor and treatment, baseline score, and type of neuromuscular disease as fixed factors. For the economic evaluation, an incremental cost-effectiveness analysis will be conducted from a societal perspective, relating differences in costs to differences in health outcome. Results are expected in 2012.
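The protocol's primary analysis uses the area under the curve (AUC) of repeated quality-of-life scores. A minimal sketch of one conventional way to compute such an AUC (the trapezoidal rule over the 6-month follow-up) is given below; the measurement time points and scores are assumptions for illustration, and the linear mixed model itself is not reproduced.

    # Assumed data for illustration: area under the curve (AUC) of repeated
    # quality-of-life scores over follow-up, computed with the trapezoidal rule.
    def auc_trapezoid(times_months, scores):
        """AUC of scores measured at the given time points (score * months)."""
        auc = 0.0
        for (t0, s0), (t1, s1) in zip(zip(times_months, scores),
                                      zip(times_months[1:], scores[1:])):
            auc += (s0 + s1) / 2 * (t1 - t0)
        return auc

    # Hypothetical EQ-5D-style utility scores at baseline, 3 and 6 months.
    times = [0, 3, 6]
    scores = [0.70, 0.74, 0.78]
    print(f"AUC over 6 months: {auc_trapezoid(times, scores):.2f}")  # about 4.44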
Discussion
This study will be the first randomized controlled trial to evaluate the effect of shared medical appointments versus usual care for neuromuscular patients. It will make it possible to determine whether shared medical appointments add value to the current therapeutic spectrum. If this study shows that group visits produce the expected benefits, this may help to increase the acceptance of this innovative and creative way of using one of the most precious resources in health care more efficiently: time.
Trial registration
Dutch Trial Register: NTR1412 (http://www.trialregister.nl)
doi:10.1186/1471-2377-11-106
PMCID: PMC3178478  PMID: 21861909
6.  Invasive home mechanical ventilation, mainly focused on neuromuscular disorders 
Introduction and background
Invasive home mechanical ventilation is used for patients with chronic respiratory insufficiency. This elaborate and technology-dependent form of ventilation is delivered via an artificial airway (tracheal cannula) into the trachea. Exact numbers on the incidence of home mechanical ventilation are not available. Patients with neuromuscular diseases represent a large portion of those affected.
Research questions
Specific research questions are formulated and answered concerning the dimensions of medicine/nursing, economics, and social, ethical and legal aspects. Beyond the technical aspects of invasive home mechanical ventilation, medical questions also deal with the patient’s symptoms and clinical signs as well as the frequency of complications. Economic questions pertain to the composition of costs and the differences in costs and quality of care compared with other forms of home care. Questions regarding social aspects consider the health-related quality of life of patients and caregivers. Additionally, the ethical aspects connected to the decision for home mechanical ventilation are considered. Finally, legal aspects of financing invasive home mechanical ventilation are discussed.
Methods
Based on a systematic literature search conducted in 2008 in a total of 31 relevant databases, current literature was reviewed and selected by means of fixed criteria. Randomized controlled studies, systematic reviews and HTA (health technology assessment) reports, clinical studies with more than ten patients, health-economic evaluations, primary studies with specific cost analyses, and quality-of-life studies related to the research questions were included in the analysis.
Results and discussion
The analysis of the literature shows that invasive mechanical ventilation may improve symptoms of hypoventilation. An increase in life expectancy is likely, but for ethical reasons it has not been confirmed by high-quality studies. Complications (e.g. pneumonia) are rare. Mobile home ventilators are available for delivering the ventilation; unfortunately, however, their technical performance varies.
Studies comparing the economics of in-hospital ventilation with outpatient ventilation describe home ventilation as a more cost-effective alternative to inpatient care in an intensive care unit, but more expensive than noninvasive (mask) ventilation. Higher expenses arise from the necessary equipment and the considerable time required for the sometimes 24-hour care of affected patients by highly qualified personnel. However, none of the studies reflects the German conditions of care provision. The calculated costs depend strongly on national medical fees and caregivers’ wages, which makes it difficult to transfer the results.
The results of quality-of-life studies are mostly qualitative. The quality of life of patients using mechanical ventilation is predominantly rated as good. Caregivers of ventilated patients report both positive and negative experiences. Regarding the ethical questions, the review examined which aspects have to be considered when implementing ventilation.
From a legal point of view, the financing of home ventilation, especially invasive mechanical ventilation requiring specialised technical nursing, is regulated in the German code of social law (Sozialgesetzbuch V). The coverage of costs is distributed across different insurance carriers which, owing to cost pressures within the health care system, often consider others rather than themselves responsible. In practice, patients therefore often have to enforce a claim for cost coverage in order to exercise their basic right of free choice of residence.
Conclusion
Positive effects of invasive mechanical ventilation (on overall survival and symptoms) are highly probable based on the analysed literature, although the level of evidence is low. The establishment of a home ventilation registry and health services research to obtain valid data for improving outpatient structures are necessary. Gathering specific German data is needed to adequately depict the national concepts of provision and reimbursement. A differentiation of the cost structure according to the type of chosen outpatient care is currently not possible. There is no literature on differences in quality of life depending on the chosen form of outpatient care (home care, assisted living, or a nursing home specialised in invasive home ventilation). Further research is required.
For a so-called participative decision, made by the patient after intensive counselling, early and honest patient education on the arguments for and against invasive mechanical ventilation is needed. Besides long-term survival, quality of life and individual, social and religious aspects also have to be considered.
doi:10.3205/hta000086
PMCID: PMC3010883  PMID: 21289881
home ventilation; invasive ventilation; extra-clinical ventilation; mechanical ventilation; neuromuscular disease; respiratory insufficiency; vital capacity; Health Technology Assessment; HTA; economic analysis; ethics; psychological pressure; quality of life; health related quality of life
7.  Commercial Serological Tests for the Diagnosis of Active Pulmonary and Extrapulmonary Tuberculosis: An Updated Systematic Review and Meta-Analysis 
PLoS Medicine  2011;8(8):e1001062.
An up-to-date systematic review and meta-analysis by Karen Steingart and colleagues confirms that commercially available serological tests do not provide an accurate diagnosis of tuberculosis.
Background
Serological (antibody detection) tests for tuberculosis (TB) are widely used in developing countries. As part of a World Health Organization policy process, we performed an updated systematic review to assess the diagnostic accuracy of commercial serological tests for pulmonary and extrapulmonary TB with a focus on the relevance of these tests in low- and middle-income countries.
Methods and Findings
We used methods recommended by the Cochrane Collaboration and GRADE approach for rating quality of evidence. In a previous review, we searched multiple databases for papers published from 1 January 1990 to 30 May 2006, and in this update, we add additional papers published from that period until 29 June 2010. We prespecified subgroups to address heterogeneity and summarized test performance using bivariate random effects meta-analysis. For pulmonary TB, we included 67 studies (48% from low- and middle-income countries) with 5,147 participants. For all tests, estimates were variable for sensitivity (0% to 100%) and specificity (31% to 100%). For anda-TB IgG, the only test with enough studies for meta-analysis, pooled sensitivity was 76% (95% CI 63%–87%) in smear-positive (seven studies) and 59% (95% CI 10%–96%) in smear-negative (four studies) patients; pooled specificities were 92% (95% CI 74%–98%) and 91% (95% CI 79%–96%), respectively. Compared with ELISA (pooled sensitivity 60% [95% CI 6%–65%]; pooled specificity 98% [95% CI 96%–99%]), immunochromatographic tests yielded lower pooled sensitivity (53%, 95% CI 42%–64%) and comparable pooled specificity (98%, 95% CI 94%–99%). For extrapulmonary TB, we included 25 studies (40% from low- and middle-income countries) with 1,809 participants. For all tests, estimates were variable for sensitivity (0% to 100%) and specificity (59% to 100%). Overall, quality of evidence was graded very low for studies of pulmonary and extrapulmonary TB.
Conclusions
Despite expansion of the literature since 2006, commercial serological tests continue to produce inconsistent and imprecise estimates of sensitivity and specificity. Quality of evidence remains very low. These data informed a recently published World Health Organization policy statement against serological tests.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every year nearly 10 million people develop tuberculosis—a contagious bacterial infection—and about two million people die from the disease. Mycobacterium tuberculosis, the bacterium that causes tuberculosis, is spread in airborne droplets when people with the disease cough or sneeze. It usually infects the lungs (pulmonary tuberculosis) but can also infect the lymph nodes, bones, and other tissues (extrapulmonary tuberculosis). The characteristic symptoms of tuberculosis are a persistent cough, weight loss, and night sweats. Diagnostic tests for the disease include microscopic examination of sputum (mucus brought up from the lungs by coughing) for M. tuberculosis bacilli, chest radiography, mycobacterial culture (in which bacteriologists try to grow M. tuberculosis from sputum or tissue samples), and nucleic acid amplification tests (which detect the bacterium's genome in patient samples). Tuberculosis can usually be cured by taking several powerful drugs daily or several times a week for at least six months.
Why Was This Study Done?
Although efforts to control tuberculosis have advanced over the past decade, missed tuberculosis diagnoses and mismanaged tuberculosis continue to fuel the global epidemic. A missed diagnosis may lead to more severe illness and death, especially for people infected with both tuberculosis and HIV. Also, a missed diagnosis means that an untreated individual with pulmonary tuberculosis may remain infectious for longer, continuing to spread tuberculosis within the community. Missed diagnoses are a particular problem in resource-limited countries where sputum microscopy and chest radiography often perform poorly and other diagnostic tests are too expensive and complex for routine use. Serological tests, which detect antibodies against M. tuberculosis in the blood (antibodies are proteins made by the immune system in response to infections), might provide a way to diagnose tuberculosis in resource-limited countries. Indeed, many serological tests for tuberculosis diagnosis are on sale in developing countries. However, because of doubts about the accuracy of these commercial tests, they are not recommended for use in routine practice. In this systematic review and meta-analysis, the researchers assess the diagnostic accuracy of commercial serological tests for pulmonary and extrapulmonary tuberculosis. A systematic review uses predefined criteria to identify all the research on a given topic; meta-analysis is a statistical method that combines the results of several studies.
What Did the Researchers Do and Find?
The researchers searched the literature for studies that evaluated serological tests for active tuberculosis published between 1990 and 2010. They used data from these studies to calculate each test's sensitivity (the proportion of patients with a positive serological test among patients with tuberculosis confirmed by a reference method; a high sensitivity indicates that the test detects most patients with tuberculosis) and specificity (the proportion of patients with a negative serological result among people without tuberculosis; a high specificity means the test gives few false-positive diagnoses). They also assessed the methodological quality of each study and rated the overall quality of the evidence. The researchers found 67 studies (half from low/middle-income countries) that evaluated serological tests for the diagnosis of pulmonary tuberculosis. The sensitivity of these tests varied between studies, ranging from 0% to 100%; their specificities ranged from 31% to 100%. For the anda-TB IgG test—the only test with sufficient studies for a meta-analysis—the pooled sensitivity from the relevant studies was 76% in smear-positive patients and 59% in smear-negative patients. The pooled specificities were 92% and 91%, respectively. The researchers found 25 studies (40% from low/middle-income countries) that evaluated serological tests for the diagnosis of extrapulmonary tuberculosis. Again, sensitivities and specificities for each test varied greatly between studies, ranging from 0% to 100% and 59% to 100%, respectively. Overall, for both pulmonary and extrapulmonary tuberculosis, the quality of evidence from the studies of the serological tests was graded very low.
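Sensitivity and specificity, as defined in the paragraph above, come directly from a two-by-two table of test results against a reference standard. A minimal sketch with made-up counts (chosen only to echo the anda-TB pooled estimates, not taken from any study):

    # Made-up counts: sensitivity and specificity from a 2x2 table comparing a
    # serological test with a reference standard (e.g. mycobacterial culture).
    true_positive  = 76   # test positive, TB confirmed by the reference
    false_negative = 24   # test negative, TB confirmed by the reference
    true_negative  = 92   # test negative, no TB by the reference
    false_positive = 8    # test positive, no TB by the reference

    sensitivity = true_positive / (true_positive + false_negative)
    specificity = true_negative / (true_negative + false_positive)
    print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
    # sensitivity = 76%, specificity = 92%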
What Do These Findings Mean?
This systematic review, which updates an analysis published in 2007, indicates that commercial serological tests do not provide an accurate diagnosis of tuberculosis. This finding confirms previous systematic reviews of the evidence, despite a recent expansion in the relevant literature. Moreover, the researchers' analysis indicates that the overall quality of the body of evidence on these tests remains poor. Many of the identified studies used unsatisfactory patient selection methods, for example. Clearly, there is a need for continued and improved research on existing serological tests and for research into new approaches to the serological diagnosis of tuberculosis. For now, though, based on these findings, cost-effectiveness data, and expert opinion, the World Health Organization has issued a recommendation against the use of currently available serological tests for the diagnosis of tuberculosis, while stressing the importance of continued research on these and other tests that could provide quick and accurate diagnosis of TB.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001062.
The World Health Organization provides information on all aspects of tuberculosis, including information on tuberculosis diagnostics via the Stop TB Partnership (some information is in several languages); the Strategic and Technical Advisory Group for Tuberculosis recommendations on tuberculosis diagnosis are available
The Web site Evidence-Based Tuberculosis Diagnosis (from Stop TB Partnership's New Diagnostics Working Group) provides access to several resources on TB diagnostics, including systematic reviews, guidelines, and training materials
The US Centers for Disease Control and Prevention has information about tuberculosis, including information on the diagnosis of tuberculosis disease
The US National Institute of Allergy and Infectious Diseases also has detailed information on all aspects of tuberculosis
MedlinePlus has links to further information about tuberculosis (in English and Spanish)
doi:10.1371/journal.pmed.1001062
PMCID: PMC3153457  PMID: 21857806
8.  Effectiveness of teleassistance on the improvement of health related quality of life in people with neuromuscular diseases 
Background
Neuromuscular diseases are a group of pathologies characterized by the progressive loss of muscular strength, atrophy or hypertrophy, fatigue, muscle pain and degeneration of the muscles and of the nerves controlling them (The French Muscular Dystrophy Association, 2004). Perceived isolation and health-related quality of life are affected in the majority of cases due to the chronicity of the illness. The Internet, and in particular the use of chat and videoconferencing programs, is an alternative option for mitigating these effects.
Aim
The aim of the study is to assess the effectiveness of teleassistance in reducing isolation and improving health-related quality of life in adults with neuromuscular diseases.
Methods
The sample is composed of 60 randomly selected participants affected by different neuromuscular diseases (e.g. Myasthenia Gravis, Becker Muscular Dystrophy, Facioscapulohumeral Muscular Dystrophy, etc.). Thirty patients were assigned to the experimental group, which participated in the chat and videoconferencing sessions, and the other thirty to the control group, which did not participate. The inclusion criteria for both groups were: a medically confirmed diagnosis (ICD-10) of one of the diseases mentioned above, age ≥18 years, agreeing to participate in the study by signing an informed consent, and, for the experimental group only, the ability to use a computer. The exclusion criteria for both groups were a psychiatric disorder (DSM-IV-TR), head trauma or severe visual limitations. All the patients were recruited from neuromuscular disorder associations and hospitals in the Basque Country. Effectiveness was assessed with a pre-post design in which questionnaires and interviews were administered (e.g. Disability Assessment Schedule (WHO-DAS II), Sickness Impact Profile, The MOS Social Support Survey, etc.). The online support entails different activities carried out over three months in weekly sessions: a) group videoconference sessions with a psychologist, b) individual videoconference sessions with a neurologist, and c) forum discussion groups about biopsychosocial issues. The psychological counselling consists of a psychosocial program about general topics such as illness information, emotional reactions to the disease, the most frequent automatic thoughts, etc. A web site was developed to carry out the intervention: http://neuromusculares.deusto.es/. An exhaustive preliminary analysis of this pre-post assessment is necessary in order to know whether the psychosocial programme is effective and whether it could be a helpful tool for this population.
Results
Preliminary results will be presented in order to confirm whether teleassistance is an effective alternative way of advising people with neuromuscular disorders.
PMCID: PMC3571156
teleassistance; online support; neuromuscular disease; isolation; quality of life
9.  Living with myotonic dystrophy; what can be learned from couples? a qualitative study 
BMC Neurology  2011;11:86.
Background
Myotonic dystrophy type 1 (MD1) is one of the most prevalent neuromuscular diseases, yet very little is known about how MD1 affects the lives of couples and how they themselves manage individually and together. To better match health care to their problems, concerns and needs, it is important to understand their perspective of living with this hereditary, systemic disease.
Methods
A qualitative study was carried out with a purposive sample of five middle-aged couples, including three men and two women with MD1 and their partners. Fifteen in-depth interviews with persons with MD1, with their partners and with both of them as a couple took place in the homes of the couples in two cities and three villages in the Netherlands in 2009.
Results
People with MD1 associate this progressive, neuromuscular condition with decreasing abilities, describing physical, cognitive and psychosocial barriers to everyday activities and social participation. Partners highlighted the increasing caregiving burden, giving directions and using reminders to compensate for the lack of initiative and avoidant behaviour due to MD1. Couples portrayed the dilemmas and frustrations of renegotiating roles and responsibilities, stressing the importance of achieving a balance between individual and shared activities. All participants experienced a lack of understanding from relatives, friends, and society, including health care, leading to withdrawal and isolation. Health care was perceived as fragmentary, with specialists focusing on specific aspects of the disease rather than seeking to understand the implications of the systemic disorder on daily life.
Conclusions
Learning from these couples has resulted in recommendations that challenge the tendency to treat MD1 as a condition with primarily physical impairments. It is vital to listen to couples in order to elicit the impact of MD1 as a multisystem disorder that influences every aspect of their life together. Couple management, supporting the self-management skills of both partners, is proposed as a way of reducing the mismatch between health services and health needs.
doi:10.1186/1471-2377-11-86
PMCID: PMC3158552  PMID: 21752270
10.  Dietary Supplementation with a Superoxide Dismutase-Melon Concentrate Reduces Stress, Physical and Mental Fatigue in Healthy People: A Randomised, Double-Blind, Placebo-Controlled Trial 
Nutrients  2014;6(6):2348-2359.
Background: We aimed to investigate the effects of superoxide dismutase (SOD)-melon concentrate supplementation on psychological stress and physical and mental fatigue in healthy people. Methods: A randomized, double-blind, placebo-controlled trial was performed on 61 people divided into two groups: active supplement (n = 32) and placebo (n = 29) for 12 weeks. Volunteers were given one small hard capsule per day. One capsule contained 10 mg of SOD-melon concentrate (140 U of SOD) and starch for the active supplement, and starch only for the placebo. Stress and fatigue were evaluated using four psychometric scales: PSS-14, SF-36, Stroop tests and the Prevost scale. Results: The supplementation with SOD-melon concentrate significantly decreased perceived stress compared to placebo. Moreover, quality of life was improved and physical and mental fatigue were reduced with SOD-melon concentrate supplementation. Conclusion: SOD-melon concentrate supplementation appears to be an effective and natural way to reduce stress and fatigue. Trial registration: the trial was approved by the ethical committee of Poitiers (France), and the ClinicalTrials.gov Identifier is NCT01767922.
doi:10.3390/nu6062348
PMCID: PMC4073155  PMID: 24949549
psychological stress; physical and mental fatigue; oxidative stress; melon; superoxide dismutase; psychometric scales
11.  Effect of an oral supplementation with a proprietary melon juice concentrate (Extramel®) on stress and fatigue in healthy people: a pilot, double-blind, placebo-controlled clinical trial 
Nutrition Journal  2009;8:40.
Background
Recent studies have demonstrated a correlation between perceived stress and oxidative stress. As SOD is the main enzyme of the enzymatic antioxidant defence system of the body, we evaluated the effect of an oral daily intake of a proprietary melon juice concentrate rich in SOD (EXTRAMEL®) on the signs and symptoms of stress and fatigue in healthy volunteers.
Methods
This randomized, double-blind, placebo-controlled clinical study was conducted with seventy healthy volunteers aged between 30 and 55 years who felt daily stress and fatigue. They took the dietary supplement based on the melon juice concentrate (10 mg Extramel® corresponding to 140 IU SOD per capsule) or a placebo once daily for 4 weeks. Stress and fatigue were measured using four observational psychometric scales: FARD, PSS-14, SF-12 and the Epworth scale. The study was conducted by Isoclin, a clinical research organization located in Poitiers, France.
Results
No adverse effects were noted. Supplementation with the proprietary melon juice concentrate providing 140 IU SOD/day significantly improved signs and symptoms of stress and fatigue linked to performance, whether physical (pain, sleep troubles), cognitive (concentration, weariness, sleep troubles) or behavioural (attitude, irritability, difficulty making contact), compared to the placebo. In the same way, quality of life and perceived stress were significantly improved with SOD supplementation.
Conclusion
This pilot study showed that an oral supplementation with a proprietary melon juice concentrate rich in SOD may have a positive effect on several signs and symptoms of perceived stress and fatigue.
doi:10.1186/1475-2891-8-40
PMCID: PMC2757026  PMID: 19754931
12.  A historical reflection on the discovery of human retroviruses 
Retrovirology  2009;6:40.
The discovery of HIV-1 as the cause of AIDS was one of the major scientific achievements of the last century. Here the events leading to this discovery are reviewed, with particular attention to priority and the actual contributions of those involved. Since I would argue that discovering HIV was dependent on the previous discovery of the first human retrovirus, HTLV-I, the history of this discovery is also re-examined. The first human retrovirus (HTLV-I) was first reported by Robert C. Gallo and coworkers in 1980 and reconfirmed by Yorio Hinuma and coworkers in 1981. These discoveries were in turn dependent on the previous discovery by Gallo and coworkers in 1976 of interleukin 2, or T-cell growth factor as it was called then. HTLV-II was described by Gallo's group in 1982. A human retrovirus distinct from HTLV-I and HTLV-II, in that it was shown to have the morphology of a lentivirus, was in my view described for the first time by Luc Montagnier in an oral presentation at Cold Spring Harbor in September of 1983. This virus was isolated from a patient with lymphadenopathy using the protocol previously described for HTLV by Gallo. The first peer-reviewed paper by Montagnier's group on such a retrovirus, isolated from two siblings, of whom one had AIDS, appeared in the Lancet in April of 1984. However, the proof that a new human retrovirus (HIV-1) was the cause of AIDS was first established in four publications by Gallo's group in the May 4th issue of Science in 1984.
doi:10.1186/1742-4690-6-40
PMCID: PMC2686671  PMID: 19409074
13.  Measuring health-related quality of life in tuberculosis: a systematic review 
Introduction
Tuberculosis remains a major public health problem worldwide. In recent years, increasing efforts have been dedicated to assessing the health-related quality of life experienced by people infected with tuberculosis. The objectives of this study were to better understand the impact of tuberculosis and its treatment on people's quality of life, and to review quality of life instruments used in current tuberculosis research.
Methods
A systematic literature search from 1981 to 2008 was performed through a number of electronic databases as well as a manual search. Eligible studies assessed multi-dimensional quality of life in people with tuberculosis disease or infection using standardized instruments. Results of the included studies were summarized qualitatively.
Results
Twelve original studies met our criteria for inclusion. A wide range of quality of life instruments were involved, and the Short-Form 36 was the most commonly used. A validated tuberculosis-specific quality of life instrument was not located. The findings showed that tuberculosis had a substantial and encompassing impact on patients' quality of life. Overall, anti-tuberculosis treatment had a positive effect in improving patients' quality of life; physical health tended to recover more quickly than mental well-being. However, even after the patients successfully completed treatment and were microbiologically 'cured', their quality of life remained significantly worse than that of the general population.
Conclusion
Tuberculosis has substantially adverse impacts on patients' quality of life, which persist after microbiological 'cure'. A variety of instruments have been used to assess quality of life in tuberculosis, and no well-established tuberculosis-specific instrument exists, making it difficult to fully understand the impact of the illness.
doi:10.1186/1477-7525-7-14
PMCID: PMC2651863  PMID: 19224645
14.  Staufen1-mediated mRNA decay induces Requiem mRNA decay through binding of Staufen1 to the Requiem 3′UTR 
Nucleic Acids Research  2014;42(11):6999-7011.
Requiem (REQ/DPF2) was originally identified as an apoptosis-inducing protein in mouse myeloid cells and belongs to the novel Krüppel-type zinc finger d4-protein family of proteins, which includes neuro-d4 (DPF1) and cer-d4 (DPF3). Interestingly, when a portion of the REQ messenger ribonucleic acid (mRNA) 3′ untranslated region (3′UTR), referred to as G8, was overexpressed in K562 cells, β-globin expression was induced, suggesting that the 3′UTR of REQ mRNA plays a physiological role. Here, we present evidence that the REQ mRNA 3′UTR, along with its trans-acting factor, Staufen1 (STAU1), is able to reduce the level of REQ mRNA via STAU1-mediated mRNA decay (SMD). By screening a complementary deoxyribonucleic acid (cDNA) expression library with an RNA–ligand binding assay, we identified STAU1 as an interactor of the REQ mRNA 3′UTR. Specifically, we provide evidence that STAU1 binds to putative 30-nucleotide stem–loop-structured RNA sequences within the G8 region, which we term the protein binding site core; this binding triggers the degradation of REQ mRNA and thus regulates translation. Furthermore, we demonstrate that siRNA-mediated silencing of either STAU1 or UPF1 increases the abundance of cellular REQ mRNA and, consequently, the REQ protein, indicating that REQ mRNA is a target of SMD.
doi:10.1093/nar/gku388
PMCID: PMC4066795  PMID: 24799437
15.  QOI6/416: The Health On the Net Code of Conduct for Medical and Health-related Web sites: Three years on 
Introduction
The explosive growth of the World Wide Web has made it more and more urgent to monitor and improve the quality of the information circulating through the Internet. But assessing medical and health information on the Web is a difficult challenge. In recent years, some organisations have been working on this issue. The Health On the Net Foundation (HON), for its part, administers an eight-point Code of Conduct called the HONcode, initiated in 1996. This solution is different from the others. The HONcode does not intend to rate or assess the quality of information provided by a Web site. This article gives a general presentation of the HONcode and its status in 1999, three years after it was launched.
Methods
It defines a set of voluntary rules designed to help Web site developers practice responsibly and to make sure a reader always knows the source and the purpose of the information he or she is reading. These guidelines encourage the authority, complementarity, confidentiality, proper attribution, justifiability and validity of the medical advice and information provided. Furthermore, sites that subscribe to the HONcode commit themselves to providing transparent information on site sponsorship and to clearly separating advertising from editorial content.
Results
In 1998, the number of external links to the HONcode principles page grew by about 250%. The rate of increase from January through April 1999 suggests growth of around 300% for the full year. The growth in the number of links to the HONcode matches that of links to the entire Web site, and is slightly higher than the growth in links to HON's MedHunt search engine. Statistical analysis shows that approximately 50% of Web sites linked to the HONcode principles have a commercial domain name, 16% are from European countries, 15% are from non-profit organisations, and 7% are educational sites.
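As a worked note on the arithmetic only (not additional HON data, and assuming "grew by 250%" denotes an increase of 250% rather than growth to 250% of the starting value), the growth percentages above translate into multiplicative factors as follows:

\[ N_{\text{end}} = N_{\text{start}} \times (1 + g), \qquad g = 250\% = 2.5 \;\Rightarrow\; N_{\text{end}} = 3.5\,N_{\text{start}} \]

That is, a 250% increase corresponds to a 3.5-fold rise in the number of linking sites over 1998, and the projected 300% increase for 1999 to roughly a 4-fold rise.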
Discussion
This evolution shows a real need for guidelines among the developers of medical and health Web sites and their users. This conclusion is reinforced by the results of an international survey which HON conducted on the Internet in March and April 1999. The HONcode's scope of application is currently limited to publishing policy and ethical aspects. The HONcode does not address the quality and relevance of the health and medical content of a Web page, and this is a crucial issue for the further healthy development of the Internet in the medical domain. One way to assess and provide reliable medical and health content could be to have medical and health-related Web content reviewed by peers. Building on the system adopted by the scientific community decades ago for the paper medium, this electronic peer-review method could cover content as well as ethical and publishing aspects, while retaining use of the HONcode display icon on registered sites as an additional reference.
doi:10.2196/jmir.1.suppl1.e99
PMCID: PMC1761776
Quality Assurance; Quality Health Care; Peer Review; Ethics
16.  Extensive Neuronal Differentiation of Human Neural Stem Cell Grafts in Adult Rat Spinal Cord 
PLoS Medicine  2007;4(2):e39.
Background
Effective treatments for degenerative and traumatic diseases of the nervous system are not currently available. The support or replacement of injured neurons with neural grafts, already an established approach in experimental therapeutics, has been recently invigorated with the addition of neural and embryonic stem-derived precursors as inexhaustible, self-propagating alternatives to fetal tissues. The adult spinal cord, i.e., the site of common devastating injuries and motor neuron disease, has been an especially challenging target for stem cell therapies. In most cases, neural stem cell (NSC) transplants have shown either poor differentiation or a preferential choice of glial lineages.
Methods and Findings
In the present investigation, we grafted NSCs from human fetal spinal cord grown in monolayer into the lumbar cord of normal or injured adult nude rats and observed large-scale differentiation of these cells into neurons that formed axons and synapses and established extensive contacts with host motor neurons. Spinal cord microenvironment appeared to influence fate choice, with centrally located cells taking on a predominant neuronal path, and cells located under the pia membrane persisting as NSCs or presenting with astrocytic phenotypes. Slightly fewer than one-tenth of grafted cells differentiated into oligodendrocytes. The presence of lesions increased the frequency of astrocytic phenotypes in the white matter.
Conclusions
NSC grafts can show substantial neuronal differentiation in the normal and injured adult spinal cord with good potential of integration into host neural circuits. In view of recent similar findings from other laboratories, the extent of neuronal differentiation observed here disputes the notion of a spinal cord that is constitutively unfavorable to neuronal repair. Restoration of spinal cord circuitry in traumatic and degenerative diseases may be more realistic than previously thought, although major challenges remain, especially with respect to the establishment of neuromuscular connections.
When neural stem cells from human fetal spinal cord were grafted into the lumbar cord of normal or injured adult nude rats, substantial neuronal differentiation was found.
Editors' Summary
Background.
Every year, spinal cord injuries, many caused by road traffic accidents, paralyze about 11,000 people in the US. This paralysis occurs because the spinal cord is the main communication highway between the body and the brain. Information from the skin and other sensory organs is transmitted to the brain along the spinal cord by bundles of neurons, nervous system cells that transmit and receive messages. The brain then sends information back down the spinal cord to control movement, breathing, and other bodily functions. The bones of the spine normally protect the spinal cord but, if these are broken or dislocated, the spinal cord can be cut or compressed, which interrupts the information flow. Damage near the top of the spinal cord can paralyze the arms and legs (tetraplegia); damage lower down paralyzes the legs only (paraplegia). Spinal cord injuries also cause many other medical problems, including the loss of bowel and bladder control. Although the deleterious effects of spinal cord injuries can be minimized by quickly immobilizing the patient and using drugs to reduce inflammation, the damaged nerve fibers never regrow. Consequently, spinal cord injury is permanent.
Why Was This Study Done?
Scientists are currently searching for ways to reverse spinal cord damage. One potential approach is to replace the damaged neurons using neural stem cells (NSCs). These cells, which can be isolated from embryos and from some areas of the adult nervous system, are able to develop into all the specialized cells types of the nervous system. However, because most attempts to repair spinal cord damage with NSC transplants have been unsuccessful, many scientists believe that the environment of the spinal cord is unsuitable for nerve regeneration. In this study, the researchers have investigated what happens to NSCs derived from the spinal cord of a human fetus after transplantation into the spinal cord of adult rats.
What Did the Researchers Do and Find?
The researchers injected human NSCs that they had grown in dishes into the spinal cord of intact nude rats (animals that lack a functioning immune system and so do not destroy human cells) and into nude rats whose spinal cord had been damaged at the transplantation site. The survival and fate of the transplanted cells were assessed by staining thin slices of spinal cord with an antibody that binds to a human-specific protein and with antibodies that recognize proteins specific to NSCs, neurons, or other nervous system cells. The researchers report that the human cells survived well in the adult spinal cord of the injured and normal rats and migrated into the gray matter of the spinal cord (which contains neuronal cell bodies) and into the white matter (which contains the long extensions of nerve cells that carry nerve impulses). Six months after transplantation, 75% and 60% of the human cells in the gray and white matter, respectively, contained a neuron-specific protein, but only 10% of those in the membrane surrounding the spinal cord became neurons; the rest developed into astrocytes (another nervous system cell type) or remained as stem cells. Finally, many of the human-derived neurons made the neurotransmitter GABA (one of the chemicals that transfers messages between neurons) and made contacts with host spinal cord neurons.
What Do These Findings Mean?
These findings suggest that human NSC grafts can, after all, develop into neurons (predominantly GABA-producing neurons) in normal and injured adult spinal cord and integrate into the existing spinal cord if the conditions are right. Although these animal experiments suggest that NSC transplants might help people with spinal injuries, they have some important limitations. For example, the spinal cord lesions used here are mild and unlike those seen in human patients. This and the use of nude rats might have reduced the scarring in the damaged spinal cord that is often a major barrier to nerve regeneration. Furthermore, the researchers did not test whether NSC transplants provide functional improvements after spinal cord injury. However, since other researchers have also recently reported that NSCs can grow and develop into neurons in injured adult spinal cord, these new results further strengthen hopes it might eventually be possible to use human NSCs to repair damaged spinal cords.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/doi:10.1371/journal.pmed.0040039.
The US National Institute of Neurological Disorders and Stroke provides information on spinal cord injury and current spinal cord research
Spinal Research (a UK charity) offers information on spinal cord injury and repair
The US National Spinal Cord Injury Association Web site contains factsheets on spinal cord injuries
MedlinePlus encyclopedia has pages on spinal cord trauma and interactive tutorials on spinal cord injury
The International Society for Stem Cell Research offers information on all sorts of stem cells including NSCs
The US National Human Neural Stem Cell Resource provides information on human NSCs, including the current US government's stance on stem cell research
doi:10.1371/journal.pmed.0040039
PMCID: PMC1796906  PMID: 17298165
17.  Macroautophagy Is Regulated by the UPR–Mediator CHOP and Accentuates the Phenotype of SBMA Mice 
PLoS Genetics  2011;7(10):e1002321.
Altered protein homeostasis underlies degenerative diseases triggered by misfolded proteins, including spinal and bulbar muscular atrophy (SBMA), a neuromuscular disorder caused by a CAG/glutamine expansion in the androgen receptor. Here we show that the unfolded protein response (UPR), an ER protein quality control pathway, is induced in skeletal muscle from SBMA patients, AR113Q knock-in male mice, and surgically denervated wild-type mice. To probe the consequence of UPR induction, we deleted CHOP (C/EBP homologous protein), a transcription factor induced following ER stress. CHOP deficiency accentuated atrophy in both AR113Q and surgically denervated muscle through activation of macroautophagy, a lysosomal protein quality control pathway. Conversely, impaired autophagy due to Beclin-1 haploinsufficiency decreased muscle wasting and extended lifespan of AR113Q males, producing a significant and unexpected amelioration of the disease phenotype. Our findings highlight critical cross-talk between the UPR and macroautophagy, and they indicate that autophagy activation accentuates aspects of the SBMA phenotype.
Author Summary
In many age-dependent neurodegenerative diseases, the accumulation of misfolded or mutant proteins drives pathogenesis. Several protein quality control pathways have emerged as central regulators of the turnover of these toxic proteins and therefore impact phenotypic severity. In spinal and bulbar muscular atrophy (SBMA), the mutant androgen receptor with an expanded glutamine tract undergoes hormone-dependent nuclear translocation, unfolding, and oligomerization, steps that are critical to the development of progressive proximal limb and bulbar muscle weakness in men. Here we show that the unfolded protein response (UPR), an endoplasmic reticulum stress response, is triggered in skeletal muscle from SBMA patients and knock-in mice. We find that disruption of the UPR exacerbates skeletal muscle atrophy through the induction of macroautophagy, a lysosomal protein quality control pathway. In contrast, impaired autophagy diminishes muscle wasting and prolongs lifespan of SBMA mice. Our findings highlight cross-talk between the UPR and autophagy, and they suggest that limited activation of the autophagic pathway may be beneficial in certain neuromuscular diseases such as SBMA where the nucleus is the essential site of toxicity.
doi:10.1371/journal.pgen.1002321
PMCID: PMC3192827  PMID: 22022281
18.  Regional Changes in Charcoal-Burning Suicide Rates in East/Southeast Asia from 1995 to 2011: A Time Trend Analysis 
PLoS Medicine  2014;11(4):e1001622.
Using a time trend analysis, Ying-Yeh Chen and colleagues examine the evidence for regional increases in charcoal-burning suicide rates in East and Southeast Asia from 1995 to 2011.
Please see later in the article for the Editors' Summary
Background
Suicides by carbon monoxide poisoning resulting from burning barbecue charcoal reached epidemic levels in Hong Kong and Taiwan within 5 y of the first reported cases in the early 2000s. The objectives of this analysis were to investigate (i) time trends and regional patterns of charcoal-burning suicide throughout East/Southeast Asia during the time period 1995–2011 and (ii) whether any rises in use of this method were associated with increases in overall suicide rates. Sex- and age-specific trends over time were also examined to identify the demographic groups showing the greatest increases in charcoal-burning suicide rates across different countries.
Methods and Findings
We used data on suicides by gases other than domestic gas for Hong Kong, Japan, the Republic of Korea, Taiwan, and Singapore in the years 1995/1996–2011. Similar data for Malaysia, the Philippines, and Thailand were also extracted but were incomplete. Graphical and joinpoint regression analyses were used to examine time trends in suicide, and negative binomial regression analysis to study sex- and age-specific patterns. In 1995/1996, charcoal-burning suicides accounted for <1% of all suicides in all study countries, except in Japan (5%), but they increased to account for 13%, 24%, 10%, 7%, and 5% of all suicides in Hong Kong, Taiwan, Japan, the Republic of Korea, and Singapore, respectively, in 2011. Rises were first seen in Hong Kong after 1998 (95% CI 1997–1999), followed by Singapore in 1999 (95% CI 1998–2001), Taiwan in 2000 (95% CI 1999–2001), Japan in 2002 (95% CI 1999–2003), and the Republic of Korea in 2007 (95% CI 2006–2008). No marked increases were seen in Malaysia, the Philippines, or Thailand. There was some evidence that charcoal-burning suicides were associated with an increase in overall suicide rates in Hong Kong, Taiwan, and Japan (for females), but not in Japan (for males), the Republic of Korea, and Singapore. Rates of change in charcoal-burning suicide rate did not differ by sex/age group in Taiwan and Hong Kong but appeared to be greatest in people aged 15–24 y in Japan and people aged 25–64 y in the Republic of Korea. The lack of specific codes for charcoal-burning suicide in the International Classification of Diseases and variations in coding practice in different countries are potential limitations of this study.
Conclusions
Charcoal-burning suicides increased markedly in some East/Southeast Asian countries (Hong Kong, Taiwan, Japan, the Republic of Korea, and Singapore) in the first decade of the 21st century, but such rises were not experienced by all countries in the region. In countries with a rise in charcoal-burning suicide rates, the timing, scale, and sex/age pattern of increases varied by country. Factors underlying these variations require further investigation, but may include differences in culture or in media portrayals of the method.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every year, almost one million people die by suicide globally; suicide is the fifth leading cause of death in women aged 15–49 and the sixth leading cause of death in men in the same age group. Most people who take their own life are mentally ill. For others, stressful events (the loss of a partner, for example) have made life seem worthless or too painful to bear. Strategies to reduce suicide rates include better treatment of mental illness and programs that help people at high risk of suicide deal with stress. Suicide rates can also be reduced by limiting access to common suicide methods. These methods vary from place to place. Hanging is the predominant suicide method in many countries, but in Hong Kong, for example, jumping from a high building is the most common method. Suicide methods also vary over time. For example, after a woman in Hong Kong took her life in 1998 by burning barbecue charcoal in a sealed room (a process that produces the toxic gas carbon monoxide), charcoal burning rapidly went from being a rare method of killing oneself in Hong Kong to the second most common suicide method.
Why Was This Study Done?
Cases of charcoal-burning suicide have also been reported in several East and Southeast Asian countries, but there has been no systematic investigation of time trends and regional patterns of this form of suicide. A better understanding of regional changes in the number of charcoal-burning suicides might help to inform efforts to prevent the emergence of other new suicide methods. Here, the researchers investigate the time trends and regional patterns of charcoal-burning suicide in several countries in East and Southeast Asia between 1995 and 2011 and ask whether any rises in the use of this method are associated with increases in overall suicide rates. The researchers also investigate sex- and age-specific time trends in charcoal-burning suicides to identify which groups of people show the greatest increases in this form of suicide across different countries.
What Did the Researchers Do and Find?
The researchers analyzed method-specific data on suicide deaths for Hong Kong, Japan, the Republic of Korea, Taiwan, and Singapore between 1995/1996 and 2011 obtained from the World Health Organization Mortality Database and from national death registers. In 1995/1996, charcoal-burning suicides accounted for less than 1% of all suicides in all these countries except Japan (4.9%). By 2011, charcoal-burning suicides accounted for between 5% (Singapore) and 24% (Taiwan) of all suicides. Rises in the rate of charcoal-burning suicide were first seen in Hong Kong in 1999, in Singapore in 2000, in Taiwan in 2001, in Japan in 2003, and in the Republic of Korea in 2008. By contrast, incomplete data from Malaysia, the Philippines, and Thailand showed no evidence of a marked increase in charcoal-burning suicide in these countries over the same period. Charcoal-burning suicides were associated with an increase in overall suicide rates in Hong Kong in 1998–2003, in Taiwan in 2000–2006, and in Japanese women after 2003. Finally, the annual rate of change in charcoal-burning suicide rate did not differ by sex/age group in Taiwan and Hong Kong, whereas in Japan people aged 15–24 and in the Republic of Korea people aged 25–64 tended to have the greatest rates of increase.
What Do These Findings Mean?
These findings show that charcoal-burning suicides increased markedly in several but not all East and Southeast Asian countries during the first decade of the 21st century. Moreover, in countries where there was an increase, the timing, scale, and sex/age pattern of the increase varied by country. The accuracy of these findings is likely to be limited by several aspects of the study. For example, because of the way that method-specific suicides are recorded in the World Health Organization Mortality Database and national death registries, the researchers may have slightly overestimated the number of charcoal-burning suicides. Further studies are now needed to identify the factors that underlie the variations between countries in charcoal-burning suicide rates and time trends reported here. However, the current findings highlight the need to undertake surveillance to identify the emergence of new suicide methods and the importance of policy makers, the media, and internet service providers working together to restrict graphic and detailed descriptions of new suicide methods.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001622.
A PLOS Medicine research article by Shu-Sen Chang and colleagues investigates time trends and regional patterns of charcoal-burning suicide in Taiwan
The World Health Organization provides information on the global burden of suicide and on suicide prevention (in several languages); it also has an article on international patterns in methods of suicide
The US National Institute of Mental Health provides information on suicide and suicide prevention
The UK National Health Service Choices website has detailed information about suicide and its prevention
MedlinePlus provides links to further resources about suicide (in English and Spanish)
The International Association for Suicide Prevention provides links to crisis centers in Asia
The charity Healthtalkonline has personal stories about dealing with suicide
doi:10.1371/journal.pmed.1001622
PMCID: PMC3972087  PMID: 24691071
19.  Endoscopic ultrasonography guided biliary drainage: Summary of consortium meeting, May 7th, 2011, Chicago 
Endoscopic retrograde cholangiopancreatography (ERCP) has become the preferred procedure for biliary or pancreatic drainage in various pancreatico-biliary disorders. Although ERCP has a success rate of more than 90%, it may not achieve biliary or pancreatic drainage in cases with altered anatomy or with tumors obstructing access to the duodenum. In the past, such failures were typically managed by percutaneous approaches performed by interventional radiologists or by surgical intervention. The associated morbidity was significant, especially in patients with advanced malignancy who were seeking minimally invasive interventions and improved quality of life. Since the advent of biliary drainage under endoscopic ultrasound (EUS) guidance, EUS-guided biliary drainage has been used with increasing frequency over the last decade in different countries. As with any novel advanced endoscopic procedure that encompasses various approaches, advanced endoscopists all over the world have innovated and adopted diverse EUS-guided biliary and pancreatic drainage techniques. This diversity has resulted in variations and improvements in EUS-guided biliary and pancreatic drainage, and over the years has led to an extensive nomenclature. The diversity of techniques, the nomenclature, and recent progress in instrumentation led to a dedicated meeting on May 7th, 2011, during Digestive Disease Week 2011. More than 40 advanced endoscopists from the United States, Brazil, Mexico, Venezuela, Colombia, Italy, France, Austria, Germany, Spain, Japan, China, South Korea and India attended this pivotal meeting. The meeting covered improved EUS-guided biliary access and drainage procedures, terminology, nomenclature, training and credentialing, as well as emerging devices for EUS-guided biliary drainage. This paper summarizes the meeting's agenda and the conclusions generated by this consortium group.
doi:10.3748/wjg.v19.i9.1372
PMCID: PMC3602496  PMID: 23538784
Endoscopic ultrasound; Biliary drainage; Endosonography-guided cholangiopancreatography; Endoscopic ultrasound guided; Pancreatic drainage; Endoscopic retrograde cholangiopancreatography
20.  APP interacts with LRP4 and agrin to coordinate the development of the neuromuscular junction in mice 
eLife  2013;2:e00220.
ApoE, ApoE receptors and APP cooperate in the pathogenesis of Alzheimer’s disease. Intriguingly, the ApoE receptor LRP4 and APP are also required for normal formation and function of the neuromuscular junction (NMJ). In this study, we show that APP interacts with LRP4, an obligate co-receptor for muscle-specific tyrosine kinase (MuSK). Agrin, a ligand for LRP4, also binds to APP and co-operatively enhances the interaction of APP with LRP4. In cultured myotubes, APP synergistically increases agrin-induced acetylcholine receptor (AChR) clustering. Deletion of the transmembrane domain of LRP4 (LRP4 ECD) results in growth retardation of the NMJ, and these defects are markedly enhanced in APP−/−;LRP4ECD/ECD mice. Double mutant NMJs are significantly reduced in size and number, resulting in perinatal lethality. Our findings reveal novel roles for APP in regulating neuromuscular synapse formation through hetero-oligomeric interaction with LRP4 and agrin and thereby provide new insights into the molecular mechanisms that govern NMJ formation and maintenance.
DOI: http://dx.doi.org/10.7554/eLife.00220.001
eLife digest
One of the hallmarks of Alzheimer’s disease is the formation of plaques in the brain by a protein called β-amyloid. This protein is generated by the cleavage of a precursor protein, and mutations in the gene that encodes amyloid precursor protein greatly increase the risk of developing a familial, early-onset form of Alzheimer’s disease in middle age. Individuals with a particular variant of a lipoprotein called ApoE (ApoE4) are also more likely to develop Alzheimer’s disease at a younger age than the rest of the population. Due to its prevalence—approximately 20% of the world’s population are carriers of at least one allele—ApoE4 is the single-most important risk factor for the late-onset form of Alzheimer’s disease.
Amyloid precursor protein and the receptors for ApoE—in particular one called LRP4—are also essential for the development of the specialized synapse that forms between motor neurons and muscles. However, the mechanisms by which they, individually or together, contribute to the formation of these neuromuscular junctions are incompletely understood.
Now, Choi et al. have shown that amyloid precursor protein and LRP4 interact at the developing neuromuscular junction. A protein called agrin, which is produced by motor neurons and which must bind to LRP4 to induce neuromuscular junction formation, also binds directly to amyloid precursor protein. This latter interaction leads to the formation of a complex between LRP4 and amyloid precursor protein that robustly promotes the formation of the neuromuscular junction. Mutations that remove the part of LRP4 that anchors it to the cell membrane weaken this complex and thus reduce the development of neuromuscular junctions in mice, especially if the animals also lack amyloid precursor protein.
These three proteins thus seem to influence the development and maintenance of neuromuscular junctions by regulating the activity of a fourth protein, called MuSK, which is present on the surface of muscle cells. Activation of MuSK by agrin bound to LRP4 promotes the clustering of acetylcholine receptors in the membrane, which is a crucial step in the formation of the neuromuscular junction. Intriguingly, Choi et al. have now shown that amyloid precursor protein can, by interacting directly with LRP4, also activate MuSK even in the absence of agrin, albeit only to a small extent.
The work of Choi et al. suggests that the complex formed between agrin, amyloid precursor protein and LRP4 helps to focus the activation of MuSK, and thus the clustering of acetylcholine receptors, to the site of the developing neuromuscular junction. Since all four proteins are also found in the central nervous system, similar processes might well be at work during the development and maintenance of synapses in the brain. Further studies of these interactions, both at the neuromuscular junction and in the brain, should shed new light on both normal synapse formation and the synaptic dysfunction that is seen in Alzheimer’s disease.
DOI: http://dx.doi.org/10.7554/eLife.00220.002
doi:10.7554/eLife.00220
PMCID: PMC3748711  PMID: 23986861
neuromuscular synapse; neurodegeneration; nervous system development; Alzheimer's disease; LRP; ApoE; Mouse
21.  Validation of the individualised neuromuscular quality of life for the USA with comparison of the impact of muscle disease on those living in USA versus UK 
Background
The Individualised Neuromuscular Quality of Life (INQoL) questionnaire is a published muscle disease specific measure of QoL that has been validated using both qualitative and quantitative methods in a United Kingdom population of adults with muscle disease. If INQoL is to be used in other countries it needs to be linguistically and culturally validated for those countries. It may be important to understand any cultural differences in how patients rate their QoL when applying QoL measures in multi-national clinical trials.
Methods
We conducted a postal survey of QoL issues in US adults with muscle disease using an agreed translation, from UK to US English, of the same questionnaire as was used in the original construction of INQoL. This questionnaire included an opportunity for free text comments on any aspects of QoL that might not have been covered by the questionnaire. We examined the responses using both quantitative and qualitative approaches. The frequency of the responses in US versus UK populations was compared using appropriate correlation tests and Rasch analysis. A phenomenological approach was used to guide the qualitative analysis and facilitate the exploration of patients' perceptions and experiences.
Results
The US survey received 333 responses which were compared with 251 UK survey responses.
We found that the INQoL domains covered all the issues raised by US subjects, with no additional domains required. The experiences of those with muscle disease were remarkably similar in the US and UK, but there were differences related to the impact of muscle disease on relationships and on employment; the impact on employment was greater for those living in the United States. The greater impact on employment was associated with a higher importance rating given to employment in the US. This may reflect the lower level of financial support for those who are unemployed, and the loss of employment-related health benefits.
Conclusions
INQoL is appropriate for use in the US population, but there may be differences in the importance that US subjects attach to certain aspects of QoL; these differences could be the basis for further study.
If these differences are confirmed then this may have implications for the interpretation of QoL outcomes in multi-national trials.
doi:10.1186/1477-7525-9-114
PMCID: PMC3295649  PMID: 22177525
Adult muscle disease; Quality of life
22.  Current and Former Smoking and Risk for Venous Thromboembolism: A Systematic Review and Meta-Analysis 
PLoS Medicine  2013;10(9):e1001515.
In a meta-analysis of 32 observational studies involving 3,966,184 participants and 35,151 events, Suhua Wu and colleagues found that current, ever, and former smoking was associated with risk of venous thromboembolism.
Please see later in the article for the Editors' Summary
Background
Smoking is a well-established risk factor for atherosclerotic disease, but its role as an independent risk factor for venous thromboembolism (VTE) remains controversial. We conducted a meta-analysis to summarize all published prospective studies and case-control studies to update the risk for VTE in smokers and determine whether a dose–response relationship exists.
Methods and Findings
We performed a literature search using MEDLINE (source PubMed, January 1, 1966 to June 15, 2013) and EMBASE (January 1, 1980 to June 15, 2013) with no restrictions. Pooled effect estimates were obtained by using random-effects meta-analysis. Thirty-two observational studies involving 3,966,184 participants and 35,151 VTE events were identified. Compared with never smokers, the overall combined relative risks (RRs) for developing VTE were 1.17 (95% CI 1.09–1.25) for ever smokers, 1.23 (95% CI 1.14–1.33) for current smokers, and 1.10 (95% CI 1.03–1.17) for former smokers, respectively. The risk increased by 10.2% (95% CI 8.6%–11.8%) for every additional ten cigarettes per day smoked or by 6.1% (95% CI 3.8%–8.5%) for every additional ten pack-years. Analysis of 13 studies adjusted for body mass index (BMI) yielded a relatively higher RR (1.30; 95% CI 1.24–1.37) for current smokers. The population attributable fractions of VTE were 8.7% (95% CI 4.8%–12.3%) for ever smoking, 5.8% (95% CI 3.6%–8.2%) for current smoking, and 2.7% (95% CI 0.8%–4.5%) for former smoking. Smoking was associated with an absolute risk increase of 24.3 (95% CI 15.4–26.7) cases per 100,000 person-years.
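The population attributable fractions quoted above combine the pooled relative risks with the prevalence of smoking in the contributing study populations. As a point of orientation only (the review's exact estimator and the underlying smoking prevalences are not given in this summary), the standard Levin formula for a population attributable fraction is:

\[ \mathrm{PAF} = \frac{p_e\,(\mathrm{RR} - 1)}{1 + p_e\,(\mathrm{RR} - 1)} \]

where \(p_e\) is the prevalence of the exposure (here, ever, current, or former smoking) and RR is the corresponding pooled relative risk.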
Conclusions
Cigarette smoking is associated with a slightly increased risk for VTE. BMI appears to be a confounding factor in the risk estimates. The relationship between VTE and smoking has clinical relevance with respect to individual screening, risk factor modification, and the primary and secondary prevention of VTE.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Blood normally flows throughout the human body, supplying its organs and tissues with oxygen and nutrients. But, when an injury occurs, proteins called clotting factors make the blood gel (coagulate) at the injury site. The resultant clot (thrombus) plugs the wound and prevents blood loss. Occasionally, a thrombus forms inside an uninjured blood vessel and partly or completely blocks the blood flow. Clot formation inside one of the veins deep within the body, usually in a leg, is called deep vein thrombosis (DVT) and can cause pain, swelling, and redness in the affected limb. DVT can be treated with drugs that stop the blood clot from getting larger (anticoagulants) but, if left untreated, part of the clot can break off and travel to the lungs, where it can cause a life-threatening pulmonary embolism. DVT and pulmonary embolism are collectively known as venous thromboembolism (VTE). Risk factors for VTE include having an inherited blood clotting disorder, oral contraceptive use, prolonged inactivity (for example, during a long-haul plane flight), and having surgery. VTEs are present in about a third of all people who die in hospital and, in non-bedridden populations, about 10% of people die within 28 days of a first VTE event.
Why Was This Study Done?
Some but not all studies have reported that smoking is also a risk factor for VTE. A clear demonstration of a significant association (a relationship unlikely to have occurred by chance) between smoking and VTE might help to reduce the burden of VTE because smoking can potentially be reduced by encouraging individuals to quit smoking and through taxation policies and other measures designed to reduce tobacco consumption. In this systematic review and meta-analysis, the researchers examine the link between smoking and the risk of VTE in the general population and investigate whether heavy smokers have a higher risk of VTE than light smokers. A systematic review uses predefined criteria to identify all the research on a given topic; meta-analysis is a statistical method for combining the results of several studies.
What Did the Researchers Do and Find?
The researchers identified 32 observational studies (investigations that record a population's baseline characteristics and subsequent disease development) that provided data on smoking and VTE. Together, the studies involved nearly 4 million participants and recorded 35,151 VTE events. Compared with never smokers, ever smokers (current and former smokers combined) had a relative risk (RR) of developing VTE of 1.17. That is, ever smokers were 17% more likely to develop VTE than never smokers. For current smokers and former smokers, RRs were 1.23 and 1.10, respectively. Analysis of only studies that adjusted for body mass index (a measure of body fat and a known risk factor for conditions that affect the heart and circulation) yielded a slightly higher RR (1.30) for current smokers compared with never smokers. For ever smokers, the population attributable fraction (the proportional reduction in VTE that would accrue in the population if no one smoked) was 8.7%. Notably, the risk of VTE increased by 10.2% for every additional ten cigarettes smoked per day and by 6.1% for every additional ten pack-years. Thus, an individual who smoked one pack of cigarettes per day for 40 years had a 26.7% higher risk of developing VTE than someone who had never smoked. Finally, smoking was associated with an absolute risk increase of 24.3 cases of VTE per 100,000 person-years.
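The 26.7% figure appears to follow from compounding the per-ten-pack-year increase over 40 pack-years (one pack per day for 40 years); this is a worked reading of the arithmetic rather than a calculation reported by the authors:

\[ (1 + 0.061)^{4} \approx 1.267 \]

i.e., roughly a 26.7% higher risk relative to never smokers. Under the same reading, smoking one pack (20 cigarettes) per day corresponds to \((1 + 0.102)^{2} \approx 1.21\), about a 21% higher risk from the cigarettes-per-day gradient alone.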
What Do These Findings Mean?
These findings indicate that cigarette smoking is associated with a statistically significant, slightly increased risk for VTE among the general population and reveal a dose–response relationship between smoking and VTE risk. They cannot prove that smoking causes VTE; people who smoke may share other unknown characteristics (confounding factors) that are actually responsible for their increased risk of VTE. Indeed, these findings identify body mass index as a potential confounding factor that might affect the accuracy of estimates of the association between smoking and VTE risk. Although the risk of VTE associated with smoking is smaller than the risk associated with some well-established VTE risk factors, smoking is more common (globally, there are 1.1 billion smokers) and may act synergistically with some of these risk factors. Thus, smoking behavior should be considered when screening individuals for VTE and in the prevention of first and subsequent VTE events.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001515.
The US National Heart Lung and Blood Institute provides information on deep vein thrombosis (including an animation about how DVT causes pulmonary embolism), and information on pulmonary embolism
The UK National Health Service Choices website has information on deep vein thrombosis, including personal stories, and on pulmonary embolism; SmokeFree is a website provided by the UK National Health Service that offers advice on quitting smoking
The non-profit organization US National Blood Clot Alliance provides detailed information about deep vein thrombosis and pulmonary embolism for patients and professionals and includes a selection of personal stories about these conditions
The World Health Organization provides information about the dangers of tobacco (in several languages)
Smokefree.gov, from the US National Cancer Institute, offers online tools and resources to help people quit smoking
MedlinePlus has links to further information about deep vein thrombosis, pulmonary embolism, and the dangers of smoking (in English and Spanish)
doi:10.1371/journal.pmed.1001515
PMCID: PMC3775725  PMID: 24068896
23.  Reliability and validity of the Chinese version of the pediatric quality of life inventory™ (PedsQL™) 3.0 neuromuscular module in children with Duchenne muscular dystrophy 
Background
The Pediatric Quality of Life Inventory™ (PedsQL™) is a widely used instrument to measure pediatric health-related quality of life (HRQOL) in children aged 2 to 18 years. The current study aimed to evaluate the reliability and validity of the Chinese version of the PedsQL™ 3.0 Neuromuscular Module in children with Duchenne muscular dystrophy (DMD).
Methods
The PedsQL™ 3.0 Neuromuscular Module was translated into Chinese following PedsQL™ Measurement Model Translation Methodology. The Chinese version scale was administered to 56 children with DMD and their parents, and the psychometric properties were evaluated.
Results
The missing value percentages for each item of the Chinese version scale ranged from 0.00% to 0.54%. Internal consistency reliability approached or exceeded the minimum reliability standard of α = 0.7 (child α = 0.81, parent α = 0.86). Test-retest reliability was satisfactory, with intraclass correlation coefficients (ICCs) of 0.66 for children and 0.88 for parents (P < 0.01). Correlation coefficients between items and their hypothesized subscales were higher than those with other subscales (P < 0.05). The subscale of “About My/My Child’s Neuromuscular Disease” was significantly related to mobility and stair-climbing status (Child t = 2.21, Parent t = 2.83, P < 0.05). The inter-correlations between the Chinese version of the PedsQL™ 3.0 Neuromuscular Module and the PedsQL™ 4.0 Generic Core Scales had medium to large effect sizes (P < 0.05). The child self-report scores were in moderate agreement with the parent proxy-report scores (ICC = 0.51, P < 0.05).
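For readers unfamiliar with the 0.7 benchmark, Cronbach's α is the internal-consistency statistic being referenced; the standard definition is shown below for orientation and is not reproduced from the paper itself:

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right) \]

where \(k\) is the number of items in a scale, \(\sigma^{2}_{Y_i}\) is the variance of item \(i\), and \(\sigma^{2}_{X}\) is the variance of the total scale score; values of 0.7 or above are conventionally taken to indicate acceptable internal consistency.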
Conclusions
The Chinese version of the PedsQL™ 3.0 Neuromuscular Module has acceptable psychometric properties. It is a reliable measure of disease-specific HRQOL in Chinese children with DMD.
doi:10.1186/1477-7525-11-47
PMCID: PMC3606306  PMID: 23497421
Duchenne muscular dystrophy; Pediatric quality of life inventory™; Neuromuscular module; Chinese version; Reliability; Validity
24.  Diabetes Mellitus Increases the Risk of Active Tuberculosis: A Systematic Review of 13 Observational Studies 
PLoS Medicine  2008;5(7):e152.
Background
Several studies have suggested that diabetes mellitus (DM) increases the risk of active tuberculosis (TB). The rising prevalence of DM in TB-endemic areas may adversely affect TB control. We conducted a systematic review and a meta-analysis of observational studies assessing the association of DM and TB in order to summarize the existing evidence and to assess methodological quality of the studies.
Methods and Findings
We searched the PubMed and EMBASE databases to identify observational studies that had reported an age-adjusted quantitative estimate of the association between DM and active TB disease. The search yielded 13 observational studies (n = 1,786,212 participants) with 17,698 TB cases. Random effects meta-analysis of cohort studies showed that DM was associated with an increased risk of TB (relative risk = 3.11, 95% CI 2.27–4.26). Case-control studies were heterogeneous and odds ratios ranged from 1.16 to 7.83. Subgroup analyses showed that effect estimates were higher in non-North American studies.
Conclusion
DM was associated with an increased risk of TB regardless of study design and population. People with DM may be important targets for interventions such as active case finding and treatment of latent TB, and efforts to diagnose, detect, and treat DM may have a beneficial impact on TB control.
In a systematic review and meta-analysis including more than 17,000 tuberculosis cases, Christie Jeon and Megan Murray find that diabetes mellitus is associated with an approximately 3-fold increased risk of tuberculosis.
Editors' Summary
Background.
Every year, 8.8 million people develop active tuberculosis and 1.6 million people die from this highly contagious infection that usually affects the lungs. Tuberculosis is caused by Mycobacterium tuberculosis, bacteria that are spread through the air when people with active tuberculosis cough or sneeze. Most infected people never become ill—a third of the world's population is actually infected with M. tuberculosis—because the human immune system usually contains the infection. However, the bacteria remain dormant within the body and can cause disease many years later if host immunity declines because of increasing age or because of other medical conditions such as HIV infection. Active tuberculosis can be cured by taking a combination of several antibiotics every day for at least six months, and current control efforts concentrate on prompt detection and carefully monitored treatment of people with active tuberculosis to prevent further transmission of the bacteria.
Why Was This Study Done?
Despite this control strategy, tuberculosis remains a major health problem in many countries. To reduce the annual number of new tuberculosis cases (incidence) and the number of people with tuberculosis (prevalence) in such countries, it may be necessary to identify and target factors that increase an individual's risk of developing active tuberculosis. One possible risk factor for tuberculosis is diabetes, a condition characterized by high blood sugar levels and long-term complications involving the circulation, eyes and kidneys, and the body's ability to fight infection. 180 million people currently have diabetes, but this number is expected to double by 2030. Low- to middle-income countries (for example, India and China) have the highest burden of tuberculosis and are experiencing the fastest increase in diabetes prevalence. If diabetes does increase the risk of developing active tuberculosis, this overlap between the diabetes and tuberculosis epidemics could adversely affect global tuberculosis control efforts. In this study, the researchers undertake a systematic review (a search using specific criteria to identify relevant research studies, which are then appraised) and a random effects meta-analysis (a type of statistical analysis that pools the results of several studies) to learn more about the association between diabetes and tuberculosis.
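As a sketch of what "pooling" means here (the generic random-effects estimator; the specific between-study variance estimator used by the authors is not stated in this summary), each study's effect is combined on the log relative-risk scale using inverse-variance weights:

\[ \hat{\theta} = \frac{\sum_i w_i\,\hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \hat{\tau}^{2}} \]

where \(\hat{\theta}_i\) is the log relative risk from study \(i\), \(v_i\) is its within-study variance, and \(\hat{\tau}^{2}\) is the estimated between-study variance; the pooled \(\hat{\theta}\) is then exponentiated to give a summary relative risk such as the 3.11 reported above.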
What Did the Researchers Do and Find?
From their search of electronic databases, the researchers found 13 observational studies (nonexperimental investigations that record individual characteristics and health outcomes without trying to influence them in any way) that had examined whether diabetes mellitus increases the risk of active tuberculosis. Diabetes was positively associated with tuberculosis in all but one study, but the estimates of how much diabetes increases the risk of developing active tuberculosis were highly variable, ranging from no effect to an increased risk of nearly 8-fold in one study. The variability may represent true differences between the study populations, as higher increases in risk due to diabetes were found in studies conducted outside of North America, including Central America, Europe, and Asia; or it may reflect differences in how well each study was done. This variability meant that the researchers could not include all of the studies in their meta-analysis. However, the three prospective cohort studies (studies that follow a group of individuals with potential risk factors for a disease over time to see if they develop that disease) that they had identified in their systematic review had more consistent effect estimates and were included in the meta-analysis. This meta-analysis showed that, compared to people without diabetes, people with diabetes had a 3-fold increased risk of developing active tuberculosis.
What Do These Findings Mean?
These findings support the idea that diabetes increases the risk of tuberculosis, a biologically plausible idea because, in experimental and clinical studies, diabetes was found to impair the immune responses needed to control bacterial infections. The 3-fold increased risk of tuberculosis associated with diabetes that the meta-analysis reveals suggests that diabetes may already be responsible for more than 10% of tuberculosis cases in countries such as India and China, a figure that will likely increase as diabetes becomes more common.
However, the estimate of this impact is based on three cohort studies from Asia; other studies suggest that the extent of the impact due to diabetes may vary by region and ethnicity. In populations where diabetes affects the risk of tuberculosis to a similar or greater extent, global tuberculosis control might benefit from active case finding and treatment of dormant tuberculosis in people with diabetes and from increased efforts to diagnose and treat diabetes.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050152.
The US National Institute of Allergy and Infectious Diseases provides information on all aspects of tuberculosis
The US Centers for Disease Control and Prevention provide several fact sheets and other information resources about tuberculosis
The World Health Organization provides information (in several languages) on efforts to reduce the global burden of tuberculosis, including information on the Stop TB Strategy and the 2008 report Global Tuberculosis Control—Surveillance, Planning, Financing
The US Centers for Disease Control and Prevention provides information for the public and professionals on all aspects of diabetes
The US National Institute of Diabetes and Digestive and Kidney Diseases also provides information about diabetes (in English and Spanish)
doi:10.1371/journal.pmed.0050152
PMCID: PMC2459204  PMID: 18630984
25.  Vocal Training Mitigates Age-Related Changes Within the Vocal Mechanism in Old Rats 
Aging affects voice production and is associated with reduced communicative ability and quality of life. Voice therapy is a critical component of treatment, but its effects on neuromuscular mechanisms are unknown. The ultrasonic vocalizations (USVs) of rats can be used to test the effects of aging and voice use on the laryngeal neuromuscular system. This study tested the hypothesis that age-related changes in the USVs of rats and laryngeal neuromuscular junctions can be reversed through vocal exercise. Young and old rats were trained for 8 weeks to increase their USVs and were compared with a no intervention group pre- and post-treatment. USV acoustics and aspects of neuromuscular junction (NMJ) morphology were measured in the thyroarytenoid muscle. Vocal training reduced or eliminated some age differences found in both USVs and NMJs. We conclude that vocal exercise may assist in mitigating age-related changes in voice characteristics and underlying neuromuscular adaptations.
doi:10.1093/gerona/glt044
PMCID: PMC3814239  PMID: 23671289
Voice; Larynx; Ultrasonic vocalization; Neuromuscular junction.
