1.  A Smartphone Client-Server Teleradiology System for Primary Diagnosis of Acute Stroke 
Background
Recent advances in the treatment of acute ischemic stroke have made rapid acquisition, visualization, and interpretation of images a key factor for positive patient outcomes. We have developed a new teleradiology system based on a client-server architecture that enables rapid access to interactive advanced 2-D and 3-D visualization on a current generation smartphone device (Apple iPhone or iPod Touch, or an Android phone) without requiring patient image data to be stored on the device. Instead, a server loads and renders the patient images, then transmits a rendered frame to the remote device.
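To make the render-on-server pattern concrete, the following is a minimal sketch of the idea only, not the authors' system: a toy HTTP service holds a stand-in volume, applies a window/level transform on the server, and returns a rendered PNG frame so that no patient image data is stored on the client. All names, parameters, and the endpoint format are hypothetical.

    # Toy sketch of the render-on-server / send-frame-to-client pattern
    # (NOT the authors' system): the server keeps the volume data, renders a
    # requested slice, and returns it as a PNG frame. Names are hypothetical.
    import io
    import numpy as np
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs
    from PIL import Image

    # Stand-in for a CT volume (slices x rows x cols); real data stays server-side.
    VOLUME = np.random.randint(-1000, 2000, size=(120, 512, 512)).astype(np.int16)

    def render_slice(z, center=40, width=80):
        """Apply a window/level transform and return an 8-bit grayscale frame."""
        sl = VOLUME[z].astype(np.float32)
        lo, hi = center - width / 2, center + width / 2
        frame = np.clip((sl - lo) / (hi - lo), 0, 1) * 255
        return Image.fromarray(frame.astype(np.uint8), mode="L")

    class FrameHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            q = parse_qs(urlparse(self.path).query)
            z = int(q.get("z", ["0"])[0])          # requested slice index
            buf = io.BytesIO()
            render_slice(z).save(buf, format="PNG")  # render on the server
            self.send_response(200)
            self.send_header("Content-Type", "image/png")
            self.end_headers()
            self.wfile.write(buf.getvalue())       # ship only the rendered frame

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), FrameHandler).serve_forever()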
Objective
Our objective was to determine if a new smartphone client-server teleradiology system is capable of providing accuracies and interpretation times sufficient for diagnosis of acute stroke.
Methods
This was a retrospective study. We obtained 120 recent consecutive noncontrast computed tomography (NCCT) brain scans and 70 computed tomography angiogram (CTA) head scans from the Calgary Stroke Program database. Scans were read by two neuroradiologists, one on both a medical diagnostic workstation and an iPod or iPhone (hereafter referred to as an iOS device) and the other only on an iOS device. NCCT brain scans were evaluated for early signs of infarction, which include early parenchymal ischemic changes and dense vessel sign, and to exclude acute intraparenchymal hemorrhage and stroke mimics. CTA brain scans were evaluated for any intracranial vessel occlusion. The interpretations made on an iOS device were compared with those made at a workstation. The total interpretation times were recorded for both platforms. Interrater agreement was assessed. True positives, true negatives, false positives, and false negatives were obtained, and the sensitivity, specificity, and accuracy of detecting the abnormalities on the iOS device were computed.
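As a rough illustration of how such performance measures are derived (a sketch with hypothetical counts, not the study's data; the counts below merely approximate reader 1's reported figures for parenchymal ischemic change):

    # Sketch only: sensitivity, specificity, and accuracy from 2x2 counts,
    # plus Cohen's kappa for two readers' binary calls. All inputs are
    # hypothetical examples.
    def diagnostic_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)          # true positive rate
        specificity = tn / (tn + fp)          # true negative rate
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return sensitivity, specificity, accuracy

    def cohen_kappa(reader_a, reader_b):
        """Chance-corrected agreement between two readers (lists of 0/1)."""
        n = len(reader_a)
        p_obs = sum(x == y for x, y in zip(reader_a, reader_b)) / n
        pa, pb = sum(reader_a) / n, sum(reader_b) / n
        p_exp = pa * pb + (1 - pa) * (1 - pb)  # agreement expected by chance
        return (p_obs - p_exp) / (1 - p_exp)

    print(diagnostic_metrics(tp=32, fp=0, fn=2, tn=71))            # ~0.941, 1.0, 0.981
    print(cohen_kappa([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 0]))     # ~0.67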
Results
The sensitivity, specificity, and accuracy of detecting intraparenchymal hemorrhage were each 100% using the iOS device, with perfect interrater agreement (kappa = 1). The sensitivity, specificity, and accuracy of detecting acute parenchymal ischemic change were 94.1%, 100%, and 98.09%, respectively, for reader 1 and 97.05%, 100%, and 99.04% for reader 2, with nearly perfect interrater agreement (kappa = .8). The sensitivity, specificity, and accuracy of detecting dense vessel sign were 100%, 95.4%, and 96.19%, respectively, for reader 1 and 72.2%, 100%, and 95.23% for reader 2 using the iOS device, with good interrater agreement (kappa = .69). The sensitivity, specificity, and accuracy of detecting vessel occlusion on CT angiography scans were 94.4%, 100%, and 98.46%, respectively, for both readers using the iOS device, with perfect interrater agreement (kappa = 1). No statistically significant difference (at the .05 level) was noted in the interpretation time between the workstation and the iOS device.
Conclusion
The smartphone client-server teleradiology system appears promising and may have the potential to allow urgent management decisions in acute stroke. However, this study was retrospective, involved relatively few patient studies, and only two readers. Generalizing conclusions about its clinical utility, especially in other diagnostic use cases, should not be made until additional studies are performed.
doi:10.2196/jmir.1732
PMCID: PMC3221380  PMID: 21550961
Acute stroke; teleradiology; computed tomography; mhealth; mobile phone
2.  Prediction accuracy of a sample-size estimation method for ROC studies 
Academic Radiology  2010;17(5):628-638.
Rationale and Objectives
Sample-size estimation is an important consideration when planning a receiver operating characteristic (ROC) study. The aim of this work was to assess the prediction accuracy of a sample-size estimation method using the Monte Carlo simulation method.
Materials and Methods
Two ROC ratings simulators characterized by low reader and high case variabilities (LH) and high reader and low case variabilities (HL) were used to generate pilot data sets in 2 modalities. Dorfman-Berbaum-Metz multiple-reader multiple-case (DBM-MRMC) analysis of the ratings yielded estimates of the modality-reader, modality-case, and error variances. These were input to the Hillis-Berbaum (HB) sample-size estimation method, which predicted the number of cases needed to achieve 80% power for 10 readers and an effect size of 0.06 in the pivotal study. Predictions that generalized to readers and cases (random-all), to cases only (random-cases), and to readers only (random-readers) were generated. A prediction-accuracy index, defined as the probability that any single prediction yields true power in the range 75% to 90%, was used to assess the HB method.
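As a minimal sketch of the prediction-accuracy index defined above (not the authors' code, and assuming the Monte Carlo loop that yields each prediction's realized power has already been run elsewhere), the index is simply the fraction of predictions whose true power falls in the stated range:

    import numpy as np

    def prediction_accuracy(true_powers, lo=0.75, hi=0.90):
        """Fraction of sample-size predictions whose realized (true) power
        lands in [lo, hi]; for illustration only."""
        p = np.asarray(true_powers, dtype=float)
        return float(np.mean((p >= lo) & (p <= hi)))

    # Stand-in for true-power values obtained from repeated pilot-study
    # simulations (hypothetical distribution, not the paper's results):
    sim_powers = np.random.default_rng(0).normal(loc=0.85, scale=0.10, size=2000)
    print(prediction_accuracy(sim_powers))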
Results
For random-case generalization, the HB-method prediction accuracy was reasonable, ~50% for 5 readers in the pilot study. Prediction accuracy was generally higher under low reader variability conditions (LH) than under high reader variability conditions (HL). Under ideal conditions (many readers in the pilot study) the DBM-MRMC based HB method overestimated the number of cases. The overestimates could be explained by the observed large variability of the DBM-MRMC modality-reader variance estimates, particularly when reader variability was large (HL). The largest benefit of increasing the number of readers in the pilot study was realized for LH, where 15 readers were enough to yield prediction accuracy > 50% under all generalization conditions, but the benefit was smaller for HL, where prediction accuracy was ~36% for 15 readers under random-all and random-reader conditions.
Conclusion
The HB method tends to overestimate the number of cases. Random-case generalization had reasonable prediction accuracy. Provided about 15 readers were used in the pilot study, the method performed reasonably well under all conditions for LH. When reader variability was large, the prediction accuracy for random-all and random-reader generalizations was compromised. Study designers may wish to compare the HB predictions to those of other methods and to sample sizes used in previous similar studies.
doi:10.1016/j.acra.2010.01.007
PMCID: PMC2867097  PMID: 20380980
ROC; sample-size; methodology assessment; statistical power; DBM; MRMC; simulation; Monte Carlo
3.  Clinical Utility of Vitamin D Testing 
Executive Summary
This report from the Medical Advisory Secretariat (MAS) was intended to evaluate the clinical utility of vitamin D testing in average risk Canadians and in those with kidney disease. As a separate analysis, this report also includes a systematic literature review of the prevalence of vitamin D deficiency in these two subgroups.
This evaluation did not set out to determine the serum vitamin D thresholds that might apply to non-bone health outcomes. For bone health outcomes, no high or moderate quality evidence could be found to support a target serum level above 50 nmol/L. Similarly, no high or moderate quality evidence could be found to support vitamin D’s effects in non-bone health outcomes, other than falls.
Vitamin D
Vitamin D is a lipid-soluble vitamin that acts as a hormone. It stimulates intestinal calcium absorption and is important in maintaining adequate phosphate levels for bone mineralization, bone growth, and remodelling. It is also believed to be involved in the regulation of cell growth, proliferation, and apoptosis (programmed cell death), as well as in modulation of the immune system and other functions. Alone or in combination with calcium, vitamin D has also been shown to reduce the risk of fractures in elderly men (≥ 65 years) and postmenopausal women, and the risk of falls in community-dwelling seniors. However, a comprehensive systematic review found inconsistent results concerning the effects of vitamin D in conditions such as cancer, all-cause mortality, and cardiovascular disease. In fact, no high or moderate quality evidence could be found concerning the effects of vitamin D in such non-bone health outcomes. Given the uncertainties surrounding the effects of vitamin D in non-bone health related outcomes, it was decided that this evaluation should focus on falls and the effects of vitamin D on bone health, exclusively within average-risk individuals and patients with kidney disease.
Synthesis of vitamin D occurs naturally in the skin through exposure to ultraviolet B (UVB) radiation from sunlight, but it can also be obtained from dietary sources including fortified foods, and supplements. Foods rich in vitamin D include fatty fish, egg yolks, fish liver oil, and some types of mushrooms. Since it is usually difficult to obtain sufficient vitamin D from non-fortified foods, either due to low content or infrequent use, most vitamin D is obtained from fortified foods, exposure to sunlight, and supplements.
Clinical Need: Condition and Target Population
Vitamin D deficiency may lead to rickets in infants and osteomalacia in adults. Factors believed to be associated with vitamin D deficiency include:
darker skin pigmentation,
winter season,
living at higher latitudes,
skin coverage,
kidney disease,
malabsorption syndromes such as Crohn’s disease and cystic fibrosis, and
genetic factors.
Patients with chronic kidney disease (CKD) are at a higher risk of vitamin D deficiency due to either renal losses or decreased synthesis of 1,25-dihydroxyvitamin D.
Health Canada currently recommends that, until the dietary reference intakes (DRIs) for vitamin D are updated, Canada’s Food Guide (Eating Well with Canada’s Food Guide) should be followed with respect to vitamin D intake. Issued in 2007, the Guide recommends that Canadians consume two cups (500 ml) of fortified milk or fortified soy beverages daily in order to obtain a daily intake of 200 IU. In addition, men and women over the age of 50 should take 400 IU of vitamin D supplements daily. Additional recommendations were made for breastfed infants.
A Canadian survey evaluated the median vitamin D intake derived from diet alone (excluding supplements) among 35,000 Canadians, 10,900 of whom were from Ontario. Among Ontarian males ages 9 and up, the median dietary vitamin D intake ranged between 196 IU and 272 IU per day. Among females, it varied from 152 IU to 196 IU per day. In boys and girls ages 1 to 3, the median daily dietary vitamin D intake was 248 IU, while among those 4 to 8 years it was 224 IU.
Vitamin D Testing
Two laboratory tests for vitamin D are available: 25-hydroxy vitamin D, referred to as 25(OH)D, and 1,25-dihydroxyvitamin D. Vitamin D status is assessed by measuring serum 25(OH)D levels, which can be assayed using radioimmunoassays, competitive protein-binding assays (CPBA), high pressure liquid chromatography (HPLC), and liquid chromatography-tandem mass spectrometry (LC-MS/MS). These may yield different results, with inter-assay variation reaching up to 25% (at lower serum levels) and intra-assay variation reaching 10%.
The optimal serum concentration of vitamin D has not been established and it may change across different stages of life. Similarly, there is currently no consensus on target serum vitamin D levels. There does, however, appear to be a consensus on the definition of vitamin D deficiency at 25(OH)D < 25 nmol/L, which is based on the risk of diseases such as rickets and osteomalacia. Higher target serum levels have also been proposed based on subclinical endpoints such as parathyroid hormone (PTH). Therefore, in this report, two conservative target serum levels have been adopted, 25 nmol/L (based on the risk of rickets and osteomalacia), and 40 to 50 nmol/L (based on vitamin D’s interaction with PTH).
Ontario Context
Volume & Cost
The volume of vitamin D tests done in Ontario has been increasing over the past 5 years, with a steep increase from 169,000 tests in 2007 to more than 393,400 tests in 2008. The number of tests continues to rise, with the projected number of tests for 2009 exceeding 731,000. According to the Ontario Schedule of Benefits, the billing cost of each test is $51.70 for 25(OH)D (L606, 100 LMS units, $0.517/unit) and $77.60 for 1,25-dihydroxyvitamin D (L605, 150 LMS units, $0.517/unit). Province-wide, the total annual cost of vitamin D testing has increased from approximately $1.7M in 2004 to over $21.0M in 2008. The projected annual cost for 2009 is approximately $38.8M.
Evidence-Based Analysis
The objective of this report is to evaluate the clinical utility of vitamin D testing in the average risk population and in those with kidney disease. As a separate analysis, the report also sought to evaluate the prevalence of vitamin D deficiency in Canada. The specific research questions addressed were thus:
What is the clinical utility of vitamin D testing in the average risk population and in subjects with kidney disease?
What is the prevalence of vitamin D deficiency in the average risk population in Canada?
What is the prevalence of vitamin D deficiency in patients with kidney disease in Canada?
Clinical utility was defined as the ability to improve bone health outcomes with the focus on the average risk population (excluding those with osteoporosis) and patients with kidney disease.
Literature Search
A literature search was performed on July 17th, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 1998 until July 17th, 2009. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, and then with a group of epidemiologists, until consensus was established. The quality of evidence was assessed as high, moderate, low, or very low according to GRADE methodology.
Observational studies that evaluated the prevalence of vitamin D deficiency in Canada in the population of interest were included based on the inclusion and exclusion criteria listed below. For interventional studies that evaluated the effect of vitamin D intake on serum levels, the baseline values were used in this report. Studies published in the grey literature were included if no studies published in the peer-reviewed literature were identified for specific outcomes or subgroups.
Considering that vitamin D status may be affected by factors such as latitude, sun exposure, and food fortification, the search focused on prevalence studies conducted in Canada. In cases where no Canadian prevalence studies were identified, the decision was made to include studies from the United States, given the similar policies in vitamin D food fortification and recommended daily intake.
Inclusion Criteria
Studies published in English
Publications that reported the prevalence of vitamin D deficiency in Canada
Studies that included subjects from the general population or with kidney disease
Studies in children or adults
Studies published between January 1, 1998 and July 17th, 2009
Exclusion Criteria
Studies that included subjects defined according to a specific disease other than kidney disease
Letters, comments, and editorials
Studies that measured the serum vitamin D levels but did not report the percentage of subjects with serum levels below a given threshold
Outcomes of Interest
Prevalence of serum vitamin D less than 25 nmol/L
Prevalence of serum vitamin D less than 40 to 50 nmol/L
Serum 25-hydroxyvitamin D was the metabolite used to assess vitamin D status. Results from studies of adults and of children were reported separately. Subgroup analyses according to factors that affect serum vitamin D levels (e.g., seasonal effects, skin pigmentation, and vitamin D intake) were reported if enough information was provided in the studies.
Quality of Evidence
The quality of the prevalence studies was based on the method of subject recruitment and sampling, possibility of selection bias, and generalizability to the source population. The overall quality of the trials was examined according to the GRADE Working Group criteria.
Summary of Findings
Fourteen prevalence studies examining Canadian adults and children met the eligibility criteria. With the exception of one longitudinal study, the studies had a cross-sectional design. Two studies were conducted among Canadian adults with renal disease but none studied Canadian children with renal disease (though three such US studies were included). No systematic reviews or health technology assessments that evaluated the prevalence of vitamin D deficiency in Canada were identified. Two studies were published in grey literature, consisting of a Canadian survey designed to measure serum vitamin D levels and a study in infants presented as an abstract at a conference. Also included were the results of vitamin D tests performed in community laboratories in Ontario between October 2008 and September 2009 (provided by the Ontario Association of Medical Laboratories).
Different threshold levels were used in the studies; we therefore reported the percentage of subjects with serum levels between 25 and 30 nmol/L and between 37.5 and 50 nmol/L. Some studies stratified the results according to factors affecting vitamin D status, and two used multivariate models to investigate the effects of these characteristics (including age, season, BMI, vitamin D intake, and skin pigmentation) on serum 25(OH)D levels. It is unclear, however, whether these studies were adequately powered for these subgroup analyses.
Study participants generally consisted of healthy, community-dwelling subjects, and most studies excluded individuals with conditions or medications that alter vitamin D or bone metabolism, such as kidney or liver disease. Although the studies were conducted in different parts of Canada, fewer were performed in Northern latitudes, i.e., above 53°N, roughly the latitude of Edmonton.
Adults
Serum vitamin D levels of < 25 to 30 nmol/L were observed in 0% to 25.5% of the subjects included in five studies; the weighted average was 3.8% (95% CI: 3.0, 4.6). The preliminary results of the Canadian survey showed that approximately 5% of the subjects had serum levels below 29.5 nmol/L. The results of over 600,000 vitamin D tests performed in Ontarian community laboratories between October 2008 and September 2009 showed that 2.6% of adults (> 18 years) had serum levels < 25 nmol/L.
The prevalence of serum vitamin D levels below 37.5-50 nmol/L reported among studies varied widely, ranging from 8% to 73.6%, with a weighted average of 22.5%. The preliminary results of the Canadian Health Measures Survey (CHMS) showed that between 10% and 25% of subjects had serum levels below 37 to 48 nmol/L. The results of the vitamin D tests performed in community laboratories showed that 10% to 25% of the individuals had serum levels between 39 and 50 nmol/L.
In an attempt to explain this inter-study variation, the study results were stratified according to factors affecting serum vitamin D levels, as summarized below. These results should be interpreted with caution as none were adjusted for other potential confounders. Adequately powered multivariate analyses would be necessary to determine the contribution of risk factors to lower serum 25(OH)D levels.
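As an illustrative sketch of how a pooled figure such as the weighted averages above can be formed (hypothetical counts, not the report's data, and assuming weights proportional to each study's sample size, since the report does not state its exact weighting scheme):

    import math

    def pooled_prevalence(events, totals):
        """Sample-size-weighted average prevalence with a rough
        normal-approximation 95% CI; illustrative only."""
        n = sum(totals)
        p = sum(events) / n
        se = math.sqrt(p * (1 - p) / n)
        return p, (p - 1.96 * se, p + 1.96 * se)

    # Hypothetical per-study counts of subjects below a 25-30 nmol/L threshold:
    prev, ci = pooled_prevalence(events=[2, 15, 9, 30, 12],
                                 totals=[180, 420, 250, 600, 350])
    print(f"weighted prevalence {prev:.1%}, 95% CI {ci[0]:.1%} to {ci[1]:.1%}")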
Seasonal variation
Three adult studies evaluating serum vitamin D levels in different seasons observed a trend towards a higher prevalence of serum levels < 37.5 to 50 nmol/L during the winter and spring months, specifically 21% to 39%, compared to 8% to 14% in the summer. The weighted average was 23.6% over the winter/spring months and 9.6% over summer. The difference between the seasons was not statistically significant in one study and not reported in the other two studies.
Skin Pigmentation
Four studies observed a trend toward a higher prevalence of serum vitamin D levels < 37.5 to 50 nmol/L in subjects with darker skin pigmentation compared to those with lighter skin pigmentation, with weighted averages of 46.8% among adults with darker skin colour and 15.9% among those with fairer skin.
Vitamin D intake and serum levels
Four adult studies evaluated serum vitamin D levels according to vitamin D intake and showed an overall trend toward a lower prevalence of serum levels < 37.5 to 50 nmol/L with higher levels of vitamin D intake. One study observed a dose-response relationship between serum vitamin D levels and vitamin D intake from supplements, diet (milk), and sun exposure (results not adjusted for other variables). Subjects taking 50 to 400 IU or > 400 IU of vitamin D per day had a 6% and 3% prevalence of serum vitamin D levels < 40 nmol/L, respectively, versus 29% in subjects not on vitamin D supplementation. Similarly, among subjects drinking one or two glasses of milk per day, the prevalence of serum vitamin D levels < 40 nmol/L was 15%, versus 6% in those who drank more than two glasses of milk per day and 21% among those who did not drink milk. On the other hand, one study observed little variation in serum vitamin D levels during winter according to milk intake, with the proportion of subjects exhibiting vitamin D levels < 40 nmol/L being 21% among those drinking 0 to 2 glasses per day, 26% among those drinking > 2 glasses, and 20% among non-milk drinkers.
The overall quality of evidence for the studies conducted among adults was deemed to be low, although it was considered moderate for the subgroups of skin pigmentation and seasonal variation.
Newborn, Children and Adolescents
Five Canadian studies evaluated serum vitamin D levels in newborns, children, and adolescents. In four of these, it was found that between 0 and 36% of children exhibited deficiency across age groups with a weighted average of 6.4%. The results of over 28,000 vitamin D tests performed in children 0 to 18 years old in Ontario laboratories (Oct. 2008 to Sept. 2009) showed that 4.4% had serum levels of < 25 nmol/L.
According to two studies, 32% of infants 24 to 30 months old and 35.3% of newborns had serum vitamin D levels of < 50 nmol/L. Two studies of children 2 to 16 years old reported that 24.5% and 34% had serum vitamin D levels below 37.5 to 40 nmol/L. In both studies, older children exhibited a higher prevalence than younger children, with weighted averages of 34.4% and 10.3%, respectively. The overall weighted average of the prevalence of serum vitamin D levels < 37.5 to 50 nmol/L among pediatric studies was 25.8%. The preliminary results of the Canadian survey showed that between 10% and 25% of subjects between 6 and 11 years (N = 435) had serum levels below 50 nmol/L, while for those 12 to 19 years, 25% to 50% exhibited serum vitamin D levels below 50 nmol/L.
The effects of season, skin pigmentation, and vitamin D intake were not explored in Canadian pediatric studies. A Canadian surveillance study did, however, report 104 confirmed cases (2.9 cases per 100,000 children) of vitamin D-deficient rickets among Canadian children aged 1 to 18 years between 2002 and 2004, 57 (55%) of which were from Ontario. The highest incidence occurred among children living in the North, i.e., the Yukon, Northwest Territories, and Nunavut. In 92 (89%) cases, skin pigmentation was categorized as intermediate to dark, 98 (94%) had been breastfed, and 25 (24%) were offspring of immigrants to Canada. There were no cases of rickets in children receiving ≥ 400 IU of vitamin D supplementation per day.
Overall, the quality of evidence of the studies of children was considered very low.
Kidney Disease
Adults
Two studies evaluated serum vitamin D levels in Canadian adults with kidney disease. The first included 128 patients with chronic kidney disease stages 3 to 5, 38% of whom had serum vitamin D levels of < 37.5 nmol/L (measured between April and July). This is higher than what was reported in Canadian studies of the general population during the summer months (i.e., between 8% and 14%). In the second, which examined 419 subjects who had received a renal transplant (mean time since transplantation: 7.2 ± 6.4 years), the prevalence of serum vitamin D levels < 40 nmol/L was 27.3%. The authors concluded that the prevalence observed in the study population was similar to what is expected in the general population.
Children
No studies evaluating serum vitamin D levels in Canadian pediatric patients with kidney disease could be identified, although three such US studies among children with chronic kidney disease stages 1 to 5 were found. The mean age varied between 10.7 and 12.5 years in two studies but was not reported in the third. Across all three studies, the prevalence of serum vitamin D levels below the range of 37.5 to 50 nmol/L varied between 21% and 39%, which is not considerably different from what was observed in studies of healthy Canadian children (24% to 35%).
Overall, the quality of evidence in adults and children with kidney disease was considered very low.
Clinical Utility of Vitamin D Testing
A high quality comprehensive systematic review published in August 2007 evaluated the association between serum vitamin D levels and different bone health outcomes in different age groups. A total of 72 studies were included. The authors observed that there was a trend towards improvement in some bone health outcomes with higher serum vitamin D levels. Nevertheless, precise thresholds for improved bone health outcomes could not be defined across age groups. Further, no new studies on the association were identified during an updated systematic review on vitamin D published in July 2009.
With regard to non-bone health outcomes, there is no high or even moderate quality evidence that supports the effectiveness of vitamin D in outcomes such as cancer, cardiovascular outcomes, and all-cause mortality. Even if some residual uncertainty remains, there is no evidence that testing vitamin D levels encourages adherence to Health Canada’s guidelines for vitamin D intake. A normal serum vitamin D threshold required to prevent non-bone health related conditions cannot be established until a causal effect or correlation has been demonstrated between vitamin D levels and these conditions. This is an ongoing research issue around which there is currently too much uncertainty to base any conclusions that would support routine vitamin D testing.
For patients with chronic kidney disease (CKD), there is again no high or moderate quality evidence supporting improved outcomes through the use of calcitriol or vitamin D analogs. In the absence of such data, the authors of the guidelines for CKD patients consider it best practice to maintain serum calcium and phosphate at normal levels, while supplementation with active vitamin D should be considered if serum PTH levels are elevated. As previously stated, the authors of guidelines for CKD patients believe that there is not enough evidence to support routine vitamin D [25(OH)D] testing. According to what is stated in the guidelines, decisions regarding the commencement or discontinuation of treatment with calcitriol or vitamin D analogs should be based on serum PTH, calcium, and phosphate levels.
Limitations associated with the evidence on vitamin D testing include ambiguities in the definition of an ‘adequate threshold level’ and both inter- and intra-assay variability. The MAS considers that both the lack of consensus on target serum vitamin D levels and these assay limitations directly undermine the clinical utility of testing. The evidence supporting the clinical utility of vitamin D testing is thus considered to be of very low quality.
Daily vitamin D intake, either through diet or supplementation, should follow Health Canada’s recommendations for healthy individuals of different age groups. For those with medical conditions such as renal disease, liver disease, and malabsorption syndromes, and for those taking medications that may affect vitamin D absorption/metabolism, physician guidance should be followed with respect to both vitamin D testing and supplementation.
Conclusions
Studies indicate that vitamin D, alone or in combination with calcium, may decrease the risk of fractures and falls among older adults.
There is no high or moderate quality evidence to support the effectiveness of vitamin D in other outcomes such as cancer, cardiovascular outcomes, and all-cause mortality.
Studies suggest that the prevalence of vitamin D deficiency in Canadian adults and children is relatively low (approximately 5%), and between 10% and 25% have serum levels below 40 to 50 nmol/L (based on very low to low grade evidence).
Given the limitations associated with serum vitamin D measurement, ambiguities in the definition of a ‘target serum level’, and the availability of clear guidelines on vitamin D supplementation from Health Canada, vitamin D testing is not warranted for the average risk population.
Health Canada has issued recommendations regarding the adequate daily intake of vitamin D, but current studies suggest that the mean dietary intake is below these recommendations. Accordingly, Health Canada’s guidelines and recommendations should be promoted.
Based on a moderate level of evidence, individuals with darker skin pigmentation appear to have a higher risk of low serum vitamin D levels than those with lighter skin pigmentation and therefore may need to be specially targeted with respect to optimum vitamin D intake. The cause-and-effect nature of this association is currently unclear.
Individuals with medical conditions such as renal and liver disease, osteoporosis, and malabsorption syndromes, as well as those taking medications that may affect vitamin D absorption/metabolism, should follow their physician’s guidance concerning both vitamin D testing and supplementation.
PMCID: PMC3377517  PMID: 23074397
4.  The NIEHS Predictive-Toxicology Evaluation Project. 
Environmental Health Perspectives  1996;104(Suppl 5):1001-1010.
The Predictive-Toxicology Evaluation (PTE) project conducts collaborative experiments that subject the performance of predictive-toxicology (PT) methods to rigorous, objective evaluation in a uniquely informative manner. Sponsored by the National Institute of Environmental Health Sciences, it takes advantage of the ongoing testing conducted by the U.S. National Toxicology Program (NTP) to estimate the true error of models that have been applied to make prospective predictions on previously untested, noncongeneric chemical substances. The PTE project first identifies a group of standardized NTP chemical bioassays that are either scheduled to be conducted or ongoing but not yet complete. The project then announces and advertises the evaluation experiment, disseminates information about the chemical bioassays, and encourages researchers from a wide variety of disciplines to publish their predictions in peer-reviewed journals, using whatever approaches and methods they feel are best. A collection of such papers is published in this Environmental Health Perspectives Supplement, providing readers the opportunity to compare and contrast PT approaches and models within the context of their prospective application to an actual-use situation. This introduction to the collection summarizes the predictions made and the final results obtained for the 44 chemical carcinogenesis bioassays of the first PTE experiment (PTE-1) and presents information that identifies the 30 chemical carcinogenesis bioassays of PTE-2, along with a table of prediction sets that have been published to date. It also provides background about the origin and goals of the PTE project, outlines the special challenge associated with estimating the true error of models that aspire to predict open-system behavior, and summarizes what has been learned to date.
PMCID: PMC1469687  PMID: 8933048
5.  Difficulty in detecting discrepancies in a clinical trial report: 260-reader evaluation 
Background: Scientific literature can contain errors. Discrepancies, defined as two or more statements or results that cannot both be true, may be a signal of problems with a trial report. In this study, we report how many discrepancies are detected by a large panel of readers examining a trial report containing a large number of discrepancies.
Methods: We approached a convenience sample of 343 journal readers in seven countries, and invited them in person to participate in a study. They were asked to examine the tables and figures of one published article for discrepancies. 260 participants agreed, ranging from medical students to professors. The discrepancies they identified were tabulated and counted. There were 39 different discrepancies identified. We evaluated the probability of discrepancy identification, and whether more time spent or greater participant experience as academic authors improved the ability to detect discrepancies.
Results: Overall, 95.3% of discrepancies were missed. Most participants (62%) were unable to find any discrepancies. Only 11.5% noticed more than 10% of the discrepancies. More discrepancies were noted by participants who spent more time on the task (Spearman’s ρ = 0.22, P < 0.01), and those with more experience of publishing papers (Spearman’s ρ = 0.13 with number of publications, P = 0.04).
Conclusions: Noticing discrepancies is difficult. Most readers miss most discrepancies even when asked specifically to look for them. The probability of a discrepancy evading an individual sensitized reader is 95%, making it important that, when problems are identified after publication, readers are able to communicate with each other. When made aware of discrepancies, the majority of readers support editorial action to correct the scientific record.
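To illustrate why reader-to-reader communication matters, a small back-of-the-envelope calculation using the 95% single-reader miss probability reported above (it assumes readers miss discrepancies independently, an assumption made here for illustration and not modelled in the study):

    # Probability that one discrepancy evades ALL of n sensitized readers,
    # under an illustrative independence assumption (not part of the study).
    miss_single = 0.95
    for n in (1, 5, 20, 100):
        print(f"{n:>3} readers: {miss_single ** n:.3f}")
    # 1 reader: 0.950, 5: 0.774, 20: 0.358, 100: 0.006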
doi:10.1093/ije/dyv114
PMCID: PMC4521134  PMID: 26174517
Peer review; retraction of publication; clinical governance; patient safety
6.  Advice from a Medical Expert through the Internet on Queries about AIDS and Hepatitis: Analysis of a Pilot Experiment 
PLoS Medicine  2006;3(7):e256.
Background
Advice from a medical expert on concerns and queries expressed anonymously through the Internet by patients and later posted on the Web, offers a new type of patient–doctor relationship. The aim of the current study was to perform a descriptive analysis of questions about AIDS and hepatitis made to an infectious disease expert and sent through the Internet to a consumer-oriented Web site in the Spanish language.
Methods and Findings
Questions were e-mailed and the questions and answers were posted anonymously in the “expert-advice” section of a Web site focused on AIDS and hepatitis. We performed a descriptive study and a temporal analysis of the questions received in the first 12 months after the launch of the site. A total of 899 questions were received from December 2003 to November 2004, with a marked linear growth pattern. Questions originated in Spain in 68% of cases and 32% came from Latin America (the Caribbean, Central America, and South America). Eighty percent of the senders were male. Most of the questions concerned HIV infection (79%) with many fewer on hepatitis (17%). The highest numbers of questions were submitted just after the weekend (37% of questions were made on Mondays and Tuesdays). Risk factors for contracting HIV infection were the most frequent concern (69%), followed by the window period for detection (12.6%), laboratory results (5.9%), symptoms (4.7%), diagnosis (2.7%), and treatment (2.2%).
Conclusions
Our results confirm a great demand for this type of “ask-the-expert” Internet service, at least for AIDS and hepatitis. Factors such as anonymity, free access, and immediate answers have been key factors in its success.
Editors' Summary
Background.
Although substantial progress has been made in the fight against HIV/AIDS, in terms of developing new treatments and understanding factors that cause the disease to worsen, putting this knowledge into practice can be difficult. Two main barriers exist that can prevent individuals seeking information or treatment. The first is the considerable social stigma still associated with HIV; the second is the poverty of the developing countries—such as those in Latin America—where the disease has reached pandemic proportions. In addition, the disease, which used to be spread mainly through the sharing of injecting drug needles or through sex between men, has now entered the general population. When healthcare services are limited, people are often unable to seek information about HIV, and even when services do exist, the cost of accessing them can be too high. The same is true for other diseases such as hepatitis infection, which often co-exists with HIV. The Internet has the potential to go some way to filling this health information gap. And, many patients seek information on the Internet before consulting their doctor.
Why Was This Study Done?
In 2003, the Madrid-based newspaper El Mundo launched an HIV and hepatitis information resource situated in the health section of its existing Web site. One aspect of this resource was an “ask-the-expert” section, in which readers could anonymously e-mail questions about HIV and hepatitis that would be answered by an infectious disease expert. These ranged from how the diseases can be transmitted and who is most at risk, to what to do if an individual thinks they might have the disease. There seems to be a clear need for this Spanish-language service; in Latin America, 2.1 million people are infected with HIV, with 230,000 new cases in 2005. In the Caribbean, AIDS is the leading cause of death in people aged 15–44 years. In Spain, 71,000 people were infected with HIV in 2005. Although the Internet contains a vast store of health information, and many aspects of patient–doctor interactions have been made electronic, little is known about what format is ideal. The researchers, who included employees of the newspaper, decided to investigate the effectiveness of the question–answer format used by El Mundo.
What Did the Researchers Do and Find?
In the first 12 months after the service was launched, the researchers recorded several details: what day of the week questions were sent, what the questions were about, and whether they were sent by the person needing the information or by a family member or friend. They also noted demographic information, such as the age, sex, and country of origin of the person e-mailing the question.
Of 899 questions sent to the Web site between December 2003 and November 2004, most (80%) were sent by males. Most questions came from Spain, followed by Latin America, and most questions were sent on Mondays and Tuesdays. Some e-mails were from people who felt they had been waiting too long for an answer to their first e-mail, despite the mean time for answering a question being less than seven days. Messages of support for the Web site rose during the year from 2% to 22%.
What Do These Findings Mean?
The messages of support and encouragement sent in by users indicated that the service was well received and useful. Most of the questions were about HIV rather than about hepatitis, which the researchers say could reflect the more prominent media coverage of HIV. However, despite the disease's high profile, the questions about HIV were very basic. It could also mean that people hold a false impression that hepatitis is a less serious illness or that they have more information about it than about HIV.
Since most questions were sent in at the start of the week, the researchers believe that many individuals wrote in after engaging in potentially risky sexual behaviour over the weekend.
The researchers also found that existing information on the Web site already answered many of the new questions, indicating that people prefer a question-and-answer model over ready-prepared information. The anonymity, free access, and immediacy of the Internet-based service suggest this could be a model for providing other types of health information.
The findings also suggest that such a service can highlight the needs and concerns of specific populations and can help health planners and policymakers respond to those needs in their countries.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030256.
• The AIDSinfo Web site from the US Department of Health and Human Services provides information on all aspects of HIV/AIDS treatment and prevention and has sections specially written for patients and the general public
• AVERT, an international AIDS charity, has a section on HIV in Latin America that includes details of transmission, infection rates, and treatment
Marco and colleagues analyzed questions sent by the public to a Spanish language "ask-the-expert" Internet site, and found that 70% of queries were about risk factors for acquiring HIV.
doi:10.1371/journal.pmed.0030256
PMCID: PMC1483911  PMID: 16796404
7.  The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality 
PLoS Medicine  2007;4(1):e40.
Background
Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings
306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
Conclusions
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely that journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
A survey of experienced reviewers, asked about training they had received in peer review, found there are no easily identifiable types of formal training and experience that predict reviewer performance.
Editors' Summary
Background.
When medical researchers have concluded their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical use. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are of good quality and accurate and whether they will be of interest to the readers of their journal. To do this they need to obtain specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as “peer reviewers.” Although the editors make the final decision, the advice and criticism of these peer reviewers to the editors is essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to the article by the time it is finally published may, therefore, be quite considerable.
Although peer review is accepted as a key part of the process for the publishing of medical research, many people have argued that there are flaws in the system. For example, there may be an element of luck involved; one author might find their paper being reviewed by a reviewer who is biased against the approach they have adopted or who is a very critical person by nature, and another author may have the good fortune to have their work considered by someone who is much more favorably disposed toward their work. Some reviewers are more knowledgeable and thorough in their work than others. The editors of medical journals try to take such biases and quality factors into account in their choice of peer reviewers or when assessing the reviews. Some journals have run training courses for experts who review for them regularly to try to make the standard of peer review as high as possible.
Why Was This Study Done?
It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare them with the quality of the reviews provided. This would help journal editors select good people for the task in future, and as a result will affect the quality of science they publish for readers, including other researchers.
What Did the Researchers Do and Find?
The authors contacted all the regular reviewers from one specialist journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.
What Do These Findings Mean?
This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors therefore suggest that it is essential that journals routinely monitor the quality of reviews submitted to them to ensure they are getting good advice (a practice that is not universal).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040040
• WAME is an association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals
• The Fifth International Congress on Peer Review and Biomedical Publication is one of a series of conferences on peer review
• The PLoS Medicine guidelines for reviewers outline what we look for in a review
• The Council of Science Editors promotes ethical scientific publishing practices
• An editorial also published in this issue of PLoS Medicine discusses the peer review process further
doi:10.1371/journal.pmed.0040040
PMCID: PMC1796627  PMID: 17411314
8.  A qualitative study of motivators and barriers to healthy eating in pregnancy for low-income, overweight, African-American mothers 
Poor diet quality is common among low-income, overweight, African-American mothers, placing them at high risk for adverse pregnancy outcomes. We sought to better understand the contextual factors that may influence low-income African-American mothers' diet quality during pregnancy. In 2011, we conducted semi-structured interviews with 21 overweight/obese, pregnant African Americans in Philadelphia, all of whom received Medicaid and were eligible for the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Two readers independently coded the interview transcripts to identify recurrent themes. We identified ten themes around motivators and barriers to healthy eating in pregnancy. Mothers believed that consuming healthy foods, like fruits and vegetables, would lead to healthy babies and limit the physical discomforts of pregnancy. However, more often than not, mothers chose foods that were high in fats and sugars because of taste, cost, and convenience. Additionally, mothers had several misconceptions about the definition of healthy (e.g., “juice is good for baby”), which led to overconsumption. Many mothers feared they might “starve” their babies if they did not get enough to eat, promoting persistent snacking and larger portions. Living in multigenerational households and sharing resources also limited mothers' control over food choices and made consuming healthy foods especially difficult. Despite the good intentions of low-income African-American mothers to improve diet quality during pregnancy, multiple factors worked together as barriers to healthy eating. Interventions that emphasize tasty and affordable healthy food substitutes, address misconceptions, and counsel mothers about true energy needs in pregnancy may improve low-income, African-American, overweight/obese mothers' diet quality.
doi:10.1016/j.jand.2013.05.014
PMCID: PMC3782301  PMID: 23871106
Pregnancy; Diet quality; Low-income; Motivators; Barriers
9.  Effects of iron supplementation and anthelmintic treatment on motor and language development of preschool children in Zanzibar: double blind, placebo controlled study 
BMJ : British Medical Journal  2001;323(7326):1389.
Objective
To measure the effects of iron supplementation and anthelmintic treatment on iron status, anaemia, growth, morbidity, and development of children aged 6-59 months.
Design
Double blind, placebo controlled randomised factorial trial of iron supplementation and anthelmintic treatment.
Setting
Community in Pemba Island, Zanzibar.
Participants
614 preschool children aged 6-59 months.
Main outcome measures
Development of language and motor skills assessed by parental interview before and after treatment in age appropriate subgroups.
Results
Before intervention, anaemia was prevalent and severe, geohelminth infections were prevalent but light, and Plasmodium falciparum infection was nearly universal. Iron supplementation significantly improved iron status, but not haemoglobin status. Iron supplementation improved language development by 0.8 (95% confidence interval 0.2 to 1.4) points on the 20 point scale. Iron supplementation also improved motor development, but this effect was modified by baseline haemoglobin concentrations (P=0.015 for interaction term) and was apparent only in children with baseline haemoglobin concentrations <90 g/l. In children with a baseline haemoglobin concentration of 68 g/l (one standard deviation below the mean value), iron treatment increased scores by 1.1 (0.1 to 2.1) points on the 18 point motor scale. Mebendazole significantly reduced the number and severity of infections caused by Ascaris lumbricoides and Trichuris trichiura, but not by hookworms. Mebendazole increased development scores by 0.4 (−0.3 to 1.1) points on the motor scale and 0.3 (−0.3 to 0.9) points on the language scale.
Conclusions
Iron supplementation improved motor and language development of preschool children in rural Africa. The effects of iron on motor development were limited to children with more severe anaemia (baseline haemoglobin concentration <90 g/l). Mebendazole had a positive effect on motor and language development, but this was not statistically significant.
What is already known on this topic: Iron is needed for development and functioning of the human brain. Anaemic children show developmental delays, but it is not yet clear whether iron deficiency causes these deficits or whether iron supplementation can reverse them. Helminth infections in schoolchildren are associated with cognitive deficits, but few studies have been made of helminth infection and early child development.
What this study adds: Low doses of oral iron supplementation given daily improved language development in children aged 1-4 years in Zanzibar. Iron supplementation improved motor development, but only in children with initial haemoglobin concentrations below 90 g/l. The effects of routine anthelmintic treatment on motor and language milestones were positive, but non-significant, with our sample size.
PMCID: PMC60982  PMID: 11744561
10.  Processing of Scalar Inferences by Mandarin Learners of English: An Online Measure 
PLoS ONE  2016;11(1):e0145494.
Scalar inferences arise when a speaker uses a weaker expression such as some from a pragmatic scale like <some, all>, intending to reject the stronger alternative, all, in the utterance. Considerable disagreement has arisen concerning how interlocutors derive these inferences. The study presented here addresses this issue by examining online scalar inferences among Mandarin learners of English. To date, Default Inference and Relevance Theory have made different predictions regarding how people process scalar inferences. Findings from recently emerging first language studies did not fully resolve the debate but led to even more heated debates. The three online psycholinguistic experiments reported here addressed the processing of scalar inferences from a second language perspective. Results showed that Mandarin learners of English had faster reaction times and a higher acceptance rate when interpreting some as some but not all, and this was true even when subjects were under time pressure, as manifested in Experiment 2. Overall, the results of the experiments supported Default Theory. In addition, Experiment 3 found that working memory capacity plays a critical role during scalar inference processing. High span readers were faster in accepting the some but not all interpretation than low span readers. However, compared with low span readers, high span readers were more likely to accept the some and possibly all interpretation, possibly due to their working memory capacity to generate scenarios that fit the interpretation.
doi:10.1371/journal.pone.0145494
PMCID: PMC4709112  PMID: 26752294
11.  The Effect of Film Quality on Reading Radiographs of Simple Pneumoconiosis in a trial of X-ray sets 
Four chest radiographs (14 in. × 14 in. postero-anterior) for each of 86 coal-miners were taken (in a trial to compare X-ray sets) and assessed by a number of experienced readers for both quality and pneumoconiosis. All films were developed by one technician under standard conditions so that variations in the quality of the films produced for one subject arose because of differences in the sets and in the way they were used by the radiographers taking the films. The data thus obtained allowed a study of film quality to be made (a) in relation to the subject and (b) as it affected the reading of simple pneumoconiosis.
The subjects were selected to include a high proportion whose earlier radiographs showed pneumoconiosis; they were thus substantially older than a normal colliery population.
The assessments of quality were found to be reasonably consistent both between observers and on different occasions for the same observer.
A clear tendency was found for the quality of a film to depend on the subject. Men with no radiological evidence of pneumoconiosis tended to produce films which were assessed as of better quality than those of men with pneumoconiosis, however slight. Among the latter, chest thickness had an important effect on film quality; men with thicker chests produced poorer films. The subject's age did not appear to have any effect on the quality of his film.
Film quality was found to introduce only slight biases into the reading of pneumoconiosis. Individual readers varied considerably: although on average the readers tended to overcorrect for technical faults, that is, to read more abnormality in black films than in good ones and less in grey films, some readers undercorrected slightly.
What little evidence was available did not suggest that poor quality of films introduced any excess variability into film reading.
PMCID: PMC1038146  PMID: 13761945
12.  Intermittent oral iron supplementation during pregnancy (Review) 
Background
Anaemia is a frequent condition during pregnancy, particularly among women from developing countries who have insufficient iron intake to meet the increased iron needs of both the mother and the fetus. Traditionally, gestational anaemia has been prevented with the provision of daily iron supplements throughout pregnancy, but poor adherence to this regimen due to side effects, interrupted supply of the supplements, and concerns about safety among women with an adequate iron intake have limited the use of this intervention. Intermittent (i.e. one, two or three times a week on non-consecutive days) supplementation with iron alone or in combination with folic acid or other vitamins and minerals has recently been proposed as an alternative to daily supplementation.
Objectives
To assess the benefits and harms of intermittent supplementation with iron alone or in combination with folic acid or other vitamins and minerals to pregnant women on neonatal and pregnancy outcomes.
Search methods
We searched the Cochrane Pregnancy and Childbirth Group’s Trials Register (23 March 2012). We also searched the WHO International Clinical Trials Registry Platform (ICTRP) for ongoing studies and contacted relevant organisations for the identification of ongoing and unpublished studies (23 March 2012).
Selection criteria
Randomised or quasi-randomised trials.
Data collection and analysis
We assessed the methodological quality of trials using standard Cochrane criteria. Two review authors independently assessed trial eligibility, extracted data and conducted checks for accuracy.
Main results
This review includes 21 trials from 13 different countries, but only 18 trials (with 4072 women) reported on our outcomes of interest and contributed data to the review. All of these studies compared daily versus intermittent iron supplementation.
Three studies provided iron alone, 12 provided iron+folic acid, and three provided iron plus multiple vitamins and minerals. Their methodological quality was mixed and most had high levels of attrition. Overall, there was no clear evidence of differences between groups for infant primary outcomes: low birthweight (average risk ratio (RR) 0.96; 95% confidence interval (CI) 0.61 to 1.52, seven studies), infant birthweight (mean difference (MD) −8.62 g; 95% CI −52.76 g to 35.52 g, eight studies), and premature birth (average RR 1.82; 95% CI 0.75 to 4.40, four studies). None of the studies reported neonatal deaths or congenital anomalies.
For maternal outcomes, there was no clear evidence of differences between groups for anaemia at term (average RR 1.22; 95% CI 0.84 to 1.80, four studies), and women receiving intermittent supplementation had fewer side effects (average RR 0.56; 95% CI 0.37 to 0.84, 11 studies) than those receiving daily supplements. Women receiving intermittent supplements were also at lower risk of having high haemoglobin (Hb) concentrations (greater than 130 g/L) during the second or third trimester of pregnancy (average RR 0.48; 95% CI 0.35 to 0.67, 13 studies). There were no significant differences in iron-deficiency anaemia between women receiving intermittent or daily iron+folic acid supplementation (average RR 0.71; 95% CI 0.08 to 6.63, one study). There were no maternal deaths (six studies) or women with severe anaemia in pregnancy (six studies). None of the studies reported on iron deficiency at term or infections during pregnancy.
Where sufficient data were available for primary outcomes, we set up subgroups to look for possible differences between studies in terms of earlier or later supplementation; women’s anaemia status at the start of supplementation; higher and lower weekly doses of iron; and the malarial status of the region in which the trials were conducted. There was no clear effect of these variables on the results of the review.
Authors’ conclusions
The present systematic review is the most comprehensive summary of the evidence assessing the benefits and harms of intermittent iron supplementation regimens in pregnant women on haematological and pregnancy outcomes. The findings suggest that intermittent iron+folic acid regimens produce maternal and infant outcomes at birth similar to those of daily supplementation but are associated with fewer side effects. Women receiving daily supplements had an increased risk of developing high Hb levels in mid and late pregnancy but were less likely to present with mild anaemia near term. Although the evidence is limited and the quality of the trials was low or very low, intermittent supplementation may be a feasible alternative to daily iron supplementation among those pregnant women who are not anaemic and have adequate antenatal care.
doi:10.1002/14651858.CD009997
PMCID: PMC4053594  PMID: 22786531
*Dietary Supplements [adverse effects]; Administration, Oral; Anemia, Iron-Deficiency [blood; *prevention & control]; Developing Countries; Drug Administration Schedule; Drug Combinations; Folic Acid [administration & dosage]; Hemoglobin A [metabolism]; Infant, Low Birth Weight; Infant, Newborn; Iron [*administration & dosage; adverse effects]; Iron, Dietary [*administration & dosage]; Pregnancy Complications, Hematologic [blood; prevention & control]; Premature Birth; Randomized Controlled Trials as Topic; Vitamins [administration & dosage]; Female; Humans; Pregnancy
13.  Polysomnography in Patients With Obstructive Sleep Apnea 
Executive Summary
Objective
The objective of this health technology policy assessment was to evaluate the clinical utility and cost-effectiveness of sleep studies in Ontario.
Clinical Need: Target Population and Condition
Sleep disorders are common and obstructive sleep apnea (OSA) is the predominant type. Obstructive sleep apnea is the repetitive complete obstruction (apnea) or partial obstruction (hypopnea) of the collapsible part of the upper airway during sleep. The syndrome is associated with excessive daytime sleepiness or chronic fatigue. Several studies have shown that OSA is associated with hypertension, stroke, and other cardiovascular disorders; many researchers believe that these cardiovascular disorders are consequences of OSA. This has generated increasing interest in recent years in sleep studies.
The Technology Being Reviewed
There is no ‘gold standard’ for the diagnosis of OSA, which makes it difficult to calibrate any test for diagnosis. Traditionally, polysomnography (PSG) in an attended setting (sleep laboratory) has been used as a reference standard for the diagnosis of OSA. Polysomnography measures several sleep variables, one of which is the apnea-hypopnea index (AHI) or respiratory disturbance index (RDI). The AHI is defined as the sum of apneas and hypopneas per hour of sleep; apnea is defined as the absence of airflow for ≥ 10 seconds; and hypopnea is defined as reduction in respiratory effort with ≥ 4% oxygen desaturation. The RDI is defined as the sum of apneas, hypopneas, and abnormal respiratory events per hour of sleep. Often the two terms are used interchangeably. The AHI has been widely used to diagnose OSA, although with different cut-off levels, the basis for which is often unclear or arbitrarily determined. Generally, an AHI of more than five events per hour of sleep is considered abnormal and the patient is considered to have a sleep disorder. An abnormal AHI accompanied by excessive daytime sleepiness is the hallmark for OSA diagnosis. For patients diagnosed with OSA, continuous positive airway pressure (CPAP) therapy is the treatment of choice. Polysomnography may also be used for titrating CPAP to individual needs.
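The AHI and RDI defined above reduce to simple per-hour event rates. The following minimal Python sketch (an illustration added here, not part of the assessment; the event counts and function names are assumptions) shows the arithmetic and the commonly cited threshold of more than five events per hour mentioned in the text.

```python
# Minimal illustrative sketch of the AHI and RDI described above.
# The event counts below are invented for illustration only.

def apnea_hypopnea_index(n_apneas: int, n_hypopneas: int, hours_of_sleep: float) -> float:
    """AHI = (apneas + hypopneas) per hour of sleep."""
    return (n_apneas + n_hypopneas) / hours_of_sleep

def respiratory_disturbance_index(n_apneas: int, n_hypopneas: int,
                                  n_other_events: int, hours_of_sleep: float) -> float:
    """RDI additionally counts other abnormal respiratory events per hour of sleep."""
    return (n_apneas + n_hypopneas + n_other_events) / hours_of_sleep

if __name__ == "__main__":
    ahi = apnea_hypopnea_index(n_apneas=40, n_hypopneas=25, hours_of_sleep=6.5)
    # An AHI above 5 events/hour is generally considered abnormal (see text);
    # cut-offs of >5, >10 and >15 are used to grade severity, as noted later in this entry.
    print(f"AHI = {ahi:.1f} events/hour; abnormal: {ahi > 5}")
```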
In January 2005, the College of Physicians and Surgeons of Ontario published the second edition of Independent Health Facilities: Clinical Practice Parameters and Facility Standards: Sleep Medicine, commonly known as “The Sleep Book.” The Sleep Book states that OSA is the most common primary respiratory sleep disorder and a full overnight sleep study is considered the current standard test for individuals in whom OSA is suspected (based on clinical signs and symptoms), particularly if CPAP or surgical therapy is being considered.
Polysomnography in a sleep laboratory is time-consuming and expensive. With the evolution of technology, portable devices have emerged that measure more or less the same sleep variables as those recorded in sleep laboratories, but in the home. Newer CPAP devices also have auto-titration features and can record sleep variables, including the AHI. These devices, if equally accurate, may reduce the dependency on sleep laboratories for the diagnosis of OSA and the titration of CPAP, and thus may be more cost-effective.
Difficulties arise, however, when trying to assess and compare the diagnostic efficacy of in-home versus in-lab PSG. The AHI measured from portable devices in the home is the sum of apneas and hypopneas per hour of time in bed, rather than per hour of sleep, and the absolute diagnostic efficacy of in-lab PSG is unknown. To compare in-home PSG with in-lab PSG, several researchers have used correlation coefficients or sensitivity and specificity, while others have used Bland-Altman plots or receiver operating characteristic (ROC) curves. All these approaches, however, have potential pitfalls. Correlation coefficients do not measure agreement; sensitivity and specificity are not helpful when the true disease status is unknown; and Bland-Altman plots measure agreement but are only helpful when the range of clinical equivalence is known. Lastly, receiver operating characteristic curves are generated using logistic regression with the true disease status as the dependent variable and test values as the independent variable. Thus, each value of the test is used as a cut-point to measure sensitivity and specificity, which are then plotted on an x-y plane. The cut-point that maximizes both sensitivity and specificity is chosen as the cut-off level to discriminate between disease and no-disease states. In the absence of a gold standard to determine the true disease status, ROC curves are of minimal value.
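To make the ROC construction described above concrete, here is a minimal Python sketch (illustrative only, not the reviewed studies' code; the AHI values and reference labels are invented). It sweeps each observed test value as a cut-point and computes sensitivity and specificity, which is only possible because a true disease status is supplied, exactly the requirement the paragraph identifies as missing in this setting.

```python
# Illustrative sketch only: AHI values and reference labels below are made up.
# It shows why an ROC-style analysis needs a known true disease status.

def sens_spec(ahi_values, disease, cut_point):
    """Sensitivity and specificity when AHI > cut_point is called 'positive'."""
    tp = sum(1 for a, d in zip(ahi_values, disease) if a > cut_point and d)
    fn = sum(1 for a, d in zip(ahi_values, disease) if a <= cut_point and d)
    tn = sum(1 for a, d in zip(ahi_values, disease) if a <= cut_point and not d)
    fp = sum(1 for a, d in zip(ahi_values, disease) if a > cut_point and not d)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

if __name__ == "__main__":
    ahi = [2, 4, 6, 9, 12, 18, 25, 31]   # hypothetical in-home AHI values
    disease = [0, 0, 0, 1, 0, 1, 1, 1]   # hypothetical reference status (unknown in practice)
    for cut in sorted(set(ahi)):
        se, sp = sens_spec(ahi, disease, cut)
        print(f"cut-point {cut:>2}: sensitivity={se:.2f}, specificity={sp:.2f}")
```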
At the request of the Ontario Health Technology Advisory Committee (OHTAC), MAS has thus reviewed the literature on PSG published over the last two years to examine new developments.
Methods
Review Strategy
There is a large body of literature on sleep studies and several reviews have been conducted. Two large cohort studies, the Sleep Heart Health Study and the Wisconsin Sleep Cohort Study, are the main sources of evidence on sleep literature.
To examine new developments on PSG published in the past two years, MEDLINE, EMBASE, MEDLINE In-Process & Other Non-Indexed Citations, the Cochrane Database of Systematic Reviews and Cochrane CENTRAL, INAHTA, and websites of other health technology assessment agencies were searched. Any study that reported results of in-home or in-lab PSG was included. All articles that reported findings from the Sleep Heart Health Study and the Wisconsin Sleep Cohort Study were also reviewed.
Diffusion of Sleep Laboratories
To estimate the diffusion of sleep laboratories, a list of sleep laboratories licensed under the Independent Health Facility Act was obtained. The annual number of sleep studies per 100,000 individuals in Ontario from 2000 to 2004 was also estimated using administrative databases.
Summary of Findings
Literature Review
A total of 315 articles were identified that were published in the past two years; 227 were excluded after reviewing titles and abstracts. A total of 59 articles were identified that reported findings of the Sleep Heart Health Study and the Wisconsin Sleep Cohort Study.
Prevalence
Based on cross-sectional data from the Wisconsin Sleep Cohort Study of 602 men and women aged 30 to 60 years, it is estimated that the prevalence of sleep-disordered breathing is 9% in women and 24% in men, on the basis of more than five AHI events per hour of sleep. Among the women with sleep-disordered breathing, 22.6% had daytime sleepiness, and among the men, 15.5% had daytime sleepiness. Based on this, the prevalence of OSA in the middle-aged adult population is estimated to be 2% in women and 4% in men.
Snoring is present in 94% of OSA patients, but not all snorers have OSA. Women report daytime sleepiness less often compared with their male counterparts (of similar age, body mass index [BMI], and AHI). Prevalence of OSA tends to be higher in older age groups compared with younger age groups.
Diagnostic Value of Polysomnography
It is believed that PSG in the sleep laboratory is more accurate than in-home PSG. In the absence of a gold standard, however, claims of accuracy cannot be substantiated. In general, there is poor correlation between PSG variables and clinical variables. A variety of cut-off points of AHI (> 5, > 10, and > 15) are arbitrarily used to diagnose and categorize severity of OSA, though the clinical importance of these cut-off points has not been determined.
Recently, a study of the use of a therapeutic trial of CPAP to diagnose OSA was reported. The authors studied habitual snorers with daytime sleepiness in the absence of other medical or psychiatric disorders. Using PSG as the reference standard, the authors calculated the sensitivity of this test to be 80% and its specificity to be 97%. Further, they concluded that PSG could be avoided in 46% of this population.
Obstructive Sleep Apnea and Obesity
Obstructive sleep apnea is strongly associated with obesity. Obese individuals (BMI >30 kg/m2) are at higher risk for OSA compared with non-obese individuals and up to 75% of OSA patients are obese. It is hypothesized that obese individuals have large deposits of fat in the neck that cause the upper airway to collapse in the supine position during sleep. The observations reported from several studies support the hypothesis that AHIs (or RDIs) are significantly reduced with weight loss in obese individuals.
Obstructive Sleep Apnea and Cardiovascular Diseases
Associations have been shown between OSA and comorbidities such as diabetes mellitus and hypertension, which are known risk factors for myocardial infarction and stroke. Patients with more severe forms of OSA (based on AHI) report poorer quality of life and increased health care utilization compared with patients with milder forms of OSA. From animal models, it is hypothesized that sleep fragmentation results in glucose intolerance and hypertension. There is, however, no evidence from prospective studies in humans to establish a causal link between OSA and hypertension or diabetes mellitus. It is also not clear that the associations between OSA and other diseases are independent of obesity; in most of these studies, patients with higher values of AHI had higher values of BMI compared with patients with lower AHI values.
A recent meta-analysis of bariatric surgery has shown that weight loss in obese individuals (mean BMI = 46.8 kg/m2; range = 32.30–68.80) significantly improved their health profile. Diabetes was resolved in 76.8% of patients, hypertension was resolved in 61.7% of patients, hyperlipidemia improved in 70% of patients, and OSA resolved in 85.7% of patients. This suggests that obesity leads to OSA, diabetes, and hypertension, rather than OSA independently causing diabetes and hypertension.
Health Technology Assessments, Guidelines, and Recommendations
In April 2005, the Centers for Medicare and Medicaid Services (CMS) in the United States published its decision and review regarding in-home and in-lab sleep studies for the diagnosis and treatment of OSA with CPAP. In order to cover CPAP, CMS requires that a diagnosis of OSA be established using PSG in a sleep laboratory. After reviewing the literature, CMS concluded that the evidence was not adequate to determine that unattended portable sleep study was reasonable and necessary in the diagnosis of OSA.
In May 2005, the Canadian Coordinating Office of Health Technology Assessment (CCOHTA) published a review of guidelines for referral of patients to sleep laboratories. The review included 37 guidelines and associated reviews that covered 18 applications of sleep laboratory studies. The CCOHTA reported that the level of evidence for many applications was of limited quality, that some cited studies were not relevant to the recommendations made, that many recommendations reflect consensus positions only, and that there was a need for more good quality studies of many sleep laboratory applications.
Diffusion
As of the time of writing, there are 97 licensed sleep laboratories in Ontario. In 2000, the number of sleep studies performed in Ontario was 376/100,000 people. There was a steady rise in sleep studies in the following years such that in 2004, 769 sleep studies per 100,000 people were performed, for a total of 96,134 sleep studies. Based on prevalence estimates of the Wisconsin Sleep Cohort Study, it was estimated that 927,105 people aged 30 to 60 years have sleep-disordered breathing. Thus, there may be a 10-fold rise in the rate of sleep tests in the next few years.
Economic Analysis
In 2004, approximately 96,000 sleep studies were conducted in Ontario at a total cost of ~$47 million (Cdn). Since obesity is associated with sleep disordered breathing, MAS compared the costs of sleep studies to the cost of bariatric surgery. The cost of bariatric surgery is $17,350 per patient. In 2004, Ontario spent $4.7 million per year for 270 patients to undergo bariatric surgery in the province, and $8.2 million for 225 patients to seek out-of-country treatment. Using a Markov model, it was concluded that shifting costs from sleep studies to bariatric surgery would benefit more patients with OSA and may also prevent health consequences related to diabetes, hypertension, and hyperlipidemia. It is estimated that the annual cost of treating comorbid conditions in morbidly obese patients often exceeds $10,000 per patient. Thus, the downstream cost savings could be substantial.
Considerations for Policy Development
Weight loss is associated with a decrease in OSA severity. Treating and preventing obesity would also substantially reduce the economic burden associated with diabetes, hypertension, hyperlipidemia, and OSA. Promotion of healthy weights may be achieved by a multisectorial approach as recommended by the Chief Medical Officer of Health for Ontario. Bariatric surgery has the potential to help morbidly obese individuals (BMI > 35 kg/m2 with an accompanying comorbid condition, or BMI > 40 kg/m2) lose weight. In January 2005, MAS completed an assessment of bariatric surgery, based on which OHTAC recommended an improvement in access to these surgeries for morbidly obese patients in Ontario.
Habitual snorers with excessive daytime sleepiness have a high pretest probability of having OSA. These patients could be offered a therapeutic trial of CPAP to diagnose OSA, rather than a PSG. A majority of these patients are also obese and may benefit from weight loss. Individualized weight loss programs should, therefore, be offered and patients who are morbidly obese should be offered bariatric surgery.
That said, and in view of the still evolving understanding of the causes, consequences and optimal treatment of OSA, further research is warranted to identify which patients should be screened for OSA.
PMCID: PMC3379160  PMID: 23074483
14.  Intraoperative optical coherence tomography for assessing human lymph nodes for metastatic cancer 
BMC Cancer  2016;16:144.
Background
Evaluation of lymph node (LN) status is an important factor for detecting metastasis and thereby staging breast cancer. Currently utilized clinical techniques involve the surgical disruption and resection of lymphatic structures, whether nodes or axillary contents, for histological examination. While these techniques are reasonably effective at detecting macrometastasis, the majority of the resected lymph nodes are histologically negative. Improvements need to be made to better detect micrometastasis, minimize or eliminate the complications of lymphatic disruption, and provide immediate and accurate intraoperative feedback for in vivo cancer staging to better guide surgery.
Methods
We evaluated the use of optical coherence tomography (OCT), a high-resolution, real-time, label-free imaging modality for the intraoperative assessment of human LNs for metastatic disease in patients with breast cancer. We assessed the sensitivity and specificity of double-blinded trained readers who analyzed intraoperative OCT LN images for presence of metastatic disease, using co-registered post-operative histopathology as the gold standard.
Results
Our results suggest that intraoperative OCT examination of LNs is an appropriate real-time, label-free, non-destructive alternative to frozen-section analysis, potentially offering faster interpretation and results to empower superior intraoperative decision-making.
Conclusions
Intraoperative OCT has strong potential to supplement current post-operative histopathology with real-time in situ assessment of LNs to preserve both non-cancerous nodes and their lymphatic vessels, and thus reduce the associated risks and complications from surgical disruption of lymphoid structures following biopsy.
doi:10.1186/s12885-016-2194-4
PMCID: PMC4763478  PMID: 26907742
Breast cancer; Lymph node; Metastasis; Optical coherence tomography; Intraoperative
15.  Executive functioning and reading achievement in school: a study of Brazilian children assessed by their teachers as “poor readers” 
This study examined executive functioning and reading achievement in 106 6- to 8-year-old Brazilian children from a range of social backgrounds, of whom approximately half lived below the poverty line. A particular focus was to explore the executive function profile of children whose classroom reading performance was judged below standard by their teachers and who were matched to controls on chronological age, sex, school type (private or public), domicile (Salvador/BA or São Paulo/SP) and socioeconomic status. Children completed a battery of 12 executive function tasks tapping cognitive flexibility, working memory, inhibition and selective attention. Each executive function domain was assessed by several tasks. Principal component analysis extracted four factors that were labeled “Working Memory/Cognitive Flexibility,” “Interference Suppression,” “Selective Attention,” and “Response Inhibition.” Individual differences in executive functioning components made differential contributions to early reading achievement. The Working Memory/Cognitive Flexibility factor emerged as the best predictor of reading. Group comparisons on computed factor scores showed that struggling readers displayed limitations in Working Memory/Cognitive Flexibility, but not in other executive function components, compared with more skilled readers. These results validate the account that working memory capacity provides a crucial building block for the development of early literacy skills and extend it to a population of early readers of Portuguese from Brazil. The study suggests that deficits in working memory/cognitive flexibility might represent one contributing factor to reading difficulties in early readers. This may have important implications for how educators intervene with children at risk of academic underachievement.
doi:10.3389/fpsyg.2014.00550
PMCID: PMC4050967  PMID: 24959155
executive function; reading; working memory; cognitive flexibility; selective attention; inhibition; poverty; learning difficulties
16.  Clustered Environments and Randomized Genes: A Fundamental Distinction between Conventional and Genetic Epidemiology  
PLoS Medicine  2007;4(12):e352.
Background
In conventional epidemiology confounding of the exposure of interest with lifestyle or socioeconomic factors, and reverse causation whereby disease status influences exposure rather than vice versa, may invalidate causal interpretations of observed associations. Conversely, genetic variants should not be related to the confounding factors that distort associations in conventional observational epidemiological studies. Furthermore, disease onset will not influence genotype. Therefore, it has been suggested that genetic variants that are known to be associated with a modifiable (nongenetic) risk factor can be used to help determine the causal effect of this modifiable risk factor on disease outcomes. This approach, mendelian randomization, is increasingly being applied within epidemiological studies. However, there is debate about the underlying premise that associations between genotypes and disease outcomes are not confounded by other risk factors. We examined the extent to which genetic variants, on the one hand, and nongenetic environmental exposures or phenotypic characteristics on the other, tend to be associated with each other, to assess the degree of confounding that would exist in conventional epidemiological studies compared with mendelian randomization studies.
Methods and Findings
We estimated pairwise correlations between nongenetic baseline variables and genetic variables in a cross-sectional study comparing the number of correlations that were statistically significant at the 5%, 1%, and 0.01% level (α = 0.05, 0.01, and 0.0001, respectively) with the number expected by chance if all variables were in fact uncorrelated, using a two-sided binomial exact test. We demonstrate that behavioural, socioeconomic, and physiological factors are strongly interrelated, with 45% of all possible pairwise associations between 96 nongenetic characteristics (n = 4,560 correlations) being significant at the p < 0.01 level (the ratio of observed to expected significant associations was 45; p-value for difference between observed and expected < 0.000001). Similar findings were observed for other levels of significance. In contrast, genetic variants showed no greater association with each other, or with the 96 behavioural, socioeconomic, and physiological factors, than would be expected by chance.
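As a rough illustration of the comparison described in this paragraph, the sketch below (not the authors' code; only the 96-variable count, the 4,560 pairings, and the alpha levels come from the text, while the simulated data and names are assumptions) counts significant pairwise correlations, compares the count with the chance expectation, and applies a two-sided binomial exact test.

```python
# Illustrative sketch with simulated data; only the 96-variable count and the
# alpha level are taken from the study description, the rest is assumed.
import numpy as np
from scipy.stats import pearsonr, binomtest

rng = np.random.default_rng(0)
n_subjects, n_vars, alpha = 500, 96, 0.01
data = rng.normal(size=(n_subjects, n_vars))   # stand-in for 96 nongenetic variables

n_pairs, n_significant = 0, 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        n_pairs += 1                           # 96 * 95 / 2 = 4,560 pairings in total
        _, p = pearsonr(data[:, i], data[:, j])
        n_significant += int(p < alpha)

expected = alpha * n_pairs                     # roughly 46 significant pairs by chance alone
ratio = n_significant / expected               # the study reports a ratio of about 45 for real data
test = binomtest(n_significant, n_pairs, alpha)  # two-sided binomial exact test
print(f"{n_significant}/{n_pairs} significant (expected ~{expected:.0f}); "
      f"observed/expected = {ratio:.2f}; p = {test.pvalue:.3f}")
```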
Conclusions
These data illustrate why observational studies have produced misleading claims regarding potentially causal factors for disease. The findings demonstrate the potential power of a methodology that utilizes genetic variants as indicators of exposure level when studying environmentally modifiable risk factors.
In a cross-sectional study Davey Smith and colleagues show why observational studies can produce misleading claims regarding potential causal factors for disease, and illustrate the use of mendelian randomization to study environmentally modifiable risk factors.
Editors' Summary
Background.
Epidemiology is the study of the distribution and causes of human disease. Observational epidemiological studies investigate whether particular modifiable factors (for example, smoking or eating healthily) are associated with the risk of a particular disease. The link between smoking and lung cancer was discovered in this way. Once the modifiable factors associated with a disease are established as causal factors, individuals can reduce their risk of developing that disease by avoiding causative factors or by increasing their exposure to protective factors. Unfortunately, modifiable factors that are associated with risk of a disease in observational studies sometimes turn out not to cause or prevent disease. For example, higher intake of vitamins C and E apparently protected people against heart problems in observational studies, but taking these vitamins did not show any protection against heart disease in randomized controlled trials (studies in which identical groups of patients are randomly assigned various interventions and then their health monitored). One explanation for this type of discrepancy is known as confounding—the distortion of the effect of one factor by the presence of another that is associated both with the exposure under study and with the disease outcome. So in this example, people who took vitamin supplements might also have exercised more than people who did not take supplements, and it could have been the exercise rather than the supplements that was protective against heart disease.
Why Was This Study Done?
It isn't always possible to check the results of observational studies in randomized controlled trials, so epidemiologists have developed other ways to minimize confounding. One approach is known as mendelian randomization. Several gene variants have been identified that affect risk factors. For example, variants in a gene called APOE affect the level of cholesterol in an individual's blood, a risk factor for heart disease. People inherit gene variants randomly from their parents to build up their own unique genotype (total genetic makeup). Consequently, a study that examines the associations between a gene variant and a disease can indicate whether the risk factor affected by that gene variant causes the disease. There should be no confounding in this type of study, the argument goes, because different genetic variants should not be associated with each other or with the nongenetic variables that typically confound directly assessed associations between risk factors and disease. But is this true? In this study, the researchers tested whether nongenetic risk factors are confounded by each other, and whether genetic variants are confounded by nongenetic risk factors or by other genetic variants.
What Did the Researchers Do and Find?
Using data collected in the British Women's Heart and Health Study, the researchers calculated how many pairs of nongenetic variables (for example, frequency of eating meat, alcohol intake) were significantly correlated with each other; that is, the number of pairs of nongenetic variables whose correlation was stronger than would be expected by chance. They compared this number with the number of correlations that would occur by chance if all the variables were totally independent. When the researchers assumed that 1 in 100 pairs of variables would be correlated by chance, the ratio of observed to expected significant correlations among the nongenetic variables was 45. When the researchers repeated this exercise with genetic variants, the ratio of observed to expected significant correlations was 1.58, a figure not significantly different from 1. Similarly, the ratio of observed to expected significant correlations when pairwise combinations between genetic and nongenetic variants were considered was 1.22.
What Do These Findings Mean?
These findings have two main implications. First, the large excess of observed over expected associations among the nongenetic variables indicates that many nongenetic modifiable factors occur in clusters—for example, people with healthy diets often have other healthy habits. Researchers doing observational studies always try to adjust for confounding but this result suggests that this adjustment will be hard to do, in part because it will not always be clear which factors are confounders. Second, the lack of a large excess of observed over expected associations among the genetic variables (and also among genetic variables paired with nongenetic variables) indicates that little confounding is likely to occur in studies that use mendelian randomization. In other words, this approach is a valid way to identify which environmentally modifiable risk factors cause human disease.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040352.
Wikipedia has pages on epidemiology and on mendelian randomization (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages).
Epidemiology for the Uninitiated is a primer from the British Medical Journal
Information is available on the British Women's Heart and Health Study
doi:10.1371/journal.pmed.0040352
PMCID: PMC2121108  PMID: 18076282
17.  Effect of Folic Acid and Betaine Supplementation on Flow-Mediated Dilation: A Randomized, Controlled Study in Healthy Volunteers 
PLoS Clinical Trials  2006;1(2):e10.
Objectives:
We investigated whether lowering of fasting homocysteine concentrations, either with folic acid or with betaine supplementation, differentially affects vascular function, a surrogate marker for risk of cardiovascular disease, in healthy volunteers. As yet, it remains uncertain whether a high concentration of homocysteine itself or whether a low folate status—its main determinant—is involved in the pathogenesis of cardiovascular disease. To shed light on this issue, we performed this study.
Design:
This was a randomized, placebo-controlled, double-blind, crossover study.
Setting:
The study was performed at Wageningen University in Wageningen, the Netherlands.
Participants:
Participants were 39 apparently healthy men and women, aged 50–70 y.
Interventions:
Participants ingested 0.8 mg/d of folic acid, 6 g/d of betaine, and placebo for 6 wk each, with 6-wk washout in between.
Outcome Measures:
At the end of each supplementation period, plasma homocysteine concentrations and flow-mediated dilation (FMD) of the brachial artery were measured in duplicate.
Results:
Folic acid supplementation lowered fasting homocysteine by 20% (−2.0 μmol/l, 95% confidence interval [CI]: −2.3; −1.6), and betaine supplementation lowered fasting plasma homocysteine by 12% (−1.2 μmol/l; −1.6; −0.8) relative to placebo. Mean (± SD) FMD after placebo supplementation was 2.8 (± 1.8) FMD%. Supplementation with betaine or folic acid did not affect FMD relative to placebo; differences relative to placebo were −0.4 FMD% (95%CI, −1.2; 0.4) and −0.1 FMD% (−0.9; 0.7), respectively.
Conclusions:
Neither folic acid nor betaine supplementation improved vascular function in healthy volunteers, despite evident homocysteine lowering. This is in agreement with other studies in healthy participants, the majority of which also fail to find improved vascular function upon folic acid treatment. However, homocysteine or folate might of course affect cardiovascular disease risk through other mechanisms.
Editorial Commentary
Background: Evidence from observational studies indicates a link between high concentrations of homocysteine (an amino acid) in the blood and increased risk of cardiovascular disease. However, the basis for the link between homocysteine concentrations and cardiovascular disease risk is not clear. Supplementing the diet with B-vitamins lowers homocysteine levels, and large-scale trials are underway that will determine whether B-vitamin supplementation has an effect on cardiovascular outcomes, such as heart attacks and strokes. These trials also involve administration of folic acid as well as other B-vitamins. It is not obvious, however, whether the effects of B-vitamin supplementation arise as a result of homocysteine lowering or via some other biochemical pathway.
What this trial shows: Olthof and colleagues aimed to further understand the effects of homocysteine lowering by randomizing 40 healthy volunteer participants to receive either folic acid supplementation; placebo; or betaine, a nutrient that lowers homocysteine levels via a different biochemical pathway than folic acid. Each participant in the trial received each supplement for 6 wk, with a 6-wk washout period before the next supplement was given. The researchers then used a technique called flow-mediated dilation (FMD) to measure functioning of the main artery of the upper arm, as a surrogate for cardiovascular disease risk. In this trial, both folic acid and betaine supplementation significantly lowered homocysteine levels over the 6-wk supplementation period. However, both forms of supplementation failed to result in any significant change in functioning of the artery, as measured using FMD.
Strengths and limitations: In this trial 40 participants were recruited, and 39 were followed up to trial completion. A crossover design was used, with each participant receiving each supplement and a placebo in sequence. This method enabled a smaller number of participants to be used to answer the question of interest, as compared to parallel-group designs. The majority of participants in the trial were followed up. However, the trial's outcomes are surrogates for cardiovascular disease risk, measured over fairly short time periods, and no clinical outcomes were examined.
Contribution to the evidence: This trial adds to the evidence on the effects of nutrient supplementation on surrogate outcomes for cardiovascular disease risk. The results show that over a 6-wk study period, these surrogate outcomes are not affected by either folic acid or betaine supplementation.
doi:10.1371/journal.pctr.0010010
PMCID: PMC1488898  PMID: 16871332
18.  Call to Action on Use and Reimbursement for Home Blood Pressure Monitoring A Joint Scientific Statement From the American Heart Association, American Society of Hypertension, and the Preventive Cardiovascular Nurses’ Association 
Hypertension  2008;52(1):10-29.
The standard method for the measurement of blood pressure (BP) in clinical practice has traditionally been to use readings taken with the auscultatory technique by a physician or nurse in a clinic or office setting. While such measurements are likely to remain the cornerstone for the diagnosis and management of hypertension for the foreseeable future, it is becoming increasingly clear that they often give inadequate or even misleading information about a patient’s true BP status. All clinical measurements of BP may be regarded as surrogate estimates of the “true” BP, which may be regarded as the average level over prolonged periods of time. In the past 30 years there has been an increasing trend to supplement office or clinic readings with out-of-office measurements of BP, taken either by the patient or a relative at home (home or self-monitoring, HBPM) or by an automated recorder over 24 hours (ambulatory blood pressure monitoring, ABPM).
Of the two methods, HBPM has the greater potential for being incorporated into the routine care of hypertensive patients, in the same way that home blood glucose monitoring performed by the patient has become a routine part of the management of diabetes. The currently available monitors are relatively reliable, easy to use, inexpensive, and accurate, and are already being purchased in large numbers by patients. Despite this, their use has only been cursorily endorsed in current guidelines for the management of hypertension, and there have been no detailed recommendations as to how they should be incorporated into routine clinical practice. Moreover, despite strong evidence that HBPM can predict clinical outcomes and improve clinical care, the cost of the monitors is not generally reimbursed. The purpose of this Call to Action paper is to address the incorporation of HBPM into the routine management of hypertensive patients and its reimbursement.
doi:10.1161/HYPERTENSIONAHA.107.189010
PMCID: PMC2989415  PMID: 18497370
19.  Yes, You Can? A Speaker’s Potency to Act upon His Words Orchestrates Early Neural Responses to Message-Level Meaning 
PLoS ONE  2013;8(7):e69173.
Evidence is accruing that, in comprehending language, the human brain rapidly integrates a wealth of information sources–including the reader or hearer’s knowledge about the world and even his/her current mood. However, little is known to date about how language processing in the brain is affected by the hearer’s knowledge about the speaker. Here, we investigated the impact of social attributions to the speaker by measuring event-related brain potentials while participants watched videos of three speakers uttering true or false statements pertaining to politics or general knowledge: a top political decision maker (the German Federal Minister of Finance at the time of the experiment), a well-known media personality and an unidentifiable control speaker. False versus true statements engendered an N400 - late positivity response, with the N400 (150–450 ms) constituting the earliest observable response to message-level meaning. Crucially, however, the N400 was modulated by the combination of speaker and message: for false versus true political statements, an N400 effect was only observable for the politician, but not for either of the other two speakers; for false versus true general knowledge statements, an N400 was engendered by all three speakers. We interpret this result as demonstrating that the neurophysiological response to message-level meaning is immediately influenced by the social status of the speaker and whether he/she has the power to bring about the state of affairs described.
doi:10.1371/journal.pone.0069173
PMCID: PMC3722173  PMID: 23894425
20.  Observer training for computer-aided detection of pulmonary nodules in chest radiography 
European Radiology  2012;22(8):1659-1664.
Objectives
To assess whether short-term feedback helps readers to increase their performance using computer-aided detection (CAD) for nodule detection in chest radiography.
Methods
The 140 CXRs (56 with a solitary CT-proven nodule and 84 negative controls) were divided into four subsets of 35; each subset was read in a different order by six readers. Lesion presence, location and diagnostic confidence were scored without and with CAD (IQQA-Chest, EDDA Technology) as second reader. Readers received individual feedback after each subset. Sensitivity, specificity and area under the receiver operating characteristic curve (AUC) were calculated for readings with and without CAD with respect to change over time and the impact of CAD.
Results
CAD stand-alone sensitivity was 59 % with 1.9 false-positives per image. Mean AUC slightly increased over time with and without CAD (0.78 vs. 0.84 with and 0.76 vs. 0.82 without CAD) but differences did not reach significance. The sensitivity increased (65 % vs. 70 % and 66 % vs. 70 %) and specificity decreased over time (79 % vs. 74 % and 80 % vs. 77 %) but no significant impact of CAD was found.
Conclusion
Short-term feedback does not increase the ability of readers to differentiate true- from false-positive candidate lesions and to use CAD more effectively.
Key Points
• Computer-aided detection (CAD) is increasingly used as an adjunct for many radiological techniques.
• Short-term feedback does not improve reader performance with CAD in chest radiography.
• Differentiation between true- and false-positive CAD candidate lesions of low conspicuity proves difficult.
• CAD can potentially increase reader performance for nodule detection in chest radiography.
doi:10.1007/s00330-012-2412-7
PMCID: PMC3387360  PMID: 22447377
Radiographic image interpretation; Computer-assisted; Solitary pulmonary nodule; Radiography; Lung; Education
21.  A Randomized Controlled Trial of Folate Supplementation When Treating Malaria in Pregnancy with Sulfadoxine-Pyrimethamine 
PLoS Clinical Trials  2006;1(6):e28.
Objectives:
Sulfadoxine-pyrimethamine (SP) is an antimalarial drug that acts on the folate metabolism of the malaria parasite. We investigated whether folate (FA) supplementation in a high or a low dose affects the efficacy of SP for the treatment of uncomplicated malaria in pregnant women.
Design:
This was a randomized, placebo-controlled, double-blind trial.
Setting:
The trial was carried out at three hospitals in western Kenya.
Participants:
The participants were 488 pregnant women presenting at their first antenatal visit with uncomplicated malaria parasitaemia (density of ≥ 500 parasites/μl), a haemoglobin level higher than 7 g/dl, a gestational age between 17 and 34 weeks, and no history of antimalarial or FA use, or sulfa allergy. A total of 415 women completed the study.
Interventions:
All participants received SP and iron supplementation. They were randomized to the following arms: FA 5 mg, FA 0.4 mg, or FA placebo. After 14 days, all participants continued with FA 5 mg daily as per national guidelines. Participants were followed at days 2, 3, 7, 14, 21, and 28 or until treatment failure.
Outcome Measures:
The outcomes were SP failure rate and change in haemoglobin at day 14.
Results:
The proportion of treatment failure at day 14 was 13.9% (19/137) in the placebo group, 14.5% (20/138) in the FA 0.4 mg arm (adjusted hazard ratio [AHR], 1.07; 98.7% confidence interval [CI], 0.48 to 2.37; p = 0.8), and 27.1% (38/140) in the FA 5 mg arm (AHR, 2.19; 98.7% CI, 1.09 to 4.40; p = 0.005). The haemoglobin levels at day 14 were not different relative to placebo (mean difference for FA 5 mg, 0.17 g/dl; 98.7% CI, −0.19 to 0.52; and for FA 0.4 mg, 0.14 g/dl; 98.7% CI, −0.21 to 0.49).
Conclusions:
Concomitant use of 5 mg FA supplementation compromises the efficacy of SP for the treatment of uncomplicated malaria in pregnant women. Countries that use SP for treatment or prevention of malaria in pregnancy need to evaluate their antenatal policy on timing or dose of FA supplementation.
Editorial Commentary
Background: Health authorities worldwide recommend that pregnant women supplement their diet with folate (one of the B-vitamins), normally 0.4 mg per day. There is good evidence from systematic reviews of controlled trials that folate supplementation around conception and early in pregnancy is effective in protecting against neural tube (spine and brain) defects; continued supplementation throughout pregnancy reduces the chance of anemia in the mother. In many African countries, including Kenya, the dose of folate used is 5 mg per day, because this dose is more easily available there. In Kenya, as well as elsewhere in Africa, sulfadoxine-pyrimethamine is also given twice or more after the first trimester to treat and/or prevent malaria infection (which is more likely, and can have serious consequences, when a woman is pregnant). However, there is some evidence from laboratory experiments and clinical studies, none of which were done in pregnant women, suggesting that folate supplementation might reduce the effectiveness of sulfadoxine-pyrimethamine. Therefore, these researchers conducted a trial to test this hypothesis in 415 pregnant Kenyan women with malaria parasites in the blood but no severe symptoms. All were given standard sulfadoxine-pyrimethamine treatment. The women were randomized to receive either folate 5 mg daily, folate 0.4 mg daily, or placebo tablets for 14 days, after which all women reverted to the standard folate 5 mg tablets. The women were followed up for 28 days after the initial sulfadoxine-pyrimethamine dose, and the principal outcome the researchers were interested in was the failure of sulfadoxine-pyrimethamine treatment, defined as fever with the presence of parasites in the blood (clinical failure), or the failure of parasites to clear from the blood, or their reappearance too soon (parasitological failure).
What this trial shows: In this trial, women receiving folate 5 mg daily were approximately twice as likely to fail treatment with sulfadoxine-pyrimethamine than women receiving folate 0.4 mg or placebo. (Overall, around 27% of the women receiving folate 5 mg had treatment failure during the follow-up period.) All the treatment groups had similar levels of blood hemoglobin at the end of the study. There did not seem to be any major differences in adverse events (such as premature deliveries, stillbirths, or neonatal deaths) among women taking part in the different study groups.
Strengths and limitations: The randomization procedures were appropriate and procedures were used to blind participants and researchers to the different interventions, thereby reducing the risk of bias. Since the trial had a placebo arm, it was possible to conclude that the lower dose of folate (0.4 mg) did not significantly affect the efficacy of sulfadoxine-pyrimethamine as compared with placebo. A limitation of the study is that the length of the intervention was short, since all women reverted to standard 5 mg folate after 14 days. It is therefore not clear whether a longer trial would have shown additional risks or benefits of the different doses of folate. Finally, PCR genotyping was not done on the parasites infecting women in the trial; this procedure could have distinguished between true treatment failures and new infections (although new infections would have been unlikely within 14 days).
Contribution to the evidence: Other trials and observational studies have suggested that high doses of folate can reduce the efficacy of sulfadoxine-pyrimethamine in children and adults. However, these studies did not examine the effect in pregnant women, for whom most national bodies recommend regular folate supplementation. The results from this trial support the findings from previous studies and enable the evidence to be generalized to pregnant women. The study also found no evidence that 0.4 mg folate compromises the efficacy of sulfadoxine-pyrimethamine. The findings suggest that the lower level of folate dosing should be used in pregnancy, or that antimalarial treatments other than sulfadoxine-pyrimethamine be used.
doi:10.1371/journal.pctr.0010028
PMCID: PMC1617124  PMID: 17053829
22.  Health disparities and advertising content of women's magazines: a cross-sectional study 
BMC Public Health  2005;5:85.
Background
Disparities in health status among ethnic groups favor the Caucasian population in the United States on almost all major indicators. Disparities in exposure to health-related mass media messages may be among the environmental factors contributing to the racial and ethnic imbalance in health outcomes. This study evaluated whether variations exist in health-related advertisements and health promotion cues among lay magazines catering to Hispanic, African American and Caucasian women.
Methods
Relative and absolute assessments of all health-related advertising in 12 women's magazines over a three-month period were compared. The four highest-circulating, general-interest magazines oriented to Black women and to Hispanic women were compared to the four highest-circulating magazines aimed at a mainstream, predominantly White readership. Data were collected and analyzed in 2002 and 2003.
Results
Compared to readers of mainstream magazines, readers of African American and Hispanic magazines were exposed to proportionally fewer health-promoting advertisements and more health-diminishing advertisements. Photographs of African American role models were more often used to advertise products with negative health impact than positive health impact, while the reverse was true of Caucasian role models in the mainstream magazines.
Conclusion
To the extent that individual levels of health education and awareness can be influenced by advertising, variations in the quantity and content of health-related information among magazines read by different ethnic groups may contribute to racial disparities in health behaviors and health status.
doi:10.1186/1471-2458-5-85
PMCID: PMC1208907  PMID: 16109157
23.  A Prospective Comparison of 18F-FDG PET/CT and CT as Diagnostic Tools to Identify the Primary Tumor Site in Patients with Extracervical Carcinoma of Unknown Primary Site 
The Oncologist  2012;17(9):1146-1154.
The diagnostic value of 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) and conventional CT regarding the ability to detect the primary tumor site in patients with extracervical metastases from carcinoma of unknown primary site was evaluated prospectively. 18F-FDG PET/CT was not shown to provide a clear advantage.
Learning Objectives
After completing this course, the reader will be able to:
Compare the diagnostic performances of 18F-FDG PET/CT and conventional CT with respect to their ability to detect primary tumor sites in carcinoma of unknown primary patients with extracervical metastases.
Describe the rate of identification of primary tumor sites using 18F-FDG PET/CT and conventional CT.
This article is available for continuing medical education credit at CME.TheOncologist.com
Background.
The aim of the present study was to evaluate prospectively the diagnostic value of 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) and conventional CT regarding the ability to detect the primary tumor site in patients with extracervical metastases from carcinoma of unknown primary (CUP) site.
Patients and Methods.
From January 2006 to December 2010, 136 newly diagnosed CUP patients with extracervical metastases underwent 18F-FDG PET/CT.
A standard of reference (SR) was established by a multidisciplinary team to ensure that the same set of criteria was used for classification of patients, that is, either as CUP patients or as patients with a suggested primary tumor site. The independently obtained suggestions of primary tumor sites using PET/CT and CT were correlated with the SR to reach a consensus regarding true-positive (TP), true-negative, false-negative, and false-positive results.
Results.
SR identified a primary tumor site in 66 CUP patients (48.9%). PET/CT identified 38 TP primary tumor sites and CT identified 43 TP primary tumor sites. No statistically significant differences were observed between 18F-FDG PET/CT and CT alone in regard to sensitivity, specificity, and accuracy.
Conclusion.
In the general CUP population with multiple extracervical metastases, 18F-FDG PET/CT does not represent a clear diagnostic advantage over CT alone regarding the ability to detect the primary tumor site.
doi:10.1634/theoncologist.2011-0449
PMCID: PMC3448407  PMID: 22711751
Carcinoma of unknown primary tumor site; CUP; CT; 18F-FDG PET/CT
24.  Changes in breathing while listening to read speech: the effect of reader and speech mode 
The current paper extends previous work on breathing during speech perception and provides supplementary material regarding the hypothesis that adaptation of breathing during perception “could be a basis for understanding and imitating actions performed by other people” (Paccalin and Jeannerod, 2000). The experiments were designed to test how the differences in reader breathing due to speaker-specific characteristics, or differences induced by changes in loudness level or speech rate influence the listener breathing. Two readers (a male and a female) were pre-recorded while reading short texts with normal and then loud speech (both readers) or slow speech (female only). These recordings were then played back to 48 female listeners. The movements of the rib cage and abdomen were analyzed for both the readers and the listeners. Breathing profiles were characterized by the movement expansion due to inhalation and the duration of the breathing cycle. We found that both loudness and speech rate affected each reader’s breathing in different ways. Listener breathing was different when listening to the male or the female reader and to the different speech modes. However, differences in listener breathing were not systematically in the same direction as reader differences. The breathing of listeners was strongly sensitive to the order of presentation of speech mode and displayed some adaptation in the time course of the experiment in some conditions. In contrast to specific alignments of breathing previously observed in face-to-face dialog, no clear evidence for a listener–reader alignment in breathing was found in this purely auditory speech perception task. The results and methods are relevant to the question of the involvement of physiological adaptations in speech perception and to the basic mechanisms of listener–speaker coupling.
doi:10.3389/fpsyg.2013.00906
PMCID: PMC3856677  PMID: 24367344
breathing; respiration; speech production; speech perception; adaptation; loudness; speech rate
25.  Editorial 
As a new year begins, it is a good time to review developments of the past twelve months and to announce some changes in GSE for 2007. Since November 2005, GSE has received 122 new manuscripts, accepted 42 articles (of which 19 were submitted before November 2005) and still has 32 manuscripts in evaluation. Thus the number of submitted manuscripts is constantly increasing while the number of published articles is maintained at around 40 per year. Published articles originate from 15 countries with Spain leading (10), followed by the USA (5), Australia, France and Germany (4 each), UK (3), China, Denmark and Finland (2 each) and finally, Brazil, Canada, Greece, Japan, Norway and Slovenia. Of these 42 published papers, 19 deal with methodologies of quantitative genetics and their applications to animal selection and characterization, six address genetic diversity of populations and breeds and seven fall in the field of molecular genetics. These figures clearly show that GSE is attractive to the animal quantitative genetics community and has acquired a strong experience and reputation in this domain.
To meet this increasing demand, we have asked two new associate editors to join our editorial panel and are pleased that they have agreed: Denis Couvet from the Muséum National d'Histoire Naturelle (France), who specializes in conservation biology and population genetics, and Frédéric Farnir from the University of Liège (Belgium), whose research interests focus on the genetic and functional study of QTL involved in agricultural traits.
In this editorial note, we also wish to inform you about our misfortune with the calculation of the 2005 Impact Factor published in June 2006 in the "Journal of Citation Reports" by Thompson Scientific. Based on our calculations, the published 2005 IF of 1.62 turned out to be erroneous and in disfavour of GSE, which Thompson Scientific has acknowledged. The true 2005 IF is 1.783; thus, GSE occupies the 5th position in the section "Agriculture, Dairy & Animal Science" and the 82nd in the section "Genetics & Heredity". Corrections were made in JCR in October 2006.
Finally, GSE and EDP Sciences wish to keep up with the rapid changes in publication systems, i.e. the advent of "Open Access" publishing, to make scientific research widely and freely available. Thus, we are happy to announce that, as a first step in this direction, GSE now gives authors the possibility to choose how they want their paper to be published by offering the "Open Choice" option. With this option, authors can have their accepted articles made available to all interested readers (subscribers or non-subscribers) as soon as they are on-line, in exchange for a basic fee, i.e. 550 euros for papers published in 2007 (without VAT).
With all this news, we offer the collaborators, authors and readers of GSE our season's greetings and best wishes for a successful and productive New Year 2007.
doi:10.1186/1297-9686-39-1-1
PMCID: PMC3400394
