1.  A Smartphone Client-Server Teleradiology System for Primary Diagnosis of Acute Stroke 
Background
Recent advances in the treatment of acute ischemic stroke have made rapid acquisition, visualization, and interpretation of images a key factor for positive patient outcomes. We have developed a new teleradiology system based on a client-server architecture that enables rapid access to interactive advanced 2-D and 3-D visualization on a current generation smartphone device (Apple iPhone or iPod Touch, or an Android phone) without requiring patient image data to be stored on the device. Instead, a server loads and renders the patient images, then transmits a rendered frame to the remote device.
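To make the division of labour concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a rendering server of this kind: the patient volume stays on the server, and the client only ever receives rendered 2-D frames. The endpoint name, parameters, and the stand-in volume are illustrative assumptions.

```python
# Hypothetical sketch of a client-server remote-rendering service: the server
# holds the image volume and returns rendered frames, so no patient image data
# is stored on the phone. Endpoint and parameters are illustrative only.
from io import BytesIO

import numpy as np
from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)
# Stand-in for a loaded CT volume (slices x rows x cols); a real server would
# load DICOM data and support window/level and 3-D rendering.
VOLUME = np.random.randint(0, 256, size=(200, 512, 512), dtype=np.uint8)

@app.route("/frame")
def frame():
    slice_idx = int(request.args.get("slice", 100))    # client requests a slice
    rendered = VOLUME[slice_idx]                        # "render" it server-side
    buf = BytesIO()
    Image.fromarray(rendered).save(buf, format="PNG")   # encode the frame
    buf.seek(0)
    return send_file(buf, mimetype="image/png")         # ship pixels, not the study

if __name__ == "__main__":
    app.run(port=8080)
```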
Objective
Our objective was to determine if a new smartphone client-server teleradiology system is capable of providing accuracies and interpretation times sufficient for diagnosis of acute stroke.
Methods
This was a retrospective study. We obtained 120 recent consecutive noncontrast computed tomography (NCCT) brain scans and 70 computed tomography angiogram (CTA) head scans from the Calgary Stroke Program database. Scans were read by two neuroradiologists, one on a medical diagnostic workstation and an iPod or iPhone (hereafter referred to as an iOS device) and the other only on an iOS device. NCCT brain scans were evaluated for early signs of infarction, which includes early parenchymal ischemic changes and dense vessel sign, and to exclude acute intraparenchymal hemorrhage and stroke mimics. CTA brain scans were evaluated for any intracranial vessel occlusion. The interpretations made on an iOS device were compared with those made at a workstation. The total interpretation times were recorded for both platforms. Interrater agreement was assessed. True positives, true negatives, false positives, and false negatives were obtained, and sensitivity, specificity, and accuracy of detecting the abnormalities on the iOS device were computed.
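As a pointer to how the reported figures are derived, here is a small worked example (with invented counts, not the study's data) of computing sensitivity, specificity, accuracy, and Cohen's kappa for interrater agreement.

```python
# Illustration with made-up numbers of the metrics reported below:
# sensitivity, specificity, accuracy from a 2x2 confusion table, and
# Cohen's kappa for agreement between two readers' binary calls.
from sklearn.metrics import cohen_kappa_score

def diagnostic_metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for one finding read on the iOS device
print(diagnostic_metrics(tp=17, tn=84, fp=4, fn=0))

# Interrater agreement: the two readers' calls on the same set of scans
reader1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
reader2 = [1, 1, 0, 0, 1, 0, 0, 0, 0, 1]
print(cohen_kappa_score(reader1, reader2))
```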
Results
The sensitivity, specificity, and accuracy of detecting intraparenchymal hemorrhage were 100% using the iOS device with a perfect interrater agreement (kappa = 1). The sensitivity, specificity, and accuracy of detecting acute parenchymal ischemic change were 94.1%, 100%, and 98.09% respectively for reader 1 and 97.05%, 100%, and 99.04% for reader 2 with nearly perfect interrater agreement (kappa = .8). The sensitivity, specificity, and accuracy of detecting dense vessel sign were 100%, 95.4%, and 96.19% respectively for reader 1 and 72.2%, 100%, and 95.23% for reader 2 using the iOS device with a good interrater agreement (kappa = .69). The sensitivity, specificity, and accuracy of detecting vessel occlusion on CT angiography scans were 94.4%, 100%, and 98.46% respectively for both readers using the iOS device, with perfect interrater agreement (kappa = 1). Interpretation times did not differ significantly between the workstation and the iOS device at the .05 significance level.
Conclusion
The smartphone client-server teleradiology system appears promising and may have the potential to allow urgent management decisions in acute stroke. However, this study was retrospective, involved relatively few patient studies, and only two readers. Generalizing conclusions about its clinical utility, especially in other diagnostic use cases, should not be made until additional studies are performed.
doi:10.2196/jmir.1732
PMCID: PMC3221380  PMID: 21550961
Acute stroke; teleradiology; computed tomography; mhealth; mobile phone
2.  Prediction accuracy of a sample-size estimation method for ROC studies 
Academic Radiology  2010;17(5):628-638.
Rationale and Objectives
Sample-size estimation is an important consideration when planning a receiver operating characteristic (ROC) study. The aim of this work was to assess the prediction accuracy of a sample-size estimation method using the Monte Carlo simulation method.
Materials and Methods
Two ROC ratings simulators characterized by low reader and high case variabilities (LH) and high reader and low case variabilities (HL) were used to generate pilot data sets in 2 modalities. Dorfman-Berbaum-Metz multiple-reader multiple-case (DBM-MRMC) analysis of the ratings yielded estimates of the modality-reader, modality-case and error variances. These were input to the Hillis-Berbaum (HB) sample-size estimation method, which predicted the number of cases needed to achieve 80% power for 10 readers and an effect size of 0.06 in the pivotal study. Predictions that generalized to readers and cases (random-all), to cases only (random-cases) and to readers only (random-readers) were generated. A prediction-accuracy index defined as the probability that any single prediction yields true power in the range 75% to 90% was used to assess the HB method.
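The prediction-accuracy index can be pictured with a deliberately simplified simulation. The sketch below substitutes a plain two-sample z-test for the DBM-MRMC/Hillis-Berbaum machinery, so its numbers are purely illustrative: a noisy pilot estimate feeds a standard sample-size formula, and the index is the fraction of predictions whose true power lands between 75% and 90%.

```python
# Toy illustration of the prediction-accuracy index: the probability that a
# sample-size prediction from noisy pilot data yields true power in [75%, 90%].
# A two-sample z-test stands in for the MRMC analysis; numbers are not
# comparable to the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effect, alpha, target_power = 0.5, 0.05, 0.80
pilot_n, n_sims = 20, 2000
z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(target_power)

def true_power(n, sigma=1.0):
    # Exact power of a two-sample z-test with n subjects per group.
    se = sigma * np.sqrt(2 / n)
    return 1 - stats.norm.cdf(z_a - effect / se)

hits = 0
for _ in range(n_sims):
    # "Pilot study": estimate the nuisance variance from a small sample.
    sigma_hat = np.std(rng.normal(0, 1.0, size=pilot_n), ddof=1)
    # Predicted per-group sample size for the pivotal study.
    n_pred = int(np.ceil(2 * ((z_a + z_b) * sigma_hat / effect) ** 2))
    hits += 0.75 <= true_power(n_pred) <= 0.90

print("prediction-accuracy index:", hits / n_sims)
```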
Results
For random-case generalization the HB-method prediction-accuracy was reasonable, ~ 50% for 5 readers in the pilot study. Prediction-accuracy was generally higher under low reader variability conditions (LH) than under high reader variability conditions (HL). Under ideal conditions (many readers in the pilot study) the DBM-MRMC based HB method overestimated the number of cases. The overestimates could be explained by the observed large variability of the DBM-MRMC modality-reader variance estimates, particularly when reader variability was large (HL). The largest benefit of increasing the number of readers in the pilot study was realized for LH, where 15 readers were enough to yield prediction accuracy > 50% under all generalization conditions, but the benefit was lesser for HL where prediction accuracy was ~ 36% for 15 readers under random-all and random-reader conditions.
Conclusion
The HB method tends to overestimate the number of cases. Random-case generalization had reasonable prediction accuracy. Provided about 15 readers were used in the pilot study the method performed reasonably under all conditions for LH. When reader variability was large, the prediction-accuracy for random-all and random-reader generalizations was compromised. Study designers may wish to compare the HB predictions to those of other methods and to sample-sizes used in previous similar studies.
doi:10.1016/j.acra.2010.01.007
PMCID: PMC2867097  PMID: 20380980
ROC; sample-size; methodology assessment; statistical power; DBM; MRMC; simulation; Monte Carlo
3.  The NIEHS Predictive-Toxicology Evaluation Project. 
Environmental Health Perspectives  1996;104(Suppl 5):1001-1010.
The Predictive-Toxicology Evaluation (PTE) project conducts collaborative experiments that subject the performance of predictive-toxicology (PT) methods to rigorous, objective evaluation in a uniquely informative manner. Sponsored by the National Institute of Environmental Health Sciences, it takes advantage of the ongoing testing conducted by the U.S. National Toxicology Program (NTP) to estimate the true error of models that have been applied to make prospective predictions on previously untested, noncongeneric-chemical substances. The PTE project first identifies a group of standardized NTP chemical bioassays that are either scheduled to be conducted or ongoing but not yet complete. The project then announces and advertises the evaluation experiment, disseminates information about the chemical bioassays, and encourages researchers from a wide variety of disciplines to publish their predictions in peer-reviewed journals, using whatever approaches and methods they feel are best. A collection of such papers is published in this Environmental Health Perspectives Supplement, providing readers with the opportunity to compare and contrast PT approaches and models, within the context of their prospective application to an actual-use situation. This introduction to the collection of papers on predictive toxicology summarizes the predictions made and the final results obtained for the 44 chemical carcinogenesis bioassays of the first PTE experiment (PTE-1) and presents information that identifies the 30 chemical carcinogenesis bioassays of PTE-2, along with a table of prediction sets that have been published to date. It also provides background about the origin and goals of the PTE project, outlines the special challenge associated with estimating the true error of models that aspire to predict open-system behavior, and summarizes what has been learned to date.
PMCID: PMC1469687  PMID: 8933048
4.  Advice from a Medical Expert through the Internet on Queries about AIDS and Hepatitis: Analysis of a Pilot Experiment 
PLoS Medicine  2006;3(7):e256.
Background
Advice from a medical expert on concerns and queries expressed anonymously through the Internet by patients and later posted on the Web, offers a new type of patient–doctor relationship. The aim of the current study was to perform a descriptive analysis of questions about AIDS and hepatitis made to an infectious disease expert and sent through the Internet to a consumer-oriented Web site in the Spanish language.
Methods and Findings
Questions were e-mailed and the questions and answers were posted anonymously in the “expert-advice” section of a Web site focused on AIDS and hepatitis. We performed a descriptive study and a temporal analysis of the questions received in the first 12 months after the launch of the site. A total of 899 questions were received from December 2003 to November 2004, with a marked linear growth pattern. Questions originated in Spain in 68% of cases and 32% came from Latin America (the Caribbean, Central America, and South America). Eighty percent of the senders were male. Most of the questions concerned HIV infection (79%) with many fewer on hepatitis (17%). The highest numbers of questions were submitted just after the weekend (37% of questions were made on Mondays and Tuesdays). Risk factors for contracting HIV infection were the most frequent concern (69%), followed by the window period for detection (12.6%), laboratory results (5.9%), symptoms (4.7%), diagnosis (2.7%), and treatment (2.2%).
Conclusions
Our results confirm a great demand for this type of “ask-the-expert” Internet service, at least for AIDS and hepatitis. Factors such as anonymity, free access, and immediate answers have been key factors in its success.
Editors' Summary
Background.
Although substantial progress has been made in the fight against HIV/AIDS, in terms of developing new treatments and understanding factors that cause the disease to worsen, putting this knowledge into practice can be difficult. Two main barriers exist that can prevent individuals seeking information or treatment. The first is the considerable social stigma still associated with HIV; the second is the poverty of the developing countries—such as those in Latin America—where the disease has reached pandemic proportions. In addition, the disease, which used to be spread mainly through the sharing of injecting drug needles or through sex between men, has now entered the general population. When healthcare services are limited, people are often unable to seek information about HIV, and even when services do exist, the cost of accessing them can be too high. The same is true for other diseases such as hepatitis infection, which often co-exists with HIV. The Internet has the potential to go some way to filling this health information gap. And, many patients seek information on the Internet before consulting their doctor.
Why Was This Study Done?
In 2003, the Madrid-based newspaper El Mundo launched an HIV and hepatitis information resource situated in the health section of its existing Web site. One aspect of this resource was an “ask-the-expert” section, in which readers could anonymously e-mail questions about HIV and hepatitis that would be answered by an infectious disease expert. These ranged from how the diseases can be transmitted and who is most at risk, to what to do if an individual thinks they might have the disease. There seems to be a clear need for this Spanish-language service; in Latin America, 2.1 million people are infected with HIV, with 230,000 new cases in 2005. In the Caribbean, AIDS is the leading cause of death in people aged 15–44 years. In Spain, 71,000 people were infected with HIV in 2005. Although the Internet contains a vast store of health information, and many aspects of patient–doctor interactions have been made electronic, little is known about what format is ideal. The researchers, who included employees of the newspaper, decided to investigate the effectiveness of the question–answer format used by El Mundo.
What Did the Researchers Do and Find?
In the first 12 months after the service was launched, the researchers recorded several details: what day of the week questions were sent, what the questions were about, and whether they were sent by the person needing the information or by a family member or friend. They also noted demographic information, such as the age, sex, and country of origin of the person e-mailing the question.
Of 899 questions sent to the Web site between December 2003 and November 2004, most (80%) were sent by males. Most questions came from Spain, followed by Latin America, and most questions were sent on Mondays and Tuesdays. Some e-mails were from people who felt they had been waiting too long for an answer to their first e-mail—despite the mean time for answering a question being less than seven days. Messages of support for the Web site rose during the year from 2% to 22%.
What Do These Findings Mean?
The messages of support and encouragement sent in by users indicated that the service was well-received and useful. Most of the questions were about HIV rather than about hepatitis, which the researchers say could represent the more prominent media coverage of HIV. However, despite the disease's high profile, the questions about HIV were very basic. It could also mean that people hold a false impression that hepatitis is a less serious illness or that they have more information about it than about HIV.
Since most questions were sent in at the start of the week, the researchers believe that many individuals wrote in after engaging in potentially risky sexual behaviour over the weekend.
The researchers also found that existing information on the Web site already answered many of the new questions, indicating that people prefer a question-and-answer model over ready-prepared information. The anonymity, free access, and immediacy of the Internet-based service suggest this could be a model for providing other types of health information.
The findings also suggest that such a service can highlight the needs and concerns of specific populations and can help health planners and policymakers respond to those needs in their countries.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030256.
• The AIDSinfo Web site from the US Department of Health and Human Services provides information on all aspects of HIV/AIDS treatment and prevention and has sections specially written for patients and the general public
• AVERT, an international AIDS charity, has a section on HIV in Latin America that includes details of transmission, infection rates, and treatment
Marco and colleagues analyzed questions sent by the public to a Spanish language "ask-the-expert" Internet site, and found that 70% of queries were about risk factors for acquiring HIV.
doi:10.1371/journal.pmed.0030256
PMCID: PMC1483911  PMID: 16796404
5.  The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality 
PLoS Medicine  2007;4(1):e40.
Background
Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings
306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
Conclusions
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement routine review ratings systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
A survey of experienced reviewers, asked about training they had received in peer review, found there are no easily identifiable types of formal training and experience that predict reviewer performance.
Editors' Summary
Background.
When medical researchers have concluded their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical use. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are of good quality and accurate and whether they will be of interest to the readers of their journal. To do this they need to obtain specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as “peer reviewers.” Although the editors make the final decision, the advice and criticism of these peer reviewers to the editors is essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to the article by the time it is finally published may, therefore, be quite considerable.
Although peer review is accepted as a key part of the process for the publishing of medical research, many people have argued that there are flaws in the system. For example, there may be an element of luck involved; one author might find their paper being reviewed by a reviewer who is biased against the approach they have adopted or who is a very critical person by nature, and another author may have the good fortune to have their work considered by someone who is much more favorably disposed toward their work. Some reviewers are more knowledgeable and thorough in their work than others. The editors of medical journals try to take into account such biases and quality factors in their choice of peer reviewers or when assessing the reviews. Some journals have run training courses for experts who review for them regularly to try to make the standard of peer review as high as possible.
Why Was This Study Done?
It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare them with the quality of the reviews provided. This would help journal editors select good people for the task in future, and as a result will affect the quality of science they publish for readers, including other researchers.
What Did the Researchers Do and Find?
The authors contacted all the regular reviewers from one specialist journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.
What Do These Findings Mean?
This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors suggest that it is therefore essential that journals routinely monitor the quality of reviews submitted to them to ensure they are getting good advice (a practice that is not universal).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/doi:10.1371/journal.pmed.0040040
• WAME is an association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals
• The Fifth International Congress on Peer Review and Biomedical Publication is one of a series of conferences on peer review
• The PLoS Medicine guidelines for reviewers outline what we look for in a review
• The Council of Science Editors promotes ethical scientific publishing practices
• An editorial also published in this issue of PLoS Medicine discusses the peer review process further
doi:10.1371/journal.pmed.0040040
PMCID: PMC1796627  PMID: 17411314
6.  Effects of iron supplementation and anthelmintic treatment on motor and language development of preschool children in Zanzibar: double blind, placebo controlled study 
BMJ: British Medical Journal  2001;323(7326):1389.
Objective
To measure the effects of iron supplementation and anthelmintic treatment on iron status, anaemia, growth, morbidity, and development of children aged 6-59 months.
Design
Double blind, placebo controlled randomised factorial trial of iron supplementation and anthelmintic treatment.
Setting
Community in Pemba Island, Zanzibar.
Participants
614 preschool children aged 6-59 months.
Main outcome measures
Development of language and motor skills assessed by parental interview before and after treatment in age appropriate subgroups.
Results
Before intervention, anaemia was prevalent and severe, and geohelminth infections were prevalent and light—Plasmodium falciparum infection was nearly universal. Iron supplementation significantly improved iron status, but not haemoglobin status. Iron supplementation improved language development by 0.8 (95% confidence interval 0.2 to 1.4) points on the 20 point scale. Iron supplementation also improved motor development, but this effect was modified by baseline haemoglobin concentrations (P=0.015 for interaction term) and was apparent only in children with baseline haemoglobin concentrations <90 g/l. In children with a baseline haemoglobin concentration of 68 g/l (one standard deviation below the mean value), iron treatment increased scores by 1.1 (0.1 to 2.1) points on the 18 point motor scale. Mebendazole significantly reduced the number and severity of infections caused by Ascaris lumbricoides and Trichuris trichiura, but not by hookworms. Mebendazole increased development scores by 0.4 (−0.3 to 1.1) points on the motor scale and 0.3 (−0.3 to 0.9) points on the language scale.
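The effect modification reported here (P=0.015 for the interaction term) is the kind of result obtained from a regression model containing a treatment-by-baseline-haemoglobin interaction. A brief sketch with simulated data, not the trial's, is given below; variable names and numbers are invented.

```python
# Sketch (simulated data) of testing effect modification with an interaction
# term, analogous to the reported iron-by-baseline-haemoglobin interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "iron": rng.integers(0, 2, n),        # 1 = iron supplementation (hypothetical)
    "hb0": rng.normal(95, 13, n),         # baseline haemoglobin, g/l (hypothetical)
})
# Simulate a motor score whose iron benefit shrinks as baseline Hb rises.
df["motor"] = 10 + 0.02 * df["hb0"] + df["iron"] * (4 - 0.04 * df["hb0"]) + rng.normal(0, 2, n)

model = smf.ols("motor ~ iron * hb0", data=df).fit()
print(model.summary().tables[1])   # the iron:hb0 row carries the interaction P value
```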
Conclusions
Iron supplementation improved motor and language development of preschool children in rural Africa. The effects of iron on motor development were limited to children with more severe anaemia (baseline haemoglobin concentration <90 g/l). Mebendazole had a positive effect on motor and language development, but this was not statistically significant.
What is already known on this topic
• Iron is needed for development and functioning of the human brain
• Anaemic children show developmental delays, but it is not yet clear whether iron deficiency causes these deficits or whether iron supplementation can reverse them
• Helminth infections in schoolchildren are associated with cognitive deficits, but few studies have been made of helminth infection and early child development
What this study adds
• Low doses of oral iron supplementation given daily improved language development in children aged 1-4 years in Zanzibar
• Iron supplementation improved motor development, but only in children with initial haemoglobin concentrations below 90 g/l
• The effects of routine anthelmintic treatment on motor and language milestones were positive, but non-significant, with our sample size
PMCID: PMC60982  PMID: 11744561
7.  A qualitative study of motivators and barriers to healthy eating in pregnancy for low-income, overweight, african-american mothers 
Poor diet quality is common among low-income, overweight, African-American mothers, placing them at high risk for adverse pregnancy outcomes. We sought to better understand the contextual factors that may influence low-income African-American mothers' diet quality during pregnancy. In 2011, we conducted semi-structured interviews with 21 overweight/obese, pregnant African Americans in Philadelphia, all of whom received Medicaid and were eligible for the Supplemental Nutrition Program for Women, Infants, and Children. Two readers independently coded the interview transcripts to identify recurrent themes. We identified ten themes around motivators and barriers to healthy eating in pregnancy. Mothers believed that consuming healthy foods, like fruits and vegetables, would lead to healthy babies and limit the physical discomforts of pregnancy. However, more often than not, mothers chose foods that were high in fats and sugars because of taste, cost, and convenience. Additionally, mothers had several misconceptions about the definition of healthy (e.g., “juice is good for baby”), which led to overconsumption. Many mothers feared they might “starve” their babies if they didn't get enough to eat, promoting persistent snacking and larger portions. Living in multigenerational households and sharing resources also limited mothers' control over food choices and made consuming healthy foods especially difficult. Despite the good intentions of low-income African-American mothers to improve diet quality during pregnancy, multiple factors worked together as barriers to healthy eating. Interventions which emphasize tasty and affordable healthy food substitutes, address misconceptions, and counsel mothers about true energy needs in pregnancy may improve low-income, African-American, overweight/obese mothers' diet quality.
doi:10.1016/j.jand.2013.05.014
PMCID: PMC3782301  PMID: 23871106
Pregnancy; Diet quality; Low-income; Motivators; Barriers
8.  Intermittent oral iron supplementation during pregnancy (Review) 
Background
Anaemia is a frequent condition during pregnancy, particularly among women from developing countries who have insufficient iron intake to meet the increased iron needs of both the mother and the fetus. Traditionally, gestational anaemia has been prevented with the provision of daily iron supplements throughout pregnancy, but poor adherence to this regimen due to side effects, interrupted supply of the supplements, and concerns about safety among women with an adequate iron intake have limited the use of this intervention. Intermittent (i.e. one, two or three times a week on non-consecutive days) supplementation with iron alone or in combination with folic acid or other vitamins and minerals has recently been proposed as an alternative to daily supplementation.
Objectives
To assess the benefits and harms of intermittent supplementation with iron alone or in combination with folic acid or other vitamins and minerals to pregnant women on neonatal and pregnancy outcomes.
Search methods
We searched the Cochrane Pregnancy and Childbirth Group’s Trials Register (23 March 2012). We also searched the WHO International Clinical Trials Registry Platform (ICTRP) for ongoing studies and contacted relevant organisations for the identification of ongoing and unpublished studies (23 March 2012).
Selection criteria
Randomised or quasi-randomised trials.
Data collection and analysis
We assessed the methodological quality of trials using standard Cochrane criteria. Two review authors independently assessed trial eligibility, extracted data and conducted checks for accuracy.
Main results
This review includes 21 trials from 13 different countries, but only 18 trials (with 4072 women) reported on our outcomes of interest and contributed data to the review. All of these studies compared daily versus intermittent iron supplementation.
Three studies provided iron alone, 12 provided iron+folic acid, and three provided iron plus multiple vitamins and minerals. Their methodological quality was mixed and most had high levels of attrition. Overall, there was no clear evidence of differences between groups for the infant primary outcomes: low birthweight (average risk ratio (RR) 0.96; 95% confidence interval (CI) 0.61 to 1.52, seven studies), infant birthweight (mean difference (MD) −8.62 g; 95% CI −52.76 g to 35.52 g, eight studies), and premature birth (average RR 1.82; 95% CI 0.75 to 4.40, four studies). None of the studies reported neonatal deaths or congenital anomalies.
For maternal outcomes, there was no clear evidence of differences between groups for anaemia at term (average RR 1.22; 95% CI 0.84 to 1.80, four studies), and women receiving intermittent supplementation had fewer side effects (average RR 0.56; 95% CI 0.37 to 0.84, 11 studies) than those receiving daily supplements. Women receiving intermittent supplements were also at lower risk of having high haemoglobin (Hb) concentrations (greater than 130 g/L) during the second or third trimester of pregnancy (average RR 0.48; 95% CI 0.35 to 0.67, 13 studies). There were no significant differences in iron-deficiency anaemia between women receiving intermittent or daily iron+folic acid supplementation (average RR 0.71; 95% CI 0.08 to 6.63, 1 study). There were no maternal deaths (six studies) or women with severe anaemia in pregnancy (six studies). None of the studies reported on iron deficiency at term or infections during pregnancy.
Where sufficient data were available for primary outcomes, we set up subgroups to look for possible differences between studies in terms of earlier or later supplementation; women’s anaemia status at the start of supplementation; higher and lower weekly doses of iron; and the malarial status of the region in which the trials were conducted. There was no clear effect of these variables on the results of the review.
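For readers unfamiliar with how risk ratios and confidence intervals of the kind quoted above are computed, here is a minimal worked example using invented counts (not data from this review): the RR comes from a 2x2 table and its 95% CI from the usual log-scale standard error.

```python
# Risk ratio and 95% CI from a 2x2 table; counts are hypothetical.
import numpy as np
from scipy import stats

a, n1 = 30, 400   # events / total in the intermittent group (invented)
b, n2 = 31, 395   # events / total in the daily group (invented)

rr = (a / n1) / (b / n2)
se_log_rr = np.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # SE of log(RR)
z = stats.norm.ppf(0.975)
ci = np.exp(np.log(rr) + np.array([-z, z]) * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```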
Authors’ conclusions
The present systematic review is the most comprehensive summary of the evidence assessing the benefits and harms of intermittent iron supplementation regimens in pregnant women on haematological and pregnancy outcomes. The findings suggest that intermittent iron+folic acid regimens produce maternal and infant outcomes at birth similar to those of daily supplementation but are associated with fewer side effects. Women receiving daily supplements had an increased risk of developing high levels of Hb in mid and late pregnancy but were less likely to present with mild anaemia near term. Although the evidence is limited and the quality of the trials was low or very low, intermittent supplementation may be a feasible alternative to daily iron supplementation among those pregnant women who are not anaemic and have adequate antenatal care.
doi:10.1002/14651858.CD009997
PMCID: PMC4053594  PMID: 22786531
*Dietary Supplements [adverse effects]; Administration, Oral; Anemia, Iron-Deficiency [blood; *prevention & control]; Developing Countries; Drug Administration Schedule; Drug Combinations; Folic Acid [administration & dosage]; Hemoglobin A [metabolism]; Infant, Low Birth Weight; Infant, Newborn; Iron [*administration & dosage; adverse effects]; Iron, Dietary [*administration & dosage]; Pregnancy Complications, Hematologic [blood; prevention & control]; Premature Birth; Randomized Controlled Trials as Topic; Vitamins [administration & dosage]; Female; Humans; Pregnancy
9.  The Effect of Film Quality on Reading Radiographs of Simple Pneumoconiosis in a trial of X-ray sets 
Four chest radiographs (14 in. × 14 in. postero-anterior) for each of 86 coal-miners were taken (in a trial to compare X-ray sets) and assessed by a number of experienced readers for both quality and pneumoconiosis. All films were developed by one technician under standard conditions so that variations in the quality of the films produced for one subject arose because of differences in the sets and in the way they were used by the radiographers taking the films. The data thus obtained allowed a study of film quality to be made (a) in relation to the subject and (b) as it affected the reading of simple pneumoconiosis.
The subjects were selected to include a high proportion whose earlier radiographs showed pneumoconiosis; they were thus substantially older than a normal colliery population.
The assessments of quality were found to be reasonably consistent both between observers and on different occasions for the same observer.
A clear tendency was found for the quality of a film to depend on the subject. Men with no radiological evidence of pneumoconiosis tended to produce films which were assessed as of better quality than those of men with pneumoconiosis, however slight. Among the latter, chest thickness had an important effect on film quality; men with thicker chests produced poorer films. The subject's age did not appear to have any effect on the quality of his film.
Film quality was found to introduce only slight biases into the reading of pneumoconiosis. Individual readers varied considerably so that, although on average the readers tended to overcorrect for technical faults, i.e. to read more abnormality in black films than in good ones, and less in grey, some readers undercorrected slightly.
What little evidence was available did not suggest that poor quality of films introduced any excess variability into film reading.
PMCID: PMC1038146  PMID: 13761945
10.  Executive functioning and reading achievement in school: a study of Brazilian children assessed by their teachers as “poor readers” 
This study examined executive functioning and reading achievement in 106 6- to 8-year-old Brazilian children from a range of social backgrounds, of whom approximately half lived below the poverty line. A particular focus was to explore the executive function profile of children whose classroom reading performance was judged below standard by their teachers and who were matched to controls on chronological age, sex, school type (private or public), domicile (Salvador/BA or São Paulo/SP) and socioeconomic status. Children completed a battery of 12 executive function tasks conceptualized as tapping cognitive flexibility, working memory, inhibition and selective attention. Each executive function domain was assessed by several tasks. Principal component analysis extracted four factors that were labeled “Working Memory/Cognitive Flexibility,” “Interference Suppression,” “Selective Attention,” and “Response Inhibition.” Individual differences in executive functioning components made differential contributions to early reading achievement. The Working Memory/Cognitive Flexibility factor emerged as the best predictor of reading. Group comparisons on computed factor scores showed that struggling readers displayed limitations in Working Memory/Cognitive Flexibility, but not in other executive function components, compared to more skilled readers. These results validate the account that working memory capacity provides a crucial building block for the development of early literacy skills and extend it to a population of early readers of Portuguese from Brazil. The study suggests that deficits in working memory/cognitive flexibility might represent one contributing factor to reading difficulties in early readers. This may have important implications for how educators intervene with children at risk of academic underachievement.
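A rough sketch of this kind of analysis pipeline, using simulated scores rather than the study's data, is shown below: standardize the 12 task scores, extract principal components, and relate component scores to a reading measure. All names and numbers are illustrative assumptions.

```python
# Simulated illustration of the pipeline: PCA over 12 executive function task
# scores, then component scores used to predict a reading measure.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
tasks = rng.normal(size=(106, 12))                 # 106 children x 12 EF task scores
reading = tasks[:, :3].mean(axis=1) + rng.normal(0, 0.5, 106)   # toy reading outcome

scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(tasks))
model = LinearRegression().fit(scores, reading)
print("R^2 of EF components predicting reading:", model.score(scores, reading))
```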
doi:10.3389/fpsyg.2014.00550
PMCID: PMC4050967  PMID: 24959155
executive function; reading; working memory; cognitive flexibility; selective attention; inhibition; poverty; learning difficulties
11.  Clustered Environments and Randomized Genes: A Fundamental Distinction between Conventional and Genetic Epidemiology  
PLoS Medicine  2007;4(12):e352.
Background
In conventional epidemiology confounding of the exposure of interest with lifestyle or socioeconomic factors, and reverse causation whereby disease status influences exposure rather than vice versa, may invalidate causal interpretations of observed associations. Conversely, genetic variants should not be related to the confounding factors that distort associations in conventional observational epidemiological studies. Furthermore, disease onset will not influence genotype. Therefore, it has been suggested that genetic variants that are known to be associated with a modifiable (nongenetic) risk factor can be used to help determine the causal effect of this modifiable risk factor on disease outcomes. This approach, mendelian randomization, is increasingly being applied within epidemiological studies. However, there is debate about the underlying premise that associations between genotypes and disease outcomes are not confounded by other risk factors. We examined the extent to which genetic variants, on the one hand, and nongenetic environmental exposures or phenotypic characteristics on the other, tend to be associated with each other, to assess the degree of confounding that would exist in conventional epidemiological studies compared with mendelian randomization studies.
Methods and Findings
We estimated pairwise correlations between nongenetic baseline variables and genetic variables in a cross-sectional study comparing the number of correlations that were statistically significant at the 5%, 1%, and 0.01% level (α = 0.05, 0.01, and 0.0001, respectively) with the number expected by chance if all variables were in fact uncorrelated, using a two-sided binomial exact test. We demonstrate that behavioural, socioeconomic, and physiological factors are strongly interrelated, with 45% of all possible pairwise associations between 96 nongenetic characteristics (n = 4,560 correlations) being significant at the p < 0.01 level (the ratio of observed to expected significant associations was 45; p-value for difference between observed and expected < 0.000001). Similar findings were observed for other levels of significance. In contrast, genetic variants showed no greater association with each other, or with the 96 behavioural, socioeconomic, and physiological factors, than would be expected by chance.
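The observed-versus-expected comparison described here can be illustrated with a short simulation (independent variables, i.e. the null case): count the significant pairwise correlations at a given alpha and compare that count with its chance expectation using a two-sided binomial test. The data and variable counts below are stand-ins, not the study's.

```python
# Count significant pairwise correlations among simulated independent
# variables and compare with the chance expectation via a binomial test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects, n_vars, alpha = 4000, 96, 0.01
X = rng.normal(size=(n_subjects, n_vars))   # uncorrelated by construction

n_sig, n_pairs = 0, 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(X[:, i], X[:, j])
        n_pairs += 1
        n_sig += p < alpha

expected = alpha * n_pairs
test = stats.binomtest(n_sig, n_pairs, alpha, alternative="two-sided")
print(f"observed {n_sig}, expected {expected:.0f}, "
      f"ratio {n_sig/expected:.2f}, p = {test.pvalue:.3f}")
```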
Conclusions
These data illustrate why observational studies have produced misleading claims regarding potentially causal factors for disease. The findings demonstrate the potential power of a methodology that utilizes genetic variants as indicators of exposure level when studying environmentally modifiable risk factors.
In a cross-sectional study Davey Smith and colleagues show why observational studies can produce misleading claims regarding potential causal factors for disease, and illustrate the use of mendelian randomization to study environmentally modifiable risk factors.
Editors' Summary
Background.
Epidemiology is the study of the distribution and causes of human disease. Observational epidemiological studies investigate whether particular modifiable factors (for example, smoking or eating healthily) are associated with the risk of a particular disease. The link between smoking and lung cancer was discovered in this way. Once the modifiable factors associated with a disease are established as causal factors, individuals can reduce their risk of developing that disease by avoiding causative factors or by increasing their exposure to protective factors. Unfortunately, modifiable factors that are associated with risk of a disease in observational studies sometimes turn out not to cause or prevent disease. For example, higher intake of vitamins C and E apparently protected people against heart problems in observational studies, but taking these vitamins did not show any protection against heart disease in randomized controlled trials (studies in which identical groups of patients are randomly assigned various interventions and then their health monitored). One explanation for this type of discrepancy is known as confounding—the distortion of the effect of one factor by the presence of another that is associated both with the exposure under study and with the disease outcome. So in this example, people who took vitamin supplements might also have exercised more than people who did not take supplements, and it could have been the exercise rather than the supplements that was protective against heart disease.
Why Was This Study Done?
It isn't always possible to check the results of observational studies in randomized controlled trials, so epidemiologists have developed other ways to minimize confounding. One approach is known as mendelian randomization. Several gene variants have been identified that affect risk factors. For example, variants in a gene called APOE affect the level of cholesterol in an individual's blood, a risk factor for heart disease. People inherit gene variants randomly from their parents to build up their own unique genotype (total genetic makeup). Consequently, a study that examines the associations between a gene variant and a disease can indicate whether the risk factor affected by that gene variant causes the disease. There should be no confounding in this type of study, the argument goes, because different genetic variants should not be associated with each other or with nongenetic variables that typically confound directly assessed associations between risk factors and disease. But is this true? In this study, the researchers tested whether nongenetic risk factors are confounded by each other, and whether genetic variants are confounded by nongenetic risk factors and by other genetic variants.
What Did the Researchers Do and Find?
Using data collected in the British Women's Heart and Health Study, the researchers calculated how many pairs of nongenetic variables (for example, frequency of eating meat, alcohol intake) were significantly correlated with each other. That is, they counted the pairs of nongenetic variables whose correlation was stronger than would be expected by chance. They compared this number with the number of correlations that would occur by chance if all the variables were totally independent. When the researchers assumed that 1 in 100 combinations of pairs of variables would have been correlated by chance, the ratio of observed to expected significant correlations was 45; in other words, significant correlations occurred 45 times more often than chance would predict. When the researchers repeated this exercise with genetic variants, the ratio of observed to expected significant correlations was 1.58, a figure not significantly different from 1. Similarly, the ratio of observed to expected significant correlations when pairwise combinations between genetic and nongenetic variants were considered was 1.22.
What Do These Findings Mean?
These findings have two main implications. First, the large excess of observed over expected associations among the nongenetic variables indicates that many nongenetic modifiable factors occur in clusters—for example, people with healthy diets often have other healthy habits. Researchers doing observational studies always try to adjust for confounding but this result suggests that this adjustment will be hard to do, in part because it will not always be clear which factors are confounders. Second, the lack of a large excess of observed over expected associations among the genetic variables (and also among genetic variables paired with nongenetic variables) indicates that little confounding is likely to occur in studies that use mendelian randomization. In other words, this approach is a valid way to identify which environmentally modifiable risk factors cause human disease.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040352.
Wikipedia has pages on epidemiology and on mendelian randomization (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages).
Epidemiology for the Uninitiated is a primer from the British Medical Journal
Information is available on the British Women's Heart and Health Study
doi:10.1371/journal.pmed.0040352
PMCID: PMC2121108  PMID: 18076282
12.  Effect of Folic Acid and Betaine Supplementation on Flow-Mediated Dilation: A Randomized, Controlled Study in Healthy Volunteers 
PLoS Clinical Trials  2006;1(2):e10.
Objectives:
We investigated whether lowering of fasting homocysteine concentrations, either with folic acid or with betaine supplementation, differentially affects vascular function, a surrogate marker for risk of cardiovascular disease, in healthy volunteers. As yet, it remains uncertain whether a high concentration of homocysteine itself or whether a low folate status—its main determinant—is involved in the pathogenesis of cardiovascular disease. To shed light on this issue, we performed this study.
Design:
This was a randomized, placebo-controlled, double-blind, crossover study.
Setting:
The study was performed at Wageningen University in Wageningen, the Netherlands.
Participants:
Participants were 39 apparently healthy men and women, aged 50–70 y.
Interventions:
Participants ingested 0.8 mg/d of folic acid, 6 g/d of betaine, and placebo for 6 wk each, with 6-wk washout in between.
Outcome Measures:
At the end of each supplementation period, plasma homocysteine concentrations and flow-mediated dilation (FMD) of the brachial artery were measured in duplicate.
Results:
Folic acid supplementation lowered fasting homocysteine by 20% (−2.0 μmol/l, 95% confidence interval [CI]: −2.3; −1.6), and betaine supplementation lowered fasting plasma homocysteine by 12% (−1.2 μmol/l; −1.6; −0.8) relative to placebo. Mean (± SD) FMD after placebo supplementation was 2.8 (± 1.8) FMD%. Supplementation with betaine or folic acid did not affect FMD relative to placebo; differences relative to placebo were −0.4 FMD% (95%CI, −1.2; 0.4) and −0.1 FMD% (−0.9; 0.7), respectively.
Conclusions:
Neither folic acid nor betaine supplementation improved vascular function in healthy volunteers, despite evident homocysteine lowering. This is in agreement with other studies in healthy participants, the majority of which also fail to find improved vascular function upon folic acid treatment. However, homocysteine or folate might of course affect cardiovascular disease risk through other mechanisms.
Editorial Commentary
Background: Evidence from observational studies indicates a link between high concentrations of homocysteine (an amino acid) in the blood and increased risk of cardiovascular disease. However, the basis for the link between homocysteine concentrations and cardiovascular disease risk is not clear. Supplementing the diet with B-vitamins lowers homocysteine levels, and large-scale trials are underway that will determine whether B-vitamin supplementation has an effect on cardiovascular outcomes, such as heart attacks and strokes. These trials also involve administration of folic acid as well as other B-vitamins. It is not obvious, however, whether the effects of B-vitamin supplementation arise as a result of homocysteine lowering or via some other biochemical pathway.
What this trial shows: Olthof and colleagues aimed to further understand the effects of homocysteine lowering by randomizing 40 healthy volunteer participants to receive either folic acid supplementation; placebo; or betaine, a nutrient that lowers homocysteine levels via a different biochemical pathway than folic acid. Each participant in the trial received each supplement for 6 wk, with a 6-wk washout period before the next supplement was given. The researchers then used a technique called flow-mediated dilation (FMD) to measure functioning of the main artery of the upper arm, as a surrogate for cardiovascular disease risk. In this trial, both folic acid and betaine supplementation significantly lowered homocysteine levels over the 6-wk supplementation period. However, both forms of supplementation failed to result in any significant change in functioning of the artery, as measured using FMD.
Strengths and limitations: In this trial 40 participants were recruited, and 39 were followed up to trial completion. A crossover design was used, with each participant receiving each supplement and a placebo in sequence. This method enabled a smaller number of participants to be used to answer the question of interest, as compared to parallel-group designs. The majority of participants in the trial were followed up. However, the trial's outcomes are surrogates for cardiovascular disease risk, measured over fairly short time periods, and no clinical outcomes were examined.
Contribution to the evidence: This trial adds to the evidence on the effects of nutrient supplementation on surrogate outcomes for cardiovascular disease risk. The results show that over a 6-wk study period, these surrogate outcomes are not affected by either folic acid or betaine supplementation.
doi:10.1371/journal.pctr.0010010
PMCID: PMC1488898  PMID: 16871332
13.  Call to Action on Use and Reimbursement for Home Blood Pressure Monitoring A Joint Scientific Statement From the American Heart Association, American Society of Hypertension, and the Preventive Cardiovascular Nurses’ Association 
Hypertension  2008;52(1):10-29.
The standard method for the measurement of blood pressure (BP) in clinical practice has traditionally been to use readings taken with the auscultatory technique by a physician or nurse in a clinic or office setting. While such measurements are likely to remain the cornerstone for the diagnosis and management of hypertension for the foreseeable future, it is becoming increasingly clear that they often give inadequate or even misleading information about a patient’s true BP status. All clinical measurements of BP may be regarded as surrogate estimates of the “True” BP, which may be regarded as the average level over prolonged periods of time. In the past 30 years there has been an increasing trend to supplement office or clinic readings with out-of-office measurements of BP, taken either by the patient or a relative at home (home or self-monitoring, HBPM) or by an automated recorder for 24 hours (ambulatory blood pressure monitoring, ABPM).
Of the two methods HBPM has the greatest potential for being incorporated into the routine care of hypertensive patients, in the same way that home blood glucose monitoring performed by the patient has become a routine part of the management of diabetes. The currently available monitors are relatively reliable, easy to use, inexpensive, and accurate, and are already being purchased in large numbers by patients. Despite this, their use has only been cursorily endorsed in current guidelines for the management of hypertension, and there have been no detailed recommendations as to how they should be incorporated into routine clinical practice. And despite the fact that there is strong evidence that HBPM can predict clinical outcomes and improve clinical care, the cost of the monitors is not generally reimbursed. It is the purpose of this Call to Action paper to address the issues of the incorporation of HBPM into the routine management of hypertensive patients and its reimbursement.
doi:10.1161/HYPERTENSIONAHA.107.189010
PMCID: PMC2989415  PMID: 18497370
14.  Yes, You Can? A Speaker’s Potency to Act upon His Words Orchestrates Early Neural Responses to Message-Level Meaning 
PLoS ONE  2013;8(7):e69173.
Evidence is accruing that, in comprehending language, the human brain rapidly integrates a wealth of information sources–including the reader or hearer’s knowledge about the world and even his/her current mood. However, little is known to date about how language processing in the brain is affected by the hearer’s knowledge about the speaker. Here, we investigated the impact of social attributions to the speaker by measuring event-related brain potentials while participants watched videos of three speakers uttering true or false statements pertaining to politics or general knowledge: a top political decision maker (the German Federal Minister of Finance at the time of the experiment), a well-known media personality and an unidentifiable control speaker. False versus true statements engendered an N400 - late positivity response, with the N400 (150–450 ms) constituting the earliest observable response to message-level meaning. Crucially, however, the N400 was modulated by the combination of speaker and message: for false versus true political statements, an N400 effect was only observable for the politician, but not for either of the other two speakers; for false versus true general knowledge statements, an N400 was engendered by all three speakers. We interpret this result as demonstrating that the neurophysiological response to message-level meaning is immediately influenced by the social status of the speaker and whether he/she has the power to bring about the state of affairs described.
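As a simplified illustration of how an N400 effect of this kind is quantified (simulated EEG, plain NumPy, nothing from the study): epochs are averaged per condition, and the mean amplitude in the 150-450 ms window is compared between false and true statements.

```python
# Simulated ERP example: average epochs per condition and measure the mean
# amplitude difference in a 150-450 ms window (an "N400 effect").
import numpy as np

rng = np.random.default_rng(4)
sfreq = 500                                  # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / sfreq)      # epoch from -200 ms to 800 ms
window = (times >= 0.15) & (times <= 0.45)   # analysis window, 150-450 ms

def simulate_epochs(n400_amp, n_trials=40):
    # Each epoch: noise plus a negative deflection peaking near 400 ms.
    bump = n400_amp * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    return rng.normal(0, 2, size=(n_trials, times.size)) - bump

erp_false = simulate_epochs(n400_amp=3.0).mean(axis=0)   # larger N400 for false statements
erp_true = simulate_epochs(n400_amp=1.0).mean(axis=0)

effect = erp_false[window].mean() - erp_true[window].mean()
print(f"N400 effect (false minus true) in 150-450 ms: {effect:.2f} microvolts")
```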
doi:10.1371/journal.pone.0069173
PMCID: PMC3722173  PMID: 23894425
15.  A Randomized Controlled Trial of Folate Supplementation When Treating Malaria in Pregnancy with Sulfadoxine-Pyrimethamine 
PLoS Clinical Trials  2006;1(6):e28.
Objectives:
Sulfadoxine-pyrimethamine (SP) is an antimalarial drug that acts on the folate metabolism of the malaria parasite. We investigated whether folate (FA) supplementation in a high or a low dose affects the efficacy of SP for the treatment of uncomplicated malaria in pregnant women.
Design:
This was a randomized, placebo-controlled, double-blind trial.
Setting:
The trial was carried out at three hospitals in western Kenya.
Participants:
The participants were 488 pregnant women presenting at their first antenatal visit with uncomplicated malaria parasitaemia (density of ≥ 500 parasites/μl), a haemoglobin level higher than 7 g/dl, a gestational age between 17 and 34 weeks, and no history of antimalarial or FA use, or sulfa allergy. A total of 415 women completed the study.
Interventions:
All participants received SP and iron supplementation. They were randomized to the following arms: FA 5 mg, FA 0.4 mg, or FA placebo. After 14 days, all participants continued with FA 5 mg daily as per national guidelines. Participants were followed at days 2, 3, 7, 14, 21, and 28 or until treatment failure.
Outcome Measures:
The outcomes were SP failure rate and change in haemoglobin at day 14.
Results:
The proportion of treatment failure at day 14 was 13.9% (19/137) in the placebo group, 14.5% (20/138) in the FA 0.4 mg arm (adjusted hazard ratio [AHR], 1.07; 98.7% confidence interval [CI], 0.48 to 2.37; p = 0.8), and 27.1% (38/140) in the FA 5 mg arm (AHR, 2.19; 98.7% CI, 1.09 to 4.40; p = 0.005). The haemoglobin levels at day 14 were not different relative to placebo (mean difference for FA 5 mg, 0.17 g/dl; 98.7% CI, −0.19 to 0.52; and for FA 0.4 mg, 0.14 g/dl; 98.7% CI, −0.21 to 0.49).
Conclusions:
Concomitant use of 5 mg FA supplementation compromises the efficacy of SP for the treatment of uncomplicated malaria in pregnant women. Countries that use SP for treatment or prevention of malaria in pregnancy need to evaluate their antenatal policy on timing or dose of FA supplementation.
Editorial Commentary
Background: Health authorities worldwide recommend that pregnant women supplement their diet with folate (one of the B-vitamins), normally 0.4 mg per day. There is good evidence from systematic reviews of controlled trials that folate supplementation around conception and early in pregnancy is effective in protecting against neural tube (spine and brain) defects; continued supplementation throughout pregnancy reduces the chance of anemia in the mother. In many African countries, including Kenya, the dose of folate used is 5 mg per day, because this dose is more easily available there. In Kenya, as well as elsewhere in Africa, sulfadoxine-pyrimethamine is also given twice or more after the first trimester to treat and/or prevent malaria infection (which is more likely, and can have serious consequences, when a woman is pregnant). However, there is some evidence from laboratory experiments and clinical studies, none of which were done in pregnant women, suggesting that folate supplementation might reduce the effectiveness of sulfadoxine-pyrimethamine. Therefore, these researchers conducted a trial to test this hypothesis in 415 pregnant Kenyan women with malaria parasites in the blood but no severe symptoms. All were given standard sulfadoxine-pyrimethamine treatment. The women were randomized to receive either folate 5 mg daily, folate 0.4 mg daily, or placebo tablets for 14 days, after which all women reverted to the standard folate 5 mg tablets. The women were followed up for 28 days after the initial sulfadoxine-pyrimethamine dose, and the principal outcome the researchers were interested in was failure of sulfadoxine-pyrimethamine treatment, defined as fever and the presence of parasites in the blood (clinical failure), or failure of the parasites to clear from the blood, or their reappearance too soon (parasitological failure).
What this trial shows: In this trial, women receiving folate 5 mg daily were approximately twice as likely to fail treatment with sulfadoxine-pyrimethamine as women receiving folate 0.4 mg or placebo. (Overall, around 27% of the women receiving folate 5 mg had treatment failure during the follow-up period.) All the treatment groups had similar levels of blood hemoglobin at the end of the study. There did not seem to be any major differences in adverse events (such as premature deliveries, stillbirths, or neonatal deaths) among women taking part in the different study groups.
Strengths and limitations: The randomization procedures were appropriate, and procedures were used to blind participants and researchers to the different interventions, thereby reducing the risk of bias. Since the trial had a placebo arm, it was possible to conclude that the lower dose of folate (0.4 mg) did not significantly affect the efficacy of sulfadoxine-pyrimethamine as compared with placebo. A limitation of the study is that the length of the intervention was short, since all women reverted to standard 5 mg folate after 14 days. It is therefore not clear whether a longer trial would have shown additional risks or benefits of the different doses of folate. Finally, PCR genotyping was not done on the parasites infecting women in the trial; this procedure could have distinguished between true treatment failures and new infections (although new infections would have been unlikely within 14 days).
Contribution to the evidence: Other trials and observational studies have suggested that high doses of folate can reduce the efficacy of sulfadoxine-pyrimethamine in children and adults. However, these studies have not examined the effect in pregnant women, for whom most national bodies recommend regular folate supplementation. The results from this trial support the findings from previous studies and enable the evidence to be generalized to pregnant women. The study also found no evidence that 0.4 mg folate compromises the efficacy of sulfadoxine-pyrimethamine. The findings suggest that the lower level of folate dosing should be used in pregnancy, or that antimalarial treatments other than sulfadoxine-pyrimethamine be used.
doi:10.1371/journal.pctr.0010028
PMCID: PMC1617124  PMID: 17053829
16.  Health disparities and advertising content of women's magazines: a cross-sectional study 
BMC Public Health  2005;5:85.
Background
Disparities in health status among ethnic groups favor the Caucasian population in the United States on almost all major indicators. Disparities in exposure to health-related mass media messages may be among the environmental factors contributing to the racial and ethnic imbalance in health outcomes. This study evaluated whether variations exist in health-related advertisements and health promotion cues among lay magazines catering to Hispanic, African American and Caucasian women.
Methods
Relative and absolute assessments of all health-related advertising in 12 women's magazines over a three-month period were compared. The four highest-circulating, general-interest magazines oriented to Black women and to Hispanic women were compared with the four highest-circulating magazines aimed at a mainstream, predominantly White readership. Data were collected and analyzed in 2002 and 2003.
Results
Compared to readers of mainstream magazines, readers of African American and Hispanic magazines were exposed to proportionally fewer health-promoting advertisements and more health-diminishing advertisements. Photographs of African American role models were more often used to advertise products with negative health impact than positive health impact, while the reverse was true of Caucasian role models in the mainstream magazines.
Conclusion
To the extent that individual levels of health education and awareness can be influenced by advertising, variations in the quantity and content of health-related information among magazines read by different ethnic groups may contribute to racial disparities in health behaviors and health status.
doi:10.1186/1471-2458-5-85
PMCID: PMC1208907  PMID: 16109157
17.  Observer training for computer-aided detection of pulmonary nodules in chest radiography 
European Radiology  2012;22(8):1659-1664.
Objectives
To assess whether short-term feedback helps readers to increase their performance using computer-aided detection (CAD) for nodule detection in chest radiography.
Methods
The 140 chest radiographs (CXRs; 56 with a solitary CT-proven nodule and 84 negative controls) were divided into four subsets of 35, each read in a different order by six readers. Lesion presence, location and diagnostic confidence were scored without and with CAD (IQQA-Chest, EDDA Technology) as second reader. Readers received individual feedback after each subset. Sensitivity, specificity and area under the receiver-operating characteristic curve (AUC) were calculated for readings with and without CAD with respect to change over time and the impact of CAD.
Results
CAD stand-alone sensitivity was 59 % with 1.9 false-positives per image. Mean AUC slightly increased over time with and without CAD (0.78 vs. 0.84 with and 0.76 vs. 0.82 without CAD) but differences did not reach significance. The sensitivity increased (65 % vs. 70 % and 66 % vs. 70 %) and specificity decreased over time (79 % vs. 74 % and 80 % vs. 77 %) but no significant impact of CAD was found.
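For readers unfamiliar with the per-image metrics reported above, the sketch below shows how sensitivity, specificity and an empirical AUC can be computed from reader confidence scores; the scores and the 0.5 decision threshold are invented for illustration and are not data from this study.

```python
# Definitional sketch of sensitivity, specificity and empirical AUC from
# per-image labels and reader confidence scores (toy data, not study data).
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """Empirical AUC: probability that a random positive outscores a random negative."""
    y_true, scores = np.asarray(y_true, bool), np.asarray(scores, float)
    pos, neg = scores[y_true], scores[~y_true]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy example: 4 nodule-positive and 4 control images with confidence scores
y = [1, 1, 1, 1, 0, 0, 0, 0]
conf = [0.9, 0.8, 0.4, 0.7, 0.3, 0.6, 0.2, 0.1]
sens, spec = sensitivity_specificity(y, [c >= 0.5 for c in conf])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, AUC={auc(y, conf):.2f}")
```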
Conclusion
Short-term feedback does not increase the ability of readers to differentiate true- from false-positive candidate lesions and to use CAD more effectively.
Key Points
• Computer-aided detection (CAD) is increasingly used as an adjunct for many radiological techniques.
• Short-term feedback does not improve reader performance with CAD in chest radiography.
• Differentiation between true- and false-positive CAD findings for possible lesions of low conspicuity proves difficult.
• CAD can potentially increase reader performance for nodule detection in chest radiography.
doi:10.1007/s00330-012-2412-7
PMCID: PMC3387360  PMID: 22447377
Radiographic image interpretation; Computer-assisted; Solitary pulmonary nodule; Radiography; Lung; Education
18.  Changes in breathing while listening to read speech: the effect of reader and speech mode 
The current paper extends previous work on breathing during speech perception and provides supplementary material regarding the hypothesis that adaptation of breathing during perception “could be a basis for understanding and imitating actions performed by other people” (Paccalin and Jeannerod, 2000). The experiments were designed to test how differences in reader breathing due to speaker-specific characteristics, or those induced by changes in loudness level or speech rate, influence listener breathing. Two readers (a male and a female) were pre-recorded while reading short texts with normal and then loud speech (both readers) or slow speech (female only). These recordings were then played back to 48 female listeners. The movements of the rib cage and abdomen were analyzed for both the readers and the listeners. Breathing profiles were characterized by the movement expansion due to inhalation and the duration of the breathing cycle. We found that both loudness and speech rate affected each reader’s breathing in different ways. Listener breathing differed when listening to the male or the female reader and to the different speech modes. However, differences in listener breathing were not systematically in the same direction as reader differences. The breathing of listeners was strongly sensitive to the order of presentation of speech modes and displayed some adaptation over the time course of the experiment in some conditions. In contrast to the specific alignments of breathing previously observed in face-to-face dialog, no clear evidence for a listener–reader alignment in breathing was found in this purely auditory speech perception task. The results and methods are relevant to the question of the involvement of physiological adaptations in speech perception and to the basic mechanisms of listener–speaker coupling.
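A purely illustrative sketch of how the two breathing descriptors mentioned above (inhalation-related expansion and breathing-cycle duration) could be extracted from a sampled rib-cage signal is given below; the study's actual signal-processing pipeline is not described here, and the sampling rate, peak-detection thresholds and toy signal are all assumptions.

```python
# Hypothetical extraction of inhalation amplitude and cycle duration from a
# rib-cage displacement signal; parameters assume a roughly unit-amplitude signal.
import numpy as np
from scipy.signal import find_peaks

def breathing_profile(ribcage, fs):
    """Per-cycle inhalation amplitude and cycle duration (s) from a rib-cage signal."""
    peaks, _ = find_peaks(ribcage, height=0.5, distance=int(1.5 * fs))    # ends of inhalation
    troughs, _ = find_peaks(-ribcage, height=0.5, distance=int(1.5 * fs)) # ends of exhalation
    amplitudes, durations = [], []
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        between = troughs[(troughs > p0) & (troughs < p1)]
        if between.size:
            amplitudes.append(ribcage[p1] - ribcage[between[-1]])  # expansion due to inhalation
        durations.append((p1 - p0) / fs)                           # peak-to-peak cycle length
    return np.array(amplitudes), np.array(durations)

# toy signal: ~0.25 Hz breathing sampled at 50 Hz, with a little measurement noise
rng = np.random.default_rng(0)
fs = 50
t = np.arange(0, 60, 1 / fs)
ribcage_sig = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.standard_normal(t.size)
amp, dur = breathing_profile(ribcage_sig, fs)
print(f"mean cycle duration ~ {dur.mean():.2f} s, mean inhalation expansion ~ {amp.mean():.2f}")
```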
doi:10.3389/fpsyg.2013.00906
PMCID: PMC3856677  PMID: 24367344
breathing; respiration; speech production; speech perception; adaptation; loudness; speech rate
19.  A Prospective Comparison of 18F-FDG PET/CT and CT as Diagnostic Tools to Identify the Primary Tumor Site in Patients with Extracervical Carcinoma of Unknown Primary Site 
The Oncologist  2012;17(9):1146-1154.
The diagnostic value of 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) and conventional CT regarding the ability to detect the primary tumor site in patients with extracervical metastases from carcinoma of unknown primary site was evaluated prospectively. 18F-FDG PET/CT was not shown to provide a clear advantage.
Learning Objectives
After completing this course, the reader will be able to: compare the diagnostic performances of 18F-FDG PET/CT and conventional CT with respect to their ability to detect primary tumor sites in carcinoma of unknown primary patients with extracervical metastases; and describe the rate of identification of primary tumor sites using 18F-FDG PET/CT and conventional CT.
This article is available for continuing medical education credit at CME.TheOncologist.com
Background.
The aim of the present study was to evaluate prospectively the diagnostic value of 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) and conventional CT regarding the ability to detect the primary tumor site in patients with extracervical metastases from carcinoma of unknown primary (CUP) site.
Patients and Methods.
From January 2006 to December 2010, 136 newly diagnosed CUP patients with extracervical metastases underwent 18F-FDG PET/CT.
A standard of reference (SR) was established by a multidisciplinary team to ensure that the same set of criteria was used for classification of patients, that is, either as CUP patients or as patients with a suggested primary tumor site. The independently obtained suggestions of primary tumor sites using PET/CT and CT were correlated with the SR to reach a consensus regarding true-positive (TP), true-negative, false-negative, and false-positive results.
Results.
SR identified a primary tumor site in 66 CUP patients (48.9%). PET/CT identified 38 TP primary tumor sites and CT identified 43 TP primary tumor sites. No statistically significant differences were observed between 18F-FDG PET/CT and CT alone in regard to sensitivity, specificity, and accuracy.
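The abstract does not name the statistical test used for the paired comparison; an exact McNemar test on discordant detections is one standard choice when two modalities are read in the same patients. The sketch below uses hypothetical discordant counts (only their difference of 5 is constrained by the 38 vs. 43 TP figures above), so the resulting p-value is illustrative only.

```python
# Exact McNemar test on paired detections (hypothetical discordant counts).
from scipy.stats import binom

def mcnemar_exact(b, c):
    """b = primaries found only by modality A, c = only by modality B."""
    n, k = b + c, min(b, c)
    p = 2 * binom.cdf(k, n, 0.5)   # two-sided exact p-value on discordant pairs
    return min(p, 1.0)

# e.g. 4 primaries found only by PET/CT, 9 only by CT (placeholders, not study data)
print(f"two-sided exact McNemar p = {mcnemar_exact(4, 9):.3f}")
```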
Conclusion.
In the general CUP population with multiple extracervical metastases, 18F-FDG PET/CT does not represent a clear diagnostic advantage over CT alone regarding the ability to detect the primary tumor site.
doi:10.1634/theoncologist.2011-0449
PMCID: PMC3448407  PMID: 22711751
Carcinoma of unknown primary tumor site; CUP; CT; 18F-FDG PET/CT
20.  Glutamine: A novel approach to chemotherapy-induced toxicity 
Treatment of cancer is associated with short- and long-term side-effects. Cancer produces a state of glutamine deficiency, which is further aggravated by the toxic effects of chemotherapeutic agents, leading to increased tolerance of the tumor to chemotherapy as well as reduced tolerance of normal tissues to the side-effects of chemotherapy. This article reviews the possible role of glutamine supplementation in reducing the serious adverse events in patients treated with anticancer drugs. The literature related to the possible role of glutamine in humans with cancer and the supportive evidence from animal studies was reviewed. Searches were made and the literature was retrieved using PUBMED, MEDLINE, COCHRANE LIBRARY, CINAHL and EMBASE, with a greater emphasis on recent advances and clinical trials. Glutamine supplementation was found to protect against radiation-induced mucositis, anthracycline-induced cardiotoxicity and paclitaxel-related myalgias/arthralgias. Glutamine may prevent the neurotoxicity of paclitaxel, cisplatin, oxaliplatin, bortezomib and lenalidomide, and is beneficial in reducing the dose-limiting gastrointestinal toxic effects of irinotecan and 5-FU-induced mucositis and stomatitis. Dietary glutamine reduces the severity of the immunosuppressive effect induced by methotrexate and improves the immune status of rats recovering from chemotherapy. In patients with acute myeloid leukemia requiring parenteral nutrition, glycyl-glutamine supplementation could hasten neutrophil recovery after intensive myelosuppressive chemotherapy. Current data support the usefulness of glutamine supplementation in reducing the complications of chemotherapy; however, the paucity of clinical trials weakens clear interpretation of these findings.
doi:10.4103/0971-5851.96962
PMCID: PMC3385273  PMID: 22754203
Cancer; chemotherapy; glutamine; toxicity
21.  Editorial 
As a new year begins, it is a good time to review developments of the past twelve months and to announce some changes in GSE for 2007. Since November 2005, GSE has received 122 new manuscripts, accepted 42 articles (of which 19 were submitted before November 2005) and still has 32 manuscripts in evaluation. Thus the number of submitted manuscripts is constantly increasing while the number of published articles is maintained at around 40 per year. Published articles originate from 15 countries, with Spain leading (10), followed by the USA (5), Australia, France and Germany (4 each), the UK (3), China, Denmark and Finland (2 each) and finally, Brazil, Canada, Greece, Japan, Norway and Slovenia. Of these 42 published papers, 19 deal with methodologies of quantitative genetics and their applications to animal selection and characterization, six address the genetic diversity of populations and breeds and seven fall in the field of molecular genetics. These figures clearly show that GSE is attractive to the animal quantitative genetics community and has acquired strong experience and a solid reputation in this domain.
To answer this increasing demand, we have asked two new associate editors to join our editorial panel and are pleased that they have agreed: Denis Couvet from the Muséum National d'Histoire Naturelle (France) specialized in conservation biology and population genetics and Frédéric Farnir from the University of Liège (Belgium) whose research interests focus on the genetic and functional study of QTL involved in agricultural traits.
In this editorial note, we also wish to inform you about our misfortune with the calculation of the 2005 Impact Factor published in June 2006 in the "Journal Citation Reports" by Thomson Scientific. Based on our calculations, the published 2005 IF of 1.62 turned out to be erroneous and to the disfavour of GSE, which Thomson Scientific has acknowledged. The true 2005 IF is 1.783 and thus GSE occupies the 5th position in the section "Agriculture, Dairy & Animal Science" and the 82nd in the section "Genetics & Heredity". Corrections were made in the JCR in October 2006.
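For context, the impact factor follows the standard JCR definition: citations received in year Y to items published in the two preceding years, divided by the number of citable items published in those years. The counts in the sketch below are placeholders chosen only so that the quotient reproduces the corrected value of 1.783; the editorial does not report the underlying counts.

```python
# Standard JCR impact-factor formula (placeholder counts, not GSE's actual figures).
def impact_factor(citations_in_year, citable_items_prev_two_years):
    """Citations in year Y to items from Y-1 and Y-2, divided by citable items from Y-1 and Y-2."""
    return citations_in_year / citable_items_prev_two_years

print(f"{impact_factor(148, 83):.3f}")  # 148/83 = 1.783 (illustrative counts)
```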
Finally, GSE and EDP Sciences wish to keep up with the rapid changes in publication systems, i.e. the advent of "Open Access" publishing, to make scientific research widely and freely available. Thus, we are happy to announce that, as a first step in this direction, GSE now gives authors the possibility to choose how they want their paper to be published by offering the "Open Choice" option. With this option, authors can have their accepted articles made available to all interested readers (subscribers or non-subscribers) as soon as they are online, in exchange for a basic fee, i.e. 550 euros for papers published in 2007 (without VAT).
With all this news, we offer the collaborators, authors and readers of GSE our season's greetings and best wishes for a successful and productive New Year 2007.
doi:10.1186/1297-9686-39-1-1
PMCID: PMC3400394
22.  Inter- and intraradiologist variability in the BI-RADS assessment and breast density categories for screening mammograms 
The British Journal of Radiology  2012;85(1019):1465-1470.
Objective
The aim of this study was to evaluate reader variability in screening mammograms according to the American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) assessment and breast density categories.
Methods
A stratified random sample of 100 mammograms was selected from a population-based breast cancer screening programme in Barcelona, Spain: 13 with histopathologically confirmed breast cancer, 51 with true-negative and 36 with false-positive results. 21 expert radiologists from radiological units of breast cancer screening programmes in Catalonia, Spain, reviewed the mammography images twice within a 6-month interval. The readers described each mammogram using the BI-RADS assessment and breast density categories. Inter- and intraradiologist agreement was assessed using the percentage of concordance and the kappa (κ) statistic.
Results
Fair interobserver agreement was observed for the BI-RADS assessment [κ=0.37, 95% confidence interval (CI) 0.36–0.38]. When the categories were collapsed in terms of whether additional evaluation was required (Categories III, 0, IV, V) or not (I and II), moderate agreement was found (κ=0.53, 95% CI 0.52–0.54). Intra-observer agreement for BI-RADS assessment was moderate using all categories (κ=0.53, 95% CI 0.50–0.55) and substantial on recall (κ=0.66, 95% CI 0.63–0.70). Regarding breast density, inter- and intraradiologist agreement was substantial (κ=0.73, 95% CI 0.72–0.74 and κ=0.69, 95% CI 0.68–0.70, respectively).
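As a reminder of how the agreement statistic above is defined, the sketch below computes an unweighted Cohen's kappa for two readers; the ratings are invented, and the study itself pooled 21 readers (and may have used a weighted or multi-reader variant), so this is a definitional illustration only.

```python
# Unweighted Cohen's kappa for two readers assigning categories to the same cases.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / n**2
    return (observed - expected) / (1 - expected)

# made-up BI-RADS-style categories for 8 mammograms read by two readers
reader1 = ["I", "II", "0", "IV", "II", "I", "III", "II"]
reader2 = ["I", "II", "II", "IV", "II", "I", "0", "II"]
print(f"kappa = {cohens_kappa(reader1, reader2):.2f}")
```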
Conclusion
We observed a substantial intra-observer agreement in the BI-RADS assessment but only moderate interobserver agreement. Both inter- and intra-observer agreement in mammographic interpretation of breast density was substantial.
Advances in knowledge
Educational efforts should be made to decrease radiologists' variability in BI-RADS assessment interpretation in population-based breast screening programmes.
doi:10.1259/bjr/21256379
PMCID: PMC3500788  PMID: 22993385
23.  Ghost Authorship in Industry-Initiated Randomised Trials 
PLoS Medicine  2007;4(1):e19.
Background
Ghost authorship, the failure to name, as an author, an individual who has made substantial contributions to an article, may result in lack of accountability. The prevalence and nature of ghost authorship in industry-initiated randomised trials is not known.
Methods and Findings
We conducted a cohort study comparing protocols and corresponding publications for industry-initiated trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg in 1994–1995. We defined ghost authorship as present if individuals who wrote the trial protocol, performed the statistical analyses, or wrote the manuscript, were not listed as authors of the publication, or as members of a study group or writing committee, or in an acknowledgment. We identified 44 industry-initiated trials. We did not find any trial protocol or publication that stated explicitly that the clinical study report or the manuscript was to be written or was written by the clinical investigators, and none of the protocols stated that clinical investigators were to be involved with data analysis. We found evidence of ghost authorship for 33 trials (75%; 95% confidence interval 60%–87%). The prevalence of ghost authorship was increased to 91% (40 of 44 articles; 95% confidence interval 78%–98%) when we included cases where a person qualifying for authorship was acknowledged rather than appearing as an author. In 31 trials, the ghost authors we identified were statisticians. It is likely that we have overlooked some ghost authors, as we had very limited information to identify the possible omission of other individuals who would have qualified as authors.
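The confidence intervals quoted above are consistent with exact (Clopper-Pearson) binomial intervals for 33/44 and 40/44; the sketch below recomputes them under that assumption, which is not stated explicitly in the abstract.

```python
# Exact (Clopper-Pearson) binomial confidence intervals for the two prevalences above.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

for k in (33, 40):
    lo, hi = clopper_pearson(k, 44)
    print(f"{k}/44 = {k/44:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
```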
Conclusions
Ghost authorship in industry-initiated trials is very common. Its prevalence could be considerably reduced, and transparency improved, if existing guidelines were followed, and if protocols were publicly available.
Of 44 industry-initiated trials, there was evidence of ghost authorship in 33, increasing to 40 when a person qualifying for authorship was acknowledged rather than appearing as an author.
Editors' Summary
Background.
Original scientific findings are usually published in the form of a “paper”, whether it is actually distributed on paper, or circulated via the internet, as this one is. Papers are normally prepared by a group of researchers who did the research and are then listed at the top of the article. These authors therefore take responsibility for the integrity of the results and interpretation of them. However, many people are worried that sometimes the author list on the paper does not tell the true story of who was involved. In particular, for clinical research, case histories and previous research have suggested that “ghost authorship” is commonplace. Ghost authors are people who were involved in some way in the research study, or writing the paper, but who have been left off the final author list. This might happen because the study “looks” more credible if the true authors (for example, company employees or freelance medical writers) are not revealed. This practice might hide competing interests that readers should be aware of, and has therefore been condemned by academics, groups of editors, and some pharmaceutical companies.
Why Was This Study Done?
This group of researchers wanted to get an idea of how often ghost authorship happened in medical research done by companies. Previous studies looking into this used surveys, whereby the researchers would write to one author on each of a group of papers to ask whether anyone else had been involved in the work but who was not listed on the paper. These sorts of studies typically underestimate the rate of ghost authorship, because the main author might not want to admit what had been going on. However, the researchers here managed to get access to trial protocols (documents setting out the plans for future research studies), which gave them a way to investigate ghost authorship.
What Did the Researchers Do and Find?
In order to investigate the frequency and type of ghost authorship, these researchers identified every trial which was approved between 1994 and 1995 by the ethics committees of Copenhagen and Frederiksberg in Denmark. Then they winnowed this group down to include only the trials that were sponsored by industry (pharmaceutical companies and others), and only those trials that were finished and published. The protocols for each trial were obtained from the ethics committees and the researchers then matched up each protocol with its corresponding paper. Then, they compared names which appeared in the protocol against names appearing on the eventual paper, either on the author list or acknowledged elsewhere in the paper as being involved. The researchers ended up studying 44 trials. For 33 of these (75% of them) they found some evidence of ghost authorship, in that people were identified as having written the protocol or who had been involved in doing statistical analyses or writing the manuscript, but did not end up listed in the manuscript. If the definition of ghost authorship was broadened to include people qualifying for authorship who were mentioned in the acknowledgements but not the author list, the researchers' estimate went up to 91%, that is 40 of the 44 trials. For most of the trials with missing authors, the ghost was a statistician (the person who analyzes the trial data).
What Do These Findings Mean?
In this study, the researchers found that ghost authorship was very common in papers published in medical journals (this study covered a broad range of peer-reviewed journals in many medical disciplines). The method used in this paper seems more reliable than using surveys to work out how often ghost authorship happens. The researchers aimed to define authorship using the policies set out by a group called the International Committee of Medical Journal Editors (ICMJE), and the findings here suggest that the ICMJE's standards for authorship are very often ignored. This means that people who read the published paper cannot always accurately judge or trust the information presented within it, and competing interests may be hidden. The researchers here suggest that protocols should be made publicly available so that everyone can see what trials are planned and who is involved in conducting them. The findings also suggest that journals should not only list the authors of each paper but describe what each author has done, so that the published information accurately reflects what has been carried out.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040019.
Read the Perspective by Liz Wager, which discusses these findings in more depth
The International Committee of Medical Journal Editors (ICMJE) is a group of general medical journal editors who have produced general guidelines for biomedical manuscripts; their definition of authorship is also described
The Committee on Publication Ethics is a forum for editors of peer-reviewed journals to discuss issues related to the integrity of the scientific record; the Web site lists anonymized problems and the committee's advice, not just regarding authorship, but other types of problems as well
Good Publication Practice for Pharmaceutical Companies outlines common standards for publication of industry-sponsored medical research, and some pharmaceutical companies have agreed to these
doi:10.1371/journal.pmed.0040019
PMCID: PMC1769411  PMID: 17227134
24.  The Study of Observer Variation in the Radiological Classification of Pneumoconiosis 
In a long-term investigation such as the National Coal Board's Pneumoconiosis Field Research (P.F.R.), it is essential to establish satisfactory and stable procedures for making the necessary observations and measurements. It is equally important regularly to apply suitable methods of checking the accuracy and consistency of the various observations and measurements. One aspect of vital importance in the P.F.R. is the classification of the series of chest radiographs taken, at intervals, of all the men under observation. This is inevitably a subjective process, and (as with other similar fields of work) it is desirable to obtain some understanding of the basic process behind the operation. This can usefully be done by the help of “models” designed to describe the process, if necessary in simplified terms. The problem of the radiological classification of pneumoconiosis has been studied hitherto in terms of coefficients of disagreement (inter-observer variation) and inconsistency (intra-observer variation), but for various reasons the method was not considered entirely satisfactory. New methods of approach were therefore developed for studying the performance of the two doctors responsible for the film reading in the Research, and two distinct “models” were derived. The advantages and disadvantages of each are described in the paper, together with the applications of the two models to the study of some of the problems arising in the course of the investigation.
The first model is based on the assumption that if a film is selected at random from a batch representing a whole colliery population, and that if the film is of “true” category i, the chance of its being read as another category (j) is a constant, Pij, which depends upon the observer concerned, the particular batch of films being read, and the values of i and j. This model enables the performance of the readers to be monitored satisfactorily, and it has also been used to investigate different methods for arriving at an agreed, or “definitive”, assessment of radiological abnormality. The Pij model suffers from the disadvantage of applying only to “average” films, and the assumptions made are such that it manifestly does not provide an entirely realistic representation of the reading process on any particular film.
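A minimal simulation of the Pij model as described above is sketched below; the misreading matrix and the category mix of the simulated colliery population are illustrative values, not estimates from the P.F.R. data.

```python
# Simulation of the "Pij" model: a film of true category i is read as
# category j with fixed probability Pij (illustrative matrix, not fitted values).
import numpy as np

rng = np.random.default_rng(0)
categories = [0, 1, 2, 3]             # simple pneumoconiosis categories
P = np.array([                        # rows: true category i, cols: read category j
    [0.85, 0.13, 0.02, 0.00],
    [0.15, 0.70, 0.13, 0.02],
    [0.02, 0.15, 0.70, 0.13],
    [0.00, 0.03, 0.15, 0.82],
])

true = rng.choice(categories, size=1000, p=[0.6, 0.2, 0.12, 0.08])  # a colliery "population"
read = np.array([rng.choice(categories, p=P[i]) for i in true])
print(f"proportion read as their true category: {(read == true).mean():.2f}")
```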
The second “improved” model was therefore developed to overcome this criticism. Briefly, it is considered that each film is representative of a unique degree of abnormality, located on a continuum, or abnormality scale, which covers the whole range of simple pneumoconiosis. The scale of abnormality is then chosen in such a way that, whatever the true degree of abnormality of the film, the observer's readings will be normally distributed about the true value with constant bias and variability at all points along the scale. The very large number of readings available has been analysed to determine the optimum positions of the category boundaries on the abnormality scale and in this way the scale has been unambiguously defined. The model enables the routine reading standards to be monitored, and it has also been used to investigate the underlying distribution of abnormality at individual collieries. Its chief disadvantage is the extensive computational work required.
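The continuum model can likewise be sketched as a small simulation: each film has a true abnormality value on a latent scale, a reading is drawn from a normal distribution about that value with a constant bias and spread, and categories are assigned by fixed boundaries. The bias, spread and boundary positions below are assumptions for illustration, not the values estimated in the paper.

```python
# Latent-continuum model: reading ~ Normal(true abnormality + bias, sigma),
# with categories defined by fixed cut-points on the abnormality scale.
import numpy as np

rng = np.random.default_rng(1)
boundaries = np.array([1.0, 2.0, 3.0])          # cut-points between categories 0/1, 1/2, 2/3

def read_film(x_true, bias=0.1, sigma=0.35, n_readings=1):
    readings = rng.normal(x_true + bias, sigma, size=n_readings)
    return np.searchsorted(boundaries, readings)  # category index 0..3

x = rng.uniform(0.0, 4.0, size=2000)             # underlying abnormality of 2000 films
cat_reader = np.concatenate([read_film(xi) for xi in x])
cat_true = np.searchsorted(boundaries, x)
print(f"agreement with the 'true' category: {(cat_reader == cat_true).mean():.2f}")
```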
The “fit” of both models to the data collected in the Research is shown to be satisfactory and on balance it appears that both have applications in this field of study. The method chosen in any given circumstance will depend upon the particular requirement and the facilities available for computational work.
PMCID: PMC1038082  PMID: 13698433
25.  Semiautomated technique for identification of subgingival isolates. 
Journal of Clinical Microbiology  1984;19(5):599-605.
A semiautomated approach for the characterization of subgingival bacterial isolates which economizes on media preparation, inoculation, reading, recording, and interpretation of results was tested. Test ingredients were added to a basal medium consisting of Mycoplasma broth supplemented with 5 micrograms of hemin, 0.5 mg of NaHCO3, and 0.5 mg of L-cysteine per ml. Sterile test media were aseptically dispensed into wells of sterile microtiter plates with a MIC 2000 dispenser. Inocula were grown in broth or scraped from agar plates, dispersed, and inoculated with a MIC 2000 inoculator. After 2 to 4 days of incubation, the optical density of growth was determined with an Artek 210 vertical beam reader at 580 nm and stored on a floppy disk. Reagents were added to each well, and the changes in optical density were determined. Thresholds for positive reactions were determined after extensive preliminary studies for each test. The tests were run in duplicate on each plate and interpreted with an Artek vertical beam reader. Tests that were run in this system included: fermentation of carbohydrates, decarboxylase reactions, reduction of nitrate and nitrite, ammonia production, hydrolysis of esculin, growth in the presence of inhibitory or stimulatory substances, and indole production. Approximately 80% of all isolates from subgingival samples could be characterized by this technique. Comparisons were made between the semiautomated and conventional identification techniques. Overall reproducibility for 2,980 strains by the semiautomated and conventional techniques was 95% and 90%, respectively. There was an 86% similarity of results by the semiautomated and conventional methods. The semiautomated technique was more rapid, less expensive, and as reproducible as the conventional method of identification.
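A hypothetical sketch of the well-interpretation logic described above (per-test optical-density thresholds applied to duplicate wells) is given below; the test names, threshold values and readings are invented, and the handling of disagreeing duplicates is an assumption rather than the published procedure.

```python
# Hypothetical threshold-based interpretation of duplicate microtiter wells.
THRESHOLDS = {"glucose_fermentation": 0.15, "esculin_hydrolysis": 0.10, "indole": 0.08}

def interpret(test, od_change_a, od_change_b):
    calls = [od >= THRESHOLDS[test] for od in (od_change_a, od_change_b)]
    if calls[0] == calls[1]:
        return "positive" if calls[0] else "negative"
    return "repeat"   # duplicates disagree (assumed handling, not from the paper)

print(interpret("glucose_fermentation", 0.22, 0.19))  # positive
print(interpret("indole", 0.05, 0.12))                # repeat
```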
PMCID: PMC271139  PMID: 6376536
