1.  Computer supported collaborative learning in a clerkship: an exploratory study on the relation of discussion activity and revision of critical appraisal papers 
BMC Medical Education  2012;12:79.
Background
Medical students in clerkship are continuously confronted with real and relevant patient problems. To support clinical problem solving skills, students perform a Critical Appraisal of a Topic (CAT) task, often resulting in a paper. Because such a paper may contain errors, students could profit from discussion with peers, leading to paper revision. Active peer discussion in a Computer Supported Collaborative Learning (CSCL) environment is associated with positive perceptions among medical students of subjective knowledge improvement, and high student activity during CSCL discussions has been shown to yield more task-focussed discussion, reflecting higher levels of knowledge construction. However, it remains unclear whether high discussion activity influences students’ decisions to revise their CAT papers. The aim of this research is to examine whether students who revise their critical appraisal papers after discussion in a CSCL environment show more task-focussed activity and discuss critical appraisal topics more intensively than students who do not revise their papers.
Methods
Forty-seven medical students, stratified into subgroups, participated in a structured asynchronous online discussion of individually written CAT papers on self-selected clinical problems. The discussion was structured around three critical appraisal topics. After the discussion, the students could revise their papers. For analysis purposes, all students’ postings were blinded and analysed by the investigator, who was unaware of student characteristics and of whether or not a paper had been revised. Postings were counted and analysed by an independent rater and assigned to one of three categories: outside activity, non-task-focussed activity, or task-focussed activity. Additionally, postings were assigned to one of the three critical appraisal topics. Analysis results were compared between revised and unrevised papers.
Results
Twenty-four papers (51.6%) were revised after the online discussion. The discussions of the revised papers showed significantly higher numbers of postings, more task-focussed activities, and more postings about two of the three critical appraisal topics: “appraisal of the selected article(s)” and “relevant conclusion regarding the clinical problem”.
Conclusion
A CSCL environment can support medical students in the execution and critical appraisal of authentic tasks in the clinical workplace. Revision of CAT papers appears to be related to discussion activity, more specifically to high task-focussed activity on the critical appraisal topics.
doi:10.1186/1472-6920-12-79
PMCID: PMC3507639  PMID: 22906218
2.  Ghost Authorship in Industry-Initiated Randomised Trials 
PLoS Medicine  2007;4(1):e19.
Background
Ghost authorship, the failure to name, as an author, an individual who has made substantial contributions to an article, may result in lack of accountability. The prevalence and nature of ghost authorship in industry-initiated randomised trials is not known.
Methods and Findings
We conducted a cohort study comparing protocols and corresponding publications for industry-initiated trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg in 1994–1995. We defined ghost authorship as present if individuals who wrote the trial protocol, performed the statistical analyses, or wrote the manuscript, were not listed as authors of the publication, or as members of a study group or writing committee, or in an acknowledgment. We identified 44 industry-initiated trials. We did not find any trial protocol or publication that stated explicitly that the clinical study report or the manuscript was to be written or was written by the clinical investigators, and none of the protocols stated that clinical investigators were to be involved with data analysis. We found evidence of ghost authorship for 33 trials (75%; 95% confidence interval 60%–87%). The prevalence of ghost authorship was increased to 91% (40 of 44 articles; 95% confidence interval 78%–98%) when we included cases where a person qualifying for authorship was acknowledged rather than appearing as an author. In 31 trials, the ghost authors we identified were statisticians. It is likely that we have overlooked some ghost authors, as we had very limited information to identify the possible omission of other individuals who would have qualified as authors.
Conclusions
Ghost authorship in industry-initiated trials is very common. Its prevalence could be considerably reduced, and transparency improved, if existing guidelines were followed, and if protocols were publicly available.
Of 44 industry-initiated trials, there was evidence of ghost authorship in 33, increasing to 40 when a person qualifying for authorship was acknowledged rather than appearing as an author.
Editors' Summary
Background.
Original scientific findings are usually published in the form of a “paper”, whether it is actually distributed on paper or circulated via the internet, as this one is. Papers are normally prepared by the group of researchers who did the research, and these researchers are listed at the top of the article as its authors. The authors therefore take responsibility for the integrity of the results and their interpretation. However, many people are worried that the author list on a paper does not always tell the true story of who was involved. In particular, for clinical research, case histories and previous research have suggested that “ghost authorship” is commonplace. Ghost authors are people who were involved in some way in the research study, or in writing the paper, but who have been left off the final author list. This might happen because the study “looks” more credible if the true authors (for example, company employees or freelance medical writers) are not revealed. This practice might hide competing interests that readers should be aware of, and it has therefore been condemned by academics, groups of editors, and some pharmaceutical companies.
Why Was This Study Done?
This group of researchers wanted to get an idea of how often ghost authorship happens in medical research done by companies. Previous studies of this question used surveys, in which the researchers would write to one author of each of a group of papers to ask whether anyone else had been involved in the work but had not been listed on the paper. These sorts of studies typically underestimate the rate of ghost authorship, because the contacted author might not want to admit what had been going on. However, the researchers here managed to get access to trial protocols (documents setting out the plans for future research studies), which gave them a more direct way to investigate ghost authorship.
What Did the Researchers Do and Find?
In order to investigate the frequency and type of ghost authorship, these researchers identified every trial that was approved in 1994 and 1995 by the ethics committees of Copenhagen and Frederiksberg in Denmark. They then winnowed this group down to include only the trials that were sponsored by industry (pharmaceutical companies and others), and only those trials that were finished and published. The protocols for each trial were obtained from the ethics committees, and the researchers then matched up each protocol with its corresponding paper. They then compared the names appearing in the protocol against the names appearing on the eventual paper, either in the author list or acknowledged elsewhere in the paper as being involved. The researchers ended up studying 44 trials. For 33 of these (75% of them) they found some evidence of ghost authorship, in that people who had written the protocol, performed the statistical analyses, or written the manuscript did not end up listed as authors or acknowledged in the publication. When the definition of ghost authorship was broadened to include people qualifying for authorship who were mentioned in the acknowledgements but not in the author list, the researchers' estimate went up to 91%, that is, 40 of the 44 trials. For most of the trials with missing authors, the ghost was a statistician (the person who analyzes the trial data).
What Do These Findings Mean?
In this study, the researchers found that ghost authorship was very common in papers published in medical journals (this study covered a broad range of peer-reviewed journals in many medical disciplines). The method used in this paper seems more reliable than using surveys to work out how often ghost authorship happens. The researchers aimed to define authorship using the policies set out by a group called the International Committee of Medical Journal Editors (ICMJE), and the findings here suggest that the ICMJE's standards for authorship are very often ignored. This means that people who read the published paper cannot always accurately judge or trust the information presented within it, and competing interests may be hidden. The researchers here suggest that protocols should be made publicly available so that everyone can see what trials are planned and who is involved in conducting them. The findings also suggest that journals should not only list the authors of each paper but describe what each author has done, so that the published information accurately reflects what has been carried out.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040019.
Read the Perspective by Liz Wager, which discusses these findings in more depth
The International Committee of Medical Journal Editors (ICMJE) is a group of general medical journal editors who have produced general guidelines for biomedical manuscripts; their definition of authorship is also described
The Committee on Publication Ethics is a forum for editors of peer-reviewed journals to discuss issues related to the integrity of the scientific record; the Web site lists anonymized problems and the committee's advice, not just regarding authorship, but other types of problems as well
Good Publication Practice for Pharmaceutical Companies outlines common standards for publication of industry-sponsored medical research, and some pharmaceutical companies have agreed to these
doi:10.1371/journal.pmed.0040019
PMCID: PMC1769411  PMID: 17227134
3.  Internet publicity of data problems in the bioscience literature correlates with enhanced corrective action 
PeerJ  2014;2:e313.
Several online forums exist to facilitate open and/or anonymous discussion of the peer-reviewed scientific literature. Data integrity is a common discussion topic, and it is widely assumed that publicity surrounding such matters will accelerate correction of the scientific record. This study aimed to test this assumption by examining a collection of 497 papers for which data integrity had been questioned either in public or in private. The papers were accordingly divided into two sub-sets: a public set of 274 papers discussed online, and a private set of the remaining 223 papers, which were not publicized. The sources of alleged data problems, the criteria for defining problem data, and the communication of problems to journals and appropriate institutions were similar between the sets. The number of laboratory groups represented in each set was also similar (75 in public, 62 in private), as was the number of problem papers per laboratory group (3.65 in public, 3.54 in private). Over a study period of 18 months, public papers were retracted 6.5-fold more often, and corrected 7.7-fold more often, than those in the private set. Parsing the results by laboratory group, 28 laboratory groups in the public set had papers that received corrective action, versus 6 laboratory groups in the private set. For laboratory groups in the public set with corrected/retracted papers, the fraction of their flagged papers acted on was 62%, whereas in the private set this fraction was 27%. Such clustering of actions suggests a pattern in which correction/retraction of one paper from a group correlates with more corrections/retractions from the same group, with this pattern being stronger in the public set. It is therefore concluded that online discussion enhances levels of corrective action in the scientific literature. Nevertheless, anecdotal discussion reveals substantial room for improvement in the handling of such matters.
doi:10.7717/peerj.313
PMCID: PMC3994617  PMID: 24765564
Retraction; Correction; Erratum; Image manipulation; Social media; Science publishing
4.  What is a problem? 
Poiesis & Praxis  2011;7(4):249-274.
Among others, the term “problem” plays a major role in the various attempts to characterize interdisciplinarity or transdisciplinarity, as used synonymously in this paper. Interdisciplinarity (ID) is regarded as “problem solving among science, technology and society” and as “problem orientation beyond disciplinary constraints” (cf. Frodeman et al.: The Oxford Handbook of Interdisciplinarity. Oxford University Press, Oxford, 2010). The point of departure of this paper is that the discourse and practice of ID have problems with the “problem”. The objective here is to shed some light on the vague notion of “problem” in order to advocate a specific type of interdisciplinarity: problem-oriented interdisciplinarity. The outline is as follows: Taking an ex negativo approach, I will show what problem-oriented ID does not mean. Using references to well-established distinctions in philosophy of science, I will show three other types of ID that should not be placed under the umbrella term “problem-oriented ID”: object-oriented ID (“ontology”), theory-oriented ID (epistemology), and method-oriented ID (methodology). Different philosophical thought traditions can be related to these distinguishable meanings. I will then clarify the notion of “problem” by looking at three systematic elements: an undesired (initial) state, a desired (goal) state, and the barriers in getting from the one to the other. These three elements include three related kinds of knowledge: systems, target, and transformation knowledge. This paper elaborates further methodological and epistemological elements of problem-oriented ID. It concludes by stressing that problem-oriented ID is the most needed as well as the most challenging type of ID.
doi:10.1007/s10202-011-0091-0
PMCID: PMC3136692  PMID: 21874128
5.  The burgeoning field of transdisciplinary adaptation research in Quebec (1998–): a climate change-related public health narrative 
This paper presents a public health narrative on Quebec’s new climatic conditions and human health, and describes the transdisciplinary nature of the climate change adaptation research currently being adopted in Quebec, characterized by the three phases of problem identification, problem investigation, and problem transformation. A transdisciplinary approach is essential for dealing with complex, ill-defined problems concerning human–environment interactions (for example, climate change), because it allows joint research, collective leadership, complex collaborations, and significant exchanges among scientists, decision makers, and knowledge users. Such an approach is widely supported in theory but has proved extremely difficult to implement in practice, and those who attempt it have met with heavy resistance, succeeding only when they find the occasional opportunity within institutional or social contexts. In this paper we narrate the ongoing struggle involved in tackling the negative effects of climate change in multi-actor contexts at local and regional levels, a struggle that began in a quiet way in 1998. The paper describes how public health adaptation research is supporting transdisciplinary action and implementation while also preparing for the future, and how this interaction to tackle a life-world problem (adaptation of the Quebec public health sector to climate change) in multi-actor contexts has progressively been established over the last 13 years. The first of the two sections introduces the social context of a Quebec undergoing climate change; current climatic conditions and expected changes are described, along with the attendant health risks for the Quebec population. The second section addresses the scientific, institutional, and normative dimensions of the problem. It corresponds to a “public health narrative” presented in three phases: (1) problem identification (1998–2002), beginning in northern Quebec; (2) problem investigation (2002–2006), in which the issues are successively explored, understood, and conceptualized for all of Quebec; and (3) problem transformation (2006–2009), which discusses major interactions among the stakeholders and the presentation of an Action Plan by a central actor, the Quebec government, in alliance with other stakeholders. In conclusion, we underline the importance, in the current context, of providing for sustained transdisciplinary adaptation to climatic change. This paper should be helpful for (1) public health professionals confronted with establishing a transdisciplinary approach to a real-world problem other than climate change, (2) professionals in other sectors (such as public safety or the built environment) confronted with climate change, who wish to implement transdisciplinary adaptive interventions and/or research, and (3) knowledge users (public and private actors; nongovernment organizations; citizens) from other contexts, environments, and sectors who wish to promote complex collaborations (with us or not), collective leadership, and “transfrontier knowledge-to-action” for implementing climate change-related adaptation measures.
doi:10.2147/JMDH.S14294
PMCID: PMC3180480  PMID: 21966228
climate change; impacts; adaptation; public health; Quebec; Canada; Arctic; intersectoral approach; complex collaborations; collective leadership; transfrontier knowledge- to-action; narrative; storytelling; success story
6.  Focusing on the Golden Ball Metaheuristic: An Extended Study on a Wider Set of Problems 
The Scientific World Journal  2014;2014:563259.
Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community, and a large number of techniques of this kind can be found in the literature, including recently proposed ones such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested on two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queens problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.
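To make the multiple-population idea concrete, here is a minimal sketch of a GB-style search for the traveling salesman problem, written in Python. It is an illustration only: the 'teams', 2-opt 'training', and round-robin 'transfers' below are an assumed reading of the soccer metaphor, not the published GB algorithm, and all names and parameters are hypothetical.

import random

def tour_length(tour, dist):
    # Total length of the closed tour over a symmetric distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def train(tour, dist):
    # 'Training' phase: one pass of 2-opt-style local improvement.
    best = tour[:]
    for i in range(1, len(best) - 1):
        for j in range(i + 1, len(best)):
            cand = best[:i] + best[i:j][::-1] + best[j:]
            if tour_length(cand, dist) < tour_length(best, dist):
                best = cand
    return best

def golden_ball_sketch(dist, n_teams=4, team_size=5, seasons=20, seed=0):
    # Several independent 'teams' of candidate tours, improved locally,
    # with a 'transfer season' that swaps players between teams.
    rng = random.Random(seed)
    n = len(dist)
    teams = [[rng.sample(range(n), n) for _ in range(team_size)]
             for _ in range(n_teams)]
    for _ in range(seasons):
        teams = [[train(p, dist) for p in team] for team in teams]
        for a in range(n_teams):
            b = (a + 1) % n_teams
            teams[a].sort(key=lambda t: tour_length(t, dist))
            teams[b].sort(key=lambda t: tour_length(t, dist))
            # best player of team b moves to team a, worst of team a moves to b
            teams[a][-1], teams[b][0] = teams[b][0], teams[a][-1]
    best = min((p for team in teams for p in team),
               key=lambda t: tour_length(t, dist))
    return best, tour_length(best, dist)

The published GB additionally models matches and coach-specific training strategies; the skeleton above keeps only the population structure that distinguishes this family of methods from a single-population genetic algorithm.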
doi:10.1155/2014/563259
PMCID: PMC4137618  PMID: 25165742
7.  Smartphone Versus Pen-and-Paper Data Collection of Infant Feeding Practices in Rural China 
Background
Maternal, Newborn, and Child Health (MNCH) household survey data are collected mainly with pen-and-paper. Smartphone data collection may have advantages over pen-and-paper, but little evidence exists on how they compare.
Objective
To compare smartphone data collection with the use of pen-and-paper for the infant feeding practices section of the MNCH household survey. We compared the two data collection methods for differences in data quality (data recording, data entry, open-ended answers, and interrater reliability), time consumption, costs, interviewers’ perceptions, and problems encountered.
Methods
We recruited mothers of infants aged 0 to 23 months in four village clinics in Zhaozhou Township, Zhao County, Hebei Province, China. We randomly assigned mothers to a smartphone or a pen-and-paper questionnaire group. A pair of interviewers simultaneously questioned mothers on infant feeding practices, each using the same method (either smartphone or pen-and-paper).
Results
We enrolled 120 mothers, and all completed the study. Data recording errors were prevented in the smartphone questionnaire. In the 120 pen-and-paper questionnaires (60 mothers), we found 192 data recording errors in 55 questionnaires. There was no significant difference in recording variation between the groups for the questionnaire pairs (P = .32) or variables (P = .45). The smartphone questionnaires were automatically uploaded and no data entry errors occurred. We found that even after double data entry of the pen-and-paper questionnaires, 65.0% (78/120) of the questionnaires did not match and needed to be checked. The mean duration of an interview was 10.22 (SD 2.17) minutes for the smartphone method and 10.83 (SD 2.94) minutes for the pen-and-paper method, which was not significantly different between the methods (P = .19). The mean costs per questionnaire were higher for the smartphone questionnaire (¥143, equal to US $23 at the exchange rate on April 24, 2012) than for the pen-and-paper questionnaire (¥83, equal to US $13). The smartphone method was acceptable to interviewers, and after a pilot test we encountered only minor problems (eg, the system halted for a few seconds or it shut off), which did not result in data loss.
Conclusions
This is the first study showing that smartphones can be successfully used for household data collection on infant feeding in rural China. Using smartphones for data collection, compared with pen-and-paper, eliminated data recording and entry errors, had similar interrater reliability, and took an equal amount of time per interview. While the costs for the smartphone method were higher than those for the pen-and-paper method in our small-scale survey, the costs of the two methods would be similar in a large-scale survey. Smartphone data collection should be further evaluated for other surveys and on a larger scale to deliver maximum benefits in China and elsewhere.
doi:10.2196/jmir.2183
PMCID: PMC3510690  PMID: 22989894
Data collection; health survey; questionnaires; infant feeding; smartphone
8.  Selection in Reported Epidemiological Risks: An Empirical Assessment 
PLoS Medicine  2007;4(3):e79.
Background
Epidemiological studies may be subject to selective reporting, but empirical evidence thereof is limited. We empirically evaluated the extent of selection of significant results and large effect sizes in a large sample of recent articles.
Methods and Findings
We evaluated 389 articles of epidemiological studies that reported, in their respective abstracts, at least one relative risk for a continuous risk factor in contrasts based on median, tertile, quartile, or quintile categorizations. We examined the proportion and correlates of reporting statistically significant and nonsignificant results in the abstract and whether the magnitude of the relative risks presented (coined to be consistently ≥1.00) differs depending on the type of contrast used for the risk factor. In 342 articles (87.9%), ≥1 statistically significant relative risk was reported in the abstract, while only 169 articles (43.4%) reported ≥1 statistically nonsignificant relative risk in the abstract. Reporting of statistically significant results was more common with structured abstracts, and was less common in US-based studies and in cancer outcomes. Among 50 randomly selected articles in which the full text was examined, a median of nine (interquartile range 5–16) statistically significant and six (interquartile range 3–16) statistically nonsignificant relative risks were presented (p = 0.25). Paradoxically, the smallest presented relative risks were based on the contrasts of extreme quintiles; on average, the relative risk magnitude was 1.41-, 1.42-, and 1.36-fold larger in contrasts of extreme quartiles, extreme tertiles, and above-versus-below median values, respectively (p < 0.001).
Conclusions
Published epidemiological investigations almost universally highlight significant associations between risk factors and outcomes. For continuous risk factors, investigators selectively present contrasts between more extreme groups when the underlying relative risks are inherently lower.
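The statistical mechanics behind this conclusion are easy to reproduce. The toy simulation below (all parameters assumed for illustration) fixes one weak exposure–outcome association and shows that the estimated relative risk grows mechanically as the contrast moves from an above-vs-below-median split towards extreme quintiles:

import numpy as np

rng = np.random.default_rng(42)
n = 200_000
x = rng.normal(size=n)                    # continuous risk factor
p = 1 / (1 + np.exp(-(-3.0 + 0.3 * x)))  # one fixed, weak association
y = rng.random(n) < p                     # binary outcome

def rr_extreme_groups(x, y, k):
    # Risk ratio comparing the top 1/k of the exposure with the bottom 1/k.
    lo, hi = np.quantile(x, [1 / k, 1 - 1 / k])
    return y[x >= hi].mean() / y[x <= lo].mean()

for k, label in [(2, "above vs below median"), (3, "extreme tertiles"),
                 (4, "extreme quartiles"), (5, "extreme quintiles")]:
    print(f"{label:>22}: RR = {rr_extreme_groups(x, y, k):.2f}")

Because the outcome risk rises monotonically with the exposure, the more extreme the compared groups, the larger the printed relative risk, even though the underlying association never changes; this is exactly the degree of freedom the authors find investigators exploiting when true effects are weak.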
An evaluation of published articles reporting epidemiological studies found that they almost universally highlight significant associations between risk factors and outcomes.
Editors' Summary
Background.
Medical and scientific researchers use statistical tests to try to work out whether their observations—for example, seeing a difference in some characteristic between two groups of people—might have occurred as a result of chance alone. Statistical tests cannot determine this for sure; rather, they can only give a probability that the observations would have arisen by chance. When researchers have many different hypotheses, and carry out many statistical tests on the same set of data, they run the risk of concluding that there are real differences where in fact there are none. At the same time, it has long been known that scientific and medical researchers tend to pick out the findings on which to report in their papers. Findings that are more interesting, impressive, or statistically significant are more likely to be published. This is termed “publication bias” or “selective reporting bias.” Therefore, some people are concerned that the published scientific literature might contain many false-positive findings, i.e., findings that are not true but are simply the result of chance variation in the data. This would have a serious impact on the accuracy of the published scientific literature and would tend to overestimate the strength and direction of relationships being studied.
Why Was This Study Done?
Selective reporting bias has already been studied in detail in the area of randomized trials (studies where participants are randomly allocated to receive an intervention, e.g., a new drug, versus an alternative intervention or “comparator,” in order to understand the benefits or safety of the new intervention). These studies have shown that very many of the findings of trials are never published, and that statistically significant findings are more likely to be included in published papers than nonsignificant findings. However, much medical research is carried out that does not use randomized trial methods, either because that method is not useful to answer the question at hand or is unethical. Epidemiological research is often concerned with looking at links between risk factors and the development of disease, and this type of research would generally use observation rather than experiment to uncover connections. The researchers here were concerned that selective reporting bias might be just as much of a problem in epidemiological research as in randomized trials research, and wanted to study this specifically.
What Did the Researchers Do and Find?
In this investigation, searches were carried out of PubMed, a database of biomedical research studies, to extract epidemiological studies that were published between January 2004 and October 2005. The researchers wanted to specifically look at studies reporting the effect of continuous risk factors and their effect on health or disease outcomes (a continuous risk factor is something like age or glucose concentration in the blood, is a number, and can have any value on a sliding scale). Three hundred and eighty-nine original research studies were found, and the researchers pulled out from the abstracts and full text of these papers the relative risks that were reported along with the results of statistical tests for them. (Relative risk is the chance of getting an outcome, say disease, in one group as compared to another group.) The researchers found that nearly 90% of these studies had one or more statistically significant risks reported in the abstract, but only 43% reported one or more risks that were not statistically significant. When looking at all of the findings reported anywhere in the full text for 50 of these studies, the researchers saw that papers overall reported more statistically significant risks than nonsignificant risks. Finally, it seemed that in the set of papers studied here, the way in which statistical analyses were done produced a bias towards more extreme findings: for datasets showing small relative risks, papers were more likely to report a comparison between extreme subsets of the data so as to report larger relative risks.
What Do These Findings Mean?
These findings suggest that there is a tendency among epidemiology researchers to highlight statistically significant findings and to avoid highlighting nonsignificant findings in their research papers. This behavior may be a problem, because many of these significant findings could in future turn out to be “false positives.” At present, registers exist for researchers to describe ongoing clinical trials, and to set out the outcomes that they plan to analyze for those trials. These registers will go some way towards addressing some of the problems described here, but only for clinical trials research. Registers do not yet exist for epidemiological studies, and therefore it is important that researchers and readers are aware of and cautious about the problem of selective reporting in epidemiological research.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040079.
Wikipedia entry on publication bias (note: Wikipedia is an internet encyclopedia that anyone can edit)
The International Committee of Medical Journal Editors gives guidelines for submitting manuscripts to its member journals, and includes comments about registration of ongoing studies and the obligation to publish negative studies
ClinicalTrials.gov and the ISRCTN register are two registries of ongoing clinical trials
doi:10.1371/journal.pmed.0040079
PMCID: PMC1808481  PMID: 17341129
9.  L2-norm multiple kernel learning and its application to biomedical data fusion 
BMC Bioinformatics  2010;11:309.
Background
This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, in contrast to the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have advantages over sparse integration methods for thoroughly combining complementary information in heterogeneous data sources.
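The norm correspondence the authors exploit can be sketched with a standard duality identity (a schematic, not the paper's exact derivation): for kernel weights θ_j ≥ 0 constrained to the L_p ball, maximizing the combined-kernel quadratic form inside the SVM dual collapses into a dual-norm penalty on the per-kernel terms,

% Schematic MKL norm duality; the K_j are the candidate kernel matrices.
\max_{\theta \ge 0,\ \|\theta\|_p \le 1}\;
\alpha^{\top}\Big(\sum_{j=1}^{P}\theta_j K_j\Big)\alpha
\;=\;
\Big\|\big(\alpha^{\top}K_1\alpha,\;\ldots,\;\alpha^{\top}K_P\alpha\big)\Big\|_{q},
\qquad \frac{1}{p}+\frac{1}{q}=1.

In this naming scheme, q = ∞ (p = 1) gives the existing sparse L∞ MKL, q = 1 (p = ∞) puts every weight at its bound and recovers uniform kernel averaging (L1 MKL), and q = 2 (p = 2) yields the non-sparse L2 MKL the paper proposes.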
Results
We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem and the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large scale data set processing.
Conclusions
This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid the "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the computational burden of MKL, this paper proposes several novel LSSVM based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has performance comparable to the conventional SVM MKL algorithms. Moreover, large scale numerical experiments indicate that when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL.
Availability
The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html.
doi:10.1186/1471-2105-11-309
PMCID: PMC2906488  PMID: 20529363
10.  How Can We Improve Problem Solving in Undergraduate Biology? Applying Lessons from 30 Years of Physics Education Research 
CBE Life Sciences Education  2013;12(2):153-161.
If students are to successfully grapple with authentic, complex biological problems as scientists and citizens, they need practice solving such problems during their undergraduate years. Physics education researchers have investigated student problem solving for the past three decades. Although physics and biology problems differ in structure and content, the instructional purposes align closely: explaining patterns and processes in the natural world and making predictions about physical and biological systems. In this paper, we discuss how research-supported approaches developed by physics education researchers can be adopted by biologists to enhance student problem-solving skills. First, we compare the problems that biology students are typically asked to solve with authentic, complex problems. We then describe the development of research-validated physics curricula emphasizing process skills in problem solving. We show that solving authentic, complex biology problems requires many of the same skills that practicing physicists and biologists use in representing problems, seeking relationships, making predictions, and verifying or checking solutions. We assert that acquiring these skills can help biology students become competent problem solvers. Finally, we propose how biology scholars can apply lessons from physics education in their classrooms and inspire new studies in biology education research.
How can physics inform biology in problem-solving? The authors discuss how research-supported approaches developed by physics education researchers can be adopted by biologists to enhance student problem-solving skills.
doi:10.1187/cbe.12-09-0149
PMCID: PMC3671643  PMID: 23737623
11.  Introducing the MCHF/OVRP/SDMP: Multicapacitated/Heterogeneous Fleet/Open Vehicle Routing Problems with Split Deliveries and Multiproducts 
The Scientific World Journal  2014;2014:515402.
In this paper, we analyze a real-world OVRP faced by a production company. Considering real-world constraints, we classify our problem as a multicapacitated/heterogeneous fleet/open vehicle routing problem with split deliveries and multiproducts (MCHF/OVRP/SDMP), which is a novel classification of an OVRP. We have developed a mixed integer programming (MIP) model for the problem and generated test problems of different sizes (10–90 customers) using real-world parameters. Although MIP is able to find optimal solutions for small problems (10 customers), the problem gets harder to solve as the number of customers increases, and MIP could not find optimal solutions for problems that contain more than 10 customers. Moreover, MIP fails to find any feasible solution for large-scale problems (50–90 customers) within the time limit (7200 seconds). Therefore, we have developed a genetic algorithm (GA) based solution approach for large-scale problems. The experimental results show that, for problems with 10–50 customers, the GA based approach reaches successful solutions with a 9.66% gap in 392.8 s on average, instead of 7200 s. For large-scale problems (50–90 customers), GA reaches feasible solutions within the time limit. In conclusion, for real-world applications, GA is preferable to MIP for reaching feasible solutions in short time periods.
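As a rough illustration of the GA side of such an approach, the sketch below evolves a single-vehicle open route in Python with order crossover and swap mutation. It deliberately ignores the capacity, heterogeneous-fleet, split-delivery, and multiproduct constraints of the actual MCHF/OVRP/SDMP, and the operators and parameters are assumptions, not the authors' implementation.

import random

def route_cost(perm, dist):
    # Cost of serving customers in 'perm' order from depot 0; in an open
    # route the vehicle does not return to the depot, so there is no final leg.
    cost, prev = 0.0, 0
    for c in perm:
        cost += dist[prev][c]
        prev = c
    return cost

def order_crossover(p1, p2, rng):
    # OX: keep a slice of parent 1, fill the remaining customers in parent 2's order.
    a, b = sorted(rng.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    filler = [c for c in p2 if c not in hole]
    return filler[:a] + p1[a:b] + filler[a:]

def ga_sketch(dist, pop_size=50, generations=300, seed=1):
    rng = random.Random(seed)
    customers = list(range(1, len(dist)))  # node 0 is the depot
    pop = [rng.sample(customers, len(customers)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: route_cost(p, dist))
        elite = pop[: pop_size // 2]       # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            child = order_crossover(rng.choice(elite), rng.choice(elite), rng)
            i, j = rng.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: route_cost(p, dist))

A solver for the full MCHF/OVRP/SDMP would additionally decode each chromosome into multiple routes per vehicle type and repair capacity and split-delivery violations.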
doi:10.1155/2014/515402
PMCID: PMC4083288  PMID: 25045735
12.  Do longer consultations improve the management of psychological problems in general practice? A systematic literature review 
Background
Psychological problems present a huge burden of illness in our community and GPs are the main providers of care. There is evidence that longer consultations in general practice are associated with improved quality of care; but this needs to be balanced against the fact that doctor time is a limited resource and longer consultations may lead to reduced access to health care.
The aim of this research was to conduct a systematic literature review to determine whether management of psychological problems in general practice is associated with an increased consultation length and to explore whether longer consultations are associated with better health outcomes for patients with psychological problems.
Methods
A search was conducted on Medline (Ovid) databases up to 7 June 2006. The following search terms were used:
general practice or primary health care (free text) or family practice (MeSH)
AND consultation length or duration (free text) or time factors (MeSH)
AND depression or psychological problems or depressed (free text).
A similar search was done in Web of Science, PubMed, Google Scholar, and the Cochrane Library, and no other papers were found.
Studies were included if they contained data comparing consultation length and management or detection of psychological problems in a general practice or primary health care setting. The studies were read and categories developed to enable systematic data extraction and synthesis.
Results
Twenty-nine papers met the inclusion criteria. Consultations with a recorded diagnosis of a psychological problem were reported to be longer than those with no recorded psychological diagnosis; it is not clear whether this is related to the extra time itself or to the consultation style. GPs reported that time pressure is a major barrier to treating depression. There was some evidence that increased consultation length is associated with more accurate diagnosis of psychological problems.
Conclusion
Further research is needed to elucidate the factors in longer consultations that are associated with greater detection of psychological problems, and to determine the association between the detection of psychological problems and the attitude, gender, age or training of the GP and the age, gender and socioeconomic status of the patient. These are important considerations if general practice is to deal more effectively with people with psychological problems.
doi:10.1186/1472-6963-7-71
PMCID: PMC1890290  PMID: 17506904
13.  Prevalence and Risk of Violence and the Physical, Mental, and Sexual Health Problems Associated with Human Trafficking: Systematic Review 
PLoS Medicine  2012;9(5):e1001224.
Siân Oram and colleagues conduct a systematic review of the evidence on the health consequences of human trafficking. They describe a limited and poor-quality evidence base, but some evidence suggests a high prevalence of violence and mental distress among women and girls trafficked for sexual exploitation, among other findings.
Background
There is very limited evidence on the health consequences of human trafficking. This systematic review reports on studies investigating the prevalence and risk of violence while trafficked and the prevalence and risk of physical, mental, and sexual health problems, including HIV, among trafficked people.
Methods and Findings
We conducted a systematic review comprising a search of Medline, PubMed, PsycINFO, EMBASE, and Web of Science, hand searches of reference lists of included articles, citation tracking, and expert recommendations. We included peer-reviewed papers reporting on the prevalence or risk of violence while trafficked and/or on the prevalence or risk of any measure of physical, mental, or sexual health among trafficked people. Two reviewers independently screened papers for eligibility and appraised the quality of included studies. The search identified 19 eligible studies, all of which reported on trafficked women and girls only and focused primarily on trafficking for sexual exploitation. The review suggests a high prevalence of violence and of mental distress among women and girls trafficked for sexual exploitation. The random effects pooled prevalence of diagnosed HIV was 31.9% (95% CI 21.3%–42.4%) in studies of women accessing post-trafficking support in India and Nepal, but the estimate was associated with high heterogeneity (I2 = 83.7%). Infection prevalence may be related as much to prevalence rates in women's areas of origin or exploitation as to the characteristics of their experience. Findings are limited by the methodological weaknesses of primary studies and their poor comparability and generalisability.
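For orientation, a random-effects pooled prevalence of this kind is typically computed with the standard DerSimonian–Laird estimator (shown schematically below; the review does not spell out its exact computation):

\hat{p} \;=\; \frac{\sum_{i=1}^{k} w_i\, p_i}{\sum_{i=1}^{k} w_i},
\qquad w_i = \frac{1}{v_i + \hat{\tau}^2},
\qquad
I^2 \;=\; \max\!\Big(0,\ \frac{Q-(k-1)}{Q}\Big),

where p_i and v_i are study i's prevalence estimate and within-study variance, τ̂² is the estimated between-study variance, and Q is Cochran's heterogeneity statistic. An I² of 83.7% therefore indicates that most of the variation between the pooled estimates reflects genuine between-study differences rather than sampling error, which is consistent with the authors' caution in interpreting the 31.9% figure.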
Conclusions
Although limited, existing evidence suggests that trafficking for sexual exploitation is associated with violence and a range of serious health problems. Further research is needed on the health of trafficked men, individuals trafficked for other forms of exploitation, and effective health intervention approaches.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The United Nations defines human trafficking as the recruitment and movement of individuals—most often by force, coercion or deception—for the purpose of exploitation. Essentially, human trafficking is the modern version of the slave trade and is a gross violation of human rights. People who have been trafficked may be sold on to the sex industry or forced to work in many forms of labor, including in domestic service and in the agricultural and construction industries. Given the nature of human trafficking, quantifying the scale of the problem is fraught with difficulties, but 2005 statistics estimate that 2.5 million people were in forced labor as a result of being trafficked.
Why Was This Study Done?
To date, the health consequences and public health implications of human trafficking have received little international attention, partly because not much is known about this area. So in this study, the researchers examined published studies in order to assimilate evidence and information on the prevalence of all forms of violence relating to people who have been trafficked and the prevalence of physical, mental, and sexual health problems, including HIV/AIDS, among this group.
What Did the Researchers Do and Find?
The researchers searched the published literature for suitable studies by conducting a comprehensive key word search of key databases and by contacting experts. The researchers did not exclude any type of study from their search but used stringent criteria to identify appropriate studies and then assessed the quality of identified studies by using a critical appraisal tool.
Using this process, the researchers initially identified 407 papers but only 19 were suitable for their analysis, representing 16 different studies. The majority (11) of these studies were conducted in Asia (Nepal, India, Thailand, and Cambodia), and all studies focused solely on women and girls, with all but two studies examining sexual exploitation only.
In their analysis of these studies, the researchers found that women and girls who had been trafficked for sexual exploitation were consistently reported to have experienced high levels of physical and sexual violence. Studies also reported a high prevalence of physical, mental, and sexual health problems among women who had been trafficked; headache, back pain, stomach pain, and memory problems were commonly reported physical health symptoms. The studies that used screening tools to identify mental distress found high levels of anxiety (48.0%–97.7%), depression (54.9%–100%), and post-traumatic stress disorder (19.5%–77.0%). Furthermore, the three studies that examined the associations between trafficking and health suggest that a longer duration of trafficking may be linked to higher levels of mental distress and increased risk of HIV infection. The few studies that examined the prevalence of HIV infection (in women accessing post-trafficking services in India and Nepal) showed a combined prevalence of 31.9%.
What Do These Findings Mean?
These findings, although limited, show that trafficking for sexual exploitation is associated with violence and a range of serious health problems. However, the key finding of this study is that evidence on trafficked people's experiences of violence and of physical, mental, and sexual health problems is extremely limited. There is an enormous gap in research on the health of trafficked men, trafficked children, and people who have been trafficked for labor exploitation. There is an urgent need for more and better information on the needs and experiences of people who have been trafficked, including evidence on effective interventions to mitigate the associated physical and psychological damage.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001224.
Humantrafficking.org is a web resource for combatting human trafficking, available in a number of languages
Stop the Traffik is an international movement campaigning to stop human trafficking, available in a number of languages
The Not For Sale campaign works to abolish this form of slavery
doi:10.1371/journal.pmed.1001224
PMCID: PMC3362635  PMID: 22666182
14.  Steady state analysis of Boolean molecular network models via model reduction and computational algebra 
BMC Bioinformatics  2014;15:221.
Background
A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general.
Results
This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author.
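The polynomial encoding is compact enough to show on a toy example. Over the field with two elements, addition is XOR, so a state x is steady exactly when f_i(x) + x_i = 0 for every node i. The three-node network and code below are illustrative (not from the paper); for large n the same polynomial system is handed to computer algebra software instead of being enumerated:

from itertools import product

# Toy 3-node Boolean network: x1' = x2, x2' = x1, x3' = x1 AND x3.
F = [lambda x: x[1],
     lambda x: x[0],
     lambda x: x[0] & x[2]]

# Steady states solve f_i(x) + x_i = 0 over GF(2), i.e. f_i(x) XOR x_i = 0.
steady = [x for x in product((0, 1), repeat=3)
          if all(f(x) ^ x[i] == 0 for i, f in enumerate(F))]
print(steady)   # [(0, 0, 0), (1, 1, 0), (1, 1, 1)]

Enumerating 2^3 states is trivial here; the paper's contribution is solving the same system algebraically (after the wiring-diagram reduction) when 2^n states are far too many to enumerate.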
Conclusions
The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models, even those of moderate connectivity. The problem for large Boolean networks with high average connectivity remains open.
doi:10.1186/1471-2105-15-221
PMCID: PMC4230806  PMID: 24965213
Steady state computation; Boolean model; Discrete model
15.  Using Remote Sensing and GIS in the Analysis of Ecosystem Decline along the River Niger Basin: The Case of Mali and Niger 
In the Sub-Saharan African region of the River Niger Basin, where none of the major rivers is fully contained within the borders of a single nation, riverine ecosystem health monitoring is essential for survival. Even the globally proclaimed goals of sustainability and environmental security in the region are unattainable without using the geospatial technologies of remote sensing and Geographic Information Systems (GIS) as conduits for environmental health within shared waters. Yet the systematic study of the nature of cooperation between states over shared water resources in troubled areas of the Middle East continues to dominate the literature, with minimal coverage of the Sub-Saharan African experience and of the role of GIS and remote sensing in monitoring the problem. Considering the intense ecosystem stress inflicted on the River Niger by human activities and natural forces emanating from upstream and downstream nations, researching the growing potential for acute riverine ecosystem decline among the nations of Niger and Mali along the River Niger Basin, with the latest advances in spatial information technology as a decision support tool, not only helps in ecosystem recovery and the avoidance of conflicts, but also has the potential to bring countries much closer through information exchange. While the nature of the problem remains compounded by the depletion of available water resources and environmental resources within shared waters, the lack of information exchange extracts ecological costs from all players. This is essential as the Niger Basin nations move towards multinational watershed management as a conduit for sustainability. To confront these problems, several research questions relevant to the paper have been posed: Have there been any declines in the riverine ecosystem of the study area? What are the effects, and what factors trigger the changes? What mitigation measures are in place for dealing with the problems? The first objective of the paper is to develop a new framework for analyzing the health of riverine ecosystems, while the second objective seeks a contribution to the literature. The third objective is to design a geo-spatial tool for riverine ecosystem management and impact analysis. The fourth objective is to measure the nature of change in riverine environments with the latest advances in geo-spatial information technologies and methods. In terms of methodology, the paper relies on primary data sources analyzed with descriptive statistics, GIS techniques, and remote sensing. The sections of the paper consist of a review of the major environmental effects and factors associated with the problem, as well as mitigation measures in Mali and Niger. The paper concludes with some recommendations. The results point to growing modification along the riverine environments of the Mali and Niger portions of the River Niger Basin due to a host of factors.
PMCID: PMC3728584  PMID: 17617682
GIS; remote sensing; riverine ecosystem; management; decline
16.  Family practice clerkship encounters documented with structured phrases on paper and hand-held computer logs. 
Patient encounter logs allow faculty to monitor students' clinical experiences, especially in decentralized clerkships. However, there are generally tradeoffs involving the expressiveness of patient encounter forms, the effort required to complete the forms, and the utility of the forms for informing the clerkship director. The family practice clerkship at Washington University changed the school's standard free-text paper log to a controlled-vocabulary paper log, borrowing 93 generic ICD-9 codes and the SNOMED concept of 'process at location' phrases for localized problems. Subsequently, this architecture was used in a Palm computer program. Students using the structured paper logs documented slightly more patient encounters than students using free-text logs in the previous year, with similar numbers of problems per patient (1.3 to 1.4) and similar prevalence of common illnesses, but used the phrase structure and code vocabulary inconsistently. Students using computer logs documented many more patient encounters, but documented only 1.09 problems per patient. Students' documentation of psychosocial diagnoses declined significantly with the computer log. Although the computer program was flexible, the effort required to enter multiple problems exceeded the effort of finding similar codes on a short paper form. This problem confounds efforts to monitor exposure to complex patients and hidden medical problems. Another design for the hand-held computer log is being tested.
PMCID: PMC2243790  PMID: 11079943
17.  Addressing the policy cacophony does not require more evidence: an argument for reframing obesity as caloric overconsumption 
BMC Public Health  2012;12:1042.
Background
Numerous policies have been proposed to address the public health problem of obesity, resulting in a policy cacophony. The noise of so many policy options renders it difficult for policymakers to determine which policies warrant implementation. This has resulted in calls for more and better evidence to support obesity policy. However, it is not clear that evidence is the solution. This paper argues that to address the policy cacophony it is necessary to rethink the problem of obesity, and more specifically, how the problem of obesity is framed. This paper argues that the frame “obesity” be replaced by the frame “caloric overconsumption”, concluding that the frame caloric overconsumption can overcome the obesity policy cacophony.
Discussion
Frames are important because they influence public policy. Understood as packages that define issues, frames influence how best to approach a problem. Consequently, debates over public policy are considered battles over framing, with small shifts in how an issue is framed resulting in significant changes to the policy environment. This paper presents a rationale for reframing the problem of obesity as caloric overconsumption. The frame “obesity” contributes to the policy cacophony by including policies aimed at both energy output and energy input. However, research increasingly demonstrates that energy input is the primary cause of obesity, and that increases in energy input are largely attributable to the food environment. By focusing on policies that aim to prevent increases in energy input, the frame caloric overconsumption will reduce the noise of the obesity policy cacophony. While the proposed frame will face some challenges, particularly industry opposition, policies aimed at preventing caloric overconsumption have a clearer focus, and can be more politically palatable if caloric overconsumption is seen as an involuntary risk resulting from the food environment.
Summary
The paper concludes that policymakers will be able to make better sense of the obesity policy cacophony if the problem of obesity is reframed as caloric overconsumption. By focusing on a specific cause of obesity, energy input, the frame caloric overconsumption allows policymakers to focus on the most promising obesity prevention policies.
doi:10.1186/1471-2458-12-1042
PMCID: PMC3527165  PMID: 23199375
Obesity; Caloric overconsumption; Framing; Food environment; Public health policy
18.  Screening for distress, the 6th vital sign: common problems in cancer outpatients over one year in usual care: associations with marital status, sex, and age 
BMC Cancer  2012;12:441.
Background
Very few studies examine the longitudinal prevalence of problems and the awareness or use of clinical programs by patients who report these problems. Of the studies that examine age, gender and marital status as predictors of a range of patient outcomes, none examines the interactions between these demographic variables. This study examined the typical trajectory of common practical and psychosocial problems endorsed over 12 months in a usual-care sample of cancer outpatients. Specifically, we examined whether marital status, sex, age, and their interactions predicted these trajectories. We did not actively triage or refer patients in this study in order to examine the natural course of problem reports.
Methods
Patients completed baseline screening (N = 1196 of 1707 approached) and the sample included more men (N = 696) than women (N = 498), average age 61.1 years. The most common diagnoses were gastrointestinal (27.1%), prostate (19.2%), skin (11.1%) and gynecological (9.2%). Among other measures, patients completed a Common Problem Checklist and Psychosocial Resources Use questions at baseline, 3, 6, and 12 months using paper and pencil surveys.
Results
Patients reported psychosocial problems more often than practical problems, and both types decreased significantly over time. Younger single patients reported more practical problems than younger patients in committed relationships. Younger patients and women of all ages reported more psychosocial problems. Among a number of interesting interactions: for practical problems, single older patients improved more, whereas among married patients, younger patients improved more; for psychosocial problems, older female patients improved more than younger females, but among males it was younger patients who improved more. Young single men and women reported the most past and future use of services.
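A minimal sketch (simulated data; all variable and column names are hypothetical, not the study's dataset) of how such problem trajectories and demographic interactions can be modeled, using a random-intercept linear mixed model:

```python
# A minimal sketch (simulated data, hypothetical variable names) of testing
# whether demographics and their interactions predict problem trajectories.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n, waves = 300, 4                                   # patients, assessments
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n), waves),
    "month": np.tile([0, 3, 6, 12], n),             # baseline, 3, 6, 12 months
    "age": np.repeat(rng.uniform(30, 85, n), waves),
    "married": np.repeat(rng.integers(0, 2, n), waves),
})
intercept = np.repeat(rng.normal(0, 1, n), waves)   # patient-level variation
df["problems"] = (5 - 0.03 * df["age"] - 0.1 * df["month"]
                  + 0.005 * df["month"] * df["age"]
                  + intercept + rng.normal(0, 1, len(df))).clip(lower=0)

# Interaction terms let the slope over time differ by age and marital status.
model = smf.mixedlm("problems ~ month * age + month * married",
                    data=df, groups=df["patient"]).fit()
print(model.summary())
```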
Conclusions
Younger women are particularly vulnerable to experiencing practical and psychosocial problems when diagnosed with cancer, but marriage appears to protect them: it buffered reports of both practical and psychosocial problems, and was associated with less awareness and use of services. Unexpectedly, young men reported the highest use of psychosocial services. This study informs clinical program development with information on these risk groups.
doi:10.1186/1471-2407-12-441
PMCID: PMC3528655  PMID: 23031647
Marital status; Age; Sex; Cancer; Oncology; Screening for distress; Common problems
19.  A review of the handling of missing longitudinal outcome data in clinical trials 
Trials  2014;15:237.
The aim of this review was to establish how frequently trials take missingness into account, and to discover which methods trialists use to adjust for missing data in randomised controlled trials with longitudinal measurements. Failing to address the problems that can arise from missing outcome data can result in misleading conclusions; at a minimum, missing data should be addressed as a sensitivity analysis of the complete case analysis results. One hundred publications of randomised controlled trials with longitudinal measurements were selected at random from trial publications from the years 2005 to 2012. Information was extracted from these trials, including whether reasons for dropout were reported, what methods were used for handling the missing data, whether there was any explanation of the methods for missing data handling, and whether a statistician was involved in the analysis. The main focus of the review was on missing data after dropout rather than missing interim data. Of all the papers in the study, 9 (9%) had no missing data. More than half of the papers made no attempt to explain the reasons for their choice of missing data handling method. Of the papers with clear missing data handling methods, 44 (50%) used adequate methods, whereas 30 (34%) used methods that may not have been appropriate; in the remaining 17 papers (19%), it was difficult to assess the validity of the methods used. An imputation method was used in 18 papers (20%). Multiple imputation methods were introduced in 1987 and are an efficient way of accounting for missing data in general, yet only 4 papers used them. Of the 18 papers that used imputation, only 7 presented the results as a sensitivity analysis of the complete case analysis results, and 61% explained the reasons for their chosen method. Just under a third of the papers made no reference to reasons for missing outcome data. There was little consistency in the reporting of missing data within longitudinal trials.
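As a concrete illustration of the recommended practice (simulated data, not drawn from any reviewed trial), the sketch below contrasts a complete case analysis with multiple imputation, here using the MICE implementation in Python's statsmodels:

```python
# A minimal sketch (simulated data) contrasting a complete case analysis with
# multiple imputation by chained equations, pooled across imputed data sets.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)
y[rng.random(n) < 0.3] = np.nan          # 30% of outcomes missing (at random)
df = pd.DataFrame({"y": y, "x": x})

# Complete case analysis: drop every row with a missing outcome.
cc = sm.OLS.from_formula("y ~ x", data=df.dropna()).fit()
print("complete case slope:", cc.params["x"])

# Multiple imputation: impute, refit, and pool across 20 imputed data sets;
# reporting both results is the sensitivity analysis the review recommends.
pooled = MICE("y ~ x", sm.OLS, MICEData(df)).fit(n_burnin=10, n_imputations=20)
print(pooled.summary())
```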
doi:10.1186/1745-6215-15-237
PMCID: PMC4087243  PMID: 24947664
Review; Missing; Data; Handling; Longitudinal; Repeated; Measures
20.  Predicting Outcome after Traumatic Brain Injury: Development and International Validation of Prognostic Scores Based on Admission Characteristics 
PLoS Medicine  2008;5(8):e165.
Background
Traumatic brain injury (TBI) is a leading cause of death and disability. A reliable prediction of outcome on admission is of great clinical relevance. We aimed to develop prognostic models with readily available traditional and novel predictors.
Methods and Findings
Prospectively collected individual patient data were analyzed from 11 studies. We considered predictors available at admission in logistic regression models to predict mortality and unfavorable outcome according to the Glasgow Outcome Scale at 6 mo after injury. Prognostic models were developed in 8,509 patients with severe or moderate TBI, with cross-validation by omission of each of the 11 studies in turn. External validation was performed on 6,681 patients from the recent Medical Research Council Corticosteroid Randomisation after Significant Head Injury (MRC CRASH) trial. We found that the strongest predictors of outcome were age, motor score, pupillary reactivity, and CT characteristics, including the presence of traumatic subarachnoid hemorrhage. A prognostic model that combined age, motor score, and pupillary reactivity had an area under the receiver operating characteristic curve (AUC) between 0.66 and 0.84 at cross-validation. This performance could be improved (AUC increased by approximately 0.05) by considering CT characteristics, secondary insults (hypotension and hypoxia), and laboratory parameters (glucose and hemoglobin). External validation confirmed that the discriminative ability of the model was adequate (AUC 0.80). Outcomes were systematically worse than predicted, but less so in the 1,588 patients from high-income countries in the CRASH trial.
Conclusions
Prognostic models using baseline characteristics provide adequate discrimination between patients with good and poor 6 mo outcomes after TBI, especially if CT and laboratory findings are considered in addition to traditional predictors. The model predictions may support clinical practice and research, including the design and analysis of randomized controlled trials.
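A minimal sketch (synthetic data, not the IMPACT cohort; the coefficients are invented) of the core-model idea: logistic regression on age, motor score, and pupillary reactivity, with discrimination summarized by the AUC:

```python
# A minimal sketch of a core prognostic model on synthetic admission data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
age = rng.uniform(15, 85, n)
motor = rng.integers(1, 7, n)                 # GCS motor score, 1-6
pupils = rng.integers(0, 3, n)                # 0=none, 1=one, 2=both reactive
logit = -2.5 + 0.04 * age - 0.5 * motor - 0.6 * pupils   # invented effects
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # 1 = unfavorable outcome

X = np.column_stack([age, motor, pupils])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```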
Ewout Steyerberg and colleagues describe a prognostic model for the prediction of outcome of traumatic brain injury using data available on admission.
Editors' Summary
Background.
Traumatic brain injury (TBI) causes a large amount of morbidity and mortality worldwide. According to the Centers for Disease Control, for example, about 1.4 million Americans will sustain a TBI—a head injury—each year. Of these, 1.1 million will be treated and released from an emergency department, 235,000 will be hospitalized, and 50,000 will die. The burden of disease is much higher in the developing world, where the causes of TBI such as traffic accidents occur at higher rates and treatment may be less available.
Why Was This Study Done?
Given the resources required to treat TBI, a very useful research tool would be the ability to accurately predict on admission to hospital what the outcome of a given injury might be. Currently, scores such as the Glasgow Coma Scale are useful to predict outcome 24 h after the injury but not before.
Prognostic models are useful for several reasons. Clinically, they help doctors and patients make decisions about treatment. They are also useful in research studies that compare outcomes in different groups of patients and when planning randomized controlled trials. The study presented here is one of a number of analyses done by the IMPACT research group over the past several years using a large database that includes data from eight randomized controlled trials and three observational studies conducted between 1984 and 1997. There are other ongoing studies that also seek to develop new prognostic models; one such recent study was published in BMJ by a group involving the lead author of the PLoS Medicine paper described here.
What Did the Researchers Do and Find?
The authors analyzed data that had been collected prospectively on individual patients from the 11 studies included in the database and derived models to predict mortality and unfavorable outcome at 6 mo after injury for the 8,509 patients with severe or moderate TBI. They found that the strongest predictors of outcome were age, motor score, pupillary reactivity, and characteristics on the CT scan, including the presence of traumatic subarachnoid hemorrhage. A core prognostic model could be derived from the combination of age, motor score, and pupillary reactivity. A better score could be obtained by adding CT characteristics, secondary problems (hypotension and hypoxia), and laboratory measurements of glucose and hemoglobin. The scores were then tested to see how well they predicted outcome in a different group of patients—6,681 patients from the recent Medical Research Council Corticosteroid Randomisation after Significant Head Injury (MRC CRASH) trial.
What Do These Findings Mean?
In this paper the authors show that it is possible to produce prognostic models using characteristics collected on admission as part of routine care that can discriminate between patients with good and poor outcomes 6 mo after TBI, especially if the results from CT scans and laboratory findings are added to basic models. This paper has to be considered together with other studies, especially the paper mentioned above, which was recently published in the BMJ (MRC CRASH Trial Collaborators [2008] Predicting outcome after traumatic brain injury: practical prognostic models based on large cohort of international patients. BMJ 336: 425–429.). The BMJ study presented a set of similar, but subtly different models, with specific focus on patients in developing countries; in that case, the patients in the CRASH trial were used to produce the models, and the patients in the IMPACT database were used to verify one variant of the models. Unfortunately this related paper was not disclosed to us during the initial review process; however, during PLoS Medicine's subsequent consideration of this manuscript we learned of it. After discussion with the reviewers, we took the decision that the models described in the PLoS Medicine paper are sufficiently different from those reported in the other paper and as such proceeded with publication of the paper. Ideally, however, these two sets of models would have been reviewed and published side by side, so that readers could easily evaluate the respective merits and value of the two different sets of models in the light of each other. The two sets of models are, however, discussed in a Perspective article also published in PLoS Medicine (see below).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050165.
This paper and the BMJ paper mentioned above are discussed further in a PLoS Medicine Perspective article by Andrews and Young
The TBI Impact site provides a tool to calculate the scores described in this paper
The CRASH trial, which is used to validate the scores mentioned here, has a Web site explaining the trial and its results
The R software, which was used for the prognostic analyses, is freely available
The MedlinePlus encyclopedia has information on head injury
The WHO site on neurotrauma discusses head injury from a global perspective
The CDC's National Center for Injury Prevention and Control gives statistics on head injury in the US and advice on prevention
doi:10.1371/journal.pmed.0050165
PMCID: PMC2494563  PMID: 18684008
21.  A latent class analysis of underage problem drinking: Evidence from a community sample of 16–20 year olds 
Drug and Alcohol Dependence  2005;83(3):199-209.
The aim of this paper is to shed light on the nature of underage problem drinking by using an empirically based method to characterize the variation in drinking patterns in a community sample of underage drinkers. A total of 4056 current drinkers aged 16–20 years from 212 communities in the US were surveyed by telephone as part of the National Evaluation of the Enforcing Underage Drinking Laws (EUDL) Program. Latent class models were used to create homogeneous groups of drinkers with similar drinking patterns, defined by multiple indicators of drinking behaviors and alcohol-related problems. Two types of underage problem drinkers were identified: risky drinkers (30%) and regular drinkers (27%). The most prominent behaviors among both types were binge drinking and getting drunk. Being male, other drug use, early-onset drinking, and beliefs about friends' drinking and getting drunk were all associated with an increased risk of being a problem drinker after adjustment for other factors. Beliefs that most friends drink and current marijuana use were the strongest predictors of both risky problem drinking (OR = 4.0; 95% CI = 3.1, 5.1 and OR = 4.0; 95% CI = 2.8, 5.6, respectively) and regular problem drinking (OR = 10.8; 95% CI = 7.0, 16.7 and OR = 10.2; 95% CI = 6.9, 15.2). Young adulthood (ages 18–20) was significantly associated with regular problem drinking but not risky problem drinking. The belief that most friends get drunk weekly was the strongest discriminator between risky and regular problem drinking patterns (OR = 5.3; 95% CI = 3.9, 7.1). These findings suggest that underage problem drinking is most strongly characterized by heavy drinking behaviors that can emerge in late adolescence, and underscore its association with perceptions of friends' drinking behaviors and illicit drug use.
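As an illustration of the method (not the authors' code), a latent class model for binary indicators is a finite mixture of independent Bernoulli distributions and can be fit by EM; the sketch below simulates two classes of respondents and recovers their sizes. All data and dimensions are invented:

```python
# A minimal EM sketch of a latent class model on binary drinking indicators.
import numpy as np

rng = np.random.default_rng(1)
n, k, d = 4000, 2, 6                      # respondents, classes, binary items
true_pi = np.array([0.55, 0.45])
true_theta = rng.uniform(0.1, 0.9, (k, d))
z = rng.choice(k, n, p=true_pi)
X = (rng.random((n, d)) < true_theta[z]).astype(float)

pi = np.full(k, 1 / k)
theta = rng.uniform(0.3, 0.7, (k, d))
for _ in range(200):
    # E-step: posterior class probabilities for each respondent.
    log_p = np.log(pi) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
    log_p -= log_p.max(axis=1, keepdims=True)
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update class sizes and item-endorsement probabilities.
    pi = resp.mean(axis=0)
    theta = ((resp.T @ X) / resp.sum(axis=0)[:, None]).clip(1e-6, 1 - 1e-6)

print("estimated class sizes:", pi.round(2))
```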
doi:10.1016/j.drugalcdep.2005.11.013
PMCID: PMC2569969  PMID: 16359829
Adolescent; Alcohol; Drinking patterns; Epidemiology; Latent class analysis; Problem drinking
22.  What is important, what needs treating? How GPs perceive older patients’ multiple health problems: a mixed method research study 
BMC Research Notes  2012;5:443.
Background
GPs increasingly deal with multiple health problems of their older patients. They have to apply a hierarchical management approach that considers priorities to balance competing needs for treatment. Yet, the practice of setting individual priorities in older patients is largely unexplored. This paper analyses the GPs’ perceptions on important and unimportant health problems and how these affect their treatment.
Methods
GPs appraised the importance of health problems for a purposive sample of their older patients in semi-structured interviews. Prior to the interviews, the GPs had received a list of their patients’ health problems resulting from a geriatric assessment and were asked to rate the importance of each identified problem. In the interviews the GPs subsequently explained why they considered certain health problems important or not and how this affected treatment. Data was analysed using qualitative content analysis and quantitative methods.
Results
The problems GPs perceive as important are medical problems that require active treatment or monitoring, or problems that evoke empathy or concern even though little further help can be offered. Unimportant problems are those that are well managed and need no further attention, as well as age-related conditions or functional disabilities that provoke fatalism, or problems considered outside the GPs' responsibility. Statements of professional actions are closely linked to explanations of important problems and relate to physical problems rather than functional and social patient issues.
Conclusions
GPs tend to prioritise treatable clinical conditions. Treatment approaches are, however, vague or missing for complex chronic illnesses and disabilities. Here, patient empowerment strategies are of value and need to be developed and implemented. The professional concepts of ageing and disability should not impede but rather foster treatment and care. To this end, GPs need to be able to delegate care to a functioning primary care team.
Trial Registration
German Trial Register (DRKS): 00000792
doi:10.1186/1756-0500-5-443
PMCID: PMC3475051  PMID: 22897907
Health priorities; Multimorbidity; Old age; Family practice; Patient-centred care
23.  Obtaining Reliable Likelihood Ratio Tests from Simulated Likelihood Functions 
PLoS ONE  2014;9(10):e106136.
Mixed models
Models that allow for continuous heterogeneity by assuming that the values of one or more parameters follow a specified distribution have become increasingly popular. This is known as ‘mixing’ parameters, and it is standard practice by researchers - and the default option in many statistical programs - to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws).
Problem 1: Inconsistent LR tests due to asymmetric draws:
This paper shows that when the estimated likelihood functions depend on the standard deviations of mixed parameters, this practice is very likely to cause misleading test results at the numbers of draws usually used today. The paper illustrates that increasing the number of draws is a very inefficient solution, requiring very large numbers of draws to guard against misleading test statistics. The main conclusion is that the problem can be solved completely by using fully antithetic draws, and that one-dimensionally antithetic draws are not enough.
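A minimal sketch (our reading of the recommendation, not the authors' code) of fully antithetic draws in d dimensions: each base draw is replicated under all 2^d sign flips, in contrast to one-dimensionally antithetic draws, which flip only a single dimension at a time:

```python
# Fully antithetic standard normal draws: every base draw appears with all
# 2^d sign patterns, so simulated means of odd functions are exactly zero
# and simulation noise in the estimated likelihood is sharply reduced.
import itertools
import numpy as np

def fully_antithetic_normals(n_base: int, d: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    base = rng.standard_normal((n_base, d))
    signs = np.array(list(itertools.product([1.0, -1.0], repeat=d)))  # 2^d rows
    # Each base draw paired with every sign pattern -> n_base * 2^d draws.
    return (base[:, None, :] * signs[None, :, :]).reshape(-1, d)

draws = fully_antithetic_normals(n_base=250, d=2)
print(draws.shape, draws.mean(axis=0))  # means are exactly 0 by construction
```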
Problem 2: Maintaining the correct dimensions when reducing the mixing distribution:
A second point of the paper is that even when fully antithetic draws are used, models that reduce the dimension of the mixing distribution must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. Again, this is not standard practice in research or in statistical programs. The paper therefore recommends using fully antithetic draws, replicating the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood, and argues that this should become the default option in statistical programs. JEL classification: C15; C25.
doi:10.1371/journal.pone.0106136
PMCID: PMC4203670  PMID: 25329712
24.  Methodological issues in the study of violence against women 
Journal of Epidemiology and Community Health  2007;61(Suppl 2):ii26-ii31.
The objective of this paper is to review the methodological issues that arise when studying violence against women as a public health problem, focusing on intimate partner violence (IPV), since this is the form of violence with the greatest consequences at a social and political level. The paper first addresses the problems of defining what is meant by IPV. Second, it describes the difficulties in assessing the magnitude of the problem: obtaining reliable data on this type of violence is a complex task because of methodological issues arising from the very nature of the phenomenon, such as the private, intimate context in which this violence often takes place, which means the problem cannot be directly observed. Finally, the paper examines the limitations and biases in research on violence, including the lack of consensus on how to measure events that may or may not represent a risk factor for violence against women, and the methodological problems related to the type of sampling used in both aetiological and prevalence studies.
doi:10.1136/jech.2007.059907
PMCID: PMC2465770  PMID: 18000113
women; domestic violence; spouse abuse; public health
25.  Enlarge the Training Set Based on Inter-Class Relationship for Face Recognition from One Image per Person 
PLoS ONE  2013;8(7):e68539.
In some large-scale face recognition tasks, such as driver license identification and law enforcement, the training set contains only one image per person. This situation is referred to as the one sample problem. Because many face recognition techniques implicitly assume that several (at least two) images per person are available for training, they cannot deal with the one sample problem. This paper investigates principal component analysis (PCA), Fisher linear discriminant analysis (LDA), and locality preserving projections (LPP) and shows why they cannot perform well on the one sample problem. It then identifies four reasons that make the one sample problem intrinsically difficult: the small sample size problem; the lack of representative samples; the underestimated intra-class variation; and the overestimated inter-class variation. Based on this analysis, the paper proposes to enlarge the training set based on the inter-class relationship, and extends LDA and LPP to extract features from the enlarged training set. The experimental results show the effectiveness of the proposed method.
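A minimal sketch (not the authors' method; the jitter-based enlargement below is a crude stand-in for their inter-class-relationship-based scheme) of why LDA fails with one image per person and how an enlarged training set restores it:

```python
# With one sample per class the within-class scatter is zero and LDA cannot
# be fit; adding perturbed copies of each image makes it estimable again.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_people, n_pixels = 40, 100
gallery = rng.normal(size=(n_people, n_pixels))    # one image per person
labels = np.arange(n_people)

# Enlarge the training set: three jittered variants per person.
X = np.repeat(gallery, 3, axis=0) + 0.1 * rng.normal(size=(3 * n_people, n_pixels))
y = np.repeat(labels, 3)

# n_components caps the extracted feature dimension, as in feature extraction.
lda = LinearDiscriminantAnalysis(n_components=20).fit(X, y)
probe = gallery + 0.1 * rng.normal(size=gallery.shape)  # noisy test images
print("rank-1 accuracy:", (lda.predict(probe) == labels).mean())
```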
doi:10.1371/journal.pone.0068539
PMCID: PMC3713003  PMID: 23874661
