Results 1-25 (67793)

1.  Content Analysis of Syndromic Twitter Data 
Objective
We present an annotation scheme developed to analyze syndromic Twitter data, and the results of its application to a set of respiratory syndrome-related tweets [1]. The scheme was designed to differentiate true positive tweets (where an individual is experiencing respiratory symptoms) from false positive tweets (where an individual is not experiencing respiratory symptoms), and to quantify more fine-grained information within the data.
Introduction
The popularity of Twitter, a social-networking service, creates the opportunity for researchers to collect large amounts of free, localizable data in real time. The data take the form of short, user-written messages and have been employed for general syndromic surveillance [2] and for surveillance of public attitudes toward the H1N1 flu outbreak [3]. The real-time accessibility of tweets makes them particularly appropriate for use in early warning systems. Data collected through keyword search contain a significant amount of noise; however, annotation can help boost the signal for true positive tweets.
Methods
The annotation scheme was developed based on information relevant for early warning systems (e.g. who is experiencing symptoms, and when) as well as other information present in the tweets (e.g. aspirations regarding symptoms, or abuse of substances such as cough syrup). Categories included Experiencer: Self/Other, Temporality: Current/Non-Current, Sentiment: Positive/Negative, Information: Providing/Seeking, Language: Non-English, Aspiration, Hyperbole, and Substance Abuse. All categories with the exception of Language and Substance Abuse were defined in reference to diseases or symptoms. The scheme was applied to 1,100 respiratory syndrome-related tweets (544 false positive, 556 true positive) from a previously collected corpus of syndromic twitter data [2]. Inter-annotator agreement was calculated for 9% of the data (100 tweets).
Results
Inter-annotator agreement was generally good, although certain categories had lower scores. The Experiencer, Temporality, Sentiment: Negative, Information: Providing, and Language categories all had Kappa values above .9; Sentiment: Positive, Aspiration, and Substance Abuse had Kappa values above .7; and Information: Seeking and Hyperbole had Kappas above .6. There was good separation between true positive tweets and false positive tweets, especially for the Experiencer: Self, Temporality: Current, Sentiment: Negative, Aspiration, Hyperbole, and Substance Abuse categories (see Table). True positive tweets were more likely to belong to every category except Information: Providing and Substance Abuse, for which false positive tweets had a greater likelihood of category inclusion. Within the true positive data, we found that users were more likely to reference symptoms that they themselves were currently experiencing than to reference another person's symptoms or non-current symptoms. Sentiment was largely negative, and there was substantial use of aspiration and hyperbole.
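As an aside for readers unfamiliar with the agreement statistic reported above, here is a minimal Python sketch of Cohen's kappa for two annotators on one binary category; the labels are invented, not the study's data.

```python
# Minimal sketch: Cohen's kappa for two annotators on one binary category.
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Observed vs. chance-expected agreement for two parallel label lists."""
    assert len(ann1) == len(ann2)
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    expected = sum(c1[k] * c2[k] for k in set(ann1) | set(ann2)) / (n * n)
    return (observed - expected) / (1 - expected)

# e.g. hypothetical agreement on the Experiencer: Self category for 10 tweets
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
print(cohens_kappa(a, b))
```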
Conclusions
Future work will apply the scheme to other syndromes, including constitutional, gastrointestinal, neurological, rash, and hemorrhagic.
Percentages of tweets included in each category.
PMCID: PMC3692812
social media; surveillance; respiratory syndrome
2.  When the truth isn’t too hard to handle: An event-related potential study on the pragmatics of negation 
Psychological science  2008;19(12):1213-1218.
Our brains rapidly map incoming language onto what we hold to be true. Yet there are claims that such integration and verification processes are delayed in sentences containing negation words like ‘not’. However, research studies have often confounded whether a statement is true and whether it is a natural thing to say during normal communication. In an event-related potential (ERP) experiment, we aimed to disentangle effects of truth-value and pragmatic licensing on the comprehension of affirmative and negated real-world statements. As in affirmative sentences, false words elicited a larger N400 ERP than true words in pragmatically licensed negated sentences (e.g., “In moderation, drinking red wine isn’t bad/good…”), whereas true and false words elicited similar responses in unlicensed negated sentences (e.g., “A baby bunny’s fur isn’t very hard/soft…”). These results suggest that negation poses no principled obstacle for readers to immediately relate incoming words to what they hold to be true.
doi:10.1111/j.1467-9280.2008.02226.x
PMCID: PMC3225068  PMID: 19121125
3.  Tetrahydrobiopterin responsiveness in phenylketonuria: prediction with the 48-hour loading test and genotype 
Background
How to efficiently diagnose tetrahydrobiopterin (BH4) responsiveness in patients with phenylketonuria remains unclear. This study investigated the positive predictive value (PPV) of the 48-hour BH4 loading test and the additional value of genotype.
Methods
Data from the 48-hour BH4 loading test (20 mg BH4/kg/day) were collected at six Dutch university hospitals. Patients with ≥30% phenylalanine reduction at ≥1 time point during the 48 hours (potential responders) were invited for the BH4 extension phase, designed to establish true-positive BH4 responsiveness. This is defined as a long-term ≥30% reduction in mean phenylalanine concentration and/or an increase in natural protein intake of ≥4 g/day and/or ≥50%. Genotype was collected if available.
Results
177/183 patients successfully completed the 48-hour BH4 loading test. 80/177 were potential responders and 67/80 completed the BH4 extension phase. In 58/67 true-positive BH4 responsiveness was confirmed (PPV 87%). The genotype was available for 120/177 patients. 41/44 patients with ≥1 mutation associated with long-term BH4 responsiveness showed potential BH4 responsiveness in the 48-hour test and 34/41 completed the BH4 extension phase. In 33/34 true-positive BH4 responsiveness was confirmed. 4/40 patients with two known putative null mutations were potential responders; 2/4 performed the BH4 extension phase but showed no true-positive BH4 responsiveness.
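For concreteness, the positive predictive value arithmetic behind the 87% figure above, as a minimal Python sketch:

```python
# PPV of the 48-hour loading test: confirmed true-positive responders divided
# by all potential responders who completed the extension phase (58/67 above).
def positive_predictive_value(true_positives, test_positives):
    return true_positives / test_positives

print(f"PPV = {positive_predictive_value(58, 67):.0%}")
```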
Conclusions
The 48-hour BH4 loading test in combination with a classified genotype is a good predictor of true-positive BH4 responsiveness. We propose assessing genotype first, particularly in the neonatal period. Patients with two known putative null mutations can be excluded from BH4 testing.
doi:10.1186/1750-1172-8-103
PMCID: PMC3711849  PMID: 23842451
Phenylketonuria; PKU; Tetrahydrobiopterin; Sapropterin dihydrochloride; Pharmacological chaperone; Genotype; Loading test
4.  True Progression versus Pseudoprogression in the Treatment of Glioblastomas: A Comparison Study of Normalized Cerebral Blood Volume and Apparent Diffusion Coefficient by Histogram Analysis 
Korean Journal of Radiology  2013;14(4):662-672.
Objective
The purpose of this study was to differentiate true progression from pseudoprogression of glioblastomas treated with concurrent chemoradiotherapy (CCRT) with temozolomide (TMZ) by using histogram analysis of apparent diffusion coefficient (ADC) and normalized cerebral blood volume (nCBV) maps.
Materials and Methods
Twenty patients with histopathologically proven glioblastoma who had received CCRT with TMZ underwent perfusion-weighted imaging and diffusion-weighted imaging (b = 0, 1000 sec/mm2). The corresponding nCBV and ADC maps for the newly visible, entirely enhancing lesions were calculated after the completion of CCRT with TMZ. Two observers independently measured the histogram parameters of the nCBV and ADC maps. The histogram parameters between the true progression group (n = 10) and the pseudoprogression group (n = 10) were compared by use of an unpaired Student's t test and subsequent multivariable stepwise logistic regression analysis to determine the best predictors for the differential diagnosis between the two groups. Receiver operating characteristic analysis was employed to determine the best cutoff values for the histogram parameters that proved to be significant predictors for differentiating true progression from pseudoprogression. Intraclass correlation coefficient was used to determine the level of inter-observer reliability for the histogram parameters.
Results
The 5th percentile value (C5) of the cumulative ADC histograms was a significant predictor for the differential diagnosis between true progression and pseudoprogression (p = 0.044 for observer 1; p = 0.011 for observer 2). Optimal cutoff values of 892 × 10-6 mm2/sec for observer 1 and 907 × 10-6 mm2/sec for observer 2 could help differentiate between the two groups with a sensitivity of 90% and 80%, respectively, a specificity of 90% and 80%, respectively, and an area under the curve of 0.880 and 0.840, respectively. There was no other significant differentiating parameter on the nCBV histograms. Inter-observer reliability was excellent or good for all histogram parameters (intraclass correlation coefficient range: 0.70-0.99).
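As an illustrative aside (not the study's data), a minimal Python sketch of the C5 statistic described above, computed on a hypothetical lesion region of interest:

```python
# Minimal sketch: 5th-percentile value (C5) of a lesion's cumulative ADC histogram.
import numpy as np

def adc_c5(adc_voxels):
    """C5 = 5th percentile of the voxel-wise ADC distribution within the lesion ROI."""
    return np.percentile(adc_voxels, 5)

# hypothetical voxel-wise ADC values (x 1e-6 mm^2/s) for one newly enhancing lesion
rng = np.random.default_rng(0)
lesion = rng.normal(950, 80, size=2000)
c5 = adc_c5(lesion)
# would then be compared against a cutoff on the order of the reported
# values (~892-907 x 1e-6 mm^2/s)
print(f"C5 = {c5:.0f} x 1e-6 mm^2/s")
```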
Conclusion
The C5 of the cumulative ADC histogram can be a promising parameter for the differentiation of true progression from pseudoprogression of newly visible, entirely enhancing lesions after CCRT with TMZ for glioblastomas.
doi:10.3348/kjr.2013.14.4.662
PMCID: PMC3725362  PMID: 23901325
Apparent diffusion coefficient; Cerebral blood volume; Glioblastoma multiforme; Histogram analysis; Pseudoprogression
5.  XVIth QTLMAS: simulated dataset and comparative analysis of submitted results for QTL mapping and genomic evaluation 
BMC Proceedings  2014;8(Suppl 5):S1.
Background
A common dataset was simulated and made available to participants of the XVIth QTL-MAS workshop. Tasks for the participants were to detect QTLs affecting three traits, to assess their possible pleiotropic effects, and to evaluate the breeding values in a candidate population without phenotypes using genomic information.
Methods
Four generations consisting of 20 males and 1000 females were generated by mating each male with 50 females. The genome consisted of 5 chromosomes, each of 100 Mb size and carrying 2,000 equally distributed SNPs. Three traits were simulated in order to mimic milk yield, fat yield and fat content. Genetic (co)variances were generated from 50 QTLs with pleiotropic effects. Phenotypes for all traits were expressed only in females, and were provided for the first 3 generations. Fourteen methods for detecting single-trait QTL and 3 methods for investigating their pleiotropic nature were proposed. QTL mapping results were compared according to the following criteria: number of true QTL detected; number of false positives; and the proportion of the true genetic variance explained by submitted positions. Eleven methods for estimating direct genomic values of the candidate population were proposed. Accuracies and bias of predictions were assessed by comparing estimated direct genomic values with true breeding values.
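As a hedged illustration of the evaluation criteria for the genomic predictions (assuming the common conventions of accuracy as the Pearson correlation and bias as the regression slope of true breeding values on estimated values, which the abstract does not spell out), a minimal Python sketch:

```python
# Minimal sketch: accuracy and bias of direct genomic values vs. true breeding values.
import numpy as np

def accuracy_and_bias(tbv, dgv):
    tbv, dgv = np.asarray(tbv, float), np.asarray(dgv, float)
    accuracy = np.corrcoef(tbv, dgv)[0, 1]                    # correlation
    slope = np.cov(tbv, dgv)[0, 1] / np.var(dgv, ddof=1)      # ~1 means unbiased
    return accuracy, slope

rng = np.random.default_rng(1)
tbv = rng.normal(0, 1, 1000)                    # "true" breeding values (simulated)
dgv = 0.8 * tbv + rng.normal(0, 0.5, 1000)      # hypothetical genomic predictions
print(accuracy_and_bias(tbv, dgv))
```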
Results
The number of true detections ranged from 0 to 8 across methods and traits, false positives from 0 to 15, and the proportion of genetic variance captured from 0 to 0.82. The accuracy and bias of genomic predictions varied from 0.74 to 0.85 and from 0.86 to 1.34, respectively, across traits and methods.
Conclusions
The best results in terms of detection power were obtained by ridge regression that, however, led to a large number of false positives. Good results both in terms of true detections and false positives were obtained by the approaches that fit polygenic effects in the model. The investigation of the pleiotropic nature of the QTL permitted the identification of few additional markers compared to the single-trait analyses. Bayesian and grouped regularized regression methods performed similarly for genomic prediction while GBLUP produced the poorest results.
doi:10.1186/1753-6561-8-S5-S1
PMCID: PMC4195410  PMID: 25519515
6.  Anterior tibial artery aneurysm: Case report and literature review 
INTRODUCTION
We present a patient with a true anterior tibial artery aneurysm without any causative history.
PRESENTATION OF CASE
A 59-year-old male was referred with a swelling on his left lateral ankle, which he had noticed 2 months earlier, accompanied by soaring pain. Additional radiological investigation showed a true anterior tibial artery aneurysm. A true anterior tibial artery aneurysm is a rare condition. The aneurysm was repaired by resection and interposition of a venous bypass.
DISCUSSION
Patients may complain of symptoms such as calf pain, distal ischemia, paresthesias due to nerve compression, and the presence of a pulsating or enlarging mass. Symptomatic aneurysms require surgical intervention; bypass with a saphenous vein graft has shown good patency, and endovascular treatment has shown good short-term results. Asymptomatic and small aneurysms can be followed for several years with DUS.
CONCLUSION
Clinical features, radiographic findings, surgical management, and a review of the literature on true anterior tibial aneurysms are discussed.
doi:10.1016/j.ijscr.2012.09.015
PMCID: PMC3604659  PMID: 23333847
Arteria tibialis anterior; True aneurysm; Aneurysm
7.  Estimating Population Cause-Specific Mortality Fractions from in-Hospital Mortality: Validation of a New Method 
PLoS Medicine  2007;4(11):e326.
Background
Cause-of-death data for many developing countries are not available. Information on deaths in hospital by cause is available in many low- and middle-income countries but is not a representative sample of deaths in the population. We propose a method to estimate population cause-specific mortality fractions (CSMFs) using data already collected in many middle-income and some low-income developing nations, yet rarely used: in-hospital death records.
Methods and Findings
For a given cause of death, a community's hospital deaths are equal to total community deaths multiplied by the proportion of deaths occurring in hospital. If we can estimate the proportion dying in hospital, we can estimate the proportion dying in the population using deaths in hospital. We propose to estimate the proportion of deaths for an age, sex, and cause group that die in hospital from the subset of the population where vital registration systems function or from another population. We evaluated our method using nearly complete vital registration (VR) data from Mexico 1998–2005, which records whether a death occurred in a hospital. In this validation test, we used 45 disease categories. We validated our method in two ways: nationally and between communities. First, we investigated how the method's accuracy changes as we decrease the amount of Mexican VR used to estimate the proportion of each age, sex, and cause group dying in hospital. Decreasing VR data used for this first step from 100% to 9% produces only a 12% maximum relative error between estimated and true CSMFs. Even if Mexico collected full VR information only in its capital city with 9% of its population, our estimation method would produce an average relative error in CSMFs across the 45 causes of just over 10%. Second, we used VR data for the capital zone (Distrito Federal and Estado de Mexico) and estimated CSMFs for the three lowest-development states. Our estimation method gave an average relative error of 20%, 23%, and 31% for Guerrero, Chiapas, and Oaxaca, respectively.
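A minimal Python sketch of the core identity the method rests on, with invented numbers rather than the Mexican data: in-hospital deaths for each cause are divided by the estimated proportion of that cause's deaths occurring in hospital, then renormalized across causes.

```python
# Minimal sketch: population CSMFs from in-hospital death counts plus an
# estimate (e.g. from a VR subset) of the proportion of each cause's deaths
# that occur in hospital. All numbers are illustrative.
def estimate_csmf(hospital_deaths, p_die_in_hospital):
    """Both arguments are dicts keyed by cause (or by age-sex-cause group)."""
    total = {c: hospital_deaths[c] / p_die_in_hospital[c] for c in hospital_deaths}
    denom = sum(total.values())
    return {c: v / denom for c, v in total.items()}

hospital_deaths   = {"ischaemic heart disease": 400, "road injury": 90, "diarrhoea": 30}
p_die_in_hospital = {"ischaemic heart disease": 0.55, "road injury": 0.30, "diarrhoea": 0.10}
print(estimate_csmf(hospital_deaths, p_die_in_hospital))
```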
Conclusions
Where accurate International Classification of Diseases (ICD)-coded cause-of-death data are available for deaths in hospital and for VR covering a subset of the population, we demonstrated that population CSMFs can be estimated with low average error. In addition, we showed in the case of Mexico that this method can substantially reduce error from biased hospital data, even when applied to areas with widely different levels of development. For countries with ICD-coded deaths in hospital, this method potentially allows the use of existing data to inform health policy.
Working in Mexico and using vital registration data, Chris Murray and colleagues achieved encouraging results with a new method to estimate population cause-specific mortality fractions.
Editors' Summary
Background.
Governments and international health agencies need accurate information on the leading causes of death in different populations to help them develop and monitor effective health policies and programs. It is pointless investing money in screening programs for a type of cancer in a country where that cancer is very rare, for example, or setting up treatment centers for an infectious disease in a region where the disease no longer occurs. In developed countries, most deaths are recorded in vital registration (VR) systems. These databases record the specific cause of death, which is assigned by doctors using the International Classification of Diseases (ICD), an internationally agreed-upon list of codes for hundreds of diseases. Across the developing world, however, only one death in four is recorded by VR systems; in some very poor countries, only one death in 20 is recorded accurately. With this paucity of cause-of-death data, developing countries cannot make good decisions about how to spend their limited resources.
Why Was This Study Done?
The establishment of full VR systems in all developing countries will take time and may not always be possible, but many of these nations already collect ICD-coded data on in-hospital deaths. Unfortunately, this information does not accurately reflect the causes of death across whole populations. For example, the diseases that affect rich people differ from those that affect poor people, and rich people are more likely to die in hospital than poor people. Thus, although for each cause of death, the number of deaths in hospital equals the total number of deaths in the community multiplied by the proportion of deaths occurring in hospital, this proportion is different for each cause. If these proportions could be estimated, then in-hospital death records could be used to determine the fraction of the population that dies from each cause—the population's “cause-specific mortality fractions” (CSMFs). In this study, the researchers have devised a method that allows them to do this, and have used near-complete VR data collected between 1998 and 2005 in Mexico to test their method.
What Did the Researchers Do and Find?
The researchers developed a mathematical method that estimates the proportion of deaths occurring in hospitals for people grouped together by their age, sex, and cause of death (an “age–sex–cause group”) using VR data from a subset of the whole population. They tested their method for 45 nonoverlapping but all-encompassing diseases using the Mexican VR data (which records when a person has died in the hospital). They found that if they decreased the amount of VR data used to estimate the proportion of each age, sex, cause group dying in hospital from 100% to 9%, the maximum relative error between the true and estimated CSMFs was only 12%. When they just used the VR information from the capital city (9% of the population), the average relative error in CSMFs (a measure of how much the estimated and true CSMFs differ) across all 45 causes of death was only 10%. Finally, when they used VR data for the main urban area of Mexico (where access to hospitals is good) to estimate CSMFs for the three least developed states of Mexico, the average relative errors were 20%, 23%, and 31%.
What Do These Findings Mean?
These findings indicate that the researchers' method can provide accurate estimates of population CSMFs using ICD-coded cause-of-death data from deaths in hospital and VR data that cover part of the population. Even when the VR data from a developed area are used to calculate the CSMFs in a poorly developed area, the method produces a more accurate estimate than in-hospital death data used alone. Because the researchers have only tested their method for one country, additional “validation studies” need to be done using data from other countries with a good-quality VR system. If the method does work in these other settings, then existing data on in-hospital deaths could be used to determine the leading causes of death in countries with poor VR systems. Such information would be invaluable in establishing effective health policies.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040326.
• An accompanying paper by the same authors describes an alternative approach to collecting accurate cause-of-death data in developing countries
• The World Health Organization provides information on health statistics and health information systems, on the International Classification of Diseases, and on the Health Metrics Network, a global collaboration focused on improving sources of vital statistics and cause-of-death data
• Grand Challenges in Global Health provides information on research into better ways for developing countries to measure their health status
doi:10.1371/journal.pmed.0040326
PMCID: PMC2080647  PMID: 18031195
8.  Soft Docking and Multiple Receptor Conformations in Virtual Screening 
Journal of medicinal chemistry  2004;47(21):5076-5084.
Protein conformational change is an important consideration in ligand-docking screens, but it is difficult to predict. A simple way to account for protein flexibility is to soften the criterion for steric fit between ligand and receptor. A more comprehensive but more expensive method would be to sample multiple receptor conformations explicitly. Here, these two approaches are compared. A “soft” scoring function was created by attenuating the repulsive term in the Lennard-Jones potential, allowing for a closer approach between ligand and protein. The standard, “hard” Lennard-Jones potential was used for docking to multiple receptor conformations. The Available Chemicals Directory (ACD) was screened against two cavity sites in the T4 lysozyme. These sites undergo small but significant conformational changes on ligand binding, making them good systems for soft docking. The ACD was also screened against the drug target aldose reductase, which can undergo large conformational changes on ligand binding. We evaluated the ability of the scoring functions to identify known ligands from among the over 200 000 decoy molecules in the database. The soft potential was always better at identifying known ligands than the hard scoring function when only a single receptor conformation was used. Conversely, the soft function was worse at identifying known leads than the hard function when multiple receptor conformations were used. This was true even for the cavity sites and was especially true for aldose reductase. To test the multiple-conformation method predictively, we screened the ACD for molecules that preferentially docked to the expanded conformation of aldose reductase, known to bind larger ligands. Six novel molecules that ranked among the top 0.66% of hits from the multiple-conformation calculation, but ranked relatively poorly in the soft docking calculation, were tested experimentally for enzyme inhibition. Four of these six inhibited the enzyme, the best with an IC50 of 8 μM. Although ligands can get better scores in soft docking, the same is also true for decoys. The improved ranking of such decoys can come at the expense of true ligands.
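As an illustration only (the exponents and weights below are not the paper's actual scoring-function parameters), a minimal Python sketch of attenuating the repulsive Lennard-Jones term so that a closer ligand-protein approach is tolerated:

```python
# Minimal sketch: a "hard" 12-6 Lennard-Jones term vs. a softened variant in
# which the repulsive wall is attenuated. Parameter values are illustrative.
import numpy as np

def lj_hard(r, eps=0.2, sigma=3.5):
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6**2 - sr6)            # standard 12-6 potential

def lj_soft(r, eps=0.2, sigma=3.5, soften=0.5):
    sr6 = (sigma / r) ** 6
    return 4 * eps * (soften * sr6**2 - sr6)   # attenuated repulsive term

r = np.linspace(3.0, 6.0, 7)                   # interatomic distances (angstroms)
print(np.round(lj_hard(r), 3))
print(np.round(lj_soft(r), 3))                 # shallower wall at short range
```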
doi:10.1021/jm049756p
PMCID: PMC1413506  PMID: 15456251
9.  Changing from analog to digital images: Does it affect the accuracy of alignment measurements of the lower extremity? 
Acta Orthopaedica  2011;82(3):351-355.
Background and purpose
Medical imaging has changed from analog films to digital media. We examined and compared the accuracy of orthopedic measurements using different media.
Methods
Before knee arthroplasty, full-length standing radiographs of 52 legs were obtained. The mechanical axis (MA), tibio-femoral angle (TFA), and femur angle (FA) were measured and analyzed twice, by 2 radiologists, using (1) true-size films, (2) short films, (3) a digital high-resolution workstation, and (4) a web-based personal computer. The agreement between the 4 media was evaluated using the Bland-Altman method (limits of agreement) using the true-size films as a reference standard.
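A minimal Python sketch of the Bland-Altman limits of agreement used here, applied to invented angle measurements (degrees) with the true-size film as reference:

```python
# Minimal sketch: Bland-Altman mean difference and 95% limits of agreement.
import numpy as np

def bland_altman(reference, candidate):
    d = np.asarray(candidate, float) - np.asarray(reference, float)
    mean_diff = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)

ma_film    = [178.5, 181.0, 176.2, 179.8, 182.4]   # mechanical axis, true-size film
ma_digital = [178.7, 180.8, 176.0, 180.1, 182.3]   # same legs, digital workstation
print(bland_altman(ma_film, ma_digital))
```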
Results
The mean differences in measurements between the traditional true-size films and the 3 other methods were small: for MA –0.20 to 0.07 degrees, and for TFA –0.02 to 0.18 degrees. Also, the limits of agreement between the traditional true-size films and the three other methods were small.
Interpretation
The agreement of the alignment measurements across the 4 different media was good. Orthopedic angles can be measured as accurately from analog films as from digital screens, regardless of film or monitor size.
doi:10.3109/17453674.2011.570670
PMCID: PMC3235315  PMID: 21619504
10.  Joint Rare Variant Association Test of the Average and Individual Effects for Sequencing Studies 
PLoS ONE  2012;7(3):e32485.
For many complex traits, single nucleotide polymorphisms (SNPs) identified from genome-wide association studies (GWAS) only explain a small percentage of heritability. Next generation sequencing technology makes it possible to explore unexplained heritability by identifying rare variants (RVs). Existing tests designed for RVs look for optimal strategies to combine information across multiple variants. Many of the tests have good power when the true underlying associations are either in the same direction or in opposite directions. We propose three tests for examining the association between a phenotype and RVs, where two of them jointly consider the common association across RVs and the individual deviations from the common effect. On one hand, similar to some of the best existing methods, the individual deviations are modeled as random effects to borrow information across multiple RVs. On the other hand, unlike the existing methods which pool individual effects towards zero, we pool them towards a possibly non-zero common effect by adding a pooled variant into the model. The common effect and the individual effects are jointly tested. We show through extensive simulations that at least one of the three tests proposed here is the most powerful or very close to being the most powerful in various settings of true models. This is appealing in practice because the direction and size of the true effects of the associated RVs are unknown. Researchers can apply the developed tests to improve power under a wide range of true models.
doi:10.1371/journal.pone.0032485
PMCID: PMC3309869  PMID: 22468164
11.  ACCURACY OF COMPUTED TOMOGRAPHY IN DETERMINING LESION SIZE IN CANINE APPENDICULAR OSTEOSARCOMA 
Multidetector contrast enhanced computed tomography with acquisition of 0.625-mm thick transverse images was used to measure the extent of appendicular osteosarcoma in 10 dogs. The measured length of tumor based on CT was compared to the true length of tumor using histopathology. There was a statistically significant association with good correlation between the true length of osteosarcoma compared to the length of intramedullary/endosteal abnormalities on CT, with a mean overestimation of 1.8% (SD = 15%). There was not a statistically significant association between the true tumor length and the length of periosteal proliferation on CT, with a mean overestimation of 9.7% (SD = 30.3%). There was a statistically significant association, but with poor correlation, between the true tumor length compared to the length of abnormal contrast enhancement, with a mean overestimation of 9.6% (SD = 34.8%). The extent of intramedullary/endosteal CT abnormalities assessed from submillimeter transverse images may be of value in assessing patient candidacy and surgical margins for limb-sparing surgery.
doi:10.1111/j.1740-8261.2012.01930.x
PMCID: PMC3868340  PMID: 22413965
canine; computed tomography; CT; dog; OSA; osteosarcoma; multidetector
12.  Right and Righteous: Children’s Incipient Understanding and Evaluation of True and False Statements 
Two studies examined young children’s early understanding and evaluation of truth-telling and lying, and the role that factuality plays in their judgments. Study 1 (104 2- to 5-year-olds) found that even the youngest children reliably accepted true statements and rejected false statements, and that older children’s ability to label true and false statements as “truth” and “lie” emerged in tandem with their positive evaluation of true statements and “truth” and their negative evaluation of false statements and “lie.” The findings suggest that children’s early preference for factuality develops into a conception of “truth” and “lie” that is linked both to factuality and moral evaluation. Study 2 (128 3- to 5-year-olds) found that, whereas young children exhibited good understanding of the association of true and false statements with “truth,” “lie,” “mistake,” “right,” and “wrong,” they showed little awareness of assumptions about speaker knowledge underlying “lie” and “mistake.” The results further support the primacy of factuality in children’s early understanding and evaluation of truth and lies.
doi:10.1080/15248372.2012.673187
PMCID: PMC3891696  PMID: 24436637
13.  Initial experience of hypofractionated radiation retreatment with true beam and flattening filter free beam in selected case reports of recurrent nasopharyngeal carcinoma 
Aim
To show our preliminary experience in using TrueBeam with RapidArc technology and FFF beam for stereotactic re-irradiation of nasopharyngeal carcinoma.
Background
Thanks to new advanced techniques such as intensity modulated radiation therapy, it is possible to approach head and neck recurrences in selected patients. Volumetric Modulated Arc Therapy (VMAT), in its RapidArc® format, significantly reduces the time needed to deliver complex intensity modulated plans, allowing hypofractionated regimens to be delivered within a few minutes. With TrueBeam it is possible to deliver photon beams without the flattening filter. A reduction in out-of-field dose can be expected when flattening filter free (FFF) beams are used. While research on the physics of FFF beams is increasing, there are still very few clinical data on the use of FFF beams in clinical practice.
Materials and methods
We present here the cases of 4 patients with local or regional recurrence of nasopharyngeal carcinoma. All patients were treated using TrueBeam with RapidArc technology and FFF beam for stereotactic hypofractionated re-irradiation.
Results
All patients completed SBRT, which was well tolerated. During follow-up, a complete response on imaging evaluation (PET and/or MRI) was documented for all treated patients.
Conclusions
In our preliminary experience, the use of TrueBeam with RapidArc technology and an FFF beam for stereotactic hypofractionated re-irradiation of nasopharyngeal carcinoma was safe and effective in all 4 treated patients. Longer follow-up and a larger study population are needed to confirm these promising results.
doi:10.1016/j.rpor.2012.07.012
PMCID: PMC3885887  PMID: 24669306
Reirradiation; Hypofractionation; Nasopharyngeal carcinoma; Flattening filter free beam
14.  Validation of the Symptom Pattern Method for Analyzing Verbal Autopsy Data 
PLoS Medicine  2007;4(11):e327.
Background
Cause of death data are a critical input to formulating good public health policy. In the absence of reliable vital registration data, information collected after death from household members, called verbal autopsy (VA), is commonly used to study causes of death. VA data are usually analyzed by physician-coded verbal autopsy (PCVA). PCVA is expensive and its comparability across regions is questionable. Nearly all validation studies of PCVA have allowed physicians access to information collected from the household members' recall of medical records or contact with health services, thus exaggerating accuracy of PCVA in communities where few deaths had any interaction with the health system. In this study we develop and validate a statistical strategy for analyzing VA data that overcomes the limitations of PCVA.
Methods and Findings
We propose and validate a method that combines the advantages of methods proposed by King and Lu, and Byass, which we term the symptom pattern (SP) method. The SP method uses two sources of VA data. First, it requires a dataset for which we know the true cause of death, but which need not be representative of the population of interest; this dataset might come from deaths that occur in a hospital. The SP method can then be applied to a second VA sample that is representative of the population of interest. From the hospital data we compute the properties of each symptom; that is, the probability of responding yes to each symptom, given the true cause of death. These symptom properties allow us first to estimate the population-level cause-specific mortality fractions (CSMFs), and to then use the CSMFs as an input in assigning a cause of death to each individual VA response. Finally, we use our individual cause-of-death assignments to refine our population-level CSMF estimates. The results from applying our method to data collected in China are promising. At the population level, SP estimates the CSMFs with 16% average relative error and 0.7% average absolute error, while PCVA results in 27% average relative error and 1.1% average absolute error. At the individual level, SP assigns the correct cause of death in 83% of the cases, while PCVA does so for 69% of the cases. We also compare the results of SP and PCVA when both methods have restricted access to the information from the medical record recall section of the VA instrument. At the population level, without medical record recall, the SP method estimates the CSMFs with 14% average relative error and 0.6% average absolute error, while PCVA results in 70% average relative error and 3.2% average absolute error. For individual estimates without medical record recall, SP assigns the correct cause of death in 78% of cases, while PCVA does so for 38% of cases.
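A heavily simplified Python sketch of the individual-assignment step of the SP method, assuming symptoms are conditionally independent given cause and that an initial CSMF vector has already been estimated; the paper's population-level estimation and refinement steps are not reproduced, and all numbers are invented.

```python
# Simplified sketch: assign a cause to each VA response from symptom properties
# (P(symptom = yes | cause), estimated from hospital deaths) and a CSMF prior.
import numpy as np

def assign_causes(va_responses, symptom_props, csmf):
    """va_responses: (n_deaths, n_symptoms) 0/1 array.
    symptom_props: (n_causes, n_symptoms) probabilities of a 'yes' given cause.
    csmf: (n_causes,) prior cause-specific mortality fractions."""
    log_post = np.log(csmf) + (
        va_responses @ np.log(symptom_props).T
        + (1 - va_responses) @ np.log(1 - symptom_props).T
    )
    return log_post.argmax(axis=1)            # most probable cause per death

props = np.array([[0.9, 0.2, 0.1],            # e.g. cause A: symptom 1 very likely
                  [0.2, 0.8, 0.3],
                  [0.1, 0.3, 0.9]])
csmf = np.array([0.5, 0.3, 0.2])
va = np.random.default_rng(0).integers(0, 2, size=(5, 3))
print(assign_causes(va, props, csmf))
```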
Conclusions
Our results from the data collected in China suggest that the SP method outperforms PCVA, both at the population and especially at the individual level. Further study is needed on additional VA datasets in order to continue validation of the method, and to understand how the symptom properties vary as a function of culture, language, and other factors. Our results also suggest that PCVA relies heavily on household recall of medical records and related information, limiting its applicability in low-resource settings. SP does not require that additional information to adequately estimate causes of death.
Chris Murray and colleagues propose and, using data from China, validate a new strategy for analyzing verbal autopsy data that combines the advantages of previous methods.
Editors' Summary
Background.
All countries need to know the leading causes of death among their people. Only with accurate cause-of-death data can their public-health officials and medical professionals develop relevant health policies and programs and monitor how they affect the nation's health. In developed countries, vital registration systems record specific causes of death that have been certified by doctors for most deaths. But, in developing countries, vital registration systems are rarely anywhere near complete, a situation that is unlikely to change in the near future. An approach that is being used increasingly to get information on the patterns of death in poor countries is “verbal autopsy” (VA). Trained personnel interview household members about the symptoms the deceased had before his/her death, and the circumstances surrounding the death, using a standard form. These forms are then reviewed by a doctor, who assigns a cause of death from a list of codes called the International Classification of Diseases. This process is called physician-coded verbal autopsy (PCVA).
Why Was This Study Done?
PCVA is a costly, time-consuming way of analyzing VA data and may not be comparable across regions, because it relies on the views of local doctors about the likely causes of death. In addition, although several studies have suggested that PCVA is reasonably accurate, such studies have usually included information collected from household members about medical records or contacts with health services. In regions where there is little contact with health services, PCVA may be much more inaccurate. Ideally what is needed is a method for assigning causes of death from VA data that does not involve physician review. In this study, the researchers have developed a statistical method—the symptom pattern (SP) method—for analyzing VA data and asked whether it can overcome the limitations of PCVA.
What Did the Researchers Do and Find?
The SP method uses VA data collected about a group of patients for whom the true cause of death is known to calculate the probability for each cause of death that a household member will answer yes when asked about various symptoms. These so-called “symptom properties” can be used to calculate population cause-specific mortality fractions (CSMFs—the proportion of the population that dies from each disease) from VA data and, using a type of statistical analysis called Bayesian statistics, can be used to assign causes of death to individuals. When used with data from a VA study done in China, the SP method estimated population CSMFs with an average relative error of 16% (this measure indicates how much the estimated and true CSMFs deviate), whereas PCVA estimated them with an average relative error of 27%. At the individual level, the SP method assigned the correct cause of death in 83% of cases; PCVA was right only 69% of the time. Removing the medical record recall section of the VA data had little effect on the accuracy with which the two methods estimated population CSMFs. However, whereas the SP method still assigned the correct cause of death in 78% of individual cases, PCVA did so in only 38% of cases.
What Do These Findings Mean?
These findings suggest that the SP method for analyzing VA data can outperform PCVA at both the population and the individual level. In particular, the SP method may be much better than PCVA at assigning the cause of death for individuals who have had little contact with health services before dying, a common situation in the poorest regions of the world. The SP method needs to be validated using data from other parts of the world and also needs to be tested in multi-country validation studies to build up information about how culture and language affect the likelihood of specific symptoms being reported in VAs for each cause of death. Provided the SP method works as well in other countries as it apparently does in China, its adoption, together with improvements in how VA data are collected, has the potential to improve the accuracy of cause-of-death data in developing countries.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040327.
• An accompanying paper by Murray and colleagues describes an alternative approach to collecting accurate cause-of-death data in developing countries
• World Health Organization provides information on health statistics and health information systems, on the International Classification of Diseases, on the Health Metrics Network, a global collaboration focused on improving sources of vital statistics and cause-of-death data, and on verbal autopsy standards
• Grand Challenges in Global Health provides information on research into better ways for developing countries to measure their health status
doi:10.1371/journal.pmed.0040327
PMCID: PMC2080648  PMID: 18031196
15.  Comparison of Amplicor, in-house PCR, and conventional culture for detection of Mycobacterium tuberculosis in clinical samples. 
Journal of Clinical Microbiology  1995;33(12):3221-3224.
Five hundred four clinical specimens (337 sputum and 167 bronchial samples) from 340 patients were tested for the presence of M. tuberculosis complex by the Amplicor M. tuberculosis test and by an in-house PCR. The results were compared with those obtained by conventional culture and by direct microscopy. Thirty specimens (from 14 patients) were positive by in-house PCR, 25 (from 13 patients) were positive by the Amplicor M. tuberculosis test, and 24 (from 10 patients) were positive by culture. Cultures from 16 specimens were contaminated with other bacteria. Strong inhibition of in-house PCR was found with three samples. After discordancy analyses, with clinical data as supportive evidence for tuberculosis, 27 true-positive and 458 true-negative samples were defined. On the basis of these figures, the sensitivities of the Amplicor M. tuberculosis test, in-house PCR, culture, and microscopy were 70.4, 92.6, 88.9, and 52.4%, respectively. The specificities of all four tests were higher than 98%. The good performance of the in-house PCR for detection of M. tuberculosis makes it a very useful additional tool in M. tuberculosis diagnostics. In contrast, the Amplicor test needs to be improved. Twenty-three of the Amplicor-negative samples were further tested for inhibition of the Amplicor system by retesting the DNA extracts after the addition of M. tuberculosis DNA. In 15 of these samples, 5 true positives and 10 true negatives, inhibition of the Amplicor test was demonstrated. This might explain the lack of sensitivity of the Amplicor test. If the inhibition problem can be solved, the Amplicor M. tuberculosis test, which is already rapid, very user-friendly, and reasonably priced, may certainly become very useful in microbiological laboratories.
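A minimal Python sketch of the sensitivity/specificity arithmetic used above; 19/27 reproduces the reported Amplicor sensitivity of 70.4%, while the specificity counts are invented for illustration.

```python
# Minimal sketch: sensitivity and specificity from discordancy-resolved
# true-positive and true-negative sample counts.
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# e.g. a test detecting 19 of the 27 true-positive samples and wrongly
# flagging 3 of the 458 true-negative samples
print(sens_spec(tp=19, fn=8, tn=455, fp=3))
```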
PMCID: PMC228677  PMID: 8586706
16.  Duloxetine use in chronic painful conditions – individual patient data responder analysis 
Background
Duloxetine has been studied in four distinct chronic pain conditions – osteoarthritis (OA), fibromyalgia, chronic low back pain (CLBP) and diabetic peripheral neuropathic pain (DPNP). These trials have involved large numbers of patients with at least moderate pain, and have used similar methods for recording pain intensity, over about 12 weeks.
Methods
Data from the trials were pooled according to painful condition, and reanalysed at the level of the individual patient and using increasing levels of pain intensity reduction (<15%, 15–29%, 30–49%, ≥50%), with different imputation methods on withdrawal.
Results
The proportion of patients recording at least 50% pain intensity reduction plateaued after 2–6 weeks in fibromyalgia, and 8–12 weeks in other conditions. The duloxetine-specific benefit [number needed to treat (NNT) for at least 50% pain intensity reduction] was fairly constant after about 2 weeks for DPNP and fibromyalgia and after about 4 or 5 weeks for OA and CLBP. In all conditions, responses were bimodal, with patients generally experiencing either very good or very poor pain relief. Last-observation-carried-forward imputation produced numerically and occasionally statistically better (lower) NNTs than use of baseline-observation-carried-forward (true response).
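A minimal Python sketch, with invented counts, of why the imputation choice matters for a responder analysis: under baseline-observation-carried-forward ("true response") withdrawals count as non-responders, whereas last-observation-carried-forward keeps their last observed status, yielding a lower (better-looking) NNT.

```python
# Minimal sketch: responder rates under BOCF vs. LOCF and the resulting NNTs.
def nnt(rate_active, rate_placebo):
    return 1.0 / (rate_active - rate_placebo)

def responder_rate(completer_responders, withdrawals_responding_at_dropout, n, bocf=True):
    """BOCF ('true response'): withdrawals count as non-responders.
    LOCF: withdrawals keep their last observed (responding) status."""
    responders = completer_responders + (0 if bocf else withdrawals_responding_at_dropout)
    return responders / n

# hypothetical active arm of 100 patients: 40 completers respond, 10 withdrawals
# were responding at dropout; the placebo responder rate is fixed at 0.25
locf_active = responder_rate(40, 10, 100, bocf=False)   # 0.50
bocf_active = responder_rate(40, 10, 100, bocf=True)    # 0.40
print(nnt(locf_active, 0.25), nnt(bocf_active, 0.25))   # LOCF gives the lower NNT
```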
Conclusions
Baseline-observation-carried-forward (true response), which combines the success of high levels of pain relief with the failure to experience pain relief on withdrawal of the drug is conservative and probably reflective of clinical practice experience. The distribution of effect was not normal; few patients had the average response and averages are not an appropriate descriptor for these data.
What's already known about this topic?
Last-observation-carried-forward (LOCF) imputation overestimates efficacy when adverse event withdrawals are high.
Previous analyses of duloxetine in four chronic pain conditions reported mainly average pain changes using LOCF imputation.
What does this study add?
Responses were bimodal: patients generally experienced either very good or very poor pain relief. Last-observation-carried-forward results were numerically lower (better) than true-responder (baseline-observation-carried-forward) results.
Duloxetine was an effective analgesic in all four chronic pain conditions using the highest level of evidence.
doi:10.1002/j.1532-2149.2013.00341.x
PMCID: PMC4302330  PMID: 23733529
17.  Filtering high-throughput protein-protein interaction data using a combination of genomic features 
BMC Bioinformatics  2005;6:100.
Background
Protein-protein interaction data used in the creation or prediction of molecular networks is usually obtained from large scale or high-throughput experiments. This experimental data is liable to contain a large number of spurious interactions. Hence, there is a need to validate the interactions and filter out the incorrect data before using them in prediction studies.
Results
In this study, we use a combination of 3 genomic features – structurally known interacting Pfam domains, Gene Ontology annotations and sequence homology – as a means to assign reliability to the protein-protein interactions in Saccharomyces cerevisiae determined by high-throughput experiments. Using Bayesian network approaches, we show that protein-protein interactions from high-throughput data supported by one or more genomic features have a higher likelihood ratio and hence are more likely to be real interactions. Our method has a high sensitivity (90%) and good specificity (63%). We show that 56% of the interactions from high-throughput experiments in Saccharomyces cerevisiae have high reliability. We use the method to estimate the number of true interactions in the high-throughput protein-protein interaction data sets in Caenorhabditis elegans, Drosophila melanogaster and Homo sapiens to be 27%, 18% and 68% respectively. Our results are available for searching and downloading at .
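The reliability assignment rests on likelihood ratios; as a hedged illustration only (the paper uses Bayesian network approaches, and the numbers below are invented), independent per-feature likelihood ratios would combine multiplicatively:

```python
# Naive-Bayes style combination of per-feature likelihood ratios into posterior
# odds that an observed interaction is real (illustrative values only).
def combined_odds(feature_lrs, prior_odds=1.0):
    odds = prior_odds
    for lr in feature_lrs:
        odds *= lr
    return odds

# e.g. support from a structurally known interacting Pfam domain pair and a
# shared Gene Ontology annotation
print(combined_odds([12.0, 4.5], prior_odds=0.3))
```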
Conclusion
A combination of genomic features that include sequence, structure and annotation information is a good predictor of true interactions in large and noisy high-throughput data sets. The method has a very high sensitivity and good specificity and can be used to assign a likelihood ratio, corresponding to the reliability, to each interaction.
doi:10.1186/1471-2105-6-100
PMCID: PMC1127019  PMID: 15833142
18.  Dose verification for respiratory-gated volumetric modulated arc therapy (VMAT) 
Physics in Medicine and Biology  2011;56(15):4827-4838.
A novel commercial medical linac system (TrueBeam™, Varian Medical Systems, Palo Alto, CA) allows respiratory-gated volumetric modulated arc therapy (VMAT), a new modality for treating moving tumors with high precision and improved accuracy by allowing for regular motion associated with a patient's breathing during VMAT delivery. The purpose of this work is to adapt a previously-developed dose reconstruction technique to evaluate the fidelity of VMAT treatment during gated delivery under clinic-relevant periodic motion related to patient breathing. A Varian TrueBeam system was used in this study. VMAT plans were created for three patients with lung or pancreas tumors. Conventional 6 MV and 15 MV beams with flattening filter and high dose-rate 10 MV beams with no flattening filter were used in these plans. Each patient plan was delivered to a phantom first without gating and then with gating for three simulated respiratory periods (3, 4.5 and 6 seconds). Using the adapted log file-based dose reconstruction procedure supplemented with ion chamber array (Seven29™, PTW, Freiburg, Germany) measurements, the delivered dose was used to evaluate the fidelity of gated VMAT delivery. Comparison of Seven29 measurements with and without gating showed good agreement with gamma-index passing rates above 99% for 1%/1mm dose accuracy/distance-to-agreement criteria. With original plans as reference, gamma-index passing rates were 100% for the reconstituted plans (1%/1 mm criteria) and 93.5–100% for gated Seven29 measurements (3%/3 mm criteria). In the presence of leaf error deliberately introduced into the gated delivery of a pancreas patient plan, both dose reconstruction and Seven29 measurement consistently indicated substantial dosimetric differences from the original plan. In summary, a dose reconstruction procedure was demonstrated for evaluating the accuracy of respiratory-gated VMAT delivery. This technique showed that under clinical operation, the TrueBeam system faithfully realized treatment plans with gated delivery. This methodology affords a useful tool for machine and patient-specific quality assurance of the newly available respiratory-gated VMAT.
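As a hedged illustration of the gamma-index comparison reported above, a minimal one-dimensional Python sketch (the clinical analysis operates on 2-D/3-D dose distributions measured by the Seven29 array):

```python
# Minimal 1-D sketch of the gamma index for criteria such as 3%/3 mm.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dta_mm=3.0):
    """Per reference point: minimum over evaluated points of the combined
    dose-difference / distance-to-agreement metric; a point passes if gamma <= 1."""
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        term = np.sqrt(((x_eval - xr) / dta_mm) ** 2 +
                       ((d_eval - dr) / (dose_tol * d_ref.max())) ** 2)
        gammas.append(term.min())
    return np.array(gammas)

x = np.arange(0, 50, 1.0)                       # positions in mm
ref = np.exp(-((x - 25) / 10) ** 2)             # reference dose profile
meas = np.exp(-((x - 25.5) / 10) ** 2) * 1.01   # slightly shifted/scaled measurement
g = gamma_1d(x, ref, x, meas)
print(f"passing rate: {100 * (g <= 1).mean():.1f}%")
```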
doi:10.1088/0031-9155/56/15/013
PMCID: PMC3360016  PMID: 21753232
Gated volumetric modulated arc therapy; QA; dose reconstruction
19.  Ultrasound Reference Chart Based on IVF Dates to Estimate Gestational Age at 6–9 weeks' Gestation 
ISRN Obstetrics and Gynecology  2012;2012:938583.
Accurate determination of gestational age underpins good obstetric care. We assessed the performance of six existing ultrasound reference charts to determine gestational age in 1268 singleton IVF pregnancies, where “true” gestational age could be precisely calculated from date of fertilisation. All charts generated dates significantly different to IVF dates (P < 0.0001 all comparisons). Thus we generated a new reference chart, The Monash Chart, based on a line of best fit describing crown-rump length across 6 + 1 to 9 + 0 weeks of gestation (true gestational age) in the IVF singleton cohort. The Monash Chart, but none of the existing charts, accurately determined gestational age among an independent IVF twin cohort (185 twin pairs). When applied to 3052 naturally-conceived singleton scans, The Monash Chart generated estimated due dates that were different to all existing charts (P ≤ 0.004 all comparisons). We conclude that commonly used ultrasound reference charts have inaccuracies. We have generated a CRL reference chart based on true gestational age in an IVF cohort that can accurately determine gestational age at 6–9 weeks of gestation.
doi:10.5402/2012/938583
PMCID: PMC3409520  PMID: 22888449
20.  Surgical Technique: Extraarticular Knee Resection with Prosthesis–Proximal Tibia-extensor Apparatus Allograft for Tumors Invading the Knee 
Background
Intraarticular extension of a tumor requires a conventional extraarticular resection with en bloc removal of the entire knee, including the extensor apparatus. Knee arthrodesis usually has been performed as the reconstruction. To avoid the functional loss derived from resection of the extensor apparatus, a modified technique preserving the continuity of the extensor apparatus has been proposed, but at the expense of achieving wide margins. In tumors involving the joint cavity, the entire joint complex, including the distal femur, proximal tibia, full extensor apparatus, and the whole unviolated joint capsule, must be excised. We propose a novel reconstructive technique to restore knee function after a true extraarticular resection.
Description of Technique
The approach involves a true en bloc extraarticular resection of the whole knee, including the entire extensor apparatus. We performed the reconstruction with a femoral megaprosthesis combined with a tibial allograft-prosthetic composite with its whole extensor apparatus (quadriceps tendon, patella, patellar tendon, and proximal tibia below the anterior tuberosity).
Patients and Methods
We retrospectively reviewed 14 patients (seven with bone and seven with soft tissue tumors) who underwent this procedure from 1996 to 2009. Clinical and radiographic evaluations were performed using the MSTS-ISOLS functional evaluation system. The minimum followup was 1 year (average, 4.5 years; range, 1–12 years).
Results
We achieved wide margins in 13 patients (two contaminated), and marginal in one. There were three local recurrences, all in the patients with marginal or contaminated resections. Active knee extension was obtained in all patients, with an extensor lag of 0° to 15° in primary procedures. MSTS-ISOLS scores ranged from 67% to 90%. No patients had neurovascular complications; two patients had deep infections.
Conclusions
Combining a true knee extraarticular resection with an allograft-prosthetic composite including the whole extensor apparatus generally allows wide resection margins while providing a mobile knee with good extension in patients traditionally needing a knee arthrodesis.
Level of Evidence
Level IV, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence.
doi:10.1007/s11999-011-1882-2
PMCID: PMC3171529  PMID: 21484473
21.  OTU Analysis Using Metagenomic Shotgun Sequencing Data 
PLoS ONE  2012;7(11):e49785.
Because of technological limitations, primer and amplification biases in targeted sequencing of 16S rRNA genes have veiled the true microbial diversity underlying environmental samples. However, metagenomic shotgun sequencing provides 16S rRNA gene fragment data that are naturally immune to the biases introduced during priming, and thus has the potential to uncover the true structure of the microbial community by giving more accurate predictions of operational taxonomic units (OTUs). Nonetheless, the lack of statistically rigorous comparison between 16S rRNA gene fragments and other data types makes it difficult to interpret previously reported results using 16S rRNA gene fragments. Therefore, in the present work, we established a standard analysis pipeline that would help confirm whether differences in the data are true or are merely due to potential technical bias. This pipeline was built by using simulated data to find optimal mapping and OTU prediction methods. Comparison of the simulated datasets revealed that a 16S rRNA gene fragment longer than 150 bp provides the same accuracy as a full-length 16S rRNA sequence under our proposed pipeline. This could serve as a good starting point for experimental design and makes comparisons between 16S rRNA gene fragment-based and targeted 16S rRNA sequencing-based surveys possible.
doi:10.1371/journal.pone.0049785
PMCID: PMC3506635  PMID: 23189163
22.  Speciality interests and career calling to medicine among first-year medical students 
The construct of calling has recently been applied to the vocation of medicine. We explored whether medical students endorse the presence of a calling or a search for a calling and how calling related to initial speciality interest. 574 first-year medical students (84% response rate) were administered the Brief Calling Survey and indicated their speciality interest. For presence of a calling, the median response was mostly true for: ‘I have a calling to a particular kind of work’ and moderately true for: ‘I have a good understanding of my calling as it applies to my career’. For search for a calling, the median response was mildly true for: ‘I am trying to figure out my calling in my career’ and ‘I am searching for my calling as it applies to my career’. Mann–Whitney U (p < 0.05) results indicate that students interested in primary care (n = 185) versus non-primary care (n = 389) are more likely to endorse the presence of a calling. Students were more likely to endorse the presence of a calling rather than a search for a calling, with those interested in primary care expressing a stronger presence of a calling to medicine.
doi:10.1007/s40037-012-0037-9
PMCID: PMC3576487  PMID: 23670652
Speciality; Calling; Medical students; Career
23.  Methods of utilizing baseline values for indirect response models 
This study derives and assesses modified equations for Indirect Response Models (IDR) for normalizing data for baseline values (R0) and evaluates different methods of utilizing baseline information. Pharmacodynamic response equations for the four basic IDR models were adjusted to reflect a ratio to, a change from (e.g., subtraction), or percent change relative to baseline. The original and modified IDR equations were fitted individually to simulated data sets and compared for recovery of true parameter values. Handling of baseline values was investigated using: estimation (E), fixing at the starting value (F1), and fixing at an average of starting and returning values of response profiles (F2). The performance of each method was evaluated using simulated data with variability under various scenarios of different doses, numbers of data points, type of IDR model, and degree of residual errors. The median error and inter-quartile range relative to true values were used as indicators of bias and precision for each method. Applying IDR models to normalized data required modifications in writing differential equations and initial conditions. Use of an observed/baseline ratio led to parameter estimates of kin = kout and inability to detect differences in kin values for groups with different R0, whereas the modified equations recovered the true values. An increase in variability increased the %Bias and %Imprecision for each R0 fitting method and was more pronounced for ‘F1’. The overall performance of ‘F2’ was as good as that of ‘E’ and better than ‘F1’. The %Bias in estimation of parameters SC50 (IC50) and kout followed the same trend, whereas use of ‘F1’ or ‘F2’ resulted in the least bias for Smax (Imax). The IDR equations need modifications to directly assess baseline-normalized data. In general, Method ‘E’ resulted in lesser bias and better precision compared to ‘F1’. With rich datasets including sufficient information on the return to baseline, Method ‘F2’ is reasonable. Method ‘E’ offers no significant advantage over ‘F1’ with datasets lacking information on the return to baseline phase. Handling baseline responses properly is an essential aspect of applying pharmacodynamic models.
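A minimal Python sketch (with illustrative parameter values, not the study's simulations) of basic IDR Model I and its ratio-to-baseline form; dividing the response by R0 = kin/kout removes kin from the equation, which is why fitting the unmodified model to ratio-normalized data forces kin = kout, as described above.

```python
# Minimal sketch: IDR Model I (inhibition of kin) in raw and baseline-ratio form.
import numpy as np
from scipy.integrate import solve_ivp

kin, kout, Imax, IC50 = 10.0, 0.5, 1.0, 2.0
conc = lambda t: 20.0 * np.exp(-0.3 * t)               # hypothetical mono-exponential PK
inh = lambda t: Imax * conc(t) / (IC50 + conc(t))      # inhibitory drug effect

def idr1(t, R):                                        # raw response, R(0) = R0 = kin/kout
    return [kin * (1 - inh(t)) - kout * R[0]]

def idr1_ratio(t, y):                                  # response / baseline, y(0) = 1
    return [kout * (1 - inh(t)) - kout * y[0]]

t = np.linspace(0, 48, 97)
R = solve_ivp(idr1, (0, 48), [kin / kout], t_eval=t, rtol=1e-8, atol=1e-8).y[0]
y = solve_ivp(idr1_ratio, (0, 48), [1.0], t_eval=t, rtol=1e-8, atol=1e-8).y[0]
print(np.allclose(R / (kin / kout), y))                # the two forms agree
```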
doi:10.1007/s10928-009-9128-6
PMCID: PMC3712653  PMID: 19697107
Indirect response models; Turnover models; Baseline responses; Pharmacodynamics; Modeling and simulation
24.  Likelihood Inference of Non-Constant Diversification Rates with Incomplete Taxon Sampling 
PLoS ONE  2014;9(1):e84184.
Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from those of alternative models (e.g. the birth-death model is recovered in preference to a pure-birth model if the extinction rate is large). Finally, I applied six different diversification rate models – ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models – to three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be violated.
doi:10.1371/journal.pone.0084184
PMCID: PMC3882215  PMID: 24400082
25.  Computed tomographic evaluation of the proximal femur: A predictive classification in displaced femoral neck fracture management 
Indian Journal of Orthopaedics  2014;48(5):476-483.
Background:
Femoral neck fracture is truly an enigma due to the high incidence of avascular necrosis and nonunion. Different methods have been described to determine the size of the femoral head fragment, as a small head has been said to be associated with poor outcome and nonunion due to inadequate implant purchase in the proximal fragment. These methods were two dimensional and were affected by radiography technique, and therefore did not determine true head size. Computed tomography (CT) is an important option for measuring true head size, as images can be obtained in three dimensions. Hence, we performed CT scans of the hip in patients with displaced fracture of the neck of the femur. The study aims to define the term “small head” or “inadequate size femoral head” objectively for its prognostic significance.
Materials and Methods:
70 cases of displaced femoral neck fractures underwent CT scan preoperatively for proximal femoral geometric measurements of both hips. Dual energy X-ray absorptiometry scan was done in all cases. Patients were treated with either intertrochanteric osteotomy or lag screw osteosynthesis based on the size of the head fragment on plain radiographs.
Results:
The average femoral head fragment volume was 57 cu cm (range 28.3-84.91 cu cm; standard deviation 14 cu cm). Proximal fragment volume of >43 cu cm was termed adequate size (type I) and of ≤43 cu cm as small femoral head (type II). Fractures which united (n = 54) had a relatively large average head size (59 cu cm) when compared to fractures that did not (n = 16), which had a small average head size (49 cu cm) and this difference was statistically significant. In type I fractures union rate was comparable in both osteotomy and lag screw groups (P > 0.05). Lag screw fixation failed invariably, while osteotomy showed good results in type II fractures (P < 0.05).
Conclusion:
Computed tomography scan of the proximal femur is advisable for measuring true size of head fragment. An objective classification based on the femoral head size (type I and type II) is proposed. Osteosynthesis should be the preferred method of treatment in type I and osteotomy or prosthetic replacement is the method of choice for type II femoral neck fractures.
doi:10.4103/0019-5413.139857
PMCID: PMC4175861  PMID: 25298554
Lag screw; neck of femur; valgus osteotomy; Osteotomy; femoral; neck fracture; computed tomographic; bone screws
