1.  A Risk Prediction Model for the Assessment and Triage of Women with Hypertensive Disorders of Pregnancy in Low-Resourced Settings: The miniPIERS (Pre-eclampsia Integrated Estimate of RiSk) Multi-country Prospective Cohort Study 
PLoS Medicine  2014;11(1):e1001589.
Beth Payne and colleagues use a risk prediction model, the Pre-eclampsia Integrated Estimate of RiSk (miniPIERS), to help inform the clinical assessment and triage of women with hypertensive disorders of pregnancy in low-resourced settings.
Background
Pre-eclampsia/eclampsia are leading causes of maternal mortality and morbidity, particularly in low- and middle-income countries (LMICs). We developed the miniPIERS risk prediction model to provide a simple, evidence-based tool to identify pregnant women in LMICs at increased risk of death or major hypertensive-related complications.
Methods and Findings
From 1 July 2008 to 31 March 2012, in five LMICs, data were collected prospectively on 2,081 women with any hypertensive disorder of pregnancy admitted to a participating centre. Candidate predictors collected within 24 hours of admission were entered into a step-wise backward elimination logistic regression model to predict a composite adverse maternal outcome within 48 hours of admission. Model internal validation was accomplished by bootstrapping, and external validation was completed using data from 1,300 women in the Pre-eclampsia Integrated Estimate of RiSk (fullPIERS) dataset. Predictive performance was assessed for calibration, discrimination, and stratification capacity. The final miniPIERS model included: parity (nulliparous versus multiparous); gestational age on admission; headache/visual disturbances; chest pain/dyspnoea; vaginal bleeding with abdominal pain; systolic blood pressure; and dipstick proteinuria. The miniPIERS model was well-calibrated and had an area under the receiver operating characteristic curve (AUC ROC) of 0.768 (95% CI 0.735–0.801) with an average optimism of 0.037. External validation AUC ROC was 0.713 (95% CI 0.658–0.768). Using a predicted probability ≥25% to define a positive test, the model classified women with 85.5% accuracy. Limitations of this study include the composite outcome and the broad inclusion criteria of any hypertensive disorder of pregnancy. This broad approach was used to optimize model generalizability.
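As a hedged illustration of the modelling approach this abstract describes (a logistic regression on simple clinical predictors, with a predicted probability ≥25% treated as a positive test), the sketch below shows how such a model might be fit and evaluated. The file, dataset, and column names are hypothetical placeholders, not the published miniPIERS data or coefficients.

```python
# Minimal sketch, assuming a hypothetical CSV with one row per woman.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

df = pd.read_csv("minipiers_like_data.csv")  # hypothetical file
predictors = ["nulliparous", "gestational_age", "headache_visual",
              "chest_pain_dyspnoea", "bleeding_with_pain",
              "systolic_bp", "dipstick_proteinuria"]
X, y = df[predictors], df["adverse_outcome_48h"]

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]

print("apparent AUC ROC:", roc_auc_score(y, prob))                  # discrimination
print("accuracy at >=25%:", accuracy_score(y, (prob >= 0.25).astype(int)))
```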
Conclusions
The miniPIERS model shows reasonable ability to identify women at increased risk of adverse maternal outcomes associated with the hypertensive disorders of pregnancy. It could be used in LMICs to identify women who would benefit most from interventions such as magnesium sulphate, antihypertensives, or transportation to a higher level of care.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Each year, ten million women develop pre-eclampsia or a related hypertensive (high blood pressure) disorder of pregnancy and 76,000 women die as a result. Globally, hypertensive disorders of pregnancy cause around 12% of maternal deaths—deaths of women during or shortly after pregnancy. The mildest of these disorders is gestational hypertension, high blood pressure that develops after 20 weeks of pregnancy. Gestational hypertension does not usually harm the mother or her unborn child and resolves after delivery but up to a quarter of women with this condition develop pre-eclampsia, a combination of hypertension and protein in the urine (proteinuria). Women with mild pre-eclampsia may not have any symptoms—the condition is detected during antenatal checks—but more severe pre-eclampsia can cause headaches, blurred vision, and other symptoms, and can lead to eclampsia (fits), multiple organ failure, and death of the mother and/or her baby. The only “cure” for pre-eclampsia is to deliver the baby as soon as possible but women are sometimes given antihypertensive drugs to lower their blood pressure or magnesium sulfate to prevent seizures.
Why Was This Study Done?
Women in low- and middle-income countries (LMICs) are more likely to develop complications of pre-eclampsia than women in high-income countries and most of the deaths associated with hypertensive disorders of pregnancy occur in LMICs. The high burden of illness and death in LMICs is thought to be primarily due to delays in triage (the identification of women who are or may become severely ill and who need specialist care) and delays in transporting these women to facilities where they can receive appropriate care. Because there is a shortage of health care workers who are adequately trained in the triage of suspected cases of hypertensive disorders of pregnancy in many LMICs, one way to improve the situation might be to design a simple tool to identify women at increased risk of complications or death from hypertensive disorders of pregnancy. Here, the researchers develop miniPIERS (Pre-eclampsia Integrated Estimate of RiSk), a clinical risk prediction model for adverse outcomes among women with hypertensive disorders of pregnancy suitable for use in community and primary health care facilities in LMICs.
What Did the Researchers Do and Find?
Using data from women admitted with any hypertensive disorder of pregnancy to participating centers in five LMICs, the researchers built a model to predict death or a serious complication such as organ damage within 48 hours of admission, based on candidate predictors that are easy to collect and/or measure in all health care settings and that are associated with pre-eclampsia. The miniPIERS model included parity (whether the woman had been pregnant before), gestational age (length of pregnancy), headache/visual disturbances, chest pain/shortness of breath, vaginal bleeding with abdominal pain, systolic blood pressure, and proteinuria detected using a dipstick. The model was well-calibrated (the predicted risk of adverse outcomes agreed with the observed risk of adverse outcomes among the study participants), it had a good discriminatory ability (it could separate women who had an adverse outcome from those who did not), and it designated women as being at high risk (25% or greater probability of an adverse outcome) with an accuracy of 85.5%. Importantly, external validation using data collected in fullPIERS, a study that developed a more complex clinical prediction model based on data from women attending tertiary hospitals in high-income countries, confirmed the predictive performance of miniPIERS.
What Do These Findings Mean?
These findings indicate that the miniPIERS model performs reasonably well as a tool to identify women at increased risk of adverse maternal outcomes associated with hypertensive disorders of pregnancy. Because miniPIERS only includes simple-to-measure personal characteristics, symptoms, and signs, it could potentially be used in resource-constrained settings to identify the women who would benefit most from interventions such as transportation to a higher level of care. However, further external validation of miniPIERS is needed using data collected from women living in LMICs before the model can be used during routine antenatal care. Moreover, the value of miniPIERS needs to be confirmed in implementation projects that examine whether its potential translates into clinical improvements. For now, though, the model could provide the basis for an education program to increase the knowledge of women, families, and community health care workers in LMICs about the signs and symptoms of hypertensive disorders of pregnancy.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001589.
The World Health Organization provides guidelines for the management of hypertensive disorders of pregnancy in low-resourced settings
The Maternal and Child Health Integrated Program provides information on pre-eclampsia and eclampsia targeted to low-resourced settings along with a tool-kit for LMIC providers
The US National Heart, Lung, and Blood Institute provides information about high blood pressure in pregnancy and a guide to lowering blood pressure in pregnancy
The UK National Health Service Choices website provides information about pre-eclampsia
The US not-for profit organization Preeclampsia Foundation provides information about all aspects of pre-eclampsia; its website includes some personal stories
The UK charity Healthtalkonline also provides personal stories about hypertensive disorders of pregnancy
MedlinePlus provides links to further information about high blood pressure and pregnancy (in English and Spanish); the MedlinePlus Encyclopedia has a video about pre-eclampsia (also in English and Spanish)
More information about miniPIERS and about fullPIERS is available
doi:10.1371/journal.pmed.1001589
PMCID: PMC3897359  PMID: 24465185
2.  Predictive Validity of the Braden Scale for Patients in Intensive Care Units 
Background
Patients in intensive care units are at higher risk for development of pressure ulcers than other patients. In order to prevent pressure ulcers from developing in intensive care patients, risk for development of pressure ulcers must be assessed accurately.
Objectives
To evaluate the predictive validity of the Braden scale for assessing risk for development of pressure ulcers in intensive care patients by using 4 years of data from electronic health records.
Methods
Data from the electronic health records of patients admitted to intensive care units between January 1, 2007, and December 31, 2010, were extracted from the data warehouse of an academic medical center. Predictive validity was measured by using sensitivity, specificity, positive predictive value, and negative predictive value. The receiver operating characteristic curve was generated, and the area under the curve was reported.
Results
A total of 7790 intensive care patients were included in the analysis. A cutoff score of 16 on the Braden scale had a sensitivity of 0.954, specificity of 0.207, positive predictive value of 0.114, and negative predictive value of 0.977. The area under the curve was 0.672 (95% CI, 0.663–0.683). The optimal cutoff for intensive care patients, determined from the receiver operating characteristic curve, was 13.
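For readers who want to reproduce this kind of cutoff analysis, here is a hedged sketch (with simulated, not study, data) of computing sensitivity, specificity, PPV, and NPV at a Braden cutoff of 16, and of picking an "optimal" cutoff from the ROC curve via the Youden index. Because lower Braden scores indicate higher risk, the scores are negated before the ROC computation.

```python
# Minimal sketch on simulated data; not the study dataset.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
ulcer = rng.integers(0, 2, 1000)            # 1 = pressure ulcer developed
scores = rng.integers(6, 24, 1000)          # Braden-like scores (6-23)

positive = scores <= 16                     # test positive at cutoff 16
tp = np.sum(positive & (ulcer == 1)); fp = np.sum(positive & (ulcer == 0))
fn = np.sum(~positive & (ulcer == 1)); tn = np.sum(~positive & (ulcer == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp), "NPV:", tn / (tn + fn))

# Negate scores so that higher values mean higher risk, then use Youden's J.
fpr, tpr, thresholds = roc_curve(ulcer, -scores)
print("AUC:", roc_auc_score(ulcer, -scores))
print("optimal cutoff:", -thresholds[np.argmax(tpr - fpr)])
```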
Conclusions
The Braden scale shows insufficient predictive validity and poor accuracy in discriminating intensive care patients at risk of developing pressure ulcers. The Braden scale may not sufficiently reflect the characteristics of intensive care patients. Further research is needed to determine which potentially predictive factors are specific to intensive care units, in order to increase the usefulness of the Braden scale for predicting pressure ulcers in intensive care patients.
doi:10.4037/ajcc2013991
PMCID: PMC4042540  PMID: 24186823
3.  Development of the interRAI Pressure Ulcer Risk Scale (PURS) for use in long-term care and home care settings 
BMC Geriatrics  2010;10:67.
Background
In long-term care (LTC) homes in the province of Ontario, implementation of the Minimum Data Set (MDS) assessment and the Braden Scale for predicting pressure ulcer risk were occurring simultaneously. The purpose of this study was to use available data sources to develop a bedside MDS-based scale that identifies individuals under care at various levels of risk for developing pressure ulcers, in order to facilitate targeting of risk factors for prevention.
Methods
Data for developing the interRAI Pressure Ulcer Risk Scale (interRAI PURS) were available from two Ontario sources: three LTC homes with 257 residents assessed during the same time frame with both the MDS and the Braden Scale for Predicting Pressure Sore Risk, and 89 Ontario LTC homes with 12,896 residents with baseline/reassessment MDS data (median interval 91 days) collected between 2005 and 2007. All assessments were done by trained clinical staff, and baseline assessments were restricted to those with no recorded pressure ulcer. MDS baseline/reassessment samples used in further testing included 13,062 patients of Ontario Complex Continuing Care Hospitals (CCC) and 73,183 Ontario long-stay home care (HC) clients.
Results
A data-informed Braden Scale cross-walk scale using MDS items was devised from the 3-facility dataset and tested in the larger longitudinal LTC homes data for its association with a future new pressure ulcer, giving a c-statistic of 0.676. Informed by this, the LTC homes data, along with evidence from the clinical literature, were used to create an alternate-form 7-item additive scale, the interRAI PURS, with good distributional characteristics and a c-statistic of 0.708. Testing of the scale in the CCC and HC longitudinal data showed a strong association with development of a new pressure ulcer.
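As a sketch of what an "alternate-form 7-item additive scale" means in practice, the toy example below sums seven binary items into a 0-7 score and reports its c-statistic against a simulated outcome; the items and weights are invented for illustration, not the interRAI PURS items.

```python
# Toy additive-scale sketch; items, weights, and outcome are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
items = rng.integers(0, 2, size=(1000, 7))          # seven binary risk items
score = items.sum(axis=1)                           # additive scale, 0-7
outcome = rng.binomial(1, 0.05 + 0.08 * score / 7)  # new pressure ulcer

print("c-statistic:", roc_auc_score(outcome, score))
```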
Conclusions
interRAI PURS differentiates risk of developing pressure ulcers among facility-based residents and home care recipients. As an output from an MDS assessment, it eliminates duplicated effort required for separate pressure ulcer risk scoring. Moreover, it can be done manually at the bedside during critical early days in an admission when the full MDS has yet to be completed. It can be calculated with established MDS instruments as well as with the newer interRAI suite instruments designed to follow persons across various care settings (interRAI Long-Term Care Facilities, interRAI Home Care, interRAI Palliative Care).
doi:10.1186/1471-2318-10-67
PMCID: PMC2955034  PMID: 20854670
4.  Prospective cohort study of routine use of risk assessment scales for prediction of pressure ulcers 
BMJ : British Medical Journal  2002;325(7368):797.
Objective
To evaluate whether risk assessment scales can be used to identify patients who are likely to get pressure ulcers.
Design
Prospective cohort study.
Setting
Two large hospitals in the Netherlands.
Participants
1229 patients admitted to the surgical, internal, neurological, or geriatric wards between January 1999 and June 2000.
Main outcome measure
Occurrence of a pressure ulcer of grade 2 or worse while in hospital.
Results
135 patients developed pressure ulcers during the four weeks after admission. The weekly incidence of patients with pressure ulcers was 6.2% (95% confidence interval 5.2% to 7.2%). The area under the receiver operating characteristic curve was 0.56 (0.51 to 0.61) for the Norton scale, 0.55 (0.49 to 0.60) for the Braden scale, and 0.61 (0.56 to 0.66) for the Waterlow scale; the areas for the subpopulation, excluding patients who received preventive measures without developing pressure ulcers and excluding surgical patients, were 0.71 (0.65 to 0.77), 0.71 (0.64 to 0.78), and 0.68 (0.61 to 0.74), respectively. In this subpopulation, using the recommended cut-off points, the positive predictive value was 7.0% for the Norton, 7.8% for the Braden, and 5.3% for the Waterlow scale.
Conclusion
Although risk assessment scales predict the occurrence of pressure ulcers to some extent, routine use of these scales leads to inefficient use of preventive measures. An accurate risk assessment scale based on prospectively gathered data should be developed.
What is already known on this topic
The incidence of pressure ulcers in hospitalised patients varies between 2.7% and 29.5%
Guidelines for prevention of pressure ulcers base the allocation of labour and resource intensive measures on the outcome of risk assessment scales
Most risk assessment scales are based on expert opinion or literature review and have not been evaluated
The sensitivity and specificity of risk assessment scales vary
What this study adds
The effectiveness of available risk assessment scales is limited
Use of the outcome of risk assessment scales leads to inefficient allocation of preventive measures
PMCID: PMC128943  PMID: 12376437
5.  Exploring Predictors of Complication in Older Surgical Patients: A Deficit Accumulation Index and the Braden Scale 
OBJECTIVES
To determine whether readily collected perioperative information might identify older surgical patients at higher risk for complication.
DESIGN
Retrospective cohort study
SETTING
Medical chart review at a single academic institution
PARTICIPANTS
102 patients aged 65 years and older who underwent abdominal surgery between January 2007 and December 2009.
MEASUREMENTS
Primary predictor variables were the first postoperative Braden Scale score (within 24 hours of surgery) and a Deficit Accumulation Index (DAI) constructed based on 39 available preoperative variables. The primary outcome was presence or absence of complication within 30 days of surgery date.
RESULTS
Of 102 patients, 64 experienced at least one complication, with wound infection being the most common. In models adjusted for age, race, sex, and open vs. laparoscopic surgery, lower Braden Scale scores were predictive of 30-day postoperative complication (OR 1.30 [95% CI, 1.06, 1.60]), longer length of stay (β = 1.44 (0.25) days; p ≤ 0.0001), and discharge to an institution rather than home (OR 1.23 [95% CI, 1.02, 1.48]). The cut-off value for the Braden Score with the highest predictive value for complication was ≤ 18 (OR 3.63 [95% CI, 1.43, 9.19]; c statistic of 0.744). The DAI and several traditional surgical risk factors were not significantly associated with 30-day postoperative complications in this cohort.
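A hedged sketch of how such adjusted odds ratios are typically obtained follows: a logistic regression of 30-day complication on the Braden score plus covariates, with coefficients and confidence limits exponentiated into ORs. The file and column names are hypothetical, and the published OR of 1.30 applies per point decrease in score, so a real analysis would code the score direction accordingly.

```python
# Minimal sketch, assuming a hypothetical cohort extract.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("geriatric_surgery.csv")        # hypothetical file
covariates = ["braden_score", "age", "female", "nonwhite", "open_surgery"]
X = sm.add_constant(df[covariates])
fit = sm.Logit(df["complication_30d"], X).fit(disp=0)

ci = fit.conf_int()                              # columns 0 and 1 = CI bounds
print(pd.DataFrame({"OR": np.exp(fit.params),
                    "95% CI low": np.exp(ci[0]),
                    "95% CI high": np.exp(ci[1])}))
```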
CONCLUSION
This is the first study to identify the perioperative score on the Braden Scale, a widely used risk-stratifier for pressure ulcers, as an independent predictor of other adverse outcomes in geriatric surgical patients. Further studies are needed to confirm this finding as well as investigate other utilizations for this tool, which correlates well to phenotypic models of frailty.
doi:10.1111/j.1532-5415.2012.04109.x
PMCID: PMC3445658  PMID: 22906222
Braden Scale; Deficit Accumulation Index; Postoperative Complication; Frailty; Multi-disciplinary
6.  Enhancement of Decision Rules to Increase Generalizability and Performance of the Rule-Based System Assessing Risk for Pressure Ulcer 
Applied Clinical Informatics  2013;4(2):251-266.
Background
A rule-based system, the Braden Scale based Automated Risk Assessment Tool (BART), was developed in a previous study to assess pressure ulcer risk. However, BART showed two major areas in need of improvement: 1) enhancement of its decision rules and 2) validation of its generalizability, both needed to increase its performance.
Objectives
To enhance decision rules and validate generalizability of the enhanced BART.
Method
Two layers of decision rule enhancement were performed: 1) finding additional data items with the experts and 2) validating logics of decision rules utilizing a guideline modeling language. To refine the decision rules of the BART further, a survey study was conducted to ascertain the operational level of patient status description of the Braden Scale. The enhanced BART (BART2) was designed to assess levels of pressure ulcer risk of patients (N = 99) whose data were collected by the nurses. The patients’ level of pressure ulcer risk was assessed by the nurses using a Braden Scale, by an expert using a Braden Scale, and by the automatic BART2 electronic risk assessment. SPSS statistical software version 20 (IBM, 2011) was used to test the agreement between the three different risk assessments performed on each patient.
Results
The level of agreement between the BART2 and the expert pressure ulcer assessments was “very good (0.83)”. The sensitivity and the specificity of the BART2 were 86.8% and 90.3% respectively.
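The "very good (0.83)" agreement reported here is a Cohen's kappa; a minimal sketch of computing it between two raters on the same patients (with invented ratings) is:

```python
# Toy example; the ratings below are invented, not study data.
from sklearn.metrics import cohen_kappa_score

bart2  = ["high", "low", "high", "moderate", "low", "high", "low", "low"]
expert = ["high", "low", "high", "moderate", "high", "high", "low", "low"]
print("kappa:", round(cohen_kappa_score(bart2, expert), 2))  # 1.0 = perfect agreement
```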
Conclusion
This study illustrated successful enhancement of decision rules and increased generalizability and performance of the BART2. Although the BART2 showed a “very good” level of agreement (kappa = 0.83) with an expert, the data reveal a need to improve the moisture parameter of the Braden Scale. Once the moisture parameter has been improved, BART2 will improve the quality of care, while accurately identifying the patients at risk for pressure ulcers.
doi:10.4338/ACI-2012-12-RA-0056
PMCID: PMC3716416  PMID: 23874362
Generalizability; decision support system; guideline interchange format; pressure ulcer risk
7.  Translation, adaptation, and validation of the Sunderland Scale and the Cubbin & Jackson Revised Scale in Portuguese 
Objective
To translate into Portuguese and evaluate the measuring properties of the Sunderland Scale and the Cubbin & Jackson Revised Scale, which are instruments for evaluating the risk of developing pressure ulcers during intensive care.
Methods
This study included the translation and adaptation of the scales to the Portuguese language, as well as the validation of these tools. To assess reliability, Cronbach alpha values of 0.702 and 0.708 were identified for the Sunderland Scale and the Cubbin & Jackson Revised Scale, respectively. Predictive validation was performed against the Braden Scale (gold standard), and the main measurements evaluated were sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve, which were calculated based on cutoff points.
Results
The Sunderland Scale exhibited 60% sensitivity, 86.7% specificity, 47.4% positive predictive value, 91.5% negative predictive value, and 0.86 for the area under the curve. The Cubbin & Jackson Revised Scale exhibited 73.3% sensitivity, 86.7% specificity, 52.4% positive predictive value, 94.2% negative predictive value, and 0.91 for the area under the curve. The Braden scale exhibited 100% sensitivity, 5.3% specificity, 17.4% positive predictive value, 100% negative predictive value, and 0.72 for the area under the curve.
Conclusions
Both tools demonstrated reliability and validity for this sample. The Cubbin & Jackson Revised Scale yielded better predictive values for the development of pressure ulcers during intensive care.
doi:10.5935/0103-507X.20130021
PMCID: PMC4031838  PMID: 23917975
Validation studies; Risk assessment; Pressure ulcer/prevention & control; Pressure ulcer/nursing; Intensive care
8.  Risk Models to Predict Chronic Kidney Disease and Its Progression: A Systematic Review 
PLoS Medicine  2012;9(11):e1001344.
A systematic review of risk prediction models conducted by Justin Echouffo-Tcheugui and Andre Kengne examines the evidence base for prediction of chronic kidney disease risk and its progression, and suitability of such models for clinical use.
Background
Chronic kidney disease (CKD) is common, and associated with increased risk of cardiovascular disease and end-stage renal disease, which are potentially preventable through early identification and treatment of individuals at risk. Although risk factors for occurrence and progression of CKD have been identified, their utility for CKD risk stratification through prediction models remains unclear. We critically assessed risk models to predict CKD and its progression, and evaluated their suitability for clinical use.
Methods and Findings
We systematically searched MEDLINE and Embase (1 January 1980 to 20 June 2012). Dual review was conducted to identify studies that reported on the development, validation, or impact assessment of a model constructed to predict the occurrence/presence of CKD or progression to advanced stages. Data were extracted on study characteristics, risk predictors, discrimination, calibration, and reclassification performance of models, as well as validation and impact analyses. We included 26 publications reporting on 30 CKD occurrence prediction risk scores and 17 CKD progression prediction risk scores. The vast majority of CKD risk models had acceptable-to-good discriminatory performance (area under the receiver operating characteristic curve >0.70) in the derivation sample. Calibration was less commonly assessed, but overall was found to be acceptable. Only eight CKD occurrence and five CKD progression risk models have been externally validated, displaying modest-to-acceptable discrimination. Whether novel biomarkers of CKD (circulatory or genetic) can improve prediction largely remains unclear, and impact studies of CKD prediction models have not yet been conducted. Limitations of risk models include the lack of ethnic diversity in derivation samples, and the scarcity of validation studies. The review is limited by the lack of an agreed-on system for rating prediction models, and the difficulty of assessing publication bias.
Conclusions
The development and clinical application of renal risk scores is in its infancy; however, the discriminatory performance of existing tools is acceptable. The effect of using these models in practice is still to be explored.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Chronic kidney disease (CKD)—the gradual loss of kidney function—is increasingly common worldwide. In the US, for example, about 26 million adults have CKD, and millions more are at risk of developing the condition. Throughout life, small structures called nephrons inside the kidneys filter waste products and excess water from the blood to make urine. If the nephrons stop working because of injury or disease, the rate of blood filtration decreases, and dangerous amounts of waste products such as creatinine build up in the blood. Symptoms of CKD, which rarely occur until the disease is very advanced, include tiredness, swollen feet and ankles, puffiness around the eyes, and frequent urination, especially at night. There is no cure for CKD, but progression of the disease can be slowed by controlling high blood pressure and diabetes, both of which cause CKD, and by adopting a healthy lifestyle. The same interventions also reduce the chances of CKD developing in the first place.
Why Was This Study Done?
CKD is associated with an increased risk of end-stage renal disease, which is treated with dialysis or by kidney transplantation (renal replacement therapies), and of cardiovascular disease. These life-threatening complications are potentially preventable through early identification and treatment of CKD, but most people present with advanced disease. Early identification would be particularly useful in developing countries, where renal replacement therapies are not readily available and resources for treating cardiovascular problems are limited. One way to identify people at risk of a disease is to use a “risk model.” Risk models are constructed by testing the ability of different combinations of risk factors that are associated with a specific disease to identify those individuals in a “derivation sample” who have the disease. The model is then validated on an independent group of people. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the researchers critically assess the ability of existing CKD risk models to predict the occurrence of CKD and its progression, and evaluate their suitability for clinical use.
What Did the Researchers Do and Find?
The researchers identified 26 publications reporting on 30 risk models for CKD occurrence and 17 risk models for CKD progression that met their predefined criteria. The risk factors most commonly included in these models were age, sex, body mass index, diabetes status, systolic blood pressure, serum creatinine, protein in the urine, and serum albumin or total protein. Nearly all the models had acceptable-to-good discriminatory performance (a measure of how well a model separates people who have a disease from people who do not have the disease) in the derivation sample. Not all the models had been calibrated (assessed for whether the average predicted risk within a group matched the proportion that actually developed the disease), but in those that had been assessed calibration was good. Only eight CKD occurrence and five CKD progression risk models had been externally validated; discrimination in the validation samples was modest-to-acceptable. Finally, very few studies had assessed whether adding extra variables to CKD risk models (for example, genetic markers) improved prediction, and none had assessed the impact of adopting CKD risk models on the clinical care and outcomes of patients.
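As a hedged illustration of the calibration assessment described (whether predicted risk matches observed risk within groups), the sketch below bins simulated predictions into deciles and compares mean predicted risk with the observed event rate in each bin; a real assessment would also report a calibration plot or statistic.

```python
# Minimal decile-calibration sketch on simulated inputs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
pred = rng.uniform(0, 0.6, 2000)        # model-predicted CKD risk
obs = rng.binomial(1, pred)             # simulated outcomes

decile = pd.qcut(pred, 10, labels=False)
calib = (pd.DataFrame({"pred": pred, "obs": obs, "decile": decile})
           .groupby("decile")[["pred", "obs"]].mean())
print(calib)   # well-calibrated: the two columns track each other
```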
What Do These Findings Mean?
These findings suggest that the development and clinical application of CKD risk models is still in its infancy. Specifically, these findings indicate that the existing models need to be better calibrated and need to be externally validated in different populations (most of the models were tested only in predominantly white populations) before they are incorporated into guidelines. The impact of their use on clinical outcomes also needs to be assessed before their widespread use is recommended. Such research is worthwhile, however, because of the potential public health and clinical applications of well-designed risk models for CKD. Such models could be used to identify segments of the population that would benefit most from screening for CKD, for example. Moreover, risk communication to patients could motivate them to adopt a healthy lifestyle and to adhere to prescribed medications, and the use of models for predicting CKD progression could help clinicians tailor disease-modifying therapies to individual patient needs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001344.
This study is further discussed in a PLOS Medicine Perspective by Maarten Taal
The US National Kidney and Urologic Diseases Information Clearinghouse provides information about all aspects of kidney disease; the US National Kidney Disease Education Program provides resources to help improve the understanding, detection, and management of kidney disease (in English and Spanish)
The UK National Health Service Choices website provides information for patients on chronic kidney disease, including some personal stories
The US National Kidney Foundation, a not-for-profit organization, provides information about chronic kidney disease (in English and Spanish)
The not-for-profit UK National Kidney Federation support and information for patients with kidney disease and for their carers, including a selection of patient experiences of kidney disease
World Kidney Day, a joint initiative between the International Society of Nephrology and the International Federation of Kidney Foundations, aims to raise awareness about kidneys and kidney disease
doi:10.1371/journal.pmed.1001344
PMCID: PMC3502517  PMID: 23185136
9.  Reusability of EMR Data for Applying Cubbin and Jackson Pressure Ulcer Risk Assessment Scale in Critical Care Patients 
Healthcare Informatics Research  2013;19(4):261-270.
Objectives
The purposes of this study were to examine the predictive validity of the Cubbin and Jackson pressure ulcer risk assessment scale for the development of pressure ulcers in intensive care unit (ICU) patients retrospectively and to evaluate the reusability of Electronic Medical Records (EMR) data.
Methods
A retrospective design was used to examine 829 cases admitted to four ICUs in a tertiary care hospital from May 2010 to April 2011. Patients who were without pressure ulcers at admission to ICU, 18 years or older, and had stayed in ICU for 24 hours or longer were included. Sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve (AUC) were calculated.
Results
The reported incidence rate of pressure ulcers among the study subjects was 14.2%. At the cut-off score of 24 on the Cubbin and Jackson scale, the sensitivity, specificity, positive predictive value, negative predictive value, and AUC were 72.0%, 68.8%, 27.7%, 93.7%, and 0.76, respectively. Eight of the 10 items of the Cubbin and Jackson scale were readily available in the EMR data.
Conclusions
The Cubbin and Jackson scale performed slightly better than the Braden scale in predicting pressure ulcer development. All items of the Cubbin and Jackson scale except mobility and hygiene (eight of the ten) can be extracted from the EMR, which initially demonstrates the reusability of EMR data for pressure ulcer risk assessment. If the Cubbin and Jackson scale were part of the EMR assessment form, it would help nurses effectively prevent pressure ulcers through an EMR alert for high-risk patients.
doi:10.4258/hir.2013.19.4.261
PMCID: PMC3920038  PMID: 24523990
Electronic Health Records; Pressure Ulcer; Risk Assessment; Nursing Assessment; Intensive Care Units
10.  Prognostic Accuracy of WHO Growth Standards to Predict Mortality in a Large-Scale Nutritional Program in Niger 
PLoS Medicine  2009;6(3):e1000039.
Background
Important differences exist in the diagnosis of malnutrition when comparing the 2006 World Health Organization (WHO) Child Growth Standards and the 1977 National Center for Health Statistics (NCHS) reference. However, their relationship with mortality has not been studied. Here, we assessed the accuracy of the WHO standards and the NCHS reference in predicting death in a population of malnourished children in a large nutritional program in Niger.
Methods and Findings
We analyzed data from 64,484 children aged 6–59 mo admitted with malnutrition (weight-for-height <80% of the NCHS median [WH%] and/or mid-upper arm circumference [MUAC] <110 mm and/or presence of edema) in 2006 into the Médecins Sans Frontières (MSF) nutritional program in Maradi, Niger. Sensitivity and specificity of weight-for-height in terms of Z score (WHZ) and WH% for both the WHO standards and the NCHS reference were calculated using mortality as the gold standard. Sensitivity and specificity of MUAC were also calculated. The receiver operating characteristic (ROC) curve was traced for these cutoffs and its area under curve (AUC) estimated. In predicting mortality, WHZ (NCHS) and WH% (NCHS) showed AUC values of 0.63 (95% confidence interval [CI] 0.60–0.66) and 0.71 (CI 0.68–0.74), respectively. WHZ (WHO) and WH% (WHO) appeared to provide higher accuracy, with AUC values of 0.76 (CI 0.75–0.80) and 0.77 (CI 0.75–0.80), respectively. The relationship between MUAC and mortality risk appeared to be relatively weak, with AUC = 0.63 (CI 0.60–0.67). Analyses stratified by sex and age yielded similar results.
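For concreteness, a minimal sketch of the two weight-for-height indicators compared here (WHZ and WH%) follows; the reference median and SD values are invented placeholders, whereas real analyses use the published WHO standards or NCHS reference tables.

```python
# Toy weight-for-height indicators; reference values are invented.
reference = {75.0: (9.5, 0.8), 80.0: (10.5, 0.9)}   # height_cm: (median_kg, sd_kg)

def whz(weight_kg: float, height_cm: float) -> float:
    """Weight-for-height Z score: SDs from the reference median for that height."""
    median, sd = reference[height_cm]
    return (weight_kg - median) / sd

def wh_percent(weight_kg: float, height_cm: float) -> float:
    """Weight-for-height as a percentage of the reference median."""
    median, _ = reference[height_cm]
    return 100.0 * weight_kg / median

print(whz(8.0, 75.0))         # about -1.9 SD below the reference median
print(wh_percent(8.0, 75.0))  # about 84% of the reference median
```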
Conclusions
These results suggest that in this population of children being treated for malnutrition, WH indicators calculated using WHO standards were more accurate for predicting mortality risk than those calculated using the NCHS reference. The findings are valid for a population of already malnourished children and are not necessarily generalizable to a population of children being screened for malnutrition. Future work is needed to assess which criteria are best for admission purposes to identify children most likely to benefit from therapeutic or supplementary feeding programs.
Rebecca Grais and colleagues assess the accuracy of WHO growth standards in predicting death among malnourished children admitted to a large nutritional program in Niger.
Editors' Summary
Background.
Malnutrition causes more than a third of child deaths worldwide. The World Health Organization (WHO) estimates there are 178 million malnourished children globally, all of whom are vulnerable to disease and 20 million of whom are at risk of death. Poverty, rising food prices, food scarcity, and natural disasters all contribute significantly to malnutrition, but children's lives can be saved if aid agencies are able to identify and treat acute malnutrition early. This can be done by comparing a child's body measurements to those of healthy children.
In 1977 the US National Center for Health Statistics (NCHS) introduced child growth reference charts describing how US children grow. The charts enable the height of a child of a given age to be compared with the set of “percentile curves,” which show, for example, whether the child is on the 90th or the 10th centile—that is, whether taller than 90% or 10% of their peers. These NCHS reference charts were subsequently adopted by the WHO for international use. In 2006, the WHO began to use new growth charts, based on children from a variety of countries raised in optimal environments for healthy growth. These provide a standard for how all children should grow, regardless of ethnic background or wealth.
Why Was This Study Done?
It is known that the WHO standards and the NCHS reference differ in how they identify malnutrition. Estimates of malnutrition are higher with the WHO standard than the NCHS reference. This affects the cost of international programs to treat malnutrition, as more children will be diagnosed and treated when the WHO standards are used. However, it is not known how the different growth measures differ in predicting which children's lives are at risk from malnutrition. The researchers saw that the data in their nutritional program could help provide this information.
What Did the Researchers Do and Find?
The researchers examined data on the body measurements of over 60,000 children aged between 6 mo and 5 y enrolled in a Médecins sans Frontières (MSF) nutritional programme in Maradi, Niger during 2006. Children were assessed as having acute malnutrition (wasting) and enrolled in the feeding program if their weight-for-height was less than 80% of the NCHS average, if their mid-upper arm circumference (MUAC) was under 110 mm (for children 65–110 cm), or they had swelling in both feet.
The authors evaluated three measures to see which was most accurate at predicting that children would die under treatment: low weight-for-height as measured against each of the WHO standard and NCHS reference, and low MUAC. For each measure, they compared the proportion of correct predictions of death (sensitivity) and the proportion of correct predictions of survival (specificity) for a range of possible cutoffs (or thresholds) for diagnosis.
They found that the WHO standard gave more accurate predictions than the NCHS reference or the MUAC of which children would die under treatment. The results were similar when the children were grouped by age or sex.
What Do these Findings Mean?
The results suggest that, at least in this population, the WHO standards are a more accurate predictor of death following malnutrition. This agrees with what might be expected, as the WHO standard is more up to date and aims to show how healthy children from a range of settings should grow.
Nevertheless, an important limitation is that the children in the study had already been diagnosed as malnourished and were receiving treatment. As a result, the authors cannot say definitively which measure is better at predicting what children in the general population are acutely malnourished and would benefit most from treatment.
It should also be noted that children were predominantly entered into the feeding program by the weight-for-height indicator rather than by the MUAC. This may be a reason why the MUAC appears worse at predicting death than weight-for-height. Missing and inaccurate data, for instance on the exact ages of some children, also limit the findings.
In addition, the findings do not provide guidance on the cutoffs that should be used in deciding whether to enter a child into a feeding program. Different cutoffs represent a trade-off between treating more children needlessly in order to catch all in need, and treating fewer children and missing some in need. The study also cannot be used to advise on whether weight-for-height or the MUAC is more appropriate in a given context. In certain crisis situations, for instance, some authorities suggest it may be more practical to use the MUAC, as it requires less equipment or training.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000039.
The UN Standing Committee on Nutrition homepage publishes international briefings on nutrition as a foundation for development
The US National Center for Health Statistics provides background information on its 1977 growth charts and how they were developed in the context of explaining how they differ from revised charts produced in 2000
The World Heath Organization publishes country profile information on its child growth standards and also on Niger
Médecins sans Frontières also provides information on its work in Niger
The EC-FAO Food Security Information for Action Programme is funded by the European Commission (EC) and implemented by the Food and Agriculture Organization of the United Nations (FAO). It aims to help nations formulate more effective anti-hunger policies and provides online materials, including a guide to nutritional status assessment and analysis, which includes information on the contexts in which different indicators are useful
doi:10.1371/journal.pmed.1000039
PMCID: PMC2650722  PMID: 19260760
11.  Development and Evaluation of a Simple and Effective Prediction Approach for Identifying Those at High Risk of Dyslipidemia in Rural Adult Residents 
PLoS ONE  2012;7(8):e43834.
Background
Dyslipidemia is an extremely prevalent but preventable risk factor for cardiovascular disease. However, many dyslipidemia patients remain undetected in resource-limited settings. This study was performed to develop and evaluate a simple and effective prediction approach, without biochemical parameters, to identify those at high risk of dyslipidemia in a rural adult population.
Methods
Demographic, dietary and lifestyle, and anthropometric data were collected by a cross-sectional survey from 8,914 participants aged 35–78 years living in rural areas. A total of 6,686 participants were randomly selected into a training group for constructing the artificial neural network (ANN) and logistic regression (LR) prediction models. The remaining 2,228 participants were assigned to a validation group for performance comparisons of the ANN and LR models. The predictors of dyslipidemia risk were identified from the training group using multivariate logistic regression analysis. Predictive performance was evaluated by the receiver operating characteristic (ROC) curve.
Results
Some risk factors were significantly associated with dyslipidemia, including age, gender, educational level, smoking, high-fat diet, vegetable and fruit intake, family history, physical activity, and central obesity. For the ANN model, the sensitivity, specificity, positive and negative likelihood ratios, and positive and negative predictive values were 90.41%, 76.66%, 3.87, 0.13, 76.33%, and 90.58%, respectively, while for the LR model they were only 57.37%, 70.91%, 1.97, 0.60, 62.09%, and 66.73%, respectively. The area under the ROC curve (AUC) value of the ANN model was 0.86±0.01, showing more accurate overall performance than the traditional LR model (AUC = 0.68±0.01, P<0.001).
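A hedged sketch of this kind of ANN-versus-LR comparison, using simulated data in place of the survey variables, might look like the following; the network architecture and train/validation split are illustrative choices, not the paper's exact configuration.

```python
# Minimal ANN-vs-LR comparison on simulated data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=8914, n_features=9, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

models = [("ANN", MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)),
          ("LR", LogisticRegression(max_iter=1000))]
for name, model in models:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(name, "validation AUC:", round(auc, 3))
```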
Conclusion
The ANN model is a simple and effective prediction approach for identifying those at high risk of dyslipidemia, and it can be used to screen for undiagnosed dyslipidemia in a rural adult population. Further work is planned to confirm these results by incorporating multi-center and longer follow-up data.
doi:10.1371/journal.pone.0043834
PMCID: PMC3429495  PMID: 22952780
12.  Detection of Lung Cancer Using Weighted Digital Analysis of Breath Biomarkers 
Background
A combination of biomarkers in a multivariate model may predict disease with greater accuracy than a single biomarker employed alone. We developed a non-linear method of multivariate analysis, weighted digital analysis (WDA), and evaluated its ability to predict lung cancer employing volatile biomarkers in the breath.
Methods
WDA generates a discriminant function to predict membership in disease vs no disease groups by determining weight, a cutoff value, and a sign for each predictor variable employed in the model. The weight of each predictor variable was the area under the curve (AUC) of the receiver operating characteristic (ROC) curve minus a fixed offset of 0.55, where the AUC was obtained by employing that predictor variable alone, as the sole marker of disease. The sign (±) was used to invert the predictor variable if a lower value indicated a higher probability of disease. When employed to predict the presence of a disease in a particular patient, the discriminant function was determined as the sum of the weights of all predictor variables that exceeded their cutoff values. The algorithm that generates the discriminant function is deterministic because parameters are calculated from each individual predictor variable without any optimization or adjustment. We employed WDA to re-evaluate data from a recent study of breath biomarkers of lung cancer, comprising the volatile organic compounds (VOCs) in the alveolar breath of 193 subjects with primary lung cancer and 211 controls with a negative chest CT.
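Following the description above, a minimal sketch of a WDA-style discriminant is given below. The weight = AUC − 0.55 rule, the sign inversion, and the sum-of-weights-over-cutoffs scoring follow the abstract, while the cutoff choice (Youden index) and the simulated data are assumptions for illustration only.

```python
# Minimal WDA-style sketch on simulated data; not the authors' code.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def wda_fit(X, y):
    """Per predictor: sign (invert if lower = disease), weight = AUC - 0.55, cutoff."""
    params = []
    for j in range(X.shape[1]):
        sign = 1.0 if roc_auc_score(y, X[:, j]) >= 0.5 else -1.0
        auc = roc_auc_score(y, sign * X[:, j])
        fpr, tpr, thr = roc_curve(y, sign * X[:, j])
        cutoff = thr[np.argmax(tpr - fpr)]        # simplified cutoff choice
        params.append((sign, auc - 0.55, cutoff))
    return params

def wda_score(X, params):
    """Sum the weights of all predictors that exceed their cutoffs."""
    scores = np.zeros(X.shape[0])
    for j, (sign, weight, cutoff) in enumerate(params):
        scores += weight * (sign * X[:, j] > cutoff)
    return scores

rng = np.random.default_rng(0)
X = rng.normal(size=(404, 30))                    # 30 VOC-like predictors
y = (X[:, :5].sum(axis=1) + rng.normal(size=404) > 0).astype(int)
print("AUC:", roc_auc_score(y, wda_score(X, wda_fit(X, y))))
```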
Results
The WDA discriminant function accurately identified patients with lung cancer in a model employing 30 breath VOCs (ROC curve AUC = 0.90; sensitivity = 84.5%, specificity = 81.0%). These results were superior to multi-linear regression analysis of the same data set (AUC = 0.74, sensitivity = 68.4%, specificity = 73.5%). WDA test accuracy did not vary appreciably with TNM (tumor, node, metastasis) stage of disease, and results were not affected by tobacco smoking (ROC curve AUC = 0.92 in current smokers, 0.90 in former smokers). WDA was a robust predictor of lung cancer: random removal of 1/3 of the VOCs did not reduce the AUC of the ROC curve by >10% (99.7% CI).
Conclusions
A test employing WDA of breath VOCs predicted lung cancer with accuracy similar to chest computed tomography. The algorithm identified dependencies that were not apparent with traditional linear methods. WDA appears to provide a useful new technique for non-linear multivariate analysis of data.
doi:10.1016/j.cca.2008.02.021
PMCID: PMC2497457  PMID: 18420034
13.  Determining relative importance of variables in developing and validating predictive models 
Background
Multiple regression models are used in a wide range of scientific disciplines, and automated model selection procedures are frequently used to identify independent predictors. However, determining the relative importance of potential predictors and validating the fitted models for stability, predictive accuracy, and generalizability are often overlooked or not done thoroughly.
Methods
Using a case study aimed at predicting children with acute lymphoblastic leukemia (ALL) who are at low risk of Tumor Lysis Syndrome (TLS), we propose and compare two strategies, bootstrapping and random split of data, for ordering potential predictors according to their relative importance with respect to model stability and generalizability. We also propose an approach based on relative increase in percentage of explained variation and area under the Receiver Operating Characteristic (ROC) curve for developing models where variables from our ordered list enter the model according to their importance. An additional data set aimed at identifying predictors of prostate cancer penetration is also used for illustrative purposes.
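A hedged sketch of the bootstrap-ordering idea follows: refit the model on many bootstrap resamples and rank candidate predictors by how often they come out significant. The simulated data, the variable names, and the simple p < 0.05 selection rule are simplifications for illustration, not the authors' full procedure.

```python
# Minimal bootstrap selection-frequency sketch; not the case-study dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({"age": rng.normal(8, 4, n),
                   "wbc": rng.normal(15, 10, n),
                   "spleen_cm": rng.exponential(1, n)})
logit = -3 + 0.25 * df["age"] + 0.05 * df["wbc"]    # spleen_cm has no true effect
df["tls"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

predictors = ["age", "wbc", "spleen_cm"]
counts = pd.Series(0, index=predictors)
for _ in range(200):
    boot = df.sample(n, replace=True)
    fit = sm.Logit(boot["tls"], sm.add_constant(boot[predictors])).fit(disp=0)
    counts += (fit.pvalues[predictors] < 0.05).astype(int)

print((counts / 200).sort_values(ascending=False))   # selection frequency
```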
Results
Age is chosen as the most important predictor of TLS. It is selected 100% of the time using the bootstrapping approach. Using the random split method, it is selected 99% of the time in the training data and is significant (at the 5% level) 98% of the time in the validation data set. This indicates that age is a stable predictor of TLS with good generalizability. The second most important variable is white blood cell count (WBC). Our methods also identified an important predictor of TLS that would otherwise have been omitted had we relied on any of the automated model selection procedures alone. A group at low risk of TLS consists of children younger than 10 years of age, without T-cell immunophenotype, whose baseline WBC is < 20 × 10⁹/L and palpable spleen is < 2 cm. For the prostate cancer data set, the Gleason score and digital rectal exam are identified to be the most important indicators of whether tumor has penetrated the prostate capsule.
Conclusion
Our model selection procedures based on bootstrap re-sampling and repeated random split techniques can be used to assess the strength of evidence that a variable is truly an independent and reproducible predictor. Our methods can therefore be used for developing stable and reproducible models with good performance. Moreover, they can serve as a good tool for validating a predictive model. Previous biological and clinical studies support the findings based on our selection and validation strategies. However, extensive simulations may be required to assess the performance of our methods under different scenarios, as well as to check their sensitivity to random fluctuation in the data.
doi:10.1186/1471-2288-9-64
PMCID: PMC2761416  PMID: 19751506
14.  Biomarker Profiling by Nuclear Magnetic Resonance Spectroscopy for the Prediction of All-Cause Mortality: An Observational Study of 17,345 Persons 
PLoS Medicine  2014;11(2):e1001606.
In this study, Würtz and colleagues conducted high-throughput profiling of blood specimens in two large population-based cohorts in order to identify biomarkers for all-cause mortality and enhance risk prediction. The authors found that biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. However, further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers to guide screening and prevention.
Background
Early identification of ambulatory persons at high short-term risk of death could benefit targeted prevention. To identify biomarkers for all-cause mortality and enhance risk prediction, we conducted high-throughput profiling of blood specimens in two large population-based cohorts.
Methods and Findings
106 candidate biomarkers were quantified by nuclear magnetic resonance spectroscopy of non-fasting plasma samples from a random subset of the Estonian Biobank (n = 9,842; age range 18–103 y; 508 deaths during a median of 5.4 y of follow-up). Biomarkers for all-cause mortality were examined using stepwise proportional hazards models. Significant biomarkers were validated and incremental predictive utility assessed in a population-based cohort from Finland (n = 7,503; 176 deaths during 5 y of follow-up). Four circulating biomarkers predicted the risk of all-cause mortality among participants from the Estonian Biobank after adjusting for conventional risk factors: alpha-1-acid glycoprotein (hazard ratio [HR] 1.67 per 1-standard-deviation increment, 95% CI 1.53–1.82, p = 5×10⁻³¹), albumin (HR 0.70, 95% CI 0.65–0.76, p = 2×10⁻¹⁸), very-low-density lipoprotein particle size (HR 0.69, 95% CI 0.62–0.77, p = 3×10⁻¹²), and citrate (HR 1.33, 95% CI 1.21–1.45, p = 5×10⁻¹⁰). All four biomarkers were predictive of cardiovascular mortality, as well as death from cancer and other nonvascular diseases. One in five participants in the Estonian Biobank cohort with a biomarker summary score within the highest percentile died during the first year of follow-up, indicating prominent systemic reflections of frailty. The biomarker associations all replicated in the Finnish validation cohort. Including the four biomarkers in a risk prediction score improved risk assessment for 5-y mortality (increase in C-statistic 0.031, p = 0.01; continuous reclassification improvement 26.3%, p = 0.001).
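As a hedged sketch of the survival modelling behind these hazard ratios, the snippet below standardizes each biomarker so that exp(coef) from a Cox proportional hazards fit reads as the HR per 1-SD increment; it uses the lifelines package, and the file and column names are hypothetical.

```python
# Minimal Cox PH sketch, assuming a hypothetical cohort extract.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("biobank_nmr.csv")            # hypothetical file
biomarkers = ["agp", "albumin", "vldl_size", "citrate"]
df[biomarkers] = (df[biomarkers] - df[biomarkers].mean()) / df[biomarkers].std()

cph = CoxPHFitter()
cph.fit(df[biomarkers + ["followup_years", "died"]],
        duration_col="followup_years", event_col="died")
cph.print_summary()   # exp(coef) column = HR per 1-SD increment
```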
Conclusions
Biomarker associations with cardiovascular, nonvascular, and cancer mortality suggest novel systemic connectivities across seemingly disparate morbidities. The biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. Further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers for guiding screening and prevention.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
A biomarker is a biological molecule found in blood, body fluids, or tissues that may signal an abnormal process, a condition, or a disease. The level of a particular biomarker may indicate a patient's risk of disease, or likely response to a treatment. For example, cholesterol levels are measured to assess the risk of heart disease. Most current biomarkers are used to test an individual's risk of developing a specific condition. There are none that accurately assess whether a person is at risk of ill health generally, or likely to die soon from a disease. Early and accurate identification of people who appear healthy but in fact have an underlying serious illness would provide valuable opportunities for preventative treatment.
While most tests measure the levels of a specific biomarker, there are some technologies that allow blood samples to be screened for a wide range of biomarkers. These include nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry. These tools have the potential to be used to screen the general population for a range of different biomarkers.
Why Was This Study Done?
Identifying new biomarkers that provide insight into the risk of death from all causes could be an important step in linking different diseases and assessing patient risk. The authors in this study screened patient samples using NMR spectroscopy for biomarkers that accurately predict the risk of death particularly amongst the general population, rather than amongst people already known to be ill.
What Did the Researchers Do and Find?
The researchers studied two large groups of people, one in Estonia and one in Finland. Both countries have set up health registries that collect and store blood samples and health records over many years. The registries include large numbers of people who are representative of the wider population.
The researchers first tested blood samples from a representative subset of the Estonian group, testing 9,842 samples in total. They looked at 106 different biomarkers in each sample using NMR spectroscopy. They also looked at the health records of this group and found that 508 people died during the follow-up period after the blood sample was taken, the majority from heart disease, cancer, and other diseases. Using statistical analysis, they looked for any links between the levels of different biomarkers in the blood and people's short-term risk of dying. They found that the levels of four biomarkers—plasma albumin, alpha-1-acid glycoprotein, very-low-density lipoprotein (VLDL) particle size, and citrate—appeared to accurately predict short-term risk of death. They repeated this study with the Finnish group, this time with 7,503 individuals (176 of whom died during the five-year follow-up period after giving a blood sample) and found similar results.
The researchers carried out further statistical analyses to take into account other known factors that might have contributed to the risk of life-threatening illness. These included factors such as age, weight, tobacco and alcohol use, cholesterol levels, and pre-existing illness, such as diabetes and cancer. The association between the four biomarkers and short-term risk of death remained the same even when controlling for these other factors.
The analysis also showed that combining the test results for all four biomarkers, to produce a biomarker score, provided a more accurate measure of risk than any of the biomarkers individually. This biomarker score also proved to be the strongest predictor of short-term risk of dying in the Estonian group. Individuals with a biomarker score in the top 20% had a risk of dying within five years that was 19 times greater than that of individuals with a score in the bottom 20% (288 versus 15 deaths).
What Do These Findings Mean?
This study suggests that there are four biomarkers in the blood—alpha-1-acid glycoprotein, albumin, VLDL particle size, and citrate—that can be measured by NMR spectroscopy to assess whether otherwise healthy people are at short-term risk of dying from heart disease, cancer, and other illnesses. However, further validation of these findings is still required, and additional studies should examine the biomarker specificity and associations in settings closer to clinical practice. The combined biomarker score appears to be a more accurate predictor of risk than tests for more commonly known risk factors. Identifying individuals who are at high risk using these biomarkers might help to target preventative medical treatments to those with the greatest need.
However, there are several limitations to this study. As an observational study, it provides evidence of only a correlation between a biomarker score and ill health. It does not identify any underlying causes. Other factors, not detectable by NMR spectroscopy, might be the true cause of serious health problems and would provide a more accurate assessment of risk. Nor does this study identify what kinds of treatment might prove successful in reducing the risks. Therefore, more research is needed to determine whether testing for these biomarkers would provide any clinical benefit.
There were also some technical limitations to the study. NMR spectroscopy does not detect as many biomarkers as mass spectrometry, which might therefore identify further biomarkers for a more accurate risk assessment. In addition, because both study groups were northern European, it is not yet known whether the results would be the same in other ethnic groups or populations with different lifestyles.
In spite of these limitations, the fact that the same four biomarkers are associated with short-term risk of death from a variety of diseases does suggest that similar underlying mechanisms are at work. This observation points to some potentially valuable areas of research to understand precisely what is contributing to the increased risk.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001606
The US National Institute of Environmental Health Sciences has information on biomarkers
The US Food and Drug Administration has a Biomarker Qualification Program to help researchers in identifying and evaluating new biomarkers
Further information on the Estonian Biobank is available
The Computational Medicine Research Team of the University of Oulu and the University of Bristol have a webpage that provides further information on high-throughput biomarker profiling by NMR spectroscopy
doi:10.1371/journal.pmed.1001606
PMCID: PMC3934819  PMID: 24586121
15.  Reporting and Methods in Clinical Prediction Research: A Systematic Review 
PLoS Medicine  2012;9(5):e1001221.
Walter Bouwmeester and colleagues investigated the reporting and methods of prediction studies in 2008, in six high-impact general medical journals, and found that the majority of prediction studies do not follow current methodological recommendations.
Background
We investigated the reporting and methods of prediction studies, focusing on aims, designs, participant selection, outcomes, predictors, statistical power, statistical methods, and predictive performance measures.
Methods and Findings
We used a full hand search to identify all prediction studies published in 2008 in six high impact general medical journals. We developed a comprehensive item list to systematically score conduct and reporting of the studies, based on recent recommendations for prediction research. Two reviewers independently scored the studies. We retrieved 71 papers for full text review: 51 were predictor finding studies, 14 were prediction model development studies, three addressed an external validation of a previously developed model, and three reported on a model's impact on participant outcome. Study design was unclear in 15% of studies, and a prospective cohort was used in most studies (60%). Descriptions of the participants and definitions of predictor and outcome were generally good. Despite many recommendations against doing so, continuous predictors were often dichotomized (32% of studies). The number of events per predictor as a measure of statistical power could not be determined in 67% of the studies; of the remainder, 53% had fewer than the commonly recommended value of ten events per predictor. Methods for a priori selection of candidate predictors were described in most studies (68%). A substantial number of studies relied on a p-value cut-off of p<0.05 to select predictors in the multivariable analyses (29%). Predictive model performance measures, i.e., calibration and discrimination, were reported in 12% and 27% of studies, respectively.
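The events-per-predictor measure mentioned above is a simple ratio; the sketch below shows the arithmetic with invented counts.

```python
# Sketch: the events-per-predictor (EPV) check the review scored studies
# on. A common rule of thumb is at least 10 outcome events per candidate
# predictor in a logistic model; the counts below are illustrative only.
def events_per_predictor(n_events: int, n_candidate_predictors: int) -> float:
    """Return the EPV ratio used as a rough statistical-power check."""
    return n_events / n_candidate_predictors

epv = events_per_predictor(n_events=120, n_candidate_predictors=15)
print(f"EPV = {epv:.1f}")  # 8.0
print("adequate" if epv >= 10 else "below the common 10-EPV guideline")
```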
Conclusions
The majority of prediction studies in high impact journals do not follow current methodological recommendations, limiting their reliability and applicability.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
There are often times in our lives when we would like to be able to predict the future. Is the stock market going to go up, for example, or will it rain tomorrow? Being able to predict future health is also important, both to patients and to physicians, and there is an increasing body of published clinical “prediction research.” Diagnostic prediction research investigates the ability of variables or test results to predict the presence or absence of a specific diagnosis. So, for example, one recent study compared the ability of two imaging techniques to diagnose pulmonary embolism (a blood clot in the lungs). Prognostic prediction research investigates the ability of various markers to predict future outcomes such as the risk of a heart attack. Both types of prediction research can investigate the predictive properties of patient characteristics, single variables, tests, or markers, or combinations of variables, tests, or markers (multivariable studies). Both types of prediction research can also include studies that build multivariable prediction models to guide patient management (model development), that test the performance of models (validation), or that quantify the effect of using a prediction model on patient and physician behaviors and outcomes (impact assessment).
Why Was This Study Done?
With the increase in prediction research, there is an increased interest in the methodology of this type of research because poorly done or poorly reported prediction research is likely to have limited reliability and applicability and will, therefore, be of little use in patient management. In this systematic review, the researchers investigate the reporting and methods of prediction studies by examining the aims, design, participant selection, definition and measurement of outcomes and candidate predictors, statistical power and analyses, and performance measures included in multivariable prediction research articles published in 2008 in several general medical journals. In a systematic review, researchers identify all the studies undertaken on a given topic using a predefined set of criteria and systematically analyze the reported methods and results of these studies.
What Did the Researchers Do and Find?
The researchers identified all the multivariable prediction studies meeting their predefined criteria that were published in 2008 in six high impact general medical journals by browsing through all the issues of the journals (a hand search). They then scored the methods and reporting of each study using a comprehensive item list based on recent recommendations for the conduct of prediction research (for example, the reporting recommendations for tumor marker prognostic studies—the REMARK guidelines). Of 71 retrieved studies, 51 were predictor finding studies, 14 were prediction model development studies, three externally validated an existing model, and three reported on a model's impact on participant outcome. Study design, participant selection, definitions of outcomes and predictors, and predictor selection were generally well reported, but other methodological and reporting aspects of the studies were suboptimal. For example, despite many recommendations, continuous predictors were often dichotomized. That is, rather than using the measured value of a variable in a prediction model (for example, blood pressure in a cardiovascular disease prediction model), measurements were frequently assigned to two broad categories. Similarly, many of the studies failed to adequately estimate the sample size needed to minimize bias in predictor effects, and few of the model development papers quantified and validated the proposed model's predictive performance.
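The cost of dichotomization can be demonstrated directly. In the sketch below, a simulated continuous predictor is modelled once as measured and once split at an arbitrary cut-off; the split version typically discriminates worse. All data and the cut-off are invented.

```python
# Sketch: why dichotomizing a continuous predictor is discouraged.
# Fit the same simulated outcome once with the continuous value and once
# with a binary high/low split, and compare discrimination (ROC AUC).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5_000
sbp = rng.normal(130, 15, n)              # e.g. systolic blood pressure
p = 1 / (1 + np.exp(-(-8 + 0.05 * sbp)))  # true continuous effect
y = rng.random(n) < p

X_cont = sbp.reshape(-1, 1)
X_dich = (sbp >= 140).astype(float).reshape(-1, 1)  # arbitrary cut-off

auc_cont = roc_auc_score(y, LogisticRegression().fit(X_cont, y).decision_function(X_cont))
auc_dich = roc_auc_score(y, LogisticRegression().fit(X_dich, y).decision_function(X_dich))
print(f"AUC continuous:   {auc_cont:.3f}")
print(f"AUC dichotomized: {auc_dich:.3f}  # typically lower")
```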
What Do These Findings Mean?
These findings indicate that, in 2008, most of the prediction research published in high impact general medical journals failed to follow current guidelines for the conduct and reporting of clinical prediction studies. Because the studies examined here were published in high impact medical journals, they are likely to be representative of the higher quality studies published in 2008. However, reporting standards may have improved since 2008, and the conduct of prediction research may actually be better than this analysis suggests because the length restrictions that are often applied to journal articles may account for some of the reporting omissions. Nevertheless, despite some encouraging findings, the researchers conclude that the poor reporting and poor methods they found in many published prediction studies are a cause for concern and are likely to limit the reliability and applicability of this type of clinical research.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001221.
The EQUATOR Network is an international initiative that seeks to improve the reliability and value of medical research literature by promoting transparent and accurate reporting of research studies; its website includes information on a wide range of reporting guidelines including the REMARK recommendations (in English and Spanish)
A video of a presentation by Doug Altman, one of the researchers of this study, on improving the reporting standards of the medical evidence base, is available
The Cochrane Prognosis Methods Group provides additional information on the methodology of prognostic research
doi:10.1371/journal.pmed.1001221
PMCID: PMC3358324  PMID: 22629234
16.  A Prognostic Model for One-year Mortality in Patients Requiring Prolonged Mechanical Ventilation 
Critical care medicine  2008;36(7):2061-2069.
Objective
A measure that identifies patients who are at high risk of mortality after prolonged ventilation will help physicians communicate prognosis to patients or surrogate decision-makers. Our objective was to develop and validate a prognostic model for 1-year mortality in patients ventilated for 21 days or more.
Design
Prospective cohort study.
Setting
University-based tertiary care hospital.
Patients
300 consecutive medical, surgical, and trauma patients requiring mechanical ventilation for at least 21 days were prospectively enrolled.
Measurements and Main Results
Predictive variables were measured on day 21 of ventilation for the first 200 patients and entered into logistic regression models with 1-year and 3-month mortality as outcomes. Final models were validated using data from 100 subsequent patients. One-year mortality was 51% in the development set and 58% in the validation set. Independent predictors of mortality included requirement for vasopressors, hemodialysis, platelet count ≤150 × 10⁹/L, and age ≥50 years. Areas under the ROC curve for the development and validation models were 0.82 (SE 0.03) and 0.82 (SE 0.05), respectively. The model had a sensitivity of 0.42 (SE 0.12) and a specificity of 0.99 (SE 0.01) for identifying patients with a ≥90% risk of death at 1 year. Observed mortality was highly consistent with both 3- and 12-month predicted mortality. These four predictive variables can be used in a simple prognostic score that clearly identifies low-risk patients (no risk factors, 15% mortality) and high-risk patients (3 or 4 risk factors, 97% mortality).
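The scoring rule described above simply counts risk factors. Below is a sketch of that logic; only the 15% and 97% mortality figures come from the abstract, and the intermediate bands are left as placeholders.

```python
# Sketch of a count-based prognostic score: one point each for
# vasopressors, hemodialysis, platelets <=150 x 10^9/L, and age >=50.
# Band probabilities for 1 and 2 points are placeholders (None).
from dataclasses import dataclass

@dataclass
class Day21Status:
    vasopressors: bool
    hemodialysis: bool
    platelets_le_150: bool
    age_ge_50: bool

def prognostic_points(s: Day21Status) -> int:
    # Each present risk factor contributes one point.
    return sum([s.vasopressors, s.hemodialysis, s.platelets_le_150, s.age_ge_50])

BAND_MORTALITY = {0: 0.15, 1: None, 2: None, 3: 0.97, 4: 0.97}

patient = Day21Status(vasopressors=True, hemodialysis=False,
                      platelets_le_150=True, age_ge_50=True)
pts = prognostic_points(patient)
print(f"risk points: {pts}, approx. 1-year mortality: {BAND_MORTALITY[pts]}")
```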
Conclusions
Simple clinical variables measured on day 21 of mechanical ventilation can identify patients at highest and lowest risk of death from prolonged ventilation.
doi:10.1097/CCM.0b013e31817b8925
PMCID: PMC2728216  PMID: 18552692
Mechanical Ventilation; Illness severity scores; Outcomes; Statistical Model; Critical Illness; Prognosis
17.  Management of Chronic Pressure Ulcers 
Executive Summary
In April 2008, the Medical Advisory Secretariat began an evidence-based review of the literature concerning pressure ulcers.
Please visit the Medical Advisory Secretariat Web site, http://www.health.gov.on.ca/english/providers/program/mas/tech/tech_mn.html to review these titles that are currently available within the Pressure Ulcers series.
Pressure ulcer prevention: an evidence based analysis
The cost-effectiveness of prevention strategies for pressure ulcers in long-term care homes in Ontario: projections of the Ontario Pressure Ulcer Model (field evaluation)
Management of chronic pressure ulcers: an evidence-based analysis
Objective
The Medical Advisory Secretariat (MAS) conducted a systematic review on interventions used to treat pressure ulcers in order to answer the following questions:
Do currently available interventions for the treatment of pressure ulcers increase the healing rate of pressure ulcers compared with standard care, a placebo, or other similar interventions?
Within each category of intervention, which one is most effective in promoting the healing of existing pressure ulcers?
Background
A pressure ulcer is a localized injury to the skin and/or underlying tissue usually over a bony prominence, as a result of pressure, or pressure in conjunction with shear and/or friction. Many areas of the body, especially the sacrum and the heel, are prone to the development of pressure ulcers. People with impaired mobility (e.g., stroke or spinal cord injury patients) are most vulnerable to pressure ulcers. Other factors that predispose people to pressure ulcer formation are poor nutrition, poor sensation, urinary and fecal incontinence, and poor overall physical and mental health.
The prevalence of pressure ulcers in Ontario has been estimated to range from a median of 22.1% in community settings to a median of 29.9% in nonacute care facilities. Pressure ulcers have been shown to increase the risk of mortality among geriatric patients by as much as 400%, to increase the frequency and duration of hospitalization, and to decrease the quality of life of affected patients. The cost of treating pressure ulcers has been estimated at approximately $9,000 (Cdn) per patient per month in the community setting. Considering the high prevalence of pressure ulcers in the Ontario health care system, the total cost of treating pressure ulcers is substantial.
Technology
Wounds normally heal in 3 phases (inflammatory phase, a proliferative phase of new tissue and matrix formation, and a remodelling phase). However, pressure ulcers often fail to progress past the inflammatory stage. Current practice for treating pressure ulcers includes treating the underlying causes, debridement to remove necrotic tissues and contaminated tissues, dressings to provide a moist wound environment and to manage exudates, devices and frequent turning of patients to provide pressure relief, topical applications of biologic agents, and nutritional support to correct nutritional deficiencies. A variety of adjunctive physical therapies are also in use.
Method
Health technology assessment databases and medical databases (Medline from 1996, EMBASE from 1980, and CINAHL from 1982) were systematically searched up to March 2008 to identify randomized controlled trials (RCTs) on the following treatments of pressure ulcers: cleansing, debridement, dressings, biological therapies, pressure-relieving devices, physical therapies, nutritional therapies, and multidisciplinary wound care teams. Full literature search strategies are reported in appendix 1. English-language studies in previous systematic reviews and studies published since the last systematic review were included if they had more than 10 subjects, were randomized, and provided objective outcome measures on the healing of pressure ulcers. In the absence of RCTs, studies of the highest level of evidence available were included. Studies on wounds other than pressure ulcers and on surgical treatment of pressure ulcers were excluded. A total of 18 systematic reviews, 104 RCTs, and 4 observational studies were included in this review.
Data were extracted from studies using standardized forms. The quality of individual studies was assessed based on adequacy of randomization, concealment of treatment allocation, comparability of groups, blinded assessment, and intention-to-treat analysis. Meta-analysis to estimate the relative risk (RR) or weighted mean difference (WMD) for measures of healing was performed when appropriate. A descriptive synthesis was provided where pooled analysis was not appropriate or not feasible. The quality of the overall evidence on each intervention was assessed using the grading of recommendations assessment, development, and evaluation (GRADE) criteria.
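Where pooling was appropriate, estimates such as the relative risk are typically combined by inverse-variance weighting on the log scale. A small sketch with invented trial counts follows; it illustrates the general fixed-effect technique, not the review's actual analyses.

```python
# Sketch: fixed-effect inverse-variance pooling of relative risks on the
# log scale. Each tuple is (events_treat, n_treat, events_ctrl, n_ctrl);
# all counts are made up for illustration.
import math

trials = [
    (12, 50, 6, 48),
    (20, 80, 11, 82),
    (7, 30, 4, 31),
]

num = den = 0.0
for a, n1, c, n2 in trials:
    log_rr = math.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2  # variance of log RR
    w = 1 / var                            # inverse-variance weight
    num += w * log_rr
    den += w

pooled_rr = math.exp(num / den)
se = math.sqrt(1 / den)
lo, hi = (math.exp(num / den + z * se) for z in (-1.96, 1.96))
print(f"pooled RR = {pooled_rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```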
Findings
Findings from the analysis of the included studies are summarized below:
Cleansing
There is no good trial evidence to support the use of any particular wound cleansing solution or technique for pressure ulcers.
Debridement
There was no evidence that debridement using collagenase, dextranomer, cadexomer iodine, or maggots significantly improved complete healing compared with placebo.
There were no statistically significant differences among enzymatic and mechanical debridement agents, with the following exceptions:
Papain urea resulted in better debridement than collagenase.
Calcium alginate resulted in a greater reduction in ulcer size compared to dextranomer.
Adding streptokinase/streptodornase to hydrogel resulted in faster debridement.
Maggot debridement resulted in more complete debridement than conventional treatment.
There is limited evidence on the healing effects of debridement devices.
Dressings
Hydrocolloid dressing was associated with almost three times more complete healing compared with saline gauze.
There is evidence that hydrogel and hydropolymer may be associated with 50% to 70% more complete healing of pressure ulcers than hydrocolloid dressing.
No statistically significant differences in complete healing were detected among other modern dressings.
There is evidence that polyurethane foam dressings and hydrocellular dressings are more absorbent and easier to remove than hydrocolloid dressings in ulcers with moderate to high exudates.
In deeper ulcers (stage III and IV), the use of alginate with hydrocolloid resulted in significantly greater reduction in the size of the ulcers compared to hydrocolloid alone.
Studies on sustained silver-releasing dressing demonstrated a tendency for reducing the risk of infection and promoting faster healing, but the sample sizes were too small for statistical analysis or for drawing conclusions.
Biological Therapies
The efficacy of platelet-derived growth factors (PDGFs), fibroblast growth factor, and granulocyte-macrophage colony stimulating factor in improving complete healing of chronic pressure ulcers has not been established.
Presently only Regranex, a recombinant PDGF, has been approved by Health Canada and only for treatment of diabetic ulcers in the lower extremities.
A March 2008 US Food and Drug Administration (FDA) communication reported increased deaths from cancers in people given three or more prescriptions for Regranex.
Limited low-quality evidence on skin matrix and engineered skin equivalent suggests a potential role for these products in healing refractory advanced chronic pressure ulcers, but the evidence is insufficient to draw a conclusion.
Adjunctive Physical Therapy
There is evidence that electrical stimulation may result in a significantly greater reduction in the surface area and more complete healing of stage II to IV ulcers compared with sham therapy. No conclusion on the efficacy of electrotherapy can be drawn because of significant statistical heterogeneity, small sample sizes, and methodological flaws.
The efficacy of other adjunctive physical therapies [electromagnetic therapy, low-level laser (LLL) therapy, ultrasound therapy, ultraviolet light therapy, and negative pressure therapy] in improving complete closure of pressure ulcers has not been established.
Nutrition Therapy
Supplementation with 15 grams of hydrolyzed protein 3 times daily did not affect complete healing but resulted in a 2-fold improvement in Pressure Ulcer Scale for Healing (PUSH) score compared with placebo.
Supplementation with 200 mg of zinc three times per day did not have any significant impact on the healing of pressure ulcers compared with a placebo.
Supplementation of 500 mg ascorbic acid twice daily was associated with a significantly greater decrease in the size of the ulcer compared with a placebo but did not have any significant impact on healing when compared with supplementation of 10 mg ascorbic acid three times daily.
A very high protein tube feeding (25% of energy as protein) resulted in a greater reduction in ulcer area in institutionalized tube-fed patients compared with a high protein tube feeding (16% of energy as protein).
Multinutrient supplements containing zinc, arginine, and vitamin C were associated with a greater reduction in the area of the ulcers compared with a standard hospital diet or a standard supplement without zinc, arginine, or vitamin C.
Firm conclusions cannot be drawn because of methodological flaws and small sample sizes.
Multidisciplinary Wound Care Teams
The only RCT suggests that multidisciplinary wound care teams may significantly improve healing in the acute care setting in 8 weeks and may significantly shorten the length of hospitalization. However, since only an abstract is available, study biases cannot be assessed and no conclusions can be drawn on the quality of this evidence.
PMCID: PMC3377577  PMID: 23074533
18.  A multivariate Bayesian model for assessing morbidity after coronary artery surgery 
Critical Care  2006;10(3):R94.
Introduction
Although most risk-stratification scores are derived from preoperative patient variables, several intraoperative and postoperative variables can influence prognosis. Higgins and colleagues previously evaluated the contribution of preoperative, intraoperative, and postoperative predictors to outcome. We developed a Bayes linear model to discriminate morbidity risk after coronary artery bypass grafting and compared it with three different score models: Higgins' original scoring system, derived from the patient's status on admission to the intensive care unit (ICU), and two models designed and customized to our patient population.
Methods
We analyzed 88 operative risk factors; 1,090 consecutive adult patients who underwent coronary artery bypass grafting were studied. Training and testing data sets of 740 patients and 350 patients, respectively, were used. A stepwise approach enabled selection of an optimal subset of predictor variables. Model discrimination was assessed by receiver operating characteristic (ROC) curves, whereas calibration was measured using the Hosmer-Lemeshow goodness-of-fit test.
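The Hosmer-Lemeshow statistic used for calibration can be computed by grouping subjects into deciles of predicted risk and comparing observed with expected events. Below is a sketch on simulated, well-calibrated predictions; it shows the standard technique, not the paper's code.

```python
# Sketch of the Hosmer-Lemeshow goodness-of-fit test: compare observed
# vs. expected events within decile groups of predicted risk.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(p)), groups):
        observed = y[idx].sum()
        expected = p[idx].sum()
        n = len(idx)
        pbar = expected / n
        stat += (observed - expected) ** 2 / (n * pbar * (1 - pbar))
    dof = groups - 2  # conventional degrees of freedom
    return stat, chi2.sf(stat, dof)

rng = np.random.default_rng(2)
p = rng.uniform(0.02, 0.6, 1_000)
y = rng.random(1_000) < p  # perfectly calibrated by construction
stat, pval = hosmer_lemeshow(y, p)
print(f"HL chi2 = {stat:.2f}, p = {pval:.3f}  # large p => adequate fit")
```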
Results
A set of 12 preoperative, intraoperative and postoperative predictor variables was identified for the Bayes linear model. Bayes and locally customized score models fitted according to the Hosmer-Lemeshow test. However, the comparison between the areas under the ROC curve proved that the Bayes linear classifier had a significantly higher discrimination capacity than the score models. Calibration and discrimination were both much worse with Higgins' original scoring system.
Conclusion
Most prediction rules use sequential numerical risk scoring to quantify prognosis and are an advanced form of audit. Score models are very attractive tools because their application in routine clinical practice is simple. If locally customized, they also predict patient morbidity in an acceptable manner. The Bayesian model seems to be a feasible alternative. It has better discrimination and can be tailored more easily to individual institutions.
doi:10.1186/cc4951
PMCID: PMC1550964  PMID: 16813658
19.  The Utility of Carotid Ultrasonography in Identifying Severe Coronary Artery Disease in Asymptomatic Type 2 Diabetic Patients Without History of Coronary Artery Disease 
Diabetes Care  2013;36(5):1327-1334.
OBJECTIVE
Although many studies have shown that carotid intima-media thickness (IMT) is associated with coronary artery disease (CAD), it remains inconclusive whether assessment of carotid IMT is useful as a screening test for asymptomatic but severe CAD in diabetic patients.
RESEARCH DESIGN AND METHODS
A total of 333 asymptomatic type 2 diabetic patients without history of CAD underwent exercise electrocardiogram or myocardial perfusion scintigraphy for detection of silent myocardial ischemia, and those whose test results were positive were subjected to coronary computed tomography angiography or coronary angiography. The ability of carotid IMT to identify severe CAD corresponding to treatment with revascularization was examined by receiver-operating characteristic (ROC) curve analyses.
RESULTS
Among the 333 subjects, 17 were treated with revascularization. A multiple logistic regression analysis showed that maximum IMT was an independent predictor of severe CAD even after adjustment for conventional risk factors. ROC curve analyses revealed that the addition of maximum IMT to conventional risk factors significantly improved the prediction of severe CAD (area under the curve from 0.67 to 0.79; P = 0.039). The greatest sensitivity and specificity were obtained when the cut-off value of maximum IMT was set at 2.45 mm (pretest probability, 5%; posttest probability, 11%; sensitivity, 71%). When we applied age-specific cut-off values, the sensitivity of screening increased further in both the nonelderly (pretest probability, 6%; posttest probability, 10%; sensitivity, 100%) and the elderly subjects (pretest probability, 5%; posttest probability, 15%; sensitivity, 100%).
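The pretest-to-posttest updates quoted above follow from the positive likelihood ratio. The sketch below roughly reproduces the 5% to 11% update; specificity is set to an assumed 0.70 because the abstract does not report it directly.

```python
# Sketch: update a pretest probability with a test's sensitivity and
# specificity via the positive likelihood ratio (Bayes on the odds scale).
def post_test_probability(pretest: float, sensitivity: float, specificity: float) -> float:
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr_pos
    return post_odds / (1 + post_odds)

# Sensitivity 0.71 is from the abstract; specificity 0.70 is an assumption.
p = post_test_probability(pretest=0.05, sensitivity=0.71, specificity=0.70)
print(f"post-test probability approx. {p:.2f}")  # ~0.11
```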
CONCLUSIONS
Our study suggests that carotid maximum IMT is useful for screening asymptomatic type 2 diabetic patients for severe CAD warranting revascularization.
doi:10.2337/dc12-1327
PMCID: PMC3631883  PMID: 23404302
20.  Does consideration of either psychological or material disadvantage improve coronary risk prediction? Prospective observational study of Scottish men 
Objective
To assess the value of psychosocial risk factors in discriminating between individuals at higher and lower risk of coronary heart disease, using risk prediction equations.
Design
Prospective observational study.
Setting
Scotland.
Participants
5191 employed men aged 35 to 64 years and free of coronary heart disease at study enrolment.
Main outcome measures
Area under receiver operating characteristic (ROC) curves for risk prediction equations including different risk factors for coronary heart disease.
Results
During the first 10 years of follow up, 203 men died of coronary heart disease and a further 200 were admitted to hospital with this diagnosis. Area under the ROC curve for the standard Framingham coronary risk factors was 74.5%. Addition of “vital exhaustion” and psychological stress led to areas under the ROC curve of 74.5% and 74.6%, respectively. Addition of current social class and lifetime social class to the standard Framingham equation gave areas under the ROC curve of 74.6% and 74.9%, respectively. In no case was there strong evidence for improved discrimination of the model containing the novel risk factor over the standard model.
Conclusions
Consideration of psychosocial risk factors, including those that are strong independent predictors of heart disease, does not substantially influence the ability of risk prediction tools to discriminate between individuals at higher and lower risk of coronary heart disease.
doi:10.1136/jech.2006.055921
PMCID: PMC2660009  PMID: 17699540
cardiovascular disease; risk assessment; Framingham risk score; primary prevention; psychosocial factors
21.  A 13-gene signature prognostic of HPV-negative OSCC: discovery and external validation 
Purpose
To identify a prognostic gene signature for HPV-negative OSCC patients.
Experimental Design
Two gene expression datasets were used: a training dataset from the Fred Hutchinson Cancer Research Center (FHCRC) (n=97) and a validation dataset from the MD Anderson Cancer Center (MDACC) (n=71). We applied L1/L2-penalized Cox regression models to the FHCRC data on the 131-gene signature previously identified as prognostic in OSCC patients, to identify a prognostic model specific to high-risk HPV-negative OSCC patients. The models were tested on the MDACC dataset using receiver operating characteristic analysis.
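Below is a sketch of elastic-net (L1/L2) penalized Cox regression on simulated expression data. The lifelines library, the gene labels, and the penalty settings are stand-ins for the authors' actual pipeline; only the general technique matches the abstract.

```python
# Sketch: elastic-net penalized Cox regression for gene selection on
# simulated survival data (not the study's data or parameters).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n, genes = 97, 20  # cohort size mirrors the FHCRC training set (n=97)
X = rng.normal(size=(n, genes))
hazard = np.exp(0.7 * X[:, 0] - 0.6 * X[:, 1])  # two truly prognostic genes
time = rng.exponential(1 / hazard)
event = rng.random(n) < 0.8

df = pd.DataFrame(X, columns=[f"gene_{i}" for i in range(genes)])
df["time"], df["event"] = time, event

# penalizer > 0 with 0 < l1_ratio < 1 gives an elastic-net penalty that
# shrinks most coefficients toward zero, keeping a small signature.
cph = CoxPHFitter(penalizer=0.5, l1_ratio=0.9)
cph.fit(df, duration_col="time", event_col="event")
selected = cph.params_[cph.params_.abs() > 0.05]
print(selected)  # the surviving, approximately non-zero coefficients
```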
Results
A 13-gene model was identified as the best predictor of HPV-negative OSCC-specific survival in the training dataset. The risk score for each patient in the validation dataset was calculated from this model and dichotomized at the median. The estimated 2-year mortality (±SE) of patients with high risk scores was 47.1% (±9.24%), compared with 6.35% (±4.42%) for patients with low risk scores. ROC analyses showed that the areas under the curve for the age-, gender-, and treatment modality-adjusted models with risk score (0.78, 95% CI 0.74-0.86) and risk score plus tumor stage (0.79, 95% CI 0.75-0.87) were substantially higher than for the model with tumor stage alone (0.54, 95% CI 0.48-0.62).
Conclusions
We identified and validated a 13-gene signature that is considerably better than tumor stage in predicting survival of HPV-negative OSCC patients. Further evaluation of this gene signature as a prognostic marker in other populations of patients with HPV-negative OSCC is warranted.
doi:10.1158/1078-0432.CCR-12-2647
PMCID: PMC3593802  PMID: 23319825
gene signature; prognosis; HPV-negative; OSCC
22.  Acute Respiratory Distress Syndrome after Trauma: Development and Validation of a Predictive Model 
Critical Care Medicine  2012;40(8):2295-2303.
Objective
To determine early clinical predictors of Acute Respiratory Distress Syndrome (ARDS) after major traumatic injury and characterize the performance of this ARDS prediction model, and two previously published ARDS prediction models, in an independent cohort of severely injured patients.
Design
Prospective cohort study
Setting
University-affiliated level I trauma center in Seattle, WA, and nine hospitals participating in the Inflammation and Host Response to Injury Consortium.
Patients
Model derivation utilized data from 224 patients participating in a randomized controlled trial. All models were validated in an independent cohort of 1,762 trauma patients.
Measurements and Main Results
Variables strongly associated with ARDS in bivariate analysis (p<0.01) were entered into a multiple logistic regression equation to generate an ARDS predictive model. We evaluated the performance of all models using the area under the receiver operating characteristic (ROC) curve. ARDS occurred in 79 subjects (35%) belonging to the development cohort and in 423 subjects (24%) from the validation cohort. Multivariable predictors of ARDS after trauma included subject age, Acute Physiology and Chronic Health Evaluation (APACHE) II score, injury severity score, and the presence of blunt traumatic injury, pulmonary contusion, massive transfusion, and flail chest injury (area under the ROC curve 0.79 [95% C.I. 0.73, 0.85]). Validation of the prediction model resulted in an area under the ROC curve of 0.71 (95% C.I. 0.68, 0.74). Our model's performance in the validation cohort was superior to that of two other published ARDS prediction models (0.65 [95% C.I. 0.63, 0.68] and 0.66 [95% C.I. 0.64, 0.69], p<0.01 for all comparisons).
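The derive-then-validate workflow can be outlined in a few lines: fit a multivariable logistic model on a development cohort, then report the ROC AUC on an independent validation cohort. All data below are simulated; only the cohort sizes echo the abstract.

```python
# Sketch: derive a logistic prediction model on one cohort and check
# discrimination (ROC AUC) on an independent validation cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def make_cohort(n):
    X = rng.normal(size=(n, 4))  # e.g. age, APACHE II, ISS, transfusion
    logit = -1.5 + X @ np.array([0.6, 0.8, 0.5, 0.7])
    y = rng.random(n) < 1 / (1 + np.exp(-logit))
    return X, y

X_dev, y_dev = make_cohort(224)    # development cohort size from the text
X_val, y_val = make_cohort(1762)   # validation cohort size from the text

model = LogisticRegression().fit(X_dev, y_dev)
print(f"development AUC: {roc_auc_score(y_dev, model.decision_function(X_dev)):.2f}")
print(f"validation AUC:  {roc_auc_score(y_val, model.decision_function(X_val)):.2f}")
```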
Conclusions
Using routinely available clinical data, our prediction model identifies patients at high risk for ARDS early after severe traumatic injury. This predictive model could facilitate enrollment of subjects into future clinical trials designed to prevent this serious complication.
doi:10.1097/CCM.0b013e3182544f6a
PMCID: PMC3400931  PMID: 22809905
Respiratory Distress Syndrome, Acute; Wounds and Injuries; Multiple Trauma; Receiver Operating Characteristic
23.  Norwegian survival prediction model in trauma: modelling effects of anatomic injury, acute physiology, age, and co-morbidity 
Introduction
Anatomic injury, physiological derangement, age, and injury mechanism are well-founded predictors of trauma outcome. We aimed to develop and validate the first Scandinavian survival prediction model for trauma.
Methods
Eligible patients were admitted to Oslo University Hospital Ullevål within 24 h of injury with an Injury Severity Score ≥ 10, with proximal penetrating injuries, or were received by a trauma team. The derivation dataset comprised 5363 patients (August 2000 to July 2006); the validation dataset comprised 2517 patients (August 2006 to July 2008). Exclusion because of missing data was < 1%. The outcome was 30-day mortality. Logistic regression analysis incorporated fractional polynomial modelling and interaction effects. Model validation included a calibration plot, the Hosmer–Lemeshow test, and receiver operating characteristic (ROC) curves.
Results
The new survival prediction model included the anatomic New Injury Severity Score (NISS), the Triage Revised Trauma Score (T-RTS, comprising the Glasgow Coma Scale score, respiratory rate, and systolic blood pressure), age, pre-injury co-morbidity scored according to the American Society of Anesthesiologists Physical Status Classification System (ASA-PS), and an interaction term. Fractional polynomial analysis supported treating NISS and T-RTS as linear functions and age as a cubic function. Model discrimination between survivors and non-survivors was excellent. The area (95% confidence interval) under the ROC curve was 0.966 (0.959–0.972) in the derivation dataset and 0.946 (0.930–0.962) in the validation dataset. Overall low mortality and a skewed survival probability distribution invalidated model calibration using the Hosmer–Lemeshow test.
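A sketch of the selected functional forms follows: NISS and T-RTS enter a logistic model linearly while age enters as a cubic term. The data and coefficients are simulated and are not the published NORMIT parameters.

```python
# Sketch: logistic survival prediction with linear NISS and T-RTS terms
# and a cubic age term, mirroring the functional forms selected by the
# fractional polynomial analysis. All values are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5_000
niss = rng.integers(1, 60, n).astype(float)
trts = rng.uniform(0, 12, n)
age = rng.uniform(16, 95, n)

# Design matrix: linear NISS and T-RTS, cubic transform of scaled age.
X = np.column_stack([niss, trts, (age / 10) ** 3])
logit = -3 + 0.08 * niss - 0.35 * trts + 0.006 * (age / 10) ** 3
died = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, died)
print(dict(zip(["NISS", "T-RTS", "(age/10)^3"], model.coef_.ravel().round(3))))
```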
Conclusions
The Norwegian survival prediction model in trauma (NORMIT) is a promising alternative to existing prediction models. External validation of the model in other trauma populations is warranted.
doi:10.1111/aas.12256
PMCID: PMC4276290  PMID: 24438461
24.  Prediction of pressure ulcer development in hospitalized patients: a tool for risk assessment 
Objectives
To identify independent predictors for development of pressure ulcers in hospitalized patients and to develop a simple prediction rule for pressure ulcer development.
Design
The Prevention and Pressure Ulcer Risk Score Evaluation (prePURSE) study is a prospective cohort study in which patients are followed up once a week until pressure ulcer occurrence, discharge from hospital, or length of stay over 12 weeks. Data were collected between January 1999 and June 2000.
Setting
Two large hospitals in the Netherlands.
Participants
Adult patients admitted to the surgical, internal, neurological and geriatric wards for more than 5 days were eligible. A consecutive sample of 1536 patients was visited, 1431 (93%) of whom agreed to participate. Complete follow up data were available for 1229 (80%) patients.
Main outcome measures
Occurrence of a pressure ulcer grade 2 or worse during admission to hospital.
Results
Independent predictors of pressure ulcers were age, weight at admission, abnormal appearance of the skin, friction and shear, and planned surgery in the coming week. The area under the curve of the final prediction rule was 0.70 after bootstrapping. At a cut-off score of 20, 42% of the patient weeks were identified as at risk for pressure ulcer development, correctly identifying 70% of the patient weeks in which a pressure ulcer occurred.
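A points-based rule with a cut-off works as sketched below. The per-predictor points are invented for illustration; only the cut-off of 20 comes from the abstract.

```python
# Sketch: applying a points-based prediction rule with a cut-off.
# These point values are hypothetical, not the prePURSE weights.
HYPOTHETICAL_POINTS = {
    "age_per_decade_over_60": 3,
    "low_admission_weight": 5,
    "abnormal_skin": 6,
    "friction_and_shear": 4,
    "surgery_planned_this_week": 7,
}

def risk_score(findings: dict) -> int:
    # Sum points for each finding, weighted by its count or presence.
    return sum(HYPOTHETICAL_POINTS[k] * v for k, v in findings.items())

patient = {"age_per_decade_over_60": 2, "abnormal_skin": 1,
           "friction_and_shear": 1, "surgery_planned_this_week": 1}
score = risk_score(patient)
print(f"score = {score}, at risk: {score >= 20}")  # cut-off 20 from the text
```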
Conclusion
A simple clinical prediction rule based on five patient characteristics may help to identify patients at increased risk for pressure ulcer development and in need of preventive measures.
doi:10.1136/qshc.2005.015362
PMCID: PMC2563999  PMID: 16456213
decubitus ulcer; nursing; risk assessment; prognosis
25.  Testing for Helicobacter pylori in dyspeptic patients suspected of peptic ulcer disease in primary care: cross sectional study 
BMJ : British Medical Journal  2001;323(7304):71-75.
Objectives
To develop an easily applicable diagnostic scoring method to determine the presence of peptic ulcers in dyspeptic patients in a primary care setting; to evaluate whether Helicobacter pylori testing adds value to history taking.
Design
Cross sectional study.
Setting
General practitioners' offices in the Utrecht area of the Netherlands.
Participants
565 patients consulting a general practitioner about dyspeptic symptoms of at least two weeks' duration.
Main outcome measures
The presence or absence of peptic ulcer; independent predictors of the presence of peptic ulcer as obtained from history taking and the added value of H pylori testing were quantified by using multivariate logistic regression analyses.
Results
A history of peptic ulcer, pain on an empty stomach, and smoking were strong and independent diagnostic determinants of peptic ulcer disease, with odds ratios of 5.5 (95% confidence interval 2.6 to 11.8), 2.8 (1.0 to 4.0), and 2.0 (1.4 to 6.0) respectively. The area under the receiver operating characteristic curve (ROC area) of these determinants together was 0.71. Adding the H pylori test increased the ROC area only to 0.75. However, in a group of patients at high risk, identified by means of a simple scoring rule based on history taking, the predictive value for the presence of peptic ulcer increased from 16% to 26% after a positive H pylori test.
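One common way to build such a rule is to round the log odds ratios into integer points, as sketched below with the odds ratios from this abstract; the risk threshold shown is illustrative, not the study's.

```python
# Sketch: turn multivariable odds ratios into a simple integer scoring
# rule of the kind built from history taking. Points are log2 odds
# ratios rounded to the nearest integer.
import math

odds_ratios = {  # from the abstract
    "history_of_peptic_ulcer": 5.5,
    "pain_on_empty_stomach": 2.8,
    "smoking": 2.0,
}
points = {k: round(math.log(v) / math.log(2)) for k, v in odds_ratios.items()}
print(points)  # {'history_of_peptic_ulcer': 2, 'pain_on_empty_stomach': 1, 'smoking': 1}

def score(answers: dict) -> int:
    return sum(points[k] for k, present in answers.items() if present)

s = score({"history_of_peptic_ulcer": True, "pain_on_empty_stomach": False,
           "smoking": True})
print(f"history score = {s}; high risk if >= 2 (illustrative threshold)")
```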
Conclusions
In the total group of dyspeptic patients in primary care, H pylori testing has no value in addition to history taking for diagnosing peptic ulcer disease. In a subgroup of patients at high risk of having peptic ulcer disease, however, it might be useful to test for and treat H pylori infections.
What is already known on this topic
In primary care, predicting the presence of peptic ulcer disease in dyspeptic patients on the basis of history taking is difficult
Infection with Helicobacter pylori is associated with peptic ulcer disease
Many non-invasive H pylori tests are available, but the value they add to history taking is not known
What this paper adds
Three simple questions from history taking can distinguish between patients at high and low risk of peptic ulcer disease
In uninvestigated patients with dyspepsia in primary care, H pylori testing adds nothing to optimal history taking in the diagnosis of peptic ulcer disease
In patients at high risk of peptic ulcer disease it is useful to test for and treat H pylori infection
PMCID: PMC34540  PMID: 11451780
