1.  A Risk Prediction Model for the Assessment and Triage of Women with Hypertensive Disorders of Pregnancy in Low-Resourced Settings: The miniPIERS (Pre-eclampsia Integrated Estimate of RiSk) Multi-country Prospective Cohort Study 
PLoS Medicine  2014;11(1):e1001589.
Beth Payne and colleagues use a risk prediction model, the Pre-eclampsia Integrated Estimate of RiSk (miniPIERS), to help inform the clinical assessment and triage of women with hypertensive disorders of pregnancy in low-resourced settings.
Please see later in the article for the Editors' Summary
Background
Pre-eclampsia/eclampsia are leading causes of maternal mortality and morbidity, particularly in low- and middle-income countries (LMICs). We developed the miniPIERS risk prediction model to provide a simple, evidence-based tool to identify pregnant women in LMICs at increased risk of death or major hypertensive-related complications.
Methods and Findings
From 1 July 2008 to 31 March 2012, in five LMICs, data were collected prospectively on 2,081 women with any hypertensive disorder of pregnancy admitted to a participating centre. Candidate predictors collected within 24 hours of admission were entered into a step-wise backward elimination logistic regression model to predict a composite adverse maternal outcome within 48 hours of admission. Model internal validation was accomplished by bootstrapping and external validation was completed using data from 1,300 women in the Pre-eclampsia Integrated Estimate of RiSk (fullPIERS) dataset. Predictive performance was assessed for calibration, discrimination, and stratification capacity. The final miniPIERS model included: parity (nulliparous versus multiparous); gestational age on admission; headache/visual disturbances; chest pain/dyspnoea; vaginal bleeding with abdominal pain; systolic blood pressure; and dipstick proteinuria. The miniPIERS model was well-calibrated and had an area under the receiver operating characteristic curve (AUC ROC) of 0.768 (95% CI 0.735–0.801) with an average optimism of 0.037. External validation AUC ROC was 0.713 (95% CI 0.658–0.768). A predicted probability ≥25% to define a positive test classified women with 85.5% accuracy. Limitations of this study include the composite outcome and the broad inclusion criteria of any hypertensive disorder of pregnancy. This broad approach was used to optimize model generalizability.
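For readers less familiar with the internal validation approach described above, the sketch below illustrates, on simulated data, how an apparent AUC ROC can be corrected for bootstrap optimism and how a ≥25% predicted-probability cutoff yields a classification accuracy. It is a minimal Python/scikit-learn illustration under assumed data, not the miniPIERS model, dataset, or code.

```python
# Minimal sketch of bootstrap-optimism correction of the AUC ROC and of classifying
# at a >=25% predicted probability, as described above. Simulated data only; this is
# not the miniPIERS model, dataset, or code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 2000, 7                                            # e.g., seven candidate predictors
X = rng.normal(size=(n, p))
beta = np.array([0.6, -0.4, 0.3, 0.2, 0.5, -0.3, 0.1])    # hypothetical true effects
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta - 1.0))))  # simulated adverse outcome

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):                                      # bootstrap resamples
    idx = rng.integers(0, n, n)
    boot = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

corrected_auc = apparent_auc - np.mean(optimism)          # optimism-corrected AUC
high_risk = model.predict_proba(X)[:, 1] >= 0.25          # positive test: predicted risk >= 25%
accuracy = np.mean(high_risk == y)
print(round(corrected_auc, 3), round(accuracy, 3))
```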
Conclusions
The miniPIERS model shows reasonable ability to identify women at increased risk of adverse maternal outcomes associated with the hypertensive disorders of pregnancy. It could be used in LMICs to identify women who would benefit most from interventions such as magnesium sulphate, antihypertensives, or transportation to a higher level of care.
Editors' Summary
Background
Each year, ten million women develop pre-eclampsia or a related hypertensive (high blood pressure) disorder of pregnancy and 76,000 women die as a result. Globally, hypertensive disorders of pregnancy cause around 12% of maternal deaths—deaths of women during or shortly after pregnancy. The mildest of these disorders is gestational hypertension, high blood pressure that develops after 20 weeks of pregnancy. Gestational hypertension does not usually harm the mother or her unborn child and resolves after delivery but up to a quarter of women with this condition develop pre-eclampsia, a combination of hypertension and protein in the urine (proteinuria). Women with mild pre-eclampsia may not have any symptoms—the condition is detected during antenatal checks—but more severe pre-eclampsia can cause headaches, blurred vision, and other symptoms, and can lead to eclampsia (fits), multiple organ failure, and death of the mother and/or her baby. The only “cure” for pre-eclampsia is to deliver the baby as soon as possible but women are sometimes given antihypertensive drugs to lower their blood pressure or magnesium sulfate to prevent seizures.
Why Was This Study Done?
Women in low- and middle-income countries (LMICs) are more likely to develop complications of pre-eclampsia than women in high-income countries and most of the deaths associated with hypertensive disorders of pregnancy occur in LMICs. The high burden of illness and death in LMICs is thought to be primarily due to delays in triage (the identification of women who are or may become severely ill and who need specialist care) and delays in transporting these women to facilities where they can receive appropriate care. Because there is a shortage of health care workers who are adequately trained in the triage of suspected cases of hypertensive disorders of pregnancy in many LMICs, one way to improve the situation might be to design a simple tool to identify women at increased risk of complications or death from hypertensive disorders of pregnancy. Here, the researchers develop miniPIERS (Pre-eclampsia Integrated Estimate of RiSk), a clinical risk prediction model for adverse outcomes among women with hypertensive disorders of pregnancy suitable for use in community and primary health care facilities in LMICs.
What Did the Researchers Do and Find?
The researchers used data on candidate predictors of outcome that are easy to collect and/or measure in all health care settings and that are associated with pre-eclampsia from women admitted with any hypertensive disorder of pregnancy to participating centers in five LMICs to build a model to predict death or a serious complication such as organ damage within 48 hours of admission. The miniPIERS model included parity (whether the woman had been pregnant before), gestational age (length of pregnancy), headache/visual disturbances, chest pain/shortness of breath, vaginal bleeding with abdominal pain, systolic blood pressure, and proteinuria detected using a dipstick. The model was well-calibrated (the predicted risk of adverse outcomes agreed with the observed risk of adverse outcomes among the study participants), it had good discriminatory ability (it could separate women who had an adverse outcome from those who did not), and it designated women as being at high risk (25% or greater probability of an adverse outcome) with an accuracy of 85.5%. Importantly, external validation using data collected in fullPIERS, a study that developed a more complex clinical prediction model based on data from women attending tertiary hospitals in high-income countries, confirmed the predictive performance of miniPIERS.
What Do These Findings Mean?
These findings indicate that the miniPIERS model performs reasonably well as a tool to identify women at increased risk of adverse maternal outcomes associated with hypertensive disorders of pregnancy. Because miniPIERS only includes simple-to-measure personal characteristics, symptoms, and signs, it could potentially be used in resource-constrained settings to identify the women who would benefit most from interventions such as transportation to a higher level of care. However, further external validation of miniPIERS is needed using data collected from women living in LMICs before the model can be used during routine antenatal care. Moreover, the value of miniPIERS needs to be confirmed in implementation projects that examine whether its potential translates into clinical improvements. For now, though, the model could provide the basis for an education program to increase the knowledge of women, families, and community health care workers in LMICs about the signs and symptoms of hypertensive disorders of pregnancy.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001589.
The World Health Organization provides guidelines for the management of hypertensive disorders of pregnancy in low-resourced settings
The Maternal and Child Health Integrated Program provides information on pre-eclampsia and eclampsia targeted to low-resourced settings along with a tool-kit for LMIC providers
The US National Heart, Lung, and Blood Institute provides information about high blood pressure in pregnancy and a guide to lowering blood pressure in pregnancy
The UK National Health Service Choices website provides information about pre-eclampsia
The US not-for-profit organization Preeclampsia Foundation provides information about all aspects of pre-eclampsia; its website includes some personal stories
The UK charity Healthtalkonline also provides personal stories about hypertensive disorders of pregnancy
MedlinePlus provides links to further information about high blood pressure and pregnancy (in English and Spanish); the MedlinePlus Encyclopedia has a video about pre-eclampsia (also in English and Spanish)
More information about miniPIERS and about fullPIERS is available
doi:10.1371/journal.pmed.1001589
PMCID: PMC3897359  PMID: 24465185
2.  Predictive Validity of the Braden Scale for Patients in Intensive Care Units 
Background
Patients in intensive care units are at higher risk for development of pressure ulcers than other patients. In order to prevent pressure ulcers from developing in intensive care patients, risk for development of pressure ulcers must be assessed accurately.
Objectives
To evaluate the predictive validity of the Braden scale for assessing risk for development of pressure ulcers in intensive care patients by using 4 years of data from electronic health records.
Methods
Data from the electronic health records of patients admitted to intensive care units between January 1, 2007, and December 31, 2010, were extracted from the data warehouse of an academic medical center. Predictive validity was measured by using sensitivity, specificity, positive predictive value, and negative predictive value. The receiver operating characteristic curve was generated, and the area under the curve was reported.
Results
A total of 7790 intensive care patients were included in the analysis. A cutoff score of 16 on the Braden scale had a sensitivity of 0.954, specificity of 0.207, positive predictive value of 0.114, and negative predictive value of 0.977. The area under the curve was 0.672 (95% CI, 0.663–0.683). The optimal cutoff for intensive care patients, determined from the receiver operating characteristic curve, was 13.
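As a brief illustration of how the measures above follow from a score cutoff, the sketch below computes sensitivity, specificity, and predictive values from a 2×2 cross-tabulation at a Braden cutoff of 16. The scores and outcomes are simulated placeholders, not the study's records, and the usual convention that a lower Braden score means higher risk (so a score at or below the cutoff counts as a positive test) is assumed.

```python
# Minimal sketch: sensitivity, specificity, PPV, and NPV at a Braden cutoff.
# Simulated placeholder data; assumes lower Braden score = higher risk, so a
# score <= cutoff is treated as a positive test.
import numpy as np

def cutoff_metrics(scores, ulcer, cutoff):
    pred_pos = scores <= cutoff
    tp = np.sum(pred_pos & (ulcer == 1))
    fp = np.sum(pred_pos & (ulcer == 0))
    fn = np.sum(~pred_pos & (ulcer == 1))
    tn = np.sum(~pred_pos & (ulcer == 0))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

rng = np.random.default_rng(1)
scores = rng.integers(6, 24, size=1000)                      # Braden totals range 6-23
ulcer = rng.binomial(1, np.where(scores <= 13, 0.30, 0.05))  # higher risk at low scores
print(cutoff_metrics(scores, ulcer, cutoff=16))
```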
Conclusions
The Braden scale shows insufficient predictive validity and poor accuracy in discriminating intensive care patients at risk of developing pressure ulcers. The Braden scale may not sufficiently reflect the characteristics of intensive care patients. Further research is needed to determine which possibly predictive factors are specific to intensive care units in order to increase the usefulness of the Braden scale for predicting pressure ulcers in intensive care patients.
doi:10.4037/ajcc2013991
PMCID: PMC4042540  PMID: 24186823
3.  Risk Assessment Tool for Pressure Ulcer Development in Indian Surgical Wards 
The Indian Journal of Surgery  2012;77(3):206-212.
The aims of this paper were to compare the predictive validity of three pressure ulcer (PU) risk scales—the Norton scale, the Braden scale, and the Waterlow scale—and to choose the most appropriate calculator for predicting PU risk in surgical wards of India. This is an observational prospective cohort study in a tertiary educational hospital in New Delhi among 100 surgical ward patients from April to July 2011. The main outcomes measured included sensitivity, specificity, positive predictive value (PVP) and negative predictive value (PVN), and the area under the receiver operating characteristic (ROC) curve of the three PU risk assessment scales. Based on the cutoff points found most appropriate in this study, the sensitivity, specificity, PVP, and PVN were as follows: the Norton scale (cutoff, 16) had values of 95.6, 93.5, 44.8, and 98.6, respectively; the Braden scale (cutoff, 17) had values of 100, 89.6, 42.5, and 100, respectively; and the Waterlow scale (cutoff, 11) had values of 91.3, 84.4, 38.8, and 97, respectively. According to the ROC curve, the Norton scale is the most appropriate tool. Factors such as physical condition, activity, mobility, body mass index (BMI), nutrition, friction, and shear are extremely significant in determining risk of PU development (p < 0.0001). The Norton scale is most effective in predicting PU risk in Indian surgical wards. BMI, mobility, activity, nutrition, friction, and shear are the most significant factors in Indian surgical ward settings; future comparison with established scales is needed.
doi:10.1007/s12262-012-0779-y
PMCID: PMC4522249  PMID: 26246703
Pressure ulcer; Norton scale; Braden scale; Waterlow scale; Predictor of pressure ulcer
4.  Body Mass Index and Pressure Ulcers: Improved Predictability of Pressure Ulcers in Intensive Care Patients 
Background
Obesity contributes to immobility and subsequent pressure on skin surfaces. Knowledge of the relationship between obesity and development of pressure ulcers in intensive care patients will provide better understanding of which patients are at high risk for pressure ulcers and allow more efficient prevention.
Objectives
To examine the incidence of pressure ulcers in patients who differ in body mass index and to determine whether inclusion of body mass index enhanced use of the Braden scale in the prediction of pressure ulcers.
Methods
In this retrospective cohort study, data were collected from the medical records of 4 groups of patients with different body mass index values: underweight, normal weight, obese, and extremely obese. Data included patients’ demographics, body weight, score on the Braden scale, and occurrence of pressure ulcers.
Results
The incidence of pressure ulcers in the underweight, normal weight, obese, and extremely obese groups was 8.6%, 5.5%, 2.8%, and 9.9%, respectively. When both the score on the Braden scale and the body mass index were used to predict pressure ulcers, extremely obese patients were about 2 times more likely to experience an ulcer than were normal weight patients. In the final model, the area under the curve was 0.71. The baseline area under the curve for the Braden scale was 0.68.
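To make concrete what adding body mass index to the Braden scale means analytically, the sketch below fits nested logistic models (Braden score alone versus Braden score plus an extreme-obesity indicator) and compares their AUCs. The data are simulated and the effect sizes arbitrary; this is an illustration of the comparison, not a re-analysis of the study.

```python
# Minimal sketch: comparing discrimination (AUC) of nested logistic models, with and
# without a BMI-derived predictor added to the Braden score. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 3000
braden = rng.integers(6, 24, size=n).astype(float)
bmi = rng.normal(27, 6, size=n)
extreme_obese = (bmi >= 40).astype(float)
logit = -1.5 - 0.15 * (braden - 15) + 0.7 * extreme_obese   # arbitrary simulated effects
ulcer = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_base = braden.reshape(-1, 1)
X_full = np.column_stack([braden, extreme_obese])
base = LogisticRegression(max_iter=1000).fit(X_base, ulcer)
full = LogisticRegression(max_iter=1000).fit(X_full, ulcer)

print("Braden only:", round(roc_auc_score(ulcer, base.predict_proba(X_base)[:, 1]), 3))
print("Braden + BMI group:", round(roc_auc_score(ulcer, full.predict_proba(X_full)[:, 1]), 3))
```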
Conclusions
Body mass index and incidence of pressure ulcers were related in intensive care patients. Addition of body mass index did not appreciably improve the accuracy of the Braden scale for predicting pressure ulcers.
doi:10.4037/ajcc2014535
PMCID: PMC4385001  PMID: 25362673
5.  Development of the interRAI Pressure Ulcer Risk Scale (PURS) for use in long-term care and home care settings 
BMC Geriatrics  2010;10:67.
Background
In long-term care (LTC) homes in the province of Ontario, implementation of the Minimum Data Set (MDS) assessment and The Braden Scale for predicting pressure ulcer risk were occurring simultaneously. The purpose of this study was, using available data sources, to develop a bedside MDS-based scale to identify individuals under care at various levels of risk for developing pressure ulcers in order to facilitate targeting risk factors for prevention.
Methods
Data for developing the interRAI Pressure Ulcer Risk Scale (interRAI PURS) were available from 2 Ontario sources: three LTC homes with 257 residents assessed during the same time frame with the MDS and Braden Scale for Predicting Pressure Sore Risk, and eighty-nine Ontario LTC homes with 12,896 residents with baseline/reassessment MDS data (median time 91 days), between 2005-2007. All assessments were done by trained clinical staff, and baseline assessments were restricted to those with no recorded pressure ulcer. MDS baseline/reassessment samples used in further testing included 13,062 patients of Ontario Complex Continuing Care Hospitals (CCC) and 73,183 Ontario long-stay home care (HC) clients.
Results
A data-informed Braden Scale cross-walk scale using MDS items was devised from the 3-facility dataset and tested in the larger longitudinal LTC homes data for its association with a future new pressure ulcer, giving a c-statistic of 0.676. Informed by this, the LTC homes data along with evidence from the clinical literature were used to create an alternate-form 7-item additive scale, the interRAI PURS, with good distributional characteristics and a c-statistic of 0.708. Testing of the scale in CCC and HC longitudinal data showed a strong association with development of a new pressure ulcer.
Conclusions
interRAI PURS differentiates risk of developing pressure ulcers among facility-based residents and home care recipients. As an output from an MDS assessment, it eliminates duplicated effort required for separate pressure ulcer risk scoring. Moreover, it can be done manually at the bedside during critical early days in an admission when the full MDS has yet to be completed. It can be calculated with established MDS instruments as well as with the newer interRAI suite instruments designed to follow persons across various care settings (interRAI Long-Term Care Facilities, interRAI Home Care, interRAI Palliative Care).
doi:10.1186/1471-2318-10-67
PMCID: PMC2955034  PMID: 20854670
6.  Prospective cohort study of routine use of risk assessment scales for prediction of pressure ulcers 
BMJ : British Medical Journal  2002;325(7368):797.
Objective
To evaluate whether risk assessment scales can be used to identify patients who are likely to get pressure ulcers.
Design
Prospective cohort study.
Setting
Two large hospitals in the Netherlands.
Participants
1229 patients admitted to the surgical, internal, neurological, or geriatric wards between January 1999 and June 2000.
Main outcome measure
Occurrence of a pressure ulcer of grade 2 or worse while in hospital.
Results
135 patients developed pressure ulcers during four weeks after admission. The weekly incidence of patients with pressure ulcers was 6.2% (95% confidence interval 5.2% to 7.2%). The area under the receiver operating characteristic curve was 0.56 (0.51 to 0.61) for the Norton scale, 0.55 (0.49 to 0.60) for the Braden scale, and 0.61 (0.56 to 0.66) for the Waterlow scale; the areas for the subpopulation, excluding patients who received preventive measures without developing pressure ulcers and excluding surgical patients, were 0.71 (0.65 to 0.77), 0.71 (0.64 to 0.78), and 0.68 (0.61 to 0.74), respectively. In this subpopulation, using the recommended cut-off points, the positive predictive value was 7.0% for the Norton, 7.8% for the Braden, and 5.3% for the Waterlow scale.
Conclusion
Although risk assessment scales predict the occurrence of pressure ulcers to some extent, routine use of these scales leads to inefficient use of preventive measures. An accurate risk assessment scale based on prospectively gathered data should be developed.
What is already known on this topic
The incidence of pressure ulcers in hospitalised patients varies between 2.7% and 29.5%
Guidelines for prevention of pressure ulcers base the allocation of labour and resource intensive measures on the outcome of risk assessment scales
Most risk assessment scales are based on expert opinion or literature review and have not been evaluated
The sensitivity and specificity of risk assessment scales vary
What this study adds
The effectiveness of available risk assessment scales is limited
Use of the outcome of risk assessment scales leads to inefficient allocation of preventive measures
PMCID: PMC128943  PMID: 12376437
7.  Utility of Braden Scale Nutrition Subscale Ratings as an Indicator of Dietary Intake and Weight Outcomes among Nursing Home Residents at Risk for Pressure Ulcers 
Healthcare  2015;3(4):879-897.
The Braden Scale for Pressure Sore Risk© is a screening tool to determine overall risk of pressure ulcer development and estimate severity of specific risk factors for individual residents. Nurses often use the Braden nutrition subscale to screen nursing home (NH) residents for nutritional risk, and then recommend a more comprehensive nutritional assessment as indicated. Secondary data analysis from the Turn for Ulcer ReductioN (TURN) study’s investigation of U.S. and Canadian NH residents (n = 690) considered at moderate or high pressure ulcer (PrU) risk was used to evaluate the subscale’s utility for identifying nutritional intake risk factors. Associations were examined between Braden Nutritional Risk subscale screening, dietary intake (mean % meal intake and by meal timing, mean number of protein servings, protein sources, % intake of supplements and snacks), weight outcomes, and new PrU incidence. Of moderate and high PrU risk residents, 61.9% and 59.2%, respectively, ate a mean of <75% of meals. Fewer than 18% overall ate <50% of meals or refused meals. No significant differences were observed in weight outcomes by nutrition subscale risk or in mean number of protein servings per meal (1.4 (SD = 0.58) versus 1.3 (SD = 0.53)) for moderate versus high PrU risk residents. The nutrition subscale approximates subsequent estimated dietary intake and can provide insight into meal intake patterns for those at either moderate or high PrU risk. Findings support the Braden Scale’s use as a preliminary screening method to identify focused areas for potential intervention.
doi:10.3390/healthcare3040879
PMCID: PMC4934619  PMID: 27417802
nutrition; nutritional risk; pressure ulcers; Braden Scale; nursing home; TURN Study
8.  Enhancement of Decision Rules to Increase Generalizability and Performance of the Rule-Based System Assessing Risk for Pressure Ulcer 
Applied Clinical Informatics  2013;4(2):251-266.
Background
A rule-based system, the Braden Scale based Automated Risk Assessment Tool (BART), was developed to assess risk for pressure ulcer in a previous study. However, the BART had two major areas in need of improvement: 1) enhancement of its decision rules and 2) validation of its generalizability to increase the performance of BART.
Objectives
To enhance decision rules and validate generalizability of the enhanced BART.
Method
Two layers of decision rule enhancement were performed: 1) finding additional data items with the experts and 2) validating the logic of the decision rules using a guideline modeling language. To refine the decision rules of the BART further, a survey study was conducted to ascertain the operational level of the patient status descriptions of the Braden Scale. The enhanced BART (BART2) was designed to assess the levels of pressure ulcer risk of patients (N = 99) whose data were collected by the nurses. Each patient's level of pressure ulcer risk was assessed by the nurses using a Braden Scale, by an expert using a Braden Scale, and by the automatic BART2 electronic risk assessment. SPSS statistical software version 20 (IBM, 2011) was used to test the agreement between the three different risk assessments performed on each patient.
Results
The level of agreement between the BART2 and the expert pressure ulcer assessments was “very good” (0.83). The sensitivity and the specificity of the BART2 were 86.8% and 90.3%, respectively.
Conclusion
This study illustrated successful enhancement of decision rules and increased generalizability and performance of the BART2. Although the BART2 showed a “very good” level of agreement (kappa = 0.83) with an expert, the data reveal a need to improve the moisture parameter of the Braden Scale. Once the moisture parameter has been improved, BART2 will improve the quality of care, while accurately identifying the patients at risk for pressure ulcers.
doi:10.4338/ACI-2012-12-RA-0056
PMCID: PMC3716416  PMID: 23874362
Generalizability; decision support system; guideline interchange format; pressure ulcer risk
9.  Venous Thrombosis Risk after Cast Immobilization of the Lower Extremity: Derivation and Validation of a Clinical Prediction Score, L-TRiP(cast), in Three Population-Based Case–Control Studies 
PLoS Medicine  2015;12(11):e1001899.
Background
Guidelines and clinical practice vary considerably with respect to thrombosis prophylaxis during plaster cast immobilization of the lower extremity. Identifying patients at high risk for the development of venous thromboembolism (VTE) would provide a basis for considering individual thromboprophylaxis use and planning treatment studies.
The aims of this study were (1) to investigate the predictive value of genetic and environmental risk factors, levels of coagulation factors, and other biomarkers for the occurrence of VTE after cast immobilization of the lower extremity and (2) to develop a clinical prediction tool for the prediction of VTE in plaster cast patients.
Methods and Findings
We used data from a large population-based case–control study (MEGA study, 4,446 cases with VTE, 6,118 controls without) designed to identify risk factors for a first VTE. Cases were recruited from six anticoagulation clinics in the Netherlands between 1999 and 2004; controls were their partners or individuals identified via random digit dialing. Identification of predictor variables to be included in the model was based on reported associations in the literature or on a relative risk (odds ratio) > 1.2 and p ≤ 0.25 in the univariate analysis of all participants. Using multivariate logistic regression, a full prediction model was created. In addition to the full model (all variables), a restricted model (minimum number of predictors with a maximum predictive value) and a clinical model (environmental risk factors only, no blood draw or assays required) were created. To determine the discriminatory power in patients with cast immobilization (n = 230), the area under the curve (AUC) was calculated by means of a receiver operating characteristic. Validation was performed in two other case–control studies of the etiology of VTE: (1) the THE-VTE study, a two-center, population-based case–control study (conducted in Leiden, the Netherlands, and Cambridge, United Kingdom) with 784 cases and 523 controls included between March 2003 and December 2008 and (2) the Milan study, a population-based case–control study with 2,117 cases and 2,088 controls selected between December 1993 and December 2010 at the Thrombosis Center, Fondazione IRCCS Ca’ Granda–Ospedale Maggiore Policlinico, Milan, Italy.
The full model consisted of 32 predictors, including three genetic factors and six biomarkers. For this model, an AUC of 0.85 (95% CI 0.77–0.92) was found in individuals with plaster cast immobilization of the lower extremity. The AUC for the restricted model (containing 11 predictors, including two genetic factors and one biomarker) was 0.84 (95% CI 0.77–0.92). The clinical model (consisting of 14 environmental predictors) resulted in an AUC of 0.77 (95% CI 0.66–0.87). The clinical model was converted into a risk score, the L-TRiP(cast) score (Leiden–Thrombosis Risk Prediction for patients with cast immobilization score), which showed an AUC of 0.76 (95% CI 0.66–0.86). Validation in the THE-VTE study data resulted in an AUC of 0.77 (95% CI 0.58–0.96) for the L-TRiP(cast) score. Validation in the Milan study resulted in an AUC of 0.93 (95% CI 0.86–1.00) for the full model, an AUC of 0.92 (95% CI 0.76–0.87) for the restricted model, and an AUC of 0.96 (95% CI 0.92–0.99) for the clinical model. The L-TRiP(cast) score resulted in an AUC of 0.95 (95% CI 0.91–0.99).
Major limitations of this study were that information on thromboprophylaxis was not available for patients who had plaster cast immobilization of the lower extremity and that blood was drawn 3 mo after the thrombotic event.
Conclusions
These results show that information on environmental risk factors, coagulation factors, and genetic determinants in patients with plaster casts leads to high accuracy in the prediction of VTE risk. In daily practice, the clinical model may be the preferred model as its factors are most easy to determine, while the model still has good predictive performance. These results may provide guidance for thromboprophylaxis and form the basis for a management study.
Using three population-based case-control studies, Banne Nemeth and colleagues derive and validate a clinical prediction score (L-TRiP(cast)) for venous thrombosis risk.
Editors' Summary
Background
Blood normally flows smoothly around the human body, but when a cut or other injury occurs, proteins called clotting factors make the blood gel (coagulate) at the injury site. The resultant clot (thrombus) plugs the wound and prevents blood loss. Sometimes, however, a thrombus forms inside an uninjured blood vessel and partly or completely blocks the blood flow. Clot formation inside one of the veins deep in the body (usually in a leg) is called deep vein thrombosis (DVT). DVT, which can cause pain, swelling, and redness in the affected limb, is treated with anticoagulants, drugs that stop the clot growing. If left untreated, part of the clot can break off and travel to the lungs, where it can cause a life-threatening pulmonary embolism. DVT and pulmonary embolism are known collectively as venous thromboembolism (VTE). Risk factors for VTE include age, oral contraceptive use, having an inherited blood clotting disorder, and prolonged inactivity (for example, being bedridden). An individual’s lifetime risk of developing VTE is about 11%; 10%–30% of people die within 28 days of diagnosis of VTE.
Why Was This Study Done?
Clinicians cannot currently accurately predict who will develop VTE, but it would be very helpful to be able to identify individuals at high risk for VTE because the condition can be prevented by giving anticoagulants before a clot forms (thromboprophylaxis). The ability to predict VTE would be particularly useful in patients who have had a lower limb immobilized in a cast after, for example, breaking a bone. These patients have an increased risk of VTE compared to patients without cast immobilization. However, their absolute risk of VTE is not high enough to justify giving everyone with a leg cast thromboprophylaxis because this therapy increases the risk of major bleeds. Here, the researchers investigate the predictive value of genetic and environmental factors and levels of coagulation factors and other biomarkers on VTE occurrence after cast immobilization of the lower leg and develop a clinical tool for the prediction of VTE in patients with plaster casts.
What Did the Researchers Do and Find?
The researchers used data from the MEGA study, a study of risk factors for VTE, to build a prediction model for a first VTE in patients with a leg cast; the prediction model included 32 predictors (the full model). They also built a restricted model, which included only 11 predictors but had maximum predictive value, and a clinical model, which included 14 environmental predictors that can all be determined without drawing blood or undertaking any assays. They then determined the ability of each model to distinguish between patients with a leg cast who did and did not develop VTE using receiver operating characteristic (ROC) curve analysis. The area under the curve (AUC) for the full model was 0.85, for the restricted model it was 0.85, and for the clinical model it was 0.77. (A predictive test that discriminates perfectly between individuals who do and do not subsequently develop a specific condition has an AUC of 1.00; a test that is no better at predicting outcomes than flipping a coin has an AUC of 0.5.) Similar or higher AUCs were obtained for all the models using data collected in two independent studies. Finally, the researchers converted the clinical model into a risk score by giving each variable in the model a numerical score. The sum of these scores was used to stratify individuals into categories of low or high risk for VTE. With a cutoff of 9 points, the risk score correctly identified 80.8% of the patients in the MEGA study with a plaster cast who developed VTE and 60.8% of the patients who did not develop VTE.
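The conversion of a regression model into a bedside point score, mentioned above, is usually done by scaling and rounding the model coefficients. The sketch below shows that generic step with hypothetical predictor names, coefficients, and scaling; it is not the published L-TRiP(cast) scoring scheme.

```python
# Minimal sketch: turning logistic-regression coefficients into an additive integer
# point score and applying a cutoff. Predictor names, coefficients, and the scaling
# factor are hypothetical; this is not the published L-TRiP(cast) score.
def coefficients_to_points(coefs, scale=2.0):
    """Scale log-odds coefficients and round to integer points."""
    return {name: int(round(beta * scale)) for name, beta in coefs.items()}

coefs = {"age_over_55": 0.9, "bmi_over_30": 0.7, "oral_contraceptive_use": 1.3,
         "recent_surgery": 1.1, "family_history_vte": 0.8}        # hypothetical
points = coefficients_to_points(coefs)

patient = {"age_over_55": 1, "bmi_over_30": 0, "oral_contraceptive_use": 1,
           "recent_surgery": 1, "family_history_vte": 0}           # hypothetical patient
score = sum(points[k] * present for k, present in patient.items())
high_risk = score >= 9      # e.g., the 9-point style of cutoff described above
print(points, score, high_risk)
```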
What Do These Findings Mean?
Some aspects of this study may limit the accuracy of its findings. For example, no information was available about which patients with a plaster cast received thromboprophylaxis. Nevertheless, these findings suggest that information on environmental risk factors, coagulation factors, and genetic determinants can be used to predict VTE risk in patients with a leg cast with high accuracy. Importantly, the risk score derived and validated by the researchers, which includes only predictors that can be easily determined in clinical practice, may help clinicians decide which patients with a leg cast should receive thromboprophylaxis and which should not be exposed to the risk of anticoagulant therapy, until an unambiguous guideline for these patients becomes available.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001899.
The US National Heart, Lung, and Blood Institute provides information on deep vein thrombosis (including an animation about how DVT causes pulmonary embolisms) and on pulmonary embolism
The UK National Health Service Choices website has information on deep vein thrombosis (including personal stories) and on pulmonary embolism
The US non-profit organization National Blood Clot Alliance provides detailed information about deep vein thrombosis and pulmonary embolism for patients and professionals and includes a selection of personal stories about these conditions
MedlinePlus has links to further information about deep vein thrombosis and pulmonary embolism (in English and Spanish)
Wikipedia has a page on ROC curve analysis (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
More information about the MEGA study is available
doi:10.1371/journal.pmed.1001899
PMCID: PMC4640574  PMID: 26554832
10.  Exploring Predictors of Complication in Older Surgical Patients: A Deficit Accumulation Index and the Braden Scale 
OBJECTIVES
To determine whether readily collected perioperative information might identify older surgical patients at higher risk for complication.
DESIGN
Retrospective cohort study
SETTING
Medical chart review at a single academic institution
PARTICIPANTS
102 patients aged 65 years and older who underwent abdominal surgery between January 2007 and December 2009.
MEASUREMENTS
Primary predictor variables were the first postoperative Braden Scale score (within 24 hours of surgery) and a Deficit Accumulation Index (DAI) constructed based on 39 available preoperative variables. The primary outcome was presence or absence of complication within 30 days of surgery date.
RESULTS
Of 102 patients, 64 experienced at least one complication, with wound infection being the most common complication. In models adjusted for age, race, sex, and open vs. laparoscopic surgery, lower Braden Scale scores were predictive of 30-day postoperative complication (OR 1.30 [CI 95%, 1.06, 1.60]), longer length of stay (β = 1.44 (0.25) days; p ≤ 0.0001), and discharge to institution rather than home (OR 1.23 [CI 95%, 1.02, 1.48]). The cut-off value for the Braden Score with the highest predictive value for complication was ≤ 18 (OR 3.63 [CI 95%, 1.43, 9.19]; c statistic of 0.744). The DAI and several traditional surgical risk factors were not significantly associated with 30-day postoperative complications in this cohort.
CONCLUSION
This is the first study to identify the perioperative score on the Braden Scale, a widely used risk-stratifier for pressure ulcers, as an independent predictor of other adverse outcomes in geriatric surgical patients. Further studies are needed to confirm this finding as well as to investigate other uses for this tool, which correlates well with phenotypic models of frailty.
doi:10.1111/j.1532-5415.2012.04109.x
PMCID: PMC3445658  PMID: 22906222
Braden Scale; Deficit Accumulation Index; Postoperative Complication; Frailty; Multi-disciplinary
11.  Clinical Utility of Serologic Testing for Celiac Disease in Ontario 
Executive Summary
Objective of Analysis
The objective of this evidence-based evaluation is to assess the accuracy of serologic tests in the diagnosis of celiac disease in subjects with symptoms consistent with this disease. Furthermore, the impact of these tests on the diagnostic pathway of the disease and on decision making was also evaluated.
Celiac Disease
Celiac disease is an autoimmune disease that develops in genetically predisposed individuals. The immunological response is triggered by ingestion of gluten, a protein that is present in wheat, rye, and barley. The treatment consists of strict lifelong adherence to a gluten-free diet (GFD).
Patients with celiac disease may present with a myriad of symptoms such as diarrhea, abdominal pain, weight loss, iron deficiency anemia, dermatitis herpetiformis, among others.
Serologic Testing in the Diagnosis of Celiac Disease
There are a number of serologic tests used in the diagnosis of celiac disease.
Anti-gliadin antibody (AGA)
Anti-endomysial antibody (EMA)
Anti-tissue transglutaminase antibody (tTG)
Anti-deamidated gliadin peptides antibodies (DGP)
Serologic tests are automated, with the exception of the EMA test, which is more time-consuming and operator-dependent than the other tests. For each serologic test, either immunoglobulin A (IgA) or immunoglobulin G (IgG) can be measured; however, IgA is the standard antibody measured in celiac disease.
Diagnosis of Celiac Disease
According to celiac disease guidelines, the diagnosis of celiac disease is established by small bowel biopsy. Serologic tests are used to initially detect and to support the diagnosis of celiac disease. A small bowel biopsy is indicated in individuals with a positive serologic test. In some cases an endoscopy and small bowel biopsy may be required even with a negative serologic test. The diagnosis of celiac disease must be performed on a gluten-containing diet since the small intestine abnormalities and the serologic antibody levels may resolve or improve on a GFD.
Since IgA measurement is the standard for the serologic celiac disease tests, false negatives may occur in IgA-deficient individuals.
Incidence and Prevalence of Celiac Disease
The incidence and prevalence of celiac disease in the general population and in subjects with symptoms consistent with or at higher risk of celiac disease based on systematic reviews published in 2004 and 2009 are summarized below.
Incidence of Celiac Disease in the General Population
Adults or mixed population: 1 to 17/100,000/year
Children: 2 to 51/100,000/year
In one of the studies, a stratified analysis showed that there was a higher incidence of celiac disease in younger children compared to older children, i.e., 51 cases/100,000/year in 0 to 2 year-olds, 33/100,000/year in 2 to 5 year-olds, and 10/100,000/year in children 5 to 15 years old.
Prevalence of Celiac Disease in the General Population
The prevalence of celiac disease reported in population-based studies identified in the 2004 systematic review varied between 0.14% and 1.87% (median: 0.47%, interquartile range: 0.25%, 0.71%). According to the authors of the review, the prevalence did not vary by age group, i.e., adults and children.
Prevalence of Celiac Disease in High Risk Subjects
Type 1 diabetes (adults and children): 1 to 11%
Autoimmune thyroid disease: 2.9 to 3.3%
First degree relatives of patients with celiac disease: 2 to 20%
Prevalence of Celiac Disease in Subjects with Symptoms Consistent with the Disease
The prevalence of celiac disease in subjects with symptoms consistent with the disease varied widely among studies, i.e., 1.5% to 50% in adult studies, and 1.1% to 17% in pediatric studies. Differences in prevalence may be related to the referral pattern as the authors of a systematic review noted that the prevalence tended to be higher in studies whose population originated from tertiary referral centres compared to general practice.
Research Questions
What is the sensitivity and specificity of serologic tests in the diagnosis of celiac disease?
What is the clinical validity of serologic tests in the diagnosis of celiac disease? The clinical validity was defined as the ability of the test to change diagnosis.
What is the clinical utility of serologic tests in the diagnosis of celiac disease? The clinical utility was defined as the impact of the test on decision making.
What is the budget impact of serologic tests in the diagnosis of celiac disease?
What is the cost-effectiveness of serologic tests in the diagnosis of celiac disease?
Methods
Literature Search
A literature search was performed on November 13, 2009 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published between January 1, 2003 and November 13, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, then a group of epidemiologists until consensus was established. The quality of evidence was assessed as high, moderate, low, or very low according to GRADE methodology.
Inclusion Criteria
Studies that evaluated diagnostic accuracy, i.e., both sensitivity and specificity of serology tests in the diagnosis of celiac disease.
Study population consisted of untreated patients with symptoms consistent with celiac disease.
Studies in which both serologic celiac disease tests and small bowel biopsy (gold standard) were used in all subjects.
Systematic reviews, meta-analyses, randomized controlled trials, prospective observational studies, and retrospective cohort studies.
At least 20 subjects included in the celiac disease group.
English language.
Human studies.
Studies published from 2000 on.
Clearly defined cut-off value for the serology test. If more than one test was evaluated, only those tests for which a cut-off was provided were included.
Description of the small bowel biopsy procedure clearly outlined (location, number of biopsies per patient), unless it was specified that celiac disease diagnosis guidelines were followed.
Patients in the treatment group had untreated CD.
Exclusion Criteria
Studies on screening of the general asymptomatic population.
Studies that evaluated rapid diagnostic kits for use either at home or in physician’s offices.
Studies that evaluated diagnostic modalities other than serologic tests such as capsule endoscopy, push enteroscopy, or genetic testing.
Cut-off for serologic tests defined based on controls included in the study.
Study population defined based on positive serology or subjects pre-screened by serology tests.
Celiac disease status known before study enrolment.
Sensitivity or specificity estimates based on repeated testing for the same subject.
Non-peer-reviewed literature such as editorials and letters to the editor.
Population
The population consisted of adults and children with untreated, undiagnosed celiac disease with symptoms consistent with the disease.
Serologic Celiac Disease Tests Evaluated
Anti-gliadin antibody (AGA)
Anti-endomysial antibody (EMA)
Anti-tissue transglutaminase antibody (tTG)
Anti-deamidated gliadin peptides antibody (DGP)
Combinations of some of the serologic tests listed above were evaluated in some studies
Both IgA and IgG antibodies were evaluated for the serologic tests listed above.
Outcomes of Interest
Sensitivity
Specificity
Positive and negative likelihood ratios
Diagnostic odds ratio (OR)
Area under the sROC curve (AUC)
Small bowel biopsy was used as the gold standard in order to estimate the sensitivity and specificity of each serologic test.
Statistical Analysis
Pooled estimates of sensitivity, specificity, and diagnostic odds ratios (DORs) for the different serologic tests were calculated using a bivariate, binomial generalized linear mixed model. Statistical significance for differences in sensitivity and specificity between serologic tests was defined by P values less than 0.05, where “false discovery rate” adjustments were made for multiple hypothesis testing. The bivariate regression analyses were performed using SAS version 9.2 (SAS Institute Inc.; Cary, NC, USA). Using the bivariate model parameters, summary receiver operating characteristic (sROC) curves were produced using Review Manager 5.0.22 (The Nordic Cochrane Centre, The Cochrane Collaboration, 2008). The area under the sROC curve (AUC) was estimated within a bivariate mixed-effects binary regression modeling framework. Model specification, estimation, and prediction were carried out with xtmelogit in Stata release 10 (StataCorp, 2007). Statistical tests for the differences in AUC estimates could not be carried out.
The study results were stratified according to patient or disease characteristics, such as age and severity of Marsh grade abnormalities, if reported in the studies. The literature indicates that the diagnostic accuracy of serologic tests for celiac disease may be affected in patients with chronic liver disease; therefore, the studies identified through the systematic literature review that evaluated the diagnostic accuracy of serologic tests for celiac disease in patients with chronic liver disease were summarized. The effect of the GFD in patients diagnosed with celiac disease was also summarized if reported in the studies eligible for the analysis.
Summary of Findings
Published Systematic Reviews
Five systematic reviews of studies that evaluated the diagnostic accuracy of serologic celiac disease tests were identified through our literature search. Seventeen individual studies identified in adults and children were eligible for this evaluation.
In general, the included studies evaluated the sensitivity and specificity of at least one serologic test in subjects with symptoms consistent with celiac disease. The gold standard used to confirm the celiac disease diagnosis was small bowel biopsy. Serologic tests evaluated included tTG, EMA, AGA, and DGP, using either IgA or IgG antibodies. Indirect immunofluorescence was used for the EMA serologic tests, whereas enzyme-linked immunosorbent assay (ELISA) was used for the other serologic tests.
Common symptoms described in the studies were chronic diarrhea, abdominal pain, bloating, unexplained weight loss, unexplained anemia, and dermatitis herpetiformis.
The main conclusions of the published systematic reviews are summarized below.
IgA tTG and/or IgA EMA have a high accuracy (pooled sensitivity: 90% to 98%, pooled specificity: 95% to 99% depending on the pooled analysis).
Most reviews found that AGA (IgA or IgG) are not as accurate as IgA tTG and/or EMA tests.
A 2009 systematic review concluded that DGP (IgA or IgG) seems to have a similar accuracy compared to tTG, however, since only 2 studies identified evaluated its accuracy, the authors believe that additional data is required to draw firm conclusions.
Two systematic reviews also concluded that combining two serologic celiac disease tests contributes little to the accuracy of the diagnosis.
MAS Analysis
Sensitivity
The pooled analysis performed by MAS showed that IgA tTG has a sensitivity of 92.1% [95% confidence interval (CI) 88.0, 96.3], compared to 89.2% (83.3, 95.1, p=0.12) for IgA DGP, 85.1% (79.5, 94.4, p=0.07) for IgA EMA, and 74.9% (63.6, 86.2, p=0.0003) for IgA AGA. Among the IgG-based tests, the results suggest a sensitivity of 88.4% (95% CI: 82.1, 94.6) for IgG DGP, 44.7% (30.3, 59.2) for IgG tTG, and 69.1% (56.0, 82.2) for IgG AGA. The difference was significant when IgG DGP was compared to IgG tTG but not to IgG AGA. Combining serologic celiac disease tests yielded a slightly higher sensitivity compared to individual IgA-based serologic tests.
IgA deficiency
The prevalence of total or severe IgA deficiency was low in the studies identified, varying between 0% and 1.7%, as reported in 3 studies in which IgA deficiency was not used as a referral indication for celiac disease serologic testing. The results of IgG-based serologic tests were positive in all patients with IgA deficiency in whom celiac disease was confirmed by small bowel biopsy, as reported in four studies.
Specificity
The MAS pooled analysis indicates a high specificity across the different serologic tests, including the combination strategy; pooled estimates ranged from 90.1% to 98.7% depending on the test.
Likelihood Ratios
According to the likelihood ratio estimates, both IgA tTG and serologic test combinations were considered very useful tests (positive likelihood ratio above ten and negative likelihood ratio below 0.1); a worked example of these calculations follows this list.
Moderately useful tests included IgA EMA, IgA DGP, and IgG DGP (positive likelihood ratio between five and ten and the negative likelihood ratio between 0.1 and 0.2).
Somewhat useful tests: IgA AGA, IgG AGA, generating small but sometimes important changes from pre- to post-test probability (positive LR between 2 and 5 and negative LR between 0.2 and 0.5)
Not Useful: IgG tTG, altering pre- to post-test probability to a small and rarely important degree (positive LR between 1 and 2 and negative LR between 0.5 and 1).
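The usefulness categories above follow directly from the standard likelihood-ratio formulas. As a worked example (using illustrative sensitivity and specificity values, not the MAS pooled estimates), the sketch below computes LR+, LR−, and the resulting pre- to post-test probability shift.

```python
# Worked example: likelihood ratios and post-test probability from sensitivity and
# specificity. Illustrative numbers only, not the MAS pooled estimates.
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)       # LR+ = sens / (1 - spec)
    lr_neg = (1 - sensitivity) / specificity       # LR- = (1 - sens) / spec
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob, lr):
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

lr_pos, lr_neg = likelihood_ratios(sensitivity=0.92, specificity=0.97)
print(round(lr_pos, 1), round(lr_neg, 2))               # ~30.7 and ~0.08 -> "very useful"
print(round(post_test_probability(0.05, lr_pos), 2))    # positive test: ~0.62
print(round(post_test_probability(0.05, lr_neg), 3))    # negative test: ~0.004
```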
Diagnostic Odds Ratios (DOR)
Among the individual serologic tests, IgA tTG had the highest DOR, 136.5 (95% CI: 51.9, 221.2). The statistical significance of the difference in DORs among tests was not calculated, however, considering the wide confidence intervals obtained, the differences may not be statistically significant.
Area Under the sROC Curve (AUC)
The sROC AUCs obtained ranged between 0.93 and 0.99 for most IgA-based tests with the exception of IgA AGA, with an AUC of 0.89.
Sensitivity and Specificity of Serologic Tests According to Age Groups
Serologic test accuracy did not seem to vary according to age (adults or children).
Sensitivity and Specificity of Serologic Tests According to Marsh Criteria
Four studies observed a trend towards a higher sensitivity of serologic celiac disease tests when Marsh 3c grade abnormalities were found in the small bowel biopsy compared to Marsh 3a or 3b (statistical significance not reported). The sensitivity of serologic tests was much lower when Marsh 1 grade abnormalities were found in the small bowel biopsy compared to Marsh 3 grade abnormalities. The statistical significance of these findings was not reported in the studies.
Diagnostic Accuracy of Serologic Celiac Disease Tests in Subjects with Chronic Liver Disease
A total of 14 observational studies that evaluated the specificity of serologic celiac disease tests in subjects with chronic liver disease were identified. All studies evaluated the frequency of false positive results (1-specificity) of IgA tTG, however, IgA tTG test kits using different substrates were used, i.e., human recombinant, human, and guinea-pig substrates. The gold standard, small bowel biopsy, was used to confirm the result of the serologic tests in only 5 studies. The studies do not seem to have been designed or powered to compare the diagnostic accuracy among different serologic celiac disease tests.
The results of the studies identified in the systematic literature review suggest that there is a trend towards a lower frequency of false positive results if the IgA tTG test using human recombinant substrate is used compared to the guinea pig substrate in subjects with chronic liver disease. However, the statistical significance of the difference was not reported in the studies. When IgA tTG with human recombinant substrate was used, the number of false positives seems to be similar to what was estimated in the MAS pooled analysis for IgA-based serologic tests in a general population of patients. These results should be interpreted with caution since most studies did not use the gold standard, small bowel biopsy, to confirm or exclude the diagnosis of celiac disease, and since the studies were not designed to compare the diagnostic accuracy among different serologic tests. The sensitivity of the different serologic tests in patients with chronic liver disease was not evaluated in the studies identified.
Effects of a Gluten-Free Diet (GFD) in Patients Diagnosed with Celiac Disease
Six studies identified evaluated the effects of GFD on clinical, histological, or serologic improvement in patients diagnosed with celiac disease. Improvement was observed in 51% to 95% of the patients included in the studies.
Grading of Evidence
Overall, the quality of the evidence ranged from moderate to very low depending on the serologic celiac disease test. Reasons to downgrade the quality of the evidence included the use of a surrogate endpoint (diagnostic accuracy), since none of the studies evaluated clinical outcomes, inconsistencies among study results, imprecise estimates, and sparse data. The quality of the evidence was considered moderate for IgA tTG and IgA EMA, low for IgA DGP and serologic test combinations, and very low for IgA AGA.
Clinical Validity and Clinical Utility of Serologic Testing in the Diagnosis of Celiac Disease
The clinical validity of serologic tests in the diagnosis of celiac disease was considered high in subjects with symptoms consistent with this disease due to
High accuracy of some serologic tests.
Serologic tests detect possible celiac disease cases and avoid unnecessary small bowel biopsy if the test result is negative, unless an endoscopy/small bowel biopsy is necessary because of the clinical presentation.
Serologic tests support the results of small bowel biopsy.
The clinical utility of serologic tests for the diagnosis of celiac disease, as defined by their impact on decision making, was also considered high in subjects with symptoms consistent with this disease, given the considerations listed above and since a celiac disease diagnosis leads to treatment with a gluten-free diet.
Economic Analysis
A decision analysis was constructed to compare costs and outcomes between the tests based on the sensitivity, specificity, and prevalence summary estimates from the MAS Evidence-Based Analysis (EBA). A budget impact was then calculated by multiplying the expected costs and volumes in Ontario. The outcomes of the analysis were expected costs and false negatives (FN). Costs were reported in 2010 CAD$. All analyses were performed using TreeAge Pro Suite 2009.
Four strategies made up the efficiency frontier: IgG tTG, IgA tTG, EMA, and small bowel biopsy. All other strategies were dominated. IgG tTG was the least costly and least effective strategy ($178.95, FN avoided = 0). Small bowel biopsy was the most costly and most effective strategy ($396.60, FN avoided = 0.1553). The costs per FN avoided were $293, $369, and $1,401 for EMA, IgA tTG, and small bowel biopsy, respectively. One-way sensitivity analyses did not change the ranking of strategies.
All testing strategies with small bowel biopsy are cheaper than biopsy alone; however, they also result in more FNs. The most cost-effective strategy will depend on the decision makers’ willingness to pay. Findings suggest that IgA tTG was the most cost-effective and feasible strategy based on its incremental cost-effectiveness ratio (ICER) and the convenience of conducting the test. The potential impact of the IgA tTG test in the province of Ontario would be $10.4M, $11.0M, and $11.7M, respectively, in the following three years based on past volumes and trends in the province and base-case expected costs.
The panel of tests is the commonly used strategy in the province of Ontario; therefore, the impact to the system would be $13.6M, $14.5M, and $15.3M, respectively, in the next three years based on past volumes and trends in the province and base-case expected costs.
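The "cost per FN avoided" figures above are incremental cost-effectiveness ratios. The sketch below reproduces the arithmetic for one pairing of strategies using the two sets of figures quoted above; pairing small bowel biopsy against IgG tTG here is only an illustration and may not match the exact frontier comparisons used in the report.

```python
# Sketch of the incremental cost-effectiveness arithmetic behind "cost per FN avoided".
# The pairing of strategies below is illustrative; the report's frontier may compare
# each strategy against the next less costly non-dominated alternative.
def cost_per_fn_avoided(cost_ref, fn_avoided_ref, cost_new, fn_avoided_new):
    """Incremental cost divided by incremental false negatives avoided."""
    return (cost_new - cost_ref) / (fn_avoided_new - fn_avoided_ref)

# Figures quoted above: IgG tTG ($178.95, FN avoided = 0);
# small bowel biopsy ($396.60, FN avoided = 0.1553)
print(round(cost_per_fn_avoided(178.95, 0.0, 396.60, 0.1553)))   # ~1401
```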
Conclusions
The clinical validity and clinical utility of serologic tests for celiac disease were considered high in subjects with symptoms consistent with this disease, as the tests aid in the diagnosis of celiac disease and some have high accuracy.
The study findings suggest that IgA tTG is the most accurate and the most cost-effective test.
The AGA test (IgA) has a lower accuracy compared to other IgA-based tests.
Serologic test combinations appear to be more costly with little gain in accuracy. In addition there may be problems with generalizability of the results of the studies included in this review if different test combinations are used in clinical practice.
IgA deficiency seems to be uncommon in patients diagnosed with celiac disease.
The generalizability of study results is contingent on performing both the serologic test and small bowel biopsy in subjects on a gluten-containing diet as was the case in the studies identified, since the avoidance of gluten may affect test results.
PMCID: PMC3377499  PMID: 23074399
12.  Translation, adaptation, and validation of the Sunderland Scale and the Cubbin & Jackson Revised Scale in Portuguese 
Objective
To translate into Portuguese and evaluate the measuring properties of the Sunderland Scale and the Cubbin & Jackson Revised Scale, which are instruments for evaluating the risk of developing pressure ulcers during intensive care.
Methods
This study included the process of translation and adaptation of the scales to the Portuguese language, as well as the validation of these tools. To assess reliability, Cronbach alpha values of 0.702 and 0.708 were identified for the Sunderland Scale and the Cubbin & Jackson Revised Scale, respectively. Predictive validation was performed by comparison with the Braden Scale (gold standard), and the main measurements evaluated were sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve, which were calculated based on cutoff points.
Results
The Sunderland Scale exhibited 60% sensitivity, 86.7% specificity, 47.4% positive predictive value, 91.5% negative predictive value, and 0.86 for the area under the curve. The Cubbin & Jackson Revised Scale exhibited 73.3% sensitivity, 86.7% specificity, 52.4% positive predictive value, 94.2% negative predictive value, and 0.91 for the area under the curve. The Braden scale exhibited 100% sensitivity, 5.3% specificity, 17.4% positive predictive value, 100% negative predictive value, and 0.72 for the area under the curve.
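The predictive values above follow directly from a 2x2 table at the chosen cutoff; the following Python sketch shows the calculation, with made-up counts rather than the study data.

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 counts at a given cutoff."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for a cohort in which 15 patients developed ulcers and 98 did not
print(diagnostic_metrics(tp=11, fp=13, fn=4, tn=85))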
Conclusions
Both tools demonstrated reliability and validity for this sample. The Cubbin & Jackson Revised Scale yielded better predictive values for the development of pressure ulcers during intensive care.
doi:10.5935/0103-507X.20130021
PMCID: PMC4031838  PMID: 23917975
Validation studies; Risk assessment; Pressure ulcer/prevention & control; Pressure ulcer/nursing; Intensive care
13.  Reusability of EMR Data for Applying Cubbin and Jackson Pressure Ulcer Risk Assessment Scale in Critical Care Patients 
Healthcare Informatics Research  2013;19(4):261-270.
Objectives
The purposes of this study were to examine the predictive validity of the Cubbin and Jackson pressure ulcer risk assessment scale for the development of pressure ulcers in intensive care unit (ICU) patients retrospectively and to evaluate the reusability of Electronic Medical Records (EMR) data.
Methods
A retrospective design was used to examine 829 cases admitted to four ICUs in a tertiary care hospital from May 2010 to April 2011. Patients who were free of pressure ulcers at ICU admission, were 18 years or older, and had stayed in the ICU for 24 hours or longer were included. Sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve (AUC) were calculated.
Results
The reported incidence rate of pressure ulcers among the study subjects was 14.2%. At the cutoff score of 24 on the Cubbin and Jackson scale, the sensitivity, specificity, positive predictive value, negative predictive value, and AUC were 72.0%, 68.8%, 27.7%, 93.7%, and 0.76, respectively. Eight of the 10 items of the Cubbin and Jackson scale were readily available in the EMR data.
Conclusions
The Cubbin and Jackson scale performed slightly better than the Braden scale in predicting pressure ulcer development. Eight items of the Cubbin and Jackson scale (all except mobility and hygiene) can be extracted from the EMR, which initially demonstrates the reusability of EMR data for pressure ulcer risk assessment. If the Cubbin and Jackson scale were part of the EMR assessment form, it would help nurses work effectively to prevent pressure ulcers through an EMR alert for high-risk patients.
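A minimal Python sketch of the kind of EMR alert suggested above is given below; the item names, scores, and the assumption that lower totals indicate higher risk (as with the Braden scale) are illustrative rather than taken from the study.

CUTOFF = 24  # cutoff score reported in the study

def cubbin_jackson_total(item_scores):
    """Sum the per-item scores pulled from the EMR plus any manually entered items."""
    return sum(item_scores.values())

def high_risk_alert(item_scores):
    # Assumes, as with the Braden scale, that lower totals indicate higher risk
    return cubbin_jackson_total(item_scores) <= CUTOFF

example = {"age": 3, "weight": 3, "skin": 2, "mental_state": 2, "mobility": 2,
           "haemodynamics": 3, "respiration": 2, "nutrition": 2, "incontinence": 3,
           "hygiene": 2}  # hypothetical item names and scores
print(high_risk_alert(example))  # True -> trigger the EMR alert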
doi:10.4258/hir.2013.19.4.261
PMCID: PMC3920038  PMID: 24523990
Electronic Health Records; Pressure Ulcer; Risk Assessment; Nursing Assessment; Intensive Care Units
14.  Identification and replication of prediction models for ovulation, pregnancy and live birth in infertile women with polycystic ovary syndrome 
Human Reproduction (Oxford, England)  2015;30(9):2222-2233.
STUDY QUESTION
Can we build and validate predictive models for ovulation and pregnancy outcomes in infertile women with polycystic ovary syndrome (PCOS)?
SUMMARY ANSWER
We were able to develop and validate a predictive model for pregnancy outcomes in women with PCOS using simple clinical and biochemical criteria, particularly duration of attempting conception, which was the most consistent predictor of pregnancy outcomes among all factors considered.
WHAT IS KNOWN ALREADY
Predictive models for ovulation and pregnancy outcomes in infertile women with polycystic ovary syndrome have been reported, but such models require validation.
STUDY DESIGN, SIZE, AND DURATION
This is a secondary analysis of the data from the Pregnancy in Polycystic Ovary Syndrome I and II (PPCOS-I and -II) trials. Both trials were double-blind, randomized clinical trials that included 626 and 750 infertile women with PCOS, respectively. PPCOS-I participants were randomized to either clomiphene citrate (CC), metformin, or their combination, and PPCOS-II participants to either letrozole or CC for up to five treatment cycles.
PARTICIPANTS/MATERIALS, SETTING, AND METHODS
Logistic regression models were fitted using treatment, BMI, and other published variables as predictors, with ovulation, conception, clinical pregnancy, and live birth considered as the outcome one at a time. We first evaluated previously reported significant predictors and then constructed new prediction models. Receiver operating characteristic (ROC) curves were constructed, and the area under the curve (AUC) was calculated to compare performance across models and datasets. Chi-square tests were used to examine the goodness-of-fit and predictive power of the logistic regression models.
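As a schematic of this modelling step, the Python sketch below fits a logistic regression for one outcome and summarises discrimination with the AUC; the variable names and simulated data are placeholders, not the trial dataset.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "bmi": rng.normal(30, 5, n),
    "duration_attempting_conception": rng.normal(3, 1.5, n),
    "live_birth": rng.integers(0, 2, n),   # outcome considered one at a time
})

X = df[["treatment", "bmi", "duration_attempting_conception"]]
y = df["live_birth"]

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample AUC = {auc:.2f}")  # the paper reports validated AUCs of 0.66-0.76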
MAIN RESULTS AND THE ROLE OF CHANCE
Predictive factors were similar between PPCOS-I and II, but the two participant samples differed statistically significantly, though clinically minimally, on key baseline characteristics and hormone levels. Women in PPCOS-II had an overall more severe PCOS phenotype than women in PPCOS-I. The clinically minor but statistically significant differences may be due to the large sample sizes. Younger age, lower baseline free androgen index and insulin, shorter duration of attempting conception, and higher baseline sex hormone-binding globulin significantly predicted at least one pregnancy outcome. The ROC curves (AUCs of 0.66–0.76), calibration plots, and chi-square tests indicated stable predictive power of the identified variables (P-values ≥0.07 for all goodness-of-fit and validation tests).
LIMITATIONS, REASONS FOR CAUTION
This is a secondary analysis. Although our primary objective was to confirm previously reported results and identify new predictors of ovulation and pregnancy outcomes among PPCOS-II participants, our approach is exploratory and warrants further replication.
WIDER IMPLICATIONS OF THE FINDINGS
We have largely confirmed the predictors that were identified in the PPCOS-I trial. However, we have also revealed new predictors, particularly the role of smoking. While a history of ever smoking was not a significant predictor of live birth, a closer look at current, former, and never smoking status revealed that current smoking was a significant risk factor.
STUDY FUNDING/COMPETING INTEREST(S)
The Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Grants U10 HD27049, U10 HD38992, U10HD055925, U10 HD39005, U10 HD33172, U10 HD38998, U10 HD055936, U10 HD055942, and U10 HD055944; and U54-HD29834. Heilongjiang University of Chinese Medicine Grants 051277 and B201005. R.S.L. reports receiving consulting fees from Euroscreen, AstraZeneca, Clarus Therapeutics, and Takeda, and grant support from Ferring, Astra Zeneca, and Toba. K.R.H. reports receiving grant support from Roche Diagnostics and Ferring Pharmascience. G.C. reports receiving Honorarium and grant support from Abbvie Pharmaceuticals and Bayer Pharmaceuticals. M.P.D. holds equity from Advanced Reproductive Care Inc. and DS Biotech, receives fees from Advanced Reproductive Care Inc., Actamax, Auxogyn, ZSX Medical, Halt Medical, and Neomed, and receives grant support from Boehringer-Ingelheim, Abbott, and BioSante, Ferring Pharmaceuticals, and EMD Serono. H.Z. receives research support from the Chinese 1000-scholar plan. Others report no disclosures other than NIH grant support.
TRIAL REGISTRATION NUMBER
PPCOS-I and -II were respectively registered at Clinicaltrials.gov: NCT00719186 and NCT00719186.
doi:10.1093/humrep/dev182
PMCID: PMC4542721  PMID: 26202922
mathematical modeling; calibration; prediction; receiver operating characteristic; polycystic ovaries; pregnancy; conception; live birth
15.  Continence Index: A new screening questionnaire to predict the probability of future incontinence in older women in the community 
International urology and nephrology  2015;47(7):1091-1097.
Purpose
Urinary incontinence (UI) is a chronic, costly condition that impairs quality of life. To identify older women most at risk, the Medical Epidemiologic and Social aspects of Aging (MESA) dataset was mined to create a set of questions that can reliably predict future UI.
Methods
MESA data were collected during four household interviews at approximately one-year intervals. Factors associated with becoming incontinent by the second interview (HH2) were identified using logistic regression (construction dataset). Based on p-values and odds ratios, eight potential predictive factors, their 256 combinations, and the corresponding prediction probabilities formed the Continence Index. Its predictive and discriminatory capability was tested against the same cohort's outcome in the fourth survey (HH4, validation dataset). Sensitivity analysis, area under the receiver operating characteristic (ROC) curve, predicted probabilities, and confidence intervals were used to statistically validate the Continence Index.
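To illustrate how eight binary predictors generate 256 risk-profile combinations with corresponding prediction probabilities, the Python sketch below enumerates the profiles under a logistic model; the coefficients are hypothetical, not the MESA-derived estimates.

from itertools import product
import math

intercept = -2.5
coefs = [0.6, 0.4, 0.5, 0.7, 0.9, 0.3, 0.4, 0.2]  # one assumed coefficient per binary predictor

def predicted_probability(profile):
    logit = intercept + sum(b * x for b, x in zip(coefs, profile))
    return 1.0 / (1.0 + math.exp(-logit))

index = {profile: predicted_probability(profile) for profile in product((0, 1), repeat=8)}
print(len(index))                                  # 256 combinations
print(round(index[(1, 1, 0, 0, 1, 0, 0, 0)], 3))   # predicted probability for one risk profile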
Results
Body mass index, sneezing, postpartum UI, urinary frequency, mild UI, belief that UI will develop in the future, difficulty stopping the urinary stream, and difficulty remembering names emerged as the strongest predictors of UI. The confidence intervals for prediction probabilities agreed strongly between the construction and validation datasets. The areas under the ROC curves (0.802 and 0.799 for the construction and validation datasets, respectively), together with the calculated sensitivity, specificity, false positive, and false negative values, indicated good discriminatory capability of the Index as a predictor.
Conclusions
The Continence Index will help identify older women at highest risk for UI so that targeted prevention strategies can be applied to the women most likely to benefit.
doi:10.1007/s11255-015-1006-0
PMCID: PMC4485523  PMID: 25982584
16.  Risk Models to Predict Chronic Kidney Disease and Its Progression: A Systematic Review 
PLoS Medicine  2012;9(11):e1001344.
A systematic review of risk prediction models conducted by Justin Echouffo-Tcheugui and Andre Kengne examines the evidence base for prediction of chronic kidney disease risk and its progression, and suitability of such models for clinical use.
Background
Chronic kidney disease (CKD) is common, and associated with increased risk of cardiovascular disease and end-stage renal disease, which are potentially preventable through early identification and treatment of individuals at risk. Although risk factors for occurrence and progression of CKD have been identified, their utility for CKD risk stratification through prediction models remains unclear. We critically assessed risk models to predict CKD and its progression, and evaluated their suitability for clinical use.
Methods and Findings
We systematically searched MEDLINE and Embase (1 January 1980 to 20 June 2012). Dual review was conducted to identify studies that reported on the development, validation, or impact assessment of a model constructed to predict the occurrence/presence of CKD or progression to advanced stages. Data were extracted on study characteristics, risk predictors, discrimination, calibration, and reclassification performance of models, as well as validation and impact analyses. We included 26 publications reporting on 30 CKD occurrence prediction risk scores and 17 CKD progression prediction risk scores. The vast majority of CKD risk models had acceptable-to-good discriminatory performance (area under the receiver operating characteristic curve>0.70) in the derivation sample. Calibration was less commonly assessed, but overall was found to be acceptable. Only eight CKD occurrence and five CKD progression risk models have been externally validated, displaying modest-to-acceptable discrimination. Whether novel biomarkers of CKD (circulatory or genetic) can improve prediction largely remains unclear, and impact studies of CKD prediction models have not yet been conducted. Limitations of risk models include the lack of ethnic diversity in derivation samples, and the scarcity of validation studies. The review is limited by the lack of an agreed-on system for rating prediction models, and the difficulty of assessing publication bias.
Conclusions
The development and clinical application of renal risk scores is in its infancy; however, the discriminatory performance of existing tools is acceptable. The effect of using these models in practice is still to be explored.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Chronic kidney disease (CKD)—the gradual loss of kidney function—is increasingly common worldwide. In the US, for example, about 26 million adults have CKD, and millions more are at risk of developing the condition. Throughout life, small structures called nephrons inside the kidneys filter waste products and excess water from the blood to make urine. If the nephrons stop working because of injury or disease, the rate of blood filtration decreases, and dangerous amounts of waste products such as creatinine build up in the blood. Symptoms of CKD, which rarely occur until the disease is very advanced, include tiredness, swollen feet and ankles, puffiness around the eyes, and frequent urination, especially at night. There is no cure for CKD, but progression of the disease can be slowed by controlling high blood pressure and diabetes, both of which cause CKD, and by adopting a healthy lifestyle. The same interventions also reduce the chances of CKD developing in the first place.
Why Was This Study Done?
CKD is associated with an increased risk of end-stage renal disease, which is treated with dialysis or by kidney transplantation (renal replacement therapies), and of cardiovascular disease. These life-threatening complications are potentially preventable through early identification and treatment of CKD, but most people present with advanced disease. Early identification would be particularly useful in developing countries, where renal replacement therapies are not readily available and resources for treating cardiovascular problems are limited. One way to identify people at risk of a disease is to use a “risk model.” Risk models are constructed by testing the ability of different combinations of risk factors that are associated with a specific disease to identify those individuals in a “derivation sample” who have the disease. The model is then validated on an independent group of people. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the researchers critically assess the ability of existing CKD risk models to predict the occurrence of CKD and its progression, and evaluate their suitability for clinical use.
What Did the Researchers Do and Find?
The researchers identified 26 publications reporting on 30 risk models for CKD occurrence and 17 risk models for CKD progression that met their predefined criteria. The risk factors most commonly included in these models were age, sex, body mass index, diabetes status, systolic blood pressure, serum creatinine, protein in the urine, and serum albumin or total protein. Nearly all the models had acceptable-to-good discriminatory performance (a measure of how well a model separates people who have a disease from people who do not have the disease) in the derivation sample. Not all the models had been calibrated (assessed for whether the average predicted risk within a group matched the proportion that actually developed the disease), but in those that had been assessed calibration was good. Only eight CKD occurrence and five CKD progression risk models had been externally validated; discrimination in the validation samples was modest-to-acceptable. Finally, very few studies had assessed whether adding extra variables to CKD risk models (for example, genetic markers) improved prediction, and none had assessed the impact of adopting CKD risk models on the clinical care and outcomes of patients.
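A calibration check of the kind described above can be sketched as follows in Python, grouping people by predicted risk and comparing mean predicted risk with the observed event rate; the predictions and outcomes are simulated.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
predicted = rng.uniform(0, 0.5, 2000)    # model-predicted risks
observed = rng.binomial(1, predicted)    # simulated outcomes

df = pd.DataFrame({"predicted": predicted, "observed": observed})
df["decile"] = pd.qcut(df["predicted"], 10, labels=False)
calibration = df.groupby("decile").agg(mean_predicted=("predicted", "mean"),
                                       observed_rate=("observed", "mean"))
print(calibration)  # a well-calibrated model shows close agreement in each decile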
What Do These Findings Mean?
These findings suggest that the development and clinical application of CKD risk models is still in its infancy. Specifically, these findings indicate that the existing models need to be better calibrated and need to be externally validated in different populations (most of the models were tested only in predominantly white populations) before they are incorporated into guidelines. The impact of their use on clinical outcomes also needs to be assessed before their widespread use is recommended. Such research is worthwhile, however, because of the potential public health and clinical applications of well-designed risk models for CKD. Such models could be used to identify segments of the population that would benefit most from screening for CKD, for example. Moreover, risk communication to patients could motivate them to adopt a healthy lifestyle and to adhere to prescribed medications, and the use of models for predicting CKD progression could help clinicians tailor disease-modifying therapies to individual patient needs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001344.
This study is further discussed in a PLOS Medicine Perspective by Maarten Taal
The US National Kidney and Urologic Diseases Information Clearinghouse provides information about all aspects of kidney disease; the US National Kidney Disease Education Program provides resources to help improve the understanding, detection, and management of kidney disease (in English and Spanish)
The UK National Health Service Choices website provides information for patients on chronic kidney disease, including some personal stories
The US National Kidney Foundation, a not-for-profit organization, provides information about chronic kidney disease (in English and Spanish)
The not-for-profit UK National Kidney Federation provides support and information for patients with kidney disease and for their carers, including a selection of patient experiences of kidney disease
World Kidney Day, a joint initiative between the International Society of Nephrology and the International Federation of Kidney Foundations, aims to raise awareness about kidneys and kidney disease
doi:10.1371/journal.pmed.1001344
PMCID: PMC3502517  PMID: 23185136
17.  Predictive Modeling for Pressure Ulcers from Intensive Care Unit Electronic Health Records 
Our goal in this study is to find risk factors associated with pressure ulcers (PUs) and to develop predictive models of PU incidence. We focus on intensive care unit (ICU) patients, since patients admitted to the ICU have shown a higher incidence of PUs. The most common PU risk assessment tool is the Braden scale, which sums six subscale features. In an ICU setting, its known drawbacks include omission of important risk factors, use of subscale features not significantly associated with PU incidence, and yielding too many false positives. To improve on this, we extract medication and diagnosis features from patient EHRs. Studying Braden, medication, and diagnosis features and combinations thereof, we evaluate six types of predictive models and find that diagnosis features significantly improve the models' predictive power. The best models combine Braden and diagnosis features. Finally, we report the top diagnosis features, which, compared to Braden alone, improve AUC by 10%.
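A schematic of the feature-set comparison is sketched below in Python, contrasting models trained on Braden subscale features alone versus Braden plus diagnosis indicators by cross-validated AUC; all data are simulated placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 1000
braden = rng.integers(1, 5, size=(n, 6))        # six Braden subscale scores
diagnoses = rng.integers(0, 2, size=(n, 20))    # binary diagnosis indicators
y = rng.binomial(1, 0.1, n)                     # pressure-ulcer incidence

for name, X in [("Braden only", braden),
                ("Braden + diagnoses", np.hstack([braden, diagnoses]))]:
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          scoring="roc_auc", cv=5).mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")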
PMCID: PMC4525237  PMID: 26306245
EHRs; intensive care unit; pressure ulcers; machine learning; predictive model
18.  Prognostic Accuracy of WHO Growth Standards to Predict Mortality in a Large-Scale Nutritional Program in Niger 
PLoS Medicine  2009;6(3):e1000039.
Background
Important differences exist in the diagnosis of malnutrition when comparing the 2006 World Health Organization (WHO) Child Growth Standards and the 1977 National Center for Health Statistics (NCHS) reference. However, their relationship with mortality has not been studied. Here, we assessed the accuracy of the WHO standards and the NCHS reference in predicting death in a population of malnourished children in a large nutritional program in Niger.
Methods and Findings
We analyzed data from 64,484 children aged 6–59 mo admitted with malnutrition (<80% weight-for-height percentage of the median [WH]% [NCHS] and/or mid-upper arm circumference [MUAC] <110 mm and/or presence of edema) in 2006 into the Médecins Sans Frontières (MSF) nutritional program in Maradi, Niger. Sensitivity and specificity of weight-for-height in terms of Z score (WHZ) and WH% for both WHO standards and NCHS reference were calculated using mortality as the gold standard. Sensitivity and specificity of MUAC were also calculated. The receiver operating characteristic (ROC) curve was traced for these cutoffs and its area under curve (AUC) estimated. In predicting mortality, WHZ (NCHS) and WH% (NCHS) showed AUC values of 0.63 (95% confidence interval [CI] 0.60–0.66) and 0.71 (CI 0.68–0.74), respectively. WHZ (WHO) and WH% (WHO) appeared to provide higher accuracy with AUC values of 0.76 (CI 0.75–0.80) and 0.77 (CI 0.75–0.80), respectively. The relationship between MUAC and mortality risk appeared to be relatively weak, with AUC = 0.63 (CI 0.60–0.67). Analyses stratified by sex and age yielded similar results.
Conclusions
These results suggest that in this population of children being treated for malnutrition, WH indicators calculated using WHO standards were more accurate for predicting mortality risk than those calculated using the NCHS reference. The findings are valid for a population of already malnourished children and are not necessarily generalizable to a population of children being screened for malnutrition. Future work is needed to assess which criteria are best for admission purposes to identify children most likely to benefit from therapeutic or supplementary feeding programs.
Rebecca Grais and colleagues assess the accuracy of WHO growth standards in predicting death among malnourished children admitted to a large nutritional program in Niger.
Editors' Summary
Background.
Malnutrition causes more than a third of child deaths worldwide. The World Health Organization (WHO) estimates there are 178 million malnourished children globally, all of whom are vulnerable to disease and 20 million of whom are at risk of death. Poverty, rising food prices, food scarcity, and natural disasters all contribute significantly to malnutrition, but children's lives can be saved if aid agencies are able to identify and treat acute malnutrition early. This can be done by comparing a child's body measurements to those of healthy children.
In 1977 the US National Center for Health Statistics (NCHS) introduced child growth reference charts describing how US children grow. The charts enable the height of a child of a given age to be compared with the set of “percentile curves,” which show, for example, whether the child is on the 90th or the 10th centile—that is, whether taller than 90% or 10% of their peers. These NCHS reference charts were subsequently adopted by the WHO for international use. In 2006, the WHO began to use new growth charts, based on children from a variety of countries raised in optimal environments for healthy growth. These provide a standard for how all children should grow, regardless of ethnic background or wealth.
Why Was This Study Done?
It is known that the WHO standards and the NCHS reference differ in how they identify malnutrition. Estimates of malnutrition are higher with the WHO standard than the NCHS reference. This affects the cost of international programs to treat malnutrition, as more children will be diagnosed and treated when the WHO standards are used. However, it is not known how the different growth measures differ in predicting which children's lives are at risk from malnutrition. The researchers saw that the data in their nutritional program could help provide this information.
What Did the Researchers Do and Find?
The researchers examined data on the body measurements of over 60,000 children aged between 6 mo and 5 y enrolled in a Médecins sans Frontières (MSF) nutritional programme in Maradi, Niger during 2006. Children were assessed as having acute malnutrition (wasting) and enrolled in the feeding program if their weight-for-height was less than 80% of the NCHS average, if their mid-upper arm circumference (MUAC) was under 110 mm (for children 65–110 cm), or they had swelling in both feet.
The authors evaluated three measures to see which was most accurate at predicting that children would die under treatment: low weight-for-height as measured against each of the WHO standard and NCHS reference, and low MUAC. For each measure, they compared the proportion of correct predictions of death (sensitivity) and the proportion of correct predictions of survival (specificity) for a range of possible cutoffs (or thresholds) for diagnosis.
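The cutoff sweep described above can be sketched as follows in Python, computing sensitivity and specificity against mortality for a few candidate weight-for-height Z-score thresholds; the Z scores and outcomes are simulated, not the MSF programme data.

import numpy as np

rng = np.random.default_rng(3)
whz = rng.normal(-3, 1, 5000)                              # simulated weight-for-height Z scores
died = rng.binomial(1, np.clip(0.02 - 0.01 * whz, 0, 1))   # simulated mortality, higher risk at lower WHZ

for cutoff in (-4.0, -3.5, -3.0, -2.5):
    predicted_positive = whz < cutoff                      # lower WHZ = predicted higher risk
    sensitivity = (predicted_positive & (died == 1)).sum() / (died == 1).sum()
    specificity = (~predicted_positive & (died == 0)).sum() / (died == 0).sum()
    print(f"cutoff {cutoff}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")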
They found that the WHO standard gave more accurate predictions than the NCHS reference or the MUAC of which children would die under treatment. The results were similar when the children were grouped by age or sex.
What Do these Findings Mean?
The results suggest that, at least in this population, the WHO standards are a more accurate predictor of death following malnutrition. This agrees with what might be expected, as the WHO standard is more up-to-date as well as aiming to show how healthy children from a range of settings should grow.
Nevertheless, an important limitation is that the children in the study had already been diagnosed as malnourished and were receiving treatment. As a result, the authors cannot say definitively which measure is better at predicting what children in the general population are acutely malnourished and would benefit most from treatment.
It should also be noted that children were predominantly entered into the feeding program by the weight-for-height indicator rather than by the MUAC. This may be a reason why the MUAC appears worse at predicting death than weight-for-height. Missing and inaccurate data, for instance on the exact ages of some children, also limit the findings.
In addition, the findings do not provide guidance on the cutoffs that should be used in deciding whether to enter a child into a feeding program. Different cutoffs represent a trade-off between treating more children needlessly in order to catch all in need, and treating fewer children and missing some in need. The study also cannot be used to advise on whether weight-for-height or the MUAC is more appropriate in a given context. In certain crisis situations, for instance, some authorities suggest it may be more practical to use the MUAC, as it requires less equipment or training.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000039.
The UN Standing Committee on Nutrition homepage publishes international briefings on nutrition as a foundation for development
The US National Center for Health Statistics provides background information on its 1977 growth charts and how they were developed in the context of explaining how they differ from revised charts produced in 2000
The World Heath Organization publishes country profile information on its child growth standards and also on Niger
Médecins sans Frontières also provides information on its work in Niger
The EC-FAO Food Security Information for Action Programme is funded by the European Commission (EC) and implemented by the Food and Agriculture Organization of the United Nations (FAO). It aims to help nations formulate more effective anti-hunger policies and provides online materials, including a guide to nutritional status assessment and analysis, which includes information on the contexts in which different indicators are useful
doi:10.1371/journal.pmed.1000039
PMCID: PMC2650722  PMID: 19260760
19.  Adapting existing diabetes risk scores for an Asian population: a risk score for detecting undiagnosed diabetes in the Mongolian population 
BMC Public Health  2015;15:938.
Background
Most of the commonly used diabetes mellitus screening tools and risk scores have been developed with American or European populations in mind. Their applicability, therefore, to low and middle-income countries remains unquantified. Simultaneously, low and middle-income countries including Mongolia are currently witnessing rising diabetes prevalence. This research aims to develop and validate a diabetes risk score for the screening of undiagnosed type 2 diabetes mellitus in the Mongolian adult population.
Methods
Blood glucose measurements from 1,018 Mongolians, as well as information on demography and risk factor prevalence, were drawn from 2009 STEPS data. Existing risk scores were applied, and their sensitivity was measured using the area under ROC curves. Logistic regression models were used to identify additional independent predictors of undiagnosed diabetes. Finally, a new risk score was developed, and Hosmer-Lemeshow tests were used to evaluate the agreement between the observed and predicted prevalence.
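The Hosmer-Lemeshow check mentioned above compares observed and predicted prevalence across groups of predicted risk; a Python sketch with simulated inputs is given below.

import numpy as np
import pandas as pd
from scipy.stats import chi2

rng = np.random.default_rng(4)
p_hat = rng.uniform(0.02, 0.4, 1018)        # model-predicted probabilities of undiagnosed diabetes
y = rng.binomial(1, p_hat)                  # simulated observed outcomes

df = pd.DataFrame({"p": p_hat, "y": y})
df["group"] = pd.qcut(df["p"], 10, labels=False)
g = df.groupby("group").agg(n=("y", "size"), observed=("y", "sum"), p_mean=("p", "mean"))
expected = g["n"] * g["p_mean"]
hl_stat = (((g["observed"] - expected) ** 2) / (expected * (1 - g["p_mean"]))).sum()
p_value = chi2.sf(hl_stat, df=len(g) - 2)
print(f"Hosmer-Lemeshow statistic = {hl_stat:.2f}, p = {p_value:.2f}")  # large p suggests adequate agreement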
Results
The performance of existing risk scores in identifying undiagnosed diabetes was moderate, with areas under the ROC curve between 61% and 64%. In addition to well-established risk factors, three new independent predictors of undiagnosed diabetes were identified. After incorporating these into a new risk score, the area under the ROC curve increased to 77% (95% CI 71%–82%).
Conclusions
Existing European or American diabetes risk tools cannot be adopted in Asian countries without prior validation in the specific population. With this in mind, a low-cost, reliable screening tool for undiagnosed diabetes was developed and internally validated for Mongolians. The potential for cost and morbidity savings could be significant.
doi:10.1186/s12889-015-2298-9
PMCID: PMC4578253  PMID: 26395572
Type 2 diabetes; Risk scores; Undiagnosed; Mongolia; Screening
20.  Biomarker Profiling by Nuclear Magnetic Resonance Spectroscopy for the Prediction of All-Cause Mortality: An Observational Study of 17,345 Persons 
PLoS Medicine  2014;11(2):e1001606.
In this study, Würtz and colleagues conducted high-throughput profiling of blood specimens in two large population-based cohorts in order to identify biomarkers for all-cause mortality and enhance risk prediction. The authors found that biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. However, further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers to guide screening and prevention.
Please see later in the article for the Editors' Summary
Background
Early identification of ambulatory persons at high short-term risk of death could benefit targeted prevention. To identify biomarkers for all-cause mortality and enhance risk prediction, we conducted high-throughput profiling of blood specimens in two large population-based cohorts.
Methods and Findings
106 candidate biomarkers were quantified by nuclear magnetic resonance spectroscopy of non-fasting plasma samples from a random subset of the Estonian Biobank (n = 9,842; age range 18–103 y; 508 deaths during a median of 5.4 y of follow-up). Biomarkers for all-cause mortality were examined using stepwise proportional hazards models. Significant biomarkers were validated and incremental predictive utility assessed in a population-based cohort from Finland (n = 7,503; 176 deaths during 5 y of follow-up). Four circulating biomarkers predicted the risk of all-cause mortality among participants from the Estonian Biobank after adjusting for conventional risk factors: alpha-1-acid glycoprotein (hazard ratio [HR] 1.67 per 1–standard deviation increment, 95% CI 1.53–1.82, p = 5×10−31), albumin (HR 0.70, 95% CI 0.65–0.76, p = 2×10−18), very-low-density lipoprotein particle size (HR 0.69, 95% CI 0.62–0.77, p = 3×10−12), and citrate (HR 1.33, 95% CI 1.21–1.45, p = 5×10−10). All four biomarkers were predictive of cardiovascular mortality, as well as death from cancer and other nonvascular diseases. One in five participants in the Estonian Biobank cohort with a biomarker summary score within the highest percentile died during the first year of follow-up, indicating prominent systemic reflections of frailty. The biomarker associations all replicated in the Finnish validation cohort. Including the four biomarkers in a risk prediction score improved risk assessment for 5-y mortality (increase in C-statistics 0.031, p = 0.01; continuous reclassification improvement 26.3%, p = 0.001).
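The hazard ratios per 1-standard-deviation increment reported above come from proportional hazards modelling of standardised biomarkers; the Python sketch below illustrates the idea with simulated data and a placeholder biomarker name (it uses the lifelines package rather than the authors' software).

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 5000
biomarker = rng.normal(0, 1, n)                              # already standardised (mean 0, SD 1)
time = rng.exponential(scale=10 / np.exp(0.3 * biomarker))   # higher biomarker -> earlier events
event = (time < 5.4).astype(int)                             # deaths within median follow-up
time = np.minimum(time, 5.4)                                 # censor at end of follow-up

df = pd.DataFrame({"biomarker_sd": biomarker, "time": time, "event": event})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)  # hazard ratio per 1-SD increment in the biomarker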
Conclusions
Biomarker associations with cardiovascular, nonvascular, and cancer mortality suggest novel systemic connectivities across seemingly disparate morbidities. The biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. Further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers for guiding screening and prevention.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
A biomarker is a biological molecule found in blood, body fluids, or tissues that may signal an abnormal process, a condition, or a disease. The level of a particular biomarker may indicate a patient's risk of disease, or likely response to a treatment. For example, cholesterol levels are measured to assess the risk of heart disease. Most current biomarkers are used to test an individual's risk of developing a specific condition. There are none that accurately assess whether a person is at risk of ill health generally, or likely to die soon from a disease. Early and accurate identification of people who appear healthy but in fact have an underlying serious illness would provide valuable opportunities for preventative treatment.
While most tests measure the levels of a specific biomarker, there are some technologies that allow blood samples to be screened for a wide range of biomarkers. These include nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry. These tools have the potential to be used to screen the general population for a range of different biomarkers.
Why Was This Study Done?
Identifying new biomarkers that provide insight into the risk of death from all causes could be an important step in linking different diseases and assessing patient risk. The authors in this study screened patient samples using NMR spectroscopy for biomarkers that accurately predict the risk of death particularly amongst the general population, rather than amongst people already known to be ill.
What Did the Researchers Do and Find?
The researchers studied two large groups of people, one in Estonia and one in Finland. Both countries have set up health registries that collect and store blood samples and health records over many years. The registries include large numbers of people who are representative of the wider population.
The researchers first tested blood samples from a representative subset of the Estonian group, testing 9,842 samples in total. They looked at 106 different biomarkers in each sample using NMR spectroscopy. They also looked at the health records of this group and found that 508 people died during the follow-up period after the blood sample was taken, the majority from heart disease, cancer, and other diseases. Using statistical analysis, they looked for any links between the levels of different biomarkers in the blood and people's short-term risk of dying. They found that the levels of four biomarkers—plasma albumin, alpha-1-acid glycoprotein, very-low-density lipoprotein (VLDL) particle size, and citrate—appeared to accurately predict short-term risk of death. They repeated this study with the Finnish group, this time with 7,503 individuals (176 of whom died during the five-year follow-up period after giving a blood sample) and found similar results.
The researchers carried out further statistical analyses to take into account other known factors that might have contributed to the risk of life-threatening illness. These included factors such as age, weight, tobacco and alcohol use, cholesterol levels, and pre-existing illness, such as diabetes and cancer. The association between the four biomarkers and short-term risk of death remained the same even when controlling for these other factors.
The analysis also showed that combining the test results for all four biomarkers, to produce a biomarker score, provided a more accurate measure of risk than any of the biomarkers individually. This biomarker score also proved to be the strongest predictor of short-term risk of dying in the Estonian group. Individuals with a biomarker score in the top 20% had a risk of dying within five years that was 19 times greater than that of individuals with a score in the bottom 20% (288 versus 15 deaths).
What Do These Findings Mean?
This study suggests that there are four biomarkers in the blood—alpha-1-acid glycoprotein, albumin, VLDL particle size, and citrate—that can be measured by NMR spectroscopy to assess whether otherwise healthy people are at short-term risk of dying from heart disease, cancer, and other illnesses. However, further validation of these findings is still required, and additional studies should examine the biomarker specificity and associations in settings closer to clinical practice. The combined biomarker score appears to be a more accurate predictor of risk than tests for more commonly known risk factors. Identifying individuals who are at high risk using these biomarkers might help to target preventative medical treatments to those with the greatest need.
However, there are several limitations to this study. As an observational study, it provides evidence of only a correlation between a biomarker score and ill health. It does not identify any underlying causes. Other factors, not detectable by NMR spectroscopy, might be the true cause of serious health problems and would provide a more accurate assessment of risk. Nor does this study identify what kinds of treatment might prove successful in reducing the risks. Therefore, more research is needed to determine whether testing for these biomarkers would provide any clinical benefit.
There were also some technical limitations to the study. NMR spectroscopy does not detect as many biomarkers as mass spectrometry, which might therefore identify further biomarkers for a more accurate risk assessment. In addition, because both study groups were northern European, it is not yet known whether the results would be the same in other ethnic groups or populations with different lifestyles.
In spite of these limitations, the fact that the same four biomarkers are associated with a short-term risk of death from a variety of diseases does suggest that similar underlying mechanisms are taking place. This observation points to some potentially valuable areas of research to understand precisely what's contributing to the increased risk.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001606
The US National Institute of Environmental Health Sciences has information on biomarkers
The US Food and Drug Administration has a Biomarker Qualification Program to help researchers in identifying and evaluating new biomarkers
Further information on the Estonian Biobank is available
The Computational Medicine Research Team of the University of Oulu and the University of Bristol have a webpage that provides further information on high-throughput biomarker profiling by NMR spectroscopy
doi:10.1371/journal.pmed.1001606
PMCID: PMC3934819  PMID: 24586121
21.  Assessing Predictive Validity of Pressure Ulcer Risk Scales- A Systematic Review and Meta-Analysis 
Iranian Journal of Public Health  2016;45(2):122-133.
Background:
The purpose of this study was to provide a scientific rationale for pressure ulcer risk scales (Cubbin & Jackson, modified Braden, Norton, and Waterlow) as nursing diagnosis tools by examining their predictive validity for pressure sores.
Methods:
Articles published between 1966 and 2013 in periodicals indexed in Ovid Medline, Embase, CINAHL, KoreaMed, NDSL, and other databases were selected using the key word “pressure ulcer”. QUADAS-II was applied to assess the internal validity of the diagnostic studies. Selected studies were analyzed by meta-analysis using MetaDisc 1.4.
Results:
Seventeen diagnostic studies with high methodological quality, involving 5,185 patients, were included. In the meta-analysis, the sROC AUC of the Braden, Norton, and Waterlow scales was over 0.7, showing moderate predictive validity, but interpretation is limited by significant differences between studies. In addition, the Waterlow scale is insufficient as a screening tool owing to its low sensitivity compared with the other scales.
Conclusion:
Contemporary pressure ulcer risk scales are not suitable for uniform application to patients under standardized criteria. Therefore, in order to provide more effective nursing care for bedsores, a new or modified pressure ulcer risk scale should be developed based on the strengths and weaknesses of existing tools.
PMCID: PMC4841867  PMID: 27114977
Pressure ulcer; Sensitivity; Specificity; Meta-analysis
22.  Management of Chronic Pressure Ulcers 
Executive Summary
In April 2008, the Medical Advisory Secretariat began an evidence-based review of the literature concerning pressure ulcers.
Please visit the Medical Advisory Secretariat Web site, http://www.health.gov.on.ca/english/providers/program/mas/tech/tech_mn.html to review these titles that are currently available within the Pressure Ulcers series.
Pressure ulcer prevention: an evidence based analysis
The cost-effectiveness of prevention strategies for pressure ulcers in long-term care homes in Ontario: projections of the Ontario Pressure Ulcer Model (field evaluation)
Management of chronic pressure ulcers: an evidence-based analysis
Objective
The Medical Advisory Secretariat (MAS) conducted a systematic review on interventions used to treat pressure ulcers in order to answer the following questions:
Do currently available interventions for the treatment of pressure ulcers increase the healing rate of pressure ulcers compared with standard care, a placebo, or other similar interventions?
Within each category of intervention, which one is most effective in promoting the healing of existing pressure ulcers?
Background
A pressure ulcer is a localized injury to the skin and/or underlying tissue usually over a bony prominence, as a result of pressure, or pressure in conjunction with shear and/or friction. Many areas of the body, especially the sacrum and the heel, are prone to the development of pressure ulcers. People with impaired mobility (e.g., stroke or spinal cord injury patients) are most vulnerable to pressure ulcers. Other factors that predispose people to pressure ulcer formation are poor nutrition, poor sensation, urinary and fecal incontinence, and poor overall physical and mental health.
The prevalence of pressure ulcers in Ontario has been estimated to range from a median of 22.1% in community settings to a median of 29.9% in nonacute care facilities. Pressure ulcers have been shown to increase the risk of mortality among geriatric patients by as much as 400%, to increase the frequency and duration of hospitalization, and to decrease the quality of life of affected patients. The cost of treating pressure ulcers has been estimated at approximately $9,000 (Cdn) per patient per month in the community setting. Considering the high prevalence of pressure ulcers in the Ontario health care system, the total cost of treating pressure ulcers is substantial.
Technology
Wounds normally heal in 3 phases (inflammatory phase, a proliferative phase of new tissue and matrix formation, and a remodelling phase). However, pressure ulcers often fail to progress past the inflammatory stage. Current practice for treating pressure ulcers includes treating the underlying causes, debridement to remove necrotic tissues and contaminated tissues, dressings to provide a moist wound environment and to manage exudates, devices and frequent turning of patients to provide pressure relief, topical applications of biologic agents, and nutritional support to correct nutritional deficiencies. A variety of adjunctive physical therapies are also in use.
Method
Health technology assessment databases and medical databases were searched from 1996 (Medline), 1980 (EMBASE), and 1982 (CINAHL) systematically up to March 2008 to identify randomized controlled trials (RCTs) on the following treatments of pressure ulcers: cleansing, debridement, dressings, biological therapies, pressure-relieving devices, physical therapies, nutritional therapies, and multidisciplinary wound care teams. Full literature search strategies are reported in appendix 1. English-language studies in previous systematic reviews and studies published since the last systematic review were included if they had more than 10 subjects, were randomized, and provided objective outcome measures on the healing of pressure ulcers. In the absence of RCTs, studies of the highest level of evidence available were included. Studies on wounds other than pressure ulcers and on surgical treatment of pressure ulcers were excluded. A total of 18 systematic reviews, 104 RCTs, and 4 observational studies were included in this review.
Data were extracted from studies using standardized forms. The quality of individual studies was assessed based on adequacy of randomization, concealment of treatment allocation, comparability of groups, blinded assessment, and intention-to-treat analysis. Meta-analysis to estimate the relative risk (RR) or weighted mean difference (WMD) for measures of healing was performed when appropriate. A descriptive synthesis was provided where pooled analysis was not appropriate or not feasible. The quality of the overall evidence on each intervention was assessed using the grading of recommendations assessment, development, and evaluation (GRADE) criteria.
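Where pooling was appropriate, relative risks can be combined by inverse-variance weighting on the log scale; the Python sketch below illustrates a fixed-effect version of this calculation with made-up study values.

import math

# (log relative risk, standard error of the log RR) for three hypothetical trials
studies = [(math.log(1.4), 0.20), (math.log(1.8), 0.30), (math.log(1.2), 0.25)]

weights = [1 / se ** 2 for _, se in studies]
pooled_log_rr = sum(w * log_rr for (log_rr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled_log_rr)
lo, hi = (math.exp(pooled_log_rr - 1.96 * pooled_se), math.exp(pooled_log_rr + 1.96 * pooled_se))
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")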
Findings
Findings from the analysis of the included studies are summarized below:
Cleansing
There is no good trial evidence to support the use of any particular wound cleansing solution or technique for pressure ulcers.
Debridement
There was no evidence that debridement using collagenase, dextranomer, cadexomer iodine, or maggots significantly improved complete healing compared with placebo.
There were no statistically significant differences between enzymatic or mechanical debridement agents with the following exceptions:
Papain urea resulted in better debridement than collagenase.
Calcium alginate resulted in a greater reduction in ulcer size compared to dextranomer.
Adding streptokinase/streptodornase to hydrogel resulted in faster debridement.
Maggot debridement resulted in more complete debridement than conventional treatment.
There is limited evidence on the healing effects of debridement devices.
Dressings
Hydrocolloid dressing was associated with almost three times more complete healing compared with saline gauze.
There is evidence that hydrogel and hydropolymer may be associated with 50% to 70% more complete healing of pressure ulcers than hydrocolloid dressing.
No statistically significant differences in complete healing were detected among other modern dressings.
There is evidence that polyurethane foam dressings and hydrocellular dressings are more absorbent and easier to remove than hydrocolloid dressings in ulcers with moderate to high exudates.
In deeper ulcers (stage III and IV), the use of alginate with hydrocolloid resulted in significantly greater reduction in the size of the ulcers compared to hydrocolloid alone.
Studies on sustained silver-releasing dressing demonstrated a tendency for reducing the risk of infection and promoting faster healing, but the sample sizes were too small for statistical analysis or for drawing conclusions.
Biological Therapies
The efficacy of platelet-derived growth factors (PDGFs), fibroblast growth factor, and granulocyte-macrophage colony stimulating factor in improving complete healing of chronic pressure ulcers has not been established.
Presently only Regranex, a recombinant PDGF, has been approved by Health Canada and only for treatment of diabetic ulcers in the lower extremities.
A March 2008 US Food and Drug Administration (FDA) communication reported increased deaths from cancers in people given three or more prescriptions for Regranex.
Limited low-quality evidence on skin matrix and engineered skin equivalent suggests a potential role for these products in healing refractory advanced chronic pressure ulcers, but the evidence is insufficient to draw a conclusion.
Adjunctive Physical Therapy
There is evidence that electrical stimulation may result in a significantly greater reduction in the surface area and more complete healing of stage II to IV ulcers compared with sham therapy. No conclusion on the efficacy of electrotherapy can be drawn because of significant statistical heterogeneity, small sample sizes, and methodological flaws.
The efficacy of other adjunctive physical therapies [electromagnetic therapy, low-level laser (LLL) therapy, ultrasound therapy, ultraviolet light therapy, and negative pressure therapy] in improving complete closure of pressure ulcers has not been established.
Nutrition Therapy
Supplementation with 15 grams of hydrolyzed protein 3 times daily did not affect complete healing but resulted in a 2-fold improvement in Pressure Ulcer Scale for Healing (PUSH) score compared with placebo.
Supplementation with 200 mg of zinc three times per day did not have any significant impact on the healing of pressure ulcers compared with a placebo.
Supplementation of 500 mg ascorbic acid twice daily was associated with a significantly greater decrease in the size of the ulcer compared with a placebo but did not have any significant impact on healing when compared with supplementation of 10 mg ascorbic acid three times daily.
A very high protein tube feeding (25% of energy as protein) resulted in a greater reduction in ulcer area in institutionalized tube-fed patients compared with a high protein tube feeding (16% of energy as protein).
Multinutrient supplements that contain zinc, arginine, and vitamin C were associated with a greater reduction in the area of the ulcers compared with standard hospital diet or to a standard supplement without zinc, arginine, or vitamin C.
Firm conclusions cannot be drawn because of methodological flaws and small sample sizes.
Multidisciplinary Wound Care Teams
The only RCT suggests that multidisciplinary wound care teams may significantly improve healing in the acute care setting in 8 weeks and may significantly shorten the length of hospitalization. However, since only an abstract is available, study biases cannot be assessed and no conclusions can be drawn on the quality of this evidence.
PMCID: PMC3377577  PMID: 23074533
23.  DETECTION OF LUNG CANCER USING WEIGHTED DIGITAL ANALYSIS OF BREATH BIOMARKERS 
Background
A combination of biomarkers in a multivariate model may predict disease with greater accuracy than a single biomarker employed alone. We developed a non-linear method of multivariate analysis, weighted digital analysis (WDA), and evaluated its ability to predict lung cancer employing volatile biomarkers in the breath.
Methods
WDA generates a discriminant function to predict membership in disease vs no disease groups by determining weight, a cutoff value, and a sign for each predictor variable employed in the model. The weight of each predictor variable was the area under the curve (AUC) of the receiver operating characteristic (ROC) curve minus a fixed offset of 0.55, where the AUC was obtained by employing that predictor variable alone, as the sole marker of disease. The sign (±) was used to invert the predictor variable if a lower value indicated a higher probability of disease. When employed to predict the presence of a disease in a particular patient, the discriminant function was determined as the sum of the weights of all predictor variables that exceeded their cutoff values. The algorithm that generates the discriminant function is deterministic because parameters are calculated from each individual predictor variable without any optimization or adjustment. We employed WDA to re-evaluate data from a recent study of breath biomarkers of lung cancer, comprising the volatile organic compounds (VOCs) in the alveolar breath of 193 subjects with primary lung cancer and 211 controls with a negative chest CT.
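The scoring rule described above can be sketched in Python as follows; the per-variable cutoff rule (the sample median) is an assumption for illustration, and the VOC data are simulated.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n, n_vocs = 400, 30
X = rng.normal(0, 1, size=(n, n_vocs))      # simulated breath VOC abundances
y = rng.integers(0, 2, n)                   # 1 = lung cancer, 0 = control

weights, signs, cutoffs = [], [], []
for j in range(n_vocs):
    auc = roc_auc_score(y, X[:, j])         # single-variable AUC as the sole marker of disease
    sign = 1 if auc >= 0.5 else -1          # invert variables where lower values indicate disease
    auc = max(auc, 1 - auc)                 # AUC of the (possibly inverted) variable
    weights.append(auc - 0.55)              # fixed offset of 0.55, as described
    signs.append(sign)
    cutoffs.append(np.median(sign * X[:, j]))  # assumed cutoff rule (sample median)

def wda_score(voc_profile):
    """Sum of the weights of all predictor variables that exceed their cutoffs."""
    return sum(w for w, s, c, x in zip(weights, signs, cutoffs, voc_profile) if s * x > c)

print(wda_score(X[0]))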
Results
The WDA discriminant function accurately identified patients with lung cancer in a model employing 30 breath VOCs (ROC curve AUC = 0.90; sensitivity = 84.5%, specificity = 81.0%). These results were superior to multi-linear regression analysis of the same data set (AUC = 0.74, sensitivity = 68.4%, specificity = 73.5%). WDA test accuracy did not vary appreciably with TNM (tumor, node, metastasis) stage of disease, and results were not affected by tobacco smoking (ROC curve AUC = 0.92 in current smokers, 0.90 in former smokers). WDA was a robust predictor of lung cancer: random removal of 1/3 of the VOCs did not reduce the AUC of the ROC curve by >10% (99.7% CI).
Conclusions
A test employing WDA of breath VOCs predicted lung cancer with accuracy similar to chest computed tomography. The algorithm identified dependencies that were not apparent with traditional linear methods. WDA appears to provide a useful new technique for non-linear multivariate analysis of data.
doi:10.1016/j.cca.2008.02.021
PMCID: PMC2497457  PMID: 18420034
24.  Development and Evaluation of a Simple and Effective Prediction Approach for Identifying Those at High Risk of Dyslipidemia in Rural Adult Residents 
PLoS ONE  2012;7(8):e43834.
Background
Dyslipidemia is an extremely prevalent but preventable risk factor for cardiovascular disease. However, many dyslipidemia patients remain undetected in resource-limited settings. This study was performed to develop and evaluate a simple and effective prediction approach, without biochemical parameters, to identify those at high risk of dyslipidemia in a rural adult population.
Methods
Demographic, dietary, lifestyle, and anthropometric data were collected in a cross-sectional survey of 8,914 participants aged 35–78 years living in rural areas. A total of 6,686 participants were randomly selected into a training group for constructing the artificial neural network (ANN) and logistic regression (LR) prediction models. The remaining 2,228 participants were assigned to a validation group for performance comparisons of the ANN and LR models. Predictors of dyslipidemia risk were identified from the training group using multivariate logistic regression analysis. Predictive performance was evaluated by receiver operating characteristic (ROC) curves.
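The ANN-versus-LR comparison can be sketched in Python as below, using simulated stand-ins for the predictors, a held-out validation split, and AUC as the performance measure; this is an illustrative scikit-learn setup, not the authors' implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.normal(0, 1, size=(8914, 9))     # simulated stand-ins for nine risk factors
y = rng.binomial(1, 0.3, 8914)           # dyslipidemia status

X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0))]:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
    print(f"{name}: validation AUC = {auc:.2f}")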
Results
Several risk factors were significantly associated with dyslipidemia, including age, gender, educational level, smoking, high-fat diet, vegetable and fruit intake, family history, physical activity, and central obesity. For the ANN model, the sensitivity, specificity, positive and negative likelihood ratios, and positive and negative predictive values were 90.41%, 76.66%, 3.87, 0.13, 76.33%, and 90.58%, respectively, while those of the LR model were only 57.37%, 70.91%, 1.97, 0.60, 62.09%, and 66.73%, respectively. The area under the ROC curve (AUC) of the ANN model was 0.86±0.01, showing more accurate overall performance than the traditional LR model (AUC = 0.68±0.01, P<0.001).
Conclusion
The ANN model is a simple and effective prediction approach for identifying those at high risk of dyslipidemia, and it can be used to screen for undiagnosed dyslipidemia in rural adult populations. Further work is planned to confirm these results by incorporating multi-center and longer follow-up data.
doi:10.1371/journal.pone.0043834
PMCID: PMC3429495  PMID: 22952780
25.  Comparison of predictive modeling approaches for 30-day all-cause non-elective readmission risk 
Background
This paper explores the importance of electronic medical records (EMR) for predicting 30-day all-cause non-elective readmission risk of patients and presents a comparison of prediction performance of commonly used methods.
Methods
The data are extracted from eight Advocate Health Care hospitals. Index admissions are excluded from the cohort if they are observation stays; inpatient admissions for psychiatry, skilled nursing, hospice, rehabilitation, or maternal and newborn visits; or if the patient expires during the index admission. Data are randomly and repeatedly divided into fitting and validating sets for cross-validation. Approaches including LACE, STEPWISE logistic regression, LASSO logistic regression, and AdaBoost are compared, with sample sizes varying from 2,500 to 80,000.
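The repeated fitting/validating divisions can be sketched in Python as below, comparing a LASSO-penalised logistic regression and AdaBoost by validation AUC on simulated stand-ins for the EMR-derived predictors.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
X = rng.normal(0, 1, size=(5000, 25))     # simulated stand-ins for EMR-derived predictors
y = rng.binomial(1, 0.15, 5000)           # 30-day readmission indicator

models = {"LASSO": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
          "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0)}

aucs = {name: [] for name in models}
for split in range(10):                   # repeated random fitting/validating divisions
    X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.3, random_state=split)
    for name, model in models.items():
        model.fit(X_fit, y_fit)
        aucs[name].append(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))

for name, values in aucs.items():
    print(f"{name}: mean validation AUC = {np.mean(values):.3f}")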
Results
Our results confirm that LACE has moderate discrimination power with the area under the receiver operating characteristic curve (AUC) around 0.65-0.66, which can be improved to 0.73-0.74 when additional variables from the EMR are considered. These variables include Inpatient in the last six months, Number of emergency room visits or inpatients in the last year, Braden score, Polypharmacy, Employment status, Discharge disposition, Albumin level, and medical condition variables such as Leukemia, Malignancy, Renal failure with hemodialysis, History of alcohol substance abuse, Dementia and Trauma. When the sample size is small (≤5,000), LASSO is the best; when the sample size is large (≥20,000), the predictive performance is similar. The STEPWISE method has a slightly lower AUC (0.734) compared to LASSO (0.737) and AdaBoost (0.737). More than one half of the selected predictors can be false positives when using a single method and a single division of fitting/validating data.
Conclusions
True predictors can be identified by repeatedly dividing the data into fitting/validating subsets and inferring the final model from the summarized results. LASSO is a better alternative to STEPWISE logistic regression, especially when the sample size is not large. The evidence for adequate sample size can be explored by fitting models on gradually reduced samples. Our model comparison strategy is not only good for 30-day all-cause non-elective readmission risk prediction, but is also applicable to other types of predictive models in clinical studies.
doi:10.1186/s12874-016-0128-0
PMCID: PMC4769572  PMID: 26920363
Predictive Models; Readmission Risk; STEPWISE; LASSO; Ada Boost
