To develop a risk stratification score to predict warfarin-associated hemorrhage
Optimal decision-making regarding warfarin use for atrial fibrillation requires estimation of hemorrhage risk.
We followed 9,186 patients with atrial fibrillation contributing 32,888 person-years of follow-up on warfarin, obtaining data from clinical databases and validating hemorrhage events using medical record review. We used Cox regression models to develop a hemorrhage risk stratification score, selecting candidate variables using bootstrapping approaches. The final model was internally validated via split-sample testing and compared to six published hemorrhage risk schemes.
We observed 461 first major hemorrhages during follow-up (1.4% annually). Five independent variables were included in the final model and weighted by regression coefficients: anemia (3 points), severe renal disease (e.g., glomerular filtration rate < 30 ml/min or dialysis-dependent, 3 points), age ≥ 75 years (2 points), prior bleeding (1 point), and hypertension (1 point). Major hemorrhage rates ranged from 0.4% (0 points) to 17.3% per year (10 points). Collapsed into a 3-category risk score, major hemorrhage rates were 0.8% in the low risk group (0-3 points), 2.6% in intermediate risk (4 points), and 5.8% in high risk (5-10 points). The c-index for the continuous risk score was 0.74 and 0.69 for the 3-category score, higher than in the other risk schemes. There was net reclassification improvement versus all six comparators (from 27% to 56%).
A simple 5-variable risk score was effective in quantifying the risk of warfarin-associated hemorrhage in a large community-based cohort of patients with atrial fibrillation.
Oral anticoagulants such as warfarin can substantially reduce the thromboembolic consequences of atrial fibrillation(1). However, anticoagulant-associated hemorrhage deters many clinicians from prescribing warfarin(2). Accurate risk stratification according to hemorrhage risk would facilitate the anticoagulation decision for individual patients, and could help control for varying hemorrhage risk across different studies or when comparing the safety of various antithrombotic agents. We describe the development and internal validation of a new hemorrhage risk stratification tool and compare its performance to other published hemorrhage risk schemes.
The AnTicoagulation and Risk factors In Atrial fibrillation (ATRIA) study followed 13,559 adults with nonvalvular, non-transient atrial fibrillation enrolled in Kaiser Permanente of Northern California, a large integrated healthcare system. Details of the cohort assembly have been described previously; briefly, subjects were identified by searching clinical databases for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9) codes for atrial fibrillation between July 1, 1996 and December 31, 1997, and followed through September 30, 2003(3,4). Warfarin exposure was determined using a previously described and validated algorithm based on the number of days supplied per prescription, refill patterns, and intervening INR measurements(3). Clinical variables were identified using ICD-9 codes, pharmacy prescriptions, and laboratory databases(3).
We identified six published, validated risk stratification schemes developed to predict warfarin-associated hemorrhage (Table 1) and searched for those specific risk factors in the ATRIA cohort (5-10). Variables unavailable in ATRIA (e.g., patient genotype) or not directly applicable to atrial fibrillation (e.g., acute pulmonary embolism) were not included as potential variables. Prior bleeding history was defined as any prior outpatient or inpatient ICD-9 diagnosis code of hemorrhage, considered by specific organ system (e.g., prior intracranial or gastrointestinal bleeding), in the aggregate (e.g., all-cause prior bleeding), and by timing (within 90 days or > 90 days). High fall risk was defined as any prior diagnosis code indicating a mechanical fall, occurring in either the inpatient or outpatient setting.
Clinical laboratory databases were used to identify anemia (hemoglobin < 13 g/dL in men and < 12 g/dL in women), thrombocytopenia (platelet count < 90,000), and renal insufficiency (measured by serum creatinine and estimated glomerular filtration rate (eGFR))(11). A laboratory value was considered abnormal from 3 months before until 1 year after the date of the measurement, censored by a preceding or subsequent normal test value. If no results were available within the time window, the test was assumed to be normal, on the rationale that clinicians would order testing if an abnormality were suspected.
Clopidogrel and ticlopidine exposure was determined from pharmacy databases and duration defined from prescription start date to 2 months after the end of the medication supply. Accurate assessment of aspirin and non-steroidal anti-inflammatory drugs (NSAIDs) exposure was not possible since these medications were predominantly obtained without prescription.
We searched computerized databases for primary discharge ICD-9 codes for extracranial hemorrhages (i.e., gastrointestinal, genitourinary, retroperitoneal) and primary and secondary diagnoses of intracranial hemorrhage (intracerebral, subarachnoid, or subdural hemorrhages). Medical charts from potential hemorrhagic events were reviewed by a clinical outcomes committee using a formal study protocol. Only events that occurred during or within 5 days of preceding warfarin exposure were included. Hemorrhages not present on admission that occurred during the hospitalization or as a result of a procedure were excluded. We restricted the analysis to “major hemorrhages”, defined as fatal, requiring transfusion of ≥ 2 units of packed red blood cells, or hemorrhage into a critical anatomic site (e.g., intracranial, retroperitoneal).
All follow-up periods on warfarin were included in the analysis. Cox proportional hazards regression models using time-varying covariates were used to examine the relationships between potential risk factors and hemorrhage outcomes with time origins set at the beginning of each follow-up period. Risk factor values were updated over follow-up with the proviso that no values were changed within seven days of an endpoint bleeding event.
The cohort was randomly divided into a split-sample “derivation” and “validation” cohort using a 2:1 ratio; models using time-varying covariates were developed in the derivation cohort and performance tested in the validation cohort. Covariates associated with major hemorrhage with a hazard ratio ≥ 1.5 were considered for potential inclusion in the final multivariable model. Since variable selection procedures may produce unstable results, we applied backward elimination selection on 1000 bootstrap samples from the derivation set, with p ≥ 0.05 as the significance threshold for removing a variable. Final model variables were those selected in > 50% of bootstrap samples(12). Model discrimination was evaluated using the c-index(13) and calibration by the goodness-of-fit test. Variables from the final multivariable Cox regression model were converted to a risk score, with points assigned to each predictor approximately proportional to the magnitude of the regression coefficients rounded to the nearest integer.
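The c-index used here to evaluate discrimination is the probability that, of two comparable patients, the one assigned the higher risk score experiences the event earlier. Below is a minimal illustrative sketch (the function name `c_index` is ours, not the study's software); it handles right-censoring only by discarding pairs in which the subject with the shorter follow-up did not have an event, which is the standard Harrell-style pairwise rule in its simplest form:

```python
from itertools import combinations

def c_index(scores, times, events):
    """Simplified concordance index for right-censored data.

    scores: predicted risk scores (higher = riskier)
    times:  observed follow-up times
    events: 1 if the subject had the event, 0 if censored
    A pair is usable only if the subject with the shorter follow-up
    had an event; tied scores count as half-concordant.
    """
    concordant = 0.0
    usable = 0
    for i, j in combinations(range(len(scores)), 2):
        # order so subject `a` has the shorter follow-up
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if not events[a]:
            continue  # shorter follow-up was censored: pair not usable
        usable += 1
        if scores[a] > scores[b]:
            concordant += 1.0       # higher score bled first: concordant
        elif scores[a] == scores[b]:
            concordant += 0.5       # tied scores
    return concordant / usable
```

A value of 0.5 indicates discrimination no better than chance, and 1.0 indicates perfect rank ordering; the ATRIA continuous score's 0.74 falls between these.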
The risk score was collapsed into “low,” “intermediate,” and “high” risk groups based on the observed major hemorrhage rate. Since there are no definitive or clinically-determined cut-points for rates of major hemorrhage at which anticoagulation would be universally contraindicated, we chose thresholds in our point score that appeared to optimally aggregate low and high risk groups. We then applied the ATRIA model and six other risk schemes using time-varying covariates to the ATRIA cohort to compare model performance using the c-index, risk stratification capacity (the proportion of the cohort assigned to clinically meaningful risk categories), and a recently published extension of the net reclassification improvement (NRI) metric(14). For NRI calculations, all schemes were compared using a low/intermediate/high categorization to provide a common scale. This study was approved by the respective institutional review boards.
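Categorical NRI rewards a new scheme for moving patients who bleed into higher risk categories and patients who do not bleed into lower ones. The study used a recently published survival-data extension of NRI; the sketch below implements only the basic categorical form, for illustration (the function name is ours):

```python
def categorical_nri(old_cat, new_cat, event):
    """Basic categorical net reclassification improvement.

    old_cat, new_cat: risk categories as ordered integers
                      (e.g., 0 = low, 1 = intermediate, 2 = high)
    event:            1 if the subject had a major hemorrhage, else 0
    NRI = [P(up|event) - P(down|event)]
        + [P(down|no event) - P(up|no event)]
    """
    up_e = down_e = n_e = 0
    up_ne = down_ne = n_ne = 0
    for old, new, ev in zip(old_cat, new_cat, event):
        moved_up = new > old
        moved_down = new < old
        if ev:
            n_e += 1
            up_e += moved_up      # bleeder correctly moved to higher risk
            down_e += moved_down  # bleeder incorrectly moved lower
        else:
            n_ne += 1
            up_ne += moved_up     # non-bleeder incorrectly moved higher
            down_ne += moved_down # non-bleeder correctly moved lower
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne
```

A positive NRI means the new categorization reclassifies subjects in the correct direction more often than not, relative to the comparator scheme.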
There were 9,186 individuals in the ATRIA cohort contributing 32,888 person-years of warfarin exposure (median warfarin duration = 3.5 years [IQR: 1.2-6.0]). Because anticoagulated patients could discontinue warfarin and subsequently resume therapy, individual patients could contribute multiple periods on warfarin; 2,790 patients (30%) had > 1 period on warfarin and 709 patients (8%) had > 2 periods on warfarin.
We identified 461 validated incident warfarin-associated major hemorrhages, an annualized rate of 1.40% per year. The derivation cohort contained 307 major hemorrhages among 6,123 patients and the validation cohort 154 major hemorrhages among 3,063 patients.
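The annualized rate follows directly from the event count and accumulated person-time reported above:

```python
# Incidence rate of first major hemorrhage on warfarin in the ATRIA cohort
events = 461           # validated first major hemorrhages
person_years = 32888   # person-years of warfarin exposure

rate = 100 * events / person_years  # percent per person-year
# ~1.40% per year, matching the reported annualized rate
```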
Table 2 compares the characteristics of subjects with and without major hemorrhage in the derivation cohort. Variables associated with major hemorrhage at a hazard ratio ≥ 1.5 on bivariate analysis were considered for the final model and tested in 1000 bootstrap samples. Among the various definitions of renal disease and prior hemorrhage, “severe renal disease” (defined as eGFR < 30 mL/min or dialysis-dependent) and “any prior hemorrhage diagnosis (all-cause)” were selected over alternative definitions based on bootstrap analysis. Five final variables emerged in > 50% of bootstrap samples: anemia, severe renal disease, age ≥ 75 years, any prior hemorrhage diagnosis, and diagnosed hypertension. Based on the final model's regression coefficients, anemia and severe renal disease were assigned 3 points, age ≥ 75 years 2 points, and prior hemorrhage and diagnosed hypertension 1 point each, resulting in a risk scheme with a possible range of 0 – 10 points (Table 3).
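The point assignments above can be sketched as a simple scoring function (the function and argument names are ours; the weights are those given in the text and Table 3):

```python
def atria_score(anemia, severe_renal_disease, age_ge_75,
                prior_hemorrhage, hypertension):
    """ATRIA bleeding risk score, 0-10 points.

    Each argument is a boolean for the presence of that risk factor:
    anemia (3 pts), severe renal disease, i.e. eGFR < 30 mL/min or
    dialysis (3 pts), age >= 75 years (2 pts), any prior hemorrhage
    diagnosis (1 pt), diagnosed hypertension (1 pt).
    """
    return (3 * anemia
            + 3 * severe_renal_disease
            + 2 * age_ge_75
            + 1 * prior_hemorrhage
            + 1 * hypertension)
```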
When applied to the validation set, the model generated regression coefficients similar to those in the derivation dataset, with good discrimination (c-index 0.74 [0.70-0.78]) and acceptable calibration by the goodness-of-fit test (p = 0.29). Bleeding rates in the combined cohort ranged from 0.4% to 17.3% per year (Table 4). The continuous risk score was collapsed to a 3-category scheme, where ‘low risk’ (0-3 points) patients had hemorrhage rates of < 1% per year, and ‘high risk’ (5 – 10 points) had rates > 5% per year (Table 4). The high risk category effectively concentrated hemorrhage events such that 42% of hemorrhage events occurred in only 10.2% of cohort person-years. The vast majority of remaining patients and person-years were low risk (Figure).
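The 3-category collapse described above can be sketched as follows (function name ours; thresholds and observed rates from the text and Table 4):

```python
def atria_category(score):
    """Collapse the 0-10 ATRIA point score into the three risk strata."""
    if score <= 3:
        return "low"            # observed major hemorrhage rate < 1%/yr
    if score == 4:
        return "intermediate"   # observed rate 2.6%/yr
    return "high"               # 5-10 points; observed rate > 5%/yr
```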
Compared with other risk schemes, the ATRIA risk score had the highest c-index point estimates for both the full range of scores and the 3-category scale and identified a comparatively large proportion of the cohort as either low or high-risk (Table 5). In contrast, other risk schemes either led to much smaller fractions of the cohort categorized as high-risk or observed relatively low event rates in their high-risk category. The ATRIA scheme led to sizable net reclassification improvement when compared to all other risk schemes, ranging from 27.7% to 56.6% improvement (Table 5).
Accurate prediction of hemorrhage risk on warfarin is vital to the anticoagulation decision. Based on five easily available clinical variables, the ATRIA score reflects the experience of a large, diverse group of patients with atrial fibrillation assembled from community care and followed for a longer time period than in prior studies. The model development used rigorous contemporary methods, including split-sample testing and bootstrap sampling, to support internal validity.
When collapsed into a 3-category risk score, the ATRIA risk scheme was able to identify sizable proportions of patients who fell into the most clinically meaningful categories, i.e., low or high risk for hemorrhage. The low-risk category, accounting for 83% of follow-up, had an observed major hemorrhage rate of < 1% per year. The high-risk category represented only 10.2% of patient follow-up yet accounted for 42% of the major bleeding events. The ATRIA scheme led to improvements in accurate net reclassification when compared to alternative schemes. The c-index of 0.74, while not representing perfect discrimination, indicates good performance for a prediction model and compares favorably to other widely used risk stratification schemes such as the CHADS2 stroke risk index, which has a c-index of ~0.6(15). Certainly, identifying novel predictors of bleeding and improving current methods of risk stratification are important areas of further investigation.
The variables in our model have each been linked to increased hemorrhage risk in prior studies(5-10). Anemia was strongly associated with future bleeding risk. Although we were unable to determine the mechanism of association, anemia may reflect a predisposition to hemorrhage or recent subclinical hemorrhage. Severe renal disease was also a powerful predictor of hemorrhage risk. All-cause prior bleeding was associated with future bleeding, and presumably identifies patients with a potential bleeding lesion or diathesis. Finally, older age and hypertension were independently associated with hemorrhage risk. Similar to other hemorrhage risk schemes, this analysis focused on all-cause major hemorrhage, both intracranial and extracranial. Although intracranial hemorrhages are the most important outcomes, the rarity of such events makes their risk prediction challenging(16). High quality models to predict intracranial hemorrhage are vitally needed.
Our risk model is clinically applicable when counseling patients about the relative benefits and harms of anticoagulation. Particularly as newer, easier-to-administer anticoagulants become available, accurate estimates of hemorrhage risk will strongly influence the anticoagulation decision. Our risk score may not affect the anticoagulation decision for most patients at high risk for stroke, since they derive a large benefit from anticoagulation. However, bleeding risk is considerably more influential for patients at moderate or lower stroke risk. For such patients, our bleeding risk estimates can be incorporated into formal decision-analysis models or used to counsel individual patients about their estimated risks of stroke and bleeding, an informative addition to individualized decision-making.
There are several limitations to our analysis. Our assessment of clinical risk factors was based on computerized databases that did not have information on several covariates such as measurements of blood pressure and genotype. We lacked information about non-prescription use of aspirin or NSAIDs. Although the hemorrhage rate in ATRIA was generally lower than described by the other risk schemes, the rates are similar to some recent randomized trials(17). Finally, it will be important to test the ATRIA risk scheme in a separate population. Although internal validation reduces the likelihood of chance playing a major role in development of our model, external validity needs to be tested empirically.
The risk of anticoagulant-associated hemorrhage is a major deterrent to more widespread use of anticoagulants. Risk stratification schemes can help clinicians estimate the magnitude of hemorrhage risk when prescribing or continuing anticoagulant therapy. Such schemes can also provide important information for comparing the hemorrhage risk of patients enrolled in clinical studies or when comparing the safety of different anticoagulation strategies(18).
Funding: This study was supported by the National Institute on Aging (R01 AG15478 and K23 AG028978), the National Heart, Lung and Blood Institute (U19 HL91179 and RC2HL101589), the Eliot B. and Edith C. Shoolman fund of the Massachusetts General Hospital (Boston, MA), and a research grant from Daiichi Sankyo, Inc. The funding sources had no role in study design, data collection, data analysis, data interpretation, or preparation of this manuscript. Dr. Fang had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Potential Financial Conflicts of Interest: D.E. Singer has consulted for Boehringer Ingelheim, Daiichi Sankyo, Inc., Johnson & Johnson, Inc., Merck and Co., Bayer Schering Pharma, and Sanofi Aventis, Inc., and has received research support from Daiichi Sankyo, Inc. A.S. Go has received research support from Johnson & Johnson, Inc.