Models used in cost-effectiveness analysis (CEA) of screening programs may include 1 or many birth cohorts of patients. As many screening programs involve multiple screens over many years for each birth cohort, the actual implementation of screening often involves multiple concurrent recipient cohorts. Consequently, some advocate modeling all recipient cohorts rather than 1 birth cohort, arguing it more accurately represents actual implementation. However, reporting the cost-effectiveness estimates for multiple cohorts on aggregate rather than per cohort will fail to account for any heterogeneity in cost-effectiveness between cohorts. Such heterogeneity may be policy relevant where there is considerable variation in cost-effectiveness between cohorts, as in the case of cancer screening programs with multiple concurrent recipient birth cohorts, each at different stages of screening at any one point in time.
The purpose of this study is to illustrate the potential disadvantages of aggregating cost-effectiveness estimates over multiple cohorts, without first considering the disaggregate estimates.
We estimate the cost-effectiveness of 2 alternative cervical screening tests in a multicohort model and compare the aggregated and per-cohort estimates. We find instances in which the policy choices suggested by the aggregate and per-cohort results differ. We use this example to illustrate a series of potential disadvantages of aggregating CEA estimates over cohorts.
Recent recommendations that CEAs should consider the cost-effectiveness of more than just a single cohort appear justified, but the aggregation of estimates across multiple cohorts into a single estimate does not.
cohort model; multicohort model; population model
The high prevalence of childhood obesity has raised concerns regarding long-term patterns of adult health and has generated calls for obesity screening of young children. This study examined patterns of obesity and the predictive utility of obesity screening for children of different ages in terms of adult health outcomes. Using the National Longitudinal Survey of Youth, the Panel Study of Income Dynamics, and the National Health and Nutrition Examination Surveys, we estimated the sensitivity, specificity, and predictive value of childhood BMI to identify 2-, 5-, 10-, or 15-year-olds who will become obese adults. We constructed models assessing the relationship of childhood BMI to obesity-related diseases through middle age, stratified by sex and race/ethnicity. Twelve percent of 18-year-olds were obese. While 50% of these adolescents would not have been identified by screening at age 5, only 9% would have been missed at age 15. Approximately 70% of obese children at age 5 became non-obese by age 18. The predictive utility of obesity screening below the age of 10 was low, even when maternal obesity was also included. The elevated risk of diabetes, obesity, and hypertension in middle age predicted by obesity at age 15 was significantly higher than that predicted by obesity at age 5 (e.g., the RR of diabetes for obese white male 15-year-olds was 4.5; for 5-year-olds, it was 1.6). Early childhood obesity assessment adds limited predictive utility to strategies that also include later childhood assessment. Targeted approaches in later childhood or universal strategies to prevent unhealthy weight gain should be considered.
Child; adolescent; adult; obesity; risk assessment; type 2 diabetes mellitus; hypertension; forecasting
We evaluated the cost-effectiveness of adjuvant chemotherapy using 5-fluorouracil, leucovorin (5FU/LV), and oxaliplatin (FOLFOX) compared with 5FU/LV alone and 5FU/LV compared with observation alone for patients who had resected stage II colon cancer.
We developed two Markov models to represent the adjuvant chemotherapy and follow-up periods and a single Markov model to represent the observation group. We used calibration to estimate the transition probabilities among different toxicity levels. The base case considered 60-year-old patients who had undergone an uncomplicated hemicolectomy for stage II colon cancer and were medically fit to receive 6 months of adjuvant chemotherapy. We measured health outcomes in quality-adjusted life-years (QALYs) and estimated costs in 2007 US$.
In the base case, adjuvant chemotherapy with the FOLFOX regimen had an incremental cost-effectiveness ratio (ICER) of $54,359/QALY compared with the 5FU/LV regimen, and the 5FU/LV regimen had an ICER of $14,584/QALY compared with the observation group from the third-party payer perspective. The ICER values were most sensitive to the 5-year relapse probability, the cost of adjuvant chemotherapy, and the discount rate for the FOLFOX arm, whereas the ICER value of 5FU/LV was most sensitive to the 5-year relapse probability, the 5-year survival probability, and the relapse cost. The probabilistic sensitivity analysis indicates that the ICER of 5FU/LV is less than $50,000/QALY with a probability of 99.62%, and that the ICER of FOLFOX compared with 5FU/LV is less than $50,000/QALY and $100,000/QALY with probabilities of 44.48% and 97.24%, respectively.
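The ratios above follow the standard ICER definition: incremental cost divided by incremental effectiveness. A minimal sketch, using hypothetical costs and QALYs rather than the study's inputs:

```python
# Hypothetical illustration of the incremental cost-effectiveness ratio (ICER);
# the costs and QALYs below are invented for illustration, not taken from the study.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per incremental QALY: (C1 - C0) / (E1 - E0)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# e.g., a regimen costing $40,000 more and adding 0.8 QALYs:
print(round(icer(90_000, 50_000, 11.0, 10.2)))  # 40,000 / 0.8 = 50,000 $/QALY
```

A strategy is then judged against a willingness-to-pay threshold (here, the $50,000/QALY and $100,000/QALY benchmarks cited in the abstract).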
While adjuvant chemotherapy with 5FU/LV is cost-effective at all ages for patients who have undergone an uncomplicated hemicolectomy for stage II colon cancer, FOLFOX is not likely to be cost-effective compared with 5FU/LV.
Colon Cancer; Stage II; Cost-effectiveness; Markov Model; Calibration
Almost a decade ago Morton & Torgerson (p. 1084) indicated that perceived medical benefits could be due to “…regression-to-the-mean.” Despite this caution, regression-to-the-mean “…effects on the identification of changes in institutional performance do not seem to have been considered previously in any depth” (Jones & Spiegelhalter, p. 1646). In response, Jones & Spiegelhalter provide a methodology to adjust for regression-to-the-mean when modeling recent changes in institutional performance for single-variable quality indicators (QIs). In our view, Jones & Spiegelhalter thus provide a breakthrough methodology for performance measures. At the same time, in the interests of parsimony, it is useful to aggregate individual QIs into a composite score. Our question is: Can we develop and demonstrate a methodology that extends the regression-to-the-mean literature to composite quality indicators? Using a latent variable modeling approach, we extend the methodology to the composite indicator case. We demonstrate the approach on four indicators collected by the National Database of Nursing Quality Indicators® (NDNQI®). A simulation study further provides a “proof of concept.”
provider profiling; individual changes; NDNQI; regression-to-the-mean; Bayesian Analysis
Increasingly, women with a strong family history of breast cancer are seeking genetic testing as a starting point to making significant decisions regarding management of their cancer risks. Individuals who are found to be carriers of a BRCA1 or BRCA2 mutation have a substantially elevated risk for breast cancer and are frequently faced with the decision of whether or not to undergo risk reducing mastectomy.
In order to provide BRCA1/2 carriers with ongoing decision support for breast cancer risk management, a computer-based interactive decision aid was developed and tested against usual care in a randomized controlled trial.
Following genetic counseling, 214 female (aged 21-75) BRCA1/2 mutation carriers were randomized to Usual Care (UC; N=114) or Usual Care plus Decision Aid (DA; N=100) arms. UC participants received no further intervention; DA participants were sent the CD-ROM based decision aid to view at home.
Main Outcome Measures
The authors measured general distress, cancer-specific distress, and genetic testing-specific distress at 1-, 6-, and 12-month follow-up time points, post-randomization.
Longitudinal analyses revealed a significant impact of the DA on cancer-specific distress (B = 5.67, z = 2.81, p = 0.005), which varied over time (DA group by time: B = -2.19, z = -2.47, p = 0.01), and on genetic testing-specific distress (B = 5.55, z = 2.46, p = 0.01), which also varied over time (DA group by time: B = -2.46, z = -2.51, p = 0.01). Individuals randomized to UC reported significantly decreased distress in the month following randomization, whereas individuals randomized to the DA maintained their post-disclosure distress over the short term. By 12 months, the overall decrease in distress in the two groups was similar.
This report provides new insight into the long-term longitudinal effects of DAs.
The objective of this study was to evaluate the feasibility and outcomes of incorporating value of information (VOI) analysis into a stakeholder-driven research prioritization process in a US-based setting.
Within a program to prioritize comparative effectiveness research areas in cancer genomics, over a period of 7 months, we developed decision-analytic models and calculated upper-bound VOI estimates for three previously selected genomic tests. Thirteen stakeholders representing patient advocates, payers, test developers, regulators, policy-makers, and community-based oncologists ranked the tests before and after receiving VOI results. The stakeholders were surveyed about the usefulness and impact of the VOI findings.
The estimated upper-bound VOI ranged from $33 million to $2.8 billion across the three research areas. Seven stakeholders indicated that the results modified their rankings, nine stated that the VOI data were useful, and all indicated they would support its use in future prioritization processes. Some stakeholders indicated that the expected value of sample information might be the preferred choice when evaluating specific study designs.
Our study was limited by the size and the potential for selection bias in the composition of the external stakeholder group, lack of a randomized design to assess effect of VOI data on rankings, and the use of expected value of perfect information versus expected value of sample information methods.
Value of information analyses may have a meaningful role in research topic prioritization for comparative effectiveness research in the US, particularly when large differences in VOI across topic areas are identified. Additional research is needed to facilitate the use of more complex value of information analyses in this setting.
Comparing prediction models using reclassification within subgroups at intermediate risk is often of clinical interest.
To demonstrate a method for obtaining an unbiased estimate of the Net Reclassification Improvement (NRI) evaluated only on a subset, or the clinical NRI.
Study Design and Setting
We derived the expected value of the clinical NRI under the null hypothesis using the same principles as the overall NRI. We then conducted a simulation study based on a logistic model with a known predictor and a potential predictor, varying the effects of the known and potential predictors to test the performance of our bias-corrected clinical NRI measure. Finally, data from the Women’s Health Study, a prospective cohort of 24,171 female health professionals, were used as an example of the proposed method.
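For context, the overall NRI that the clinical NRI builds on has a simple closed form. The sketch below uses the textbook categorical definition with hypothetical reclassification counts; it is not the authors' bias-corrected estimator:

```python
# Standard (overall) categorical Net Reclassification Improvement:
# NRI = [P(up|event) - P(down|event)] + [P(down|nonevent) - P(up|nonevent)]
# Counts below are hypothetical, purely for illustration.

def nri(up_events, down_events, n_events,
        up_nonevents, down_nonevents, n_nonevents):
    event_term = (up_events - down_events) / n_events        # net correct upgrades
    nonevent_term = (down_nonevents - up_nonevents) / n_nonevents  # net correct downgrades
    return event_term + nonevent_term

# 100 events: 20 reclassified upward, 10 downward;
# 900 nonevents: 50 upward, 80 downward.
print(nri(20, 10, 100, 50, 80, 900))
```

The clinical NRI discussed in the abstract restricts this calculation to an intermediate-risk subset, which is what introduces the bias the authors correct for.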
Our bias-corrected estimate is shown to have a mean of zero in the null case under a range of simulated parameters, unlike the naïve estimate, which is biased. We also provide two methods for obtaining a variance estimate, both with reasonable type I error rates.
Our proposed method is an improvement over currently used methods of calculating the clinical NRI and is recommended to reduce overly optimistic results.
health utility; cost-effectiveness analysis; caregivers; health related quality of life
Risk perceptions are legitimate targets for behavioral interventions because they can motivate medical decisions and health behaviors. However, some survey respondents may not know (or may not indicate) their risk perceptions. The scope of “don't know” (DK) responding is unknown.
Examine the prevalence and correlates of responding DK to items assessing perceived risk of colorectal cancer.
Two nationally representative, population-based, cross-sectional surveys (2005 National Health Interview Survey [NHIS]; 2005 Health Information National Trends Survey [HINTS]) and one primary care clinic-based survey comprising individuals from low-income communities. Analyses included 31,202 (NHIS), 1,937 (HINTS), and 769 (clinic) individuals.
Five items assessed perceived risk of colorectal cancer. Four of the items differed in format and/or response scale: comparative risk (NHIS, HINTS); absolute risk (HINTS, clinic), and “likelihood” and “chance” response scales (clinic). Only the clinic-based survey included an explicit DK response option.
“Don't know” responding was 6.9% (NHIS), 7.5% (HINTS-comparative), and 8.7% (HINTS-absolute). In the clinic sample, “don't know” responding was 49.1% and 69.3% for the “chance” and “likelihood” response scales, respectively. Correlates of DK responding were characteristics generally associated with disparities (e.g., low education), but the pattern of results varied among samples, question formats, and response scales.
The surveys were developed independently and employed different methodologies and items. Consequently, the results were not directly comparable. There may be multiple explanations for differences in the magnitude and characteristics of DK responding.
“Don't know” responding is more prevalent in populations affected by health disparities. Either not assessing or not analyzing DK responses could further disenfranchise these populations and negatively affect the validity of research and the efficacy of interventions seeking to eliminate health disparities.
Risk perception; measurement; colorectal cancer; item response; disparities
Simulation models designed to evaluate cancer prevention strategies make assumptions on background mortality, the competing risk of death from causes other than the cancer being studied. Researchers often use U.S. life tables and assume homogeneous other-cause mortality rates. However, this can lead to bias because common risk factors such as smoking and obesity also predispose individuals to death from other causes such as cardiovascular disease.
We obtained calendar year-, age-, and sex-specific other-cause mortality rates by removing deaths due to a specific cancer from U.S. all-cause life tables. Prevalence across 12 risk factor groups (3 smoking categories [never, past, and current smoker] and 4 body mass index (BMI) categories [<25, 25-30, 30-35, and 35+ kg/m2]) was estimated from national surveys (National Health and Nutrition Examination Surveys [NHANES], 1971-2004). Using NHANES linked mortality data, we estimated hazard ratios for death by BMI/smoking group using a Poisson regression model. Finally, we combined these results to create 12 sets of BMI- and smoking-specific other-cause life tables for U.S. adults aged 40 and older that can be used in simulation models of lung, colorectal, or breast cancer.
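The combination step described above can be sketched as a simple calibration: choose a baseline rate so that the prevalence-weighted, hazard-ratio-scaled group rates reproduce the aggregate life-table rate. The prevalences and hazard ratios below are hypothetical, not the NHANES estimates:

```python
# Split an aggregate other-cause mortality rate into risk-group-specific rates
# such that the prevalence-weighted average reproduces the aggregate.
# Prevalences and hazard ratios are hypothetical illustration values.

def group_rates(overall_rate, prevalence, hazard_ratio):
    # Solve for baseline rate r with sum_g p_g * r * h_g = overall_rate,
    # then scale by each group's hazard ratio.
    baseline = overall_rate / sum(p * h for p, h in zip(prevalence, hazard_ratio))
    return [baseline * h for h in hazard_ratio]

prev = [0.4, 0.35, 0.25]   # e.g., never / past / current smoker
hr = [1.0, 1.3, 2.2]       # relative hazards of other-cause death
rates = group_rates(0.01, prev, hr)

# The weighted average recovers the aggregate life-table rate:
print(sum(p * r for p, r in zip(prev, rates)))  # ~0.01
```

Applied per calendar year, age, and sex, this yields the kind of risk-factor-specific other-cause life tables the abstract describes.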
We found substantial differences in background mortality when accounting for BMI and smoking. Ignoring this heterogeneity in cancer simulation models can lead to underestimation of the competing risk of death for higher-risk individuals (e.g., 60-year-old white obese male smokers) by as much as 45%.
Not properly accounting for competing risks of death may introduce bias when using simulation modeling to evaluate population health strategies for prevention, screening, or treatment. Further research is warranted on how these biases may impact cancer screening strategies targeted to high-risk individuals.
The following is a summary report from a special symposium, entitled “Translating Research into Practice: Setting a Research Agenda for Clinical Decision Tools in Cancer Prevention, Early Detection, and Treatment”, which was held on October 23, 2005 in San Francisco at the Annual Meeting of the Society for Medical Decision Making (SMDM). The symposium was designed to answer the question: “What are the top two research priorities in the field of patients’ cancer-related decision aids?” After introductory remarks by Dr. Barry, each of four panelists (Drs. Hilary Llewellyn-Thomas, Ellen Peters, Laura Siminoff, and Dale Collins) addressed the question and provided their rationale during prepared remarks. The moderator, Dr. Michael Barry, then facilitated a discussion between the panelists, with input from the audience, to further explore and add to the various proposed research questions. Finally, Dr. Amber Barnato conducted a simple vote count (see Table 1) to prioritize the panelists’ and the audience’s recommendations.
US colorectal cancer screening guidelines for people at average risk for colorectal cancer endorse multiple screening options and recommend that screening decisions reflect individual patient preferences.
We used the Analytic Hierarchy Process (AHP) to ascertain the decision priorities of people at average risk for colorectal cancer attending primary care practices in Rochester, NY; Birmingham, AL; and Indianapolis, IN. The analysis included four decision criteria, three sub-criteria, and ten options.
484 people completed the study; 66% were female, 49% were African-American, 9% had low literacy skills, and 27% had low numeracy skills. Overall, preventing cancer was given the highest priority (mean priority 55%), followed by avoiding screening test side effects (mean priority 17%), minimizing false positive test results (mean priority 15%), and the combined priority of screening frequency, test preparation, and the test procedure(s) (mean priority 14%). Hierarchical cluster analysis revealed six distinct priority groupings containing multiple instances of decision priorities that differed from the average value by a factor of four or more. More than 90% of the study participants fully understood the concepts involved, 79% met AHP analysis quality standards, and 88% were willing to use similar methods to help make important healthcare decisions.
These results highlight the need to facilitate incorporation of patient preferences into colorectal cancer screening decisions. The large number of study participants able and willing to perform the complex AHP analysis used for this study suggests that the AHP is a useful tool for identifying the patient-specific priorities needed to ensure that screening decisions appropriately reflect individual patient preferences.
We developed preference-based and summated scale scoring for the Testing Morbidities Index (TMI) classification, which addresses short-term effects on quality of life from diagnostic testing before, during, and after a testing procedure.
The two TMI value functions utilize multiattribute value techniques; one is patient-based and the other takes a societal perspective. A total of 206 breast biopsy patients and 466 community (societal) subjects informed the models. Given the lack of standard short-term methods for this application, we utilized the visual analog scale (VAS). Waiting trade-off (WTO) tolls provided an additional option for linear transformation of the TMI. We randomized participants to one of three surveys: the first derived weights for generic testing morbidity attributes and levels of severity with the VAS; the second developed VAS values and WTO tolls for linear transformation of the TMI to a dead-healthy scale; the third addressed initial validation in a specific test (breast biopsy). 188 patients and 425 community subjects participated in initial validation, comparing direct VAS and WTO values to the TMI. Alternative TMI scoring as a non-preference summated scale was included, given evidence of construct and content validity.
The patient model can use an additive function, while the societal model is multiplicative. Direct VAS and the VAS-scaled TMI were correlated across modeling groups (r=0.45 to 0.62) and agreement was comparable to the value function validation of the Health Utilities Index 2. Mean Absolute Difference (MAD) calculations showed a range of 0.07–0.10 in patients and 0.11–0.17 in subjects. MAD for direct WTO tolls compared to the WTO-scaled TMI varied closely around one quality-adjusted life day.
The TMI shows initial promise in measuring short-term testing-related health states.
Good decisions depend on an accurate understanding of the comparative effectiveness of decision alternatives. The best way to convey the data needed to support these comparisons is unknown.
To determine how well five commonly used data presentation formats convey comparative effectiveness information.
Internet survey using a factorial design.
279 members of an online survey panel.
Study participants compared outcomes associated with three hypothetical screening test options relative to five possible outcomes with probabilities ranging from 2 per 5,000 (0.04%) to 500 per 1,000 (50%). Data presentation formats included a table, a “magnified” bar chart, a risk scale, a frequency diagram, and an icon array.
Outcomes included the number of correct ordinal judgments regarding the more likely of two outcomes, the ratio of perceived versus actual relative likelihoods of the paired outcomes, the inter-subject consistency of responses, and perceived clarity.
The mean number of correct ordinal judgments was 12 of 15 (80%), with no differences among data formats. On average, there was a 3.3-fold difference between perceived and actual likelihood ratios (95% CI, 3.0 to 3.6). Comparative judgments based on flow charts, icon arrays, and tables were all significantly more accurate and consistent than those based on risk scales and bar charts (p < 0.001). The most clearly perceived formats were the table and the flow chart. Low subjective numeracy was associated with less accurate and more variable data interpretations and lower perceived clarity for icon displays, bar charts, and flow diagrams.
None of the data presentation formats studied can reliably provide patients, especially those with low subjective numeracy, with an accurate understanding of comparative effectiveness information.
The aim of the current study was to learn how people integrate attitudes about multiple health conditions to make a decision about genetic testing uptake.
This study recruited 294 healthy young adults from a parent research project, the Multiplex Initiative, conducted in a large health care system in Detroit, Michigan. All participants were offered a multiplex genetic test that assessed risk for 8 common health conditions (e.g., type 2 diabetes). Data were collected from a baseline survey, a web-based survey, and at the time of testing.
Averaging attitudes across diseases predicted test uptake but did not contribute beyond the peak attitude, the highest attitude toward testing for any single disease in the set. Peak attitudes alone were sufficient to predict test uptake.
The effects of set size and mode of presentation could not be examined because these factors were constant in the multiplex test offered.
These findings support theories suggesting that people use representative evaluations in attitude formation. The implication of these findings for further developments in genetic testing is that the communication and impact of multiplex testing may need to be considered in the light of a bias toward peak attitudes.
cognitive psychology; judgment and decision psychology; patient choice modeling; social judgment theory
The impact of choice on consumer decision-making is controversial in U.S. health policy.
Our objective was to determine how choice set size influences decision-making among Medicare beneficiaries choosing prescription drug plans.
We randomly assigned members of an internet-enabled panel age 65 and over to sets of prescription drug plans of varying sizes (2, 5, 10, and 16) and asked them to choose a plan. Respondents answered questions about the plan they chose, the choice set, and the decision process. We used ordered probit models to estimate the effect of choice set size on the study outcomes.
Both the benefits of choice, measured by whether the chosen plan is close to the ideal plan, and the costs, measured by whether the respondent found decision-making difficult, increased with choice set size. Choice set size was not associated with the probability of enrolling in any plan.
Medicare beneficiaries face a tension between not wanting to choose from too many options and feeling happier with an outcome when they have more alternatives. Interventions that reduce cognitive costs when choice sets are large may make this program more attractive to beneficiaries.
Medicare Part D; prescription drugs; choice; decision-making; insurance
Electronic personal health records offer a promising way to communicate medical test results to patients. We compared the usability of tables and horizontal bar graphs for presenting medical test results electronically.
We conducted experiments with a convenience sample of 106 community-dwelling adults. In the first experiment, participants viewed either table or bar graph formats (between subjects) that presented medical test results with normal and abnormal findings. In a second experiment, participants viewed table and bar graph formats (within subjects) that presented test results with normal, borderline, and abnormal findings.
Participants required less viewing time when using bar graphs rather than tables. This overall difference was due to superior performance of bar graphs in vignettes with many test results. Bar graphs and tables performed equally well with regard to recall accuracy and understanding. In terms of ease of use, participants did not prefer bar graphs to tables when they viewed only one format. When participants viewed both formats, those with experience with bar graphs preferred bar graphs, and those with experience with tables found bar graphs equally easy to use. Preference for bar graphs was strongest when viewing tests with borderline results.
Compared to horizontal bar graphs, tables required more time and experience to achieve the same results, suggesting that tables can be a more burdensome format to use. The current practice of presenting medical test results in a tabular format merits reconsideration.
electronic personal health record; medical test results; format; usability
The Institute of Medicine (IOM) has called for a new health care paradigm that integrates patient values into discussions of the risks and benefits of treatment. Although cardiovascular disease (CVD) affects one-third of Americans, little is known about how adults regard the potential harms or complications of its treatment.
We sought to determine the preferences of community-dwelling adults for 15 potential harms or complications resulting from treatment of CVD.
In a telephone survey, adults over 18 years of age residing on Long Island, New York, were asked to score their preferences for 15 potential harms or complications of treatment of CVD on a scale from 0 to 100. All statistical analyses were based on nonparametric methods. Multivariable general linear model analyses were performed to identify demographic factors associated with the score assigned to each adverse outcome.
The 807 individuals surveyed generated 723 unique sequences of scores for the 15 outcomes. The ranking of scores from least to most acceptable was stroke, major myocardial infarction (MI), cognitive dysfunction, renal failure, death, prolonged ventilator support, heart failure, angina, sternal wound infection, major bleeding, re-operation, prolonged recovery in a nursing home, cardiac readmission, minor MI and percutaneous coronary intervention. Demographic factors accounted for less than 7% of the observed variation in the score attributed to each outcome.
Individual community-dwelling adults living on Long Island, New York assign unique values to their preferences for potential harms encountered following treatment of CVD. Thus, risk-benefit discussions and treatment decisions regarding CVD should be harmonized to the value system of each individual.
With the increasing complexity of decisions in pediatric medicine, there is a growing need to understand the pediatric decision-making process.
To conduct a narrative review of the current research on parent decision making about pediatric treatments and identify areas in need of further investigation.
Articles presenting original research on parent decision making were identified in MEDLINE (1966 to June 2011) using the terms “decision making,” “parent,” and “child.” We included papers focused on treatment decisions but excluded those focused on information disclosure to children, vaccination, and research participation decisions.
We found 55 papers describing 52 distinct studies, the majority being descriptive, qualitative studies of the decision-making process, with very limited assessment of decision outcomes. Although parents’ preferences for degree of participation in pediatric decision making vary, most are interested in sharing the decision with the provider. In addition to the provider, parents are influenced in their decision making by changes in their child’s health status, other community members, prior knowledge, and personal factors, such as emotions and faith. Parents struggle to balance these influences as well as to know when to include their child in decision making.
Current research demonstrates a diversity of influences on parent decision making and parent decision preferences; however, little is known about decision outcomes or interventions to improve outcomes. Further investigation, using prospective methods, is needed in order to understand how to support parents through the difficult treatment decisions.
randomized trial methodology; risk factor evaluation; population-based studies; scale development/validation; patient decision making; provider decision making; risk communication or risk perception; health state preferences, utilities, and valuations; judgment and decision psychology; informed consent; education; pediatrics
We examined physician diagnostic certainty as one reason for cross-national medical practice variation. Data are from a factorial experiment conducted in the United States, the United Kingdom, and Germany, estimating 384 generalist physicians’ diagnostic and treatment decisions for videotaped vignettes of actor patients depicting a presentation consistent with coronary heart disease (CHD). Despite identical vignette presentations, we observed significant differences across health care systems, with US physicians being the most certain and German physicians the least certain (p < .0001). Physicians were least certain of a CHD diagnosis when patients were younger and female (p < .0086), and there was additional variation by health care system (as represented by country) depending on patient age (p < .0100) and race (p < .0021). Certainty was positively correlated with several clinical actions, including test ordering, prescriptions, referrals to specialists, and time to follow-up.
clinical decision making; medical practice variation; health disparities
Preference-based measures of health-related quality of life all use the same dead = 0.00 to perfect health = 1.00 scale, but there are substantial differences among measures.
The objective is to examine agreement in classifying patients as better, stable, or worse.
The EQ-5D, Health Utilities Index Mark 2 and Mark 3, Quality of Well-Being – Self-Administered, Short-Form 36 (Short-Form 6D), and disease-targeted measures were administered prospectively in two clinical cohorts.
The study was conducted at academic medical centers: University of California, Los Angeles; University of California, San Diego; University of Wisconsin-Madison; and University of Southern California.
Patients undergoing cataract extraction surgery with lens replacement completed the 25-item National Eye Institute Visual Function Questionnaire (NEI-VFQ-25). Patients newly referred to congestive heart failure specialty clinics completed the Minnesota Living with Heart Failure Questionnaire (MLHF).
In both cohorts, subjects completed surveys at baseline and at one and six months. The NEI-VFQ-25 and MLHF were used as gold standards to assign patients to categories of change. Agreement was assessed using the kappa statistic.
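The agreement statistic used here is Cohen's kappa, which contrasts observed agreement with the agreement expected by chance. A minimal sketch with hypothetical counts for the three change categories (better / stable / worse):

```python
# Cohen's kappa for a square contingency table of paired classifications.
# The 3x3 counts below are hypothetical, not data from the study.

def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of cases on the diagonal.
    observed = sum(table[i][i] for i in range(len(table))) / n
    # Chance-expected agreement from the marginal totals.
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / n**2
    return (observed - expected) / (1 - expected)

table = [[30, 10, 5],   # rows: measure A (better/stable/worse)
         [12, 40, 8],   # cols: measure B (better/stable/worse)
         [6, 9, 25]]
print(cohens_kappa(table))
```

Under the Altman criteria cited in the results, the value for this hypothetical table (roughly 0.47) would be "moderate" agreement; the actual study found mostly poor-to-fair values.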
376 cataract patients were recruited. Complete data for baseline and the one-month follow-up were available on all measures for 210 cases. Using criteria specified by Altman, agreement was poor for six of nine pairs of comparisons and fair for three pairs. 160 heart failure patients were recruited. Complete data for baseline and the six-month follow-up were available for 86 cases. Agreement was negligible for five pairs and fair for one.
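Cohen’s kappa, used above, corrects observed agreement between two classifications for the agreement expected by chance, and Altman’s verbal labels grade the resulting value. A minimal sketch of both steps (the better/stable/worse classifications below are hypothetical illustrations, not study data):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[c] * cb[c] for c in set(a) | set(b)) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

def altman_grade(kappa):
    """Altman's (1991) verbal labels for kappa values."""
    bands = [(0.20, "poor"), (0.40, "fair"), (0.60, "moderate"),
             (0.80, "good"), (1.00, "very good")]
    return next(label for cutoff, label in bands if kappa <= cutoff)

# Hypothetical change classifications from a gold standard and a second measure.
gold  = ["better", "stable", "worse", "stable", "better", "stable"]
other = ["better", "stable", "stable", "stable", "worse", "stable"]
k = cohens_kappa(gold, other)
print(round(k, 3), altman_grade(k))  # → 0.429 moderate
```

Here the two measures agree on 4 of 6 patients (p_o ≈ 0.67), but much of that agreement is expected by chance (p_e ≈ 0.42), so kappa is only moderate.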
The study was conducted on selected patients at a few academic medical centers.
The results underscore the lack of interchangeability among different preference-based measures.
Mathematical and simulation models are increasingly used to plan for and evaluate health sector responses to disasters, yet no clear consensus exists regarding best practices for the design, conduct, and reporting of such models. We examined a large selection of published health sector disaster response models to generate a set of best practice guidelines for such models.
We reviewed a spectrum of published disaster response models addressing public health or healthcare delivery, focusing in particular on the type of disaster and response decisions considered, decision makers targeted, choice of outcomes evaluated, modeling methodology, and reporting format. We developed initial recommendations for best practices for creating and reporting such models and refined these guidelines after soliciting feedback from response modeling experts and from members of the Society for Medical Decision Making.
We propose six recommendations for model construction and reporting, inspired by the most exemplary models: health sector disaster response models should (1) address real-world problems; (2) be designed for maximum usability by response planners; (3) strike the appropriate balance between simplicity and complexity; (4) include appropriate outcomes, extending beyond those considered in traditional cost-effectiveness analyses; and (5) be designed to evaluate the many uncertainties inherent in disaster response. Finally, (6) good model reporting is particularly critical for disaster response models.
Quantitative models are critical tools for planning effective health sector responses to disasters. The recommendations we propose can increase the applicability and interpretability of future models, thereby improving strategic, tactical, and operational aspects of preparedness planning and response.
Disaster Planning; Mass Casualty Incidents; Computer Simulation; Cost-benefit Analysis; Guideline
Shared Decision Making (SDM) and Decision Aids (DAs) increase patients’ involvement in healthcare decisions and enhance satisfaction with their choices. Studies of SDM and DAs have primarily occurred in academic centers and large health systems, but most primary care is delivered in smaller practices, and over 20% of Americans live in rural areas, where poverty, disease prevalence, and limited access to care may increase the need for SDM and DAs.
To explore perceptions and practices of rural primary care clinicians regarding SDM and DAs.
Cross-sectional survey.
Setting and Participants
Primary care clinicians affiliated with the Oregon Rural Practice-based Research Network (ORPRN).
Surveys were returned by 181 of 231 eligible participants (78%); 174 could be analyzed. Two-thirds of participants were physicians, 84% practiced family medicine, and 55% were male. Sixty-five percent of respondents were unfamiliar with the term “SDM,” but once it was defined, 97% reported that they found the approach useful for conditions with multiple treatment options. Over 90% of clinicians perceived helping patients make decisions regarding chronic pain and health behavior change as moderate or hard in difficulty. Although 69% of respondents preferred that patients play an equal role in making decisions, they estimated this happens only 35% of the time. Time was reported as the largest barrier to engaging in SDM (63%). Respondents were receptive to using DAs to facilitate SDM in printed (95%) or web-based (72%) formats, and topic preference varied by clinician specialty and decision difficulty.
Rural clinicians recognized the value of SDM and were receptive to using DAs in multiple formats. Integrating DAs to facilitate SDM into routine patient care may require addressing practice operations and reimbursement.
Primary Care; Translating Research Into Practice; Shared Decision Making – Decision Aid Tools; Decision Aids – Decision Aid Tools; Survey Methods – Statistical Methods