PMC search results — issn:0272-989
Results 1-25 (110)
1.  [No title available] 
PMCID: PMC3818438  PMID: 23811760
2.  [No title available] 
PMCID: PMC3946215  PMID: 24106235
3.  [No title available] 
PMCID: PMC3948210  PMID: 24125789
4.  Too Much of a Good Thing? When to Stop Catch-Up Vaccination 
During the 20th century, deaths from a range of serious infectious diseases decreased dramatically due to the development of safe and effective vaccines. However, infant immunization coverage has increased only marginally since the 1960s, and many people remain susceptible to vaccine-preventable diseases. “Catch-up vaccination” for age groups beyond infancy can be an attractive and effective means of immunizing people who were missed earlier. However, as newborn vaccination rates increase, catch-up vaccination becomes less attractive: the number of susceptible people decreases, so the cost to find and vaccinate each unvaccinated person may increase; additionally, the number of infected individuals decreases, so each unvaccinated person faces a lower risk of infection. This paper presents a general framework for determining the optimal time to discontinue a catch-up vaccination program. We use a cost-effectiveness framework: we consider the cost per quality-adjusted life year gained of catch-up vaccination efforts, as a function of newborn immunization rates over time and consequent disease prevalence and incidence. We illustrate our results with the example of hepatitis B catch-up vaccination in China. We contrast results from a dynamic modeling approach with an approach that ignores the impact of vaccination on future disease incidence. The latter approach is likely to be simpler for decision makers to understand and implement because of lower data requirements.
doi:10.1177/0272989X13493142
PMCID: PMC4247340  PMID: 23858015
vaccine; epidemic control; hepatitis B
5.  Patient Time Requirements for Anticoagulation Therapy with Warfarin 
Background
Most patients receiving warfarin are managed in outpatient office settings or anticoagulation clinics that require frequent visits for monitoring.
Objective
To measure the amount and value of time required of patients for chronic anticoagulation therapy with warfarin.
Design/Participants
Prospective observation of a cohort of adult patients treated at a university-based anticoagulation program.
Measurements
Participants completed a questionnaire and a prospective diary of the time required for 1 visit to the anticoagulation clinic, including travel, waiting, and the clinic visit. The authors reviewed subjects’ medical records to obtain additional information, including the frequency of visits to the anticoagulation clinic. They used the human capital method to estimate the value of time.
Results
Eighty-five subjects completed the study. The mean (median) total time per visit was 147 minutes (123). Subjects averaged 15 visits per year (14) and spent 39.0 hours (29.3) per year on their visits. Other anticoagulation-related activities, such as communication with providers, pharmacy trips, and extra time preparing food, added an average of 52.7 hours (19.0) per year. The mean annual value of patient time spent traveling, waiting, and attending anticoagulation visits was $707 (median $591). The mean annual value when also including other anticoagulation-related activities was $1799 (median $1132).
Conclusions
The time required of patients for anticoagulation visits was considerable, averaging approximately 2.5 hours per visit and almost 40 hours per year. Methods for reducing patient time requirements, such as home-based testing, could reduce costs for patients, employers, and companions.
doi:10.1177/0272989X09343960
PMCID: PMC4181607  PMID: 19773584
anticoagulation; warfarin; time; human capital method; health economics
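The human capital method used in the study above values patient time at an hourly wage rate, so the arithmetic reduces to hours spent times a wage. A minimal sketch, using the paper's mean time figures but an illustrative $18/hour wage that is an assumption, not a value from the study:

```python
def annual_time_cost(visits_per_year, minutes_per_visit,
                     other_hours_per_year, hourly_wage):
    """Human capital method: value patient time at an hourly wage.

    Returns annual hours and dollar value for clinic visits alone and
    for visits plus other anticoagulation-related activities.
    """
    visit_hours = visits_per_year * minutes_per_visit / 60.0
    total_hours = visit_hours + other_hours_per_year
    return {
        "visit_hours": visit_hours,
        "total_hours": total_hours,
        "visit_value": visit_hours * hourly_wage,
        "total_value": total_hours * hourly_wage,
    }

# Mean figures from the abstract (15 visits/year, 147 min/visit, 52.7
# extra hours/year); the $18/hour wage is a hypothetical assumption.
result = annual_time_cost(15, 147, 52.7, 18.0)
```

Note that multiplying the mean visit count by the mean visit length gives roughly 37 hours/year rather than the reported 39.0, since a mean of per-subject products differs from a product of means.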
6.  The Impact of a Novel Computer-Based Decision Aid on Shared Decision-Making for Colorectal Cancer Screening: A Randomized Trial 
Background
Eliciting patients’ preferences within a framework of shared decision-making (SDM) has been advocated as a strategy for increasing colorectal cancer (CRC) screening adherence. Our objective was to assess the effectiveness of a novel decision aid on SDM in the primary care setting.
Methods
An interactive, computer-based decision aid for CRC screening was developed and evaluated within the context of a randomized controlled trial. A total of 665 average-risk patients (mean age, 57 years; 60% female; 63% Black, 6% Hispanic) were allocated to one of three arms: decision aid alone, decision aid plus personalized risk assessment, or control. The interventions were delivered just prior to a scheduled primary care visit. Outcome measures (patient preferences, knowledge, satisfaction with the decision-making process [SDMP], concordance between patient preference and test ordered, and intentions) were evaluated using pre/post-study visit questionnaires and electronic scheduling.
Results
Overall, 95% of patients in the intervention arms identified a preferred screening option based on values placed on individual test features. Mean cumulative knowledge, SDMP and intention scores were significantly higher for both intervention groups compared with the control group. Concordance between patient preference and test ordered was 59%. Patients who preferred colonoscopy were more likely to have a test ordered than those who preferred an alternative option (83% vs. 70%; P<0.01). Intention scores were significantly higher when the test ordered reflected patient preferences.
Conclusions
Our interactive computer-based decision aid facilitates SDM but overall effectiveness is determined by the extent to which providers comply with patient preferences.
doi:10.1177/0272989X10369007
PMCID: PMC4165390  PMID: 20484090
7.  The Numeracy Understanding in Medicine Instrument (NUMi): A Measure of Health Numeracy Developed Using Item Response Theory 
Background
Health numeracy can be defined as the ability to understand medical information presented with numbers, tables and graphs, probability, and statistics and to use that information to communicate with one’s health care provider, take care of one’s health, and participate in medical decisions.
Objective
To develop the Numeracy Understanding in Medicine Instrument (NUMi) using Item Response Theory scaling methods.
Design
A 20 item test was formed drawing from an item bank of numeracy questions. Items were calibrated using responses from 1000 participants and a 2 parameter Item Response Theory (IRT) model. Construct validity was assessed by comparing scores on the NUMi to established measures of print and numeric health literacy, mathematic achievement, and cognitive aptitude.
Participants
Community and clinical populations in the Milwaukee and Chicago metropolitan areas.
Results
Twenty-nine percent of the 1000 respondents were Hispanic, 24% Non-Hispanic white, and 42% Non-Hispanic black. Forty-one percent (41%) had no more than a high school education. The mean score on the NUMi was 13.2 (SD 4.6) with a Cronbach’s alpha of 0.86. Difficulty and discrimination IRT parameters of the 20 items ranged from −1.70 to 1.45 and 0.39 to 1.98, respectively. Performance on the NUMi was strongly correlated with the WRAT-arithmetic test (0.73, p<0.001), the Lipkus expanded numeracy scale (0.69, p<0.001), the Medical Data Interpretation Test (0.75, p<0.001), and the Wonderlic Cognitive Ability Test (0.82, p<0.001). Performance was moderately correlated to the Short Test of Functional Health Literacy (0.43, p<0.001).
Limitations
The NUMi was found to be most discriminating among respondents with a lower than average level of health numeracy.
Conclusions
The NUMi can be applied in research and clinical settings as a robust measure of the health numeracy construct.
doi:10.1177/0272989X12447239
PMCID: PMC4162626  PMID: 22635285
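The 2-parameter IRT model behind the NUMi is the two-parameter logistic (2PL), in which each item has a difficulty and a discrimination. A sketch of the item characteristic curve, using parameter ranges quoted in the abstract (the code itself is only an illustration, not the authors' calibration):

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL IRT model: probability that a respondent with latent
    numeracy theta answers correctly an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Item parameters in the NUMi span difficulty -1.70 to 1.45 and
# discrimination 0.39 to 1.98 (per the abstract); theta = 0 is an
# average respondent on the latent scale.
p_easy = p_correct_2pl(0.0, 1.98, -1.70)  # easiest high-discrimination item
p_hard = p_correct_2pl(0.0, 1.98, 1.45)   # hardest high-discrimination item
```

When theta equals an item's difficulty b, the model gives exactly a 50% chance of a correct answer, which is what makes b interpretable as difficulty on the same scale as ability.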
8.  Measuring risk perceptions: What does the excessive use of 50% mean? 
Objectives
Risk perceptions are central to good health decisions. People can judge valid probabilities, but use 50% disproportionately. We hypothesized that 50% is more likely than other responses to reflect not knowing the probability, especially among individuals with low education and numeracy, and evaluated the usefulness of eliciting “don’t know” explanations.
Methods
Respondents (n=1020) judged probabilities for “living” or “dying” in the next 10 years, indicating whether they gave a good estimate or did not know the chances. They completed demographics, medical history, and numeracy questions.
Results
Overall, 50% was more likely than other probabilities to be explained as “don’t know” (vs. “a good estimate”). Correlations of using 50% with low education and numeracy were mediated by expressing “don’t know.” Judged probabilities for survival and mortality explained as “don’t know” had lower correlations with age, diseases, and specialist visits.
Conclusions
When judging risks, 50% may reflect not knowing the probability, especially among individuals with low numeracy and education. Probabilities expressed as “don’t know” are less valid. Eliciting uncertainty could benefit theoretical models and educational efforts.
doi:10.1177/0272989X11404077
PMCID: PMC4152727  PMID: 21521797
9.  Estimating the Cost of No-shows and Evaluating the Effects of Mitigation Strategies 
Objective
To measure the cost of non-attendance (“no-shows”) and benefit of overbooking and interventions to reduce no-shows for an outpatient endoscopy suite.
Methods
We used a discrete event simulation model to determine improved overbooking scheduling policies and examine the effect of no-shows on procedure utilization and expected net gain, defined as the difference in expected revenue based on CMS reimbursement rates and variable costs based on the sum of patient waiting time and provider and staff overtime. No-show rates were estimated from historical attendance (18% on average, with a sensitivity range of 12 to 24%). We then evaluated the effectiveness of scheduling additional patients and the effect of no-show reduction interventions on the expected net gain.
Results
The base schedule booked 24 patients per day. The daily expected net gain with perfect attendance is $4,433.32. The daily loss attributed to the base case no-show rate of 18% is $725.42 (16.36% of net gain), ranging from $472.14 to $1,019.29 (10.7% to 23.0% of net gain). Implementing no-show interventions reduced net loss by $166.61 to $463.09 (3.8% to 10.5% of net gain). The overbooking policy of 9 additional patients per day resulted in no loss in expected net gain when compared to the reference scenario.
Conclusions
No-shows can significantly decrease the expected net gain of outpatient procedure centers. Overbooking can help mitigate the impact of no-shows on a suite’s expected net gain and has a lower expected cost of implementation to the provider than intervention strategies.
doi:10.1177/0272989X13478194
PMCID: PMC4153419  PMID: 23515215
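The study's discrete event simulation is far richer than can be shown here, but the core trade-off it evaluates — revenue lost to no-shows versus overtime incurred by overbooking — can be sketched with a toy Monte Carlo model. The 18% no-show rate matches the base case; every other parameter value below is made up for illustration:

```python
import random

def simulate_day(booked, show_prob, gain_per_patient, capacity,
                 overtime_cost_per_patient, rng):
    """One clinic day: each booked patient shows independently with
    probability show_prob; patients beyond capacity incur overtime."""
    shows = sum(rng.random() < show_prob for _ in range(booked))
    overflow = max(0, shows - capacity)
    return shows * gain_per_patient - overflow * overtime_cost_per_patient

def mean_net_gain(booked, show_prob, gain_per_patient, capacity,
                  overtime_cost_per_patient, days=20000, seed=1):
    """Average daily net gain over many simulated days."""
    rng = random.Random(seed)
    total = sum(simulate_day(booked, show_prob, gain_per_patient, capacity,
                             overtime_cost_per_patient, rng)
                for _ in range(days))
    return total / days

# Base schedule of 24 patients vs. a modest overbooking policy; an 18%
# no-show rate (show_prob = 0.82) as in the paper's base case.
base = mean_net_gain(booked=24, show_prob=0.82, gain_per_patient=185.0,
                     capacity=24, overtime_cost_per_patient=90.0)
overbooked = mean_net_gain(booked=27, show_prob=0.82, gain_per_patient=185.0,
                           capacity=24, overtime_cost_per_patient=90.0)
```

With these illustrative numbers, overbooking recovers most of the revenue lost to no-shows while incurring only occasional overtime, mirroring the paper's qualitative finding.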
10.  Estimating a Preference-Based Index from the Clinical Outcomes in Routine Evaluation–Outcome Measure (CORE-OM) 
Medical Decision Making  2013;33(3):381-395.
Background
The Clinical Outcomes in Routine Evaluation–Outcome Measure (CORE-OM) is used to evaluate the effectiveness of psychological therapies in people with common mental disorders. The objective of this study was to estimate a preference-based index for this population using CORE-6D, a health state classification system derived from the CORE-OM consisting of a 5-item emotional component and a physical item, and to demonstrate a novel method for generating states that are not orthogonal.
Methods
Rasch analysis was used to identify 11 emotional health states from CORE-6D that were frequently observed in the study population and are, thus, plausible (in contrast, conventional statistical design might generate implausible states). Combined with the 3 response levels of the physical item of CORE-6D, they generate 33 plausible health states, 18 of which were selected for valuation. A valuation survey of 220 members of the public in South Yorkshire, United Kingdom, was undertaken using the time tradeoff (TTO) method. Regression analysis was subsequently used to predict values for all possible states described by CORE-6D.
Results
A number of multivariate regression models were built to predict values for the 33 health states of CORE-6D, using the Rasch logit value of the emotional state and the response level of the physical item as independent variables. A cubic model with high predictive value (adjusted R2 = 0.990) was selected to predict TTO values for all 729 CORE-6D health states.
Conclusion
The CORE-6D preference-based index will enable the assessment of cost-effectiveness of interventions for people with common mental disorders using existing and prospective CORE-OM data sets. The new method for generating states may be useful for other instruments with highly correlated dimensions.
doi:10.1177/0272989X12464431
PMCID: PMC4107796  PMID: 23178639
condition specific; CORE-6D; CORE-OM; health state valuation; mental health; preference-based index; time tradeoff
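The time tradeoff (TTO) valuation used in the survey above has a simple arithmetic core: if a respondent is indifferent between x years in full health and t years in the health state being valued, the state's utility is x/t. A minimal sketch with hypothetical numbers:

```python
def tto_utility(years_full_health, years_in_state):
    """Time tradeoff: utility of a health state, inferred from the
    indifference point between years_full_health spent in full health
    and years_in_state spent in the state being valued."""
    return years_full_health / years_in_state

# A respondent indifferent between 7 years in full health and 10 years
# in a given CORE-6D state implicitly values that state at 0.7.
u = tto_utility(7.0, 10.0)
```

Regression over such elicited values, as in the study, then extends valuations from the 18 directly valued states to all 729 CORE-6D states.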
11.  Health Numeracy: The Importance of Domain in Assessing Numeracy 
Background
Existing research concludes that measures of general numeracy can be used to predict individuals’ ability to assess health risks. We posit that the domain in which questions are posed affects the ability to perform mathematical tasks, raising the possibility of a separate construct of “health numeracy” that is distinct from general numeracy.
Objective
To determine whether older adults’ ability to perform simple math depends on domain.
Design
Community-based participants completed four math questions posed in three different domains: a health domain, a financial domain, and a pure math domain.
Participants
962 individuals aged 55 and older, representative of the community-dwelling U.S. population over age 54.
Results
We found that respondents performed significantly worse when questions were posed in the health domain (54 percent correct) than in either the pure math domain (66 percent correct) or the financial domain (63 percent correct).
Limitations
Our experimental measure of numeracy consisted of only four questions, and it is possible that the apparent effect of domain is specific to the mathematical tasks that these questions require.
Conclusions
These results suggest that health numeracy is strongly related to general numeracy but that the two constructs may not be the same. Further research is needed into how different aspects of general numeracy and health numeracy translate into actual medical decisions.
doi:10.1177/0272989X13493144
PMCID: PMC4106034  PMID: 23824401
12.  Evaluation of markers and risk prediction models: Overview of relationships between NRI and decision-analytic measures 
BACKGROUND
For the evaluation and comparison of markers and risk prediction models, various novel measures have recently been introduced as alternatives to the commonly used difference in the area under the ROC curve (ΔAUC). The Net Reclassification Improvement (NRI) is increasingly popular to compare predictions with one or more risk thresholds, but decision-analytic approaches have also been proposed.
OBJECTIVE
We aimed to identify the mathematical relationships between novel performance measures for the situation that a single risk threshold T is used to classify patients as having the outcome or not.
METHODS
We considered the NRI and three utility-based measures that take misclassification costs into account: difference in Net Benefit (ΔNB), difference in Relative Utility (ΔRU), and weighted NRI (wNRI). We illustrate the behavior of these measures in 1938 women suspected of having ovarian cancer (prevalence 28%).
RESULTS
The three utility-based measures appear transformations of each other, and hence always lead to consistent conclusions. On the other hand, conclusions may differ when using the standard NRI, depending on the adopted risk threshold T, prevalence P and the obtained differences in sensitivity and specificity of the two models that are compared. In the case study, adding the CA-125 tumor marker to a baseline set of covariates yielded a negative NRI yet a positive value for the utility-based measures.
CONCLUSIONS
The decision-analytic measures are each appropriate to indicate the clinical usefulness of an added marker or compare prediction models, since these measures each reflect misclassification costs. This is of practical importance as these measures may thus adjust conclusions based on purely statistical measures. A range of risk thresholds should be considered in applying these measures.
doi:10.1177/0272989X12470757
PMCID: PMC4066820  PMID: 23313931
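Of the three utility-based measures compared above, Net Benefit is the most widely used: it weighs false positives by the odds of the risk threshold, which is how the threshold encodes misclassification costs. A sketch of the standard formula (the treat-all example below uses the case study's 28% prevalence; the counts are illustrative):

```python
def net_benefit(tp, fp, n, threshold):
    """Net Benefit at risk threshold t: NB = TP/n - (FP/n) * t/(1-t).
    The odds t/(1-t) are the misclassification cost ratio implied by
    choosing t as the treatment threshold."""
    return tp / n - (fp / n) * threshold / (1.0 - threshold)

def delta_nb(tp_new, fp_new, tp_old, fp_old, n, threshold):
    """Difference in Net Benefit between a new and a baseline model
    evaluated at the same risk threshold."""
    return (net_benefit(tp_new, fp_new, n, threshold)
            - net_benefit(tp_old, fp_old, n, threshold))

# At a threshold equal to the prevalence (28%), a treat-everyone policy
# has Net Benefit exactly zero: the benefit of the 28 true positives is
# fully offset by the weighted cost of the 72 false positives.
nb_treat_all = net_benefit(tp=28, fp=72, n=100, threshold=0.28)
```

This threshold dependence is exactly why the abstract recommends reporting a range of risk thresholds rather than a single one.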
13.  Effect of Adding a Values Clarification Exercise to a Decision Aid on Heart Disease Prevention: A Randomized Trial 
Background
Experts have called for the inclusion of values clarification (VC) exercises in decision aids (DA) as a means of improving their effectiveness, but little research has examined the effects of such exercises.
Objective
To determine whether adding a VC exercise to a DA on heart disease prevention improves decision making outcomes.
Design
Randomized trial.
Setting
UNC Decision Support Laboratory.
Patients
Adults ages 40–80 with no history of cardiovascular disease.
Intervention
A web-based heart disease prevention DA with or without a VC exercise.
Measurements
Pre- and post-intervention decisional conflict and intent to reduce CHD risk. Post-intervention self-efficacy and perceived values concordance.
Results
We enrolled 137 participants (62 in DA; 75 in VC) with moderate decisional conflict (DA 2.4; VC 2.5) and no baseline differences among groups. After the interventions, we found no clinically or statistically significant differences between groups in decisional conflict (DA 1.8; VC 1.9; absolute difference VC-DA 0.1, 95% CI −0.1 to 0.3), intent to reduce CHD risk (DA 98%; VC 100%; absolute difference VC-DA: 2%, 95% CI −0.02% to 5%), perceived values concordance (DA 95%, VC 92%; absolute difference VC-DA −3%, 95% CI −11 to +5%), or self-efficacy for risk reduction (DA 97%, VC 92%; absolute difference VC-DA −5%, 95% CI −13 to +3%). However, VC tended to change some decisions about risk reduction strategies.
Limitations
Use of a hypothetical scenario. Ceiling effects for some outcomes.
Conclusions
Adding a VC intervention to a DA did not further improve decision making outcomes in a population of highly educated and motivated adults responding to scenario-based questions. Work is needed to determine the effects of VC on more diverse populations and more distal outcomes.
doi:10.1177/0272989X10369008
PMCID: PMC3996504  PMID: 20484089
14.  Effect of Guidelines on Primary Care Physician Use of PSA Screening: Results from the Community Tracking Study Physician Survey 
Background
Little is known about the effect of guidelines that recommend shared decision making on physician practice patterns. The objective of this study was to determine the association between physicians’ perceived effect of guidelines on clinical practice and self-reported prostate-specific antigen (PSA) screening patterns.
Methods
This was a cross-sectional study using a nationally representative sample of 3914 primary care physicians participating in the 1998–1999 Community Tracking Study Physician Survey. Responses to a case vignette that asked physicians what proportion of asymptomatic 60-year-old white men they would screen with a PSA were divided into 3 distinct groups: consistent PSA screeners (screen all), variable screeners (screen 1%–99%), and consistent nonscreeners (screen none). Logistic regression was used to determine the association between PSA screening patterns and physician-reported effect of guidelines (no effect v. any magnitude effect).
Results
Only 27% of physicians were variable PSA screeners; the rest were consistent screeners (60%) and consistent nonscreeners (13%). Only 8% of physicians perceived guidelines to have no effect on their practice. After adjustment for demographic and practice characteristics, variable screeners were more likely to report any magnitude effect of guidelines on their practice when compared with physicians in the other 2 groups (adjusted odds ratio 1.73; 95% confidence interval = 1.25–2.38; P = 0.001).
Conclusions
Physicians who perceive an effect of guidelines on their practice are almost twice as likely to exhibit screening PSA practice variability, whereas physicians who do not perceive an effect of guidelines on their practice are more likely to be consistent PSA screeners or consistent PSA nonscreeners.
doi:10.1177/0272989X08315243
PMCID: PMC3991564  PMID: 18556635
prostate-specific antigen; mass screening; guidelines; physicians’ practice patterns
15.  Are Providers More Likely to Contribute to Healthcare Disparities Under High Levels of Cognitive Load? How Features of the Healthcare Setting May Lead to Biases in Medical Decision Making 
Systematic reviews of healthcare disparities suggest that clinicians’ diagnostic and therapeutic decision making varies by clinically irrelevant characteristics, such as patient race, and that this variation may contribute to healthcare disparities. However, there is little understanding of the particular features of the healthcare setting under which clinicians are most likely to be inappropriately influenced by these characteristics. This study delineates several hypotheses to stimulate future research in this area. It is posited that healthcare settings in which providers experience high levels of cognitive load will increase the likelihood of racial disparities via 2 pathways. First, providers who experience higher levels of cognitive load are hypothesized to make poorer medical decisions and provide poorer care for all patients, due to lower levels of controlled processing (H1). Second, under greater levels of cognitive load, it is hypothesized that healthcare providers’ medical decisions and interpersonal behaviors will be more likely to be influenced by racial stereotypes, leading to poorer processes and outcomes of care for racial minority patients (H2). It is further hypothesized that certain characteristics of healthcare settings will result in higher levels of cognitive load experienced by providers (H3). Finally, it is hypothesized that minority patients will be disproportionately likely to be treated in healthcare settings in which providers experience greater levels of cognitive load (H4a), which will result in racial disparities due to lower levels of controlled processing by providers (H4b) and the influence of racial stereotypes (H4c). The study concludes with implications for research and practice that flow from this framework.
doi:10.1177/0272989X09341751
PMCID: PMC3988900  PMID: 19726783
healthcare disparities; stereotyping; organizations; race/ethnicity; social cognition; cognitive load
17.  Multicohort Models in Cost-Effectiveness Analysis: Why Aggregating Estimates over Multiple Cohorts Can Hide Useful Information 
Background
Models used in cost-effectiveness analysis (CEA) of screening programs may include 1 or many birth cohorts of patients. As many screening programs involve multiple screens over many years for each birth cohort, the actual implementation of screening often involves multiple concurrent recipient cohorts. Consequently, some advocate modeling all recipient cohorts rather than 1 birth cohort, arguing it more accurately represents actual implementation. However, reporting the cost-effectiveness estimates for multiple cohorts on aggregate rather than per cohort will fail to account for any heterogeneity in cost-effectiveness between cohorts. Such heterogeneity may be policy relevant where there is considerable variation in cost-effectiveness between cohorts, as in the case of cancer screening programs with multiple concurrent recipient birth cohorts, each at different stages of screening at any one point in time.
Objective
The purpose of this study is to illustrate the potential disadvantages of aggregating cost-effectiveness estimates over multiple cohorts, without first considering the disaggregate estimates.
Analysis
We estimate the cost-effectiveness of 2 alternative cervical screening tests in a multicohort model and compare the aggregated and per-cohort estimates. We find instances in which the policy choices suggested by the aggregate and per-cohort results differ. We use this example to illustrate a series of potential disadvantages of aggregating CEA estimates over cohorts.
Conclusions
Recent recommendations that CEAs should consider the cost-effectiveness of more than just a single cohort appear justified, but the aggregation of estimates across multiple cohorts into a single estimate does not.
doi:10.1177/0272989X12453503
PMCID: PMC3606654  PMID: 22927697
cohort model; multicohort model; population model
18.  The utility of childhood and adolescent obesity assessment in relation to adult health 
The high prevalence of childhood obesity has raised concerns regarding long-term patterns of adult health and has generated calls for obesity screening of young children. This study examined patterns of obesity and the predictive utility of obesity screening for children of different ages in terms of adult health outcomes. Using the National Longitudinal Survey of Youth, the Panel Study of Income Dynamics, and the National Health and Nutrition Examination Surveys, we estimated the sensitivity, specificity, and predictive value of childhood BMI to identify 2-, 5-, 10-, or 15-year-olds who will become obese adults. We constructed models assessing the relationship of childhood BMI to obesity-related diseases through middle age stratified by sex and race/ethnicity. Twelve percent of 18-year-olds were obese. While 50% of these adolescents would not have been identified by screening at age 5, 9% would have been missed at age 15. Approximately 70% of obese children at age 5 became non-obese at age 18. The predictive utility of obesity screening below the age of 10 was low, even when maternal obesity was also included. The elevated risk of diabetes, obesity, and hypertension in middle age predicted by obesity at age 15 was significantly higher than at age 5 (e.g., the RR of diabetes for obese white male 15-year-olds was 4.5; for 5-year-olds, it was 1.6). Early childhood obesity assessment adds limited predictive utility to strategies that also include later childhood assessment. Targeted approaches in later childhood or universal strategies to prevent unhealthy weight gain should be considered.
doi:10.1177/0272989X12447240
PMCID: PMC3968272  PMID: 22647830
Child; adolescent; adult; obesity; risk assessment; type 2 diabetes mellitus; hypertension; forecasting
19.  Cost-effectiveness of adjuvant FOLFOX and 5FU/LV chemotherapy for patients with stage II colon cancer 
Purpose
We evaluated the cost-effectiveness of adjuvant chemotherapy using 5-fluorouracil, leucovorin (5FU/LV), and oxaliplatin (FOLFOX) compared with 5FU/LV alone and 5FU/LV compared with observation alone for patients who had resected stage II colon cancer.
Methods
We developed two Markov models to represent the adjuvant chemotherapy and follow-up periods and a single Markov model to represent the observation group. We used calibration to estimate the transition probabilities among different toxicity levels. The base-case considered 60-year-old patients who had undergone an uncomplicated hemicolectomy for stage II colon cancer and was medically fit to receive 6 months of adjuvant chemotherapy. We measured health outcomes in quality-adjusted life-years (QALYs) and estimated costs using 2007 US$.
Results
In the base-case, adjuvant chemotherapy with the FOLFOX regimen had an incremental cost-effectiveness ratio (ICER) of $54,359/QALY compared with the 5FU/LV regimen, and the 5FU/LV regimen had an ICER of $14,584/QALY compared with the observation group from the third-party payer perspective. The ICER values were most sensitive to 5-year relapse probability, cost of adjuvant chemotherapy, and the discount rate for the FOLFOX arm, whereas the ICER value of 5FU/LV was most sensitive to the 5-year relapse probability, 5-year survival probability, and the relapse cost. The probabilistic sensitivity analysis indicates that the ICER of 5FU/LV is less than $50,000/QALY with a probability of 99.62% and that the ICER of FOLFOX as compared to 5FU/LV is less than $50,000/QALY and $100,000/QALY with a probability of 44.48% and 97.24%, respectively.
Conclusion
While adjuvant chemotherapy with 5FU/LV is cost-effective at all ages for patients who had undergone an uncomplicated hemicolectomy for stage II colon cancer, FOLFOX is not likely to be cost-effective as compared to 5FU/LV.
doi:10.1177/0272989X12470755
PMCID: PMC3960917  PMID: 23313932
Colon Cancer; Stage II; Cost-effectiveness; Markov Model; Calibration
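The ICERs reported above follow the standard definition: incremental cost divided by incremental effectiveness. A minimal sketch with hypothetical inputs (the underlying per-arm costs and QALYs are not given in the abstract):

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy A vs. B:
    extra dollars spent per extra QALY gained."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Hypothetical example: strategy A costs $30,000 more and yields 0.55
# more QALYs than B. A payer would compare the resulting ICER against a
# willingness-to-pay threshold such as $50,000/QALY.
example = icer(80000.0, 11.55, 50000.0, 11.0)
```

An ICER is only meaningful relative to the next-best non-dominated strategy, which is why the abstract reports FOLFOX against 5FU/LV and 5FU/LV against observation rather than each against observation alone.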
20.  Identifying Individual Changes in Performance with Composite Quality Indicators while Accounting for Regression-to-the-Mean 
Almost a decade ago Morton & Torgerson (p. 1084) indicated that perceived medical benefits could be due to “…regression-to-the-mean.” Despite this caution, the regression-to-the-mean “…effects on the identification of changes in institutional performance do not seem to have been considered previously in any depth.” (Jones & Spiegelhalter, p. 1646). As a response, Jones & Spiegelhalter provide a methodology to adjust for regression-to-the-mean when modeling recent changes in institutional performance for one variable quality indicators (QIs). Therefore, in our view, Jones & Spiegelhalter provide a breakthrough methodology for performance measures. At the same time, in the interests of parsimony, it is useful to aggregate individual QIs into a composite score. Our question is: Can we develop and demonstrate a methodology that extends the ‘regression-to-the-mean’ literature to allow for composite quality indicators? Using a latent variable modeling approach, we extend the methodology to the composite indicator case. We demonstrate the approach on four indicators collected by the National Database of Nursing Quality Indicators® (NDNQI®). A simulation study further demonstrates its “proof of concept.”
doi:10.1177/0272989X12461855
PMCID: PMC3538092  PMID: 23035127
provider profiling; individual changes; NDNQI; regression-to-the-mean; Bayesian Analysis
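Jones & Spiegelhalter's actual adjustment is more elaborate than can be shown here, but the basic mechanics of a regression-to-the-mean correction — shrinking observed scores toward the grand mean before judging change — can be illustrated with a generic sketch. The reliability parameter and all values below are hypothetical:

```python
def shrink(score, grand_mean, reliability):
    """Shrink an observed score toward the grand mean; reliability in
    [0, 1] is the fraction of observed variation treated as real
    rather than measurement noise."""
    return grand_mean + reliability * (score - grand_mean)

def adjusted_change(baseline, followup, grand_mean, reliability):
    """Change in performance after shrinking both measurements, so an
    extreme baseline no longer inflates the apparent improvement."""
    return (shrink(followup, grand_mean, reliability)
            - shrink(baseline, grand_mean, reliability))

# A unit scoring 60 then 80 against a grand mean of 70: the raw change
# is 20, but with reliability 0.8 the adjusted change shrinks to 16.
raw_change = 80 - 60
adjusted = adjusted_change(60.0, 80.0, 70.0, 0.8)
```

Extending this idea to a composite of several correlated QIs, as the paper does via a latent variable model, requires shrinking on the latent scale rather than indicator by indicator.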
21.  Longitudinal Changes in Patient Distress following Interactive Decision Aid Use among BRCA1/2 Carriers: A Randomized Trial 
Background
Increasingly, women with a strong family history of breast cancer are seeking genetic testing as a starting point to making significant decisions regarding management of their cancer risks. Individuals who are found to be carriers of a BRCA1 or BRCA2 mutation have a substantially elevated risk for breast cancer and are frequently faced with the decision of whether or not to undergo risk reducing mastectomy.
Objective
In order to provide BRCA1/2 carriers with ongoing decision support for breast cancer risk management, a computer-based interactive decision aid was developed and tested against usual care in a randomized controlled trial.
Design
Following genetic counseling, 214 female (aged 21-75) BRCA1/2 mutation carriers were randomized to Usual Care (UC; N=114) or Usual Care plus Decision Aid (DA; N=100) arms. UC participants received no further intervention; DA participants were sent the CD-ROM based decision aid to view at home.
Main Outcome Measures
The authors measured general distress, cancer specific distress and genetic testing specific distress at 1-, 6- and 12-month follow up time points, post-randomization.
Results
Longitudinal analyses revealed a significant longitudinal impact of the DA on cancer specific distress (B = 5.67, z = 2.81, p = 0.005) which varied over time (DA group by time; B = -2.19, z = -2.47, p = 0.01) and on genetic testing specific distress (B = 5.55, z = 2.46, p = 0.01) which also varied over time (DA group by time; B = -2.46, z = -2.51, p = 0.01). Individuals randomized to UC reported significantly decreased distress in the month following randomization, whereas individuals randomized to the DA maintained their post-disclosure distress over the short-term. By 12-months, the overall decrease in distress between the two groups was similar.
Conclusion
This report provides new insight into the long-term longitudinal effects of DAs.
doi:10.1177/0272989X10381283
PMCID: PMC3935602  PMID: 20876346
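The group-by-time effects reported in the trial above can be illustrated with a simple linear model fit to simulated data. This is only a sketch: the coefficients, sample size, noise level, and ordinary-least-squares fit are all assumptions for illustration, not the trial's actual longitudinal analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # simulated observations, not the trial's N

group = rng.integers(0, 2, n)            # 0 = Usual Care, 1 = Decision Aid
time = rng.choice([1.0, 6.0, 12.0], n)   # months post-randomization

# Simulate distress with a group effect that attenuates over time,
# mimicking a negative group-by-time interaction (coefficients are made up).
y = 20.0 + 5.5 * group - 0.3 * time - 2.2 * group * time + rng.normal(0, 3, n)

# Fit distress ~ group + time + group:time by ordinary least squares.
X = np.column_stack([np.ones(n), group, time, group * time])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1]: group main effect; beta[3]: group-by-time interaction
```

A negative interaction coefficient, as in the abstract, means the DA group's excess distress shrinks as follow-up time increases.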
22.  Value of Information Analysis within a Stakeholder-Driven Research Prioritization Process in a US Setting: An Application in Cancer Genomics 
Objective
The objective of this study was to evaluate the feasibility and outcomes of incorporating value of information (VOI) analysis into a stakeholder-driven research prioritization process in a US-based setting.
Methods
Within a program to prioritize comparative effectiveness research areas in cancer genomics, over a period of 7 months, we developed decision-analytic models and calculated upper-bound VOI estimates for three previously selected genomic tests. Thirteen stakeholders representing patient advocates, payers, test developers, regulators, policy-makers, and community-based oncologists ranked the tests before and after receiving VOI results. The stakeholders were surveyed about the usefulness and impact of the VOI findings.
Results
The estimated upper-bound VOI ranged from $33 million to $2.8 billion across the three research areas. Seven stakeholders indicated the results modified their rankings, nine stated the VOI data were useful, and all indicated they would support its use in future prioritization processes. Some stakeholders indicated expected value of sample information might be the preferred choice when evaluating specific study designs.
Limitations
Our study was limited by the size of, and potential for selection bias in, the external stakeholder group; the lack of a randomized design to assess the effect of VOI data on rankings; and the use of expected value of perfect information rather than expected value of sample information methods.
Conclusions
Value of information analyses may have a meaningful role in research topic prioritization for comparative effectiveness research in the US, particularly when large differences in VOI across topic areas are identified. Additional research is needed to facilitate the use of more complex value of information analyses in this setting.
doi:10.1177/0272989X13484388
PMCID: PMC3933300  PMID: 23635833
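The upper-bound VOI estimates computed above are typically population expected values of perfect information (EVPI). A minimal Monte Carlo sketch follows; the net-benefit distributions, strategy labels, and population size are entirely hypothetical assumptions, not the study's models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000

# Hypothetical per-person net monetary benefit draws for two strategies
# (e.g., "use genomic test" vs. "usual care"); all parameters are assumed.
nb = np.column_stack([
    rng.normal(1000.0, 400.0, n_sims),  # strategy A
    rng.normal(1050.0, 400.0, n_sims),  # strategy B
])

# EVPI = E[max over strategies] - max over strategies of E[net benefit]:
# the expected gain from always choosing the truly best strategy.
evpi_per_person = nb.max(axis=1).mean() - nb.mean(axis=0).max()

# Scale by an assumed affected population for a population-level upper bound.
population_evpi = evpi_per_person * 500_000
```

Because EVPI assumes all uncertainty is resolved at once, it upper-bounds the expected value of sample information for any specific study design, which is why stakeholders in the abstract flag EVSI as the finer-grained alternative.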
23.  A Bias Corrected Net Reclassification Improvement for Clinical Subgroups 
Background
Comparing prediction models using reclassification within subgroups at intermediate risk is often of clinical interest.
Objective
To demonstrate a method for obtaining an unbiased estimate of the Net Reclassification Improvement (NRI) evaluated only on a subset, or the clinical NRI.
Study Design and Setting
We derived the expected value of the clinical NRI under the null hypothesis using the same principles as the overall NRI. We then conducted a simulation study based on a logistic model with a known predictor and a potential predictor, varying the effects of both predictors to test the performance of our bias-corrected clinical NRI measure. Finally, data from the Women's Health Study, a prospective cohort of 24,171 female health professionals, were used as an example of the proposed method.
Results
Under a range of simulated parameters, our bias-corrected estimate is shown to have a mean of zero in the null case, unlike the naïve estimate, which is biased. We also provide two methods for obtaining a variance estimate, both with reasonable type I error rates.
Conclusion
Our proposed method is an improvement over currently used methods of calculating the clinical NRI and is recommended to reduce overly optimistic results.
doi:10.1177/0272989X12461856
PMCID: PMC3605042  PMID: 23042826
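The overall NRI that the clinical NRI restricts can be sketched directly. This is the standard category-based NRI, not the paper's bias-corrected subgroup variant, and the cutoffs and data in the usage below are toy values.

```python
import numpy as np

def net_reclassification_improvement(risk_old, risk_new, event, cutoffs):
    """Standard category-based NRI: net upward reclassification among
    events plus net downward reclassification among non-events."""
    cat_old = np.digitize(risk_old, cutoffs)
    cat_new = np.digitize(risk_new, cutoffs)
    up = cat_new > cat_old      # moved to a higher risk category
    down = cat_new < cat_old    # moved to a lower risk category
    ev = np.asarray(event, dtype=bool)
    nri_events = up[ev].mean() - down[ev].mean()
    nri_nonevents = down[~ev].mean() - up[~ev].mean()
    return nri_events + nri_nonevents
```

Restricting this calculation to an intermediate-risk subgroup gives the naïve clinical NRI; as the abstract notes, that naïve estimate is biased under the null, which is what the paper's correction addresses.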
24.  Evidence of spillover effects of illness among household members: EQ-5D scores from a US population sample 
doi:10.1177/0272989X12464434
PMCID: PMC3606811  PMID: 23100461
health utility; cost-effectiveness analysis; care givers; health related quality of life
25.  “Don't Know” Responses to Risk Perception Measures: Implications for Underserved Populations 
Background
Risk perceptions are legitimate targets for behavioral interventions because they can motivate medical decisions and health behaviors. However, some survey respondents may not know (or may not indicate) their risk perceptions. The scope of “don't know” (DK) responding is unknown.
Objective
Examine the prevalence and correlates of responding DK to items assessing perceived risk of colorectal cancer.
Methods
Two nationally representative, population-based, cross-sectional surveys (2005 National Health Interview Survey [NHIS]; 2005 Health Information National Trends Survey [HINTS]), and one primary care clinic-based survey comprising individuals from low-income communities. Analyses included 31,202 (NHIS), 1,937 (HINTS), and 769 (clinic) individuals.
Measures
Five items assessed perceived risk of colorectal cancer. Four of the items differed in format and/or response scale: comparative risk (NHIS, HINTS); absolute risk (HINTS, clinic); and "likelihood" and "chance" response scales (clinic). Only the clinic-based survey included an explicit DK response option.
Results
“Don't know” responding was 6.9% (NHIS), 7.5% (HINTS-comparative), and 8.7% (HINTS-absolute). “Don't know” responding was 49.1% and 69.3% for the “chance” and “likely” response options (clinic). Correlates of DK responding were characteristics generally associated with disparities (e.g., low education), but the pattern of results varied among samples, question formats, and response scales.
Limitations
The surveys were developed independently and employed different methodologies and items. Consequently, the results were not directly comparable. There may be multiple explanations for differences in the magnitude and characteristics of DK responding.
Conclusions
“Don't know” responding is more prevalent in populations affected by health disparities. Either not assessing or not analyzing DK responses could further disenfranchise these populations and negatively affect the validity of research and the efficacy of interventions seeking to eliminate health disparities.
doi:10.1177/0272989X12464435
PMCID: PMC3613223  PMID: 23468476
Risk perception; measurement; colorectal cancer; item response; disparities
