1.  Development and validation of a predictive model of acute glucose response to exercise in individuals with type 2 diabetes 
Background
Our purpose was to develop and test a predictive model of the acute glucose response to exercise in individuals with type 2 diabetes.
Design and methods
Data from three previous exercise studies (56 subjects, 488 exercise sessions) were combined and used as a development dataset. A mixed-effects Least Absolute Shrinkage and Selection Operator (LASSO) was used to select among 12 potential predictors. Tests of the relative importance of each predictor were conducted using the Lindeman, Merenda and Gold (LMG) algorithm. Model structure was tested using likelihood ratio tests. Model accuracy in the development dataset was assessed by leave-one-out cross-validation.
Prospectively captured data (47 individuals, 436 sessions) were used as a test dataset. Model accuracy was calculated as the percentage of predictions within measurement error. Overall model utility was assessed as the number of subjects with ≤1 model error after the third exercise session. Model accuracy across individuals was assessed graphically. In a post-hoc analysis, a mixed-effects logistic regression tested the association of individuals’ attributes with model error.
Results
Minutes since eating, a non-linear transformation of minutes since eating, post-prandial state, hemoglobin A1c, sulfonylurea status, age, and exercise session number were identified as novel predictors. Minutes since eating, its non-linear transformation, and hemoglobin A1c combined to account for 19.6% of the variance in glucose response. Sulfonylurea status, age, and exercise session number each accounted for <1.0% of the variance. In the development dataset, a model with random slopes for pre-exercise glucose improved fit over a model with random intercepts only (likelihood ratio 34.5, p < 0.001). Cross-validated model accuracy was 83.3%.
In the test dataset, overall accuracy was 80.2%. The model was more accurate in pre-prandial than in post-prandial exercise (83.6% vs. 74.5% accuracy, respectively). 31 of 47 subjects had ≤1 model error after the third exercise session. Model error varied across individuals and was weakly associated with within-subject variability in pre-exercise glucose (odds ratio 1.49, 95% confidence interval 1.23-1.75).
Conclusions
The preliminary development and test of a predictive model of acute glucose response to exercise is presented. Further work to improve this model is discussed.
doi:10.1186/1758-5996-5-33
PMCID: PMC3701573  PMID: 23816355
Prediction; Exercise; Type 2 Diabetes; Blood Glucose
2.  Sample size requirements to detect an intervention by time interaction in longitudinal cluster randomized clinical trials with random slopes 
In longitudinal cluster randomized clinical trials (cluster-RCT), subjects are nested within a higher level unit such as clinics and are evaluated for outcome repeatedly over the study period. This study design results in a three level hierarchical data structure. When the primary goal is to test the hypothesis that an intervention has an effect on the rate of change in the outcome over time and the between-subject variation in slopes is substantial, the subject-specific slopes are often modeled as random coefficients in a mixed-effects linear model. In this paper, we propose approaches for determining the sample size at each level of a three-level hierarchical trial design, based on ordinary least squares (OLS) estimates, for detecting a difference in mean slopes between two intervention groups when the slopes are modeled as random. Notably, the sample size is not a function of the variances of either the second or the third level random intercepts and depends on the number of second and third level data units only through their product. Simulation results indicate that the OLS-based power and sample sizes are virtually identical to the empirical maximum likelihood based estimates even with varying cluster sizes. Sample sizes for random versus fixed slope models are also compared. The effects of the variance of the random slope on the sample size determinations are shown to be enormous. Therefore, when between-subject variations in outcome trends are anticipated to be significant, sample size determinations based on a fixed slope model can result in a seriously underpowered study.
doi:10.1016/j.csda.2012.11.016
PMCID: PMC3580878  PMID: 23459110
longitudinal cluster RCT; three level data; power; sample size; random slope; effect size
3.  Long-term graft function changes in kidney transplant recipients 
NDT Plus  2010;3(Suppl 2):ii2-ii8.
Background. Monitoring changes in glomerular filtration rate (GFR) is the recommended method for assessing the progression of kidney disease. The aim of this study was to assess the decline of graft function defined by the annualized change in GFR and the factors which affect it.
Methods. Four thousand four hundred and eighty-eight patients, transplanted during the years 1990, 1994, 1998 and 2002 in 34 centres in Spain, with allograft survival of at least 1 year, were included in the study. GFR was estimated using the four-variable equation of the Modification of Diet in Renal Disease (MDRD) study. A linear mixed effects model was applied to determine the relation between the covariates and the annualized change in GFR after transplantation.
Results. The average GFR at 12 months was 51.4 ± 18.9 mL/min/1.73 m2; most patients were in stage 3 of chronic kidney disease classification. The average patient slope, calculated in a linear model with varying-intercept and varying-slope without covariates, was −1.12 ± 0.05 mL/min/year (slope ± standard error). Some variables were related to both the 12-month GFR (intercept) and the slope: recipient gender, hepatitis C virus (HCV) status, estimated GFR (eGFR) at 3 months and proteinuria at 12 months. Some variables were only related to the slope of eGFR: time on dialysis, primary renal disease and immunosuppression. Others affected only the 12-month GFR: donor age, delayed graft function, acute rejection and systolic blood pressure at 12 months. Higher graft function at 3 months had a negative impact on the GFR slope. Cyclosporine-based immunosuppression had a less favourable effect on the rates of change in allograft function.
Conclusions. There was a slow decline in GFR. Poor graft function was not associated with an increased rate of decline of allograft function. Immunosuppression with cyclosporine displayed the worst declining GFR rate.
doi:10.1093/ndtplus/sfq063
PMCID: PMC2875040  PMID: 20508857
glomerular filtration rate; immunosuppression; kidney transplantation
4.  Detection of resistance mutations and CD4 slopes in individuals experiencing sustained virological failure 
Journal of the International AIDS Society  2014;17(4 Suppl 3):19737.
Introduction
Several resistance mutations have been shown to affect viral fitness, and the presence of certain mutations might result in clinical benefit for patients kept on a virologically failing regimen due to an exhaustion of drug options. We sought to quantify the effect of resistance mutations on CD4 slopes in patients undergoing episodes of viral failure.
Materials and Methods
Patients from the EuroSIDA and UK CHIC cohorts undergoing at least one episode of virological failure (>3 consecutive RNA measurements >500 on ART) with at least three CD4 measurements and a resistance test during the episode were included. Mutations were identified using the IAS-US (2013) list, and were presumed to be present from detection until the end of an episode. Multivariable linear mixed models with a random intercept and slope adjusted for age, baseline CD4 count, hepatitis C, drug type, RNA (log-scale), risk group and subtype were used to estimate CD4 slopes. Individual mutations with a population prevalence of >10% were tested for their effect on the CD4 slope.
Results
A total of 2731 patients, experiencing a median of 1 (range 1–4) episode, were included in this analysis. The prevalence of any resistance per episode was 88.4%; NNRTI resistance was most common (78.5%). Overall, CD4 counts declined by 17.1 (−19.7; −14.5) cells per year; this decline was less marked with partial viral suppression (current HIV RNA more than 1.5 log below the setpoint; p=0.01). In multivariable models adjusting for viral load, CD4 decline was slower during episodes with detected resistance than during episodes without detected resistance (21.0 cells/year less, 95% CI 11.75–30.31, p<0.001). Among those with more than one resistance mutation, there was only weak evidence that class-specific mutations had any effect on the CD4 slope (Table 1). The effects of individual mutations (including M184V) were explored, but none were significantly associated with the CD4 slope; for these comparisons, the Bonferroni-corrected significance level was 0.003.
Conclusions
In our study population, detected resistance was associated with slightly less steep CD4 declines. This may be due to a biological effect of resistance on CD4 slopes, or other unmeasured factors such as poor adherence among individuals without resistance. Among individuals with detected drug resistance, we found no evidence suggesting that the presence of individual mutations was associated with beneficial CD4 slope changes.
doi:10.7448/IAS.17.4.19737
PMCID: PMC4225350  PMID: 25397482
5.  Neighborhood influences on the association between maternal age and birth weight: A multilevel investigation of age-related disparities in health 
Social science & medicine (1982)  2008;66(9):2048-2060.
It was hypothesized that the relationship between maternal age and infant birthweight varies significantly across neighborhoods and that such variation can be predicted by neighborhood characteristics. We analyzed 229,613 singleton births to mothers aged 20–45 in Chicago, USA in 1997–2002. Random coefficient models were used to estimate the between-neighborhood variation in age-birthweight slopes, and both intercepts-and-slopes-as-outcomes models were used to evaluate area-level predictors of such variation.
The crude maternal age-birthweight slopes for neighborhoods ranged from a decrease of 17 grams to an increase of 10 grams per year of maternal age. Adjustment for individual-level covariates reduced but did not eliminate this between-neighborhood variation. Concentrated poverty was a significant neighborhood-level predictor of the age-birthweight slope, explaining 44.4 percent of the between-neighborhood variation in slopes. Neighborhoods of higher economic disadvantage showed a more negative age-birthweight slope. The findings support the hypothesis that the relationship between maternal age and birthweight varies between neighborhoods. Indicators of neighborhood disadvantage help to explain such differences.
doi:10.1016/j.socscimed.2008.01.027
PMCID: PMC2794800  PMID: 18313187
birth weight; maternal age; poverty; social environment; socioeconomic factors; multi-level modeling
6.  Comparison of the variability of the annual rates of change in FEV1 determined from serial measurements of the pre- versus post-bronchodilator FEV1 over 5 years in mild to moderate COPD: Results of the lung health study 
Respiratory Research  2012;13(1):70.
Background
The impact of interventions on the progressive course of COPD is currently assessed by the slope of the annual decline in FEV1 determined from serial measurements of the post-, in preference to the pre-, bronchodilator FEV1. We therefore compared the yearly slope and the variability of the slope of the pre- versus the post-bronchodilator FEV1 in men and women with mild to moderate COPD who participated in the 5-year Lung Health Study (LHS).
Methods
Data were analyzed from 4484 of the 5887 LHS participants who had measurements of pre- and post-bronchodilator FEV1 at baseline (screening visit 2) and all five annual visits. The annual rate of decline in FEV1 (±SE) measured pre- and post-bronchodilator from the first to the fifth annual visit was estimated separately using a random coefficient model adjusted for relevant covariates. Analyses were performed separately within each of the three randomized intervention groups. In addition, individual rates of decline in pre- and post-bronchodilator FEV1 were also determined for each participant. Furthermore, sample sizes were estimated for determining the significance of differences in slopes of decline between different interventions using pre- versus post-bronchodilator measurements.
Results
Within each intervention group, mean adjusted and unadjusted slope estimates were slightly higher for the pre- than the post-bronchodilator FEV1 (range of differences 2.6-5.2 ml/yr) and the standard errors around these estimates were only minimally higher for the pre- versus the post-bronchodilator FEV1 (range 0.05-0.11 ml/yr). Conversely, the standard deviations of the mean FEV1 determined at each annual visit were consistently slightly higher (range of differences 0.011 to 0.035 L) for the post- compared to the pre-bronchodilator FEV1. Within each group, the proportion of individual participants with a statistically significant slope was similar (varying by only 1.4 to 2.7%) comparing the estimates from the pre- versus the post-bronchodilator FEV1. However, sample size estimates were slightly higher when the pre- compared to the post-bronchodilator value was used to determine the significance of specified differences in slopes between interventions.
Conclusion
Serial measurements of the pre-bronchodilator FEV1 are generally sufficient for comparing the impact of different interventions on the annual rate of change in FEV1.
doi:10.1186/1465-9921-13-70
PMCID: PMC3439318  PMID: 22894725
FEV1 decline; Chronic obstructive pulmonary disease (COPD); Lung health study; Pre-bronchodilator; Post-bronchodilator
7.  Translational methods in biostatistics: linear mixed effect regression models of alcohol consumption and HIV disease progression over time 
Longitudinal studies are helpful in understanding how subtle associations between factors of interest change over time. Our goal is to apply statistical methods which are appropriate for analyzing longitudinal data to a repeated measures epidemiological study as a tutorial in the appropriate use and interpretation of random effects models. To motivate their use, we study the association of alcohol consumption on markers of HIV disease progression in an observational cohort. To make valid inferences, the association among measurements correlated within a subject must be taken into account.
We describe a linear mixed effects regression framework that accounts for the clustering of longitudinal data and that can be fit using standard statistical software. We apply the linear mixed effects model to a previously published dataset of HIV infected individuals with a history of alcohol problems who are receiving HAART (n = 197). The researchers were interested in determining the effect of alcohol use on HIV disease progression over time. Fitting a linear mixed effects multiple regression model with a random intercept and random slope for each subject accounts for the association of observations within subjects and yields parameters interpretable as in ordinary multiple regression. A significant interaction between alcohol use and adherence to HAART is found: subjects who use alcohol and are not fully adherent to their HIV medications had higher log RNA (ribonucleic acid) viral load levels than fully adherent non-drinkers, fully adherent alcohol users, and non-drinkers who were not fully adherent.
Longitudinal studies are increasingly common in epidemiological research. Software routines that account for correlation between repeated measures using linear mixed effects methods are now generally available and straightforward to utilize. These models allow the relaxation of assumptions needed for approaches such as repeated measures ANOVA, and should be routinely incorporated into the analysis of cohort studies.
doi:10.1186/1742-5573-4-8
PMCID: PMC2147003  PMID: 17880699
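A random-intercept, random-slope model of the kind this tutorial describes is conventionally written as follows (generic notation for subject i at visit j, not the paper's exact specification):

```latex
y_{ij} = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\, t_{ij} + \beta_2 x_{ij} + \varepsilon_{ij},
\qquad (b_{0i}, b_{1i})^{\top} \sim \mathcal{N}(0, \Sigma),
\qquad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)
```

The fixed effects (beta) are interpreted as in ordinary multiple regression, while the subject-specific deviations (b) absorb the within-subject correlation of the repeated measures.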
8.  For debate: substituting placebo controls in long-term Alzheimer's prevention trials 
Introduction
Novel compounds with the potential to attenuate or stop the progression of Alzheimer's disease (AD) from its presymptomatic stage to dementia are being tested in humans. The study design commonly used is the long-term randomized, placebo-controlled trial (RPCT), meaning that many patients will receive placebo for 18 months or longer. It is ethically problematic to expose presymptomatic AD patients, who by definition are at risk of developing dementia, to prolonged placebo treatment. As an alternative to long-term RPCTs we propose a novel clinical study design, termed the placebo group simulation approach (PGSA), which uses mathematical models to forecast outcomes of presymptomatic AD patients from their own baseline data. Forecasted outcomes are compared with outcomes observed on candidate drugs, thus replacing a concomitant placebo group.
Methods
First models were constructed using mild cognitive impairment (MCI) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. One outcome is the Alzheimer Disease Assessment Scale - cognitive subscale (ADAScog) score after 24 months, predicted in a linear regression model; the other is the trajectory over 36 months of a composite neuropsychological test score (Neuro-Psychological Battery (NP-Batt)), using a mixed model. Demographics and clinical, biological and neuropsychological baseline values were tested as potential predictors in both models.
Results
ADAScog scores after 24 months are predicted from gender, obesity, Functional Assessment Questionnaire (FAQ) and baseline scores of Mini-Mental State Examination, ADAScog and NP-Batt with an R2 of 0.63 and a residual standard deviation of 0.67, allowing reasonably precise estimates of sample means. The model of the NP-Batt trajectory has random intercepts and slopes and fixed effects for body mass index, time, apolipoprotein E4, age, FAQ, baseline scores of ADAScog and NP-Batt, and four interaction terms. Estimates of the residual standard deviation range from 0.3 to 0.5 on a standard normal scale. If novel drug candidates are expected to diminish the negative slope of scores with time, a change of 0.04 per year could be detected in samples of 400 with a power of about 80%.
Conclusions
First PGSA models derived from ADNI MCI data allow prediction of cognitive endpoints and trajectories that correspond well with real observed values. Corroboration of these models with data from other observational studies is ongoing. It is suggested that the PGSA may complement RPCT designs in forthcoming long-term drug studies with presymptomatic AD individuals.
doi:10.1186/alzrt68
PMCID: PMC3226271  PMID: 21418632
9.  Multilevel Modeling of a Clustered Continuous Outcome 
Nursing research  2005;54(6):406-413.
Background
Multilevel models were designed to analyze data generated from a nested structure (e.g., nurses within hospitals) because conventional linear regression models underestimate standard errors and, in turn, overestimate test statistics.
Objectives
To introduce 2 types of multilevel models, the random intercept model and the random coefficient model, to describe the correlation among observations within a cluster, and to demonstrate how to identify the superior model.
Method
The conceptual and mathematical bases for the 2 multilevel model types are presented. Intraclass correlation is defined and assessment of model fit is detailed. An empirical example is presented in which average work hours per week and burnout are analyzed using data from 4,320 staff nurses clustered in 19 hospitals.
Results
Average work hours were positively associated with nurse burnout. The multilevel models corrected the problem of underestimated standard errors in conventional linear regression models. Graphs displaying the hospital-level differences illustrated the 2 multilevel model types. Although the multilevel models corrected the underestimation of standard errors, the results did not differ substantively for the conventional or the 2 multilevel models. The intraclass correlation coefficient was .044, indicating that the extent of shared variance among nurses in a hospital was low. The random intercept model fit the data better than did the random coefficient model.
Conclusions
Multilevel models provide a more accurate and comprehensive description of relationships in clustered data than do conventional models, by correcting underestimated standard errors, by estimating components of variance at several levels, and by estimating cluster-specific intercepts and slopes.
PMCID: PMC1540459  PMID: 16317362
clustered data; hierarchical structure; multilevel models
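The intraclass correlation reported above can be estimated from one-way ANOVA mean squares. A standard-library sketch on simulated balanced data (cluster counts and variance components are invented; the true ICC here is 0.25 / 1.25 = 0.2):

```python
import random

random.seed(7)
N_CLUSTERS, K = 50, 30        # e.g. hospitals and nurses per hospital
SIGMA_B, SIGMA_W = 0.5, 1.0   # true between/within SDs -> ICC = 0.2

# Simulate a balanced clustered outcome: each cluster gets a random
# intercept, each member adds independent noise around it.
clusters = []
for _ in range(N_CLUSTERS):
    u = random.gauss(0, SIGMA_B)
    clusters.append([u + random.gauss(0, SIGMA_W) for _ in range(K)])

def icc_oneway(groups):
    """ICC(1) from one-way ANOVA mean squares (balanced design)."""
    k = len(groups[0])
    n = len(groups)
    grand = sum(sum(g) for g in groups) / (n * k)
    msb = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((x - sum(g) / k) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

print(icc_oneway(clusters))
```

A small ICC, as in the nursing example (.044), means little shared variance within clusters, yet the clustering still inflates standard errors if ignored.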
10.  Comparing methods to estimate treatment effects on a continuous outcome in multicentre randomized controlled trials: A simulation study 
Background
Multicentre randomized controlled trials (RCTs) routinely use randomization and analysis stratified by centre to control for differences between centres and to improve precision. No consensus has been reached on how to best analyze correlated continuous outcomes in such settings. Our objective was to investigate the properties of commonly used statistical models at various levels of clustering in the context of multicentre RCTs.
Methods
Assuming no treatment by centre interaction, we compared six methods (ignoring centre effects, including centres as fixed effects, including centres as random effects, generalized estimating equation (GEE), and fixed- and random-effects centre-level analysis) to analyze continuous outcomes in multicentre RCTs using simulations over a wide spectrum of intraclass correlation (ICC) values, and varying numbers of centres and centre size. The performance of models was evaluated in terms of bias, precision, mean squared error of the point estimator of treatment effect, empirical coverage of the 95% confidence interval, and statistical power of the procedure.
Results
While all methods yielded unbiased estimates of treatment effect, ignoring centres led to inflation of the standard error and loss of statistical power when within-centre correlation was present. The mixed-effects model was the most efficient and attained nominal coverage of 95% and 90% power in almost all scenarios. The fixed-effects model was less precise when the number of centres was large and treatment allocation was subject to chance imbalance within centres. The GEE approach underestimated the standard error of the treatment effect when the number of centres was small. The two centre-level models led to more variable point estimates and relatively low interval coverage or statistical power, depending on whether or not heterogeneity of treatment contrasts was considered in the analysis.
Conclusions
All six models produced unbiased estimates of treatment effect in the context of multicentre trials. Adjusting for centre as a random intercept led to the most efficient treatment effect estimation across all simulations under the normality assumption, when there was no treatment by centre interaction.
doi:10.1186/1471-2288-11-21
PMCID: PMC3056845  PMID: 21338524
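The efficiency cost of ignoring clustering is often summarized by the design effect, a standard survey-statistics result rather than something taken from this paper. A minimal sketch with assumed cluster size and ICC:

```python
# Design effect: variance inflation from analysing clustered data as if
# all observations were independent (standard result: 1 + (m - 1) * ICC).
def design_effect(cluster_size, icc):
    return 1 + (cluster_size - 1) * icc

# Effective sample size of 20 centres of 50 patients each at ICC = 0.05
# (numbers chosen for illustration):
n_total = 20 * 50
deff = design_effect(50, 0.05)
print(n_total, deff, n_total / deff)
```

Even a modest ICC shrinks the effective sample size substantially, which is why the simulation finds power losses when centre effects are ignored.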
11.  Estimating Energy Expenditure from Heart Rate in Older Adults: A Case for Calibration 
PLoS ONE  2014;9(4):e93520.
Background
Accurate measurement of free-living energy expenditure is vital to understanding changes in energy metabolism with aging. The efficacy of heart rate as a surrogate for energy expenditure is rooted in the assumption of a linear function between heart rate and energy expenditure, but its validity and reliability in older adults remains unclear.
Objective
To assess the validity and reliability of the linear function between heart rate and energy expenditure in older adults using different levels of calibration.
Design
Heart rate and energy expenditure were assessed across five levels of exertion in 290 adults participating in the Baltimore Longitudinal Study of Aging. Correlation and random effects regression analyses assessed the linearity of the relationship between heart rate and energy expenditure and cross-validation models assessed predictive performance.
Results
Heart rate and energy expenditure were highly correlated (r = 0.98) and linearly related regardless of age or sex. Intra-person variability was low but inter-person variability was high, with substantial heterogeneity of the random intercept (s.d. = 0.372) despite similar slopes. Cross-validation models indicated that individual calibration data substantially improve the accuracy of energy expenditure predictions from heart rate, reducing the potential for considerable measurement bias. Although using five calibration measures provided the greatest reduction in the standard deviation of prediction errors (1.08 kcals/min), substantial improvement was also noted with two (0.75 kcals/min).
Conclusion
These findings indicate standard regression equations may be used to make population-level inferences when estimating energy expenditure from heart rate in older adults but caution should be exercised when making inferences at the individual level without proper calibration.
doi:10.1371/journal.pone.0093520
PMCID: PMC4005766  PMID: 24787146
12.  Segregation and linkage analysis for longitudinal measurements of a quantitative trait 
BMC Genetics  2003;4(Suppl 1):S21.
We present a method for using slopes and intercepts from a linear regression of a quantitative trait as outcomes in segregation and linkage analyses. We apply the method to the analysis of longitudinal systolic blood pressure (SBP) data from the Framingham Heart Study. A first-stage linear model was fit to each subject's SBP measurements to estimate both their slope over time and an intercept, the latter scaled to represent the mean SBP at the average observed age (53.7 years). The subject-specific intercepts and slopes were then analyzed using segregation and linkage analysis. We describe a method for using the standard errors of the first-stage intercepts and slopes as weights in the genetic analyses. For the intercepts, we found significant evidence of a Mendelian gene in segregation analysis and suggestive linkage results (with LOD scores ≥ 1.5) for specific markers on chromosomes 1, 3, 5, 9, 10, and 17. For the slopes, however, the data did not support a Mendelian model, and thus no formal linkage analyses were conducted.
doi:10.1186/1471-2156-4-S1-S21
PMCID: PMC1866456  PMID: 14975089
13.  Echocardiographic Indices Do Not Reliably Track Changes in Left-Sided Filling Pressure in Healthy Subjects or Patients with Heart Failure with Preserved Ejection Fraction 
Background
In select patient populations, Doppler echocardiographic indices may be used to estimate left sided filling pressures. It is not known, however, whether changes in these indices track changes in left-sided filling pressures within individual healthy subjects or patients with heart failure with preserved ejection fraction (HFpEF). This knowledge is important as it would support, or refute, the serial use of these indices to estimate changes in filling pressures associated with the titration of medical therapy in patients with heart failure.
Methods and Results
Forty-seven volunteers were enrolled: 11 highly screened elderly outpatients with a clear diagnosis of HFpEF, 24 healthy elderly subjects, and 12 healthy young subjects. Each patient underwent right heart catheterization with simultaneous transthoracic echocardiography. Pulmonary capillary wedge pressure (PCWP) and key echo indices (E/e’ and E/Vp) were measured at two baselines and during four preload-altering maneuvers: lower body negative pressure (LBNP) -15 mmHg; LBNP -30 mmHg; rapid saline infusion of 10-15 ml/kg; and rapid saline infusion of 20-30 ml/kg. A random coefficient mixed model regression of PCWP vs. E/e’ and of PCWP vs. E/Vp was performed for: 1) a composite of all data points; and 2) a composite of all data points within each of the three groups. Linear regression analysis was performed for individual subjects. With this protocol, PCWP was manipulated from 0.8 to 28.8 mmHg. For E/e’, the composite random effects mixed model regression was PCWP = 0.58×E/e’ + 7.02 (p<0.001), confirming the weak but significant relationship between these two variables. Individual subject linear regression slopes (range: -6.76 to 11.03) and r2 values (0.00 to 0.94) were highly variable and often very different from those derived for the composite and group regressions. For E/Vp, the composite random coefficient mixed model regression was PCWP = 1.95×E/Vp + 7.48 (p=0.005); once again, individual subject linear regression slopes (range: -16.42 to 25.39) and r2 values (range: 0.02 to 0.94) were highly variable and often very different from those derived for the composite and group regressions.
Conclusions
Within individual subjects the non-invasive indices E/e’ and E/Vp do not reliably track changes in left-sided filling pressures as these pressures vary, precluding the use of these techniques in research studies with healthy volunteers or the titration of medical therapy in patients with HFpEF.
doi:10.1161/CIRCIMAGING.110.960575
PMCID: PMC3205913  PMID: 21788358
echocardiography; pressure; ultrasonics; Doppler; heart failure
14.  Between-centre differences and treatment effects in randomized controlled trials: A case study in traumatic brain injury 
Trials  2011;12:201.
Background
In Traumatic Brain Injury (TBI), large between-centre differences in outcome exist and many clinicians believe that such differences influence estimation of the treatment effect in randomized controlled trial (RCTs). The aim of this study was to assess the influence of between-centre differences in outcome on the estimated treatment effect in a large RCT in TBI.
Methods
We used data from the MRC CRASH trial on the efficacy of corticosteroid infusion in patients with TBI. We analyzed the effect of the treatment on 14 day mortality with fixed effect logistic regression. Next we used random effects logistic regression with a random intercept to estimate the treatment effect taking into account between-centre differences in outcome. Between-centre differences in outcome were expressed with a 95% range of odds ratios (OR) for centres compared to the average, based on the variance of the random effects (tau2). A random effects logistic regression model with random slopes was used to allow the treatment effect to vary by centre. The variation in treatment effect between the centres was expressed in a 95% range of the estimated treatment ORs.
Results
In 9978 patients from 237 centres, 14-day mortality was 19.5%. Mortality was higher in the treatment group (OR = 1.22, p = 0.00010). Using a random effects model showed large between-centre differences in outcome (95% range of centre effects: 0.27-3.71), but did not substantially change the estimated treatment effect (OR = 1.24, p = 0.00003). There was limited, although statistically significant, between-centre variation in the treatment effect (OR = 1.22, 95% treatment OR range: 1.17-1.26).
Conclusion
Large between-centre differences in outcome do not necessarily affect the estimated treatment effect in RCTs, in contrast to current beliefs in the clinical area of TBI.
doi:10.1186/1745-6215-12-201
PMCID: PMC3170218  PMID: 21867540
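The reported 95% range of centre effects follows directly from the standard deviation (tau) of the random intercepts on the log-odds scale: centres roughly span exp(±1.96·tau) around the average. The tau below is back-calculated from the published 0.27-3.71 range, so it is illustrative rather than a value taken from the paper:

```python
import math

def or_range_95(tau):
    """95% range of centre odds ratios implied by a random-intercept SD tau
    on the log-odds scale (normal random effects assumed)."""
    half_width = 1.96 * tau
    return math.exp(-half_width), math.exp(half_width)

# tau ~ 0.67 approximately reproduces the reported 0.27-3.71 range.
lo, hi = or_range_95(0.67)
print(round(lo, 2), round(hi, 2))
```

The same construction applied to the random-slope variance gives the 95% range of treatment ORs across centres (1.17-1.26 here).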
15.  A new approach to analyse longitudinal epidemiological data with an excess of zeros 
Background
Within longitudinal epidemiological research, ‘count’ outcome variables with an excess of zeros frequently occur. Although these outcomes are commonly analysed with a linear mixed model or a Poisson mixed model, a two-part mixed model is better suited to outcome variables with an excess of zeros. Therefore, the objective of this paper was to introduce the relatively ‘new’ method of two-part joint regression modelling in longitudinal data analysis for outcome variables with an excess of zeros, and to compare the performance of this method to current approaches.
Methods
Within an observational longitudinal dataset, we compared three techniques: two ‘standard’ approaches (a linear mixed model and a Poisson mixed model) and a two-part joint mixed model (a binomial/Poisson mixed distribution model), each including random intercepts and random slopes. Model fit indicators and differences between predicted and observed values were used for comparison. The analyses were performed in STATA using the GLLAMM procedure.
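The appeal of the two-part (binomial/Poisson) formulation is that it separates whether any counts occur from how many occur when they do. A minimal NumPy simulation of such a process (all parameter values are illustrative) shows why a single Poisson fit understates the zero fraction:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two-part data-generating process: a Bernoulli "gate" decides whether
# a subject's count process is active; active subjects then draw a
# Poisson count, everyone else contributes a structural zero.
p_active = 0.3                    # probability the count process is active
mu = 2.5                          # Poisson mean for the active part
active = rng.random(n) < p_active
y = np.where(active, rng.poisson(mu, size=n), 0)

# The mean decomposes as E[Y] = p_active * mu, which the two parts of a
# binomial/Poisson mixture model estimate separately.
print(y.mean(), p_active * mu)    # both close to 0.75

# A single Poisson with the same mean would predict exp(-0.75) ~ 47%
# zeros; the two-part process produces far more.
print((y == 0).mean())
```

The excess-zero gap in the last line is exactly what drives the poor fit of the one-part Poisson mixed model reported in the Results.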
Results
Among the random intercept models, the two-part joint mixed model (binomial/Poisson) performed best. Adding random slopes for time to the models changed the sign of the regression coefficient for both the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) and resulted in a much better fit.
Conclusion
This paper showed that a two-part joint mixed model is a more appropriate method to analyse longitudinal data with an excess of zeros compared to a linear mixed model and a Poisson mixed model. However, in a model with random slopes for time a Poisson mixed model also performed remarkably well.
doi:10.1186/1471-2288-13-27
PMCID: PMC3599839  PMID: 23425202
Two-part joint model; Excess of zeros; Count; Mixed modelling; Longitudinal; Statistical methods
16.  Multivariate Longitudinal Modeling of Cognitive Aging 
GeroPsych  2012;25(1):15-24.
We illustrate the use of the parallel latent growth curve model using data from OCTO-Twin. We found a significant intercept-intercept and slope-slope association between processing speed and visuospatial ability. Within-person correlations among the occasion-specific residuals were significant, suggesting that the occasion-specific fluctuations around individuals’ trajectories, after controlling for intraindividual change, are related across the two outcomes. Random and fixed effects for visuospatial ability are reduced when we include structural parameters (directional growth curve model), providing information about changes in visuospatial abilities after controlling for processing speed. We recommend this model to researchers interested in the analysis of multivariate longitudinal change, as it permits decomposition and directly interpretable estimates of association among initial levels, rates of change, and occasion-specific variation.
doi:10.1024/1662-9647/a000051
PMCID: PMC3625423  PMID: 23589712
cognitive aging; longitudinal analysis; growth curve modeling; multivariate analysis
17.  Modeling coding-sequence evolution within the context of residue solvent accessibility 
Background
Protein structure mediates site-specific patterns of sequence divergence. In particular, residues in the core of a protein (solvent-inaccessible residues) tend to be more evolutionarily conserved than residues on the surface (solvent-accessible residues).
Results
Here, we present a model of sequence evolution that explicitly accounts for the relative solvent accessibility of each residue in a protein. Our model is a variant of the Goldman-Yang 1994 (GY94) model in which all model parameters can be functions of the relative solvent accessibility (RSA) of a residue. We apply this model to a data set comprising nearly 600 yeast genes, and find that an evolutionary-rate ratio ω that varies linearly with RSA provides a better model fit than an RSA-independent ω or an ω estimated separately in individual RSA bins. We further show that the branch length t and the transition-transversion ratio κ also vary with RSA. The RSA-dependent GY94 model performs better than an RSA-dependent Muse-Gaut 1994 (MG94) model in which the synonymous and non-synonymous rates are individually linear functions of RSA. Finally, protein core size affects the slope of the linear relationship between ω and RSA, and gene expression level affects both the intercept and the slope.
Conclusions
Structure-aware models of sequence evolution provide a significantly better fit than traditional models that neglect structure. The linear relationship between ω and RSA implies that genes are better characterized by their ω slope and intercept than by just their mean ω.
doi:10.1186/1471-2148-12-179
PMCID: PMC3527230  PMID: 22967129
18.  The Hispanic Americans Baseline Alcohol Survey (HABLAS): Predictive Invariance of Demographic Characteristics on Attitudes towards Alcohol across Hispanic National Groups 
International journal of Hispanic psychology  2010;3(2):https://www.novapublishers.com/catalog/product_info.php?products_id=22273.
This study compares the demographic predictors of items assessing attitudes towards drinking across Hispanic national groups. Data were from the 2006 Hispanic Americans Baseline Alcohol Survey (HABLAS), which used a multistage cluster sample design to interview 5,224 individuals randomly selected from the household population in Miami, New York, Philadelphia, Houston, and Los Angeles. Predictive invariance of demographic predictors of alcohol attitudes over four Hispanic national groups (Puerto Rican, Cuban, Mexican, and South/Central Americans) was examined using multiple-group seemingly unrelated probit regression. The analyses examined whether the influence of various demographic predictors varied across the Hispanic national groups in their regression coefficients, item intercepts, and error correlations.
The hypothesis of predictive invariance was supported. Hispanic groups did not differ in how demographic predictors related to individual attitudinal items (regression slopes were invariant). In addition, the groups did not differ in attitudinal endorsement rates once demographic covariates were taken into account (item intercepts were invariant). Although Hispanic groups have different attitudes about alcohol, the influence of multiple demographic characteristics on alcohol attitudes operates similarly across Hispanic groups. Future models of drinking behavior in adult Hispanics need not posit moderating effects of group on the relation between these background characteristics and attitudes.
PMCID: PMC4219324  PMID: 25379120
Hispanic groups; alcohol; attitudes
19.  Optimum Methadone Compliance Testing 
Executive Summary
Objective
The objective of this analysis was to determine the diagnostic utility of oral fluid testing collected with the Intercept oral fluid collection device.
Clinical Need: Target Population and Condition
Opioids (opiates or narcotics) are a class of drugs derived from the opium poppy plant that typically relieve pain and produce a euphoric feeling. Methadone is a long-acting synthetic opioid used to treat opioid dependence and chronic pain. It prevents symptoms of opioid withdrawal, reduces opioid cravings and blocks the euphoric effects of short-acting opioids such as heroin and morphine. Opioid dependence is associated with harms including an increased risk of exposure to Human Immunodeficiency Virus and Hepatitis C as well as other health, social and psychological crises. The goal of methadone treatment is harm reduction. Treatment with methadone for opioid dependence is often a long-term therapy. The Ontario College of Physicians and Surgeons estimates that there are currently 250 physicians qualified to prescribe methadone, and 15,500 people in methadone maintenance programs across Ontario.
Drug testing is a clinical tool whose purpose is to provide objective, meaningful information that will reinforce positive behavioral changes in patients and guide further treatment needs. Such information includes knowledge of whether the patient is taking their methadone as prescribed and reducing or abstaining from opioid and other drug use. The results of drug testing can be used with behavior modification techniques (contingency management techniques) in which positive reinforcements, such as increased methadone take-home privileges, sustained employment or parole, are granted for drug screens negative for opioid use, and negative reinforcement, including loss of these privileges, is applied for drug screens positive for opioid use.
Body fluids including blood, oral fluid (often referred to as saliva), and urine may contain metabolites and the parent drug of both methadone and drugs of abuse, and so provide a means for drug testing. Compared with blood, which has a window of detection of several hours, urine has a wider window of detection, approximately 1 to 3 days, and is therefore considered more useful than blood for drug testing. Because of this, and because obtaining a urine specimen is relatively easy, urine drug screening is considered the criterion measure (gold standard) for methadone maintenance monitoring. However, 2 main concerns exist with urine specimens: the possibility of sample tampering by the patient and the necessity for observed urine collection. Urine specimens may be tampered with in 3 ways: dilution, adulteration (contamination) with chemicals, and substitution (the patient submits another person's urine specimen). To circumvent sample tampering, supervised collection of urine specimens is a common and recommended practice. However, it has been suggested that this practice may have negative effects, including humiliation experienced by patients and staff, and may discourage patients from staying in treatment. Supervised urine specimen collection may also present an operational problem, as staff must be available to provide same-sex supervision. Oral fluid testing has been proposed as a replacement for urine because it can be collected easily under direct supervision without infringement of privacy and reduces the likelihood of sample tampering. Generally, the results of oral fluid drug testing are similar to urine drug testing, but there are some differences, such as lower concentrations of substances in oral fluid than urine, and some drugs remain detectable for longer in urine than in oral fluid.
The Technology Being Reviewed
The Intercept Oral Specimen Collection Device (OraSure Technologies, Bethlehem, PA) consists of an absorbent pad mounted on a plastic stick. The pad is coated with common salts. The absorbent pad is inserted into the mouth and placed between the cheek and gums for 3 minutes on average (range 2–5 min), after which the collection device is removed from the mouth and the absorbent pad is placed in a small vial containing 0.8 mL of pH-balanced preservative for transport to a laboratory for analysis. It is recommended that the person undergoing oral fluid drug testing have nothing to eat or drink for 10 minutes before the specimen is collected, which reduces the opportunity for adulteration. Likewise, it is recommended that the person be observed for the duration of the collection period to prevent adulteration of the specimen. An average of 0.4 mL of saliva can be collected. The specimen may be stored at 4°C to 37°C and tested within 21 days of collection (or within 6 weeks if frozen).
The oral fluid specimen must be analyzed in a laboratory setting. There is no point-of-care (POC) oral fluid test kit for drugs of abuse (other than for alcohol). In the laboratory, the oral fluid is extracted from the vial after centrifugation and a screening test is completed to eliminate negative specimens. Similar to urinalysis, oral fluid specimens are analyzed first by enzyme immunoassay, with positive specimens sent for confirmatory testing. Cut-off values comparable to those for urinalysis by enzyme immunoassay have been developed for oral fluids.
Review Strategy
 
Research Question
What is the diagnostic utility of the Intercept oral specimen device?
Inclusion criteria:
Studies evaluating paired urine and oral fluid specimens from the same individual with the Intercept oral fluid collection device.
The population studied includes drug users.
Exclusion criteria:
Studies testing for marijuana (THC) only.
Outcomes:
Sensitivity and Specificity of oral fluid testing compared to urinalysis for methadone (methadone metabolite), opiates, cocaine, benzodiazepines, and alcohol.
Quality of the Body of Evidence
The Grading of Recommendations Assessment, Development and Evaluation (GRADE) system was used to evaluate the overall quality of the body of evidence (defined as 1 or more studies) supporting the research questions explored in this systematic review. A description of the GRADE system is reported in Appendix 1.
Summary of Findings
A total of 854 potential citations were retrieved. After review of titles and abstracts, 2 met the inclusion and exclusion criteria. Two other relevant studies were found after corresponding with the author of the 2 studies retrieved from the literature search. Therefore, a total of 4 published studies are included in this analysis. All 4 studies, carried out by the same investigator, meet the definition of a Medical Advisory Secretariat level III study design (a non-randomized study with contemporaneous controls). In each of the studies, paired urine and oral fluid specimens were obtained from drug users. Urine collection was not observed in the studies; however, laboratory tests for pH and creatinine were used to determine the reliability of the specimen. Urine specimens thought to be diluted and unreliable were removed from the evaluation. Urinalysis was used as the criterion measure against which to determine the sensitivity and specificity of oral fluid testing with the Intercept oral fluid device for opiates, benzodiazepines, cocaine and marijuana. Alcohol was not tested in any of the 4 studies. From these 4 studies, the following conclusions were drawn:
The evidence indicates that oral fluid testing with the Intercept oral fluid device has better specificity than sensitivity for opiates, benzodiazepines, cocaine and marijuana.
The sensitivity of oral fluid testing with the Intercept oral fluid device appears to rank, from best to worst: cocaine > benzodiazepines > opiates > marijuana.
The sensitivity and specificity for opiates of the Intercept oral fluid device range from 75 to 90% and from 97 to 100%, respectively.
The consequences of opiate false-negatives by oral fluid testing with the Intercept oral fluid device need to be weighed against the disadvantages of urine testing, including invasion of privacy issues and adulteration and substitution of the urine specimen.
The window of detection is narrower for oral fluid drug testing than for urinalysis; because of this, oral fluid testing may best be applied in situations of suspected frequent drug use. When drug use is thought to be less frequent or remote, urinalysis may offer a wider (24–48 hours more than oral fluid) window of detection.
The narrow window of detection for oral fluid testing may mean more frequent testing is needed compared to urinalysis. This may increase the expense for drug testing in general.
POC oral fluid testing is not yet available and may limit the practical utility of this drug testing methodology. POC urinalysis by immunoassay is available.
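The sensitivity and specificity figures above come from the usual paired 2×2 comparison of oral fluid results against urinalysis as the criterion measure. A minimal sketch (the paired screens below are invented for illustration, not data from the included studies):

```python
def sens_spec(oral, urine):
    """Sensitivity and specificity of oral fluid testing, treating
    urinalysis as the criterion (gold) standard, from paired results."""
    tp = sum(o and u for o, u in zip(oral, urine))          # both positive
    fn = sum((not o) and u for o, u in zip(oral, urine))    # missed by oral fluid
    tn = sum((not o) and (not u) for o, u in zip(oral, urine))
    fp = sum(o and (not u) for o, u in zip(oral, urine))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical paired screens (True = positive):
oral  = [True, True, False, False, False, True, False, False]
urine = [True, True, True,  False, False, True, False, False]
sens, spec = sens_spec(oral, urine)   # 0.75, 1.0
```

The asymmetry in this toy example (one urine-positive specimen missed by oral fluid, no false positives) mirrors the pattern reported above: specificity exceeds sensitivity.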
The possible applications of oral fluid testing may include:
Because of its narrow window of detection compared to urinalysis oral fluid testing may best be used during periods of suspected frequent or recent drug use (within 24 hours of drug testing). This is not to say that oral fluid testing is superior to urinalysis during these time periods.
In situations where an observed urine specimen is difficult to obtain. This may include persons with “shy bladder syndrome” or with other urinary conditions limiting their ability to provide an observed urine specimen.
When the health of the patient would make urine testing unreliable (e.g., renal disease).
As an alternative drug testing method when urine specimen tampering practices are suspected to be affecting the reliability of the urinalysis test.
Possible limiting Factors to Diffusion of Oral Fluid Technology
No oral fluid POC test equivalent to onsite urine dips or a POC analyzer exists, reducing the immediacy of results for patient care.
Currently, physicians get reimbursed directly for POC urinalysis. Oral fluid must be analyzed in a lab setting removing physician reimbursement, which is a source of program funding for many methadone clinics.
Small amount of oral fluid specimen obtained; repeat testing on same sample will be difficult.
Reliability of positive oral fluid methadone (parent drug) results may decrease because of possible contamination of the oral cavity after ingestion of a dose; high methadone levels may therefore not be indicative of compliance with treatment. Oral fluid testing does not yet detect the methadone metabolite.
There currently is no licensed provincial laboratory that analyses oral fluid specimens.
Abbreviations
EDDP: 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine
EIA: enzyme immunoassay
ELISA: enzyme-linked immunosorbent assay
EMIT: enzyme multiplied immunoassay test
GC: gas chromatography
GC/MS: gas chromatography/mass spectrometry
HPLC: high-performance liquid chromatography
LOD: limit of detection
MS: mass spectrometry
MMT: methadone maintenance treatment
OFT: oral fluid testing
PCP: phencyclidine
POCT: point-of-care testing
THC: tetrahydrocannabinol
THC-COOH: 11-nor-delta-9-tetrahydrocannabinol-9-carboxylic acid
UDT: urine drug testing
PMCID: PMC3379523  PMID: 23074492
20.  Hemoglobin A1c and Mean Glucose in Patients With Type 1 Diabetes 
Diabetes Care  2011;34(3):540-544.
OBJECTIVE
To determine the relationship between mean sensor glucose concentrations and hemoglobin A1c (HbA1c) values measured in the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications laboratory at the University of Minnesota in a cohort of subjects with type 1 diabetes from the Juvenile Diabetes Research Foundation continuous glucose monitoring randomized trial.
RESEARCH DESIGN AND METHODS
Near-continuous glucose sensor data (≥4 days/week) were collected for 3 months before a central laboratory–measured HbA1c was performed for 252 subjects aged 8–74 years, the majority of whom had stable HbA1c values (77% within ±0.4% of the patient mean).
RESULTS
The slope (95% CI) for mean sensor glucose concentration (area under the curve) versus a centrally measured HbA1c was 24.4 mg/dL (22.0–26.7) for each 1% change in HbA1c, with an intercept of −16.2 mg/dL (−32.9 to 0.6). Although the slope did not vary with age or sex, there was substantial individual variability, with mean sensor glucose concentrations ranging from 128 to 187 mg/dL for an HbA1c of 6.9–7.1%. The root mean square of the errors between the actual mean sensor glucose concentration versus the value calculated using the regression equation was 14.3 mg/dL, whereas the median absolute difference was 10.1 mg/dL.
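The reported regression line can be applied directly, though, as the authors caution, individual mean glucose varies widely around it (128–187 mg/dL at an HbA1c near 7%). A small sketch using the published slope and intercept:

```python
def mean_glucose_from_hba1c(hba1c_pct: float) -> float:
    """Estimate mean sensor glucose (mg/dL) from HbA1c (%) using the
    reported regression: slope 24.4 mg/dL per 1% HbA1c, intercept
    -16.2 mg/dL. Point estimate only; individual variability is wide."""
    return 24.4 * hba1c_pct - 16.2

estimate = mean_glucose_from_hba1c(7.0)   # ~154.6 mg/dL
```

Given the root-mean-square error of 14.3 mg/dL reported above, such point estimates are best treated as a centre of a wide plausible range rather than a patient-specific value.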
CONCLUSIONS
There is substantial individual variability between the measured versus calculated mean glucose concentrations. Consequently, estimated average glucose concentrations calculated from measured HbA1c values should be used with caution.
doi:10.2337/dc10-1054
PMCID: PMC3041177  PMID: 21266647
21.  Pretreatment CD4 Cell Slope and Progression to AIDS or Death in HIV-Infected Patients Initiating Antiretroviral Therapy—The CASCADE Collaboration: A Collaboration of 23 Cohort Studies 
PLoS Medicine  2010;7(2):e1000239.
Analyzing data from several thousand cohort study participants, Marcel Wolbers and colleagues find that the rate of CD4 T cell decline is not useful in deciding when to start HIV treatment.
Background
CD4 cell count is a strong predictor of the subsequent risk of AIDS or death in HIV-infected patients initiating combination antiretroviral therapy (cART). It is not known whether the rate of CD4 cell decline prior to therapy is related to prognosis and should, therefore, influence the decision on when to initiate cART.
Methods and Findings
We carried out survival analyses of patients from the 23 cohorts of the CASCADE (Concerted Action on SeroConversion to AIDS and Death in Europe) collaboration with a known date of HIV seroconversion and with at least two CD4 measurements prior to initiating cART. For each patient, a pre-cART CD4 slope was estimated using a linear mixed effects model. Our primary outcome was time from initiating cART to a first new AIDS event or death. We included 2,820 treatment-naïve patients initiating cART with a median (interquartile range) pre-cART CD4 cell decline of 61 (46–81) cells/µl per year; 255 patients subsequently experienced a new AIDS event or death and 125 patients died. In an analysis adjusted for established risk factors, the hazard ratio for AIDS or death was 1.01 (95% confidence interval 0.97–1.04) for each 10 cells/µl per year reduction in pre-cART CD4 cell decline. There was also no association between pre-cART CD4 cell slope and survival. Alternative estimates of CD4 cell slope gave similar results. In 1,731 AIDS-free patients with >350 CD4 cells/µl from the pre-cART era, the rate of CD4 cell decline was also not significantly associated with progression to AIDS or death (hazard ratio 0.99, 95% confidence interval 0.94–1.03, for each 10 cells/µl per year reduction in CD4 cell decline).
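The per-patient slope at the heart of this analysis is, in its simplest form, an ordinary least-squares fit of CD4 count on time; the paper's linear mixed-effects model refines this by pooling ("shrinking") estimates across patients. A minimal per-patient sketch (the measurements below are hypothetical):

```python
def cd4_slope(times_years, cd4_counts):
    """Least-squares CD4 slope (cells/µl per year) for one patient.

    A simplified per-patient stand-in for the paper's mixed-effects
    estimate, which additionally borrows strength across patients.
    """
    n = len(times_years)
    mx = sum(times_years) / n
    my = sum(cd4_counts) / n
    num = sum((t - mx) * (c - my) for t, c in zip(times_years, cd4_counts))
    den = sum((t - mx) ** 2 for t in times_years)
    return num / den

# Hypothetical patient: three pre-cART measurements over two years.
slope = cd4_slope([0.0, 1.0, 2.0], [520, 455, 400])   # -60 cells/µl per year
```

A decline of about 60 cells/µl per year is close to the median pre-cART decline reported in the cohort, which makes the null association with outcome all the more striking.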
Conclusions
The CD4 cell slope does not improve the prediction of clinical outcome in patients with a CD4 cell count above 350 cells/µl. Knowledge of the current CD4 cell count is sufficient when deciding whether to initiate cART in asymptomatic patients.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
More than 30 million people are currently infected with the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS). Most people who become infected with HIV do not become ill immediately although some develop a short flu-like illness shortly after infection. This illness is called “seroconversion” illness because it coincides with the appearance of antibodies to HIV in the blood. The next stage of HIV infection has no major symptoms and may last up to 10 years. During this time, HIV slowly destroys immune system cells (including CD4 cells, a type of lymphocyte). Without treatment, the immune system loses the ability to fight off infections by other disease-causing organisms and HIV-positive people then develop so-called opportunistic infections, Kaposi sarcoma (a skin cancer), or non-Hodgkin lymphoma (a cancer of the lymph nodes) that determine the diagnosis of AIDS. Although HIV-positive people used to die within 10 years of infection on average, the development in 1996 of combination antiretroviral therapy (cART; cocktails of powerful antiretroviral drugs) means that, at least for people living in developed countries, HIV/AIDS is now a chronic, treatable condition.
Why Was This Study Done?
The number of CD4 cells in the blood is a strong predictor of the likelihood of AIDS or death in untreated HIV-positive individuals and in people starting cART. Current guidelines recommend, therefore, that cART is started in HIV-positive patients without symptoms when their CD4 cell count drops below a specified cutoff level (typically 350 cells/µl.) In addition, several guidelines suggest that clinicians should also consider cART in symptom-free HIV-positive patients with a CD4 cell count above the cutoff level if their CD4 cell count has rapidly declined. However, it is not actually known whether the rate of CD4 cell decline (so-called “CD4 slope”) before initiating cART is related to a patient's outcome, so should clinicians consider this measurement when deciding whether to initiate cART? In this study, the researchers use data from CASCADE (Concerted Action on SeroConversion to AIDS and Death in Europe), a large collaborative study of 23 groups of HIV-positive individuals whose approximate date of HIV infection is known, to answer this question.
What Did the Researchers Do and Find?
The researchers undertook survival analyses of patients in the CASCADE collaboration for whom at least two CD4 cell counts had been recorded before starting cART. They calculated a pre-cART CD4 cell count slope from these counts and used statistical methods to investigate whether there was an association between the rate of decline in CD4 cell count and the time from initiating cART to the primary outcome—a first new AIDS-defining event or death. 2820 HIV-positive patients initiating cART were included in the study; the average pre-cART CD4 cell decline among them was 61 cells/µl/year. 255 of the patients experienced a new AIDS-related event or died after starting cART but the researchers found no evidence for an association between the primary outcome and the pre-cART CD4 slope or between survival and this slope. In addition, the rate of CD4 cell count decline was not significantly associated with progression to AIDS or death among 1731 HIV-positive, symptom-free patients with CD4 cell counts above 350 cells/µl who were studied before cART was developed.
What Do These Findings Mean?
These findings suggest that knowledge of the rate of CD4 cell count decline will not improve the prediction of clinical outcome in HIV-positive patients with a CD4 cell count above 350 cells/µl. Indeed, the findings show that the rate of CD4 cell decline in individual patients is highly variable over time. Consequently, a rate measured at one time cannot be used to reliably predict a patient's future CD4 cell count. Because this was an observational study, patients with the greatest rate of decline in their CD4 cell count might have received better care than other patients, a possibility that would lessen the effect of the rate of CD4 cell count decline on outcomes. Nevertheless, the findings of this study strongly suggest that knowledge of the current CD4 cell count and an assessment of other established risk factors for progression to AIDS are sufficient when deciding whether to initiate cART in symptom-free HIV-positive patients.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000239.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
HIV InSite has comprehensive information on all aspects of HIV/AIDS, including information on treatments and treatment guidelines
Information is available from Avert, an international AIDS charity, on all aspects of HIV/AIDS, including information on treatments for HIV and AIDS, when to start treatment, and the stages of HIV infection (in English and Spanish)
Information on CASCADE is available
doi:10.1371/journal.pmed.1000239
PMCID: PMC2826377  PMID: 20186270
22.  Ankle taping improves proprioception before and after exercise in young men. 
Ankle sprains are common sports injuries. Inadequate foot position awareness is thought to be the fundamental cause of these injuries. Ankle taping may decrease risk of injury through improving foot position awareness. The benefit of taping is thought to decrease with duration of exercise because of poor tape adherence to human skin. This study was a randomized, crossover, controlled comparison experiment that tested the hypothesis that ankle taping improves foot position awareness before and after exercise. A sample of 24 healthy young blindfolded volunteers, wearing their own athletic shoes, indicated perceived slope direction and estimated slope amplitude when bearing full body weight and standing on a series of blocks. The top slope of the blocks varied between 0 degree and 25 degrees, in 2.5 degrees increments, to orient the plantar surface with respect to the leg toward pronation, supination, plantarflexion, and dorsiflexion, relative to its position on a flat surface. Foot position awareness, which was considered the reciprocal of surface slope estimate error, varied with testing condition, particularly when surface slope was greater than 10 degrees, presumably the most important range considering ankle injuries. In this higher range absolute position error was 4.23 degrees taped, and 5.53 degrees untaped (P < 0.001). Following exercise, in the higher range absolute position error was 2.5% worse when taped and 35.5% worse when untaped (P < 0.001). These data support the hypothesis that ankle taping improves proprioception before and after exercise. They also indicate that foot position awareness declines with exercise. Compared to barefoot data (position error 1.97 degrees), foot position error was 107.5% poorer with athletic footwear when untaped (absolute position error 4.11 degrees), and 58.1% worse when taped (position error 3.13 degrees). 
This suggests that ankle taping partly corrects impaired proprioception caused by modern athletic footwear and exercise. Footwear could be optimized to reduce the incidence of these injuries.
PMCID: PMC1332234  PMID: 8808537
23.  Cross-shift changes in FEV1 in relation to wood dust exposure: the implications of different exposure assessment methods 
Background: Exposure-response analyses in occupational studies rely on the ability to distinguish workers with regard to exposures of interest.
Aims: To evaluate different estimates of current average exposure in an exposure-response analysis on dust exposure and cross-shift decline in FEV1 among woodworkers.
Methods: Personal dust samples (n = 2181) as well as data on lung function parameters were available for 1560 woodworkers from 54 furniture industries. The exposure to wood dust for each worker was calculated in eight different ways using individual measurements, group based exposure estimates, a weighted estimate of individual and group based exposure estimates, and predicted values from mixed models. Exposure-response relations on cross-shift changes in FEV1 and exposure estimates were explored.
Results: A positive exposure-response relation between average dust exposure and cross-shift FEV1 was shown for non-smokers only and appeared to be most pronounced among pine workers. In general, the highest slope and standard error (SE) were found for grouping by a combination of task and factory size, and the lowest slope and SE for estimates based on individual measurements, with the weighted estimate and the predicted values in between. Grouping by quintiles of average exposure for task and factory combinations yielded low slopes and high SEs, despite a high contrast.
Conclusion: For non-smokers, average dust exposure and cross-shift FEV1 were associated in an exposure dependent manner, especially among pine workers. This study confirms the consequences of using different exposure assessment strategies studying exposure-response relations. It is possible to optimise exposure assessment combining information from individual and group based exposure estimates, for instance by applying predicted values from mixed effects models.
doi:10.1136/oem.2003.011601
PMCID: PMC1740672  PMID: 15377768
24.  Regression dilution bias: Tools for correction methods and sample size calculation 
Upsala Journal of Medical Sciences  2012;117(3):279-283.
Background
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study.
Aims and methods
In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate.
Results
The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design.
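The core correction is simple: random error in the risk factor attenuates the observed slope by the reliability ratio of the measurement, so dividing by that ratio (estimated from repeated measurements in the reliability study) restores it. A minimal sketch, with illustrative values rather than the paper's insulin data:

```python
def corrected_slope(observed_slope: float, reliability: float) -> float:
    """Correct a simple linear regression slope for regression dilution bias.

    `reliability` is the reliability ratio
        lambda = var(true) / (var(true) + var(error))
    of the risk-factor measurement. The observed slope equals the true
    slope times lambda, so dividing by lambda undoes the attenuation.
    """
    if not 0 < reliability <= 1:
        raise ValueError("reliability ratio must be in (0, 1]")
    return observed_slope / reliability

# E.g., an observed slope of 0.30 with a reliability ratio of 0.6
# (substantial measurement error) corrects to 0.50.
print(corrected_slope(0.30, 0.6))
```

The same lambda also drives the sample-size question the authors address: the noisier the measurement, the more repeat measurements the reliability study needs to pin lambda down.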
Conclusions
Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. As a consequence, important effects of risk factors measured with large random error may be overlooked.
doi:10.3109/03009734.2012.668143
PMCID: PMC3410287  PMID: 22401135
Correction methods; measurement errors; regression dilution bias; SAS and R programs
25.  Long-Term Asthma Trend Monitoring in New York City: A Mixed Model Approach 
Objective
Show the benefits of using a generalized linear mixed model (GLMM) to examine long-term trends in asthma syndrome data.
Introduction
Over the last decade, the application of syndromic surveillance systems has expanded beyond early event detection to include long-term disease trend monitoring. However, statistical methods employed for analyzing syndromic data tend to focus on early event detection. Generalized linear mixed models (GLMMs) may be a useful statistical framework for examining long-term disease trends because, unlike other models, GLMMs account for clustering common in syndromic data, and GLMMs can assess disease rates at multiple spatial and temporal levels (1). We show the benefits of the GLMM by using a GLMM to estimate asthma syndrome rates in New York City from 2007 to 2012, and to compare high and low asthma rates in Harlem and the Upper East Side (UES) of Manhattan.
Methods
Asthma-related emergency department (ED) visits, together with patient age and ZIP code, were obtained from data reported daily to the NYC Department of Health and Mental Hygiene. Demographic data were obtained from the 2010 US Census. ZIP codes representing high and low asthma rates in Harlem and the UES of Manhattan were chosen for closer inspection. The ratio of weekly asthma syndrome visits to total ED visits was modeled with a Poisson GLMM with week and ZIP code random intercepts (2). Age and ethnicity were adjusted for because of their association with asthma rates (3).
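A minimal sketch of such a model, using simulated counts and statsmodels' variational-Bayes mixed GLM (not the authors' code; the offset for total ED visits and the covariate adjustments are omitted for brevity):

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

rng = np.random.default_rng(0)
n_zip, n_week = 15, 40
zip_eff = rng.normal(0.0, 0.5, n_zip)    # ZIP-code random intercepts
week_eff = rng.normal(0.0, 0.2, n_week)  # weekly random intercepts

# Simulated weekly visit counts for each ZIP code.
rows = [{"visits": rng.poisson(np.exp(1.0 + zip_eff[z] + week_eff[w])),
         "zipcode": z, "week": w}
        for z in range(n_zip) for w in range(n_week)]
df = pd.DataFrame(rows)

# Poisson GLMM with crossed random intercepts for week and ZIP code.
model = PoissonBayesMixedGLM.from_formula(
    "visits ~ 1",
    {"zip": "0 + C(zipcode)", "week": "0 + C(week)"},
    df,
)
result = model.fit_vb()  # variational Bayes fit
```

The fitted variance components then quantify the inter-ZIP and week-to-week variation that a single-level GLM would fold into its residual error.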
Results
The GLMM showed that citywide asthma rates remained stable from 2007 to 2012, but seasonal differences and significant inter-ZIP code variation were present. The GLMM-estimated asthma rate in the Harlem ZIP code (5.83%, 95% CI: 3.65%, 9.49%) was significantly higher than in the UES ZIP code (0.78%, 95% CI: 0.50%, 1.21%). A linear time component added to the GLMM showed no appreciable change over time despite the seasonal fluctuations in asthma rate. GLMM-based asthma rates are shown over time (Figure 1).
Conclusions
GLMMs have several strengths as statistical frameworks for monitoring trends:
- Disease rates can be estimated at multiple spatial and temporal levels.
- Standard error adjustment for clustering in syndromic data allows for accurate statistical assessment of changes over time and differences between subgroups.
- "Strength borrowed" (4) from the aggregated data informs small subgroups and smooths trends.
- Integration of covariate data reduces bias in estimated rates.
GLMMs have previously been suggested for early event detection with syndromic surveillance data (5), but their versatility makes them useful for monitoring long-term disease trends as well. In comparison to GLMMs, standard errors from single-level GLMs do not account for clustering and can lead to inaccurate statistical hypothesis testing. Bayesian hierarchical models (6) share many of the strengths of GLMMs, but are more complicated to fit. In the future, GLMMs could provide a framework for grouping similar ZIP codes based on their model estimates (e.g., seasonal trends and influence on the overall trend) and for analyzing long-term disease trends with syndromic data.
PMCID: PMC3692769
Asthma; Long term trends; Generalized Mixed Models
