Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing whether the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on an NLMM of intravenous drug concentration over time.
PMCID: PMC3969790
random effects; nonlinear mixed models; generalized linear mixed models; SAS; SNP
Funk, Michele Jonsson | Fusco, Jennifer S | Cole, Stephen R | Thomas, James C | Porter, Kholoud | Kaufman, Jay S | Davidian, Marie | White, Alice D | Hartmann, Katherine E | Eron, Joseph J
Background
To estimate the clinical benefit of HAART initiation versus deferral in a given month among patients with CD4 counts <800 cells/µL.
Methods
In this observational cohort study of HIV-1 seroconverters from CASCADE, we constructed monthly sequential nested subcohorts from 1/1996 to 5/2009 including all eligible HAART-naïve, AIDS-free individuals with a CD4 count <800 cells/µL. The primary outcome was time to AIDS or death among those who initiated HAART in the baseline month compared to those who did not, pooled across subcohorts and stratified by CD4. Using inverse-probability-of-treatment-weighted survival curves and Cox proportional hazards models, we estimated the absolute and relative effect of treatment with robust 95% confidence intervals (in parentheses).
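As a rough illustration of the weighting idea only (not the study's actual code; variable names are hypothetical), stabilized inverse-probability-of-treatment weights can be sketched as:

```python
# Illustrative sketch of stabilized inverse-probability-of-treatment weights.
# `treated` is the 0/1 treatment indicator; `propensity` holds estimated
# P(treated = 1 | covariates) for each subject.
def stabilized_weights(treated, propensity):
    p_marginal = sum(treated) / len(treated)  # marginal P(treated = 1)
    weights = []
    for a, e in zip(treated, propensity):
        numerator = p_marginal if a == 1 else 1 - p_marginal
        denominator = e if a == 1 else 1 - e
        weights.append(numerator / denominator)
    return weights
```

When the estimated propensity equals the marginal treatment probability for a subject, that subject's stabilized weight is 1; stabilization keeps the weights from becoming extreme when propensities are near 0 or 1.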
Results
Of 9,455 patients with 52,268 person-years of follow-up, 812 (8.6%) developed AIDS and 544 (5.8%) died. Within CD4 strata of 200–349, 350–499, and 500–799 cells/µL, HAART initiation was associated with adjusted hazard ratios for AIDS/death of 0.59 (0.43,0.81), 0.75 (0.49,1.14), and 1.10 (0.67,1.79), respectively; and with adjusted 3-year cumulative risk differences of −4.8% (−7.0%,−2.6%), −2.9% (−5.0%,−0.9%), and 0.3% (−3.7%,4.2%), respectively. In the analysis of all-cause mortality, HAART initiation was associated with adjusted hazard ratios of 0.71 (0.44,1.15), 0.51 (0.33,0.80) and 1.02 (0.49,2.12), respectively. Numbers needed to treat to prevent one AIDS event or death within 3 years were 21 (14,38) and 34 (20,115) in CD4 strata of 200–349 and 350–499 cells/µL, respectively.
Conclusions
Compared to deferring in a given month, HAART initiation at CD4 counts <500 (but not 500–799) cells/µL was associated with slower disease progression.
doi:10.1001/archinternmed.2011.401
PMCID: PMC3960856
PMID: 21949165
Extensive baseline covariate information is routinely collected on
participants in randomized clinical trials, and it is well-recognized that a
proper covariate-adjusted analysis can improve the efficiency of inference on
the treatment effect. However, such covariate adjustment has engendered
considerable controversy, as post hoc selection of covariates
may involve subjectivity and lead to biased inference, while prior specification
of the adjustment may exclude important variables from consideration.
Accordingly, how to select covariates objectively to gain maximal efficiency is
of broad interest. We propose and study the use of modern variable selection
methods for this purpose in the context of a semiparametric framework, under
which variable selection in modeling the relationship between outcome and
covariates is separated from estimation of the treatment effect, circumventing
the potential for selection bias associated with standard analysis of covariance
methods. We demonstrate that such objective variable selection techniques
combined with this framework can identify key variables and lead to unbiased and
efficient inference on the treatment effect. A critical issue in finite samples
is validity of estimators of uncertainty, such as standard errors and confidence
intervals for the treatment effect. We propose an approach to estimation of
sampling variation of estimated treatment effect and show its superior
performance relative to that of existing methods.
doi:10.1002/sim.5433
PMCID: PMC3855673
PMID: 22733628
covariate adjustment; false selection rate control; oracle property; semiparametric treatment effect estimation; shrinkage methods; variable selection
Summary
Because the number of patients waiting for organ transplants exceeds the number of organs available, a better understanding of how transplantation affects the distribution of residual lifetime is needed to improve organ allocation. However, there has been little work to assess the survival benefit of transplantation from a causal perspective. Previous methods developed to estimate the causal effects of treatment in the presence of time-varying confounders have assumed that treatment assignment was independent across patients, which is not true for organ transplantation. We develop a version of G-estimation that accounts for the fact that treatment assignment is not independent across individuals to estimate the parameters of a structural nested failure time model. We derive the asymptotic properties of our estimator and confirm through simulation studies that our method leads to valid inference of the effect of transplantation on the distribution of residual lifetime. We demonstrate our method on the survival benefit of lung transplantation using data from the United Network for Organ Sharing.
doi:10.1111/biom.12084
PMCID: PMC3865173
PMID: 24128090
Causal Inference; G-Estimation; Lung Transplantation; Martingale Theory; Structural Nested Failure Time Models
Background
Controlled studies investigating risk factors for the common presenting problem of chronic cough in dogs are lacking.
Hypothesis/Objectives
To identify demographic and historical factors associated with chronic cough in dogs, and associations between the characteristics of cough and diagnosis.
Animals
Dogs were patients of an academic internal medicine referral service. Coughing dogs had a duration of cough ≥ 2 months (n=115). Control dogs had presenting problems other than cough (n=104).
Methods
Owners completed written questionnaires. Demographic information and diagnoses were obtained from medical records. Demographic and historical data were compared between coughing and control dogs. Demographic data and exposure to environmental tobacco smoke (ETS) also were compared with hospital accessions and adult smoking rates, respectively. Characteristics of cough were compared among diagnoses.
Results
Most coughing dogs had a diagnosis of large airway disease (n=88; 77%). Tracheobronchomalacia was diagnosed in 59 dogs (51%), including 79% of toy breed dogs. Demographic risk factors included older age, smaller body weight, and being toy breed (p<0.001). No association was found between coughing and month (p=0.239) or season (p=0.414) of presentation. Exposure to ETS was not confirmed to be a risk factor (p=0.243). No historical description of cough was unique to a particular diagnosis.
Conclusions and clinical importance
Associations with age, size, and toy breeds were strong. Tracheobronchomalacia is frequent in dogs with chronic cough, but descriptions of cough should be used cautiously in prioritizing differential diagnoses. The association between exposure to ETS and chronic cough deserves additional study.
doi:10.1111/j.1939-1676.2010.0530.x
PMCID: PMC3852423
PMID: 20492480
respiratory tract; bronchi; tracheal collapse; environmental; tobacco smoke
Longitudinal experiments often involve multiple outcomes measured repeatedly within a set of study participants. While many questions can be answered by modeling the various outcomes separately, some questions can only be answered in a joint analysis of all of them. In this paper, we will present a review of the many approaches proposed in the statistical literature. Four main model families will be presented, discussed and compared. Focus will be on presenting advantages and disadvantages of the different models rather than on the mathematical or computational details.
doi:10.1177/0962280212445834
PMCID: PMC3404254
PMID: 22523185
Mixed models; Random effects; Shared parameters; Marginal models; Conditional models; Latent variables
The vast majority of settings for which frequentist statistical properties are derived assume a fixed, a priori known sample size. Familiar properties then follow, such as the consistency, asymptotic normality, and efficiency of the sample average for the mean parameter, under a wide range of conditions. We are concerned here with the alternative situation in which the sample size is itself a random variable which may depend on the data being collected. Further, the rule governing this may be deterministic or probabilistic. There are many important practical examples of such settings, including missing data, sequential trials, and informative cluster size. It is well known that special issues can arise when evaluating the properties of statistical procedures under such sampling schemes, and much has been written about specific areas. Our aim is to place these various related examples into a single framework derived from the joint modeling of the outcomes and sampling process, and so derive generic results that in turn provide insight, and in some cases practical consequences, for different settings. It is shown that, even in the simplest case of estimating a mean, some of the results appear counter-intuitive. In many examples the sample average may exhibit small sample bias and, even when it is unbiased, may not be optimal. Indeed there may be no minimum variance unbiased estimator for the mean. Such results follow directly from key attributes such as non-ancillarity of the sample size, and incompleteness of the minimal sufficient statistic of the sample size and sample sum. Although our results have direct and obvious implications for estimation following group sequential trials, there are also ramifications for a range of other settings, such as random cluster sizes, censored time-to-event data, and the joint modeling of longitudinal and time-to-event data.
Here we use the simplest group sequential setting to develop and explicate the main results. Some implications for random sample sizes and missing data are also considered. Consequences for other related settings will be considered elsewhere.
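The counter-intuitive small-sample bias of the sample average under a data-dependent sample size can be seen in a toy simulation (an illustrative sketch, not taken from the paper): stop after one standard-normal observation if it is positive, otherwise take a second.

```python
import random

def stopped_mean(rng):
    # Data-dependent sample size: keep one observation if it is positive,
    # otherwise take a second observation and average the two.
    x1 = rng.gauss(0, 1)
    if x1 > 0:
        return x1
    x2 = rng.gauss(0, 1)
    return (x1 + x2) / 2

rng = random.Random(0)
reps = 20000
avg = sum(stopped_mean(rng) for _ in range(reps)) / reps
# The true mean is 0, yet the sample average is biased upward;
# analytically the bias for this rule is 1/(2*sqrt(2*pi)) ≈ 0.20.
```

The stopping rule makes the realized sample size non-ancillary, which is exactly the mechanism the abstract describes.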
doi:10.1177/0962280212445801
PMCID: PMC3404233
PMID: 22514029
Frequentist Inference; Generalized Sample Average; Informative Cluster Size; Joint Modeling; Likelihood Inference; Missing at Random; Random Cluster Size
Summary
A treatment regime is a rule that assigns a treatment, among a set of possible treatments, to a patient as a function of his/her observed characteristics, hence “personalizing” treatment to the patient. The goal is to identify the optimal treatment regime that, if followed by the entire population of patients, would lead to the best outcome on average. Given data from a clinical trial or observational study, for a single treatment decision, the optimal regime can be found by assuming a regression model for the expected outcome conditional on treatment and covariates, where, for a given set of covariates, the optimal treatment is the one that yields the most favorable expected outcome. However, treatment assignment via such a regime is suspect if the regression model is incorrectly specified. Recognizing that, even if misspecified, such a regression model defines a class of regimes, we instead consider finding the optimal regime within such a class by finding the regime that optimizes an estimator of overall population mean outcome. To take into account possible confounding in an observational study and to increase precision, we use a doubly robust augmented inverse probability weighted estimator for this purpose. Simulations and application to data from a breast cancer clinical trial demonstrate the performance of the method.
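As a hedged sketch of the underlying idea (simple inverse-probability weighting rather than the doubly robust augmented estimator the paper actually uses; all names are hypothetical), the mean outcome were everyone to follow a candidate regime can be estimated as:

```python
def ipw_regime_value(y, a, x, propensity, regime):
    # y: outcomes; a: 0/1 treatments received; x: covariates;
    # propensity[i] = estimated P(a = 1 | x[i]); regime maps x to 0/1.
    # Subjects whose received treatment matches the regime's recommendation
    # are weighted by the inverse probability of that match.
    total = 0.0
    for yi, ai, xi, ei in zip(y, a, x, propensity):
        d = regime(xi)
        p_follow = ei if d == 1 else 1 - ei
        if ai == d:  # treatment received agrees with the regime
            total += yi / p_follow
    return total / len(y)
```

Searching over a class of regimes for the one maximizing such a value estimator is the strategy the abstract describes, with the augmented (doubly robust) version substituted for the plain IPW estimator above.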
doi:10.1111/j.1541-0420.2012.01763.x
PMCID: PMC3556998
PMID: 22550953
Doubly robust estimator; Inverse probability weighting; Outcome regression; Personalized medicine; Potential outcomes; Propensity score
Mixed models are commonly used to represent longitudinal or repeated measures data. An additional complication arises when the response is censored, for example, due to limits of quantification of the assay used. While Gaussian random effects are routinely assumed, little work has characterized the consequences of misspecifying the random-effects distribution, nor has a more flexible distribution been studied for censored longitudinal data. We show that, in general, maximum likelihood estimators will not be consistent when the random-effects density is misspecified, and the effect of misspecification is likely to be greatest when the true random-effects density deviates substantially from normality and the number of noncensored observations on each subject is small. We develop a mixed model framework for censored longitudinal data in which the random effects are represented by the flexible seminonparametric density and show how to obtain estimates in SAS procedure NLMIXED. Simulations show that this approach can lead to reduction in bias and increase in efficiency relative to assuming Gaussian random effects. The methods are demonstrated on data from a study of hepatitis C virus.
doi:10.1093/biostatistics/kxr026
PMCID: PMC3276268
PMID: 21914727
Censoring; HCV; HIV; Limit of quantification; Longitudinal data; Random effects
Summary
Studies of clinical characteristics frequently measure covariates with a single observation. This may be a mis-measured version of the “true” phenomenon due to sources of variability like biological fluctuations and device error. Descriptive analyses and outcome models that are based on mis-measured data generally will not reflect the corresponding analyses based on the “true” covariate. Many statistical methods are available to adjust for measurement error. Imputation methods like regression calibration and moment reconstruction are easily implemented but are not always adequate. Sophisticated methods have been proposed for specific applications like density estimation, logistic regression, and survival analysis. However, it is frequently infeasible for an analyst to adjust each analysis separately, especially in preliminary studies where resources are limited. We propose an imputation approach called Moment Adjusted Imputation (MAI) that is flexible and relatively automatic. Like other imputation methods, it can be used to adjust a variety of analyses quickly, and it performs well under a broad range of circumstances. We illustrate the method via simulation and apply it to a study of systolic blood pressure and health outcomes in patients hospitalized with acute heart failure.
doi:10.1111/j.1541-0420.2011.01569.x
PMCID: PMC3208089
PMID: 21385161
Conditional score; Measurement error; Non-linear models; Regression calibration
doi:10.1111/j.1751-5823.2011.00144.x
PMCID: PMC3173780
PMID: 21927532
Summary
A routine challenge is that of making inference on parameters in a statistical model of interest from longitudinal data subject to drop out, which are a special case of the more general setting of monotonely coarsened data. Considerable recent attention has focused on doubly robust estimators, which in this context involve positing models for both the missingness (more generally, coarsening) mechanism and aspects of the distribution of the full data, that have the appealing property of yielding consistent inferences if only one of these models is correctly specified. Doubly robust estimators have been criticized for potentially disastrous performance when both of these models are even only mildly misspecified. We propose a doubly robust estimator applicable in general monotone coarsening problems that achieves comparable or improved performance relative to existing doubly robust methods, which we demonstrate via simulation studies and by application to data from an AIDS clinical trial.
doi:10.1111/j.1541-0420.2010.01476.x
PMCID: PMC3061242
PMID: 20731640
Coarsening at random; Discrete hazard; Dropout; Longitudinal data; Missing at random
Doubly robust estimation combines a form of outcome regression with a model for the exposure (i.e., the propensity score) to estimate the causal effect of an exposure on an outcome. When used individually to estimate a causal effect, both outcome regression and propensity score methods are unbiased only if the statistical model is correctly specified. The doubly robust estimator combines these 2 approaches such that only 1 of the 2 models need be correctly specified to obtain an unbiased effect estimator. In this introduction to doubly robust estimators, the authors present a conceptual overview of doubly robust estimation, a simple worked example, results from a simulation study examining performance of estimated and bootstrapped standard errors, and a discussion of the potential advantages and limitations of this method. The supplementary material for this paper, which is posted on the Journal's Web site (http://aje.oupjournals.org/), includes a demonstration of the doubly robust property (Web Appendix 1) and a description of a SAS macro (SAS Institute, Inc., Cary, North Carolina) for doubly robust estimation, available for download at http://www.unc.edu/∼mfunk/dr/.
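A minimal sketch of the doubly robust form for the mean outcome under treatment (the standard augmented IPW formula; variable names are hypothetical and this is not the SAS macro described above):

```python
def dr_mean_treated(y, a, propensity, outcome_pred):
    # Augmented IPW estimate of E[Y(1)]: combines the propensity score
    # e(X) = propensity[i] with an outcome-regression prediction
    # m1(X) = outcome_pred[i] of the outcome under treatment.
    n = len(y)
    total = 0.0
    for yi, ai, ei, mi in zip(y, a, propensity, outcome_pred):
        total += ai * yi / ei - (ai - ei) / ei * mi
    return total / n
```

If the propensity model is correct, the augmentation term has mean zero; if instead the outcome model is correct, the weighting errors average out — hence only one of the two models need be correctly specified.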
doi:10.1093/aje/kwq439
PMCID: PMC3070495
PMID: 21385832
causal inference; epidemiologic methods; propensity score
The Superior Yield of the New Strategy of Enoxaparin, Revascularization, and GlYcoprotein IIb/IIIa inhibitors (SYNERGY) was a randomized, open-label, multicenter clinical trial comparing 2 anticoagulant drugs on the basis of time-to-event endpoints. In contrast to other studies of these agents, the primary, intent-to-treat analysis did not find evidence of a difference, leading to speculation that premature discontinuation of the study agents by some subjects may have attenuated the apparent treatment effect and thus to interest in inference on the difference in survival distributions were all subjects in the population to follow the assigned regimens, with no discontinuation. Such inference is often attempted via ad hoc analyses that are not based on a formal definition of this treatment effect. We use SYNERGY as a context in which to describe how this effect may be conceptualized and to present a statistical framework in which it may be precisely identified, which leads naturally to inferential methods based on inverse probability weighting.
doi:10.1093/biostatistics/kxq054
PMCID: PMC3062147
PMID: 20797983
Dynamic treatment regime; Inverse probability weighting; Potential outcomes; Proportional hazards model
While marginal models, random-effects models, and conditional models are routinely considered to be the three main modeling families for continuous and discrete repeated measures with linear and generalized linear mean structures, respectively, it is less common to consider non-linear models, let alone frame them within the above taxonomy. In the latter situation, indeed, when considered at all, the focus is often exclusively on random-effects models. In this paper, we consider all three families, exemplify their great flexibility and relative ease of use, and apply them to a simple but illustrative set of data on tree circumference growth of orange trees.
doi:10.1198/tast.2009.07256
PMCID: PMC2774254
PMID: 20160890
Conditional model; Marginal model; Random-effect model; Serial correlation; Transition model
Summary
Considerable recent interest has focused on doubly robust estimators for a population mean response in the presence of incomplete data, which involve models for both the propensity score and the regression of outcome on covariates. The usual doubly robust estimator may yield severely biased inferences if neither of these models is correctly specified and can exhibit nonnegligible bias if the estimated propensity score is close to zero for some observations. We propose alternative doubly robust estimators that achieve comparable or improved performance relative to existing methods, even with some estimated propensity scores close to zero.
doi:10.1093/biomet/asp033
PMCID: PMC2798744
PMID: 20161511
Causal inference; Enhanced propensity score model; Missing at random; No unmeasured confounders; Outcome regression
SUMMARY
We propose a procedure for estimating the survival function of a time-to-event random variable under arbitrary patterns of censoring. The method is predicated on the mild assumption that the distribution of the random variable, and hence the survival function, has a density that lies in a class of ‘smooth’ densities whose elements can be represented by an infinite Hermite series. Truncation of the series yields a ‘parametric’ expression that can well-approximate any plausible survival density, and hence survival function, provided the degree of truncation is suitably chosen. The representation admits a convenient expression for the likelihood for the ‘parameters’ in the approximation under arbitrary censoring/truncation that is straightforward to compute and maximize. A test statistic for comparing two survival functions, which is based on an integrated weighted difference of estimates of each under this representation, is proposed. Via simulation studies and application to a number of data sets, we demonstrate that the approach yields reliable inferences and can result in gains in efficiency over traditional nonparametric methods.
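A sketch of the simplest (degree-one) truncation of such a smooth density — a squared polynomial times the standard normal density, normalized in closed form (this shows the representation only, not the paper's full likelihood machinery):

```python
import math

def snp_density(z, a0, a1):
    # f(z) = (a0 + a1*z)**2 * phi(z) / (a0**2 + a1**2),
    # where phi is the standard normal density. The divisor normalizes f,
    # since under phi: E[1] = 1, E[Z] = 0, and E[Z**2] = 1.
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return (a0 + a1 * z) ** 2 * phi / (a0 ** 2 + a1 ** 2)
```

With a1 = 0 this reduces to the standard normal density; nonzero a1 skews it, and higher-degree truncations add multimodality and heavier tails.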
doi:10.1002/sim.3368
PMCID: PMC2605407
PMID: 18613273
bootstrap; information criteria; integrated weighted difference; interval censoring; seminonparametric density representation; truncation
SUMMARY
There is considerable debate regarding whether and how covariate adjusted analyses should be used in the comparison of treatments in randomized clinical trials. Substantial baseline covariate information is routinely collected in such trials, and one goal of adjustment is to exploit covariates associated with outcome to increase precision of estimation of the treatment effect. However, concerns are routinely raised over the potential for bias when the covariates used are selected post hoc; and the potential for adjustment based on a model of the relationship between outcome, covariates, and treatment to invite a “fishing expedition” for the model leading to the most dramatic effect estimate. By appealing to the theory of semiparametrics, we are led naturally to a characterization of all treatment effect estimators and to principled, practically-feasible methods for covariate adjustment that yield the desired gains in efficiency and that allow covariate relationships to be identified and exploited while circumventing the usual concerns. The methods and strategies for their implementation in practice are presented. Simulation studies and an application to data from an HIV clinical trial demonstrate the performance of the techniques relative to existing methods.
doi:10.1002/sim.3113
PMCID: PMC2562926
PMID: 17960577
baseline variables; clinical trials; covariate adjustment; efficiency; semiparametric theory; variable selection
SUMMARY
We propose a similarity-based regression method to detect associations between traits and multimarker genotypes. The model regresses similarity in traits for pairs of “unrelated” individuals on their haplotype similarities, and detects the significance by a score test for which the limiting distribution is derived. The proposed method allows for covariates, uses phase-independent similarity measures to bypass the need to impute phase information, and is applicable to traits of general types (e.g., quantitative and qualitative traits). We also show that the gene-trait similarity regression is closely connected with random effects haplotype analysis, although commonly they are considered as separate modeling tools. This connection unites the classic haplotype sharing methods with the variance component approaches, which enables direct derivation of analytical properties of the sharing statistics even when the similarity regression model becomes analytically challenging.
doi:10.1111/j.1541-0420.2008.01176.x
PMCID: PMC2748404
PMID: 19210740
Haplotype-based association test; Haplotype sharing; Haplotype similarity
Summary
Joint modeling of a primary response and a longitudinal process via shared random effects is widely used in many areas of application. Likelihood-based inference on joint models requires model specification of the random effects. Inappropriate model specification of random effects can compromise inference. We present methods to diagnose random effect model misspecification of the type that leads to biased inference on joint models. The methods are illustrated via application to simulated data, and by application to data from a study of bone mineral density in perimenopausal women and data from an HIV clinical trial.
doi:10.1111/j.1541-0420.2008.01171.x
PMCID: PMC2748157
PMID: 19173697
Censoring; Random effect; Remeasurement method; SIMEX
The pretest–posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at a prespecified follow-up time (posttest). Interest focuses on the effect of treatments on the change between mean baseline and follow-up response. Missing posttest response for some subjects is routine, and disregarding missing cases can lead to invalid inference. Despite the popularity of this design, a consensus on an appropriate analysis when no data are missing, let alone for taking into account missing follow-up, does not exist. Under a semiparametric perspective on the pretest–posttest model, in which limited distributional assumptions on pretest or posttest response are made, we show how the theory of Robins, Rotnitzky and Zhao may be used to characterize a class of consistent treatment effect estimators and to identify the efficient estimator in the class. We then describe how the theoretical results translate into practice. The development not only shows how a unified framework for inference in this setting emerges from the Robins, Rotnitzky and Zhao theory, but also provides a review and demonstration of the key aspects of this theory in a familiar context. The results are also relevant to the problem of comparing two treatment means with adjustment for baseline covariates.
doi:10.1214/088342305000000151
PMCID: PMC2600547
PMID: 19081743
Analysis of covariance; covariate adjustment; influence function; inverse probability weighting; missing at random
Summary
A general framework for regression analysis of time-to-event data subject to arbitrary patterns of censoring is proposed. The approach is relevant when the analyst is willing to assume that distributions governing model components that are ordinarily left unspecified in popular semiparametric regression models, such as the baseline hazard function in the proportional hazards model, have densities satisfying mild “smoothness” conditions. Densities are approximated by a truncated series expansion that, for fixed degree of truncation, results in a “parametric” representation, which makes likelihood-based inference coupled with adaptive choice of the degree of truncation, and hence flexibility of the model, computationally and conceptually straightforward with data subject to any pattern of censoring. The formulation allows popular models, such as the proportional hazards, proportional odds, and accelerated failure time models, to be placed in a common framework; provides a principled basis for choosing among them; and renders useful extensions of the models straightforward. The utility and performance of the methods are demonstrated via simulations and by application to data from time-to-event studies.
doi:10.1111/j.1541-0420.2007.00928.x
PMCID: PMC2575078
PMID: 17970813
Accelerated failure time model; Heteroscedasticity; Information criteria; Interval censoring; Proportional hazards model; Proportional odds model; Seminonparametric (SNP) density; Time-dependent covariates
Summary
The primary goal of a randomized clinical trial is to make comparisons among two or more treatments. For example, in a two-arm trial with continuous response, the focus may be on the difference in treatment means; with more than two treatments, the comparison may be based on pairwise differences. With binary outcomes, pairwise odds-ratios or log-odds ratios may be used. In general, comparisons may be based on meaningful parameters in a relevant statistical model. Standard analyses for estimation and testing in this context typically are based on the data collected on response and treatment assignment only. In many trials, auxiliary baseline covariate information may also be available, and it is of interest to exploit these data to improve the efficiency of inferences. Taking a semiparametric theory perspective, we propose a broadly-applicable approach to adjustment for auxiliary covariates to achieve more efficient estimators and tests for treatment parameters in the analysis of randomized clinical trials. Simulations and applications demonstrate the performance of the methods.
doi:10.1111/j.1541-0420.2007.00976.x
PMCID: PMC2574960
PMID: 18190618
Covariate adjustment; Hypothesis test; k-arm trial; Kruskal-Wallis test; Log-odds ratio; Longitudinal data; Semiparametric theory
Inference on the association between a primary endpoint and features of longitudinal profiles of a continuous response is of central interest in medical and public health research. Joint models that represent the association through shared dependence of the primary and longitudinal data on random effects are increasingly popular; however, existing inferential methods may be inefficient or sensitive to assumptions on the random effects distribution. We consider a semiparametric joint model that makes only mild assumptions on this distribution and develop likelihood-based inference on the association and distribution, which offers improved performance relative to existing methods that is insensitive to the true random effects distribution. Moreover, the estimated distribution can reveal interesting population features, as we demonstrate for a study of the association between longitudinal hormone levels and bone status in peri-menopausal women.
doi:10.1016/j.csda.2006.10.008
PMCID: PMC2000853
PMID: 18704154
Conditional score; Generalized linear model; Mixed effects model; Pseudo-likelihood; Seminonparametric density