

Comput Stat Data Anal. Author manuscript; available in PMC 2010 May 15.

Published in final edited form as:

Comput Stat Data Anal. 2009 May 15; 53(7): 2605–2616.

doi: 10.1016/j.csda.2008.12.008

PMCID: PMC2720551

NIHMSID: NIHMS105877

Corresponding Author: Elizabeth J. Malloy, Email: malloy@american.edu, 202-885-3614 (o), 202-885-3155 (f)


This article presents an application and a simulation study of model fit criteria for selecting the optimal degree of smoothness for penalized splines in Cox models. The criteria considered were the Akaike information criterion, the corrected AIC, two formulations of the Bayesian information criterion, and a generalized cross-validation method. The estimated curves selected by the five methods were compared to each other in a study of rectal cancer mortality in autoworkers. In the simulation study, we estimated the fit of the penalized spline models in six exposure-response scenarios, using the five model fit criteria. The methods were compared based on a mean squared-error score and the power and size of hypothesis tests for any exposure effect and for detecting nonlinearity. All comparisons were made across a range of total sample sizes and numbers of cases.

The Cox regression model [8] is widely used in epidemiological research to examine the association between an exposure and a health outcome. In a typical approach to the analysis of epidemiologic data with a continuous exposure variable, the exposure is transformed to an ordinal or nominal polytomous variable and relative risk (RR) is modeled as a step function of the exposure. This approach is attractive because there are no constraints on the change in RR between exposure categories and because it is conceptually and computationally straightforward to implement. However, the selection of cutpoints used to define the exposure categories influences the shape of the dose-response relationship, and this model sensitivity has raised concerns [43]. Moreover, a step function does not take advantage of the information within categories [18,19]. To avoid these pitfalls, as well as to avoid parametric constraints on the shape of the exposure-response curve, a variety of smoothing techniques have been recommended by epidemiologists [16,18,35]. Over the past decade, splines and other smoothing methods have appeared in a wide range of studies, from etiologic investigations of air pollution [31], occupational exposures [36], cancer risk assessment [6], and diet-disease associations [7,15], to microarray studies [10].

In several recent papers we have examined methods for selecting the optimal amount of smoothing for splines in an occupational cohort study of autoworkers exposed to metalworking fluids (MWF) [12,16,26,39]. The amount of smoothness (degrees of freedom) determines the RR predicted by the model for a given level of exposure and is therefore a critical aspect of model selection. Thurston et al. [39] presented the relative risk of prostate cancer mortality as a smoothed function of cumulative exposure to soluble metalworking fluid using penalized splines with different degrees of freedom. Malloy et al. [26] presented RR for rectal cancer in relation to cumulative exposure to straight metalworking fluid with penalized spline models fit using varying numbers of knots and three different model fit criteria: the Akaike information criterion (AIC), a corrected AIC (AICc), and cross-validation (CV). Classical methods for assessing model fit, such as Mallows’ Cp and Allen’s PRESSp, do not generalize to Cox models [41]. In this paper we examine criteria for assessing model fit that have been used for model selection or smoothing parameter selection in generalized linear models and, more recently, for Cox regression. The methods considered here, AIC, AICc, generalized cross-validation (GCV), and two forms of the Bayesian information criterion (BIC), are all based on penalizing the partial likelihood for high degrees of freedom and are computationally efficient to implement. We fit penalized splines in Cox models to the autoworkers dataset previously analyzed by Malloy et al. [26] to compare the results of the five model fit criteria. We then apply the same criteria in a simulation study designed to reflect several plausible scenarios that are typical of epidemiological studies. This allows us to investigate the properties and behavior of the different goodness-of-fit methods and to assess how well each one captures the true exposure-response curve.

The remainder of the paper is organized as follows. In Section 2, we describe the Cox model with penalized splines. In Section 3, the criteria used to select the smoothing parameter are reviewed. The use of these criteria is illustrated in Section 4 in a study of rectal cancer mortality in an occupational cohort of autoworkers exposed to metalworking fluid. The Monte Carlo simulation study used to examine the properties of these selection criteria is described in Section 5. Results and conclusions follow in Sections 6 and 7, respectively.

The Cox model is used to analyze censored survival data. Suppose the observed data are the triplets (*t*_{i}, *δ*_{i}, *x*_{i}), *i* = 1,…,*n*, where *t*_{i} is the observed follow-up time for subject *i*, *δ*_{i} indicates whether the event of interest was observed (*δ*_{i} = 1) or censored (*δ*_{i} = 0), and *x*_{i} is the exposure of interest. The standard Cox proportional hazards model with a linear exposure effect is

$$\lambda (t\mid {x}_{i})={\lambda}_{0}(t)\mathrm{exp}(\beta {x}_{i})$$

where *λ*(*t*|*x*_{i}) is the hazard function given the predictor *x*_{i}, *λ*_{0}(*t*) is an unspecified baseline hazard function, and *β* is the log hazard ratio associated with a one-unit increase in exposure. To allow a nonlinear exposure-response relationship, the linear term *βx*_{i} is replaced by a smooth function *s*(*x*_{i}):

$$\lambda (t\mid {x}_{i})={\lambda}_{0}(t)exp\left[s({x}_{i})\right]$$

(1)

where *s*(*x _{i}*) is a linear combination of B-spline basis functions,

$$s({x}_{i})=\sum _{k=1}^{K}{b}_{k}{f}_{k}({x}_{i}).$$

(2)

B-splines are piecewise-defined polynomials joined at knots, with cubic B-splines being the most frequently used basis functions. An algorithm for computing the B-splines is given in de Boor [9], and Eilers and Marx [11] provide an excellent summary of the properties of B-splines and their use in nonparametric regression.
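The B-spline recursion of de Boor can be sketched directly. The following minimal Python implementation (our illustration, not the authors' code — their analyses used R's spline routines) evaluates a single B-spline basis function via the Cox-de Boor recursion and the expansion *s*(*x*) in (2):

```python
def bspline_basis(k, d, knots, x):
    """Evaluate the k-th B-spline basis function of degree d at x
    using the Cox-de Boor recursion. Zero denominators (repeated
    knots) are treated as contributing zero."""
    if d == 0:
        return 1.0 if knots[k] <= x < knots[k + 1] else 0.0
    left = 0.0
    denom = knots[k + d] - knots[k]
    if denom > 0:
        left = (x - knots[k]) / denom * bspline_basis(k, d - 1, knots, x)
    right = 0.0
    denom = knots[k + d + 1] - knots[k + 1]
    if denom > 0:
        right = (knots[k + d + 1] - x) / denom * bspline_basis(k + 1, d - 1, knots, x)
    return left + right

def s(x, coefs, knots, degree=3):
    """The spline s(x) as a linear combination of B-spline basis
    functions, as in equation (2)."""
    return sum(b * bspline_basis(k, degree, knots, x)
               for k, b in enumerate(coefs))
```

A quick sanity check is the partition-of-unity property: with all coefficients equal to 1, *s*(*x*) is identically 1 on the interior of the knot range.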

Penalized splines are available in existing software packages such as R and S-PLUS. They are not available, however, in SAS, which is commonly used by epidemiologists, or in Stata, another frequently used package. Estimates for the unknown parameters in the B-spline expansion (2) are found by maximizing the penalized partial log-likelihood,

$$l(b)-\theta \int {\left[{s}^{\prime \prime}(x)\right]}^{2}dx,$$

(3)

where *b* = (*b*_{1},…,*b*_{K})′ is the vector of parameters associated with the spline, *l*(*b*) is the partial log-likelihood, and *θ* ≥ 0 is the smoothing parameter that controls the trade-off between fidelity to the data and the smoothness of the fitted curve.

The model fit criteria used to select the optimal smoothing parameter in (3), *θ*, can be formulated in terms of the degrees of freedom (*df*) of the penalized spline. The *df* is the effective number of parameters estimated in model (1). With no penalty (*θ* = 0) there are *K* terms in the spline basis expansion (2) giving *df* = *K*. Taking *θ* → ∞ gives *df* = 1 and fits *s(x)*= *βx* [17]. The remaining possible values of *df* range between *df* = 1 and *df* = *K* and can be computed using the method discussed in Chapter 5 of Therneau and Grambsch [38]. We examine several measures of model fit for selecting the optimal smoothness, defined in terms of *df* as displayed in Table 1.

The AIC method estimates the expected Kullback-Leibler (KL) information [24], a measure of the information lost when using an approximating distribution for estimation and inference instead of the true (unknown) distribution [4]. The degrees of freedom of the model give a bias correction to the expected KL information in large samples and act as a penalty on the number of parameters in the model. The optimal model minimizes AIC with respect to *df*, providing a balance between model fit (via the log-likelihood) and parsimony (*df*). However, AIC has displayed deficiencies in detecting nonlinearities in Poisson models [28] and tends to under-penalize the spline in nonparametric regression [22], leading to more variability in the fitted curves. A corrected AIC, AICc, adjusts for this overfitting by replacing the degrees of freedom in the AIC formula with a finite sample correction based on the number of events (uncensored cases) as in Therneau and Grambsch [37]. The statistical software packages R and S-PLUS fit the Cox model with penalized splines using AIC by default. AICc is also available in these two packages and implementation of both is straightforward.

We also examine methods based on a Bayesian motivation for model selection and an approximate generalized cross-validation statistic. The Bayesian information criterion (BIC), also known as the Schwarz criterion [32], estimates the Bayes factor for comparing candidate models to one another and can be applied even when no prior distributions are explicitly specified. Volinsky and Raftery [42] examined the use of BIC in Cox regression models for variable selection and used the number of events, *r*, in place of the sample size, which we denote BICr. For clarity, we denote by BICn the BIC criterion using the sample size. Volinsky and Raftery [42] reported an improvement in prediction using the BICr criterion over the BICn. It is not clear if this behavior generalizes to models fit using the penalized partial likelihood, therefore we consider both definitions of BIC. Finally, we also apply the GCV-type criterion used by Tibshirani [40] in Cox models for variable selection. Both BIC and the GCV criterion are minimized with respect to *df*. We demonstrate the use of these five criteria, AIC, AICc, BICn, BICr, and GCV, for selecting the optimal degrees of freedom when modeling exposure-response data from a cohort study of autoworkers exposed to metalworking fluids.
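As a concrete sketch of how these criteria trade off fit against *df*, the comparison can be written as a small Python function. The AIC, BICn, and BICr forms below follow the standard definitions; the AICc shown uses the usual small-sample correction with the number of events *r* in place of the sample size, which is our assumption — the exact correction used in the paper follows Therneau and Grambsch and may differ:

```python
import math

def fit_criteria(loglik, df, n, r):
    """Model fit criteria for a penalized spline Cox fit.

    loglik : maximized penalized partial log-likelihood l(b)
    df     : effective degrees of freedom of the fit
    n      : total sample size
    r      : number of events (uncensored cases)
    """
    return {
        "AIC":  -2 * loglik + 2 * df,
        # Assumed small-sample form of the correction, with events r
        # in place of n (see Therneau and Grambsch for the exact form):
        "AICc": -2 * loglik + 2 * df * r / (r - df - 1),
        "BICn": -2 * loglik + df * math.log(n),
        "BICr": -2 * loglik + df * math.log(r),
    }

# Each criterion is then minimized over a grid of candidate df values,
# e.g. best_df = min(grid, key=lambda d: fit_criteria(ll[d], d, n, r)["AICc"])
```

Because log *r* < log *n*, BICr penalizes each degree of freedom less heavily than BICn, and AICc always penalizes at least as much as AIC.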

The autoworkers cohort study has been described in detail in previous publications [13,14]. We present a brief outline of the study here.

The cohort consists of 46,399 autoworkers from three manufacturing plants in Michigan. All employees who worked for at least three years prior to January 1, 1985 were included in the cohort and followed from 1941 to 1995 [13]. An extensive exposure assessment was conducted to retrospectively estimate past levels of particulate exposure to specific types of metalworking fluids [20]. By combining these exposure estimates with employment records, cumulative exposures to straight, soluble and synthetic fluids were estimated for each subject, measured in mg/m^{3}-years.

In this paper, we examined the dose-response curves for the association between rectal cancer mortality and particulate exposure to metalworking fluids, as reported previously [26]. Because of small numbers of females, we excluded them from this analysis. The distribution of exposure was skewed, with a mean of 2.8 mg/m^{3}-years and a median of 0.02 mg/m^{3}-years. Cox models were fit using penalized splines to estimate the exposure-response relationships, where the model in (1) was extended to include a vector of covariates as discussed in chapters 3 and 5 of Therneau and Grambsch [38]. The optimal smoothness was *df* = 1.99 for AIC, 1.95 for AICc, 1.01 for BICn and BICr, and 7.40 for GCV. AIC and AICc were fairly consistent with each other in both degrees of freedom and shape, as were BICn and BICr. The AIC methods differed from the BIC methods, both by slightly higher degrees of freedom as well as a decline in predicted RR in the higher exposure range. By contrast, the GCV criterion selected very large degrees of freedom, resulting in a biologically implausible fitted dose-response curve. Malloy et al. [26] used a true leave-one-out CV to select the smoothness and found results similar to AIC. We have opted for Tibshirani’s [40] GCV criterion here as it is computationally feasible to implement and is the natural CV counterpart for the linear regression setting. This analysis suggests that GCV has some deficiencies, and we have explored this further in the simulation studies that follow.

Data were generated to follow the Cox regression model in (1) with a nonlinear exposure-response relationship, *λ*(*t*|*x*_{i}) = *λ*_{0}(*t*)exp[*s*(*x*_{i})], using the true curves *s*(*x*) described below.

Let the random variable *T* denote survival time. If *λ*(*t*) is the hazard function, then the cumulative hazard function is
$\mathrm{\Lambda}(t)={\int}_{0}^{t}\lambda (u)du={e}^{s(x)}{\mathrm{\Lambda}}_{0}(t)$, where Λ_{0}(*t*) is the cumulative baseline hazard function. The survivor function is *S*(*t*) = *e*^{−Λ(t)}, so that the cumulative distribution function (CDF) of *T* is *F*(*t*) = 1 − *S*(*t*) = 1 − *e*^{−Λ(t)}. Using a Weibull distribution for the survival times, the baseline hazard is *λ*_{0}(*t*) = *ωνt*^{ω−1}, so that Λ_{0}(*t*) = *νt*^{ω} and

$$F(t)=1-exp[-\nu {t}^{\omega}{e}^{s(x)}].$$

(4)

It is well known from the probability integral transformation that *F*(*T*) ~ *U*(0,1), where *U*(0,1) is the uniform distribution on the interval from 0 to 1 (see, for example, Casella and Berger [5], p. 52). Therefore, to generate survival times, *T*_{1}, for the event of interest, we solved (4) for *T* as
$${T}_{1}={\left[-\frac{1}{\nu}\mathrm{log}(1-{U}_{1}){e}^{-s(x)}\right]}^{1/\omega},$$
where *U*_{1} ~ *U*(0,1).
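This inverse-CDF step translates directly into code. The following is a minimal Python sketch (the paper's simulations were run in R; the function name here is ours):

```python
import math
import random

def generate_survival_time(x, s, nu, omega, rng=random):
    """Draw one event time from the Weibull-baseline Cox model by
    inverting the CDF F(t) = 1 - exp(-nu * t**omega * exp(s(x))):
        T1 = [-(1/nu) * log(1 - U1) * exp(-s(x))]**(1/omega)."""
    u = rng.random()  # U1 ~ U(0, 1)
    return (-math.log(1.0 - u) * math.exp(-s(x)) / nu) ** (1.0 / omega)
```

With *s*(*x*) = 0, *ν* = 1, and *ω* = 1, this reduces to sampling from the standard exponential distribution, which provides a quick check of the transformation.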

We also included a competing risk time, *T*_{2}, and an end-of-study (administrative censoring) time, *τ*. The observed time for each subject was taken as the minimum of *T*_{1}, *T*_{2}, and *τ*, with the event indicator equal to 1 only when *T*_{1} was the smallest of the three.
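Putting these pieces together, one subject's observed data can be simulated as below. This is a hypothetical sketch: the exponential distribution for the competing risk time *T*_{2} is our assumption, chosen so that the rate *γ* corresponds to roughly 1% censoring per year as described in the next paragraph, and the inverse-CDF draw for *T*_{1} is repeated inline for self-containment:

```python
import math
import random

def simulate_subject(x, s, nu, omega=5.0, gamma=0.01, tau=20.0, rng=random):
    """Simulate one subject's observed time and event indicator.

    T1  : event time from the Weibull-baseline Cox model (inverse CDF)
    T2  : competing-risk time, assumed exponential with rate gamma
    tau : administrative end-of-study time
    The observed time is min(T1, T2, tau); delta = 1 only when the
    event of interest occurs first."""
    u1, u2 = rng.random(), rng.random()
    t1 = (-math.log(1.0 - u1) * math.exp(-s(x)) / nu) ** (1.0 / omega)
    t2 = -math.log(1.0 - u2) / gamma
    t_obs = min(t1, t2, tau)
    delta = 1 if t1 <= min(t2, tau) else 0
    return t_obs, delta
```

Varying *ν* shifts the fraction of subjects who become cases within the follow-up window, which is how a target case proportion can be reached.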

The parameters *ω*, *ν*, *τ*, and *γ*, the form of the true *s*(*x*) curves, and the exposure covariate *x* were chosen to reflect situations that typically arise in epidemiologic studies of potential carcinogens, as illustrated by the application in the previous section. Breslow and Day [3] suggested that a value of *ω* = 5 is typical of many cancers and so we used this value. We fixed *γ* at 0.01, which corresponds to approximately 1% censoring per year for the competing risk. We chose *ν* such that the proportion of cases after *τ* = 20 years of follow-up was 5 to 40%. We used the following forms for *s*(*x*):

- *s*(*x*) = *β* log(*x* + 1), *β* > 0 (logarithm)
- *s*(*x*) = *βx*(*x* − *b*_{m}), *β* < 0 (quadratic)
- *s*(*x*) = *β* sin(*x*/*b*_{p}), *β* > 0 (sine)
- *s*(*x*) = *β*(*x* − *b*_{mid})_{+}, where *x*_{+} = *x* if *x* > 0 and 0 if *x* ≤ 0, *β* > 0 (threshold)
- *s*(*x*) = *βx*, *β* > 0 (linear)
- *s*(*x*) = 0 (null)

where *β* was chosen such that the hazard ratio for a given scenario at approximately the median exposure across the 1000 simulated data sets is double that at no exposure, *b*_{m} is twice the average of the maximum exposure across the data sets, and *b*_{p} and *b*_{mid} are analogous constants that set the period of the sine curve and the location of the threshold within the observed exposure range.
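The six scenario curves above can be collected in one place. In this Python sketch the constants `beta`, `b_m`, `b_p`, and `b_mid` are placeholders, not the values the paper derives from the simulated exposure distributions:

```python
import math

def make_scenarios(beta, b_m, b_p, b_mid):
    """The six exposure-response forms s(x) used in the simulations.
    beta > 0 here; the quadratic scenario's negative coefficient is
    applied internally. All constants are illustrative placeholders."""
    return {
        "logarithm": lambda x: beta * math.log(x + 1),
        "quadratic": lambda x: -beta * x * (x - b_m),   # beta < 0 form
        "sine":      lambda x: beta * math.sin(x / b_p),
        "threshold": lambda x: beta * max(x - b_mid, 0.0),
        "linear":    lambda x: beta * x,
        "null":      lambda x: 0.0,
    }
```

Each curve can then be passed as the `s` argument of a survival-time generator to produce one simulation scenario.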

Figure 2. Estimated log(RR) = *s*(*x*) curves for each method and high case counts corresponding to those with median *RMSE*. Black curves are the true exposure-response curves used to generate the response.

One thousand data sets were simulated, with *n* = 2000 subjects per data set, using the same exposure variable for each of the six exposure-response curves, with high and low case counts and a range of case proportions. Cox regression models were fit in the R software package and each of the five selection criteria (AIC, AICc, BICn, BICr, and GCV) was used to determine the optimal smoothing parameter (*df*) for each simulated data set. The penalized spline function in R was used to automatically select *df* using AIC and AICc. As R requires that 17 cubic B-spline basis functions be used in the basis expansion in (2), we used these 17 basis functions for all methods and scenarios. This is consistent with the results of Ruppert [30], who found that the number of knots, which determines the number of basis functions in (2), typically has a small effect on the smoothness provided a sufficient number have been chosen. A grid search for *θ* was used to determine the optimal *df* for the other three methods. In addition, the Cox model with a linear exposure-response curve was also fit for comparison to the five other methods.

To evaluate and compare the methods, the square root of the mean-squared error (*RMSE*) was calculated for the fitted curve as compared to the true curve in each simulated data set using

$$\mathit{RMS}{E}_{j}={\left(\frac{1}{n}\sum _{i=1}^{n}{\left[s({x}_{i})-{\widehat{s}}_{j}({x}_{i})\right]}^{2}\right)}^{1/2},$$

(5)

where *ŝ*_{j}(*x*_{i}) is the estimated curve for the *j*th simulated data set, *j* = 1,…,1000. We summarized each criterion by the median and interquartile range of its *RMSE* values across the simulated data sets. As a benchmark, we also report the fit, denoted *MSE*, that directly minimizes *RMSE*_{j} over *df*; it requires knowledge of the true curve and so is not available in practice.
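Equation (5) translates directly into code. A small Python version, where the true and estimated curves are passed as hypothetical callables:

```python
import math

def rmse(x_values, s_true, s_hat):
    """Root mean squared error, as in equation (5), between the true
    curve s_true and the estimated curve s_hat, evaluated over the
    observed exposures x_values."""
    n = len(x_values)
    return math.sqrt(sum((s_true(x) - s_hat(x)) ** 2 for x in x_values) / n)
```

For example, an estimated curve that is uniformly offset from the truth by one unit has an *RMSE* of exactly 1 regardless of the exposure values.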

The performance of these methods can also be assessed by the validity and power of hypothesis tests based on the optimal models selected. We tested two hypotheses using a likelihood ratio test (LRT). First, we tested the penalized spline fit versus the model with no exposure effect. This is equivalent to testing for any effect in the model with the basis expansion representation in (2). That is, we test H_{0}: *b*_{1} = *b*_{2} = … = *b*_{K} = 0 versus H_{1}: *b*_{k} ≠ 0 for at least one *k*. Second, we tested the penalized spline fit versus the model with a linear exposure effect, a test of nonlinearity in the exposure-response relationship.

Based on goodness of fit as measured by *RMSE* in (5), no one method performed best across all six scenarios. By the definition given in the previous section, the *MSE* benchmark has the smallest *RMSE*, as it is chosen to minimize it; it is presented only as a benchmark since it cannot be computed unless the true dose-response is known. For the four scenarios with nonlinearities (logarithm, quadratic, sine, and threshold) and with a high number of cases, the AIC and AICc criteria had the smallest *RMSE* values and the GCV method had the highest *RMSE* of all penalized methods, as in Table 2 and Figure 3. Compared to the other penalized spline criteria, GCV performed better with a low number of cases (Table 3), having the smallest *RMSE* values for the quadratic, sine, and threshold scenarios, while AICc had the smallest for the logarithm. Note that GCV did not perform well when the number of cases was dropped to approximately 100, whether the case proportion was 50% or 5% (as seen in Table 4), consistent with its poor performance in the application. For a small absolute number of cases, GCV uniformly had the highest and most variable *RMSE* scores for all scenarios except the sine. The linear and null scenarios were fit best by the parametric linear exposure-response model, with BICn providing the best fit among the penalized spline methods for low case counts and GCV for high case counts. Regardless of the number of cases, AIC and AICc had the highest *RMSE* for the linear and null scenarios, although, based on the fits to the linear scenario in Figure 2, the AIC estimated curve still captured the overall linear relationship fairly well.

We investigated the logarithmic scenario in more detail than the others because it was similar to the estimated exposure-response in the application and the AIC methods were quite variable for this scenario in the simulations. The plot of the logarithm fits in Figure 2 illustrates a concern with both AIC methods: high curvature of the estimated function reflecting high degrees of freedom and an implausible model fit. Similar behavior was also seen for the logarithm scenario with a low proportion of cases (data not shown). Figure 4 gives further estimated logarithm fits with high case counts. These three plots are the estimated curves corresponding to the 75^{th} and 25^{th} percentiles of *RMSE* (top two plots) in addition to the best fitting curve, that which gave the minimum *RMSE* for each criterion. From these we see that all methods, even GCV, give smooth fits with low curvature, even for the estimated curves at the 75^{th} percentile of *RMSE*. The *MSE*-selected curve has relatively high degrees of freedom; in particular, *df* = 5.37 for the *MSE* curve corresponding to the lowest *RMSE*.

Figure 4. Fit of logarithm scenario corresponding to the 75^{th} percentile of *RMSE* (top left), 25^{th} percentile of *RMSE* (top right), and minimum of *RMSE* (bottom) for high case counts.

In general, the AIC and AICc methods tended to perform similarly, with AICc having the smaller median *RMSE* scores for most scenarios and case sizes. This similarity can also be seen in Figure 5, which presents pairwise plots of log(*RMSE*) values for each criterion, using the logarithm and null scenarios with high case counts as illustrations. (The other scenarios and case numbers provided somewhat similar results and are thus not shown.) The AIC and AICc methods also had more variable errors and exhibited more outliers in the right tail of the *RMSE* distribution, as displayed in Figure 3, indicating more simulated data sets with estimated curves further from the truth than the other methods. Likewise, as expected, the BICn and BICr methods performed similarly overall, as also demonstrated in Figure 5. BICr fit better for the four nonlinear models and BICn for the linear and null. With low case counts, the BIC methods gave median fits that were close to linear in the logarithm, quadratic, and threshold scenarios, with optimal degrees of freedom selected to be approximately 1 in these situations (output omitted). With high case counts, the BIC methods selected less linear fits except for the logarithm scenario, as displayed in Figure 2. Based on Figure 5, the GCV method demonstrated behavior similar to BICn and BICr for the logarithm and null scenarios. With a high number of cases, this was observed regardless of the curve being estimated. For smaller numbers of cases, the GCV was not similar to any of the other methods, except for the linear and null scenarios, where BICn, BICr, and GCV all selected models close to linear.

Figure 5. Pairwise scatter plots of log(*RMSE*) values for logarithm scenario (blue graphs above diagonal) and null scenario (red graphs below diagonal).

The results in Table 5 illustrate the power and size of likelihood ratio hypothesis tests of no effect and of a nonlinear dose-response relationship. Results are presented for low case counts. The tests for any effect were similar across the penalized spline methods. There was a high degree of power for the logarithm (93–95%), quadratic (100%), sine (95–100%), and linear (100%) scenarios. There was much less power to detect an effect for the threshold scenario, with AIC and AICc having the highest power (over 75%) and BICn and BICr the lowest (62% and 66%, respectively). The threshold scenario was interesting as the true log relative risk is zero across the first half of the exposure range and then increases linearly. The median fit displayed in Figure 2 shows the difficulty in estimating a function which is non-differentiable at a single value, particularly if the change-point is of interest. Like the penalized spline methods, the parametric linear fit had high power in most scenarios, despite its poor fit, although for the sine and threshold curves the power was particularly low, 37% and 47%, respectively. With respect to the null model, the BICn method had estimated size at the nominal testing level of 5%, BICr was at 6% and the linear fit was at 4%. The AIC and AICc methods were well above the nominal level for both tests at 15% and 14%.

For the second hypothesis test examined, all methods were able to detect the nonlinearity in the sine curve, with power over 97% for all (including *MSE*), even though this can be considered one of the most difficult curves to fit, based on the high *RMSE* scores in Tables 2 and 3. For the logarithm curve, which was also hard to fit, the methods had low power to detect the nonlinearity. Power was similarly low for the quadratic and threshold scenarios. On the other hand, when the curve was truly linear, the AIC and AICc methods rejected linearity in over 74% of the samples; BICn, BICr, and GCV only did so in about 10–13% of the samples. These three methods also fit the linear scenario slightly better than the AIC methods. Results were not substantially different when case counts increased, with power increasing with increasing case counts and the size of the appropriate test remaining similar to those in Table 5 for the linear and null scenarios.

The results for the test of nonlinearity when there is no effect (the null scenario) are curious. As this is testing a linear relationship versus a nonlinear relationship defined by the basis expansion in equation (2), it is difficult to assess the meaning of these results when there is no true effect. The high number of significant p-values for the AIC and AICc methods (over 74%) suggests nonlinearity is appropriate, yet the *RMSE* scores in Tables 2 and 3 are only marginally larger than those for the other methods for the null scenario, suggesting the overall fit from these methods is on par with the others. Furthermore, the degrees of freedom selected using AIC and AICc are much higher and more variable than for the other methods for this (and every) scenario (output omitted). Inspection of individual plots suggests that the AIC methods are varying around the horizontal axis in an attempt to fit the null case, using up a large number of degrees of freedom and rejecting linearity in favor of nonlinearity in up to 30% of the 1000 simulated datasets.

To further explore the effect of sample size on the estimates, we also considered a simulation where the total sample size was dropped to 200 but the proportion of cases remained similar to those in Table 2. The first line of Table 5 gives the results for the logarithm scenario, where we see that the median *RMSE* values are similar across methods. This behavior was also seen in the other nonlinear scenarios, but there were larger differences between the AIC criteria and the other methods for the linear and null scenarios (output omitted). Here, for the logarithm curve, we see that when the average number of cases across the 1000 simulated data sets is similar but the total sample size is not, the *RMSE* median and IQR are still close, as when there were approximately 100 cases and 740 cases. With the total sample size held constant, the *RMSE* values decreased in size, on average, and became less variable as the number of cases increased. This was consistent for all dose-response scenarios and model selection criteria examined.

In the simulation study, none of these methods, AIC, AICc, BICn, BICr, or GCV, was clearly superior in estimating all of the six exposure-response curves examined here (the sine curve was included to represent a seasonal adjustment). The BIC criteria are conservative in that they tend to choose degrees of freedom close to 1 with very little variability. The AIC and AICc methods have much more variable *df* and *RMSE* scores, displaying a few very large *RMSE* scores, but also having some of the smallest *RMSE* values overall, suggesting they are able to achieve good fit. The AIC methods also rejected the null hypothesis of no effect in favor of an association in the null scenario (no true association) too often, incorrectly detecting an association when one does not exist. The GCV criterion had the smallest median *RMSE* for the quadratic, sine, and threshold curves with a lower proportion of cases, but all methods gave similar fits for these scenarios. While the AIC and AICc criteria did display some deficiencies in the models selected, including biologically implausible models with high degrees of freedom, AICc selected fewer degrees of freedom. Thus we consider AICc to be the most reliable and flexible of the criteria for fitting penalized spline curves in the Cox model, particularly if there is strong prior reason to believe a nonlinearity is present.

In the occupational application, both AIC selected curves followed a nonlinear pattern, with the log of the hazard ratio for rectal cancer mortality initially increasing with exposure, followed by a decline. By contrast the curves selected by BIC were more linear. The pattern of attenuation or even decline in RR at high exposure levels is often seen in occupational studies and has been attributed to bias [34]. Thus, when fitting models to occupational data, AICc would be a reasonable choice for a model selection criterion for penalized splines in Cox models. GCV performed erratically in the application.

Researchers interested in estimating exposure-response relationships require tools for accurately examining departures from the assumption of linearity. While the focus of this paper was primarily on the bias in exposure-response curves estimated with penalized splines using standard model fit criteria for selecting the smoothing parameter, penalized splines are by no means the only method. Fractional polynomials, as described by Royston and Altman [29], and restricted cubic splines were examined in a simulation study by Holländer and Schumacher [21], who found the fractional polynomials to be superior. Their study was limited to two nonlinear exposure-response curves, a step function and a V-shaped curve, and uncensored data simulated using a uniform exposure distribution. We selected nonlinear scenarios that would be familiar to epidemiologists and used a skewed exposure distribution, common in environmental studies. In the simulation study of Lee [25], which examined AICc, GCV, CV, Mallows’ Cp, and two risk estimation methods for selecting the smoothing parameter in nonparametric regression, no one method was found to be best and all six criteria were considered reasonable. Although our results suggest some deficiencies may exist in using penalized splines with the criteria investigated here, we do see strong evidence of their ability to detect and correctly model nonlinear relationships in Cox models.

Supported by Grant CA74386 from the National Cancer Institute and Grant ESO7142 from the NIEHS.

**Publisher's Disclaimer: **This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

1. Akaike H. In: Petrov BN, Csaki F, editors. Information theory and an extension of the maximum likelihood principle; 2nd International Symposium on Information Theory; Budapest: Akademiai Kiado; 1973. pp. 267–281.

2. Bender R, Augustin T, Blettner M. Generating survival times to simulate Cox proportional hazard models. Stat Med. 2005;24:1713–1723. [PubMed]

3. Breslow NE, Day NE. Statistical methods in cancer research: volume II - the design and analysis of cohort studies. Lyon: International Agency for Research on Cancer; 1987. [PubMed]

4. Burnham KP, Anderson DR. Model selection and multi-model inference: a practical information-theoretic approach. New York: Springer-Verlag; 2002.

5. Casella G, Berger RL. Statistical inference. Belmont CA: Duxbury Press; 1990.

6. Cheng H, Aylward L, Beall C, et al. TCDD exposure-response analysis and risk assessment. Risk Anal. 2006;26:1059–1071. [PubMed]

7. Cho E, Smith-Warner SA, Spiegelman D, et al. Dairy foods, calcium and colorectal cancer: a pooled analysis of 10 cohort studies. J Natl Cancer I. 2004;96:1015–1022. [PubMed]

8. Cox DR. Regression models and life tables (with discussion) J R Stat Soc B. 1972;34:187–200.

9. de Boor C. A Practical Guide to Splines. New York: Springer-Verlag; 1978.

10. Eckel JE, Gennings C, Therneau TM, et al. Normalization of two-channel microarray experiments: a semiparametric approach. Bioinformatics. 2005;21:1078–1083. [PubMed]

11. Eilers PHC, Marx BD. Flexible smoothing with B-splines and penalties. Stat Sci. 1996;11:89–102.

12. Eisen EA, Agalliu I, Coull B, et al. Smoothing methods applied to occupational cohort studies; illustrated by penalized splines. Occup Environ Med. 2004;61:854–860. [PMC free article] [PubMed]

13. Eisen EA, Bardin J, Gore R, et al. Exposure-response models based on extended follow-up of a cohort mortality study in the automobile industry. Scand J Work Env Hea. 2001;27:240–249.

14. Eisen EA, Tolbert PE, Monson RR, Smith TJ. Mortality studies of machining fluid exposure in the automobile industry, I: a standardized mortality ratio analysis. Am J Ind Med. 1992;22:809–824.

15. Genkinger JM, Hunter DJ, Spiegelman D, et al. Alcohol intake and ovarian cancer risk: a pooled analysis of 10 cohort studies. Brit J Cancer. 2006;94:757–762.

16. Govindarajulu US, Spiegelman D, Thurston SW, et al. Comparing smoothing techniques in Cox models for exposure-response relationships. Stat Med. 2007;26:3735–3752.

17. Gray RJ. Flexible methods for analyzing survival data using splines, with applications to breast cancer prognosis. J Am Stat Assoc. 1992;87:942–951.

18. Greenland S. Dose-response and trend analysis in epidemiology: alternatives to categorical analysis. Epidemiology. 1995;6:356–365.

19. Greenland S. Problems in the average-risk interpretation of categorical dose-response analyses. Epidemiology. 1995;6:563–565.

20. Hallock MF, Smith TJ, Woskie SR, et al. Estimation of historical exposures to machining fluids in the automotive industry. Am J Ind Med. 1994;26:621–634.

21. Holländer N, Schumacher M. Estimating the functional form of a continuous covariate’s effect on survival time. Comput Stat Data Anal. 2006;50:1131–1151.

22. Hurvich CM, Simonoff JS, Tsai CL. Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. J R Stat Soc B. 1998;60:271–293.

23. Hurvich CM, Tsai CL. Regression and time series model selection in small samples. Biometrika. 1989;76:297–307.

24. Kullback S, Leibler RA. On information and sufficiency. Ann Math Statist. 1951;22:79–86.

25. Lee TCM. Smoothing parameter selection for smoothing splines: a simulation study. Comput Stat Data Anal. 2003;42:139–148.

26. Malloy EJ, Miller KL, Eisen EA. Rectal cancer and exposure to metalworking fluids in the automobile manufacturing industry. Occup Environ Med. 2007;64:244–249.

27. O’Sullivan F. Nonparametric estimation of relative risk using splines and cross-validation. SIAM J Sci Stat Comput. 1988;9:531–542.

28. Roberts S, Martin MA. The question of nonlinearity in the dose-response relation between particulate matter air pollution and mortality: can Akaike’s information criterion be trusted to take the right turn? Am J Epidemiol. 2006;164:1242–1250.

29. Royston P, Altman DG. Regression using fractional polynomials of continuous covariates: parsimonious parametric modelling (with discussion). Appl Stat. 1994;43:429–467.

30. Ruppert D. Selecting the number of knots for penalized splines. J Comput Graph Stat. 2002;11:735–757.

31. Schwartz J, Laden F, Zanobetti A. The concentration-response relation between PM2.5 and daily deaths. Environ Health Persp. 2002;110:1025–1029.

32. Schwarz G. Estimating the dimension of a model. Ann Stat. 1978;6:461–464.

33. Sleeper LA, Harrington DP. Regression splines in the Cox model with application to covariate effects in liver disease. J Am Stat Assoc. 1990;85:941–949.

34. Stayner L, Steenland K, Dosemeci M, et al. Attenuation of exposure-response curves in occupational cohort studies at high exposure levels. Scand J Work Env Hea. 2003;29:317–324.

35. Steenland K. Smoothing is soothing and splines are fine. Occup Environ Med. 2005;62:141–142.

36. Steenland K, Deddens JA. A practical guide to dose-response analyses and risk assessment in occupational epidemiology. Epidemiology. 2004;15:63–70.

37. Therneau TM, Grambsch PM. Penalized Cox models and frailty. Technical report, Division of Biostatistics, Mayo Clinic; Rochester, MN: 1998.

38. Therneau TM, Grambsch PM. Modeling Survival Data: Extending the Cox Model. New York: Springer; 2000.

39. Thurston SW, Eisen EA, Schwartz J. Smoothing in survival models: An application to workers exposed to metalworking fluids. Epidemiology. 2002;13:685–692.

40. Tibshirani R. The lasso method for variable selection in the Cox model. Stat Med. 1997;16:385–395.

41. Verweij PJM, Van Houwelingen HC. Cross-validation in survival analysis. Stat Med. 1993;12:2305–2314.

42. Volinsky CT, Raftery AE. Bayesian information criterion for censored survival models. Biometrics. 2000;56:256–262.

43. Wartenberg D, Savitz D. Evaluating exposure cut-point bias in epidemiologic studies of electric and magnetic fields. Bioelectromagnetics. 1993;14:237–245.