Social and physical scientists are often interested in how certain variables depend on one another. They often assume that a functional relationship exists between these two variables, and they run experiments or collect samples to learn about this relationship. They are aware, however, that the data do not follow a deterministic equation, but rather the general stochastic equation

$$Y = f(x) + \varepsilon, \tag{1}$$

where *Y* is a dependent variable, *x* is an independent or controlled variable, and ε is random error with mean 0. One of the simplest models relating a response, *Y*, to a single predictor, *x*, is the *p*^{th}-order polynomial regression model

$$Y = \beta_0 + \beta_1 x + \beta_2 x^2 + \cdots + \beta_p x^p + \varepsilon.$$
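As a concrete illustration (not taken from the paper), the polynomial model can be fit by ordinary least squares; the true curve sin(2π*x*), the noise level, and the order *p* = 3 below are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: Y = f(x) + eps with an arbitrary true curve f(x) = sin(2*pi*x)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, size=x.size)

# Fit the p-th order polynomial regression model by ordinary least squares
p = 3
X = np.vander(x, p + 1, increasing=True)      # columns 1, x, x^2, ..., x^p
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # estimates of beta_0, ..., beta_p
y_hat = X @ beta
print(beta.shape)  # (4,)
```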

While this model is appealing and can be very useful, it may be inadequate since it assumes that only one polynomial describes the average relationship between *Y* and *x*. A more complex model would allow several polynomials to do this. An example of such a model is a piecewise polynomial regression spline. A piecewise polynomial regression spline with *k* knots divides the domain of *f*(*x*) into *k* + 1 mutually exclusive regions, and to each region corresponds a unique polynomial describing the average relationship between *Y* and *x*. With different polynomials describing different parts of the function, *f*(*x*) is not restricted to have the same smoothness throughout its domain. A piecewise polynomial regression spline of order *p* with *k* knots at **t**_{k} = (*t*_{1}, *t*_{2}, …, *t*_{k}) is written as

$$f(x) = \sum_{j=0}^{p} \beta_j x^j + \sum_{l=1}^{k} \beta_{p+l}\,(x - t_l)_+^p, \tag{2}$$

where β = (β_{0}, β_{1}, …, β_{p+k}) is a fixed set of parameters and (*u*)_{+} = max(*u*, 0).
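Piecewise polynomial regression splines of this form are commonly implemented with the truncated power basis {1, *x*, …, *x*^{p}, (*x* − *t*_{1})_{+}^{p}, …, (*x* − *t*_{k})_{+}^{p}}. The sketch below (synthetic data, arbitrary knot locations, and a helper function written for this illustration) builds the corresponding design matrix and fits the coefficients by least squares:

```python
import numpy as np

def truncated_power_design(x, knots, p):
    """Design matrix for an order-p piecewise polynomial regression spline:
    columns 1, x, ..., x^p followed by (x - t_l)_+^p for each knot t_l."""
    x = np.asarray(x, dtype=float)
    poly = np.vander(x, p + 1, increasing=True)                     # 1, x, ..., x^p
    trunc = np.clip(x[:, None] - np.asarray(knots), 0, None) ** p   # (x - t_l)_+^p
    return np.hstack([poly, trunc])

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.where(x < 0.5, x, 1.0 - x) + rng.normal(0.0, 0.05, size=x.size)  # kinked truth

p, knots = 3, [0.25, 0.5, 0.75]               # order p, k = 3 knots
X = truncated_power_design(x, knots, p)       # m x (p + k + 1) design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # beta_0, ..., beta_{p+k}
print(X.shape)  # (200, 7)
```

Note that the columns of this basis become nearly collinear as knots cluster, which is the instability that motivates the b-splines discussed below.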

A natural polynomial regression spline takes a form nearly identical to the piecewise regression spline seen above. It differs, however, in that it restricts the function to be linear at the boundaries, which sets the last term on the right-hand side of (2) to 0. A natural polynomial regression spline of order *p* thus takes the form

$$f(x) = \mathbf{c}(x, \mathbf{t}_k)\,\boldsymbol{\beta}.$$

If *f*(*x*) were modeled as such, (1) could then be re-written as

$$Y = \mathbf{c}(x, \mathbf{t}_k)\,\boldsymbol{\beta} + \varepsilon,$$

where **c**(*x*, **t**_{k}) is the design vector of the natural spline basis evaluated at *x* and β = (β_{0}, β_{1}, β_{2}, …, β_{1+k}), and for *m* observations collected at (*x*_{1}, *x*_{2}, …, *x*_{m}), it would be written as

$$\mathbf{y} = \begin{pmatrix} \mathbf{c}(x_1, \mathbf{t}_k) \\ \vdots \\ \mathbf{c}(x_m, \mathbf{t}_k) \end{pmatrix} \boldsymbol{\beta} + \boldsymbol{\varepsilon}. \tag{3}$$

The natural polynomial regression spline written in Equation (3) illustrates the concept behind splines, but the design matrix composed of the row vectors **c**(*x*_{1}, **t**_{k}), …, **c**(*x*_{m}, **t**_{k}) is unstable and thus rarely used. B-splines, which are computationally more stable, are preferred (Hastie and Tibshirani, 1990; Zhou and Shen, 2001). The functional form of a b-spline is more complex than that of the natural polynomial regression spline, but is readily available (de Boor, 1978). We write it as

$$f(x) = \mathbf{b}(x, \mathbf{t}_k)\,\boldsymbol{\alpha},$$

where **b**(*x*, **t**_{k}) is the design vector associated with a b-spline evaluated at *x* with *k* knots at **t**_{k} = (*t*_{1}, *t*_{2}, …, *t*_{k}), and α is a fixed set of parameters. For *m* observations collected at (*x*_{1}, …, *x*_{m}), the model would be written as

$$\mathbf{y} = \begin{pmatrix} \mathbf{b}(x_1, \mathbf{t}_k) \\ \vdots \\ \mathbf{b}(x_m, \mathbf{t}_k) \end{pmatrix} \boldsymbol{\alpha} + \boldsymbol{\varepsilon}.$$

With this model, estimating *f*(*x*) becomes a problem of estimating the number of knots, *k*, the locations of those knots, **t**_{k}, and α.
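For fixed *k* and **t**_{k}, estimating α reduces to least squares against the b-spline design matrix. A minimal sketch, assuming SciPy's `BSpline.design_matrix` (available in SciPy ≥ 1.8) and arbitrary knot locations:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 150))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, size=x.size)

degree = 3                                  # cubic b-splines
interior = np.array([0.2, 0.4, 0.6, 0.8])   # t_k: k = 4 interior knot locations
# Clamped knot sequence: boundary knots repeated degree + 1 times
t = np.r_[np.zeros(degree + 1), interior, np.ones(degree + 1)]

B = BSpline.design_matrix(x, t, degree).toarray()  # rows are b(x_i, t_k)
alpha, *_ = np.linalg.lstsq(B, y, rcond=None)      # fixed coefficients alpha
f_hat = B @ alpha
print(B.shape)  # (150, 8): k + degree + 1 basis functions
```

Unlike the truncated power basis, each b-spline basis function has local support, which keeps the design matrix well conditioned.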

A variety of methods have been developed for estimating *k* and **t**_{k}. Halpern (1973) approached this problem using Bayesian methods. Allowing knots to be placed only at the design points of the experiment, he considered all subsets of the design points, assigned prior probabilities to these subsets, and calculated the corresponding posterior probabilities. His estimator of the function was based on these posterior probabilities. Denison, Mallick, and Smith (1998) placed priors on the number of knots, *k*, and their locations, **t**_{k}. With these priors, they calculated a joint posterior distribution that included the variables *k* and **t**_{k} and then sampled from this posterior distribution using reversible jump MCMC methods. They too restricted the knots to the design points of the experiment. DiMatteo, Genovese, and Kass (2001) proposed a method similar to that of Denison et al., but they did not restrict the knots to the design points, and they penalized models with unnecessarily large values of *k* by averaging over the fixed-effect coefficients, α.

Friedman (1991) and Stone et al. (1997) approached knot selection using backward and/or forward selection, continuing until the "best" model has been identified. Zhou and Shen (2001) used an alternative method for identifying the locations of the knots: they constructed an algorithm that favored adding knots in locations where several knots had already been added. Lindstrom (1999) used similar methods when selecting knot locations.

Splines have also been used to model curves that vary within and between subjects (Shi et al., 1996; Rice and Wu, 2001; Behseta et al., 2005; Crainiceanu et al., 2005). Brumback and Rice (1998) use splines to model nested functions which vary between subjects and between groups of subjects. Functional models which only account for variation between and within subjects take the general form

$$Y_i(x_j) = f(x_j) + G_i(x_j) + \varepsilon_{ij}, \tag{4}$$

where *Y*_{i}(*x*_{j}) is the observation of the *i*^{th} individual at *x*_{j}, *f*(*x*) can be thought of as a population curve, and *G*_{i}(*x*) is a random curve specific to subject *i*. While these functions can be modeled in a variety of ways, we model these two functions as free-knot b-splines with *k* and *k*′ knots located at **t**_{k} and **t**_{k′}, respectively. Setting *f*(*x*) = **b**(*x*; **t**_{k})α and *G*_{i}(*x*) = **b**(*x*; **t**_{k′})γ_{i}, where γ_{i} are independent random vectors such that γ_{i} ~ *N*(0, Σ_{γ}), equation (4) becomes

$$Y_i(x_j) = \mathbf{b}(x_j; \mathbf{t}_k)\,\boldsymbol{\alpha} + \mathbf{b}(x_j; \mathbf{t}_{k'})\,\boldsymbol{\gamma}_i + \varepsilon_{ij}. \tag{5}$$
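The data-generating structure of this two-level model can be simulated directly once the bases, α, Σ_{γ}, and σ² are fixed. The sketch below uses arbitrary values for all of these and illustrates only the model itself, not any estimation or knot-sampling procedure:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, interior, degree=3):
    """Clamped b-spline design matrix on [0, 1] with the given interior knots."""
    t = np.r_[np.zeros(degree + 1), np.asarray(interior, float), np.ones(degree + 1)]
    return BSpline.design_matrix(x, t, degree).toarray()

rng = np.random.default_rng(3)
n_subj, m = 8, 50
x = np.linspace(0.01, 0.99, m)

B_f = bspline_design(x, [0.3, 0.7])   # population basis b(x; t_k),  k  = 2
B_g = bspline_design(x, [0.5])        # subject basis   b(x; t_k'),  k' = 1
q = B_g.shape[1]

alpha = rng.normal(0.0, 1.0, B_f.shape[1])       # fixed coefficients
Sigma_g = 0.05 * (np.eye(q) + np.ones((q, q)))   # a non-diagonal covariance for gamma_i
sigma = 0.1                                      # within-subject error sd

# Y_i(x_j) = b(x_j; t_k) alpha + b(x_j; t_k') gamma_i + eps_ij
gamma = rng.multivariate_normal(np.zeros(q), Sigma_g, size=n_subj)
Y = B_f @ alpha + gamma @ B_g.T + sigma * rng.normal(size=(n_subj, m))
print(Y.shape)  # (8, 50): one simulated curve per subject
```

The population and subject-level knot sets are chosen to differ here, matching the general case in which **t**_{k′} need not equal **t**_{k}.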

Bigelow and Dunson (2007) consider a variation of this model and identify the location of the knots by sampling from the full posterior distribution. They lessen the computational burden of this sampling procedure by setting **t**_{k′} = **t**_{k} and restricting Σ_{γ} to be diagonal. In this paper, we consider a more flexible model and explore the computational issues associated with it. Specifically, we eliminate the restrictions imposed by Bigelow and Dunson and then try to identify the number and location of the knots by sampling from *p*(*k*, **t**_{k}, *k*′, **t**_{k′} | *y*). This posterior cannot be sampled from exactly, however, because eliminating the diagonal restriction on Σ_{γ} makes the likelihood *p*(*y* | *k*, **t**_{k}, *k*′, **t**_{k′}) intractable. We thus sample from two approximations to this posterior; each approximation corresponds to a different approximation to the likelihood *p*(*y* | *k*, **t**_{k}, *k*′, **t**_{k′}). To assess the accuracy of these approximations, we compare the posterior distribution of knots in each case to the marginal distribution of knots when sampling from *p*(*k*, **t**_{k}, *k*′, **t**_{k′}, ψ, σ^{2} | *y*), where ψ is a set of covariance parameters and σ^{2} is the within-subject variability. The marginal distribution of knots when sampling from this higher-dimensional posterior is exact, as its corresponding likelihood is available in closed form. Sampling from this larger posterior, however, is less efficient because of the additional parameters that need to be sampled.

In Section 2, we discuss the linear model given in (5) in more detail. In Section 3, the likelihood of interest, *p*(*y* | *k*, **t**_{k}, *k*′, **t**_{k′}), is discussed, as are the two methods we use to approximate this likelihood. Section 4 shows the algorithm used to sample from the resulting approximate posterior distributions, and Section 5 shows the algorithm we use to sample from *p*(*k*, **t**_{k}, *k*′, **t**_{k′}, ψ, σ^{2} | *y*). Section 6 theoretically compares the approximations, and in Section 7, we describe a simulation study performed to explore the differences in these approximations. In Section 8, we apply the preferred approximation to a real data set.