In the past decade, mathematical models of viral infection have been successfully applied to a number of problems on the periphery of the annual public health problem that is influenza [1]. In the laboratory, mathematical models have aided the development of efficient vaccine production techniques [2] and improved the quantitative characterization of antiviral drug action [3]. Mathematical models have also improved our understanding of the course of the disease within human [4] and animal hosts [5]. Because these models serve as a bridge between the microscopic scale (where virus interacts with cell) and the macroscopic scale (where the infection is manifested as a disease), they will inevitably be applied in the future to pressing public health questions such as the estimation of virulence and fitness for emerging strains, the spread of drug resistance and, more generally, the connections between viral genotypic information and clinical data.

The success of a within-host virus infection model depends on an accurate representation of biological reality. This allows a model not only to describe the phenomenon under consideration, but also to make reliable predictions about unobserved consequences. For example, in 1995 a simple model of HIV dynamics was applied to describe the observed exponential clearance of virus under the administration of a drug suppressing viral production [6]. The primary result of this work, however, was not the description of viral clearance itself, but the prediction of dynamics in the absence of drug, i.e., that high viral clearance must be balanced by high viral production, which in turn allows for extremely rapid mutation of the virus strain. This conclusion had important implications for the development of therapy, specifically the necessity of a “drug cocktail”. For influenza infections, the primary clinical data available to a mathematical modeler is the viral titer over the course of an infection, usually obtained by a daily nasal wash collected from an infected patient. This data generally follows a simple functional form in time which can be reproduced by a variety of dynamical models. Thus, if meaningful information is to be extracted from such data, the model applied must already be a trusted simulator of the underlying infection kinetics. In this paper, we consider evidence from laboratory infection experiments which must inform the construction of a mathematical model, focusing specifically on the implementation of the time spent by a cell in each of the various stages of infection.

The basic viral infection model [4,7] assumes interaction of virus with cells in four different states (Figure ), and is based on a coarse-grained view of the virus replication cycle. Cells that have not yet been infected by the virus, but are susceptible to infection, are considered target cells (*T*). The interaction of virus with target cells leads to these cells becoming latently infected (*L*), i.e., infected but not producing virus. After infection, a time *t*_{L} passes before new virus particles are released and the cell enters the infectious (*I*) state; during this latent period the virus particle is unpacked, its genome is delivered to the cell nucleus, replication begins, and new particles assemble at the plasma membrane. After a subsequent time *t*_{I}, the infectious cell halts virus production and transitions into a state we will refer to as dead (*D*).
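As a concrete illustration, the coarse-grained model above can be sketched as a system of rate equations in the special case of exponentially distributed transition times (the ODE form discussed later in this section). This is a minimal sketch only; all parameter values below are hypothetical, chosen for demonstration rather than fitted to any data.

```python
# Minimal forward-Euler sketch of the four-state model (T, L, I, D) plus
# free virus V, assuming exponential transition times (the ODE form).
# All parameter values are hypothetical and chosen only for illustration.

def step(state, dt, beta=1e-5, k_L=0.5, k_I=0.25, p=10.0, c=3.0):
    """Advance (T, L, I, D, V) by one Euler step of size dt.
    beta: per-virion infection rate; 1/k_L, 1/k_I: mean latent and
    infectious times; p: virus production rate; c: virus clearance rate."""
    T, L, I, D, V = state
    dT = -beta * T * V            # target cells become latently infected
    dL = beta * T * V - k_L * L   # latent cells progress to infectious
    dI = k_L * L - k_I * I        # infectious cells halt production ("die")
    dD = k_I * I
    dV = p * I - c * V            # production by I cells, exponential decay
    return (T + dT * dt, L + dL * dt, I + dI * dt, D + dD * dt, V + dV * dt)

state = (1e4, 0.0, 0.0, 0.0, 10.0)   # mostly target cells, small inoculum
for _ in range(10_000):               # integrate to t = 100
    state = step(state, 0.01)
```

Note that the four cell compartments are conserved: every cell leaving *T* eventually appears in *D*, which provides a simple sanity check on any implementation.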

The implementation of a particular dynamical structure on this basic model requires a more detailed specification of the biological processes. The infection of cells (the transition of target cells to latently-infected cells) has been observed to be a Poisson process where the rate of infection is proportional to the local virus concentration [8], and it is implemented in the model as a continuous representation of that stochastic process. Virus production by infectious cells can be assumed to proceed at a constant rate and the infectivity of free virus is known to decrease exponentially in time [3,9], leading to a simple equation for virus dynamics. To complete the dynamical description, one must specify how a latently infected cell becomes infectious and for how long infectious cells produce virus. In other words, one must specify the distribution of the delays, *t*_{L} and *t*_{I}, between the states of infection.
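The correspondence between the cell-level Poisson picture and its continuous (mean-field) implementation can be checked with a short simulation. The rate constant, titer, and time span below are arbitrary illustrative values, not measured quantities.

```python
# Monte Carlo sketch: per-cell infection as a Poisson process with rate
# proportional to the virus concentration, compared with the continuous
# (mean-field) fraction used in the model. Values are illustrative only.
import math
import random

random.seed(1)
beta, V, t = 2e-6, 1e5, 4.0      # hypothetical rate constant, titer, time
rate = beta * V                   # per-cell infection hazard

n_cells = 100_000
infected = sum(1 for _ in range(n_cells)
               if random.expovariate(rate) < t)  # first infection event < t?

predicted = 1.0 - math.exp(-rate * t)  # continuous-model infected fraction
print(infected / n_cells, predicted)   # both ≈ 0.55
```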

In an epidemiological context, the problem of implementing generic delays between infected classes was first considered by Kermack and McKendrick in their seminal 1927 work on infectious disease dynamics [10]. Hethcote and Tudor [11] introduced a general approach to the problem, using a probability density function for the time spent in a given state, which has been applied frequently in the field of mathematical epidemiology (see, e.g., [12-14] and references therein). Here, we will apply the same approach to within-host influenza viral infections, resulting in a model with differential equations to describe target cell and virus dynamics, and integral equations to describe the latent and infectious cell populations (a similar approach was considered for HIV in [15]).

Mathematically, the simplest choice of delay distribution is exponential (shown in Figure with a few other choices), because it reduces the model to a system of ordinary differential equations (ODEs). For that reason, it is the most commonly-used model type for both epidemiological and within-host problems. In viral infections, however, the assumption of an exponential distribution seemingly conflicts with the biological evidence. For example, if the time of latent infection is chosen from such a distribution, the model would predict that a significant fraction of cells begin producing virus almost immediately after infection. In reality, however, there is always a minimum delay prior to viral release: endocytosis and the fusion of the viral envelope with the endosome take, on average, half an hour [16]; the viral RNA enters the nucleus in most cells within the next hour [17]; mRNA is transcribed in the nucleus, then transported back to the cytoplasm for translation, and newly formed M1 matrix proteins are observed only three hours after infection, on average, and hemagglutinin four hours post-infection [17]; newly formed glycoproteins, matrix proteins and nucleocapsids then must assemble at the cell membrane, bud off and be cleaved from the sialic acid receptors [18]. Each of these steps and their timings depend on virus strain and cell type, and one can expect significant variation between cells, but a long delay without viral production is an essential characteristic of the infection cycle. Influenza virus-induced cell death is less well characterized: the mechanism of cell killing (apoptosis or necrosis) depends on cell type [19,20], and the timing of apoptosis in particular is strongly strain dependent [21]. In this situation, a broad freedom in selecting the distribution for infectious cell lifespans is warranted.
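The scale of this conflict is easy to quantify. Taking a hypothetical 6 h mean latent time and a 1 h minimum plausible release time (both values illustrative, not fitted), an exponential distribution places roughly 15% of cells below the minimum delay, while a gamma distribution with the same mean (shape 20, an arbitrary choice) places essentially none there:

```python
# Sketch of why an exponential latent-time distribution is biologically
# questionable: with the same mean, it puts far more probability mass
# below a minimum release delay than a gamma distribution does.
# The 6 h mean, 1 h cutoff, and shape 20 are illustrative values only.
import math

mean_tL = 6.0   # hypothetical mean latent time (hours)
cutoff = 1.0    # hypothetical earliest plausible release time (hours)

# Exponential CDF at the cutoff: fraction of cells "producing" within 1 h.
frac_exp = 1.0 - math.exp(-cutoff / mean_tL)

# Gamma (Erlang) with integer shape n = 20 and the same mean.
n = 20
lam = n / mean_tL   # rate parameter so that mean = n / lam
# Erlang CDF: F(t) = 1 - exp(-lam*t) * sum_{k=0}^{n-1} (lam*t)^k / k!
s = sum((lam * cutoff) ** k / math.factorial(k) for k in range(n))
frac_gamma = 1.0 - math.exp(-lam * cutoff) * s

print(frac_exp, frac_gamma)   # ~15% vs essentially zero
```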

Despite questions about their biological appropriateness, ODE models have had success in describing in vivo infection data (for influenza see, for example, [4,5]). Models with non-exponential delays have been similarly successful, including those with Dirac delta function transition distributions, leading to a delay-differential model [3,4,22]; and multi-compartmental ODE models (with *n* sequential phases of infection) yielding delays with a gamma-function distribution [23-25]. Here, we consider a set of in vitro experiments which allows for some discrimination between models, namely the single-cycle viral yield assay. By fitting models with different transition distributions (Figure ) to single-cycle assay data, we show that the correct implementation of delays is crucial to the success of a model in describing these assays. Using these results, we consider in vivo data from challenge experiments in humans to explore how the choice of delays affects the parameter values extracted when fitting the model to experimental data.
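The connection between the multi-compartmental construction and the gamma distribution can be verified numerically: if each cell passes through *n* sequential sub-stages with independent, exponentially distributed waiting times, the total delay follows an Erlang (integer-shape gamma) distribution. The values of *n* and the mean below are illustrative only.

```python
# Sketch of the multi-compartment idea: n sequential exponential sub-stages
# produce a total delay with an Erlang (gamma) distribution. The chosen
# n and mean delay are arbitrary illustrative values.
import random

random.seed(2)
n, mean_total = 10, 6.0          # hypothetical shape and mean delay (hours)
stage_mean = mean_total / n      # mean waiting time per sub-stage

samples = [sum(random.expovariate(1.0 / stage_mean) for _ in range(n))
           for _ in range(50_000)]

m = sum(samples) / len(samples)
var = sum((x - m) ** 2 for x in samples) / len(samples)
# Gamma(shape=n, scale=stage_mean): mean = n*scale, variance = n*scale**2.
print(m, var)   # ≈ 6.0 and ≈ 3.6
```

The variance, *n*·(mean/*n*)², shrinks as *n* grows, so the sub-stage count controls how sharply peaked the delay is; the Dirac delta and exponential distributions are recovered as the *n* → ∞ and *n* = 1 limits, respectively.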