Many organisms navigate gradients by alternating straight motions (runs) with random reorientations (tumbles), transiently suppressing tumbles whenever attractant signal increases. This induces a functional coupling between movement and sensation, since tumbling probability is controlled by the internal state of the organism which, in turn, depends on previous signal levels. Although a negative feedback tends to maintain this internal state close to adapted levels, positive feedback can arise when motion up the gradient reduces tumbling probability, further boosting drift up the gradient. Importantly, such positive feedback can drive large fluctuations in the internal state, complicating analytical approaches. Previous studies focused on what happens when the negative feedback dominates the dynamics. By contrast, we show here that there is a large portion of physiologically-relevant parameter space where the positive feedback can dominate, even when gradients are relatively shallow. We demonstrate how large transients emerge because of non-normal dynamics (non-orthogonal eigenvectors near a stable fixed point) inherent in the positive feedback, and further identify a fundamental nonlinearity that strongly amplifies their effect. Most importantly, this amplification is asymmetric, elongating runs in favorable directions and abbreviating others. The result is a “ratchet-like” gradient climbing behavior with drift speeds that can approach half the maximum run speed of the organism. Our results thus show that the classical drawback of run-and-tumble navigation—wasteful runs in the wrong direction—can be mitigated by exploiting the non-normal dynamics implicit in the run-and-tumble strategy.
Countless bacteria, larvae and even larger organisms (and robots) navigate gradients by alternating periods of straight motion (runs) with random reorientation events (tumbles). Control of the tumble probability is based on previously encountered signals. A drawback of this run-and-tumble strategy is that occasional runs in the wrong direction are wasteful. Here we show that there is an operating regime within the organism’s internal parameter space where run-and-tumble navigation can be extremely efficient. We characterize how the positive feedback between behavior and sensed signal results in a type of non-equilibrium dynamics, with the organism rapidly tumbling after moving in the wrong direction and extending runs in the right one. The organism can therefore locate a distant source quickly.
Navigation up a gradient succeeds by finding those directions in which signals of interest increase. This can be difficult when the size of the navigator is small compared to the length scale of the gradient because local directional information becomes unreliable. In this case, cells [1, 2], worms, larvae, and even robots often adopt a run-and-tumble strategy to navigate. During runs the organism moves approximately straight, collecting differential sensory information in one direction. Tumbles, or reorientations at zero speed, enable the organism to explore other directions. Signal levels are transduced rapidly to the motility apparatus through an internal state variable, so that increases in attractant transiently raise the probability to run longer (and tumble less) before a negative integral feedback adapts it back. Classically, averaging over many runs and tumbles results in a net drift up the gradient, although this is usually rather modest because of the occasional runs in the wrong direction. We here focus on the positive feedback inherent to this strategy, wherein motion up the gradient lowers the probability to tumble, which further boosts drift up the gradient. Our analysis reveals an unstudied regime in which rapid progress can be achieved. Small fluctuations in the speed of the organism along the gradient grow into large transients in the correct direction but small ones otherwise. We show that this asymmetric amplification arises from the positive feedback, which causes the eigenvectors near the adapted state of the dynamical system to become non-orthogonal, thereby leading to non-normal dynamics. The resulting large transients are further boosted by a nonlinearity that is intrinsic to the positive feedback. Such non-normal dynamics were first discovered in fluid mechanics, where they were shown to play an important role in the onset of turbulence in the absence of unstable modes [7, 8].
Past theoretical studies of run-and-tumble navigation have mostly focused on what happens when adaptation dominates the dynamics (e.g. [9–13]). In this regime, the internal state of the organism exhibits small fluctuations around its mean, and mean field theory (MFT) can be applied to make predictions. This approach has been used to describe the motile behavior of populations of E. coli bacteria in exponential ramps [13–15] and oscillating gradients. Beyond the well-understood negative-feedback-dominated regime there is a large portion of physiologically relevant parameter space where the positive feedback between movement and sensation dominates the run-and-tumble dynamics. Agent-based simulations have shown that, in this case, large transient fluctuations can emerge in the internal state of an individual organism climbing a gradient, precluding the use of mean field approaches. While systems of partial differential equations (PDEs) can be integrated numerically to reproduce these dynamics, a precise understanding of the role of the positive feedback in generating such large fluctuations, and of the impact of those fluctuations on the performance of a biased random walk, are fundamental questions that remain largely unanswered because of difficulties in obtaining analytical results.
Here we develop an analytical model of run-and-tumble gradient ascent that preserves the rich nonlinearity of the problem and incorporates the internal state, 3D-direction of motion, and position of the organism as stochastic variables. We find that large fluctuations in the internal state originate from two key mechanisms: (i) the non-normal dynamical structure of the positive feedback that enables small fluctuations to grow, and (ii) a quadratic nonlinearity in the speed along the gradient that further amplifies such transients asymmetrically. Utilizing phase space analysis and stochastic simulations, we show how these two effects combine to generate a highly effective “ratchet-like” gradient-climbing mode that strongly mitigates the classic drawback of biased random walks: wasteful runs in wrong directions. In this new regime an organism should be able to achieve drift speeds on the order of the maximum swim speed. Our results are general in that they apply to a large class of biased random walk strategies, where run speed and sampling of new directions may be modulated based on previously encountered signals.
Consider a random walker with an internal state variable F that follows linear relaxation towards the adapted state F0 over the timescale tM, which represents the memory duration of the random walker. We assume that the perceived signal, ϕ(X, t) = ϕ(C(X, t)), at position X and time t (here C represents the signal), is rapidly transduced to determine the value of an internal state variable F via a receptor with gain N:
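The displayed form of Eq (1) is not shown here. A form consistent with the stated linear relaxation (timescale tM, adapted state F0) and rapid transduction with receptor gain N (a reconstruction, not necessarily the published equation; the sign convention is chosen so that increasing attractant raises F and hence the run probability described below) is:

```latex
\frac{\mathrm{d}F}{\mathrm{d}t} \;=\; -\,\frac{F - F_0}{t_M} \;+\; N\,\frac{\mathrm{d}\phi(X,t)}{\mathrm{d}t}
```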
Stochastic switching between runs and tumbles depends on F and follows inhomogeneous Poisson statistics with probability to run r(F) = λT/(λR + λT) = 1/(1 + exp(−HF)), where H is the gain of the motor, and λR(F) and λT(F) are the transition rates from run to tumble and vice versa [15, 18].
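As a minimal numerical sanity check (our sketch with hypothetical parameter values, not code from the paper): writing λT = r/tS and λR = (1 − r)/tS so that λR + λT = 1/tS, the stationary fraction of time spent running in the two-state process matches r(F).

```python
import numpy as np

def run_probability(F, H=1.0):
    """r(F) = lambda_T/(lambda_R + lambda_T) = 1/(1 + exp(-H F))."""
    return 1.0 / (1.0 + np.exp(-H * F))

def simulate_occupancy(F, t_s=0.1, dt=0.002, T=500.0, H=1.0, seed=0):
    """Two-state (run/tumble) Poisson switching at fixed F; returns the
    fraction of time spent running, which should approach r(F)."""
    rng = np.random.default_rng(seed)
    r = run_probability(F, H)
    lam_run_to_tumble = (1.0 - r) / t_s   # lambda_R
    lam_tumble_to_run = r / t_s           # lambda_T
    state_running = True
    time_running = 0.0
    for _ in range(int(T / dt)):
        if state_running:
            time_running += dt
            if rng.random() < lam_run_to_tumble * dt:
                state_running = False
        elif rng.random() < lam_tumble_to_run * dt:
            state_running = True
    return time_running / T
```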
During runs the speed is constant and the direction of motion is subject to rotational Brownian motion with diffusion coefficient DR. During tumbles the speed is nil and reorientation follows rotational diffusion with coefficient DT > DR to account for persistence effects. Taken together, these two processes cause the random walker to lose its original direction at an expected rate 1/tD, where n = 2, 3 for two- and three-dimensional motion respectively. Note that in this minimal model we ignore possible internal signaling noise [20, 21]; all randomness comes from the rotational diffusions DR and DT as well as the stochastic switching with rates λR(F) and λT(F). The effect of signaling noise is considered below using agent-based simulations. Since ϕ(C) can be nonlinear, Eq (1) includes possible effects of saturation of the sensory system.
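A plausible form for this direction-decorrelation rate, consistent with rotational diffusion on the (n − 1)-sphere decorrelating direction at rate (n − 1)D and with runs and tumbles weighted by their occupancies (a reconstruction, not the published equation), is:

```latex
\frac{1}{t_D} \;=\; (n-1)\left[\, r\,D_R \;+\; (1-r)\,D_T \,\right]
```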
Consider a static one-dimensional gradient and define the length scale of the perceived gradient as L(X) = 1/||∇ϕ(X)|| and the component of the direction of motion along the gradient as s. Then from Eq (1) the internal dynamics satisfies the following equations during runs and tumbles, respectively:
We are interested in the displacement of the random walker along the gradient over timescales longer than individual runs and tumbles. In the limit where the switching timescale tS = 1/(λR + λT) is much shorter than the other timescales we derive from a two-state stochastic model and Eq (2) (Methods Eqs (13) and (14)):
where f = HF. The first term is the negative feedback towards the adapted run probability r0 = r(f0). The second term shows how motion up the gradient (s > 0) causes the probability to run r to feed back on itself—when the organism is oriented up the gradient (s > 0), F increases only during runs (Eq (2)); this increase in turn raises r(F), making the dynamics of F more likely to follow the run equation rather than the tumble equation of Eq (2), and so on. A positive feedback is thereby created with characteristic timescale tE = L/(NHv0). A steeper gradient (smaller L), stronger receptor gain N or motor gain H, or faster speed v0 all lead to stronger positive feedback (shorter tE). This important timescale, tE, together with tM (memory duration) and tD (direction decorrelation time), effectively determines the dynamics.
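Eq (3) itself is not shown in this text. Consistent with Eq (2), the definition tE = L/(NHv0), and the chain rule dr/df = r(1 − r) invoked in Methods, it plausibly reads (a reconstruction):

```latex
\frac{\mathrm{d}f}{\mathrm{d}t} \;=\; \frac{f_0 - f}{t_M} \;+\; \frac{r\,s}{t_E},
\qquad\text{equivalently}\qquad
\frac{\mathrm{d}r}{\mathrm{d}t} \;=\; r(1-r)\left[\frac{f_0 - f}{t_M} + \frac{r\,s}{t_E}\right].
```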
Expressing time in units of tM, we introduce the following two non-dimensional timescales: τE = tE/tM and τD = tD/tM.
Here τE quantifies the ratio between the negative and positive feedbacks. (See Table 1 for a summary of the symbols used.) From above, we expect that the dynamics will depend on how τE and τD compare with one.
To explore how run-and-tumble dynamics depend on τE and τD, we used a previously published stochastic agent-based simulator of the bacterium E. coli that reproduces well the available experimental data on the wild-type laboratory strain RP437 ([15, 22] and S1 Appendix). In this case the internal state F represents the free energy of the chemoreceptors. Since E. coli approximately detects log-concentrations (S1 Appendix Eq (S11)), we simulated an exponential gradient so that τE is a constant. In this case the cells reach steady state with a constant drift speed VD. Calculating VD from 10^4 simulated trajectories for a range of τE and τD0 = τD(r0) values reveals that cells climb the gradient much faster when the positive feedback dominates (τE < 1) (Fig 1A). The trajectories of individual cells resemble that of a ratchet that moves almost exclusively in one direction (Fig 1B green). In contrast, when the negative feedback dominates (τE > 1) the trajectories exhibit both up and down runs of similar, although slightly biased, lengths (Fig 1B red). VD also depends on τD and peaks when the direction decorrelation time is on the same order as the memory duration (τD ≈ 1), consistent with previous studies [11, 23, 24]. In these simulations the adapted probability to run r0 = 0.8 and the ratio DT/DR = 37 were kept constant. Changing these values did not change the main results (S1A–S1C Fig).
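The published simulator is described in S1 Appendix. As a much simpler illustration of the same qualitative effect, the following sketch (our construction with hypothetical parameters, using 1D directions s = ±1 in place of full rotational diffusion) shows that drift is faster when the positive feedback dominates (small τE):

```python
import numpy as np

def drift_speed(tau_E, t_M=1.0, t_S=0.05, f0=1.0, v0=1.0,
                n_cells=300, T=60.0, dt=0.005, seed=0):
    """Toy 1D run-and-tumble agents in an effectively exponential gradient.
    During a run in direction s: df = (-(f - f0)/t_M + s/t_E) dt;
    during a tumble f only relaxes, and a fresh direction is drawn on restart."""
    rng = np.random.default_rng(seed)
    t_E = tau_E * t_M
    f = np.full(n_cells, f0)
    s = rng.choice([-1.0, 1.0], size=n_cells)
    running = np.ones(n_cells, dtype=bool)
    x = np.zeros(n_cells)
    for _ in range(int(T / dt)):
        r = 1.0 / (1.0 + np.exp(-f))          # run probability r(f)
        # internal state: relaxation to f0 plus signal input during runs
        f += dt * (-(f - f0) / t_M + np.where(running, s, 0.0) / t_E)
        x += np.where(running, v0 * s, 0.0) * dt
        # Poisson switching: run->tumble at rate (1 - r)/t_S, tumble->run at r/t_S
        u = rng.random(n_cells)
        stop = running & (u < (1.0 - r) / t_S * dt)
        start = ~running & (u < r / t_S * dt)
        s = np.where(start, rng.choice([-1.0, 1.0], size=n_cells), s)
        running = (running & ~stop) | start
    return x.mean() / T  # mean drift speed in units of v0
```

In this toy model, runs up the gradient drive f (and hence r) up, lengthening them, while runs down the gradient are quickly terminated, so `drift_speed(0.3)` substantially exceeds `drift_speed(3.0)`.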
In a wild type population, individual isogenic cells will have different values of τE and τD0 due to cell-to-cell variability in swimming speed and in the abundance of chemotaxis proteins [22, 25, 26]. In a recent experimental study, the phenotype and performance of individual wild type cells (RP437 strain) were quantified by tracking cells swimming up a static quasi-linear gradient of methyl-aspartate (varying from 0 to 1 mM over 10 mm). This experiment revealed large differences among the performances of individual cells within the isogenic population, which could be reproduced by complementing the model of bacterial chemotaxis just described with a simple model of noisy gene expression (Fig 2 in ). To examine in which region of the (τE, τD0) space these cells might have been operating, we used this same model (complemented with diversity in rotational diffusion coefficients DR and DT due to variations in cell length; see S1 Appendix) with the same parameter values to run simulations of 16,000 cells climbing the experimentally measured gradient (Fig 1A). We find that even in this relatively shallow gradient some cells might have been operating in the positive-feedback-dominated regime, especially near the bottom of the gradient (black dots). As the cells climb the gradient, τE becomes larger (white dots) because, as concentration increases, the log-sensing cells in the quasi-linear gradient face a shallower perceived gradient, and thus weaker positive feedback.
To better understand the origin of the fast drift speed and its associated “ratchet-like” behavior, we examine the relationship between the drift speed VD and the statistics of the internal state f. Using tM as the unit of time and v0tM as the unit of length we derive a Fokker-Planck equation for the probability P(x, f, s, τ) that at time τ = t/tM the cell is at position x = X/(v0tM) with internal state f and orientation s (Methods Eq (14)):
Here the angular operator is the rotational diffusion operator on the (n − 1)-sphere. All symbols used are summarized in Table 1.
For simplicity we consider a log-sensing organism swimming in a static exponential gradient. In this case, τE(x) = τE is constant (more complex gradient profiles and the effect of receptor saturation are considered later in the paper). Therefore the positive feedback becomes independent of position and the system can reach a steady state drift speed. Separating the variable x and integrating over x we obtain (Methods Eq (17))
where ⟨·⟩ represents averaging over f and s and the bar indicates steady state. Eq (6) indicates that the drift speed is determined by the steady state marginal distribution. To find an analytical expression for it, we expand the steady state joint distribution in orthonormal eigenfunctions of the angular operator—the first two expansion coefficients are the marginal distribution p(f) and the first angular moment p1(f)—and discard higher orders to obtain a closed system of equations. The analytical solution for the steady state marginal distribution reads
where W is a normalization constant. The full derivation is provided in Methods Eqs (25)–(27), together with an interpretation of the distribution as a potential solution of the form exp(−V(f)), where V(f) is the “potential”. We also examine how the shape of the potential depends on τE and τD.
The solution is plotted in Fig 1C. When the negative feedback dominates (τE ≫ 1) the distribution is sharply peaked and nearly Gaussian, with a variance given in Methods Eq (32), and its mean barely deviates from the adapted state f0 (Fig 1C red and blue). Substituting into Eq (6) and taking the limit τE ≫ 1 yields known MFT results [11–13] (Methods Eq (43)). When the positive feedback dominates (τE ≪ 1) the distribution instead exhibits large asymmetrical deviations (Fig 1C green) between the lower and upper bounds fL and fU, which satisfy the relations fL = f0 − r(fL)/τE and fU = f0 + r(fU)/τE. For small τE the lower bound decreases as fL → ln τE whereas the upper bound increases as fU → 1/τE (Methods Eq (46)). MFT becomes inadequate in this regime, as recently suggested by 1D approximations. When the positive feedback dominates, matching the memory of the cell with the direction decorrelation time becomes important: keeping the direction of motion long enough (τD ∼ 1) allows the distribution to develop a peak near fU (Fig 1C green), which according to Eq (6) results in a higher drift speed (S2 Fig). We verified that the approximate analytical solution captures the run-and-tumble dynamics well by plotting it against the distribution of f obtained from the agent-based simulations (Fig 1C). Integrating according to Eq (6) predicts the drift speed well for all τE (Fig 1D), including where the positive feedback dominates (τE < 1).
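The bounds fL and fU can be found numerically from their defining relations. A sketch using r(f) = 1/(1 + e^(−f)) and root bracketing (our choice of method, not the paper’s):

```python
import numpy as np
from scipy.optimize import brentq

def run_prob(f):
    """Logistic run probability r(f) = 1/(1 + exp(-f))."""
    return 1.0 / (1.0 + np.exp(-f))

def state_bounds(f0, tau_E):
    """Solve fL = f0 - r(fL)/tau_E and fU = f0 + r(fU)/tau_E.
    Since 0 < r < 1, roots are bracketed within f0 -/+ (1/tau_E + 1)."""
    fL = brentq(lambda f: f - f0 + run_prob(f) / tau_E,
                f0 - 1.0 / tau_E - 1.0, f0)
    fU = brentq(lambda f: f - f0 - run_prob(f) / tau_E,
                f0, f0 + 1.0 / tau_E + 1.0)
    return fL, fU
```

For small τE the interval [fL, fU] widens dramatically (fU approaching f0 + 1/τE), while for large τE it collapses onto f0, matching the narrow and broad distributions described above.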
In the fast gradient climbing regime (τE ≪ 1) trajectories resemble that of a ratchet (Fig 1B). To gain mechanistic insight into this striking efficiency we examined the Langevin system equivalent to the Fokker-Planck Eq (5). Defining v = rs as the normalized run speed projected along the gradient, we change variables from (f, s) to (r, v) and obtain (Methods Eqs (47)–(52)):
where v = dx/dτ and η(τ) denotes delta-correlated Gaussian white noise. The nullclines of the system (Fig 2A and 2C) intersect at the only stable fixed point (r, v) = (r0, 0) of Eq (8) where the eigenvalues of the relaxation matrix
are both negative (Methods Eq (53)). Stochastic fluctuations due to rotational diffusions DR and DT (heat maps in Fig 2A and 2C) continuously push the system away from the fixed point. The magnitude of these fluctuations is large near the fixed point, causing the system to quickly move away. Fluctuations are smaller near r = 1 and v = 1, enabling the organism to climb the gradient at high speed for a longer time. Net drift results from spending more time in the region where v > 0.
Stochastic excursions in the (r, v)-plane away from the fixed point exhibit distinctive trajectories depending on the value of τE. When the positive feedback dominates (τE ≪ 1; Fig 2A) the eigenvectors of the relaxation matrix, one of which is (1, 0)T, are highly non-orthogonal. This defines non-normal dynamics that enable linear deviations to grow transiently [7, 8] and feed the nonlinear positive feedback (the v2 term in the second line of Eq (8)), leading to large deviations. Importantly, this only happens for runs that start in the correct direction. If the run is in the wrong direction the linear deviation does not grow (Fig 2B; see also S1 Movie). Asymmetry arises because the v2 term is always positive. Similar selective amplification properties are observed in neuronal networks, where non-normal dynamics enable the network to respond to certain signals while ignoring others (including noise) [27, 28]. Thus, a random walker running in the correct direction is aided by the positive feedback, which pushes its internal dynamics towards the upper right corner of the phase plane where r = 1 and v = 1. If, instead, the run is in the wrong direction (v < 0), the nonlinearity pushes the system back into the high-noise region near the fixed point, where it rapidly picks a new direction (Fig 2B).
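The transient growth permitted by non-orthogonal eigenvectors can be illustrated with a toy 2 × 2 stable system (an illustrative matrix of our choosing, not the paper’s relaxation matrix): both eigenvalues are negative, yet a perturbation along the second eigenvector is transiently amplified before decaying.

```python
import numpy as np

# Toy stable system dx/dt = A x with eigenvalues -1 and -2 but
# non-orthogonal eigenvectors (1, 0) and (-5, 1)/sqrt(26).
A = np.array([[-1.0, 5.0],
              [0.0, -2.0]])

def norm_at(t, x0):
    """|x(t)| from the closed-form solution of this triangular system:
    x2(t) = x2(0) e^{-2t},  x1(t) = x1(0) e^{-t} + 5 x2(0) (e^{-t} - e^{-2t})."""
    x2 = x0[1] * np.exp(-2.0 * t)
    x1 = x0[0] * np.exp(-t) + 5.0 * x0[1] * (np.exp(-t) - np.exp(-2.0 * t))
    return float(np.hypot(x1, x2))
```

Starting from the unit perturbation (0, 1), the norm grows to about 1.27 at t = ln 2 before relaxing to zero, even though the linearization is stable; this is the essence of non-normal transient amplification.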
In contrast, when the negative feedback dominates (τE ≫ 1; Fig 2C), the eigenvectors become nearly orthogonal. Linear deviations simply relax back to the fixed point regardless of the initial direction of the run. Thus runs up and down the gradient are only marginally different in length, resulting in a small net drift (Fig 2D). This key difference between the positive and negative feedback regimes is reflected in the flow field (white curves in Fig 2A and 2C).
For simplicity, in our analytical derivations we assumed the environment was a constant exponential gradient with concentrations in the (log-sensing) sensitivity range of the organism. Here we explore what happens when the organism encounters concentrations beyond its sensitivity range. For wild type E. coli the change in the free energy of the chemoreceptor cluster due to ligand binding is proportional to ln((1 + C/Ki)/(1 + C/Ka)) (S1 Appendix Eq (S8)). Therefore the receptor is log-sensing to methyl-aspartate only for concentrations Ki ≪ C ≪ Ka, where Ki = 0.0182 mM and Ka = 3 mM are the dissociation constants of the inactive and active states of the receptor. When C < Ki the receptor responds linearly to concentration, whereas when C > Ka the receptors saturate [30–33]: as a cell approaches a high-concentration source its sensitivity decreases (S1 Appendix Eq (S10)). This in turn increases the value of τE. Simulations in an exponential gradient show that this effect results in an eventual slow-down as the cell approaches the source (Fig 3A–3C).
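A quick numerical check of this sensitivity range, using the free-energy form and dissociation constants quoted above (the helper names are ours): in the log-sensing window C · dϕ/dC stays near 1, while above Ka the gain collapses.

```python
import numpy as np

K_I, K_A = 0.0182, 3.0  # mM; dissociation constants quoted in the text

def phi(C):
    """Perceived signal ~ ln((1 + C/K_I)/(1 + C/K_A)) (S1 Appendix Eq (S8) form)."""
    return np.log((1.0 + C / K_I) / (1.0 + C / K_A))

def gain(C):
    """d(phi)/dC = 1/(K_I + C) - 1/(K_A + C); log sensing means C * gain ~ 1."""
    return 1.0 / (K_I + C) - 1.0 / (K_A + C)
```

Since τE scales inversely with the perceived gradient steepness, the drop in gain above Ka directly produces the rise in τE and the slow-down near a strong source described in the text.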
Realistic gradients are typically limited in spatial extent and often are not exponential, in which case L, and therefore τE, differ between regions. In a linear gradient, for example, the perceived length scale L is long near the source and decreases linearly with distance from it. Simulations show that the cell initially climbs the gradient fast but later slows down as the gradient length scale L, and with it τE, increases (Fig 3D–3F). By contrast, for a static localized source in three dimensions, L is short near the source but increases linearly with distance from it (Methods). Thus τE decreases and the cell accelerates as it approaches the source (Fig 3G–3I). Comparing cells in various dynamical regimes (different values of τE) across these different gradients suggests that a lower value of τE results in faster gradient ascent.
On entering a food gradient, it seems natural for an organism to climb as fast as possible. However, this strategy can create a problem: the longer runs implied by the positive feedback mechanism could propel the accelerating E. coli beyond the nutrient source. This is the case in Fig 3E, where cells with the lowest τE (green) reach the source first but overshoot slightly; they settle, on average, farther away than those with intermediate τE (blue). Thus there is a trade-off between transient gradient climbing and long-term aggregating, as previously observed [13, 15, 23]. In nature, chemotactic bacteria live in swarms, chasing and consuming nutrient patches driven by flow and diffusion while new plumes of nutrients are constantly created by other organisms [2, 34]; the actual environments experienced by bacteria are therefore far more complex. The trade-off we found hints that in such small, randomly fluctuating gradients [11, 16, 35–37] bacteria should not aim for maximal drift speed but must balance speed against the risk of overshoot. In general, natural environments will be complex, with a variety of different sources and gradients, implying that different parameter domains will be optimal for E. coli at different times. Such phenotypic diversity may well confer an advantage [15, 37–41].
Our results illustrate the surprising new capabilities that can emerge when living systems exploit the full nonlinearity inherent in an otherwise simple and widely used strategy. For the particular case of bacterial chemotaxis we showed that cells that swim fast, have long memory (adaptation time), or large signal amplification are likely to exhibit “ratchet-like” climbing behavior in a positive-feedback-dominated regime, even in shallow gradients. As we showed with simulations using a model that fits experimental data, this regime should be accessible to wild-type bacterial populations. Actually identifying these “ratcheting” cells from experimental trajectories would require observing them for a sufficient time (T ≫ tM, tD, tE) and in a sufficiently steep gradient over the distance traveled (ΔX = VDT ~ 0.5v0T). Using parameter estimates from [13, 20], for tE < tD ~ tM ~ 10 s we take T ~ 200 s, and for v0 = 20 μm/s we get ΔX = 2 mm. To see how this compares with an existing experimental setup with a quasi-linear gradient varying from 0 to 1 mM over 10 mm, we note that the black dots in Fig 1A show that some cells located 1.5 mm away from 0 concentration can operate in the positive-feedback-dominated regime. Thus, using the same setup, these requirements would be satisfied near the bottom of the gradient if the source concentration were increased to 3 mM.
It is common to make simplifying assumptions to facilitate analysis, but we do not believe that ours are limiting. We showed with simulations that our results hold (see S1 Fig for details) when we take into account: (i) different values of r0 and DT/DR; (ii) the limited range of the receptor sensitivity [15, 18] (S1 Appendix Eq (S10)); (iii) possible nonlinearities (S1 Appendix Eq (S4)) and asymmetries of adaptation rates [14, 42]. A hallmark of E. coli chemotaxis is that, in the absence of a gradient, run-and-tumble behavior adapts back to prestimulus statistics [6, 43]. These robust properties of integral feedback control remain in place in our study because the transients originate from non-normal dynamics around the stable fixed point. The boost from positive feedback described here is independent of other mechanisms that can enhance drift up a gradient, such as imperfect adaptation in the response to some amino acids and stochastic fluctuations in the adaptation mechanism [20, 21]. The latter has been shown to enhance chemotactic performance in shallow gradients by transiently pushing the system into a regime of slower direction changes provided it is running up the gradient. There are some similarities between the effect of signaling noise and the positive feedback mechanism presented here: both can affect drift speed by causing long-lasting asymmetries in the internal state when running up the gradient. In S3 Fig we show using simulations that signaling noise in the adaptation mechanism does not change our conclusion that the drift speed is maximal in the positive-feedback-dominated regime. Depending on the region of the (τE, τD) parameter space, the signaling noise can either enhance the drift speed by less than 10% or reduce it by up to 30%.
The fact that non-normal dynamics might be exploited to boost runs in the correct direction parallels recent findings in neuroscience suggesting that neuronal networks use similar strategies to selectively amplify neural activity patterns of interest. Thus, non-normal dynamics could be a feature that is selected for in living dynamical systems. Although we used bacterial chemotaxis as an example, our results do not depend on the specific form of the functions r(f) and tD(f), provided they are increasing. Therefore our findings should be applicable to a large class of biased random walk strategies exhibited by organisms when local directional information is unreliable. In essence, any stochastic navigation strategy requires a memory, tM, to make temporal comparisons; a reorientation mechanism, tD, to sample new directions; and external information, tE, relayed to decision-making circuitry through motion and signal amplification. Our theoretical contribution is to show the surprisingly diverse behavioral repertoire that becomes possible when these work in concert. In retrospect, perhaps this should not be surprising given the diverse environments in which running-and-tumbling organisms thrive.
A detailed description of the chemotaxis model and agent-based simulations is provided in S1 Appendix (parameters in S1 Table). Briefly, the agent-based simulations were performed using Euler’s method to integrate a standard model of E. coli chemotaxis [12–14, 18, 20] in which the cell relays information from the external environment to the flagellar motor through a signaling cascade triggered by ligand-binding receptors. The receptors are described by a two-state model in which the activity a is determined by the free energy difference F between the active and inactive states, which in turn is set by both the ligand concentration C and the receptors’ methylation level m. At each time step, the cell moves forward or stays in place according to its motility state (run or tumble), which also determines whether its direction changes with rotational diffusion coefficient DR or DT. At the new position, changes in ligand concentration C cause changes in free energy F and thus activity a, and the methylation level adapts to compensate for that change and maintain a constant activity. The updated value of the free energy F then determines the switching rates between clockwise and counter-clockwise rotation of the flagellar motor, which in turn determine the motility state of the cell according to the rules and parameters in , completing one time step.
In Fig 1A we considered a wild type population in the scatter plot. To generate a population with realistic parameters, we used a recent model of phenotypic diversity in E. coli chemotaxis that reproduces available experimental data on the laboratory strain RP437 climbing a linear gradient of methyl-aspartate. In this model individual cells have different abundances of the chemotaxis proteins (CheRBYZAW) and receptors (Tar, Tsr). These molecular abundances then determine the memory time tM and the adapted probability to run r0. Run speeds differed among cells and were sampled from a Gaussian distribution to match experimental observations. Rotational diffusion coefficients were also distributed to reflect differences in cell length.
We define PR and PT as the probability distributions at time t of running or tumbling, respectively, at position X in a given direction with internal variable F. As described, there is Poisson switching between runs and tumbles with rates λR(F) and λT(F), runs and tumbles follow rotational diffusion with coefficients DR and DT, and speed is constant during runs and zero during tumbles. Thus we construct a two-state stochastic master equation model
where the run and tumble dynamics of F are defined in Eq (2).
Since the gradient varies in one direction only, we focus on motion in the gradient direction and integrate the probability over all other directions, leaving the polar angle part of the rotational diffusion operator on the (n − 1)-sphere. To derive its analytical form we note that in n-dimensional space the Laplace-Beltrami operator can be written iteratively as
where 0 < θ < π is the polar angle. In a one-dimensional gradient we define the gradient direction as the polar axis and write s = cos θ. Then the polar angle part is
Using the definitions of the normalized internal state f = HF, the timescale of switching between runs and tumbles tS = 1/(λR + λT), and the probability to run r = λT/(λR + λT), we obtain
If we assume the switching terms (proportional to 1/tS) in Eq (13) dominate, the probabilities of running and tumbling equilibrate on a much faster timescale than the others. Therefore we can let P = PR + PT and approximate the actual probability to run as PR/P ≈ r. Adding the two equations above yields the Fokker-Planck equation:
This is equivalent to a system of Langevin equations. Using dr/df = r(1 − r), the internal variable dynamics (the first term on the right) gives Eq (3), which defines tE. The angular dynamics (the second term on the right) defines tD.
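The identity dr/df = r(1 − r) for the logistic r(f) = 1/(1 + e^(−f)) is easy to verify numerically (a quick check of the step used above, not part of the published Methods):

```python
import numpy as np

def r_of_f(f):
    """Logistic run probability r(f) = 1/(1 + exp(-f)) (gain H absorbed into f)."""
    return 1.0 / (1.0 + np.exp(-f))

# Compare the analytic identity dr/df = r(1 - r) with a central finite difference.
f = np.linspace(-5.0, 5.0, 1001)
h = 1e-5
numeric = (r_of_f(f + h) - r_of_f(f - h)) / (2.0 * h)
analytic = r_of_f(f) * (1.0 - r_of_f(f))
```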
From the Fokker-Planck Eq (5) we consider the steady state, so the time derivative vanishes (∂P/∂τ = 0). For a log-sensing organism moving in an exponential gradient τE does not depend on x. We can therefore integrate over x to get an equation for the marginal steady state distribution—this removes the x term. Integrating over s then gives
where the bar indicates steady state. By the boundary conditions that P → 0 at ±∞, we must have
Here we use separation of variables and expand the solution to the Fokker-Planck Eq (5) as a sum of eigenfunctions of the angular operator on s. We then ignore high-order terms assuming τD0 ≪ 1 and derive an approximate analytical solution.
The eigenvalue problem of the angular operator defined in Eq (12) is
We identify this as the Gegenbauer differential equation, with the Gegenbauer polynomials as eigenfunctions and corresponding eigenvalues proportional to −k(k + n − 2). When n = 3 they are the Legendre polynomials, with eigenvalues proportional to −k(k + 1). The first few Gegenbauer polynomials are
They are orthogonal in the sense that
where the normalization constants depend on k and n. When n = 3 they are 2/(2k + 1), those of the Legendre polynomials.
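The reduction to Legendre polynomials at n = 3, and the orthogonality and normalization just stated, can be checked with SciPy (a verification sketch, not part of the derivation):

```python
import numpy as np
from scipy.special import eval_gegenbauer, eval_legendre
from numpy.polynomial.legendre import leggauss

# For n = 3 the Gegenbauer parameter is alpha = (n - 2)/2 = 1/2,
# and C_k^(1/2) coincides with the Legendre polynomial P_k.
x = np.linspace(-1.0, 1.0, 201)
gegen_equals_legendre = all(
    np.allclose(eval_gegenbauer(k, 0.5, x), eval_legendre(k, x)) for k in range(6)
)

# For n = 3 the spherical weight (1 - s^2)^((n-3)/2) is 1, so orthogonality
# and the normalization 2/(2k + 1) follow from a plain integral over [-1, 1],
# here evaluated exactly with Gauss-Legendre quadrature.
nodes, weights = leggauss(50)

def inner(k, l):
    return float(np.sum(weights * eval_legendre(k, nodes) * eval_legendre(l, nodes)))
```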
The weight in the integration above is consistent with the geometry of the (n − 1)-sphere Sn−1, whose volume element is iteratively defined as
After the change of variable s = cos θ and integration over all remaining dimensions, we see that any integration over s should carry a weight (1 − s²)^((n−3)/2).
From orthogonality and completeness, we write any function of s, in particular the probability distribution P, as a series of Gegenbauer polynomials. When n = 3 this is the Fourier-Legendre series.
where we normalize the definitions to ensure p0 is the same as the marginal distribution. When n = 3, the above is
From now on we denote the marginal distribution p(f) = p0(f). Also, from this definition .
where (summation over l implied) is an operator relating neighboring orders. It comes from the positive feedback term. When n = 3 it is . The first few equations are
In the definition of this operator, when k ≫ 1 the non-zero entries approach a constant 1/2. This means that for large k the coefficients pk in Eq (25) evolve similarly, except that higher orders decay with the faster rates k(k + n − 2)/[(n − 1)τD]. Therefore, when τD0 ≪ 1 we can neglect the second and higher orders, which closes the infinite hierarchy of moment equations and leaves two equations for the zeroth and first marginal moments in s, p(x, f, τ) and p1(x, f, τ) respectively. At steady state the approximation gives the analytical solution
We can interpret the steady state distribution as a potential solution, where V(f) plays the role of a "potential". In this case the equivalent "force" in internal state space is
Since τD0 ≪ 1 the second term dominates, making the "force" spring-like, with spring constant
Three observations can be made from this spring constant to intuitively understand the steady state distribution. (i) k(f) → ∞, i.e. the "spring" becomes infinitely "stiff", when the denominator approaches 0. Therefore, the bounds of the distribution are proportional to 1/τE, the ratio between the positive and negative feedbacks (Eq (4)). Intuitively, a stronger positive feedback (smaller τE) drives the internal state f further away from f0: the spring constant k(f) is smaller and the distribution is wider. (ii) A faster change of direction (smaller τD) leads to a larger spring constant k(f) ∝ 1/τD(f), and thus a distribution more concentrated near the "origin" f0. Intuitively, a shorter direction correlation time τD inhibits the coherent motion in a single direction that the positive feedback needs to consistently drive the internal state f away. (iii) Asymmetries are created by the functional dependencies of r(f) and τD(f), both increasing in f: a "weaker spring" for higher values of f shifts the distribution there. Intuitively, more positive feedback r(f) and more coherent motion τD(f) in the positive direction asymmetrically drive the internal state towards higher values. All three observations can be seen in Fig 1C.
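Observation (iii) can be illustrated with a toy computation: a spring constant that weakens as f increases (here built from r(f) = 1/(1 + e^(−f)); the form of k(f) is illustrative, not the paper's exact expression) shifts the mean of the potential solution p ∝ exp(−V) above f0:

```python
import numpy as np

# Toy illustration of observation (iii): a "spring" that weakens as f
# increases skews the potential solution p ~ exp(-V) toward higher f.
# The form of k(f) below is illustrative, not the paper's expression.
f0 = 0.0
f = np.linspace(-6.0, 6.0, 4001)
df = f[1] - f[0]
r = 1.0 / (1.0 + np.exp(-f))
k = 2.0 * (1.2 - r)                  # decreasing in f: weaker spring at high f

V = np.cumsum(k * (f - f0)) * df     # V(f) = int k(f')(f' - f0) df'
p = np.exp(-(V - V.min()))
p /= p.sum() * df                    # normalize

mean_f = (f * p).sum() * df
print(mean_f)                        # > f0 = 0: distribution shifted upward
```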
First, we write the steady state distribution Eq (27) as
From the Taylor expansion of the integrand in the exponent
where ′ = d/df, we see that if we define
the first term in A(f) will give the Gaussian factor. If we can show that the rest of the terms are small when τD0 < 1, we can write the steady state distribution as a small deviation from a Gaussian.
Indeed, if we consider the integration range , we can write
Similarly, the prefactor is
with normalization constant Z.
We notice from Eq (30) that the range of distribution is bounded by fL and fU, defined by
Since , we see that the integration range is much larger than the standard deviation of the Gaussian factor, and thus can be considered from −∞ to ∞. Therefore we get the normalization constant
Finally, noticing that by the definition of τD in Eq (4)
we can get
Taking DT ≫ DR, we put this back into Eq (38) and get
When converted back to real units (t instead of τ = t/tM), the highest-order term is identical, except for notations, to Eq (3) in Dufour et al.  obtained from a different approach. It can also be reduced to Eq (12) in Si et al.  by assuming a high running probability and a long memory. It agrees with Eq (6.24) in Erban & Othmer  and Eq (16) in Franz et al.  with appropriate inclusion of rotational diffusion.
In Eq (38) we expanded the distribution as a near-Gaussian around f0. From Eq (6) we see that the mean internal state has a slight shift, so it is more accurate to expand around fm. From Eqs (38) and (6) in the main text, we see . Thus, accounting for the shift to fm, the resulting VD has the same form as Eq (42):
The first term in Eq (5) says the flux in f-space is non-negative provided −(f − f0) + rs/τE > 0, or, noting s ≤ 1
Thus the upper bound fU of the distribution p(f) is achieved at equality. Similarly, the lower bound fL is achieved when we take equal signs of
noting s ≥ −1.
As τE becomes small, we note that fU,L deviate far from f0, since 1/τE → ∞. Using the definition r = 1/(1 + exp(−f)), we write
The plus sign gives exp(−fU) ≪ 1 and fU ≈ 1/τE. The minus sign gives exp(−fL) ≫ 1 and fL = −exp(fL)/τE. Taking the logarithm, the latter gives fL = ln (|fL|τE) ≈ ln τE.
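The transcendental relation for fL can be solved by fixed-point iteration; a sketch (the value of τE is illustrative, not one from the paper):

```python
import math

# Solve the transcendental relation f_L = ln(|f_L| * tau_E) for small tau_E
# by fixed-point iteration, and compare with the leading-order
# approximation f_L ~ ln(tau_E). The tau_E value is illustrative.
def f_lower(tau_E, iters=100):
    f = math.log(tau_E)              # leading approximation as initial guess
    for _ in range(iters):
        f = math.log(abs(f) * tau_E)
    return f

tau_E = 1e-3
fL = f_lower(tau_E)
print(fL, math.log(tau_E))           # agree up to a ln|ln tau_E| correction
```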
To derive Langevin equations from the Fokker-Planck equation we need to consider the geometric weight factor w(s) in Eq (22) for angular integration. In deriving the s-dynamics, we start with the angular part of the Fokker-Planck Eq (5)
Multiplying an arbitrary function A(s) and integrating over all dimensions, we obtain
To apply the standard result on the equivalence between Fokker-Planck and Langevin equations, we need the measure in s-space to be unity. This prompts the definition Q(s, t) = w(s)∫P(x, f, s, t) dx df, so that the above becomes
where we integrated by parts and discarded boundary terms. Since A(s) is an arbitrary function, we can write down the Fokker-Planck equation
which is equivalent  to the Langevin equation
where η(τ) denotes Gaussian white noise with ⟨η(τ1)η(τ2)⟩ = δ(τ1 − τ2).
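The Fokker-Planck/Langevin equivalence can be checked numerically on a toy example; a sketch using Euler-Maruyama integration of an Ornstein-Uhlenbeck process (the drift and noise are illustrative, not the s-dynamics above), whose stationary Fokker-Planck solution is Gaussian with variance D:

```python
import numpy as np

# Euler-Maruyama check of the Fokker-Planck/Langevin equivalence on a toy
# Ornstein-Uhlenbeck process df = -f dt + sqrt(2D) dW (illustrative, not
# the s-dynamics above). The stationary variance predicted by the
# corresponding Fokker-Planck equation is D.
rng = np.random.default_rng(1)
D, dt, steps, cells = 0.5, 1e-3, 20_000, 5_000

f = np.zeros(cells)                  # ensemble of independent trajectories
for _ in range(steps):
    f += -f * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(cells)

print(f.var())                       # ~ D = 0.5
```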
Near the fixed point (r0, 0), the eigenvectors and eigenvalues of the linearized Langevin Eq (8) are:
When τE is large, the eigenvectors are almost orthogonal. When τE is small, the eigenvectors are not orthogonal.
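This non-orthogonality is the source of the non-normal transient growth discussed in the main text; a minimal numerical illustration on a generic stable non-normal matrix (not the actual Jacobian of Eq (8)):

```python
import numpy as np
from scipy.linalg import expm

# Illustration of non-normal transient growth: a generic stable matrix
# with nearly parallel eigenvectors (not the actual Jacobian of Eq (8)).
# Both eigenvalues are negative, yet perturbations transiently grow.
J = np.array([[-1.0, 8.0],
              [ 0.0, -2.0]])

eigvals, eigvecs = np.linalg.eig(J)
overlap = abs(eigvecs[:, 0] @ eigvecs[:, 1])    # 1 = parallel, 0 = orthogonal
print(eigvals, overlap)                         # stable, nearly parallel

x0 = np.array([0.0, 1.0])                       # unit perturbation
norms = [np.linalg.norm(expm(J * t) @ x0) for t in np.linspace(0.0, 3.0, 301)]
print(max(norms))                               # > 1: growth before decay
```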
In the Fig 1A heat map, the drift speed VD was calculated by fitting the linear part of the mean trajectory. In Fig 1B the first 50 s were removed to avoid the start-up transient. In Fig 1C, the steady state from agent-based simulations was calculated from the histogram of all the internal values of the 10⁴ simulated cells between τ = 10 and τ = 20, sampled at regular steps of τ = 0.01. Numerical solutions of the Fokker-Planck Eq (5) were obtained by expanding the distribution in angles, as in Eq (25), and keeping the first 10 orders. The steady state was found by solving an initial value problem using the NDSolve function in Mathematica, with 10⁴ spatial points and integration time up to τ = 10. Higher orders, finer grids, and longer integration times were checked to ensure solution accuracy. In Fig 1D, VD from the agent-based and Fokker-Planck methods was calculated by plugging the distributions obtained in C into Eq (6). MFT was calculated by combining Eq (42) with Eq (6) to find both and VD [12, 13]. In the inset, the black curves show the approximate distribution in Eq (35).
In Fig 3C, 3F and 3I the τE calculation considered receptor saturation as well as the varying gradient length scales, with C and L evaluated at mean positions. Note this is not the average τE over the population.
(A) Same as Fig 1A (where r0 = 0.8; S1 Table) except with r0 = 0.5. (B) r0 = 0.7. (C) Same as Fig 1A (where DT/DR ≈ 37; S1 Table) except with DT/DR = 5. (D) Same as Fig 1A except without assuming receptors in the log-sensing range, i.e. Eq (S7) was used rather than Eq (S11). (E) Same as D, but additionally implementing adaptation asymmetry, where the adaptation rate in Eq (S4) depends on m. Here the adaptation is 3 times faster when m > m(C) than when m < m(C), and tM is defined as the timescale when m < m(C). (F) Same as D but with the nonlinear adaptation rate Eq (S4). Parameter values are: aB = 0.74, rB = 4.0, KR = 0.32, KB = 0.30, with VR and VB(0) chosen to ensure when a = a0 and the adaptation time is tM when linearized.
Five sample trajectories (thin solid curves) and the mean over 10⁴ trajectories (thick lines) of the non-dimensionalized position x = X/(v0tM) as a function of time τ = t/tM. Colors correspond to cells with matching (green, τD0 = 1) and non-matching (orange, τD0 = 0.1) reorientation, as in Fig 1 (same methods).
(A) Same as Fig 1A but adding a signaling noise term in Eq (S6) where Γ(t) is the standard Wiener process (see Eq  in ). From Eqs (S1)-(S2) and Y = αa we obtain dY/dm = Y(1 − a)ϵ1. Then plugging in σY/Y = 0.1 [20, 21], a ≈ 0.5 and ϵ1 = −1, we obtain σm = 0.2. (B) The absolute difference in drift speed between A (with signaling noise) and Fig 1A (without signaling noise), ΔVD = VD|noise − VD|nonoise, shows how signaling noise can either enhance or reduce the drift speed depending on (τE, τD). Note the colorbar is different. Black contour lines show level sets of ΔVD at the colorbar ticks (-0.05, -0.025, 0, 0.025, and 0.05). (C) The relative difference in drift speed (B divided by the drift speed without signaling noise). Again, the color scale is different and black contour lines show level sets at the colorbar ticks.
We thank D Clark, Y Dufour, N Frankel, X Fu, S Kato, N Olsman, DC Vural, and A Waite for discussions. This work was supported by the HPC facilities operated by, and the staff of, the Yale Center for Research Computing.
JL received support from the Natural Sciences and Engineering Research Council of Canada (NSERC) Postgraduate Scholarships-Doctoral Program (http://www.nserc-crsng.gc.ca/Students-Etudiants/PG-CS/BellandPostgrad-BelletSuperieures_eng.asp) PGSD2-471587-2015. TE received support from the National Institute of General Medical Sciences (www.nigms.nih.gov) grant 4R01GM106189-04. TE, JL and SWZ were supported by the Allen Distinguished Investigator Program (grant 11562) through the Paul G. Allen Frontiers Group (www.pgafamilyfoundation.org/programs/investigators-fellows). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
All relevant data are within the paper and its Supporting Information files.