
Phys Biol. Author manuscript; available in PMC 2013 April 4.

Published in final edited form as: Phys Biol. 2012;9(2):026003. Published online 2012 April 4. doi: 10.1088/1478-3975/9/2/026003

PMCID: PMC3355474

NIHMSID: NIHMS374073

Ilya Nemenman, Departments of Physics and Biology, Computational and Life Sciences Initiative Emory University, Atlanta, GA 30322, USA;

Ilya Nemenman: ilya.nemenman@emory.edu


The statistical properties of the environments experienced by biological signaling systems change, which necessitates adaptive responses to achieve high-fidelity information transmission. One form of such adaptive response is gain control. Here we argue that a simple mechanism of gain control, well understood in the context of systems neuroscience, also works for molecular signaling. The mechanism allows the system to transmit more than one bit (on or off) of information about the signal independently of the signal variance. It does not require molecular circuitry beyond that already present in many signaling systems and, in particular, does not depend on the existence of feedback loops. The mechanism provides a potential explanation for the abundance of ultrasensitive response curves in biological regulatory networks.

An important function of all biological systems is responding to signals from the surrounding environment. These signals (hereafter assumed to be scalars), *s*(*t*), are often probabilistic, described by some probability distribution *P*[*s*(*t*)]. They have nontrivial temporal dynamics, so that the probability of a certain value of the signal at a given time is dependent on its entire history.

Often the response *r*(*t*) is produced from *s* by (possibly nonlinear and noisy) temporal filtering. For example, in a deterministic molecular circuit, we may have

$$\frac{dr}{dt} = f(s(t)) - k\, r(t), \qquad (1)$$

where *f* is the response molecule production rate, which depends on the current value of the signal, and *k* is the rate of the first-order degradation of the response molecule. Note that *r*(*t*) depends on the entire history of *s*(*t*′), *t*′ < *t*, and hence carries information about it. For more complicated, nonlinear degradation, or for *r*-dependent production, Eq. (1) may be interpreted as a linearization around the mean response. This equation can also describe the dynamics of a continuous firing rate in neural systems, and this realization is one of the main motivations for the current article.
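The temporal filtering of Eq. (1) is straightforward to simulate directly. The sketch below is our illustration (the helper name `integrate_response` and the forward-Euler discretization are our choices, not the authors' code); any production rate *f* can be plugged in.

```python
import numpy as np

def integrate_response(s, f, k, dt):
    """Forward-Euler integration of Eq. (1): dr/dt = f(s(t)) - k*r(t), with r(0) = 0.

    s  : array of signal samples spaced by dt
    f  : production rate as a function of the signal value
    k  : first-order degradation rate of the response molecule
    """
    r = np.zeros(len(s))
    for i in range(1, len(s)):
        r[i] = r[i - 1] + dt * (f(s[i - 1]) - k * r[i - 1])
    return r

# Sanity check: for a constant signal the response relaxes to f(s)/k.
s = np.full(20_000, 2.0)
r = integrate_response(s, f=lambda x: 3.0 * x, k=1.0, dt=0.01)
# r[-1] approaches the steady state f(2.0)/k = 6.0
```

Because degradation is first-order, the response "forgets" the signal history on the time scale 1/*k*, which is what makes the ratio of signal and response time scales, *kτ*, the key parameter later in the article.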

The distribution of stimuli, *P*[*s*(*t*)], places severe constraints on the forms of *f* that can transduce the stimuli with high fidelity. To see this, for quasi-static signals (that is, when the signal correlation time *τ* is large, *τ* ≫ 1/*k*), we use Eq. (1) to write the steady state dose-response curve

$$r_{\rm ss}(s) = \frac{f(s)}{k}. \qquad (2)$$

A typical monotonic, sigmoidal *f* is characterized by only a few large-scale parameters: the range, [*f*_{min}, *f*_{max}]; the mid-point *s*_{1/2}; and the width of the transition region, Δ*s* (cf. Fig. 1). If the signal mean *μ* ≫ *s*_{1/2}, then, for most signals, *r*_{ss} ≈ *f*_{max}/*k*. Then responses to two different signals *s*_{1} and *s*_{2} are indistinguishable as long as

Figure 1. Parameters characterizing response to a signal. Left panel: the probability distribution of the signal, *P*(*s*) (blue), and the best-matched steady state dose-response curve *r*_{ss} (green). Top right: if the mid-point of the dose-response curve, *s*_{1/2}, is far […]

$$\frac{\left| f(s_1) - f(s_2) \right|}{k} < \delta r, \qquad (3)$$

where *δr* is the precision of the response resolution. Similarly, when *μ* ≪ *s*_{1/2}, then *r*_{ss} ≈ *f*_{min}/*k*. Thus, for reliably communicating information about the signal, *f* should be tuned such that *s*_{1/2} ≈ *μ*. If a biological system can change its *s*_{1/2} to follow changes in *μ*, this is called *adapting to the mean* of the signal, and, if *s*_{1/2}(*μ*) = *μ*, then the adaptation is *perfect* [1, 2]. Similarly, if the quasi-static signal is taken from a distribution with standard deviation *σ* = (⟨*s*(*t*)²⟩_{*t*} − ⟨*s*(*t*)⟩_{*t*}²)^{1/2} that is mismatched to the width of the transition region (*σ* ≫ Δ*s* or *σ* ≪ Δ*s*), most responses are either saturated or nearly identical. High-fidelity signaling thus also requires Δ*s* ≈ *σ*, and adjusting Δ*s* to follow changes in *σ* is known as *gain control*, or *adapting to the variance* of the signal.

Both of these adaptation behaviors can be traced to the same theoretical argument [3]: for sufficiently general conditions on the response resolution *δr*, the response that optimizes the fidelity of a signaling system, as measured by its information-theoretic channel capacity [4], is

$$r(s) \propto \int_{-\infty}^{s} P(s')\, ds',$$

where *P*(*s*′) is the probability distribution of an instantaneous signal value, obtained from *P*[*s*(*t*)]. In some situations, when the mean and the variability of the signal scale proportionally, as in fold-change detection problems [5, 6], the two adaptations are deeply intertwined. More generally, however, the environmental changes that lead to varying *μ* and *σ*, as well as the mechanisms of the adaptations, are distinct. Thus it often makes sense to consider the two adaptations as separate phenomena [2].
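The matching principle above is easy to write down explicitly. In the sketch below (the function name `matched_response` and the Gaussian choice of *P*(*s*) are ours, for illustration only) the response is proportional to the cumulative distribution of the signal; the resulting curve automatically has its mid-point at *μ* and a transition width set by *σ*.

```python
import numpy as np
from math import erf

def matched_response(s, mu, sigma, r_max=1.0):
    """Response proportional to the cumulative distribution of a
    Gaussian signal: r(s) = r_max * CDF((s - mu)/sigma)."""
    z = (np.asarray(s, dtype=float) - mu) / (sigma * np.sqrt(2.0))
    return r_max * 0.5 * (1.0 + np.array([erf(v) for v in z]))

# The mid-point sits at the signal mean:
#   matched_response([0.0], mu=0.0, sigma=1.0)[0] == 0.5
# and the curve depends on s only through (s - mu)/sigma,
# which is exactly the invariance that gain control provides:
#   matched_response([1.0], 0.0, 1.0) == matched_response([2.0], 0.0, 2.0)
```

Adaptation to the mean shifts this curve along the *s* axis; gain control rescales its width, keeping the response a function of (*s* − *μ*)/*σ* alone.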

Adaptation to the mean, sometimes also called *desensitization*, has been observed and studied in a wide variety of biological sensory systems [1, 7, 3, 8, 9], with active work persisting to date.^{‡} In contrast, while gain control has been investigated in neurobiology [10, 11, 12], we are not aware of its systematic analysis in molecular sensing. In this article, we start filling in this gap. Our main contribution is the observation that a mechanism for gain control, observed in the fly motion estimation system by Borst et al. [12], can be transferred to molecular information processing with minimal modifications. Importantly, unlike adaptation to the mean, which is typically implemented using extra feedback circuitry [1, 9, 13], the gain control mechanism we analyse requires no additional regulation: it is built into many molecular signaling systems. The main ingredients of the gain control mechanism of Ref. [12] are a strongly nonlinear, sigmoidal response function *f*(*s*) and the realization that real-world signals are dynamic, with a nontrivial temporal structure. One must therefore move away from steady state response analysis; autocorrelations within the signal then allow the response to carry more information about the signal than seems possible naively.

In this context, we show that, just like the neural circuits in Ref. [12], a simple biochemical circuit described by Eq. (1) can be made insensitive to changes in *σ* with no extra regulatory features. That is, for an arbitrary choice of *σ*, and for a wide range of other parameters, the circuit can generate an output that is informative of the input, and, in particular, carries more than a single bit of information about it. For brevity, we will not review the original work on gain control in neural systems [12], but will instead develop the methodology directly in the molecular context.

Let us assume for simplicity that the signal in Eq. (1) has Ornstein-Uhlenbeck dynamics,

$$\frac{ds}{dt} = -\frac{s - \mu}{\tau} + \eta(t), \qquad \langle \eta(t)\, \eta(t') \rangle = \frac{2\sigma^2}{\tau}\, \delta(t - t'), \qquad (4)$$

so that *s* has mean *μ*, stationary variance *σ*², and correlation time *τ*.

We will assume that the response has been adapted to the mean value of this signal (likely by additional feedback control circuitry, not considered here explicitly), so that the response to *s* = *μ* is half maximal. Now we explore how insensitivity to *σ* can be achieved as well.
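A discrete-time realization of the Ornstein-Uhlenbeck signal of Eq. (4) can be generated with the exact one-step update rule; the sketch below is our illustration (the original simulations were done in Matlab, per the authors' footnote).

```python
import numpy as np

def ou_signal(n, mu, sigma, tau, dt, rng):
    """Ornstein-Uhlenbeck signal, Eq. (4): mean mu, stationary standard
    deviation sigma, correlation time tau, sampled every dt.
    Uses the exact one-step update, so any dt is admissible."""
    a = np.exp(-dt / tau)
    step = sigma * np.sqrt(1.0 - a * a)
    s = np.empty(n)
    s[0] = mu + sigma * rng.standard_normal()  # start in the stationary state
    for i in range(1, n):
        s[i] = mu + a * (s[i - 1] - mu) + step * rng.standard_normal()
    return s

rng = np.random.default_rng(0)
s = ou_signal(200_000, mu=1.0, sigma=0.5, tau=3.0, dt=0.01, rng=rng)
# the sample mean is close to mu = 1.0 and the sample std close to sigma = 0.5
```

Feeding such a trace into the response dynamics of Eq. (1) is all that is needed to reproduce the qualitative behavior discussed below.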

We start with a step-function approximation to the sigmoidal response production rate,

$$f(s) = f_0\, \Theta(s - \mu) = \begin{cases} f_0, & s > \mu, \\ 0, & s \le \mu, \end{cases} \qquad (5)$$

where *f*_{0} is some constant. This is a limiting case of very high Hill number dose-response curves, which have been observed in nature [14]. Figure 2 shows sample signals and responses produced by this system. Notice that such *f* makes the system manifestly insensitive to *σ*. Any changes in *σ* will not result in changes to the response, hence the gain is controlled *perfectly*.

Nevertheless, this choice of *f* is pathological, resulting in a binary steady state response (*r*_{ss} = 0 for *s* < *μ*, and *r*_{ss} = *f*_{0}/*k* otherwise). That is, the response cannot carry more than one bit of information about the stimulus. However, as illustrated in Fig. 2, a *dynamic* response is not binary and varies over its entire dynamic range. Can this make a difference and produce a dose-response relation that is both high fidelity and insensitive to the variance of the signal?

To answer this, we first specify what we mean by the dose-response curve, or the input-output relation, when there is no steady state response. For the response at a single time point *t*, we can write *P*(*r*(*t*)|{*s*(*t*′ ≤ *t*)}) = *δ*(*r* − *r*[*s*]), where *δ*(···) is the Dirac *δ*-function, and the functional *r*[*s*] is obtained by solving Eq. (1). Since the signal is probabilistic, marginalizing over all of its values except the one at time *t* − *t*′, one gets *P*(*r*(*t*)|*s*(*t* − *t*′)), the distribution of the response at time *t* conditional on the value of the signal at *t* − *t*′. Further, for the distribution of the signal given by Eq. (4), one can numerically integrate Eq. (1) and evaluate the correlation *c*(*t*′) = ⟨*r*(*t*) *s*(*t* − *t*′)⟩_{*t*}.^{§} Since Eq. (1) is causal, *c*(*t*′) has a maximum at some *t*′ = Δ(*τ*, *k*) ≥ 0, illustrated in Fig. 3. Correspondingly, in this paper we replace the familiar notion of the dose-response curve by the delayed probabilistic input-output relation *P*(*r*(*t*)|*s*(*t* − Δ)). This is a relatively common choice in molecular signaling [16] and in neuroscience [10].
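The delayed correlation *c*(*t*′) and the optimal delay Δ can be estimated from simulated traces as in the generic sketch below (the lag grid and the AR(1) test signal are our choices, used only to check that the estimator recovers a known delay).

```python
import numpy as np

def delayed_correlation(s, r, max_lag):
    """c[l] = time-average of (r(t) - <r>) * (s(t - l) - <s>), for lags
    l = 0 .. max_lag-1.  The optimal delay Delta is the lag maximizing c."""
    s0, r0 = s - s.mean(), r - r.mean()
    n = len(s0)
    return np.array([np.mean(r0[l:] * s0[: n - l]) for l in range(max_lag)])

# Check on a temporally correlated signal read out with a pure 5-sample delay:
rng = np.random.default_rng(1)
x = rng.standard_normal(20_000)
s = np.empty_like(x)
s[0] = x[0]
for i in range(1, len(x)):
    s[i] = 0.8 * s[i - 1] + x[i]   # AR(1): autocorrelated in time
r = np.roll(s, 5)                  # r(t) = s(t - 5)
c = delayed_correlation(s, r, max_lag=20)
# np.argmax(c) recovers the delay, 5
```

For the causal dynamics of Eq. (1) the same estimator yields the Δ(*τ*, *k*) ≥ 0 discussed above.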

Figure 3. Dependence of the delay between the signal and the response, Δ, which achieves the maximum correlation between *s* and *r*. Here Δ is expressed in units of the signal correlation time *τ*, and it is studied as a function of *kτ* […]

We emphasize that, since *f* is a step function, *f*(*s*) = *f*(*αs*) for any positive scalar *α*. Therefore, for two signals that can be mapped into each other by rescaling, *P*(*r*(*t*)|*s*(*t* − Δ)/*σ*) is manifestly independent of *σ*. In other words, the system is gain-compensating *by construction*. Correspondingly, in Fig. 4, we plot the input-output relation for *kτ* = 10, where *s* is measured in units of *σ*. A smooth, probabilistic, sigmoidal response with a transition region of width ~ *σ* is clearly visible. This is because, for a step-function *f*, the value of *r*(*t*) depends not on *s*(*t*), but on how long the signal has been positive prior to the current time. In its turn, this duration is correlated with *s*/*σ*, producing a probabilistic dependence between *r* and *s*/*σ*. The latter is manifestly invariant to variance changes.

Figure 4. Conditional distribution *P*(*r*(*t*)|*s*(*t* − Δ)) for *kτ* = 10. The distributions depend only on *s*/*σ*, manifesting the gain-compensating nature of the system. The signals are discretized into 30 values in the range [−3*σ* […]

These arguments make it clear that the fidelity of the response curve should depend on the ratio of characteristic times of the signal and the response, *kτ*. Indeed, as seen in Fig. 2, for *kτ* → 0, the response integrates the signal over long times. It is little affected by the current value of the signal and does not span the full available dynamic range. At the other extreme of a very fast response, *kτ* → ∞, the system is always almost in a steady state. Then the step-nature of *f* is evident, and the response quickly swings between two limiting values (*f*_{0}/*k* and 0).

We illustrate the dependence of the response conditional distribution on the integration time in Fig. 5 by plotting the conditionally averaged response, $\bar r(\Delta, s) = \int dr\, r\, P(r(t + \Delta)\,|\,s(t))$, for different values of *kτ*. Neither *kτ* → 0 nor *kτ* → ∞ is optimal for signal transmission. One expects the existence of an optimal *k*^{*}, for which most of the dynamic range of the response is explored while the output still tracks the signal, so that the signal-response mutual information is maximized.

Figure 5. Mean conditional response $\bar r(\Delta, s)$ for different combinations of the signal and the response characteristic times, *kτ*.

Figure 6. The signal-response mutual information at the optimal temporal delay as a function of *kτ*. The solid line represents *I*_{k}[*r*(*t* + Δ), *s*(*t*)], the information for the Ornstein-Uhlenbeck signal; the maximum of the information here is *I*_{max} ≈ 1.35 bits […]

We emphasize that the information here is *per signaling event*, that is, per independent value of the signal. Indeed, since we consider responses that change much faster than the signals, the system is always near a steady state, and each “new” value of the signal is encoded by an independent response value. This also makes sense experimentally: measuring joint distributions of time series of stimuli and responses is very hard, and experiments often focus on the information between one signal value and one response value [16]. Our analysis is relevant for the interpretation of such experiments.
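The information per signaling event is the mutual information between one signal value and one (delayed) response value. A standard plug-in histogram estimator, in the spirit of the discretization mentioned in the figure caption, can be sketched as follows (the code is our illustration, not the authors'; note the well-known upward bias of plug-in estimates at small sample sizes).

```python
import numpy as np

def mutual_information(x, y, bins=30):
    """Plug-in estimate of I(x; y) in bits from a 2D histogram.
    Biased upward by roughly (bins - 1)**2 / (2 * N * ln 2) for N samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.random(100_000)
# a deterministic relation y = x gives close to log2(bins) bits;
# independent x and y give close to 0 bits
```

Applied to samples of *s*(*t*) and *r*(*t* + Δ) from the simulated circuit, such an estimator produces the *I*_{k} curves discussed above.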

The gain insensitivity of the constructed molecular circuit model depends only weakly on the details of the temporal structure of the signal. As long as there are autocorrelations, one can use them to transmit more than one bit about the signal in a gain-independent fashion using the strong nonlinearity of *f*. To verify this, we replace the Ornstein-Uhlenbeck signal, Eq. (4), with its low-pass filtered version, $s'(t) = \frac{1}{\tau'} \int_{-\infty}^{t} dt'\, e^{-(t - t')/\tau'} s(t')$, which has a different, smoother temporal structure; the conclusions above remain qualitatively unchanged.

When gain-insensitive, the system loses information about the actual signal variance. This rarely happens in biology. For example, while we see well at different ambient light levels, we nonetheless know how bright it is outside. For the fly visual system, it was shown that the variance independence of the response breaks down on long time scales: the signal variance can be inferred from long-term features of the neural code [10, 17]. Correspondingly, we ask if long-term observation of the response of an approximately gain-controlled molecular signaling circuit allows one to infer the signal variance *σ*.

To this end, consider *f* as a narrow sigmoid, with the width of the crossover region Δ*s*/*σ* ≪ 1. The effect of the variance on the response is still negligible. For concreteness, we take *f* = *f*_{0}[tanh((*s* − *μ*)/Δ*s*) + 1]. Consider now the fraction of time the rate of change of the response is near max(*f*). This requires that *r* ≈ 0 (so that the degradation, *kr*, is negligible), but *s* is already large, (*s* − *μ*)/Δ*s* ≫ 1. The probability of this happening depends on the signal variance, and hence on the speed with which the signal crosses the threshold region. Thus one can estimate *σ* by observing a molecular circuit for a long time and counting how often the rate of change of the response is large. The probability of a large derivative will depend on the exact shape of *f*. However, for a signal defined by Eq. (4), the statistical error of any such counting estimator will scale as $\sim \sqrt{\tau/T}$, where *T* is the duration of the observation. Hence, the system can be almost insensitive to *σ* on short time scales, but allow its determination from long observation periods, *T* ≫ *τ*.

To verify this, we simulate the signal determined by Eq. (4) with *kτ* = 20, which maximizes the signal-response mutual information. We arbitrarily choose a cutoff of 80% of the maximum possible rate of change of the response, and we calculate the mean fraction of time when the rate is above the cutoff. We further calculate the standard deviation of this fraction, *σ*_{f}. We repeat this for signals with various values of *σ*, confirming that the measured fraction depends on the signal variance and can thus serve as its long-time estimator.
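A minimal numerical version of this counting estimator can be sketched as below; the function name and all parameter values are our illustrative choices (with *kτ* = 20 as in the simulations described above), not the authors' code.

```python
import numpy as np

def high_rate_fraction(s, mu, ds, f0, k, dt, cutoff=0.8):
    """Fraction of time the response's rate of change dr/dt = f(s) - k*r
    exceeds cutoff * max(f), for the narrow sigmoid
    f = f0 * [tanh((s - mu)/ds) + 1], whose maximum is 2*f0."""
    f = f0 * (np.tanh((s - mu) / ds) + 1.0)
    r = np.zeros(len(s))
    for i in range(1, len(s)):
        r[i] = r[i - 1] + dt * (f[i - 1] - k * r[i - 1])
    rate = f - k * r
    return float(np.mean(rate > cutoff * 2.0 * f0))

# Ornstein-Uhlenbeck test signal (exact one-step update), with k*tau = 20:
rng = np.random.default_rng(2)
tau, dt, n, sigma = 3.0, 0.01, 300_000, 1.0
a = np.exp(-dt / tau)
s = np.empty(n)
s[0] = 0.0
for i in range(1, n):
    s[i] = a * s[i - 1] + sigma * np.sqrt(1.0 - a * a) * rng.standard_normal()
frac = high_rate_fraction(s, mu=0.0, ds=0.05, f0=1.0, k=20.0 / tau, dt=dt)
# frac is a small but nonzero fraction of the simulated time
```

The rate exceeds the cutoff only in the brief windows just after an upward threshold crossing that follows a long sub-threshold excursion, which is why the counting statistics carry information about *σ*.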

This long-term variance determination can be performed molecularly. For example, one can use a feedforward incoherent loop with *r* as an input [18]. The loop acts as an approximate differentiator for signals that change slowly compared to its internal relaxation times [19]. The output of the loop can then activate a subsequent chemical species by a Hill-like dynamics, with the activation threshold close to the maximum of *f*. If this last species degrades slowly, it will integrate the fraction of time when *dr*/*dt* is above the threshold, providing the readout of the signal variance.
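As a caricature of this readout, a purely linear version of the differentiator step can be sketched as follows. A molecular implementation would replace these linear equations with Hill-type kinetics, so treat the code (and the name `iffl_output`) as a conceptual illustration under that assumption only.

```python
import numpy as np

def iffl_output(r, tau_f, dt):
    """Linear incoherent-feedforward sketch: the input r acts on the output
    directly (activation) and through a low-pass-filtered intermediate y
    (repression).  The combination (r - y)/tau_f approximates dr/dt when
    r varies slowly compared to the loop's relaxation time tau_f."""
    y = np.zeros(len(r))
    for i in range(1, len(r)):
        y[i] = y[i - 1] + dt * (r[i - 1] - y[i - 1]) / tau_f
    return (r - y) / tau_f

t = np.arange(0.0, 20.0, 0.01)
d = iffl_output(r=t, tau_f=1.0, dt=0.01)  # ramp input, true derivative dr/dt = 1
# after the initial transient, d settles at the true derivative, 1
```

Thresholding this output and integrating it with a slowly degrading species then counts the high-derivative episodes, as described above.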

In this work, we were able to translate the arguments of Ref. [12] to the context of simple continuous biochemical dynamics, Eq. (1). We have argued that, just like neural circuits, simple molecular systems can respond to signals in a gain-insensitive way without a need for explicit adaptation and feedback loops (though such loops may be needed to choose *s*_{1/2} and *k* appropriately). That is, they can be sensitive only to the signal value relative to its standard deviation. To make the mechanism work, the signaling system must obey the following criteria:

- a nonlinear-linear (NL) response; that is, a strongly nonlinear, sigmoidal response production rate *f*, integrated (linearly) over time;
- properly matched time scales of the signal and the response dynamics.

In addition, the information about the signal variance can be recovered if

- episodes of large values of the rate of change of the response are counted over long times.

We have also argued that our results hold for a broad class of probability distributions of the signals.

Naively, only one bit of information (on or off) could be transmitted with a step-function *f*. However, the response in this system is a time-average of a nonlinear function of the signal. This makes it possible to use temporal correlations in the signal to transmit more than 1 bit of information for broad classes of signals. While 1.35 bits may not seem like much more than 1, the question of whether molecular signaling systems can achieve more than 1 bit at all is still a topic of active research [16, 20, 21]. A similar use of temporal correlations has been reported to increase information transmission in other circuits, such as clocks [22]. In practice, in our case, there is a tradeoff between variance-independence and high information transmission through the circuit: a wider production rate would give a higher maximal information for properly tuned signals, but the information would then drop to zero if Δ*s* ≫ *σ*. It would be interesting to explore the optimal operational point for this tradeoff under various optimization hypotheses.

While our analysis is applicable to any signaling system, molecular or neural, that satisfies the three conditions listed above, there are specific examples where we believe it may be especially relevant. The *E. coli* chemotaxis flagellar motor has a very sharp response curve (Hill coefficient of about 10) [14]. This system is possibly the best studied example of biological adaptation to the mean of the signal. However, the question of whether the system is insensitive to changes in the signal variance has not been addressed. The ultrasensitivity of the motor suggests that it might be. Similarly, in eukaryotic signaling, push-pull enzymatic amplifiers, including MAP-kinase-mediated signaling pathways, are also known for their ultrasensitivity [23, 24, 25]. And yet the ability of these circuits to respond to temporally varying signals in a variance-independent way has not been explored.

We end this article with a simple observation. While the number of biological information processing systems is astonishing, the types of computations they perform are limited. Focusing on the computation would allow cross-fertilization between seemingly disparate fields of quantitative biology. The phenomenon studied here, lifted wholesale from neurobiology literature, is an example. Arguably, computational neuroscience has had a head start compared to computational molecular systems biology. The latter can benefit immensely by embracing well-developed results and concepts from the former.

We thank F Alexander, W Hlavacek, and M Wall for useful discussions in the earlier stages of the work, participants of *The Fourth International* q-bio *Conference* for the feedback, and F Family and the anonymous referees for commenting on the manuscript. We are grateful to R de Ruyter van Steveninck for providing the data for one of the figures. This work was supported in part by DOE under Contract No. DE-AC52-06NA25396 and by NIH/NCI grant No. 7R01CA132629-04.

^{‡}To illuminate the relation between the classic perfect adaptation (in *E. coli* chemotaxis or elsewhere) and our terminology, we point out that we consider the slowly varying chemical concentration inputs in such experiments not as signals, but as *mean* signals. Fluctuations create additional fast signals on top of these slowly changing means. Feedback then ensures that the mean signal elicits the mean response.

^{§}All simulations were performed using Matlab v. 7.6 and Octave v. 3.0.2 on an Apple MacBook Air. The correlation time of the signal was *τ* = 300 integration time steps, and averages were taken over 3 × 10^{6} time steps. To change the value of *kτ*, only *k* was adjusted.

1. Berg H. E. coli in motion. Springer-Verlag; New York: 2003.

2. Nemenman I. Information theory and adaptation. In: Wall M, editor. Quantitative Biology: From Molecular to Cellular Systems. CRC Press; In press.

3. Laughlin S. A simple coding procedure enhances a neuron’s information capacity. Z Naturforsch. 1981;36:910. [PubMed]

4. Shannon C, Weaver W. The mathematical theory of communication. The University of Illinois Press; Urbana, IL: 1949.

5. Shoval O, Goentoro L, Hart Y, Mayo A, Sontag E, Alon U. Fold-change detection and scalar symmetry of sensory input fields. Proc Natl Acad Sci USA. 2010;107:15995–16000. [PubMed]

6. Lazova M, Ahmed T, Bellomo D, Stocker R, Shimizu T. Response rescaling in bacterial chemotaxis. Proc Natl Acad Sci USA. 2011;108:13870–13875. [PubMed]

7. Normann R, Perlman I. The effects of background illumination on the photoresponses of red and green cones. J Physiol. 1979;286:491. [PubMed]

8. MacGlashan D, Lavens-Phillips S, Katsushi M. IgE-mediated desensitization in human basophils and mast cells. Front Biosci. 1998;3:746–56. [PubMed]

9. Detwiler P, Ramanathan S, Sengupta A, Shraiman B. Engineering aspects of enzymatic signal transduction: Photoreceptors in the retina. Biophys J. 2000;79:2801. [PubMed]

10. Brenner N, Bialek W, de Ruyter van Steveninck R. Adaptive rescaling optimizes information transmission. Neuron. 2000;26:695. [PubMed]

11. Gaudry K, Reinagel P. Contrast adaptation in a nonadapting LGN model. J Neurophysiol. 2007;98:1287–96. [PubMed]

12. Borst A, Flanagin V, Sompolinsky H. Adaptation without parameter change: Dynamic gain control in motion detection. Proc Natl Acad Sci USA. 2005;102:6172. [PubMed]

13. Ma W, Trusina A, El Samad H, Lim W, Tang C. Defining network topologies that can achieve biochemical adaptation. Cell. 2009;138:760–73. [PMC free article] [PubMed]

14. Cluzel P, Surette M, Leibler S. An ultrasensitive bacterial motor revealed by monitoring signaling proteins in single cells. Science. 2000;287:1652. [PubMed]

15. Nemenman I, Lewen G, Bialek W, de Ruyter van Steveninck R. Neural coding of natural stimuli: Information at sub-millisecond resolution. PLoS Comput Biol. 2008:e1000025. [PMC free article] [PubMed]

16. Cheong R, Rhee A, Wang CJ, Nemenman I, Levchenko A. Information transduction capacity of noisy biochemical signaling networks. Science. 2011;334:354–358. [PubMed]

17. Fairhall A, Lewen G, Bialek W, de Ruyter van Steveninck R. Efficiency and ambiguity in an adaptive neural code. Nature. 2001;412:787. [PubMed]

18. Mangan S, Alon U. Structure and function of the feed-forward loop network motif. Proc Natl Acad Sci USA. 2003;100:11980–5. [PubMed]

19. Sontag E. Remarks on feedforward circuits, adaptation, and pulse memory. IET Syst Biol. 2010;4:39–51. [PubMed]

20. Ziv E, Nemenman I, Wiggins C. Optimal signal processing in small stochastic biochemical networks. PLoS One. 2007;2:e1077. [PMC free article] [PubMed]

21. Tkacik G, Callan C, Bialek W. Information capacity of genetic regulatory elements. Phys Rev E. 2008;78:011910. [PMC free article] [PubMed]

22. Mugler A, Walczak A, Wiggins C. Information-optimal transcriptional response to oscillatory driving. Phys Rev Lett. 2010;105:058101. [PubMed]

23. Goldbeter A, Koshland D. An amplified sensitivity arising from covalent modification in biological systems. Proc Natl Acad Sci USA. 1981;78:6840–4. [PubMed]

24. Huang C, Ferrell J. Ultrasensitivity in the mitogen-activated protein kinase cascade. Proc Natl Acad Sci USA. 1996;93:10078–83. [PubMed]

25. Samoilov M, Plyasunov S, Arkin A. Stochastic amplification and signaling in enzymatic futile cycles through noise-induced bistability with oscillations. Proc Natl Acad Sci USA. 2005;102:2310–5. [PubMed]
