

Science. Author manuscript; available in PMC 2014 January 19.

Published in final edited form as:

Published online 2011 September 15. doi: 10.1126/science.1204553

PMCID: PMC3895446

NIHMSID: NIHMS544007

Raymond Cheong,^{1} Alex Rhee,^{1} Chiaochun Joanne Wang,^{1} Ilya Nemenman,^{2} and Andre Levchenko^{1,}^{*}

Molecular noise restricts the ability of an individual cell to resolve input signals of different strengths and gather information about the external environment. Transmitting information through complex signaling networks with redundancies can overcome this limitation. We developed an integrative theoretical and experimental framework, based on the formalism of information theory, to quantitatively predict and measure the amount of information transduced by molecular and cellular networks. Analyzing tumor necrosis factor (TNF) signaling revealed that individual TNF signaling pathways transduce information sufficient for accurate binary decisions, and an upstream bottleneck limits the information gained via multiple pathways together. Negative feedback to this bottleneck could both alleviate and enhance its limiting effect, despite decreasing noise. Bottlenecks likewise constrain information attained by networks signaling through multiple genes or cells.

Signaling networks are biochemical systems dedicated to processing information about the environment provided by extracellular stimuli. Large populations of cells can accurately sense signaling inputs, such as the concentration of growth factors or other receptor ligands, but this task can be challenging for an individual cell affected by biochemical noise (1–3). Noise maps an input signal to a distribution of possible output responses, which can cause loss of information about the input. For example, a cell cannot reliably distinguish different inputs that, due to noise, can generate the same output (Fig. 1A).

Information theoretic analysis of cell signaling fidelity. (**A**) Schematic showing information loss due to overlapping noisy response distributions. (**B**) Diagram of the TNF-NF-κB signaling pathway represented in biochemical form (left) and as a noisy **...**

Conventional metrics related to the standard deviation or variance of the response distribution measure noise magnitude (4–8), but fail to elucidate how noise quantitatively affects the accuracy of information processing in single cells. An information theoretic approach (Fig. 1B), and the metric of mutual information in particular, can instead quantify signaling fidelity in terms of the maximum number of input values that a cell can resolve in the presence of noise. Such methods have been commonly used to evaluate man-made telecommunication systems (9), and more recently in computational neuroscience and in analyses of transcriptional regulatory systems (10–14), but have not been applied to biochemical signaling networks. We developed a general integrative theoretical and experimental framework to predict and measure the mutual information transduced by one or more signaling pathways. Applying this framework to a 4-dimensional compendium of single cell responses to tumor necrosis factor (TNF) (Fig. 1C; see also SOM Section 1), an inflammatory cytokine that initiates stochastic signaling at physiologic concentrations spanning ~4 orders of magnitude (15–21), shows that signaling via a network rather than a single pathway can mitigate the information lost to noise. Furthermore, an information bottleneck can restrict the maximum information a network can capture, and negative feedback can, but does not always, relieve this limitation.

The mutual information, *I*(*R*;*S*), measured in bits, is the binary logarithm of the maximum number of input signal values (*S*), such as ligand concentrations, that a signaling system can perfectly resolve on the basis of its noisy output responses (*R*) (9). One bit of information can resolve two different signal values, two bits can resolve four values, and so on. More generally,

$$I(R;S)=\int_{S}\int_{R}P(R,S)\,\log_{2}\!\left(\frac{P(R,S)}{P(R)P(S)}\right)dR\,dS. \tag{1}$$

The joint distribution *P*(*R*,*S*) determines the marginal distributions *P*(*R*) and *P*(*S*) and hence also the mutual information, and can be decomposed as *P*(*R*,*S*) = *P*(*S*) *P*(*R*|*S*). The response distribution, *P*(*R*|*S*), is experimentally accessible by sampling responses of individual isogenic cells to various signal levels (Fig. 1C) and its spread reflects the noise magnitude given any specific input. The signal distribution, *P*(*S*), reflects potentially context-specific frequencies at which a cell experiences different signal values. Although the amount of information might thus vary from case to case, one can also determine the maximal amount of transducible information, given the observed noise (see SOM Section 2). This quantity, known as the channel capacity (9), is a general characteristic of the signaling system and the signal-response pair of interest, and can thereby be experimentally measured without making assumptions about the (possibly nonlinear) relationship between *R* and *S*, signal power, or noise properties.
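The capacity calculation described above can be made concrete numerically. The sketch below is not the authors' estimator (which is detailed in SOM Section 2); it is a minimal illustration that applies the standard Blahut-Arimoto algorithm to a hypothetical discretized channel with a saturating mean response and fixed Gaussian noise, maximizing Eq. 1 over the input distribution *P*(*S*). All doses, noise levels, and bin choices are illustrative.

```python
import numpy as np
from math import erf, sqrt

def mutual_information(p_s, p_r_given_s):
    """I(R;S) in bits for a discrete input distribution P(S) and
    response matrix P(R|S) (rows indexed by signal level)."""
    p_rs = p_s[:, None] * p_r_given_s              # joint P(S,R)
    p_r = p_rs.sum(axis=0)                         # marginal P(R)
    mask = p_rs > 0
    denom = (p_s[:, None] * p_r[None, :])[mask]
    return float((p_rs[mask] * np.log2(p_rs[mask] / denom)).sum())

def channel_capacity(p_r_given_s, tol=1e-9, max_iter=10000):
    """Blahut-Arimoto iteration: maximize I(R;S) over the input P(S)."""
    n_s = p_r_given_s.shape[0]
    p_s = np.full(n_s, 1.0 / n_s)
    for _ in range(max_iter):
        p_r = p_s @ p_r_given_s
        with np.errstate(divide="ignore", invalid="ignore"):
            logratio = np.where(p_r_given_s > 0,
                                np.log2(p_r_given_s / p_r), 0.0)
        d = (p_r_given_s * logratio).sum(axis=1)   # D(P(R|s) || P(R)) in bits
        w = p_s * 2.0 ** d
        p_new = w / w.sum()
        if np.max(np.abs(p_new - p_s)) < tol:
            p_s = p_new
            break
        p_s = p_new
    return mutual_information(p_s, p_r_given_s), p_s

# Hypothetical channel: 8 input doses, Hill-like mean response,
# fixed Gaussian noise, responses binned into 40 levels.
doses = np.linspace(0.0, 1.0, 8)
bin_edges = np.linspace(-0.5, 1.5, 41)
p_r_given_s = np.empty((8, 40))
for i, s in enumerate(doses):
    mean, sigma = s / (s + 0.3), 0.25
    cdf = np.array([0.5 * (1 + erf((b - mean) / (sigma * sqrt(2))))
                    for b in bin_edges])
    p_r_given_s[i] = np.diff(cdf)
p_r_given_s /= p_r_given_s.sum(axis=1, keepdims=True)

cap, p_opt = channel_capacity(p_r_given_s)
print(f"capacity ~ {cap:.2f} bits, ~{2 ** cap:.1f} resolvable dose levels")
```

Because capacity is a maximum over *P*(*S*), the returned value is at least as large as the mutual information for any particular input distribution, such as a uniform one.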

Using immunocytochemistry, we assayed nuclear concentrations of the transcription factor NF-κB in thousands of individual mouse fibroblasts 30 min after exposure to various TNF concentrations (Fig. 1D), choosing this time point because NF-κB translocation peaks at 30 min regardless of the concentration used, initiating expression of early response inflammatory genes (19–22). The NF-κB response value in a single cell could yield at most 0.92 ± 0.01 bits of information, which is equivalent to resolving 2^{0.92} = 1.9, or about two, concentrations of the TNF signal; the response thus essentially only reliably indicates whether TNF is present or not. (See SOM, Sections 2.2 and 3, regarding the low experimental uncertainty.) A bimodal input signal distribution, *P*(*S*), with peaks at low and high TNF concentrations maximizes the information (Fig. S1), supporting the notion of essentially binary (digital) sensing capabilities of this pathway (18), although we did not observe bimodal output responses, *P*(*R*|*S*).

Noise also limits other canonical pathways, including signaling by platelet-derived growth factor (PDGF), epidermal growth factor (23), and G-protein coupled receptors (24) to ~1 bit (Fig. S2A–C, Table S1). Even the most reliable system we examined, morphogen gradient signaling through the receptor Torso in *Drosophila* embryos (25), was limited to 1.61 bits (Fig. S2D, Table S1), corresponding to just ~3 distinguishable signal levels.

The pathways examined above are examples of individual biochemical communication channels (Fig. 1B) that capture relatively low amounts of information about signal intensity, which would allow only limited reliable decision making by a cell. However, information in biological systems is typically processed by networks comprising multiple communication channels, each transducing information about the signal. For instance, a transcription factor often regulates many genes, a receptor many transcription factors, and a diffusible ligand many cells. The outputs of such multiple channels together can provide more information about the signal than the output of any one channel (see SOM Section 4). Downstream signaling processes that converge to co-regulate common effectors, biological processes, or physiologic functions can then provide the integration point needed to combine the multiple outputs and realize the benefit of increased aggregate information (Fig. S3). To provide a unified framework for analyzing such varied networks, we first theoretically investigated information gained by network signaling in general, then experimentally tested the predictions made by the theory when applied to a specific system.

We considered two information theoretic models, similar to models of population coding in neural systems (26–28), for transmitting a signal *S* through multiple channels to the responses *R*_{1}, *R*_{2}, …, *R*_{n}, under the assumption of Gaussian variables (see SOM Section 5). The bush model utilizes independent channels (topologically resembling an upside-down shrub) (Fig. 2A), whereas the tree model signals through a common channel (“trunk”) to an intermediate *C*, which then relays the signal through multiple independent branches (Fig. 2B). For the bush model, the information is

$$I_{\text{bush}}(R_{1},\dots ,R_{n};S)=\frac{1}{2}\log_{2}\!\left(1+n\frac{\sigma_{S}^{2}}{\sigma_{S\to R}^{2}}\right), \tag{2}$$

where ${\sigma}_{S}^{2}$ is the variance of the signal distribution, and ${\sigma}_{S\to R}^{2}$ is the noise (variance) introduced in each branch. Thus, the information can grow logarithmically with the number of branches without an upper bound. In contrast, the information resulting from the tree model is

$$I_{\text{tree}}(R_{1},\dots ,R_{n};S)=\frac{1}{2}\log_{2}\!\left(1+\frac{n\sigma_{S}^{2}/\sigma_{C\to R}^{2}}{1+n\sigma_{S\to C}^{2}/\sigma_{C\to R}^{2}}\right), \tag{3}$$

where ${\sigma}_{S\to C}^{2}$ and ${\sigma}_{C\to R}^{2}$ are the trunk and branch noises, respectively (see SOM, Section 3.3). As the number of branches increases, the information asymptotically approaches an upper limit equal to the mutual information between the input signal and the common intermediate; thus the information lost to noise in the trunk determines the maximum throughput of a tree network.

Information gained by signaling through a network comprising multiple communication channels. (**A**) Schematic of a bush network with independent channels lacking an information bottleneck. (**B**) Schematic of a tree network with channels sharing a common trunk **...**

The key difference between the bush and tree networks is the absence or presence of this trunk-based information bottleneck. The biochemical structure of a network can resemble a tree, but if the trunk imposes little information limitation, the bush model, which lacks a bottleneck, might best estimate the capacity of the network. Additionally, the bush and tree models make various semi-quantitative predictions (see SOM, Section 6), such as the information captured by a network given the capacities of its component pathways. For example, for a bush network comprising two pathways, each with a 1-bit response, Eq. 2 implies ${\sigma}_{S}^{2}/{\sigma}_{S\to R}^{2}=3$ and that together they should yield $\frac{1}{2}{\text{log}}_{2}(1+2(3))=1.4$ bits.
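Equations 2 and 3, and the worked two-pathway example, can be reproduced directly. The short sketch below evaluates both formulas with illustrative variances (not fit to any data) and shows the tree model saturating at the trunk's information as branches are added.

```python
import numpy as np

def i_bush(n, snr):
    """Eq. 2: n independent branches; snr = sigma_S^2 / sigma_{S->R}^2."""
    return 0.5 * np.log2(1 + n * snr)

def i_tree(n, var_s, var_trunk, var_branch):
    """Eq. 3: a common trunk (noise var_trunk) feeding n branches
    (noise var_branch each)."""
    return 0.5 * np.log2(1 + (n * var_s / var_branch)
                             / (1 + n * var_trunk / var_branch))

# Worked example from the text: a 1-bit pathway implies snr = 3,
# because 0.5 * log2(1 + snr) = 1  =>  snr = 3.
print(f"one pathway:  {i_bush(1, 3.0):.2f} bits")   # 1.00
print(f"two pathways: {i_bush(2, 3.0):.2f} bits")   # 1.40

# A tree saturates at the trunk's information as branches are added
# (illustrative variances: var_s = 1, trunk noise 0.25, branch noise 1).
trunk_limit = 0.5 * np.log2(1 + 1.0 / 0.25)
for n in (1, 2, 10, 1000):
    print(f"n = {n:4d}: {i_tree(n, 1.0, 0.25, 1.0):.3f} bits")
print(f"trunk limit: {trunk_limit:.3f} bits")
```

The contrast is the point: `i_bush` grows without bound in `n`, while `i_tree` approaches `trunk_limit` from below no matter how many branches are added.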

TNF activates the NF-κB and c-Jun N-terminal kinase (JNK) pathways, stimulating nuclear localization of NF-κB and phosphorylated activating transcription factor-2 (ATF-2) (Fig. S4), respectively (29). To determine if the TNF signaling network contains a significant upstream information bottleneck limiting the information captured by these pathways, we examined whether the bush (bottleneck absent) or tree (bottleneck present) network model better approximates the network (Fig. S5). The models are applicable because the NF-κB (Fig. 1D) and ATF-2 (Fig. S6) response distributions are approximately Gaussian at all TNF concentrations. We found that NF-κB alone yielded at most 0.92 bits of information about TNF concentration, and ATF-2 alone yielded at most 0.85 ± 0.02 bits (Fig. S1B, Table S1). The bush model predicts that together these pathways yield 1.27 ± 0.01 bits (Fig. 2C), and a similar model assuming independent pathway responses that are not necessarily Gaussian likewise predicts an increase, to 1.13 ± 0.01 bits. The actual information determined by dual staining immunocytochemistry (Fig. 2D) was 1.05 ± 0.02 bits, much lower than both predictions (Fig. 2C), demonstrating that the bush model does not approximate the TNF network well. In contrast, the tree model predicts 1.03 ± 0.01 bits, matching the experimental value within error (Fig. 2C), and also correctly predicts the statistical dependency between the responses given the signal (Fig. S7).

The correspondence between the tree model predictions and experimental measurements strongly indicates that the network contains an information bottleneck. The tree model predicts that the maximum information that can pass through the bottleneck is 1.26 ± 0.13 bits (Fig. 2C), corresponding to just 2^{1.26} ≈ 2.4 distinguishable TNF concentrations. The known biochemistry of TNF signaling implies the bottleneck (trunk) comprises the steps of TNF receptor complex activation common to both pathways, including ligand binding, receptor trimerization, and complex formation and activation. Since all TNF signaling passes through the receptor complex, the multiple pathways of the TNF signaling network activated at the 30 min time point only modestly increase the information about TNF concentration, regardless of the number of pathways or their fidelity.

We next explored whether negative feedback, which can reduce noise (12, 30, 31), might alleviate the receptor-level signaling bottleneck. The information captured by a single channel (Eq. 2, *n* = 1) can be written as $\frac{1}{2}{\text{log}}_{2}({\sigma}_{R}^{2}/{\sigma}_{S\to R}^{2})$. Negative feedback can thus have equivocal effects on information, depending on the balance between its tendencies to reduce the dynamic range of the signaling response (32), represented by the response variance ${\sigma}_{R}^{2}$, and to reduce the noise, represented by ${\sigma}_{S\to R}^{2}$. Indeed, comparison of wildtype cells and cells lacking A20 (Fig. S8), an inhibitor of TNF receptor complexes whose expression is upregulated by NF-κB (33) (Fig. 3A), showed that A20-mediated negative feedback increases information at the 30 min time point but decreases it at 4 hrs (Fig. 3B).
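The single-channel expression above makes the tradeoff easy to see numerically. The sketch below uses hypothetical variances, not values fit to the A20 data, to show how feedback that shrinks the noise more than the response range raises the information, while feedback that collapses the range lowers it.

```python
import numpy as np

def info_single_channel(var_r, var_noise):
    """Information through one Gaussian channel, written as in the text:
    I = 0.5 * log2(sigma_R^2 / sigma_{S->R}^2)."""
    return 0.5 * np.log2(var_r / var_noise)

# Hypothetical variances chosen only to illustrate the tradeoff.
no_feedback = info_single_channel(var_r=4.0, var_noise=1.0)    # 1.0 bit

# Early time point: feedback cuts noise 4x while barely shrinking the range.
early_feedback = info_single_channel(var_r=3.5, var_noise=0.25)

# Late time point: feedback cuts noise 2x but collapses the range 4x.
late_feedback = info_single_channel(var_r=1.0, var_noise=0.5)

print(f"no feedback:    {no_feedback:.2f} bits")
print(f"early feedback: {early_feedback:.2f} bits")
print(f"late feedback:  {late_feedback:.2f} bits")
```

With these numbers the early-time feedback gains information over no feedback and the late-time feedback loses it, mirroring the qualitative pattern in Fig. 3B.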

Impact of negative feedback to the bottleneck on information transfer. (**A**) TNF signaling network diagram showing A20-mediated negative feedback to the information bottleneck. (**B**) Comparison of information about TNF concentration captured with and without **...**

To understand these different outcomes, we examined how A20 affects the dynamic range and noise at either time point. At the early time point, constitutively expressed A20 inhibits basal NF-κB activity, but TNF does not induce A20 expression rapidly enough to affect saturating levels of NF-κB at 30 min (Fig. 3C–D, S9) (17, 34). Hence, A20 negative feedback decreases noise, primarily at low TNF concentrations, and also increases the dynamic range by lowering basal NF-κB levels (Fig. 3E, S10A), explaining why information at 30 min. is higher for wildtype than for A20^{−/−} cells (Fig. 3B). In contrast, at the late time point, A20 is increased in wildtype cells (17, 34). The negative feedback decreases noise at all TNF concentrations but also decreases the dynamic range by strongly suppressing the maximum inducible NF-κB activity (Fig. 3E, S10A). The net effect is lower information for wildtype versus A20^{−/−} cells at 4 hrs (Fig. 3B).

We observed that A20 negative feedback similarly both improves and limits information at the early and late time points respectively for ATF-2 alone, or together with NF-κB (Fig. 3B, S10B), consistent with A20 affecting the portion of the network common to both pathways. Nevertheless, the maximal information about TNF concentration acquired with or without A20-mediated negative feedback was still ~1 bit, suggesting limited advantages for mitigating the information bottleneck in this pathway using negative feedback.

We next considered whether networks comprising multiple target genes can capture substantial amounts of information through time integration. If the target gene product lifetime is long compared to its transcription and translation time scales, the accumulated protein concentration is approximately proportional to the time integral of signaling activity, thereby averaging out temporal fluctuations (35, 36). However, the biochemical readout of protein synthesis can introduce extra noise, confounding determination of the information contained in the time integral. Fortunately, the maximum information captured by a tree network, in which the time integral of transcription factor activity is the intermediate signal activating multiple independent target genes (Fig. 4A, inset), is determined by the trunk (time integration) rather than the branch noise (readout mechanism). We measured the information captured by such tree networks in cells stably transfected with different copy numbers (1.8-fold difference, as determined by polymerase chain reaction) of a gene for a stable green fluorescent protein (GFP) (37) reporting on NF-κB activity (Fig. 4B). Using the tree model to extrapolate the extent of the bottleneck, under the assumption that ~10 hrs of TNF exposure induces similar expression level and noise for each gene, indicates that 1.64 ± 0.36 bits is the maximum information that integrating NF-κB activity over the experimental time period can yield about TNF concentration (Fig. 4A), regardless of the readout mechanism.

Information gained by signaling through networks of multiple genes. (**A**) Plot shows the unique curve (solid black) determined by the tree model (inset), passing through the experimentally determined values (circles), for information as a function of the **...**

To understand why information was only moderately higher compared to a single time point (1.64 versus 0.92 bits), we monitored GFP expression in individual cells, finding that, for any given cell, GFP accumulated linearly in time in a nearly deterministic fashion, although its onset and accumulation rate varied from cell to cell (Fig. 4C). This is consistent with observations made using live cell probes (18–20) showing NF-κB dynamics to be essentially deterministic over the experimental time scale within each cell, but distinct across cells. We thus conclude that the ability of time integration to increase the information about TNF concentration is limited by the lack of rapid temporal fluctuations that would otherwise be suppressed by integration over the 10 hour response.
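The conclusion that integration helps only when there are rapid fluctuations to average can be illustrated with a toy Gaussian model, not a model of the GFP reporter itself: each cell's response contains the signal, a slow cell-specific offset that integration cannot remove, and fast temporal noise that it can. All variances are illustrative (set to 1), and the Gaussian correlation formula for mutual information is used throughout.

```python
import numpy as np

rng = np.random.default_rng(1)
cells, steps = 5000, 100

signal = rng.normal(0.0, 1.0, cells)            # input experienced by each cell
slow = rng.normal(0.0, 1.0, cells)              # cell-to-cell offset, fixed in time
fast = rng.normal(0.0, 1.0, (cells, steps))     # rapid temporal fluctuations

def gaussian_mi_bits(x, y):
    """MI of jointly Gaussian variables: I = -0.5 * log2(1 - rho^2)."""
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log2(1 - rho ** 2)

snapshot = signal + slow + fast[:, 0]           # response at a single time point
integral = signal + slow + fast.mean(axis=1)    # time-integrated response

snapshot_bits = gaussian_mi_bits(signal, snapshot)
integral_bits = gaussian_mi_bits(signal, integral)
print(f"single time point: {snapshot_bits:.2f} bits")
print(f"time integral:     {integral_bits:.2f} bits")
```

Averaging over 100 steps essentially eliminates the fast noise, yet the information gain is modest because the slow offset survives integration, which is the situation the GFP measurements suggest for NF-κB dynamics.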

Finally, we considered signaling via multiple cells, each acting as a separate information channel within a network (Fig. 5A, inset). An ensemble of cells resembles a bush network if each cell directly and independently accesses the same signal, and since bush networks do not contain bottlenecks, substantial increases in information might be obtained. To test this hypothesis, we analyzed the collective TNF response of different numbers of cells, as measured by immunocytochemistry. We varied cell number by considering cells within non-overlapping circular regions of variable diameter (Fig. 5B), and used the average NF-κB response within each region to simulate cells contributing to a collective response in proportion to their NF-κB activity. The bush model predicts (Eq. 2), and the data confirm (Fig. 5A), that the information should increase logarithmically with the number of independently signaling cells functioning collectively.
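The logarithmic scaling predicted by Eq. 2 can be checked by simulation: averaging the responses of *n* independent cells reduces the effective noise variance *n*-fold, and for jointly Gaussian variables the mutual information follows directly from the signal-response correlation. The variances below are arbitrary and not fit to the TNF data.

```python
import numpy as np

rng = np.random.default_rng(0)
var_s, var_noise, trials = 1.0, 3.0, 200_000

def gaussian_mi_bits(x, y):
    """MI of two jointly Gaussian variables from their correlation
    coefficient: I = -0.5 * log2(1 - rho^2)."""
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log2(1 - rho ** 2)

results = []
for n in (1, 2, 4, 8, 14):
    s = rng.normal(0.0, np.sqrt(var_s), trials)
    # Each of n cells reads the same signal with independent noise;
    # the collective response is their average.
    noise = rng.normal(0.0, np.sqrt(var_noise), (n, trials))
    r = s + noise.mean(axis=0)
    i_sim = gaussian_mi_bits(s, r)
    i_eq2 = 0.5 * np.log2(1 + n * var_s / var_noise)   # Eq. 2 prediction
    results.append((n, i_sim, i_eq2))
    print(f"n = {n:2d}: simulated {i_sim:.3f} bits, Eq. 2 predicts {i_eq2:.3f} bits")
```

The simulated values track the Eq. 2 curve, illustrating why an ensemble with no shared bottleneck keeps gaining information as cells are added.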

Information gained by signaling through networks of multiple cells. (**A**) Comparison of experimentally measured information obtained by collective cell responses (circles) versus the logarithmic trend (solid black line) predicted by the bush model (inset). (**B** **...**

Moreover, we found that networks of just 14 cells can yield up to 1.8 bits of information, far greater than the other network types analyzed above. Since ensembles of this size can plausibly experience a similar concentration of a diffusing signal such as TNF and function collectively (21, 38) (e.g., TNF-activated blood vessel endothelial cells (39)), collective cell behavior can effectively increase the information gained and produce responses that can discriminate between many TNF concentrations. Nonetheless, networks relying on cell-cell communication can still contain bottlenecks. For instance, TNF can be secreted by macrophages stimulated by lipopolysaccharide (LPS) from invading bacteria, with the information about the initial LPS dose lost within the macrophage signaling networks prior to secretion of TNF.

By treating biochemical signaling systems as information theoretic communication channels, we have rigorously and quantitatively shown that, in a single cell, noise can substantially restrict the amount of information transduced about input intensity, particularly within individual signaling pathways. The bush and tree network models, which provide a unified theoretical framework for analyzing branched motifs widespread in natural and synthetic signaling networks, further demonstrated that signaling networks can be more effective in information transfer, although bottlenecks can also severely limit the information gained. Receptor-level bottlenecks restrict the TNF and also PDGF signaling networks (Fig. S11) and may be prevalent in other signaling systems.

We explored several strategies that a cell might employ to overcome restrictions due to noise. We found that negative feedback can suppress bottleneck noise, but this benefit can be offset by a concomitantly reduced dynamic range of the response. Time integration can increase the information transferred, but only to the extent that the response undergoes substantial dynamic fluctuations in a single cell over the physiologically relevant time course. The advantage of collective cell responses can also be substantial, but it is limited by the number of cells exposed to the same signal or by the information present in the initiating signal itself.

Responses incorporating the signaling history of the cell might also increase the information (40, 41). For instance, responses relative to the basal state (fold-change response) might be less susceptible to noise arising from diverse initial states (23), although this does not necessarily translate into large amounts of transferred information (Table S1). Similarly, for the reporter gene system described here (Fig. S12), ~0.5 bits of additional information can be obtained if a cell can determine expression levels at both early and late time points. However, noise in the biochemical networks a cell uses to record earlier output levels and to later compute the final response may nullify the information gain potentially provided by this strategy. Overall, we anticipate that the information theory paradigm can extend to the analysis of noise mitigation strategies and information transfer mechanisms beyond those explored here, in order to determine what specific signaling systems can do reliably despite noise.

The authors thank Alex Hoffmann, Mel Simon, Stanislas Shvartsman, Cellina Cohen-Saidon, and Uri Alon for sharing data and materials; Ambhi Ganesan and Hao Chang for experimental assistance; and Pablo Iglesias, Yan Qi, and Andrew Feinberg for insightful discussions and reviewing drafts of the manuscript. This work was supported by the National Institutes of Health (GM072024, R.C., A.R., C.J.W., A.L.), the Medical Scientist Training Program at the Johns Hopkins University (R.C.), and, in early stages of the work, the Los Alamos National Laboratory Directed Research and Development program (I.N.). Raw data is available upon request.

1. Albeck JG, Burke JM, Spencer SL, Lauffenburger DA, Sorger PK. Modeling a snap-action, variable-delay switch controlling extrinsic cell death. PLoS Biol. 2008;6:2831. [PMC free article] [PubMed]

2. Rosenfeld N, Young JW, Alon U, Swain PS, Elowitz MB. Gene regulation at the single-cell level. Science. 2005;307:1962. [PubMed]

3. Perkins TJ, Swain PS. Strategies for cellular decision-making. Mol Syst Biol. 2009;5:326. [PMC free article] [PubMed]

4. Blake WJ, Kærn M, Cantor CR, Collins JJ. Noise in eukaryotic gene expression. Nature. 2003;422:633. [PubMed]

5. Elowitz MB, Levine AJ, Siggia ED, Swain PS. Stochastic gene expression in a single cell. Science. 2002;297:1183. [PubMed]

6. Paulsson J. Summing up the noise in gene networks. Nature. 2004;427:415. [PubMed]

7. Pedraza JM, van Oudenaarden A. Noise propagation in gene networks. Science. 2005;307:1965. [PubMed]

8. Raser JM, O'Shea EK. Control of stochasticity in eukaryotic gene expression. Science. 2004;304:1811. [PMC free article] [PubMed]

9. Cover TM, Thomas JA. Elements of information theory. New York: Wiley; 1991.

10. de Ruyter van Steveninck RR, Lewen GD, Strong SP, Koberle R, Bialek W. Reproducibility and variability in neural spike trains. Science. 1997;275:1805. [PubMed]

11. Fuller D, et al. External and internal constraints on eukaryotic chemotaxis. Proc Natl Acad Sci U S A. 2010;107:9656. [PubMed]

12. Ziv E, Nemenman I, Wiggins CH. Optimal signal processing in small stochastic biochemical networks. PLoS One. 2007;2:e1077. [PMC free article] [PubMed]

13. Tkacik G, Callan CG, Jr, Bialek W. Information flow and optimization in transcriptional regulation. Proc Natl Acad Sci U S A. 2008;105:12265. [PubMed]

14. Mehta P, Goyal S, Long T, Bassler BL, Wingreen NS. Information processing and signal integration in bacterial quorum sensing. Mol Syst Biol. 2009;5:325. [PMC free article] [PubMed]

15. Cheong R, Hoffmann A, Levchenko A. Understanding NF-kappaB signaling via mathematical modeling. Mol Syst Biol. 2008;4:192. [PMC free article] [PubMed]

16. Cheong R, Wang CJ, Levchenko A. High content cell screening in a microfluidic device. Mol Cell Proteomics. 2009;8:433. [PMC free article] [PubMed]

17. Werner SL, et al. Encoding NF-kappaB temporal control in response to TNF: distinct roles for the negative regulators IkappaBalpha and A20. Genes Dev. 2008;22:2093. [PubMed]

18. Tay S, et al. Single-cell NF-kappaB dynamics reveal digital activation and analogue information processing. Nature. 2010;466:267. [PMC free article] [PubMed]

19. Ashall L, et al. Pulsatile stimulation determines timing and specificity of NF-kappaB-dependent transcription. Science. 2009;324:242. [PMC free article] [PubMed]

20. Nelson DE, et al. Oscillations in NF-kappaB signaling control the dynamics of gene expression. Science. 2004;306:704. [PubMed]

21. Cheong R, et al. Transient IkappaB kinase activity mediates temporal NF-kappaB dynamics in response to a wide range of tumor necrosis factor-alpha doses. J Biol Chem. 2006;281:2945. [PubMed]

22. Hoffmann A, Levchenko A, Scott ML, Baltimore D. The IkappaB-NF-kappaB signaling module: temporal control and selective gene activation. Science. 2002;298:1241. [PubMed]

23. Cohen-Saidon C, Cohen AA, Sigal A, Liron Y, Alon U. Dynamics and variability of ERK2 response to EGF in individual living cells. Mol Cell. 2009;36:885. [PubMed]

24. Bao XR, Fraser ID, Wall EA, Quake SR, Simon MI. Variability in G-protein-coupled signaling studied with microfluidic devices. Biophys J. 2010;99:2414. [PubMed]

25. Coppey M, Boettiger AN, Berezhkovskii AM, Shvartsman SY. Nuclear trapping shapes the terminal gradient in the Drosophila embryo. Curr Biol. 2008;18:915. [PMC free article] [PubMed]

26. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nat Rev Neurosci. 2006;7:358. [PubMed]

27. Pillow JW, et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature. 2008;454:995. [PMC free article] [PubMed]

28. Schneidman E, Bialek W, Berry MJ., II Synergy, redundancy, and independence in population codes. J Neurosci. 2003;23:11539. [PubMed]

29. Wajant H, Pfizenmaier K, Scheurich P. Tumor necrosis factor signaling. Cell Death Differ. 2003;10:45. [PubMed]

30. Becskei A, Serrano L. Engineering stability in gene networks by autoregulation. Nature. 2000;405:590. [PubMed]

31. Lestas I, Vinnicombe G, Paulsson J. Fundamental limits on the suppression of molecular fluctuations. Nature. 2010;467:174. [PMC free article] [PubMed]

32. Yu RC, et al. Negative feedback that improves information transmission in yeast signalling. Nature. 2008;456:755. [PMC free article] [PubMed]

33. Wertz IE, et al. De-ubiquitination and ubiquitin ligase domains of A20 downregulate NF-kappaB signalling. Nature. 2004;430:694. [PubMed]

34. Lee EG, et al. Failure to regulate TNF-induced NF-kappaB and cell death responses in A20-deficient mice. Science. 2000;289:2350. [PMC free article] [PubMed]

35. Shahrezaei V, Swain PS. Analytical distributions for stochastic gene expression. Proc Natl Acad Sci U S A. 2008;105:17256. [PubMed]

36. Krishna S, Jensen MH, Sneppen K. Minimal model of spiky oscillations in NF-kappaB signaling. Proc Natl Acad Sci U S A. 2006;103:10840. [PubMed]

37. Thierfelder S, Ostermann K, Gobel A, Rodel G. Vectors for glucose-dependent protein expression in Saccharomyces cerevisiae. Appl Biochem Biotechnol. 2011;163:954. [PubMed]

38. Francis K, Palsson BO. Effective intercellular communication distances are determined by the relative time constants for cyto/chemokine secretion and diffusion. Proc Natl Acad Sci U S A. 1997;94:12258. [PubMed]

39. Parkin J, Cohen B. An overview of the immune system. Lancet. 2001;357:1777. [PubMed]

40. Nemenman I, Lewen GD, Bialek W, de Ruyter van Steveninck RR. Neural coding of natural stimuli: information at sub-millisecond resolution. PLoS Comput Biol. 2008;4:e1000025. [PMC free article] [PubMed]

41. Strong SP, Koberle R, de Ruyter van Steveninck RR, Bialek W. Entropy and information in neural spike trains. Phys Rev Lett. 1998;80:197.

42. Freedman DA, Folkman J. Maintenance of G1 checkpoint controls in telomerase-immortalized endothelial cells. Cell Cycle. 2004;3:811. [PubMed]

43. Cohen AA, et al. Dynamic proteomics of individual cancer cells in response to a drug. Science. 2008;322:1511. [PubMed]

44. Paninski L. Estimation of entropy and mutual information. Neural Comput. 2003;15:1191.

45. Nemenman I, Shafee F, Bialek W. In: Advances in Neural Information Processing Systems. Dietterich TG, Becker S, Ghahramani Z, editors. vol. 14. Cambridge, MA: MIT Press; 2002. pp. 95–100.

46. Slonim N, Atwal GS, Tkacik G, Bialek W. Information-based clustering. Proc Natl Acad Sci U S A. 2005;102:18297. [PubMed]

47. Margolin AA, et al. ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006;7(Suppl 1):S7. [PMC free article] [PubMed]

48. Kraskov A, Stögbauer H, Grassberger P. Estimating mutual information. Phys Rev E. 2004;69:066138. [PubMed]

49. Slonim N, Atwal GS, Tkacik G, Bialek W. Estimating mutual information and multi-information in large networks. arXiv cs.IT/0502017. 2005

50. Chong EKP, Zak SH. An introduction to optimization. 3rd ed. Hoboken NJ: Wiley; 2008. pp. 457–471.

51. Tkacik G, Callan CG, Jr, Bialek W. Information capacity of genetic regulatory elements. Phys Rev E. 2008;78:011910. [PMC free article] [PubMed]

52. Nemenman I, Bialek W. Occam factors and model independent Bayesian learning of continuous distributions. Phys Rev E. 2002;65:026137. [PubMed]

53. Panzeri S, Senatore R, Montemurro MA, Petersen RS. Correcting for the sampling bias problem in spike train information measures. J Neurophysiol. 2007;98:1064. [PubMed]

54. Margolin AA, Wang K, Califano A, Nemenman I. Multivariate dependence and genetic networks inference. IET Syst Biol. 2010;4:428. [PubMed]

55. Magesacher T, Odling P, Sayir J, Nordstrom T. Paper presented at the IEEE International Symposium on Information Theory, Yokohama, Japan, 2003.
