J Comput Neurosci. Author manuscript; available in PMC 2010 August 2.
Published in final edited form as:
PMCID: PMC2913428

Balance between noise and adaptation in competition models of perceptual bistability

Asya Shpiro, Ruben Moreno-Bote, and Nava Rubin
Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA


Perceptual bistability occurs when a physical stimulus gives rise to two distinct interpretations that alternate irregularly. Noise and adaptation processes are two possible mechanisms for switching in neuronal competition models that describe the alternating behaviors. Either process, if strong enough, could alone cause the alternations in dominance. We examine their relative roles in producing alternations by studying models in which, by smoothly varying the parameters, the rhythmogenesis mechanism can be changed from adaptation-driven to noise-driven. Given the experimental constraints on the statistics of the alternations (the mean and shape of the dominance duration distribution and the correlations between successive durations), we ask whether one of the mechanisms can be ruled out. We conclude that in order to comply with the observed mean of the dominance durations and their coefficient of variation, the models must operate with a balance between the noise and adaptation strengths: both mechanisms are involved in producing alternations, in such a way that the system operates near the boundary between being adaptation-driven and noise-driven.

Keywords: Bistability, Competition, Mutual inhibition, Attractor, Oscillator, Noise

1 Introduction

When observers are presented with an ambiguous stimulus which supports two distinct interpretations, perception alternates between these interpretations in a random manner, a phenomenon called perceptual bistability. Examples of bistable perception include, among others, binocular rivalry (Wheatstone 1838; Levelt 1968; Blake 1989; Logothetis 1998; Blake 2001; Tong 2001), ambiguous motion displays (Wallach 1935, 1996; Hupe and Rubin 2003), ambiguous depth perception (Necker 1832; Rubin, 1921, 2001), and random dot kinematograms (Julesz 1971). The most extensively studied bistable phenomenon, binocular rivalry, occurs when two different images are presented simultaneously and independently to the two eyes. Only one of the images is perceived at any given moment, with dominance switching randomly between the two.

Bistable perception, and binocular rivalry in particular, are often modeled mechanistically using a reciprocal inhibition architecture, as shown schematically in Fig. 1(a). There are two neuronal populations whose activities represent the two competing interpretations of the stimulus. The dominant population exerts a strong inhibitory influence on the competing one, so that the latter is suppressed and only one interpretation is perceived at a time (mutual exclusivity). In most models, the switching in dominance between the two populations is realized by an adaptation mechanism, such as spike frequency adaptation and/or synaptic depression. The adaptation process weakens the inhibition, either by decreasing the activity of the dominant population or by decreasing the strength of the inhibitory connection between the populations, thus allowing the suppressed population to become active. The resulting activity of such a system consists of deterministic, continuous anti-phase oscillations of the firing rates of the two populations, with corresponding switches between the two percepts. These general principles have been incorporated in many mathematical models of binocular rivalry (Lehky 1988; Kalarickal and Marshall 2000; Lago-Fernandez and Deco 2002; Laing and Chow 2002; Stollenwerk and Bode 2003; Wilson 2003; Shpiro et al. 2007). We term these oscillator models: noise is inessential in them. Experimentally, noise is an unavoidable process, but if it could somehow be removed, alternations would still occur.

Fig. 1
(a) Network architecture for neuronal competition models with direct mutual inhibition (DIRI). u1,2 is the population activity. Each population receives a deterministic input of equal strength Ii and independent noise n1,2. Slow negative feedback process, ...

In other neural competition models, different percepts are represented by multiple stable states (attractors) of the system (Hertz et al. 1991; Haken 1994; Riani and Simonotto 1994; Salinas 2003; Freeman 2005; Kim et al. 2006; Moreno-Bote et al. 2007). Noise is responsible for the switches between the states, so that in the absence of noise no alternations are possible. These are the noise-driven attractor models (Moreno-Bote et al. 2007).

Here we consider these two types of models within a single theoretical framework. We show that oscillator and attractor behavior are two regimes of a single neuronal competition model that includes both noise and adaptation processes. In the oscillator regime, adaptation causes the populations to alternate in dominance, while noise is only the source of the irregularity in the switching times. In the attractor regime, noise is the primary cause of the switches. While the adaptation process is still present, it is not strong enough to cause alternations on its own; in the absence of noise, the model in the attractor regime would not switch. Figure 1(b) shows schematically the behavior of a model in the two extremes, when only adaptation and no noise is present (bottom), and when only noise and no adaptation is present (top). In the oscillator regime without noise, alternations are completely deterministic, the dominance duration (the time between switches in the populations' activities) does not fluctuate over time, and the coefficient of variation (CV) is equal to zero. Alternatively, in the attractor regime without adaptation, the distribution of the dominance durations is exponential, and the CV is close to one. In our description, the change between the attractor and oscillator regimes is realized by a smooth change of the model's parameters, such as adaptation strength and input strength, rather than by a change of the model's architecture.

Different perceptual bistability phenomena exhibit some common statistical properties. The distribution of dominance durations is shaped as a skewed Gaussian, typically fit by gamma or log-normal functions (Levelt 1968; Lehky 1995; Rubin and Hupe 2004; Zhou et al. 2004). The mean dominance duration of each percept depends both on the bistable phenomenon and on the observer. Typically, it is between 1 and 10 s in binocular rivalry experiments (Levelt 1968; Mueller and Blake 1989; Leopold and Logothetis 1996; Zhou et al. 2004; Haynes et al. 2005), and between 5 and 10 s for ambiguous depth perception (Meng and Tong 2004; Zhou et al. 2004; Moreno-Bote et al. 2008) and ambiguous motion displays (Hupe and Rubin 2003). The coefficient of variation of the dominance durations, CV, defined as the ratio between the standard deviation and the mean, varies between 0.4 and 0.6 (Levelt 1968; Zhou et al. 2004; Moreno-Bote et al. unpublished). Since no significant correlations between the durations of successive percepts have been reported, successive durations appear to be statistically independent (Fox and Herrmann 1967; Levelt 1968; Lehky 1995; Rubin and Hupe 2004; but see van Dam et al. 2007).

To match the models' performance to the experimentally observed behavior, we focus on the two main statistical measures of the stochastic switching behavior: the mean dominance duration of a percept and the coefficient of variation of the dominance durations, CV. We find the regions of the models' parameter space where the simulated behavior statistics (mean dominance duration and CV) match those observed in experiments. Despite the seemingly broad ranges of the mean dominance duration and CV that we consider, these boundaries impose significant constraints on the models' parameters.

We consider two neuronal competition models that describe bistable perception. One of the models represents the simple DIRect Inhibition architecture shown in Fig. 1(a) (DIRI model, see Shpiro et al. 2007). The other model (Moreno-Bote et al. 2007) characterizes a different, Global Excitation-Local Inhibition architecture (GELI model, see Fig. 3 in Moreno-Bote et al. 2007). We show that because of the interplay between noise and adaptation, the range of the models' parameters that leads to alternations consistent with the experimentally observed statistics is small compared to the overall range of parameters where alternations are possible. Moreover, we show that independently of the model's architecture and the details of the adaptation process, this relevant range of parameters is such that the models operate at the boundary between the oscillator and attractor regimes.

Fig. 3
Effects of the input and slow negative feedback process strength on the mean dominance duration T and the CV of the dominance duration distribution in the presence of noise. (a, b) T and CV in the spike-frequency adaptation ...

2 Methods

We use an idealized population firing rate model to describe the network shown in Fig. 1(a) (DIRI model). Two different implementations of the adaptation process, spike-frequency adaptation and synaptic depression, are considered separately (Shpiro et al. 2007). The model with spike-frequency adaptation only (SFA model) is described by the following set of equations:

du1/dt = −u1 + f(−βu2 − γa1 + I1 + n1)
τa da1/dt = −a1 + u1
du2/dt = −u2 + f(−βu1 − γa2 + I2 + n2)
τa da2/dt = −a2 + u2        (1)
The following set of equations describes the model with synaptic depression only (SD model):

du1/dt = −u1 + f(−βd2u2 + I1 + n1)
τd dd1/dt = 1 − d1 − δd1u1
du2/dt = −u2 + f(−βd1u1 + I2 + n2)
τd dd2/dt = 1 − d2 − δd2u2        (2)
Below, when we use the terminology SFA and SD, the DIRI architecture will be implied; for the GELI architecture we consider only spike-frequency adaptation. In the above two sets of Eqs. (1) and (2), u1 and u2 are the spatially and short-time averaged firing rates characterizing the activities of populations 1 and 2, normalized so that their maximum value is 1. The time scale of the populations' firing rates is taken to be the unit of time, with the convention that one unit of time equals 10 ms in real time (see, for example, Amit and Brunel 1997; comparable values of 15 and 20 ms have been used by Wilson 2003, 2007). ai is the spike-frequency adaptation variable with time scale τa; di is the synaptic depression variable with time scale τd. The strength of the slow process is determined by the parameters γ (for spike-frequency adaptation) and δ (for synaptic depression). The gain function is taken to be a sigmoid, f(x) = 1/(1 + exp(−(x − θ)/k)), where 1/k defines the slope of the gain function and θ is a threshold. The input to the gain function consists of the deterministic external stimulus I1 = I2 and a stochastic term ni which simulates the noise in the system (see, for example, Mattia and Del Giudice 2002). The input strength, Ii, is assumed to increase monotonically with the stimulus strength (for example, image contrast in binocular rivalry experiments). The strength of inhibition is β. In the SD model, the cross-inhibition term is reduced by the synaptic depression variable. In the SFA model, the adaptation variable term is an additional negative input to the gain function. The input noise is an exponentially filtered white noise, defined by the following equation:

τn dni/dt = −ni + σ√(2τn) ηi(t)        (3)
where σ is the standard deviation and τn is the time scale of the noise (Risken 1989), and ηi(t) is white noise with zero mean and unit variance. This choice of noise can be thought of as a model for synaptic filtering of the inputs to a neuron (Brunel and Sergi 1998; Moreno-Bote and Parga 2004). Finite-size effects can be another source of noise (Soula and Chow 2007; Buice and Chow 2007); however, we do not consider them in our simple ad hoc mean-field model.

Typically, firing rate models that are used to describe rhythmic network behavior include the recurrent (intrapopulation) excitation term. However, in the models describing bistable perception recurrent excitation may be harmful, as it can lead to rhythmogenesis in an isolated population (Shpiro et al. 2007). We, therefore, do not include recurrent excitation in the DIRI model.

The following values of the parameters are used: τa = τd = 200 (2 sec in real time), τn = 10 (0.1 sec in real time), k = 0.1, θ = 0, β = 1 in the SFA model and β = 0.75 in the SD model. We vary the strength of the noise (σ) and the strength of the slow processes (γ for spike-frequency adaptation and δ for synaptic depression), as well as the strength of the deterministic external input to the system (I1 = I2).

We also consider the alternative architecture model, introduced by Moreno-Bote (GELI model, Moreno-Bote et al. 2007). The GELI model is described by the following equations for the mean firing rates:


where f(x) = 1/(1 + exp(−(x − θ)/k)), and Θ(s) is the threshold-linear (rectification) function, that is, Θ(s) = s if s ≥ 0 and Θ(s) = 0 if s < 0. This model differs from the SFA model described by Eq. (1) by the presence of the recurrent excitation term, proportional to α, and the nonlinear inhibition term, proportional to β. Architectures similar to the one we use here have been used to model gain normalization (see, for example, Grossberg 1973 and Moldakarimov et al. 2005). Our network differs from the ones described in these papers in several important ways, including the way the local inhibition activation depends on the activity of the local excitatory population and on the external inputs.

The exponentially filtered white noise whose dynamics are described by Eq. (3) is used, as in the SFA and SD models. The values of the model's parameters used in the simulations are τa = 200 (2 s in real time), τn = 10 (0.1 s in real time), θ = 0, k = 0.05, α = 0.75, β = φ = 0.5. We vary the strength of the noise and the strength of the adaptation γ, as well as the strength of the deterministic external input to the system, I1 = I2.

For each set of parameters tested, we calculate 5000 s time courses of the models' behavior, which allows for at least 250 switches in dominance, and obtain the time series of the dominance durations for each set of the models' parameters. Population i is considered to be dominant and population j to be suppressed if ui > uj. We calculate several statistical characteristics of the time series: the mean dominance duration T, the coefficient of variation of the dominance durations (CV, defined as the ratio between the standard deviation and the mean), and the correlation coefficient R between two successive dominance durations, that is, between the dominance duration of one percept and the immediately following dominance duration of the alternative percept:

T = (1/N) Σk Tk        (5)

CV = (1/T) √((1/N) Σk (Tk − T)²)        (6)

R = Σk (Tk − ⟨Tk⟩)(Tk+1 − ⟨Tk+1⟩) / √(Σk (Tk − ⟨Tk⟩)² · Σk (Tk+1 − ⟨Tk+1⟩)²)        (7)
where Tk is the k-th dominance duration, that is, the time between the switches k and k + 1. The correlation coefficient indicates the degree of linear dependence between the two random variables. We consider the correlation to be significant if the probability of obtaining a correlation value as large as the observed one by random chance is less than 5% (McClave and Sincich 2006). We perform a Kolmogorov-Smirnov goodness-of-fit test to compare the simulated dominance duration distributions with the gamma, log-normal, and Weibull distributions. The probability density functions of these distributions are given by the following expressions:

p(t) = (b^a / Γ(a)) t^(a−1) e^(−bt)

for the gamma distribution, where the mean of the distribution is equal to a/b, the variance is equal to a/b², and Γ(a) is the gamma function,

p(t) = (1/(tσ√(2π))) exp(−(ln t − μ)²/(2σ²))

for the log-normal distribution, where the mean of the distribution is equal to e^(μ+σ²/2) and the variance is equal to (e^(σ²) − 1) e^(2μ+σ²), and

p(t) = ab(bt)^(a−1) e^(−(bt)^a)

for the Weibull distribution, where the mean of the distribution is equal to (1/b) Γ(1 + 1/a) and the variance is equal to (1/b²) (Γ(1 + 2/a) − Γ²(1 + 1/a)).

The model equations were implemented in the C programming language using the forward Euler method with an integration time step of 0.1 (1 ms in real time). Further decreasing the integration time step does not change the results of the simulations. The GNU Scientific Library was used to generate Gaussian white noise. Matlab software was used to calculate the statistical properties of the dominance duration distributions.

For the reader's convenience, below are the definitions of all abbreviations used in the paper.

Models:

DIRI: DIRect Inhibition
SFA: Spike Frequency Adaptation model with DIRI architecture
SD: Synaptic Depression model with DIRI architecture
GELI: Global Excitation-Local Inhibition

Types of behavior:

OSC: OSCillatory behavior
ATT: ATTractor dynamics (winner-take-all behavior)
SIM: SIMultaneously active/inactive populations

Other:

CV: Coefficient of Variation
ROR: Region of Relevance

3 Results

3.1 Competition models: direct inhibition models without noise—oscillator and attractor dynamics

We first investigate the behavior of the SFA and SD models of perceptual switches (Eqs. (1) and (2)) in the absence of noise. We solve the systems of differential equations numerically and observe that, depending on the values of the systems' parameters, several types of solutions are possible. The behavior is periodic when the two populations take turns being active. The behavior is winner-take-all when, depending on the initial conditions, only one population remains active indefinitely while the other remains silent (attractor dynamics). Finally, both populations can simultaneously maintain the same steady level of activity. The time courses of the network activity corresponding to the different modes of behavior can be found in Shpiro et al. (2007).

The results of our simulations are summarized in Fig. 2(a) for the SFA model and panel (b) for the SD model. Figure 2 is a "phase diagram" of the system in the parameter space of input strength and adaptation (or depression) strength for a particular value of the inhibition strength. Consider, first, a system without a slow process (γ = 0 for the SFA model and δ = 0 for the SD model; the top border of the diagrams in Fig. 2(a) and (b)). In this case, both the SFA and SD models reduce to

du1/dt = −u1 + f(−βu2 + I1)
du2/dt = −u2 + f(−βu1 + I2)
with f(x) = 1/(1 + exp(−(x − θ)/k)) and I1 = I2. Three types of behavior are possible in this reduced model. For large values of the input, a single stable equilibrium point exists, u1 = u2 ≈ 1: both populations are simultaneously active at a high level. For small values of the input, u1 = u2 ≈ 0 is the single stable equilibrium point, and both populations are simultaneously active at a low level. For values of the input I1 = I2 between θ and θ + β, approximately, where θ is the threshold of the gain function, the equilibrium point u1 = u2 is no longer stable. Instead, u1 ≈ 1, u2 ≈ 0 and u1 ≈ 0, u2 ≈ 1 are the two stable equilibrium points, and the system is bistable: it operates in the winner-take-all regime. Note that all the equilibrium values and the borders of the winner-take-all regime become exact for the Heaviside step-function form of the gain, that is, f(x) = 0 when x < θ and f(x) = 1 when x ≥ θ (Curtu et al. 2008).

Fig. 2
Phase, or response, diagram of the system's behavior for the spike-frequency adaptation (SFA) model (panel a) and synaptic depression (SD) model (panel b) in the (input strength, adaptation/depression strength) parameter space in the absence of noise. ...

For non-zero values of the adaptation/depression strength, oscillations are possible for certain values of the input strength I1 = I2 in both the SFA and SD models. Evolution of the slow variable (adaptation or depression) allows the input to the gain function in Eqs. (1) and (2), (−βuj − γai + Ii) or (−βdjuj + Ij), to pass the threshold θ, leading to a switch in the populations' activities. The regions of the input where oscillations are possible appear first near the (two) boundaries between the winner-take-all and the simultaneously low and simultaneously high activity regimes. The ranges of the input where oscillations are produced widen as the adaptation/depression strength increases, while the range of the input where the winner-take-all behavior is produced shrinks, until it disappears at some value of the spike-frequency adaptation/synaptic depression strength. The region of the (adaptation/depression strength, input strength) parameter space where oscillatory behavior is produced is shown in white in the phase diagrams of Fig. 2(a) and (b) and marked OSC. The region where the winner-take-all behavior is produced is shown in black and marked ATT (standing for ATTractor dynamics). The regions of simultaneously high or simultaneously low activities (gray areas on the phase diagrams, marked SIM) remain outside the region of oscillatory behavior. Further increase of the adaptation strength widens the region of oscillatory behavior in the SFA model: as the range of the γai term in the gain function input increases, the range of the input Ii needed to bring the gain function input (−βuj − γai + Ii) through the threshold increases as well. Further increase of the depression strength, in contrast, leads to shrinking and, finally, disappearance of the oscillatory behavior region in the SD model. For a more detailed analysis of the system of nonlinear differential equations describing the SFA model, see Curtu et al. (2008).

The diagrams in Fig. 2 are made for the value of the inhibition strength β = 1 for adaptation and β = 0.75 for depression. For other values of β, the shapes of the different regions remain the same, though the exact location of the borders changes.

3.2 DIRI model in the presence of noise: regions of the parameter space where experimental constraints are satisfied

In the presence of noise, the system's behavior is no longer deterministic. The switches between the populations' activities are irregular, and instead of a time series of equal dominance durations, characteristic of the deterministic models, simulations produce a distribution of dominance durations for each set of the models' parameters. These distributions can be characterized by their mean, T, and their coefficient of variation, CV.

In Fig. 3 we show the effect of the input and adaptation process strength on the mean dominance duration and the CV in the SFA (panels (a) and (b)) and SD (panels (c) and (d)) models. The values of the mean dominance duration and the CV are represented in grayscale: the brighter the gray, the higher the value. The black lines are the contours of T = 1 s and T = 10 s in panels (a) and (c), and of CV = 0.4 and CV = 0.8 in panels (b) and (d). In other words, the regions between these contours are the ones where the values of the mean dominance duration and the CV calculated in the models satisfy the experimental constraints: the mean dominance duration is between 1 s and 10 s, and the CV is between 0.4 and 0.8 (we extend the range of the CV from 0.4-0.6 to 0.4-0.8 for illustration purposes; we consider the consequences of the more realistic 0.4-0.6 CV range in the Discussion). In Fig. 3, we show these regions separately for the mean dominance duration and the CV; however, it is really the overlap of these regions that we are interested in, since we are looking for the regions of the parameter space where both constraints are satisfied. We will discuss in detail below the location of such regions, as well as their evolution as the strength of the noise changes.

Note that in the SD model, for the chosen value of the noise strength σ = 0.02, there is a region of the (input strength, depression strength) parameter space where no switches are produced in the model (the black-colored region in Fig. 3(c) and (d)). This is, however, an artifact of the finite simulation time: the time between switches is too long to be observed.

In Fig. 4 we show only the regions of T between 1 s and 10 s (yellow), the regions of CV between 0.4 and 0.8 (blue), and their overlap (green) for the same values of the inhibition strength β as in Fig. 3 (1 for adaptation and 0.75 for depression) and for several values of the noise strength. One can see that for small noise strength, the size of the region of relevance (ROR), that is, the region where both T and CV are in the desired range, is small compared to the size of the regions where T and CV are separately in the desired range. As the noise strength increases, the relevant CV region expands, while the relevant T region shrinks. For small values of the noise strength, the size of the ROR appears to be determined primarily by the location of the relevant CV region, as the ROR expands together with the relevant CV region. As the noise strength increases further, however, the ROR starts to occupy a large part of the relevant T region. Since further increase of the noise strength reduces the size of the relevant T region, the ROR then shrinks as well.

Fig. 4
Combined mean dominance time T and CV maps in the presence of noise of strength σ for (a) spike-frequency adaptation (SFA) model and (b) synaptic depression (SD) model. Regions where T is in the relevant range ...

Figure 4 also shows the location of the relevant regions of T and CV, and, more importantly, of the overlap region relative to the phase diagram of the noise-free system (the one shown in Fig. 2). We note that this region is very small compared with the overall region of the parameter space where switching behavior is observed; that is, only very limited ranges of the adaptation/depression strength and input strength parameters produce the experimentally observed statistics. The black line in all panels of Fig. 4 is the border between the attractor and oscillatory regimes of the system's behavior in the (input strength, adaptation/depression strength) parameter space. One can see that the region of the parameter space where both T and CV are in the desired ranges is located at the border between the attractor and oscillatory regimes, i.e., switching behavior that satisfies the experimentally observed statistics results from a fine interplay between the noise and a slow process.

In the following section we explain in detail some features of the statistical properties of the alternations. Here, we note that the strength of the noise has opposite effects on the mean dominance duration and the CV. Suppose the model is in the pure oscillator regime, without noise, so that the coefficient of variation of the dominance durations is equal to zero. A significant amount of noise has to be introduced to account for the large CV of the experimental distributions. Noise in the models, however, leads to a decrease in the mean dominance duration. In order to keep the mean dominance duration in the range prescribed by the experimental observations, the models have to be shifted from the oscillator regime toward the attractor regime, where the mean dominance duration remains large enough even in the presence of a large amount of noise.

3.3 Dynamical mechanisms and statistics of alternations

Some features of the statistical properties of the alternations can be attributed to dynamical mechanisms. For example, CV increases substantially as parameter values are tuned toward the border between the OSC and SIM regimes. These regions of increasing CV correspond to moving through the blue bands from the white areas toward the gray regions in Fig. 4 (say, for stronger adaptation/depression strengths). Here, the mean T decreases as CV increases. While this effect may seem counterintuitive, recall that the deterministic system has relaxation dynamics, i.e., relatively fast changes of the firing rate and slow negative feedback. During phases of dominance the system trajectory moves slowly along a multibranched manifold of bistable states (one population at high activity while the other is nearly quiescent). Switching of dominance, in the deterministic case, corresponds to reaching a "knee", or fold, on this surface and then jumping to the opposite branch (see, for example, Guckenheimer and Holmes 2002). In the presence of noise the switch points are randomly distributed on the surface, before the folds. The OSC-SIM border corresponds to a dynamical phase transition, a supercritical Hopf bifurcation (Curtu et al. 2008). As parameter values are chosen closer to the OSC-SIM border, the range of the slow variable during a phase of dominance decreases (corresponding to an overall decrease in oscillation amplitude) and the times between switches become relatively shorter. For a given choice of parameters in this range, the noise, and therefore the randomness in the switching points along the slow manifold, can lead to a substantial relative variation in the time between switches, and hence the CVs will be large. This argument accounts for the T-CV relationship in the blue regions, away from the ATT regime.

The CVs are relatively low within the OSC regime for parameter values away from the OSC-SIM boundary, including parameter values in the yellow region. Here, the system behaves as a noisy relaxation oscillator. This characterization also applies as parameters are tuned from the OSC to the ATT region (i.e., crossing the black curve in Fig. 4), at least for mid-amplitude stimulus strengths. In these yellow areas the CV is relatively low and does not change drastically with T or stimulus strength. Again, this is understood from the effect of noise on the relaxation dynamics. For parameter values just inside the ATT region, the noise-free system has a stable steady state (attractor) very close to the knee. One might expect that as parameter values move from the OSC regime toward and into the ATT regime, dominance durations would become substantially longer and more variable. However, the noise likely causes the switch points to occur well before the knee, so that trajectories do not encounter the slowing effect of the nearby steady-state attractor; the CV does not grow substantially in this region. The CV does increase for parameter values deeper in the ATT region (again, thinking of mid-amplitude stimuli), say for decreasing adaptation strength (upper left panels of Fig. 4), where the steady states are farther from the knees and closer to the noise-driven jump points. This transition, moving vertically in the panels of Fig. 4 from the OSC region across the OSC-ATT border, is reminiscent of reducing the stimulus strength to a neuron model, from the repetitive firing to the resting regime.

It has been shown that in the presence of noise the increase in mean interspike interval (reduced firing rate or probability) is accompanied by an increase in CV (Gutkin and Ermentrout 1998). While the dynamics are quite different in the competition models (relaxation dynamics for symmetric populations), we see a similar trend as adaptation strength is decreased. On the other hand, our models show the opposite behavior along horizontal parameter paths: moving from outside to inside the ATT region, we find that CV decreases while T increases.

3.4 Competition models: global excitation-local inhibition (GELI) model

We also investigate the behavior of the GELI model, described by Eq. (4). The presence of strong recurrent excitation, and of nonlinear inhibition that is driven by both pooled and locally recurrent activity as well as by the stimulus, makes this model architecturally different from the DIRI model. Despite these differences, however, the dynamics of the two models are qualitatively similar. As in the SFA model, depending on the values of the parameters, three regimes of behavior, attractor (ATT), oscillatory (OSC), and simultaneous activity of the populations (SIM), are possible in the GELI model in the absence of noise (Fig. 5(a)). For this model also, the regions of relevance are small compared with the overall ranges of the parameters where alternations are possible. While the shape and location of the ROR on the (adaptation strength, input strength) map in the GELI model (Fig. 5(b), (c), and (d)) are different from those for the SFA model (Fig. 4(a)), the observation remains the same: the region of the parameter space where both the mean dominance duration and the CV are in the experimentally observed ranges is located near the border between the attractor and oscillatory regimes, both of which correspond to alternation behavior.

Fig. 5
(a) Phase diagram of the system's behavior for GELI model in the (input strength, adaptation strength) parameter space in the absence of noise. White-colored (OSC) regions are the regions of parameter space where the system's behavior is oscillatory; ...

3.5 Correlations between successive dominance durations

Experimentally, no correlations have been found between successive dominance durations (Fox and Herrmann 1967; Levelt 1968; Lehky 1995; Rubin and Hupe 2004). In the models, however, the presence of adaptation leads to correlations. During a longer dominance of one population, there is more time for the suppressed population to recover, and when the other percept becomes active, it will, on average, remain active longer as well, resulting in positive correlations. In the limiting case of vanishingly weak noise (σ ≪ 1), the switches in dominance are almost completely deterministic, and the successive dominance durations are strongly correlated, yielding correlation coefficients close to the maximum value of 1. In the other extreme, the case of a noisy attractor model with no adaptation, successive dominance durations are independent, yielding a correlation coefficient equal to zero.

We calculate the correlation coefficient, Eq. (7), between successive dominance durations for several sets of model parameters, for both the SFA and SD models (Eqs. (1) and (2)). Figure 6 shows how the correlation coefficient depends on the number of successive dominance-duration pairs for the SFA (panel (a)) and SD (panel (b)) models. We choose parameter values so that the mean dominance duration T and the CV lie within the experimentally observed ranges (see the green-colored regions in Fig. 4), and so that both attractor and oscillatory types of behavior are represented. In the SFA model, the noise strength and input are σ = 0.12 and I1 = I2 = 0.6. For γ = 0.3 (circles in Fig. 6(a)), the model's behavior is adaptation-driven, or oscillatory, with T = 2.1 s and CV = 0.43; the correlation coefficient reaches 0.27, and the correlation becomes significant at p = 0.05 when the number of successive dominance-duration pairs exceeds 100. For γ = 0.1 (diamonds in Fig. 6(a)), the behavior is noise-driven, or attractor-like, with T = 3.5 s and CV = 0.64; the correlation coefficient reaches 0.12, and the correlation becomes significant when the number of pairs exceeds 200. The value γ = 0.05 (squares in Fig. 6(a)) lies at the border of the experimentally relevant region of the (input strength, adaptation strength) plane in Fig. 4(a), with T = 4.2 s and CV = 0.77. The correlation coefficient in this case is about 0.03, and a large number of successive dominance-duration pairs, about 3000 (not shown in Fig. 6(a)), is needed for the correlation to become significant.
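The correlation computation can be sketched as follows. Eq. (7) is not reproduced in this excerpt, so we assume the standard Pearson correlation between each dominance duration and the one that follows; the AR(1) surrogate series (with a made-up coefficient) merely stands in for the models' output as a duration sequence with adaptation-like memory.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def lag1_correlation(durations):
    """Pearson correlation between each dominance duration and the next."""
    return stats.pearsonr(durations[:-1], durations[1:])  # (r, p-value)

# Surrogate series: an AR(1) process mimics the memory that adaptation
# introduces (rho = 0.25 is an illustrative value, not fit to the models).
rho, n = 0.25, 500
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps[t]
durations = 2.0 + 0.5 * x          # shift/scale into a duration-like range (s)

r, p = lag1_correlation(durations)
print(f"lag-1 correlation r = {r:.2f}, p = {p:.3g}")
```

In the same spirit as Fig. 6, significance at a fixed correlation level depends on the number of pairs: the smaller the true correlation, the more pairs are needed before `p` drops below 0.05.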

Fig. 6
Correlation coefficients between consecutive dominance durations as functions of the number of pairs of successive dominance durations (the dominance time of one percept and the immediately following dominance time of the alternative percept) for several ...

In the SD model, the noise strength and input are σ = 0.02 and I1 = I2 = 0.08. For δ = 0.7 (circles in Fig. 6(b)), the behavior is depression-driven, or oscillatory, with T = 1.8 s and CV = 0.5; the correlation coefficient reaches 0.28 and becomes significant when the number of successive dominance-duration pairs exceeds 100. For δ = 0.3 (diamonds in Fig. 6(b)), the behavior is noise-driven, or attractor-like, with T = 3.2 s and CV = 0.53; the correlation coefficient is about 0.1, and it becomes significant when the number of pairs exceeds 200.

As expected, the correlation between successive dominance durations decreases as the strength of the slow negative process (spike-frequency adaptation or synaptic depression) decreases and the system moves from a slow-process-dominated to a noise-driven regime of behavior. In the limiting case without a slow process (γ = 0 for the SFA model, δ = 0 for the SD model), when only noise governs the alternations, the switches are completely random events, and successive dominance durations are completely uncorrelated. However, as follows from the diagrams in Fig. 4, a slow process must always be present in the models, even when the switches are noise-driven, in order to account for experimentally feasible values of the mean dominance duration and CV. The models therefore predict a finite correlation between successive dominance durations.

3.6 Shape of the dominance durations distributions

Dominance duration time series in bistable perception, and in binocular rivalry in particular, are generally compared with gamma distributions (Fox and Herrmann 1967; Levelt 1968; Logothetis et al. 1996). However, the log-normal distribution has been reported to provide a fit as good as, or better than, the gamma distribution for the binocular rivalry data of Lehky (1995), for the binocular rivalry and Necker cube data of Zhou et al. (2004), and for the bistable plaid motion data of Rubin and Hupe (2004). Finally, in some cases of Zhou et al. (2004), the Weibull distribution provides a better fit for the binocular rivalry data.

Following Zhou et al. (2004), we compare our simulated data with all three of the above-mentioned distributions: gamma, log-normal, and Weibull. (Note, however, that the number of our simulated data points is at least an order of magnitude larger than the number of data points obtained in their experiments.) Figure 7 shows the probability density functions (pdfs) for the data and the best fits for both the SFA model (panels (a), (b)) and the SD model (panels (c), (d)). The choice of parameters is the same as for the correlation coefficient calculation above, except that the case γ = 0.05 for the SFA model is not shown. In all four panels, the thick solid line is the gamma pdf, the thin solid line is the log-normal pdf, and the dashed line is the Weibull pdf. The log-normal distribution does not appear to be a good fit in any of the presented cases; the gamma distribution fits better in the cases with a smaller slow-process strength (γ = 0.1 for the SFA model and δ = 0.3 for the SD model), and the Weibull distribution fits better in the cases with a larger slow-process strength (γ = 0.3 for adaptation and δ = 0.7 for depression).
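One way to compare the three candidate distributions is by maximum-likelihood fitting. The sketch below uses surrogate gamma-distributed durations in place of the models' output; fixing the location parameter at zero and the particular surrogate parameters are assumptions for illustration, not the paper's procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.gamma(3.0, 0.8, 5000)   # surrogate dominance durations (s)

candidates = {"gamma": stats.gamma,
              "log-normal": stats.lognorm,
              "Weibull": stats.weibull_min}
loglik = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)             # MLE with location fixed at 0
    loglik[name] = np.sum(dist.logpdf(data, *params))
    print(f"{name:10s}: log-likelihood = {loglik[name]:.1f}")

best = max(loglik, key=loglik.get)
print("best-fitting family:", best)
```

Since the three families all have two free parameters once the location is fixed, their maximized log-likelihoods can be compared directly; for real duration data the ranking, as in Fig. 7, depends on which switching mechanism dominates.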

Fig. 7
Probability density functions (pdfs) for the simulated dominance durations and their fits for the spike-frequency adaptation (SFA) model (panels a, b) and synaptic depression (SD) models (panels c, d). In all four panels, thick solid line is the best ...

In order to determine more precisely from which distribution the simulated data could have arisen, for each set of simulated data we generated three sets of numbers, drawn from the gamma, log-normal, and Weibull distributions with the same means and standard deviations as the simulated data. We then performed a Kolmogorov-Smirnov goodness-of-fit test between the simulated distributions and these “theoretical” distributions. The null hypothesis of the test is that the theoretical and simulated distributions are the same. We find that this hypothesis can be rejected (at the 5% level) in all four presented cases when the simulated distributions are compared with the log-normal theoretical distribution. For the SFA model with γ = 0.1, the gamma distribution cannot be rejected (p = 0.31), while the Weibull distribution can be rejected (p = 0.003). The conclusion is the opposite for the SFA model with γ = 0.3: the gamma distribution can be rejected (p < 0.0001) and the Weibull distribution cannot (p = 0.19). For the SD model with δ = 0.3, both the gamma and Weibull fits can be rejected (p = 0.046 for gamma and p < 0.0001 for Weibull), but for the SD model with δ = 0.7, the gamma fit can be rejected (p < 0.0001) while the Weibull fit cannot (p = 0.052).
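The moment-matching step of this procedure can be sketched as follows. The parameter conversions from (mean, std) are standard formulas for the gamma and log-normal families; for the Weibull family the shape is found numerically from the CV relation. The surrogate "simulated" data here are gamma-distributed by construction and merely stand in for the models' output.

```python
import numpy as np
from scipy import stats, optimize, special

rng = np.random.default_rng(2)

def matched_samples(data, n=None):
    """Draw gamma/log-normal/Weibull samples whose mean and std match `data`."""
    m, s = data.mean(), data.std()
    n = n or data.size
    # gamma: shape k = (m/s)^2, scale theta = s^2/m
    gam = rng.gamma((m / s) ** 2, s ** 2 / m, n)
    # log-normal: sigma^2 = ln(1 + s^2/m^2), mu = ln m - sigma^2/2
    sig2 = np.log(1 + (s / m) ** 2)
    logn = rng.lognormal(np.log(m) - sig2 / 2, np.sqrt(sig2), n)
    # Weibull: solve CV^2 = Gamma(1+2/k)/Gamma(1+1/k)^2 - 1 for shape k
    cv2 = (s / m) ** 2
    g = lambda k: special.gamma(1 + 2 / k) / special.gamma(1 + 1 / k) ** 2 - 1 - cv2
    k = optimize.brentq(g, 0.1, 50.0)
    wei = (m / special.gamma(1 + 1 / k)) * rng.weibull(k, n)
    return {"gamma": gam, "log-normal": logn, "Weibull": wei}

data = rng.gamma(4.0, 0.5, 2000)   # surrogate dominance durations (s)
for name, ref in matched_samples(data).items():
    D, p = stats.ks_2samp(data, ref)
    print(f"{name:10s}: KS D = {D:.3f}, p = {p:.3g}")
```

A small p-value rejects the hypothesis that the simulated durations and the moment-matched reference sample come from the same distribution, as in the tests reported above.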

In the absence of an adaptation process, in the noise-driven attractor models the distribution of dominance durations is exponential (Kramers 1940; van Kampen 2001). Adding adaptation to the models eliminates the short dominance durations from the distributions (Moreno-Bote et al. 2007). In order to investigate how long dominance durations are affected by the presence of both switching mechanisms, noise and adaptation, we separately analyze the shape of the tails of the simulated data distributions.

The insets in all panels of Fig. 7 show the natural logarithm of the dominance duration histogram. All four simulated distributions are fitted with a straight line at the tail, where the tail is defined as all dominance durations greater than the mean of the distribution plus one standard deviation. As expected, in both the SFA and SD models a straight line is a good fit when the slow process is weak and the switches are noise-driven (γ = 0.1 for the SFA model, panel (a), and δ = 0.3 for the SD model, panel (c)). In contrast, when the slow process is strong (γ = 0.3 for adaptation, panel (b), and δ = 0.7 for depression, panel (d)), the logarithm of the histogram is a convex curve, indicating that the number of dominance durations decreases faster than exponentially as the dominance duration increases. We note that in all four cases presented in Fig. 7 the parameters are chosen so that the experimental constraints on the mean dominance duration and CV are satisfied. Nevertheless, the shape of the tail depends on whether the switches are produced primarily by noise or by an adaptation process.
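The tail-linearity check can be sketched as follows; the binning choices are illustrative, and an exponential sample (whose log-histogram tail is linear by construction, with slope minus one over the scale) stands in for noise-driven simulated durations.

```python
import numpy as np

rng = np.random.default_rng(3)

def tail_fit(durations, bins=40):
    """Fit a line to the log-histogram tail (durations > mean + 1 std).

    Returns the slope and the RMS residual of the fit; a small residual
    indicates an exponential tail (noise-driven switching, in the paper's
    terms), while systematic curvature indicates faster-than-exponential
    decay (adaptation-driven switching)."""
    counts, edges = np.histogram(durations, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    cut = durations.mean() + durations.std()
    keep = (centers > cut) & (counts > 0)       # log requires nonzero counts
    x, y = centers[keep], np.log(counts[keep])
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, float(np.sqrt(np.mean(resid ** 2)))

expo = rng.exponential(2.0, 20000)              # surrogate durations, scale 2 s
slope, rms = tail_fit(expo)
print(f"tail slope = {slope:.2f}, RMS residual = {rms:.2f}")
```

For this exponential surrogate the fitted slope should be near minus one over the scale parameter; applying the same function to adaptation-driven durations would yield larger residuals from the straight-line fit.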

4 Discussion

We studied the dynamics of neuronal competition models describing bistable perception phenomena. Two types of architectures were considered, with both noise and adaptation processes included in the models. We identified regions in the models' parameter space where alternations between the two populations' dominance, which represent the switches in perception, are obtained. Within those, we identified the subregions where the statistics of alternations is consistent with the experimentally observed statistics of bistable perception.

Either mechanism, noise or adaptation, if strong enough, can cause the alternations in dominance. We considered models where both processes are present, and showed that a transition between the two types of behavior can be realized by smoothly changing the parameters of a single model, without changing the architecture of the system or the equations that govern its dynamics.

4.1 Fine-tuning of the system's parameters is needed to reproduce experimentally observed statistics

The regions of parameter space where both the mean dominance duration and CV are within the experimentally observed ranges are very small compared with the overall regions where switches occur. These regions of relevance (ROR, green areas of the 2-D adaptation/depression strength vs. input strength maps in Figs. 4 and 5) lie mostly in the noise-driven, or attractor, regime. At the same time, they are always adjacent to the border between the two regimes, and, moreover, a small part of the ROR lies in the oscillator regime. We conclude that in order to produce statistical characteristics consistent with bistable perception experiments, precise tuning of the system's parameters is necessary. Specifically, the adaptation/depression strength and noise strength are such that these neuronal competition models operate near the border between the noise-driven and adaptation-driven types of alternations. We speculate that this conclusion does not depend on the details of the model, as it has emerged for all models studied here, which differ considerably in architecture and other implementation details (compare Eq. (1) and Eq. (4)). Note that while the relevant parameter region is small, it is not a one-dimensional object in the parameter space. We use the term “fine-tuning” of the system's parameters to emphasize the specific location of the ROR near the border between the model's regimes; of secondary significance for fine-tuning is that the ROR is rather small.

Note that on our plots we chose the borders of the region of the relevant CV values to be at 0.4 and 0.8, while the CV values observed in experiments lie between 0.4 and 0.6. We considered this extended CV range for illustration purposes. When the more restrictive range is considered, the region in the (adaptation/depression strength, input strength) space where the constraints for both T and CV are satisfied decreases in size, mostly at the expense of the part located in the attractor-based regime of alternations. Therefore, the conclusion remains that in order for the neuronal competition models to reproduce experimentally observed statistics, the models' parameters should be fine-tuned so that the system operates at the border between the attractor and oscillatory regimes.

We have utilized mean-field-like firing rate models, which enable us to explore and characterize the parameter sensitivities of statistical properties. The models are idealized. They represent the effects of heterogeneities in cell properties, synaptic connections, etc., in only a qualitative and averaged sense, say, through the smooth exponential-like foot of the neuronal input-output relation (Wilson and Cowan 1972). The representation of noise is also averaged; only the noise that is common to all neurons survives. It is certainly conceivable that such sources of variability in cell-based network models may induce additional robustness and enlarge the relevant parameter regions.

4.2 “Release” vs. “escape” in DIRI and GELI models and Levelt's Proposition IV

In a previous study we showed that in deterministic DIRI models described by Eqs. (1) and (2), both with spike-frequency adaptation and synaptic depression, a switch in dominance can occur by either of two mechanisms: “release” or “escape” (Wang and Rinzel 1992; Shpiro et al. 2007). In release, the accumulating negative feedback to the dominant (active) population overcomes the stimulus. The population's activity drops, and it no longer suppresses the other population, thereby “releasing” it. In escape, the suppressed (inactive) population recovers enough from negative feedback to overcome, or “escape”, from the inhibition. The inactive population becomes active, causing the other population to become inhibited. In the release mechanism, the dominance durations grow as the external input to the populations grows, while in the escape mechanism, dominance durations decrease with the increasing input, resulting in the non-monotonic overall dominance vs. stimulus strength dependence (Shpiro et al. 2007; Curtu et al. 2008). This non-monotonicity, common to neuronal competition models, is robust also in the presence of noise.

The non-monotonicity of the mean dominance duration vs. stimulus strength relation makes it problematic to apply these models to bistable perception phenomena, since experimental evidence suggests that this dependence is strictly monotonic, with the mean dominance durations decreasing with increasing input strength. In contrast, in the GELI model (Eq. (4); Moreno-Bote et al. 2007) the mean dominance duration depends monotonically on the stimulus strength (except for a small region of inputs where detection-threshold behavior is realized; see Moreno-Bote et al. 2007). We emphasize, however, that our conclusion that the fine interplay between the models' parameters is responsible for the experimentally observed statistics of dominance durations does not depend on the model's architecture. Models similar to the DIRI model presented here (Eqs. (1) and (2)) are commonly used to describe bistable perception (Laing and Chow 2002; Wilson 2003), as they provide a simple explanation of the switching process, while the GELI model is more complex. In addition, our observations may prove useful in describing other neuronal competition phenomena, such as central pattern generation.

4.3 Effects of the relaxation time and inhibition strength

The period of alternations is directly affected by the time scale of the slow negative feedback process. The longer the adaptation time scale, the longer the dominance durations, since it takes longer to reach the adaptation threshold necessary for switching in the absence of noise. For very slow adaptation, the dominance durations in the noise-free case increase linearly with the adaptation time constant. In the presence of noise, this dependence is weaker than linear: if the adaptation time constant τa is increased, the slower drift toward the adaptation threshold provides more chances for noise to cause a premature switch, so the net effect on the duration is sublinear in τa. At the same time, the adaptation time scale has little (if any) effect on the location of the border between the attractor and oscillator regimes in the (noise-free) models. Therefore, increasing the adaptation time scale shifts the region where the system's behavior is consistent with experiments so that it overlaps more with the oscillator regime of the models, while decreasing the adaptation time scale shifts this region toward the attractor, or noise-driven, regime. The time scale of adaptation for cortical neurons can be obtained in electrophysiology experiments and is found to be between 0.1 and 10 s (Descalzo et al. 2005; Abbott et al. 1997; Varela et al. 1997). In our simulations, we varied the adaptation time scale from 0.5 to 3 s, a six-fold change, and found approximately a two-fold change in the dominance durations. On the other hand, the CV of the dominance time distributions remained virtually unchanged when the adaptation time scale changed, since the mean dominance time and the standard deviation change together when the time scale of the noise is much faster than the time scale of adaptation. The blue regions in Fig. 4 are little affected by changes in the adaptation time scale, while the yellow regions drift deeper into the oscillator region when the adaptation time scale increases and deeper into the attractor regime when it decreases. Therefore, because of the shapes and locations of these two regions, their overlap (i.e., the region of relevance) remains near the border between the adaptation-driven and noise-driven regimes of alternations.

The period of alternations is also affected by the cross-inhibition strength β if the strength of the slow process is held constant (Shpiro et al. 2007; Curtu et al. 2008). We note, however, that since both the inhibition strength β and the adaptation strength γ enter the input to the gain function in Eq. (1), (−βuj − γai + Ii), changing β changes the value of the adaptation strength at which the ATT-OSC transition occurs, and the value beyond which the transition no longer occurs. Increasing β moves the boundary between the attractor and oscillatory regimes toward higher adaptation strength values, because more adaptation is needed to overcome stronger inhibition and produce oscillations. Similarly, in the synaptic depression case, the ATT-OSC border moves toward higher depression strength values when the inhibition strength is larger. The locations of the relevant-T (yellow) and relevant-CV (blue) regions are affected by a change of the inhibition strength, but so is the border between the ATT and OSC regimes. We found that the region of relevance remains at the border between the adaptation-driven and noise-driven regimes of alternations.

4.4 The shape of the dominance time distributions constrains the mechanism responsible for alternations

We considered three features of the simulated dominance time distributions, their rise, their tail, and their overall shape, in order to assess which mechanism of rhythmogenesis, noise or adaptation, is better suited to account for the experimental results. In the models we studied here, it is the adaptation process that gives rise to the rising part of the distribution. In a purely noisy attractor model without any adaptation process, the dominance time distribution would be exponential. A slow adaptation process acts in a memory-like fashion, making a switch unlikely immediately after the preceding one, thereby eliminating the large number of short durations and smoothing and shifting rightward the rising portion of the distribution.

The overall shapes of the histograms from our simulations are not well described by the log-normal probability density function, in contrast to experimental histograms (Lehky 1995; Zhou et al. 2004; Rubin and Hupe 2004). For the models in the noise-driven regime, the gamma distribution yields a better fit; for the models in the adaptation-driven regime, the Weibull distribution is a better fit. Aside from one study (Zhou et al. 2004), we are not aware of other studies that fit experimental dominance duration distributions with the Weibull distribution. Therefore, on the basis of these parametric distributions, we cannot say whether the noise-driven or the adaptation-driven competition models described here are better suited to account for the experimentally observed dominance time distributions.

Finally, we analyzed separately the tails of the distributions. In the models, despite the similarity in overall statistics (i.e. the mean dominance duration and the CV being in the experimentally observed ranges), the noise-driven and adaptation-driven models have differently shaped tails. The tails are exponential when noise is the primary source of switching, and decay faster than exponential when the switches are produced by an adaptation mechanism. In perception experiments it is hard to collect, in reasonable time, enough data to analyze the tails of the distributions. Nevertheless, such an analysis may be a useful tool in investigating the role of the noise versus the role of the adaptation in other phenomena described by competition models, such as central pattern generation.

4.5 Low correlations between successive dominance durations support noise-driven alternations

A necessary consequence of adaptation is that it produces correlations in the simulated time series of dominance durations. We show that the correlation coefficient can be as high as 0.25–0.3 for models operating in the oscillator regime (see also Lehky 1988). In contrast, for models operating in the attractor regime the correlation coefficient is about 0.1. Experimentally, long-term correlations of high magnitude between successive dominance durations are not observed. Small positive correlations have been shown to exist, and a recent meta-analysis of several data samples found a statistically significant short-term correlation of 0.1 (van Dam et al. 2007; see also Fox and Herrmann 1967; Lehky 1988, 1995). Our results suggest that noise-driven alternations are more likely to produce such small correlations.

5 Conclusions: finding the balance between noise and adaptation in competition dynamics

Our study focuses attention on the relative roles of adaptation and noise as contributing factors in producing alternations that are irregular and have the statistical properties observed experimentally. We have found, in several competition models, that alternations are possible over large regions of parameter space, but the experimental constraints are satisfied only in a restricted domain, the region of relevance. This region lies primarily in the noise-driven attractor regime. A reasonably sized region requires adequate noise, which in turn causes the region to extend to the boundary with the adaptation-driven regime. We did not find robust examples that mainly favor a noisy adaptation-driven model (i.e., with most of the relevant parameter region in the noise-free oscillator regime). This is because substantial noise must be included to achieve the appropriate CVs, and this leads to alternations that are generally too rapid. Some features of the distribution of dominance durations are also relevant to finding a balance. The distribution's delayed rise supports the need for some adaptation (or some memory-like) mechanism (Moreno-Bote et al. 2007), while an exponential tail favors the relatively weak adaptation of a noise-driven attractor regime.

We expect that our conclusions, although based on results for specific models, should apply to a class of competition models with relaxation dynamics. The specific nonlinearities and architectures, and even the symmetry due to identical units or the number of units, should not preclude generalization. In these models, random occurrence of different states may be realized by the interplay between noise and adaptation, and both noisy adaptation-driven oscillator and noise-driven attractor regimes are possible. It will be worthwhile to explore further whether the statistics of the time series of state occurrences might be useful, as in our case, in identifying regions of relevance for particular models. It is tantalizing to wonder whether, and why, the brain favors noise-driven attractor dynamics, as the regions of relevance in the models considered here suggest.


Present Address: A. Shpiro Department of Mathematics, Medgar Evers College, City University of New York, Brooklyn, New York, NY 11225, USA

Present Address: R. Moreno-Bote Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA


  • Abbott LF, Varela JA, Sen K, Nelson SB. Synaptic depression and cortical gain control. Science. 1997;275(5297):220–224. [PubMed]
  • Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex. 1997;7:237–252. [PubMed]
  • Blake R. A neural theory of binocular rivalry. Psychological Review. 1989;96:145–167. [PubMed]
  • Blake R. A primer on binocular rivalry. Brain and Mind. 2001;2:5–38.
  • Brunel N, Sergi S. Firing frequency of leaky integrate-and-fire neurons with synaptic current dynamics. Journal of Theoretical Biology. 1998;195(1):87–95. [PubMed]
  • Buice MA, Chow CC. Correlations, fluctuations, and stability of a finite-size network of coupled oscillators. Physical Review E. 2007;76:031118. [PubMed]
  • Curtu R, Shpiro A, Rubin N, Rinzel J. Mechanisms for frequency control in neuronal competition models. SIAM Journal on Applied Dynamical Systems. 2008;7(2):609–649. [PMC free article] [PubMed]
  • Descalzo VF, Nowak LG, Brumberg JC, McCormick DA, Sanchez-Vives MV. Slow adaptation in fast-spiking neurons of visual cortex. Journal of Neurophysiology. 2005;93(2):1111–8. [PubMed]
  • Fox R, Herrmann J. Stochastic properties of binocular rivalry alterations. Perception and Psychophysics. 1967;2:432–436.
  • Freeman AW. Multistage model for binocular rivalry. Journal of Neurophysiology. 2005;94:4412–4420. [PubMed]
  • Grossberg S. Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics. 1973;52:217–257.
  • Guckenheimer J, Holmes P. Nonlinear oscillations, dynamical systems, and bifurcations of vector field. Springer; New York: 2002.
  • Gutkin BS, Ermentrout GB. Dynamics of membrane excitability determine interspike interval variability: A link between spike generation mechanisms and cortical spike train statistics. Neural Computation. 1998;10:1047–1065. [PubMed]
  • Haken H. A brain model for vision in terms of synergetics. Journal of Theoretical Biology. 1994;171:75–85.
  • Haynes JD, Deichmann R, Rees G. Eye-specific effects of binocular rivalry in the human lateral geniculate nucleus. Nature. 2005;438:496–499. [PMC free article] [PubMed]
  • Hertz J, Krogh A, Palmer RG. Introduction to the theory of neural computation. Addison-Wesley; Redwood City: 1991.
  • Hupe JM, Rubin N. The dynamics of bistable alternation in ambiguous motion displays: A fresh look at plaids. Vision Research. 2003;43:531–548. [PubMed]
  • Julesz B. Foundations of cyclopean perception. University of Chicago Press; Chicago: 1971.
  • Kalarickal GJ, Marshall JA. Neural model of temporal and stochastic properties of binocular rivalry. Neurocomputing. 2000;32–33:843–853.
  • Kim YJ, Grabowecky M, Suzuki S. Stochastic resonance in binocular rivalry. Vision Research. 2006;46:392–406. [PubMed]
  • Kramers HA. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica. 1940;7:284–304.
  • Lago-Fernandez LF, Deco G. A model of binocular rivalry based on competition in IT. Neurocomputing. 2002;44:503–507.
  • Laing CR, Chow CC. A spiking neuron model for binocular rivalry. Journal of Computational Neuroscience. 2002;12:39–53. [PubMed]
  • Lehky SR. An astable multivibrator model of binocular rivalry. Perception. 1988;17:215–228. [PubMed]
  • Lehky SR. Binocular rivalry is not chaotic. Proceedings: Biological Sciences. 1995;259(1354):71–76. [PubMed]
  • Leopold DA, Logothetis NK. Activity changes in early visual cortex reflect monkeys' percepts during binocular rivalry. Nature. 1996;379:549–554. [PubMed]
  • Levelt WJM. On binocular rivalry. Mouton; The Hague: 1968.
  • Logothetis NK. A primer on binocular rivalry, including current controversies. Philosophical Transactions of the Royal Society of London, B. 1998;353:1801–1818. [PMC free article] [PubMed]
  • Logothetis NK, Leopold DA, Sheinberg DL. What is rivaling during binocular rivalry? Nature. 1996;380:621–624. [PubMed]
  • Mattia M, Del Giudice P. Population dynamics of interacting spiking neurons. Physical Review E. 2002;66:051917. [PubMed]
  • McClave J, Sincich T. Statistics. 10th ed. Pearson Prentice Hall; Englewood Cliffs: 2006. section 8.3.
  • Meng M, Tong F. Can attention selectively bias bistable perception? Differences between binocular rivalry and ambiguous figures. Journal of Vision. 2004;4:539–551. [PMC free article] [PubMed]
  • Moldakarimov S, Rollenhagen JE, Olson CR, Chow CC. Competitive dynamics in cortical responses to visual stimuli. Journal of Neurophysiology. 2005;94:3388–3396. [PubMed]
  • Moreno-Bote R, Parga N. Role of synaptic filtering on the firing response of simple model neurons. Physical Reviews Letters. 2004;92:0281021–0281024. [PubMed]
  • Moreno-Bote R, Rinzel J, Rubin N. Noise-induced alternations in an attractor network model of perceptual bistability. Journal of Neurophysiology. 2007;98:1125–1139. [PMC free article] [PubMed]
  • Moreno-Bote R, Shpiro A, Rinzel J, Rubin N. Bi-stable depth ordering of superimposed moving grating. Journal of Vision. 2008;8(7):1–13. 20. [PMC free article] [PubMed]
  • Mueller TJ, Blake R. A fresh look at the temporal dynamics of binocular rivalry. Biological Cybernetics. 1989;61:223–232. [PubMed]
  • Necker LA. Observations on some remarkable phenomenon which occurs on viewing a figure of a crystal of geometrical solid. London and Edinburgh Philosophical Magazine and Journal of Science. 1832;3:329–337.
  • Riani M, Simonotto E. Stochastic resonance in the perceptual interpretation of ambiguous figures: A neural network model. Physical Reviews Letters. 1994;72:3120–3123. [PubMed]
  • Risken H. The Fokker-Planck equation. Springer; Berlin: 1989.
  • Rubin E. Visuellwahrgenommene Figuren, Gyldendals, Copenhagen. Partial version in English in: Rubin, E. (2001). Figure and ground. In: Yantis S, editor. Visual perception: Essential readings. Psychology Press; Hove: 1921.
  • Rubin N, Hupe JM. Dynamics of perceptual bistability: Plaids and binocular rivalry compared. In: Alais D, Blake R, editors. Binocular rivalry. MIT; Cambridge: 2004.
  • Salinas E. Background synaptic activity as a switch between dynamical states in a network. Neural Computation. 2003;15(7):1439–1475. [PubMed]
  • Shpiro A, Curtu R, Rinzel J, Rubin N. Dynamical characteristics common to neuronal competition models. Journal of Neurophysiology. 2007;97:462–473. [PMC free article] [PubMed]
  • Soula H, Chow CC. Stochastic dynamics of a finite-size spiking neural network. Neural Computation. 2007;19(12):3262–3292. [PubMed]
  • Stollenwerk L, Bode M. Lateral neural model of binocular rivalry. Neural Computation. 2003;15:2863–2882. [PubMed]
  • Tong F. Competing theories of binocular rivalry. Brain and Mind. 2001;2:55–83.
  • van Dam L, Mulder R, Noest A, Brascamp J, van den Berg B, van Ee R. Sequential dependency in percept durations for binocular rivalry. (Abstract) Journal of Vision. 2007;7(9):53–53a.
  • van Kampen NG. Stochastic processes in physics and chemistry. North Holland; Amsterdam: 2001.
  • Varela JA, Sen K, Gibson J, Fost J, Abbot LF, Nelson SB. A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. Journal of Neuroscience. 1997;17:7926. [PubMed]
  • Wallach H. Uber visuell wahrgenommene Bewegungsrichtung. Psychologische Forschung. 1935;20:325–380. [English translation in: Wuerger, S., et al. (1996). ‘On the visually perceived direction of motion’ by Hans Wallach: 60 years later. Perception, 25, 1317–1367.]
  • Wang X-J, Rinzel J. Alternating and synchronous rhythms in reciprocally inhibitory model neurons. Neural Computation. 1992;4:84–97.
  • Wheatstone C. Contributions to the physiology of vision. Part I: On some remarkable, and hitherto unobserved, phenomena of binocular vision. (Series 43).The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 1838:241–267.
  • Wilson HR. Computational evidence for a rivalry hierarchy in vision. Proceeding of the National Academy of Sciences USA. 2003;100:14499–14503. [PubMed]
  • Wilson HR. Minimal physiological conditions for binocular rivalry and rivalry memory. Vision Research. 2007;47(21):2741–2750. [PubMed]
  • Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal. 1972;12:1–24. [PubMed]
  • Zhou YH, Gao JB, White KD, Merk I, Yao K. Perceptual dominance time distributions in multistable visual perception. Biological Cybernetics. 2004;90(4):256–63. [PubMed]