Science. Author manuscript; available in PMC 2008 September 9.
Published in final edited form as:
PMCID: PMC2532743

Dynamic Shifts of Limited Working Memory Resources in Human Vision


Our ability to remember what we have seen is remarkably limited. Most current views characterize this limit as a fixed number of items – only 4 objects – that can be held in visual working memory. Here we show that visual memory capacity is not fixed by number of objects, but rather is a limited resource that is shared out dynamically between all items in the visual scene. This resource can be shifted flexibly between objects, with allocation biased by selective attention and towards targets of upcoming eye movements. The proportion of resources allocated to each item determines the precision with which it is remembered, a relationship that we show is governed by a simple power law, allowing quantitative estimates of resource distribution in a scene.

The dominant model of visual memory capacity asserts that only a limited number of items can be simultaneously represented in working memory (1-10). Support for this model has come primarily from change-detection tasks, in which detection was close to 100% correct for small numbers of items, and declined only when set size increased above a certain limit, generally three or four (4, 5). An alternative way to explore the limits of visual working memory is to consider the precision with which each visual item is stored as a function of the number of objects in a scene. This approach provides a radically different perspective on visual capacity limits, revealing rapid redistribution of limited memory resources across eye movements and covert shifts of attention.

We tested subjects' ability to remember the location and orientation of multiple visual items following a brief disappearance of the stimulus array, with or without an intervening eye movement (Fig. 1). To minimize the role of configurational memory (11) only one of the items was re-displayed after the delay: subjects reported the direction in which this probe item had been displaced or rotated. Responses varied probabilistically with the magnitude of the change to the probe item (Fig. 2A). Subjects' response functions were successfully fitted with cumulative gaussian distributions, consistent with a gaussian distribution of error in the stored representation of the original stimulus (12).

Fig. 1
Experimental procedure
Fig. 2
Performance on the memory task

In the absence of eye movements, subjects were able to recall both location and orientation of a single item with considerable accuracy (Fig. 2A, N = 1, black symbols), with discrimination of 0.5° displacements and 5° rotations significantly better than chance at 73% and 80% correct, respectively (t > 5.8, p < 0.001). However, increasing the number of items to be remembered led to a decrease in performance, indicative of the limited capacity of visual working memory (Fig. 2A black symbols, set size increasing left to right). Fig. 2B shows how precision, measured by the reciprocal of the standard deviation of the response function, reduced as the number of items in the display increased (black symbols). Note that these data do not reveal a sharp drop in performance at a limit of four items.

Next we asked whether the precision of visual working memory is affected by an eye movement. Detection of changes to visual stimuli that occur during an eye movement presents a challenge to the brain, because the pre- and post-saccadic retinal locations of every visual item are very different. For location discrimination in single-item displays (Fig. 2A, top left, red symbols), an intervening eye movement introduced a small bias (mean 1.4°) into subjects' judgements: a tendency to report a shift in the direction of the saccade even for small displacements in the opposite direction. However, as can be seen from the similar slopes of the two response functions, the precision with which this discrimination was made did not differ significantly from the fixation condition (t = 1.2, p = 0.24). This indicates that subjects take into account the size and direction of their eye movement in order to estimate the expected post-saccadic retinal location of the single target (13, 14). This may be achieved by remapping a retinotopic location representation based on an internal copy of the saccadic motor signal (15, 16).

Precision in the saccade condition decreased with increasing number of items similarly to the fixation condition, for both location and orientation judgements (Fig. 2B, red symbols), with no significant advantage of fixation at any of the tested set sizes (t < 1.3; p > 0.23). This indicates that the process of spatial updating does not introduce any additional capacity limit on visual working memory, and therefore that the full contents of memory undergo remapping.

The item-limit model of visual working memory predicts that discrimination performance will begin to decline only once the limiting number of items is exceeded. In contrast, our results – for both fixation and saccade conditions – show that the precision with which visual items are remembered decreases with increasing numbers even at the smallest set sizes (t > 2.7, p < 0.006), with the largest drop in precision occurring between one- and two-item displays, and no evidence for any discontinuity in the region of four items (Fig. 2B). Our data therefore support an alternative model in which limited visual memory resources must be shared out between items, with the consequence that increasing numbers of items are stored with decreasing precision (see Fig. S1 for an illustration). To quantify the relationship between the resources available to encode an item (R) and the precision with which it is remembered (P), we re-plotted precision as a function of the proportion of resources available per item (Fig. 3A). The results suggest that this relationship can be captured by a simple power law (P ∝ R^k, power constant k = 0.74 ± 0.06 [95% conf. limits]; blue line).
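The power law's prediction for per-item precision can be sketched numerically. A minimal illustration in Python, assuming resources are shared equally across items (so R/Rmax = 1/N) and taking the fitted constant k = 0.74 from the text:

```python
import numpy as np

def relative_precision(n_items, k=0.74):
    """Relative precision P/Pmax predicted by the power law P ∝ R^k,
    assuming resources are shared equally, so R/Rmax = 1/N."""
    return (1.0 / np.asarray(n_items, dtype=float)) ** k

for n in (1, 2, 4, 6):
    print(n, round(float(relative_precision(n)), 3))
```

With k < 1, precision falls steeply from the very first added item and shows no discontinuity at four items, matching the pattern in Fig. 2B.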

Fig. 3
Modelling visual memory performance

The similarity of our results for memory of location and orientation suggests they share a common mechanism. This may be the representation of stimulus attributes by population coding, in which information is encoded in the combined activity of a large number of neurons (17). Currently identified population decoding schemes do not permit a neuron to simultaneously encode information about more than one stimulus. Therefore, when multiple items must be represented, the total pool of neurons must be shared out between the different items. Because each neuron's firing rate is corrupted by noise (18), reducing the number of neurons representing an item will increase variability in the population estimate, and consequently reduce the precision with which the item is represented. Theoretical studies have shown that a maximum likelihood decoding scheme would result in a power law relationship between precision and number of neurons (19), similar to that obtained in the current study (20).

Can the power-law model also explain why previous studies (4, 5) found a decrease in performance only for greater numbers of items? Fig. 3B shows how the model predicts precision will change with increasing set size, and Fig. 3C displays the corresponding response functions. The power-law model predicts that accuracy (proportion correct) will vary with the magnitude of the change to be discriminated. In this study, we tested discrimination of small changes to stimuli, where discrimination is difficult even with only one item in the display. In this range, our model predicts that accuracy will decrease with increasing number of items even at the smallest set sizes (e.g. dotted vertical line in Fig. 3C). In contrast, previous tests of visual working memory have generally used ‘supra-threshold’ changes, where performance is close to 100% correct for a single item. In these cases, the power-law model predicts that accuracy will initially change almost imperceptibly with increasing numbers of items, and then more steeply at larger set sizes (e.g. dashed vertical line in Fig. 3C). The full predictions of the model are shown in Fig. 3D (black lines). The power-law model is consistent both with our data (examples shown in red: 0.5° displacements, 5° rotations; see also Fig. S2), and with many of the results previously taken to support a 3–4 item limit (examples shown in green (5)).
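The argument above can be made concrete by converting predicted precision into proportion correct via a cumulative gaussian. A sketch under the same assumptions (equal sharing, k = 0.74); the change magnitudes 0.6 and 4.0 (in multiples of the single-item standard deviation) are illustrative values, not the paper's stimulus values:

```python
import numpy as np
from scipy.stats import norm

def p_correct(delta, n_items, k=0.74):
    """Probability of correctly judging the direction of a change of
    magnitude `delta` (in multiples of the single-item SD), when the
    item is one of `n_items` sharing memory resources equally.
    Precision scales as (1/N)^k, so the SD of the stored value
    grows as N**k."""
    sd = float(n_items) ** k  # SD relative to the single-item SD
    return norm.cdf(delta / sd)

# Small (near-threshold) change: accuracy falls from the first added item.
small = [round(p_correct(0.6, n), 3) for n in (1, 2, 4, 6)]
# Large (supra-threshold) change: near ceiling until set size grows.
large = [round(p_correct(4.0, n), 3) for n in (1, 2, 4, 6)]
print(small)
print(large)
```

The small-change curve declines immediately, while the large-change curve stays near 100% over small set sizes – the pattern that an item-limit account would misread as a fixed capacity.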

Although we have seen that an upcoming eye movement does not reduce the total memory resources available, it does have an effect on how those resources are allocated. Fig. 4A shows precision of discrimination judgements in multi-item (N > 1) displays, now separating the data into trials on which the probed item was the saccade target and those on which it was one of the other items in the display. For both location and orientation judgements, the saccade target was remembered with greater precision than non-targets, indicating a preferential shift of visual memory resources to the target of the eye movement (black symbols; t > 4.2, p < 0.001). This finding was not a consequence of the way in which the saccade target was specified (endogenously cued by color (21)) because a similar effect was also observed, in a different condition, when we cued the saccade target exogenously by flashing it (Fig. 4A, grey symbols; t > 4.3, p < 0.001). Thus limited working memory resources get rapidly redistributed so that the target of a forthcoming eye movement receives privileged allocation, thereby improving the precision for this item. Because total resources are limited, the corollary of this enhanced memory for the saccade target should be a decrease in precision for non-targets, which will be most evident when the total number of items is small. A comparison of saccade and fixation performance in two-item displays confirmed this effect, with the increased precision for the saccade target (t = 4.26; p < 0.001) matched by a significant decrease in precision for the non-target item (t = 3.19; p < 0.01).

Fig. 4
Effects of eye movements and attention

Is this flexibility in the allocation of memory resources specific to eye movements, or does it also occur with shifts of covert attention (22-25)? In a further condition, subjects kept their eyes fixed, but now one of the items in the sample display flashed briefly before the screen was blanked, a manipulation known to involuntarily attract visual attention (26). When the flashed item was subsequently probed, discrimination precision was significantly higher than for non-flashed items (Fig. 4B; t > 3.4, p < 0.001; (27)). Thus visual attention acts as a ‘gatekeeper’, determining which visual information is given priority for storage in working memory (28-31), perhaps by biasing competitive interactions in cortical regions mediating visual memory (32, 33).

In normal scene viewing we make many eye movements in order to extract the maximum possible information from a scene. We performed an additional experiment to examine how visual memory resources are dynamically allocated across a sequence of saccades. Subjects made a series of eye movements to fixate each item in a five-item display; the display was blanked before the saccade to the fifth item reached its target. The precision on a subsequent discrimination judgement, probing memory for any one of the five targets, varied with order of fixation (Fig. 4C).

Discrimination of both location and orientation of the saccade target (the fifth item) was substantially more precise than for any of the other items in the display (t > 3.9; corrected p < 0.001). However, the target of the previous saccade, which was also the most recently fixated item, was not remembered with significantly greater precision than any of the previously-fixated items (t < 2.5; corrected p > 0.12). Nor were there any differences in precision between the previous items (t < 2.0; corrected p > 0.55). We found no significant relationship between precision and fixation time (t < 1.0; p > 0.31), indicating that these results do not reflect temporal (e.g. recency) effects. Rather, it appears that the high-resolution memory for a saccade target survives only one eye movement. Based on the power law obtained in the first experiment, we can estimate the proportion of working memory resources allocated to each item in the sequence. This analysis reveals that, at the time of a saccade, the majority of visual memory resources are allocated to the target of the next fixation (location task: 56%, orientation task: 61%; (34)) rather than to the currently fixated item (location task: 15%, orientation task: 16%).

The current results are inconsistent with the view that visual working memory capacity is limited to a fixed number of objects. Several previous studies have attempted to go beyond the simple fixed item-limit account of visual memory (9, 10, 35). One study (9) has proposed a variable item-limit, based on a fixed ’information load’, whereby the more visually complex the items to be remembered, the fewer can be stored. While related to our limited-resource model, this hypothesis cannot account for the relationship between precision and number of items observed in the current study, as the visual complexity of the sample stimuli was held constant. It has been argued (10) that the changes in detection performance observed in this previous study are the result of increasing similarity between sample and probe items, rather than increasing complexity of the sample. Because the precision of visual memory is limited, reducing the size of the change to the stimulus results in poorer performance, in agreement with our model.

Since submission of this article, another study has been published that also examines the precision of visual memory (36). The authors put forward a two-component model, combining a variable precision memory for fewer items with an absolute upper limit on number of items (above which decreases in performance are accounted for solely by random guesses). Based on this interpretation, their data indicate that the average subject can only hold about two items in working memory (see their Fig. 2 and Supplementary Fig. 3). However, this study did not control eye movements, which we have seen can strongly bias precision in favour of fixation targets. A reanalysis of our own fixation task data in accordance with their mixture-model approach reveals that precision falls with increasing number of items throughout the tested range, including between 4 and 6 items (χ2 = 5.6; p = 0.018; Fig. S3 and supporting online text). We conclude, therefore, that the capacity of visual memory can be explained solely in terms of a limited resource that must be shared out between all items in the scene, with no evidence for an upper limit on the number of items that can be stored, contrary to the hypothesis of a two-component model (36).

The allocation of this limited resource is highly flexible: making an eye movement to an item, or directing covert attention to it, causes a greater proportion of memory resources to be allocated to it, so it is retained with far greater precision than other objects in the scene. All information stored in visual working memory is dynamically updated during an eye-movement to take into account the change in gaze position. However, because the resource is limited, the high-resolution representation of a fixated item is significantly degraded as memory resources are reallocated to the target of the next eye movement.

Materials & Methods


Subjects and apparatus

A total of 32 experimentally-naïve subjects participated in the study after giving informed consent (age 19–42; 15 male, 17 female; all with normal or corrected-to-normal vision). Stimuli were presented on a 21″ CRT monitor viewed at a distance of 70 cm, with a refresh rate of 140 Hz (mean delay <4 ms, phosphor persistence <1 ms). Eye position was monitored online at 1000 Hz using a frame-mounted infra-red eye tracker.

General procedure

Subjects reported the direction of a change to a visual item's location or orientation that occurred during a brief blanking of the display. Stimuli consisted of colored squares (location task, 0.8° × 0.8°) or randomly-oriented colored arrows (orientation task, 1.25° radius) presented against a grey background. Stimulus colors were randomly selected on each trial, without repetition, from a set of highly discriminable colors (white, black, red, green, blue, yellow, cyan). Each trial began with a sample display of between one and six items, followed by a blanking period, brief presentation of a probe display, then the subject's response. The probe display consisted of the reappearance of a randomly-chosen item from the sample display, displaced horizontally from its original position (0.5°, 2°, or 5°, leftward or rightward) or rotated (5°, 20°, or 45°, clockwise or counter-clockwise).

Experimental conditions differed in the eye movements made by subjects between presentation of the sample display and the probe display. In the first experiment (Figs 1 & 2), subjects either maintained fixation on a cross 10° from the centre of the stimulus array (fixation condition), or after 1000 ms made a saccade from the fixation cross towards one of the display items (saccade condition). In two further conditions (Fig. 4A & B), a flash of one of the sample display items after 1000 ms acted either as a signal to saccade to the flashed item (saccade-to-cue condition) or as an attention-grabbing but task-irrelevant distractor (fixation-with-cue condition). In the saccade conditions, blanking of the sample display was triggered by the onset of the eye movement; in the fixation conditions, the sample display period was adjusted to match display time in the saccade conditions. In the second experiment (Fig. 4C), the sample display consisted of five items and subjects had to fixate each item in turn, without revisiting any item: the blanking period was triggered by the onset of a saccade towards the final item. Full details of each experiment are given below.

Experiment 1

16 subjects participated in this experiment, 8 on the location task and 8 on the orientation task. Following a short practice session, each subject completed a block of 160 trials in each of four conditions (saccade, fixation, saccade-to-cue, fixation-with-cue) in a counterbalanced order. All trials began with the presentation of a fixation cross deviated 7° horizontally from the display centre (alternating left or right on each trial). Once the subject was fixating the cross, the sample display was presented. The number of items in the sample display was varied from trial to trial (1, 2, 4 or 6 items). Items were randomly arranged within an invisible square (9° × 9°) centred 10° horizontally from the fixation cross, with a minimum separation of 3° between items.

On trials in the saccade condition, at an auditory go-signal (1000 ms after onset of the sample display) subjects made an eye movement to one of the display items, specified by color (each subject was randomly allocated a color prior to the experiment that would indicate the saccade target on each trial). The onset of the saccade, determined by the detection of a horizontal gaze position deviated 2° or more from the fixation cross, triggered the blanking period (500 ms), followed by the probe display (250 ms). Subjects then indicated with a button-press in which direction they judged the probe item to have moved/rotated. To ensure only genuine saccades were included, any trial in which eye velocity was less than 50° s−1 at the time the display was blanked was rejected during offline analysis.
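The online trigger and offline velocity check described above can be sketched as follows. This is an illustrative reconstruction only: the gaze-trace representation and the velocity estimator (a two-sample central difference at the 1000 Hz sampling rate) are assumptions, not the authors' implementation:

```python
import numpy as np

def saccade_onset_index(gaze_x, fix_x, threshold_deg=2.0):
    """Index of the first sample at which horizontal gaze deviates
    by >= threshold_deg from the fixation cross (the online trigger
    used to blank the display), or None if never reached."""
    dev = np.abs(np.asarray(gaze_x, dtype=float) - fix_x)
    idx = np.flatnonzero(dev >= threshold_deg)
    return int(idx[0]) if idx.size else None

def is_genuine_saccade(gaze_x, onset, fs=1000.0, min_velocity=50.0):
    """Offline inclusion check: eye velocity (deg/s) at blank onset
    must be at least min_velocity. Velocity is estimated by a
    central difference over neighbouring samples."""
    x = np.asarray(gaze_x, dtype=float)
    if onset is None or onset < 1 or onset + 1 >= len(x):
        return False
    velocity = (x[onset + 1] - x[onset - 1]) * fs / 2.0
    return abs(velocity) >= min_velocity
```

In practice a trial's blank would be triggered at `saccade_onset_index`, and the trial kept for analysis only if `is_genuine_saccade` returns True.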

The fixation condition was identical to the saccade condition except that subjects maintained fixation on the cross throughout. The sample display period on fixation trials was matched to that on saccade trials based on an estimate of saccadic reaction time obtained from previous saccade trials or the initial practice session. Saccade-to-cue and fixation-with-cue conditions were identical to the corresponding conditions described above, except that a randomly chosen item flashed off (100 ms) and back on, beginning 1000 ms after onset of the sample display. On saccade-to-cue trials, this acted as a signal to saccade to the flashed item. On fixation-with-cue trials, subjects were instructed that the flashed item was irrelevant to the task. The flashed item was not predictive of the item to be probed, nor was the saccade target on saccade trials.

Experiment 2

A separate group of 16 subjects took part in this experiment, 8 on the location task and 8 on the orientation task. The sample display comprised five items: four items, separated by a minimum of 6°, randomly arranged on the circumference of an invisible circle (8° radius) centred on a fifth item. Subjects made eye movements from an initial fixation location to each item in turn, finishing with the central item. Each fixation on an item (criteria: distance < 1.5°, duration > 150 ms) was rewarded with an audible click, indicating to the subject that the fixation had been registered. Re-fixation of an item, or fixation of the central item out of order, caused the trial to be aborted, and immediately repeated with new randomly-generated stimulus parameters. Detection of a saccade towards the final item triggered the blanking period (250 ms), followed by the probe display (250 ms) and collection of the subject's response. As before, trials were rejected if eye velocity was found to be less than 50° s−1 at the time the display was blanked.


Statistical analysis

A probit regression model was used to estimate parameters of the cumulative gaussian distribution that best fit the relationship between response probability and stimulus displacement/rotation in each experiment. Any discrimination bias was indicated by the mean of the fitted gaussian (μ), and precision was determined by the reciprocal of the standard deviation (1/σ). Experimental parameters were identified as influencing precision if they had a significant (p < 0.05) effect on the slope term of the fitted regression model (Ref. S1). Bonferroni correction was used to correct for multiple comparisons.
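A minimal stand-in for this fitting step, using direct maximum-likelihood estimation of a cumulative gaussian in place of the probit regression software the authors used (which the source does not specify):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_cumulative_gaussian(displacements, responses):
    """Maximum-likelihood fit of a cumulative gaussian to binary
    direction judgements (1 = positive direction). Returns (mu, sigma):
    bias is mu, precision is 1/sigma."""
    x = np.asarray(displacements, dtype=float)
    y = np.asarray(responses, dtype=float)

    def neg_log_lik(params):
        mu, log_sigma = params          # log-sigma keeps sigma > 0
        p = norm.cdf((x - mu) / np.exp(log_sigma))
        p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(log_sigma)
```

Applied to one subject's trials in one condition, this recovers the bias and precision estimates that the analyses above operate on.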

Modelling of precision data

We used the mean precision estimates obtained in Experiment 1 to quantify the relationship between the precision with which an item is remembered (P) and the memory resources available to encode it (R). For a display of N items, the proportion of resources available for each item, R/Rmax, equals 1/N. This value was plotted against the relative precision P/Pmax, obtained by normalizing the mean precision estimate for each condition and set-size by precision in the N = 1 case (i.e. the maximum precision obtained when all resources are allocated to a single item). We approximated the relationship between these two variables with a power law:

P/Pmax = (R/Rmax)^k     (1)
The maximum likelihood value of k obtained from the data was then used to extrapolate estimates of relative precision to all set-sizes in the range 1–12 (Fig. 3B, solid line). The response functions corresponding to these precision estimates (assuming no discrimination bias) were determined by cumulative gaussian distributions with mean of zero and standard deviation 1/P (plotted in Fig. 3C; abscissa shows change to the stimulus in multiples of σ = 1/Pmax).

As in the empirical data shown in Fig. 2, the predicted response curves become flatter with increasing number of items, reflecting changes in the distributions of error in the stored representation of the stimulus (Fig. 3C, inset). This has consequences for the ability to discriminate different magnitudes of stimulus change, as highlighted by the vertical lines. The dotted vertical line indicates a small change to the stimulus similar to that used in the current study – the probability of correctly discriminating the change falls rapidly with increasing number of items. In contrast, the dashed vertical line indicates a much larger stimulus change – in this case performance would be close to 100% for 1–4 items but fall with further increases in the number of items.

Comparison with change detection studies

In order to compare the predictions of our model with the results of earlier change detection studies, the probability of correctly identifying different magnitudes of stimulus change was determined from the response functions calculated in the previous step (Fig. 3D, black lines). This permitted a direct comparison with performance data from previous tests of visual memory capacity (green lines). To demonstrate the validity of the model, example results from the current study were also re-plotted as proportion of responses correct (red lines). The full results for all sizes of stimulus change are shown in Fig. S2. Both our data and the results from previous studies are consistent with the power-law model, with any apparent discrepancies explained simply by differences in the magnitude of stimulus change tested.

Estimating resource allocation

In Experiment 2, all N items in a display were fixated sequentially, and separate precision estimates {P1, P2, … PN} were obtained for each item in the sequence. Given that the total resource Rmax is equal to the sum of the resources allocated to each item, Σj=1..N Rj, it follows from (1) that the proportion of resources allocated to item i is given by

Ri/Rmax = Pi^(1/k) / Σj=1..N Pj^(1/k)
By substituting into this equation the value of k obtained in the analysis of Experiment 1, we calculated the proportion of memory resources allocated to each item as a function of its order in the fixation sequence.
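This calculation can be sketched compactly. The function below inverts the power law (R ∝ P^(1/k)) and normalizes so the shares sum to 1, with k = 0.74 taken from the text; the example precision values are hypothetical:

```python
import numpy as np

def resource_shares(precisions, k=0.74):
    """Proportion of memory resources allocated to each item, inverted
    from the power law P ∝ R^k: R_i ∝ P_i**(1/k), normalized so the
    shares sum to one."""
    p = np.asarray(precisions, dtype=float)
    r = p ** (1.0 / k)
    return r / r.sum()

# Hypothetical precision estimates for a 5-item fixation sequence:
# the final saccade target remembered most precisely.
print(resource_shares([0.3, 0.3, 0.3, 0.4, 1.5]))
```

Because 1/k > 1, the mapping is convex: an item remembered a few times more precisely than its neighbours accounts for a disproportionately large share of the total resource, which is how the 56–61% estimate for the saccade target arises.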

Comparison with Zhang & Luck (2008)

In this study (36), subjects were presented with a brief sample display consisting of N colored items and then instructed to report the color of the item at a probed location by selecting from a color wheel. The distribution of errors was fitted with a mixture model comprising a (circular) gaussian distribution and a uniform distribution (Fig. S3 A, top). The response distribution predicted by this model is an ‘elevated’ gaussian function that tends to a non-zero value for large errors (blue curve, Fig. S3 A, top). Consistent with our results, at small set sizes the width of the gaussian increased with increasing N. However, no significant change in gaussian width was observed when the set size increased from 3 to 6 items. The decrease in performance was instead attributed to the uniform distribution, which the authors interpreted as representing the number of ‘guess’ responses, and hence took to indicate that an upper limit on storage had been exceeded.

To test whether such a mixture model could account for our results, we performed an equivalent analysis on subjects' responses in our fixation condition. As ours is a discrimination rather than a report task, the equivalent model (illustrated in Fig. S3 A, bottom) is a mixture of a cumulative gaussian function (corresponding to a gaussian-distributed limited-precision memory of the item) and a uniform response function with probability 0.5 (corresponding to random guesses). The response distribution predicted by this mixture model is a ‘compressed’ sigmoidal function that, unlike a cumulative gaussian, does not asymptote to 0 or 1 for large stimulus changes (blue curve, Fig. S3 A, bottom).

We first normalized the data by estimated precision in the N = 1 case, and shifted the data to remove any bias. We then used a non-linear optimization algorithm to obtain maximum likelihood estimates (Ref. S2) for the two parameters of the model: the standard deviation of the cumulative gaussian (σ) and the mixture parameter (α, equivalent to Pm in (36)). The resulting fitted response curves are shown in Fig. S3 B (blue; curves fitted with a cumulative gaussian alone are shown in green for comparison).
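A sketch of this mixture fit, assuming (as in the text) that the data have already been normalized and bias-shifted. The (log σ, logit α) parameterization is our choice to keep the optimizer unconstrained, not necessarily the authors':

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_mixture(changes, responses):
    """MLE fit of the discrimination-task analogue of the mixture
    model: with probability alpha the response follows a cumulative
    gaussian of SD sigma; otherwise it is a coin flip (guess).
    Returns (sigma, alpha)."""
    x = np.asarray(changes, dtype=float)
    y = np.asarray(responses, dtype=float)

    def neg_log_lik(params):
        log_sigma, logit_alpha = params
        alpha = 1.0 / (1.0 + np.exp(-logit_alpha))
        p = alpha * norm.cdf(x / np.exp(log_sigma)) + (1 - alpha) * 0.5
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize(neg_log_lik, x0=[0.0, 2.0], method="Nelder-Mead")
    log_sigma, logit_alpha = res.x
    return np.exp(log_sigma), 1.0 / (1.0 + np.exp(-logit_alpha))
```

The guess component makes the predicted response curve asymptote to α·1 + (1−α)·0.5 rather than 1, which is the ‘compressed’ sigmoid described above.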

Contrary to the Zhang & Luck model, the width of the gaussian component (σ) increased significantly with every increase in N (likelihood ratio test: χ2 > 5.5; p < 0.02). Hence, even when guesses are ‘filtered out’ by the uniform distribution, precision (1/σ) still decreases significantly with each increase in the number of items, including between 4 and 6 items (Fig. S3 C). The mixture parameter (α) showed some variation with number of items for smaller N, but, again unlike (36), we observe no change in α, and hence no increase in guessing, between 4 and 6 items (χ2 < 0.01; p > 0.9). The results of this reanalysis are therefore fully consistent with those described in the main body of this paper, and, contrary to (36), do not indicate any upper limit on the number of items that can be stored in visual memory.
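The likelihood-ratio comparisons reported above are standard nested-model tests; a minimal sketch, with illustrative log-likelihood values (not the paper's) chosen so the statistic matches the χ² = 5.6 quoted earlier:

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_reduced, loglik_full, df=1):
    """Compare nested models: twice the log-likelihood gain of the
    fuller model is asymptotically chi-squared with df equal to the
    number of extra free parameters."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2.sf(stat, df)

# Illustrative values: a fit allowing sigma to differ between set
# sizes 4 and 6 improves log-likelihood by 2.8 over a shared sigma.
stat, p = likelihood_ratio_test(-105.0, -102.2)
print(stat, p)
```

With one extra parameter, a statistic of 5.6 corresponds to p ≈ 0.018, matching the significance level reported for the 4-versus-6-item comparison.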

Why do our findings differ from those of Zhang & Luck? One key difference is that eye movements were not controlled in their study. We have shown that making a saccade to an item has a substantial effect on the allocation of visual memory resources, enhancing the precision of memory for the saccade target and reducing precision for non-target items. However, the mixture model used by Zhang & Luck to analyse their data assumes that all stored items are represented with the same precision. While this assumption may be appropriate when eye movements are suppressed (as in our fixation condition) it is not valid when subjects are free to move their eyes, as we have demonstrated.

A second important difference between the two studies is in the choice of visual features investigated. We chose to test memory for object positions and orientations on the grounds that the internal representation of these features by neuronal populations is tolerably well understood (17). Theory predicts a gaussian distribution of error in the stored representation of a stimulus, but this gaussian distribution will only be observed if the tested parameter space corresponds to the internal parameter space in which the item is stored. The internal representation of object position and orientation can be considered at least monotonically related to absolute differences in distance or angle (e.g. we can reasonably assume that an absolute error of 20° in remembering a probed item's orientation corresponds to a larger internal misrepresentation of the stimulus than if the error were only 10°). In contrast, Zhang & Luck chose to test memory for object color and, in a second experiment, shape. In these cases, the parameter space of the internal representation is unknown and so the tested parameter space was chosen more or less arbitrarily (e.g. when the probed stimulus was in fact blue, a click on the yellow section of the color wheel was considered a greater error than a click on the red section). If this specification of the parameter space does not correspond to the internal representation of the tested feature, we would not expect the full gaussian distribution to be observed, with the result that larger internal errors might appear randomly distributed.

Supplementary Material


38. We thank M. Bays, A. Faisal, M. Machizawa, K. Nzerem, and P. Sumner for helpful discussion. This research was supported by the Wellcome Trust and NIHR CBC at UCLH/UCL.


Publisher's Disclaimer: This manuscript has been accepted for publication in Science. This version has not undergone final editing. Please refer to the complete version of record. This manuscript may not be reproduced or used in any manner that does not fall within the fair use provisions of the Copyright Act without the prior, written permission of AAAS.

References and Notes

1. Sperling G. Psychol Monogr. 1960;74
2. Pashler H. Percept Psychophys. 1988;44:369–78.
3. Irwin DE. J Exp Psychol Learn Mem Cogn. 1992;18:307–317.
4. Luck SJ, Vogel EK. Nature. 1997;390:279.
5. Vogel EK, Woodman GF, Luck SJ. J Exp Psychol Hum Percept Perform. 2001;27:92–114.
6. Cowan N. Behav Brain Sci. 2001;24:87–114.
7. Vogel EK, Machizawa MG. Nature. 2004;428:748–751.
8. Todd JJ, Marois R. Nature. 2004;428:751–754.
9. Alvarez GA, Cavanagh P. Psychol Sci. 2004;15:106–111.
10. Awh E, Barton B, Vogel EK. Psychol Sci. 2007;18:622–8.
11. Jiang Y, Olson IR, Chun MM. J Exp Psychol Learn Mem Cogn. 2000;26:683–702.
12. Wickens TD. Elementary Signal Detection Theory. Oxford University Press; 2002.
13. Deubel H, Schneider WX, Bridgeman B. Vision Res. 1996;36:985–996.
14. Davidson ML, Fox MJ, Dick AO. Percept Psychophys. 1973;14:110–116.
15. Bays PM, Husain M. Neuroreport. 2007;18:1207–13.
16. Duhamel JR, Colby CL, Goldberg ME. Science. 1992;255:90–2.
17. Pouget A, Dayan P, Zemel R. Nat Rev Neurosci. 2000;1:125–32.
18. Bialek W, Rieke F. Trends Neurosci. 1992;15:428–34.
19. Seung H, Sompolinsky H. Proc Natl Acad Sci USA. 1993;90:10749–10753.
20. Simple models of ML decoding predict a power constant of 0.5, significantly smaller than that obtained empirically in the current study (0.74). This discrepancy may be a result of the simplifying assumption that neurons in the population fire independently of one another; see e.g. (37).
21. Materials and methods are available on Science Online.
22. Eriksen CW, Hoffman JE. Percept Psychophys. 1973;14:155–160.
23. Posner MI. Q J Exp Psychol. 1980;32:3–25.
24. Deubel H, Schneider WX. Vision Res. 1996;36:1827–1837.
25. Hoffman JE, Subramaniam B. Percept Psychophys. 1995;57:787–795.
26. Yantis S, Jonides J. J Exp Psychol Hum Percept Perform. 1984;10:601–21.
27. As in the saccade condition, this reallocation of resources resulted in both an increase in mean precision for the flashed item and a decrease in mean precision for non-flashed items, although in this case these differences did not attain statistical significance.
28. Desimone R, Duncan J. Annu Rev Neurosci. 1995;18:193–222.
29. Cowan N. Attention and Memory: An Integrated Framework. Oxford University Press; 1997.
30. Awh E, Vogel EK, Oh S. Neuroscience. 2006;139:201–8.
31. Treisman A. Philos Trans R Soc Lond B Biol Sci. 1998;353:1295–1306.
32. Desimone R. Philos Trans R Soc Lond B Biol Sci. 1998;353:1245–1255.
33. Duncan J, Humphreys G, Ward R. Curr Opin Neurobiol. 1997;7:255–261.
34. We do not claim that the fraction of resources allocated to the saccade target is fixed. Rather, this allocation may be flexible depending on factors such as the number of non-target items and specific task demands.
35. Palmer J. J Exp Psychol Hum Percept Perform. 1990;16:332–350.
36. Zhang W, Luck SJ. Nature. 2008;453:233–5.
37. Averbeck B, Latham P, Pouget A. Nat Rev Neurosci. 2006;7:358–66.