HHS Author Manuscripts | PMC3972919


Neural Comput. Author manuscript; available in PMC 2014 April 2.

Published in final edited form as:

Published online 2012 June 26. doi: 10.1162/NECO_a_00334

PMCID: PMC3972919

NIHMSID: NIHMS562452

Gordon Pipa,^{1,}^{3,}^{4,}^{*} Zhe Chen,^{1,}^{2,}^{*} Sergio Neuenschwander,^{4} Bruss Lima,^{4} and Emery N. Brown^{1,}^{2}

Gordon Pipa: gpipa@uos.de; Zhe Chen: zhechen@mit.edu

Correspondence address: Dr. Zhe Chen, 43 Vassar Street, Dept. Brain and Cognitive Sciences, Rm 46-6057, Massachusetts Institute of Technology, Cambridge, MA 02139

The publisher's final edited version of this article is available at Neural Comput

The moving bar experiment is a classic paradigm for characterizing the receptive field (RF) properties of neurons in primary visual cortex (V1). Current approaches for analyzing neural spiking activity recorded from these experiments do not take into account the point process nature of these data and the circular geometry of the stimulus presentation. We present a novel analysis approach to mapping V1 receptive fields that combines point process generalized linear models (PPGLM) with tomographic reconstruction computed by filtered-back projection. We use the method to map the RF sizes and orientations of 251 V1 neurons recorded from two macaque monkeys during a moving bar experiment. Our cross-validated goodness-of-fit analyses show that the PPGLM provides a more accurate characterization of spike train data than analyses based on rate functions computed by the methods of spike triggered averages or first-order Wiener-Volterra kernel. Our analysis leads to a new definition of RF size as the spatial area over which the spiking activity is significantly greater than baseline activity. Our approach yields larger RF sizes and sharper orientation tuning estimates. The tomographic reconstruction paradigm further suggests an efficient approach to choosing the number of directions and the number of trials per direction in designing moving bar experiments. Our results demonstrate that standard tomographic principles for image reconstruction can be adapted to characterize V1 RFs and that two fundamental properties, size and orientation, may be substantially different from what is currently reported.

The receptive field (RF) of a neuron defines how its spiking activity changes in response to a stimulus. Developing accurate quantitative characterization of the RFs of neurons in primary visual cortex (V1) is critical for understanding how the early visual system represents and transmits information (e.g., De Valois, Albrecht, & Thorell, 1982; Jones & Palmer, 1987; de Ruyter & Bialek, 1988; Leventhal et al., 1995; Stanley, 2002; Ringach, 2004; Nishimoto, Arai, & Ohzawa, 2005; Chen et al., 2007). The moving bar experiment is a classic paradigm pioneered by Hubel and Wiesel for measuring the RFs of V1 neurons (Hubel & Wiesel, 1959). It is well known that V1 simple cells respond precisely to the location and contrast polarity of features in the visual scene. In the typical moving bar experiment, electrodes are implanted in the animal's V1 area and a bar is passed in front of the animal's eye (i.e., the visual field) at constant speed and a fixed orientation. The bar is passed systematically at multiple orientations for several trials and the neurons' spiking activity is recorded across all orientations and all trials. Simple V1 neurons can be well described by a set of linear filters (i.e., RFs), which can be analyzed using the principle of reverse correlation (Ringach & Shapley, 2004) or Bayesian inference (Park & Pillow, 2011). The standard approaches to estimating the RF rely on constructing a spatial histogram of the spiking activity, or using methods of spike-triggered average (STA) or Wiener-Volterra kernel (WVK) with linear or second-order kernels (Gabbiani, 1996; Rieke et al., 1997; Paninski, 2003). The STA provides an unbiased estimate of the linear filter for uncorrelated (or spherically symmetric) stimuli, and the WVK provides a minimum-variance estimate in the least-squares sense.
Today, the moving bar experiment remains a practical paradigm for mapping visual RFs in awake monkeys, although alternative visual stimuli have been employed, such as white noise, gratings, and m-sequences (Reid, Victor, & Shapley, 1997).

In examining the literature, we find that two important estimation issues have not been considered in standard analyses of V1 moving bar experiments. First, none of these methods uses the fact that neural spiking activity can be accurately represented by point processes (Brown, Kass, & Mitra, 2004; Kass, Ventura, & Brown, 2005; Truccolo et al., 2005; Paninski, 2004). Second, none of these estimation procedures has considered the highly structured geometry of the moving bar experiments. Passing the moving bar at multiple orientations to construct the visual neuron's RF is analogous to tomographic image reconstruction, in which energy is passed through an object from different directions. The transmitted energy is captured by arrays of detectors and a reconstruction algorithm is used to build a tomogram or image of the object's internal structure. For computer tomography, positron emission tomography, and single photon emission computer tomography, the incident energies are respectively X-rays, gamma rays, and electron-positron annihilation (Kak & Slaney, 1988; Deans, 1993; Natterer, 2001). This analogy suggests that tomographic principles for image reconstruction could be used to estimate visual RFs in moving bar experiments. The idea of using the tomographic reconstruction principle to map visual RFs is not new (Sun & Bonds, 1994). However, in that work the spiking activity of V1 neurons was not carefully modeled: the non-Poissonian firing effect was ignored. Another important issue is how to characterize the RF size of V1 simple cells. The V1 RF size was traditionally determined based upon the minimum-response-field technique (Movshon, Thomson, & Tolhurst, 1978; Pettet & Gilbert, 1992; Crist, Li, & Gilbert, 2001).
The minimum-response-field technique assumes the RF profile has a Gaussian or difference-of-Gaussians shape, and it characterizes the RF extent using a threshold based on the *e*^{−1} criterion (DeAngelis et al., 1995) or using the standard deviation as the RF size (Nienborg et al., 2004). However, we argue that this technique lacks statistical justification: first, it does not take the uncertainty of the neuronal responses into account; second, the Gaussian assumption is a poor approximation; and third, the selection of the threshold is rather ad hoc.

In this paper, we combine point process generalized linear model (PPGLM) methods (Truccolo et al., 2005; McCullagh & Nelder, 1989) with a filtered back-projection algorithm (Kak & Slaney, 1988) to derive a novel analysis approach for mapping the RFs of V1 neurons recorded in a moving bar experiment. Based on our new analysis approach, we propose a statistical criterion (based on the notion of confidence intervals) to define the RF size. We use the proposed approach to characterize the RF size and orientation tuning of 251 V1 neurons. Our results show that the orientation preferences of V1 neurons are sharper and the sizes of their RFs are larger than the estimates from conventional methods. Our analysis further suggests an approach to optimal design of moving bar experiments. This paper emphasizes the computational nature and principle of our proposed approach; an exhaustive comparison with every estimation method in the literature is therefore beyond its scope. In addition, our proposed approach is aimed specifically at moving bar experiments; no claim is made about the superiority of moving bar stimuli over other state-of-the-art experimental stimulus paradigms for mapping V1 RFs.

Our approach consists of three important steps: in the first step, we use a PPGLM to characterize the neuronal responses at each moving bar direction; in the second step, we use the principle of tomographic reconstruction to map the visual RF; in the third step, we characterize the RF size and orientation tuning for every neuron. The first two steps are illustrated in Figure 1.

(**a**) Two-dimensional RF (warm color represents high firing rate). (**b**) The bar is moved with a constant velocity with an angle *θ*_{j} across the rectangular visual field. (**c**) Raster plot of a single V1 neuron recorded from one awake monkey with respect **...**

Passing the moving bar at multiple orientations to construct the visual RF is analogous to tomographic image reconstruction in which energy is passed through an object from multiple directions. The neuronal responses to a visual stimulus at different locations of the RF are assumed to follow a linear superposition principle, such that the neural response along a moving bar angle *θ* is treated as a line integration of the two-dimensional (2D) RF orthogonal to the moving direction.
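The line-integral view above can be sketched numerically. The snippet below is a sketch in Python/NumPy with SciPy (not the authors' MATLAB code; the function name `radon_projection` is illustrative): it approximates one Radon projection of a 2D field by rotating the image grid and summing along one axis.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_projection(f, theta_deg):
    """Discrete approximation of one Radon projection of a 2D field f:
    rotate the grid by theta_deg, then integrate (sum) along one axis,
    which is a line integration orthogonal to the moving direction."""
    rotated = rotate(f, theta_deg, reshape=False, order=1)
    return rotated.sum(axis=0)

# illustrative use: an isotropic Gaussian has the same projection at every angle
x = np.linspace(-1, 1, 65)
X, Y = np.meshgrid(x, x)
f = np.exp(-(X**2 + Y**2) / 0.1)
p0 = radon_projection(f, 0.0)
p45 = radon_projection(f, 45.0)
```

For an isotropic field the projections coincide for all angles, and the projection at 0 degrees preserves the total mass of the image, which is a quick way to sanity-check the discretization.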

Mathematically, the Radon transform $\mathcal{R}(f)[\theta]$ specifies the 'sliced' projection of a 2D function *f*(*x*, *y*) along the specific projection angle *θ* (Kak & Slaney, 1988; Deans, 1993; Natterer, 2001). The V1 RF estimate conditional on the angle *θ*, characterized as a function of both spatial position (*x*, *y*) and orientation *α*, is denoted as *h*(*x*, *y*, *α*|*θ*). Assuming a factorial property (Rodieck, 1965), the V1 RF is expressed as a product of spatial tuning and orientation tuning: *h*(*x*, *y*, *α*|*θ*) = *f*(*x*, *y*|*θ*)*g*(*α*|*θ*),^{1} where *g*(*α*|*θ*) ≥ 0 and 0 ≤ *α* ≤ *π*. It then follows that

$$\mathcal{R}(h)[\theta] = \mathcal{R}(f \cdot g)[\theta] = g(\alpha|\theta) \cdot \mathcal{R}(f)[\theta].$$

(2.1)

That is, the one-dimensional (1D) projection of the neural response can be fully characterized by a product of an orientation tuning function and a spatial tuning function jointly defined by the Radon transform. For notational simplicity, the conditioning on the angle *θ* is left implicit in the functions *f* and *h* from now on.

To reconstruct the spatial tuning *f*(*x*, *y*), the inverse Radon transform is applied (Kak & Slaney, 1988)

$$f(x,y)={\mathcal{R}}^{-1}\{\mathcal{R}(f)[\theta ]\}={\mathcal{F}}_{2}^{-1}\{{\mathcal{F}}_{1}\{\mathcal{R}(f)[\theta ]\}\},$$

(2.2)

where $\mathcal{F}_1$ and $\mathcal{F}_2$ denote the 1D and 2D Fourier transforms, respectively. The second equality follows from the relationship between the Radon transform and the Fourier transform. See Appendix A for technical details on the continuous Radon transform. In theory, the 'Projection-Slice Theorem' states that a perfect reconstruction is possible given an infinite number of 1D projections taken at an infinite number of angles (Kak & Slaney, 1988). In practice, the inverse Radon transform is realized by filtered back-projection based on a finite set of projection angles {*θ _{j}*}, which is implemented numerically on a finite discrete grid.

The spatial tuning function *f*(*x*, *y*) is assumed to have a band-limited spectrum with a bandwidth of *b* Hz. Reliable reconstruction of *f*(*x*, *y*) based on its line projections
${\{\mathcal{R}(f)[{\theta}_{j}]\}}_{j=1}^{J}$ requires a minimum sampling frequency *q* and a minimum number of projections *J* such that the following two sampling criteria are fulfilled (Natterer, 2001)

$$\begin{cases} J \ge 2b & \text{(angular sampling)} \\ q \ge 2b/\pi & \text{(linear or projection sampling)} \end{cases}$$

(2.3)

For *J* = 16 projections evenly spaced over 2*π*, the bandwidth is upper bounded by *b* ≤ 8 Hz. An arbitrarily small bin size can be chosen to assure a sufficiently high temporal sampling frequency *q*.
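The arithmetic behind these bounds can be stated in two one-line helpers (a Python sketch; the function names are illustrative, not from the paper):

```python
import math

def max_bandwidth(J):
    """Largest band-limit b (Hz) recoverable from J projection angles,
    from the angular sampling criterion J >= 2b in equation 2.3."""
    return J / 2.0

def min_sampling_rate(b):
    """Minimum linear (projection) sampling frequency, from q >= 2b/pi."""
    return 2.0 * b / math.pi

# for the 16-direction experiment: b <= 8 Hz, and q must exceed 16/pi ~ 5.09
b_max = max_bandwidth(16)
q_min = min_sampling_rate(b_max)
```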

In RF mapping, the Radon transform at each moving bar projection direction is unknown and requires estimation based on spike train observations recorded from multiple trials. To do so, single-unit spike trains are treated as point process observations. Within a time interval (0, *T*], the spike trains are evenly binned with a bin size of Δ = *K*^{−1}*T* (note that *q* = 1/Δ). The number *K* is chosen sufficiently large such that there is at most one spike in any bin, and the point process is modeled by a conditional intensity function (CIF) *λ*(*t _{k}*|*H*_{*t _{k}*}), where *H*_{*t _{k}*} denotes the spiking history up to time *t _{k}*.

For the *j*-th moving bar direction *θ _{j}*, we model the log-CIF of the spiking activity at time *t _{k}* as

$$\log \lambda_j(t_k \mid H_{t_k}) = \sum_{m=1}^{M} \xi_{m,j}^{(B)} \mathcal{B}_m(t_k) + \sum_{l=1}^{I} \xi_{l,j}^{(n)} n_{k-l,j},$$

(2.4)

where the first term on the right-hand side of equation 2.4 represents the *stimulus-evoked* component that is related to the RF characteristic, and the second term represents the *spike history-dependent* component that is related to the neuron's intrinsic biophysical properties. The stimulus-evoked component is modeled using *M* cubic B-spline functions ${\{\mathcal{B}_m\}}_{m=1}^{M}$, and the spike history-dependent component is modeled using spike count statistics within the preceding *I* history windows. Equation 2.4 essentially defines a GLM with a log link function.

The parameters of the PPGLM are estimated using an iteratively reweighted least squares (IRWLS) algorithm (glmfit function, MATLAB) based on maximum likelihood estimation (Pawitan, 2001; Brown et al., 2003; Paninski, 2004). The order parameters *M* and *I* are chosen according to Akaike's information criterion (AIC) and assessed by a Kolmogorov-Smirnov (KS) test in light of the time-rescaling theorem (Brown et al., 2002; Haslinger, Pipa, & Brown, 2010). Upon completing the maximum likelihood fitting, taking the exponent of both sides of equation 2.4 and evaluating it at the maximum likelihood estimate yields the CIF estimate from the PPGLM (Figure 1d). In addition, confidence intervals (CIs) for the parameters $\{{\xi}_{m,j}^{(B)}\}$ and $\{{\xi}_{l,j}^{(n)}\}$, as well as the CI for $\lambda_j(t_k \mid H_{t_k})$, are obtained from the fit.
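As a concrete illustration of the IRWLS fitting step, here is a minimal pure-NumPy sketch of iteratively reweighted least squares (Fisher scoring) for a Poisson GLM with a log link. The paper uses MATLAB's glmfit; the simulated two-column design below is illustrative only, not the actual B-spline/history design matrix.

```python
import numpy as np

def irls_poisson(X, y, n_iter=25):
    """Fit a Poisson GLM with log link, log(lambda) = X @ beta, by
    iteratively reweighted least squares (equivalently, Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # current intensity estimate
        # Fisher scoring update: beta += (X' W X)^{-1} X' (y - mu), W = diag(mu)
        H = X.T @ (mu[:, None] * X)
        beta = beta + np.linalg.solve(H, X.T @ (y - mu))
    return beta

# illustrative use: recover known coefficients from simulated spike counts
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 5000)
X = np.column_stack([np.ones_like(x), x])
beta_true = np.array([0.5, 1.0])
y = rng.poisson(np.exp(X @ beta_true))
beta_hat = irls_poisson(X, y)
```

Because the log link is canonical for the Poisson family, Fisher scoring coincides with Newton's method here, and the Poisson log-likelihood is concave in beta, so the iteration converges quickly for well-conditioned designs.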

Due to the circular geometry of the stimulus presentation, the neural response along a moving bar angle *θ* is treated as a line integration of the 2D RF spatial tuning function *f*(*x*, *y*) orthogonal to the moving direction, which is characterized by a Radon transform $\mathcal{R}(f)[\theta]$. At each moving bar direction, the estimated Radon transform is determined only by the stimulus-evoked neural response:

$$\widehat{\mathcal{R}}(f)[\theta_j] = \exp\left(\sum_{m=1}^{M} \widehat{\xi}_{m,j}^{(B)} \mathcal{B}_m\right).$$

(2.5)

To model the stimulus-evoked spiking activity of equation 2.4 in each moving bar direction, we use a set of cubic B-spline basis functions ${\{\mathcal{B}_m\}}_{m=1}^{M}$. A B-spline is a spline function that has the minimal support with respect to a given degree, smoothness, and domain partition. A cubic B-spline is a curve with smoothness order *d* = 3 (i.e., polynomials of degree 3), which is continuous and twice differentiable. The values of the control points affect the shape of equation 2.5 only locally in time, due to the piecewise definition of $\mathcal{B}_m$. Given a sequence of control points {0 ≤ *t*_{0} ≤ *t*_{1} ≤ ··· ≤ *t _{r}*}, the B-splines are defined recursively:

$$\mathcal{B}_{m,0}(t) = \begin{cases} 1 & \text{if } t_m \le t < t_{m+1} \\ 0 & \text{otherwise} \end{cases}$$

(2.6)

$$\mathcal{B}_{m,d}(t) = \begin{cases} \dfrac{t - t_m}{t_{m+d} - t_m}\,\mathcal{B}_{m,d-1}(t) + \dfrac{t_{m+d+1} - t}{t_{m+d+1} - t_{m+1}}\,\mathcal{B}_{m+1,d-1}(t) & \text{if } t_m \le t < t_{m+d+1} \\ 0 & \text{otherwise} \end{cases}$$

(2.7)
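Equations 2.6 and 2.7 are the Cox-de Boor recursion and translate directly into code. The sketch below is illustrative Python (the function name `bspline_basis` is not from the paper); a useful sanity check is that cubic B-splines on a uniform knot sequence sum to one on the interior of the domain.

```python
def bspline_basis(m, d, t, knots):
    """Value of the m-th B-spline of degree d at time t, for a
    non-decreasing knot sequence, via the recursion of eqs. 2.6-2.7."""
    if d == 0:
        return 1.0 if knots[m] <= t < knots[m + 1] else 0.0
    left = right = 0.0
    den1 = knots[m + d] - knots[m]
    if den1 > 0:  # skip terms with zero-width knot spans
        left = (t - knots[m]) / den1 * bspline_basis(m, d - 1, t, knots)
    den2 = knots[m + d + 1] - knots[m + 1]
    if den2 > 0:
        right = (knots[m + d + 1] - t) / den2 * bspline_basis(m + 1, d - 1, t, knots)
    return left + right

# illustrative use: cubic splines (d = 3) on uniform integer knots t_0..t_11
knots = list(range(12))
total = sum(bspline_basis(m, 3, 5.5, knots) for m in range(8))
```

The partition-of-unity property (total equals 1 for t in the interior span [t_3, t_8]) is what makes the B-spline expansion a well-behaved local basis for the log-CIF.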

The control points are placed non-evenly on the 2-s time interval with an adaptive placement strategy, which automatically allocates the control points based on the cumulative peri-stimulus time histogram (PSTH) of the spiking activity (see Figure 2 for an illustration).

An illustration of adaptive placement of knots for cubic splines. (**a**) A set of 16 cubic B-splines are placed on the 2-s time interval. The knots are non-uniform in that the distance between two neighboring knots is determined in a way that there are on **...**

To model the spike history-dependent component of equation 2.4 at each moving bar direction, we use the spike counts within the previous temporal windows to account for the spiking history prior to the current spike's occurrence. Here *n*_{*k*−*l*,*j*} denotes the spike count within the *l*-th preceding history window for direction *θ _{j}* (see equation 2.4).

Upon estimating the Radon transform in 16 directions, each Radon transform estimate is stored in a 2,000 × 1 vector. In light of equations 2.2 and 2.5, the spatial tuning of the V1 RF is then reconstructed by applying the filtered back-projection algorithm (Kak & Slaney, 1988) to the 16 estimated 1D Radon transforms, followed by a spline interpolation on the finite (1,414 × 1,414) discrete grid (Figure 1e).^{2} Specifically, the continuous integration is approximated by a finite sum operation

$$f(x,y) = \int_0^{\pi} Q_{\theta}(t)\, d\theta \approx \sum_{j=1}^{16} \mathcal{Q}_{\theta_j} \quad \text{with}\quad \theta_j = \frac{2(j-1)\pi}{16},$$

(2.8)

where $\mathcal{Q}_{\theta_j}$ represents a discrete 1D Fourier transform of the *filtered* version of the estimated Radon transform $\widehat{\mathcal{R}}(f)[\theta_j]$ (Kak & Slaney, 1988)

$$\mathcal{F}_1\left\{\widehat{\mathcal{R}}(f)[\theta_j](t)\right\} = \sum_{t=-707}^{707} \widehat{\mathcal{R}}(f)[\theta_j](t)\, e^{-i(2\pi w t/707)}$$

(2.9)

$$\mathcal{Q}_{\theta_j}(t) = \mathrm{Re}\left\{\sum_{m=-707}^{707} \mathcal{F}_1\left\{\widehat{\mathcal{R}}(f)[\theta_j](t)\right\}\, w_m\, e^{i(2\pi w_m t)}\right\},$$

(2.10)

where $i=\sqrt{-1}$, and the frequency response |*w*| in equation 2.10 defines a so-called 'Ramp' filter in the frequency domain (note that |*w*| is not a square-integrable function, so its inverse Fourier transform does not exist in the conventional sense). Therefore, the reconstruction is viewed as a filtered back-projection process: a filtering operation (equations 2.9 and 2.10) combined with a back-projection (equation 2.8).

To make the filtered back-projection operation more robust with respect to noise, we use a modified 'Ramp' filter (multiplied by a Hamming window). We specifically construct a symmetric, finite-length discrete filter (with total length 2,048, the next highest power of 2 above 2,000) within the Nyquist bandwidth (the frequency response outside the Nyquist range is cut off by setting it to 0). In addition, we can implement the Ramp filter in the time domain instead of the frequency domain. In so doing, we zero-pad both the filter and the Radon transform signal, which helps avoid aliasing. We thus replace the circular convolution with a non-periodic convolution.
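The windowed filtering and back-projection described above can be condensed into a short NumPy sketch. This is illustrative only: the grid size, nearest-neighbor back-projection, and angle set below are simplifications, not the paper's 2,048-point filter on a 1,414 × 1,414 grid.

```python
import numpy as np

def fbp(sinogram, thetas_deg):
    """Filtered back-projection with a Hamming-windowed 'Ramp' filter.
    sinogram: (n_angles, n_det) array of 1D projections."""
    n_angles, n_det = sinogram.shape
    # Ramp filter |w| tapered by a Hamming window, in FFT frequency order
    freqs = np.fft.fftfreq(n_det)
    filt = np.abs(freqs) * (0.54 + 0.46 * np.cos(2 * np.pi * freqs))
    # filter every projection in the frequency domain
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * filt, axis=1))
    # back-project the filtered projections onto the image grid
    c = (n_det - 1) / 2.0
    X, Y = np.meshgrid(np.arange(n_det) - c, np.arange(n_det) - c)
    recon = np.zeros((n_det, n_det))
    for proj, th in zip(filtered, np.deg2rad(thetas_deg)):
        # detector coordinate of each pixel for this angle (nearest neighbor)
        t = np.clip(np.rint(X * np.cos(th) + Y * np.sin(th) + c).astype(int),
                    0, n_det - 1)
        recon += proj[t]
    return recon * np.pi / n_angles
```

A standard check is a centered disk phantom, whose exact projection at every angle is 2*sqrt(R^2 - s^2): the reconstruction should be clearly elevated inside the disk relative to the background.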

Finally, for each discrete point (* _{m}*,

*Note:* There is a typical time delay between the moving bar stimulus and the V1 response on the order of ~60 ms. For each direction, we used a fixed latency of 60 ms to shift the neuronal response in time to reflect the delay (otherwise the peak of the estimated RFs would be mis-identified). The rationale for estimating the latency and choosing 60 ms is as follows. Given that the moving bar directions span 0 to 2*π* (the Radon transform theory only requires sampling from 0 to *π* by assuming radial symmetry), we can use bars moving in opposite directions (e.g., 0 and *π*, *π*/2 and 3*π*/2) to estimate the actual latency. Intuitively, if the latency parameter is correct, the two estimated CIFs from opposite directions will be aligned at the peak response; in contrast, if the latency parameter is either too large or too small, the peak responses of the two estimated CIFs will deviate. In fact, we tested alternative latency values of 50, 55, 65, and 70 ms, and found that with these values the reconstructed RF is much more elongated, indicating that the latency is either too long or too short.^{3}

The standard minimum-response-field technique (Movshon et al., 1978; Pettet & Gilbert, 1992) for defining the RF extent is based on a Gaussian shape assumption for the RF profile. However, the Gaussian assumption is not fully justified, and this ad hoc technique lacks statistical justification. Because of the asymmetry of the V1 RF, here we compute the RF size from the projected temporal profile at each direction using two criteria defined below. With the standard *e*^{−1} criterion (Criterion 1) (DeAngelis et al., 1995), the RF size is defined by the extent (in time, and then converted to degrees) over which the neuronal response exceeds a threshold (Figure 4); the threshold value is defined as *e*^{−1} of the dynamic range above the baseline firing rate: $\lambda_{j,\text{base}} + e^{-1}(\lambda_{j,\max} - \lambda_{j,\text{base}})$. With Criterion 2, the RF extent is defined as the region where the lower 95% CI of the estimated CIF exceeds the baseline firing rate.

Illustrations of RF size estimation in three moving bar directions for a single V1 neuron based on Criterion 1 (top row) and Criterion 2 (bottom row), where the black curve indicates the estimated CIF, and the shaded area marks the RF range. In the top **...**

Specifically, for the *j*-th direction, assume that the maximum likelihood estimate ${\widehat{\mathit{\xi}}}_{j}=\{{\widehat{\xi}}_{m,j}^{(B)},{\widehat{\xi}}_{l,j}^{(n)}\}$ follows a multivariate Gaussian distribution. Then, at each time point, $\log {\widehat{\lambda}}_{j}$ is approximately Gaussian with mean *μ _{j}* and variance ${\sigma}_{j}^{2}$, so that ${\widehat{\lambda}}_{j}$ is lognormally distributed with

$$\mathbb{E}[{\widehat{\lambda}}_{j}]={e}^{{\mu}_{j}+{\sigma}_{j}^{2}/2}$$

(2.11)

$$\mathit{Var}({\widehat{\lambda}}_{j})=\left({e}^{{\sigma}_{j}^{2}}-1\right){e}^{2{\mu}_{j}+{\sigma}_{j}^{2}}$$

(2.12)

The 95% (or 99%) CI roughly corresponds to two (or three) standard deviations. In comparison to Criterion 1, Criterion 2 is statistically meaningful and makes no assumption about the RF shape.
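Equations 2.11 and 2.12 are the standard lognormal moment formulas; they can be verified with a quick Monte Carlo check (illustrative values of μ and σ, not fitted quantities from the paper):

```python
import numpy as np

# If log(lambda_hat) ~ N(mu, sigma^2), then lambda_hat is lognormal with
#   E[lambda_hat]   = exp(mu + sigma^2 / 2)                    (eq. 2.11)
#   Var(lambda_hat) = (exp(sigma^2) - 1) * exp(2*mu + sigma^2) (eq. 2.12)
mu, sigma = 1.2, 0.3
rng = np.random.default_rng(42)
lam = np.exp(rng.normal(mu, sigma, 1_000_000))   # lognormal samples
mean_theory = np.exp(mu + sigma**2 / 2)
var_theory = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
```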

Orientation tuning *g*(*α*|*θ*) is estimated separately from the spatial tuning *f*(*x*, *y*) at each direction *θ _{j}*. To quantify the strength of orientation selectivity, we compute the circular variance

$$CV = 1 - \left| \frac{\sum_{j=1}^{16} \widehat{\lambda}_{j,max}\, e^{2i\theta_j}}{\sum_{j=1}^{16} \widehat{\lambda}_{j,max}} \right|,$$

(2.13)

and characterize the orientation tuning with the index *ε* = 1 − *CV* (where 0 ≤ *ε* ≤ 1). A large value of *CV* (or a low value of *ε*) indicates low sensitivity to orientation (Figure 5c). Since the periodicity of orientation is 180° but the moving bar direction is cyclic over 360°, the actual directions of the bar are used to calculate the directional preferences of each neuron. In all graphic visualizations presented later (Figure 5b and Figure 7), instead of visualizing *h*(*x*, *y*, *α*) = *f*(*x*, *y*)*g*(*α*), we plot a rescaled version of *f*(*x*, *y*) by assuming an isotropic orientation in all directions.
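The circular variance and ε index of equation 2.13 take only a few lines to compute (a NumPy sketch; `epsilon_index` is an illustrative name). The factor 2 in the exponent maps the 180-degree orientation period onto the full circle.

```python
import numpy as np

def epsilon_index(lam_max, thetas):
    """Orientation-tuning index eps = 1 - CV (equation 2.13), where
    lam_max[j] is the peak CIF estimate in moving bar direction
    thetas[j] (radians)."""
    lam_max = np.asarray(lam_max, dtype=float)
    z = np.sum(lam_max * np.exp(2j * np.asarray(thetas)))
    cv = 1.0 - np.abs(z / np.sum(lam_max))
    return 1.0 - cv

# illustrative use: 16 evenly spaced directions, as in the experiment
thetas = 2 * np.pi * np.arange(16) / 16
eps_flat = epsilon_index(np.ones(16), thetas)     # untuned: eps ~ 0
lam = np.zeros(16); lam[3] = 5.0
eps_sharp = epsilon_index(lam, thetas)            # fully tuned: eps = 1
```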

Comparison of four spike encoding approaches (columns 1 through 4: GLM_{hist}, GLM, STA, WVK) in estimating an excitatory RF of V1 neuron. (**a**) Raster plot of the single cell response to 16 moving bar directions, with each direction contains 9 experimental **...**

The spiking activity of 251 V1 neurons was recorded in two awake macaque monkeys during the moving bar experiment (see Figure 1c for the raster plot of one V1 neuron), in which a moving light bar (luminance: 50 cd·m^{−2}, bar width: 0.2°) was presented in 16 different directions separated evenly by 22.5°, with 9 trials per direction in random order. Each trial lasted 2 seconds. Details of the experimental setup and recordings are described in Appendix C.

In the encoding analysis, we estimate the 1D Radon transform of each of the 16 different orientations as the CIF of a PPGLM fit to the spiking activity. We compare the reconstructed RF from two PPGLMs, one using spike history (GLM_{hist}) and one not using spike history (GLM) with estimates computed from standard methods of STA and WVK (using only linear kernels). The details of the STA and WVK methods are given in Appendix B.

Figure 5 shows a typical comparison of four estimation methods. All provide rate function estimates in each of the 16 directions from the 9 trials of spiking activity (Figure 5a). In Figure 5a, the estimated CIFs are plotted against the raw spike rasters. The CIFs from the GLM_{hist} and the GLM are smoother than those from the STA and WVK because the former two used smoothing cubic splines as basis functions. The WVK and STA rate function estimates are noisier because neither considers the point process nature of the spiking activity. The WVK is a least-squares estimate whereas the STA is the simple average of the spike counts over the 9 trials. Furthermore, the RF spatial tunings of the GLM_{hist} and the GLM are sharper than those of the STA and the WVK (Figure 5b). The spatial RF’s peak amplitude of the GLM_{hist} is typically less than that of the GLM because the latter does not consider spike history. Note that all spatial tuning estimates have stripe-like “artifacts” that are clearly related to the discrete sampling of the bars at the 16 orientations. In addition, the GLM_{hist} also gives sharper orientation tuning measured as a greater *ε*-index value than the GLM, STA and WVK (Figure 5c).

Of the four models considered, the GLM_{hist} gives the most accurate description of the spiking activity across all 251 V1 neurons in terms of cross-validated KS goodness-of-fit assessments (Table 1; see Figure 6 for one example). In the validation data, the GLM_{hist} fits meet the KS goodness-of-fit criterion for 96.4% of the neurons, compared with 71.3% for the GLM, 44.6% for the STA, and 24.3% for the WVK fits. Therefore, in the subsequent analyses we compare the GLM_{hist} results with the conventional STA and WVK methods.

Goodness-of-fit assessments summary for the fits of the GLM_{hist}, GLM, STA and WVK models for 251 V1 neurons. The table entries are the number (and percentage) of the neurons whose KS plots are within the 95% CIs, suggesting a close agreement between the **...**

V1 neurons have both excitatory and inhibitory RFs. Two representative examples of spatial RF mapping are shown in Figure 7. In the encoding analysis, GLM_{hist} consistently obtains a smoother and sharper spatial RF estimate (higher contrast and spatially well-defined).

Our PPGLM framework also provides a new criterion for assessing RF size based upon comparing the baseline firing activity with the lower 95% CI of the CIF. The logic behind this new definition is that activity which is significantly greater than baseline can be considered the stimulus-evoked response. Using the new criterion, we find that the estimate of RF size is asymmetric and larger than that determined by the *e*^{−1} criterion. As illustrated in Figure 4 for one V1 neuron, the RF size estimates from three selected moving bar directions are smaller using the old criterion than using the new criterion. This is consistently observed across the majority of the orientations (80.9% among all 16×251 orientations; Figure 8c) of the V1 neurons.

Summary statistics of RF size and orientation tuning estimates for 251 V1 neurons. (**a, b**) Box plots comparison of three spike encoding methods in estimating the RF size and orientation tuning. On each box, the center line marks the median, the edges of **...**

We compare the population RF size and orientation estimates of three methods: GLM_{hist}, STA and WVK (Table 1). For GLM_{hist}, we consider two criteria for defining RF size: Criterion 1, the standard *e*^{−1} criterion; and Criterion 2, defined by the 95% CI of the estimated CIF. For STA and WVK, we only consider the standard *e*^{−1} criterion. In principle we could also use the 95% CI criterion for the WVK (i.e., roughly two standard deviations of the estimate; see the Appendix); however, the least-squares method may produce a negative CIF estimate at some time points (see the last column of Figure 5a for an illustration), such that its lower 95% CI lies entirely in the negative region. In this case, it is not meaningful to use the 95% CI criterion for the WVK method.

As seen from Figure 8a, based on Criterion 1, the median RF size estimates from GLM_{hist}, STA and WVK are in close agreement (pairwise median tests between STA and WVK, *n* = 4,016 orientations, *P* > 0.05). Using Criterion 2, GLM_{hist} has a larger median RF size. The RF size estimates from the GLM_{hist} using Criterion 2 are linearly related to, but significantly larger than, the estimates from GLM_{hist} using Criterion 1 (Figure 8c, linear regression, RF_{GLM,crit 2} = 1.066 × RF_{GLM,crit 1} + 0.599), suggesting a clear offset between these estimates. In the analysis of the orientation tuning (Figure 8b), the *ε*-estimate from GLM_{hist} is significantly different from STA and WVK (*n* = 251 cells, *P* < 0.01, Kruskal-Wallis one-way ANOVA median test). No significant difference is found between the STA and WVK estimates. In addition, the *ε*-estimate from GLM_{hist} is nearly twice the estimates from WVK and STA (Figure 8d and 8e). The average slope (± 95% CI) for the linear regression of WVK versus GLM_{hist} is 0.543 ± 0.021, and for STA versus GLM_{hist} it is 0.573 ± 0.023. These findings suggest that formal statistical modeling of the point process nature of neural spiking activity, combined with tomographic reconstruction and a statistically based definition of the RF property, leads to substantially different inferences about the RF size and orientation tuning. Note that there is no significant correlation between the estimated RF size and the orientation tuning. Comparing the median RF size with the *ε*-estimate across the 16 directions of the 251 neurons, the Pearson correlation coefficients for the GLM_{hist}, STA and WVK methods are −0.1284, −0.1548 and −0.1122, respectively; none of these correlations is statistically significant.

Furthermore, using an approximate formula between the index *ε* and the V1 RF bandwidth *b* (Ringach et al., 2002)

$$\epsilon \approx {\left(\frac{sin(2\pi b)}{2\pi b}\right)}^{2},$$

(3.1)

we estimate the V1 RF bandwidth *b* from the *ε*-estimate. A greater *ε*-estimate (sharper orientation selectivity) yields a narrower bandwidth. The median *ε*-estimate of 0.129 among the 251 V1 neurons (25% quantile: 0.067; 75% quantile: 0.250) (Figure 8b) corresponds to a median bandwidth of 0.35 Hz (0.39 Hz and 0.30 Hz, respectively).
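Equation 3.1 can be inverted numerically for *b* by bisection, since ε decreases monotonically in *b* on (0, 0.5). The sketch below (illustrative Python; function names not from the paper) recovers a bandwidth near 0.35 for the median ε of 0.129, matching the value quoted above.

```python
import numpy as np

def eps_of_b(b):
    """Equation 3.1: eps ~ (sin(2*pi*b) / (2*pi*b))^2."""
    x = 2 * np.pi * b
    return (np.sin(x) / x) ** 2

def b_of_eps(eps, lo=1e-9, hi=0.5):
    """Invert eq. 3.1 by bisection; eps_of_b falls from ~1 to 0 on (0, 0.5]."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if eps_of_b(mid) > eps:
            lo = mid        # eps too high -> b must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

b_median = b_of_eps(0.129)
```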

For experiments using moving bar stimuli to estimate visual RFs, two design questions arise naturally. What is the minimum number of moving bar directions required to reconstruct the RF with high fidelity? Given a fixed total number of trials, what is the optimal configuration of the number of directions and the number of trials per direction?

To answer these experimental design questions, we simulate spiking activity from a V1 neuron with an ON-center, OFF-surround RF (with a shape of the ‘difference of Gaussians’) having the following spatial tuning function (Figure 9a)

(**a**) The simulated 2D visual RF, and the reconstructed RF based on tomographic reconstruction using various numbers of projections (**b1–b5**). The visual RF is simulated with a baseline firing rate of 25 Hz and a peak firing rate of *f*_{max} = 125 Hz. **...**

$$f(x,y) = A_1 \exp\left(-\frac{0.5(x^2+y^2)}{\sigma^2}\right) - A_2 \exp\left(-\frac{0.5(x^2+y^2)}{1.2\sigma^2}\right) + A_3,$$

(3.2)

with *σ*^{2} = 1/8. The function *f*(*x*, *y*) is shifted and scaled (via the three parameters *A*_{1}, *A*_{2} and *A*_{3}) such that *f*(*x*, *y*) > 0 everywhere, with a peak firing rate of 125 Hz and a baseline firing rate of 25 Hz. The RF *f*(*x*, *y*) is discretized on a 1,414 × 1,414 grid and has a bandwidth of 8 Hz/pixel.
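Such a difference-of-Gaussians tuning function is easy to construct in NumPy. In the sketch below, the amplitudes A1 = 1 and A2 = 0.5 are illustrative choices that give an ON-center, OFF-surround profile with f > 0; the paper's exact A1, A2, A3 values are not stated, and the grid is smaller than 1,414 × 1,414 for speed.

```python
import numpy as np

def dog_rf(n=141, sigma2=1/8, base=25.0, peak=125.0, A1=1.0, A2=0.5):
    """ON-center, OFF-surround 'difference of Gaussians' spatial tuning
    (equation 3.2), shifted and scaled so the firing rate runs from a
    baseline of `base` Hz (far field) to a peak of `peak` Hz (center)."""
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2
    raw = A1 * np.exp(-0.5 * r2 / sigma2) - A2 * np.exp(-0.5 * r2 / (1.2 * sigma2))
    # linear shift/scale: raw = 0 maps to base, raw.max() maps to peak
    return base + (peak - base) * raw / raw.max()

f = dog_rf()
```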

From equation 3.2 we simulate two seconds of spike trains in each of 128 evenly spaced directions with 9 trials per direction. From these data, we reconstruct the RF using 8, 16, 32, 64 and 128 directions and 1 to 9 trials per direction (see Figure 9 for illustrations). Given a number of directions and a number of trials per direction, we fit PPGLMs to the spiking activity to estimate the Radon transform in each direction and then apply filtered back-projection to the estimated transforms to reconstruct the RF. We evaluate the quality of the reconstruction by computing the mean-squared reconstruction error (MSRE) between the estimated RF and the true value across all 1,414 × 1,414 pixels.
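The per-bin spike simulation can be sketched with Bernoulli thinning of the rate function (an illustrative Python sketch; a 1 ms bin is assumed here, and the function name is not from the paper):

```python
import numpy as np

def simulate_spike_train(rate_hz, T=2.0, dt=0.001, rng=None):
    """Simulate a spike train from a rate function by the Bernoulli
    approximation of a Poisson process: with a small bin dt,
    P(spike in bin k) ~= rate(t_k) * dt."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = int(round(T / dt))
    lam = np.broadcast_to(np.asarray(rate_hz, dtype=float), (n,))
    return rng.random(n) < lam * dt

# illustrative use: a constant 50 Hz rate over 2 s gives ~100 spikes
spikes = simulate_spike_train(50.0)
```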

As expected, the reconstruction error decreases as the number of directions and the number of trials per direction increase (Figure 10a). Examining the MSRE as a function of the total number of trials, we observe that, provided the condition on the minimum number of spatial samples is satisfied (at least 16 directions), the error decreases with an increasing number of trials. However, as the total number of trials reaches approximately 200, the MSRE approaches an asymptote of approximately 2 × 10^{−5} (Figure 10b). As sampling theory suggests, if the high-frequency details of the RF are required (i.e., a larger bandwidth), then the number of directions is crucial, and oversampling will yield a more accurate reconstruction. Because the neural responses in moving bar experiments are stochastic, a minimal number of trials per direction is always required. We find that to achieve an MSRE of 2 × 10^{−5}, a feasible design would use 32 directions with at least 6 trials per direction, or 192 trials in total, which suggests a different configuration from the 16 directions and 9 trials per direction used in our experiment.

Thus far, we have assumed that the spatial tuning and orientation tuning of the V1 neurons are separable (equation 2.1). In reality, V1 neurons are better modeled by a 2D Gabor filter (e.g., Daugman, 1980; Jones & Palmer, 1987). In order to test the robustness of this assumption and of the tomographic reconstruction approach, we further simulate a V1 neuron using a Gabor filter model, given by the product of a Gaussian envelope and a complex sinusoidal carrier (Movellan, 2008)

$$f(x,y)=A\exp\left(-\frac{\pi{\left((x-{x}_{0})\cos\xi+(y-{y}_{0})\sin\xi\right)}^{2}}{{a}^{2}}-\frac{\pi{\left(-(x-{x}_{0})\sin\xi+(y-{y}_{0})\cos\xi\right)}^{2}}{{b}^{2}}\right)\times\exp\left(i2\pi({u}_{0}x+{v}_{0}y)+i\varphi\right)$$

(3.3)

where (*x*_{0}, *y*_{0}) denotes the location of the peak of the Gaussian envelope; (*a*^{2}, *b*^{2}) denote the spatial variances along the (*x*, *y*) coordinates, respectively; (*A*, *ξ*) denote the amplitude and rotation angle of the Gaussian envelope, respectively; and (*u*_{0}, *v*_{0}) and *ϕ* denote the spatial frequencies and phase of the sinusoid, respectively. Taking the 2D Fourier transform of equation 3.3 further yields

$$\begin{array}{l}\mathcal{F}(u,v)=\frac{A}{ab}\exp\left(-i2\pi\left({x}_{0}(u-{u}_{0})+{y}_{0}(v-{v}_{0})\right)+i\varphi\right)\\ \times\exp\left(-\pi\frac{{\left((u-{u}_{0})\cos\xi+(v-{v}_{0})\sin\xi\right)}^{2}}{{a}^{2}}-\pi\frac{{\left(-(u-{u}_{0})\sin\xi+(v-{v}_{0})\cos\xi\right)}^{2}}{{b}^{2}}\right)\end{array}$$

(3.4)

The Gabor filter is spatially localized and can be viewed as a band-pass filter that favors spatial frequencies within a certain range. Here, we set *x*_{0} = *y*_{0} = 0, *a* = 50 pixels, *b* = 40 pixels, *ξ* = *π*/4, *u*_{0} = *v*_{0} = 1/80 cycles/pixel, and *ϕ* = 0. Let $C=\sqrt{\log 2/\pi}$ and $R=\sqrt{{u}_{0}^{2}+{v}_{0}^{2}}$; then the spatial frequency BW is about 2*C*/*a* ≈ 0.0188 cycles/pixel, and the orientation BW is about 2 tan^{−1}(*C*/*Rb*) ≈ 1.173 rad (about 67.2°) (see Movellan, 2008 for BW derivations based on the half-magnitude RF profile). Figures 11a and 11b show the real and imaginary parts of the Gabor filter in the spatial domain on an image of 1,024×1,024 pixels, respectively.
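The bandwidth figures above can be reproduced numerically. The sketch below instantiates equation 3.3 with the stated parameters (the amplitude *A* = 1 is an arbitrary placeholder, since the firing rates come from the later shift-and-scale step) and evaluates the two half-magnitude bandwidth formulas.

```python
import numpy as np

# Parameters as stated in the text (units: pixels and cycles/pixel).
x0 = y0 = 0.0
a, b = 50.0, 40.0
xi = np.pi / 4
u0 = v0 = 1.0 / 80
phi = 0.0
A = 1.0  # amplitude placeholder; rates come from a later shift/scale

def gabor(x, y):
    """Complex 2D Gabor filter of equation 3.3."""
    xr = (x - x0) * np.cos(xi) + (y - y0) * np.sin(xi)
    yr = -(x - x0) * np.sin(xi) + (y - y0) * np.cos(xi)
    envelope = A * np.exp(-np.pi * (xr**2 / a**2 + yr**2 / b**2))
    carrier = np.exp(1j * (2 * np.pi * (u0 * x + v0 * y) + phi))
    return envelope * carrier

# Half-magnitude bandwidths (Movellan, 2008).
C = np.sqrt(np.log(2) / np.pi)
R = np.sqrt(u0**2 + v0**2)
freq_bw = 2 * C / a                     # ~0.0188 cycles/pixel
orient_bw = 2 * np.arctan(C / (R * b))  # ~1.173 rad, i.e. ~67 deg
print(freq_bw, orient_bw)
```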

Results from the simulation study using a Gabor filter model. (**a**) The real part of the complex Gabor filter in the spatial domain (*f*_{max} = 80 Hz). (**b**) The imaginary part of the complex Gabor filter in the spatial domain. (**c**) The estimated RF profile based on 32 projections.

From the real part of the Gabor filter (shifted such that it is nonnegative, with a peak firing rate of ~80 Hz and a baseline firing rate of ~20 Hz), we simulate two seconds of spike trains in 32 (or 64) evenly spaced directions in [0, 2*π*) with 4 trials per direction.^{4} Using the separability assumption, we estimate the spatial tuning of the visual RF and evaluate the quality of the reconstruction by computing the MSRE between the ground truth (Figure 11a) and the estimated RF profile (in terms of the pixel intensities of the two images). The results are shown in Figures 11c and 11d. Despite the use of the separability assumption, the estimated RF profile remains qualitatively and quantitatively similar to the true RF. In these two cases, the MSRE values are 0.0432 and 0.0304, respectively. As seen in Figure 11, although the details of the reconstructed RF images are partly distorted, the underlying Gabor filter structure is captured, suggesting the robustness of the separability assumption.

By combining statistical modeling of neural spike trains in the PPGLM framework with the principles of tomographic reconstruction, we have presented a new two-step approach to estimating the RFs of V1 neurons from spiking activity recorded in an experiment in which a bar was moved across the visual field at multiple orientations. In the first step, a PPGLM is used to estimate the Radon transform as the CIF of the spike trains recorded across multiple trials in each direction. In the second step, under the linear superposition principle, we invert the estimated Radon transforms to reconstruct the V1 RF using filtered back-projection. This approach takes advantage of the point process nature of the spiking activity and the highly structured geometry of the experiment. The latter step is analogous to image reconstruction, in which filtered back-projection is standard practice (Kak & Slaney, 1988; Deans, 1993). Our use of tomographic principles to map V1 RFs extends the previous work of Sun and Bonds (1994) in two important respects: the characterization of spiking activity via a PPGLM, and a statistical criterion for determining the RF size.

Our PPGLM (GLM_{hist}), which models spike history along with the stimulus effect, gives the most accurate description of the spiking data compared with a PPGLM that models only the stimulus effect (GLM) and the standard STA and WVK algorithms for rate function estimation. There are several reasons why the GLM_{hist} outperforms the STA and WVK. Instead of treating the spiking activity as a point process, the WVK uses a linear Gaussian model to relate the visual stimuli to neural responses (Paninski, 2003). The WVK parameters are one of the initial guesses commonly used to start the IRWLS algorithm that computes the PPGLM parameter estimates (Pawitan, 2001). The STA is an approximation to the WVK. Neither of these methods takes spike history into account. The spike history effect reflects the non-Poissonian nature of firing (Shimokawa & Shinomoto, 2009), as evidenced in most V1 neurons (Figure 3b). Compared to coarse spike count coding (as in the STA and WVK), fine temporal coding carries more information about the visual stimuli, as seen in its superior decoding performance in the real visual system (Jacobs et al., 2009). It is also now appreciated that intrinsic neuronal dynamics, including absolute and relative refractory periods, are important for predicting current spiking activity (Truccolo et al., 2005; Truccolo, Hochberg, & Donoghue, 2010). Including both the stimulus and the spike history in the PPGLM helps explain its superior performance relative to the STA and WVK in terms of the accuracy of the data description and the model generalizability, as evaluated respectively in the goodness-of-fit and cross-validation analyses. Since neither the STA nor the WVK meets the sampling criteria imposed by tomographic reconstruction, they consequently suffer a loss of reconstruction accuracy.

The RF of a V1 neuron defines a region of space that is sensitive to a visual stimulus. Such a region can exhibit rather complex dependencies of the stimulus sensitivity on the exact position and type of the stimulus. Despite this complexity, an RF is often described experimentally with rather simple measures, such as RF size and orientation specificity. Our analysis, based on maximum likelihood estimation of the PPGLM, provides a new definition of RF size. This definition is more reasonable than the standard one because it rests on an accepted statistical criterion: the spatial extent of the RF is the region over which the lower bound of the 95% CI of the CIF is greater than the baseline. In contrast, the *e*^{−1} criterion assumes a Gaussian shape for each RF and does not consider the statistical properties of the data. In particular, the *e*^{−1} boundary may lie beyond the point where there is a statistically significant increase in the CIF relative to baseline; it is also possible to have statistically significant activity beyond the boundary defined by the *e*^{−1} criterion. The latter case is what we observed. In the GLM_{hist} analyses, the RF sizes based on the new criterion are systematically larger than the RF sizes estimated by the *e*^{−1} criterion (Figure 8c). The GLM_{hist} RFs also have greater orientation specificity than those estimated by either the STA or the WVK (Figures 8d and 8e).

One has to keep in mind that defining the RF size requires determining a threshold, or minimal sensitivity, for a given type of stimulus. From signal detection theory, it is clear that changing this threshold changes the RF size. Given the typical shape of V1 RFs, a stricter criterion that requires larger sensitivity will typically result in a smaller RF size, and vice versa. We expect the same for the threshold criterion and the CI criterion discussed in this paper. For both criteria, the precise change of the RF size as a function of the threshold will depend on the exact shape of the spatial sensitivity. In the case of the CI criterion, the threshold itself also depends on the amount of data used for fitting the PPGLM: more data usually leads to smaller CIs and hence a larger estimated RF size. Therefore, the RF size estimate should be seen not only as an intrinsic property of the V1 neuron, but also as a quantity induced by the uncertainty in the data. The confidence level is defined *a priori*, and the resulting uncertainty depends on the amount of data available.

Our proposed definition using the 95% CI criterion, applied to the 251 V1 neurons analyzed in this study, suggests that V1 RFs may be larger than currently appreciated. Presumably, if we increase the confidence level (say, from 95% to 99%), the estimated RF size will become smaller and the gap between the two criteria might shrink. However, with the same confidence level but more spiking data collected from more experimental trials, the estimated RF size may also become larger (due to reduced confidence bounds, or reduced uncertainty about the estimated parameters).

Maximum likelihood estimation is known to be subject to overfitting when an undesirably complex model is used. Because of the small number of trials collected, we have not extensively investigated regularization options for the methods considered (WVK and PPGLM). Determining optimal regularization parameters and schemes (e.g., *L*_{1} or *L*_{2} norm) is a nontrivial estimation problem for small sample sizes (Hastie, Tibshirani, & Friedman, 2009). Based on the goodness-of-fit assessment using the KS plot and cross-validation, we believe that overfitting is not a concern for GLM_{hist}.
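For readers who do want a regularized variant of the linear (WVK-style) estimator, a minimal *L*_{2} (ridge) sketch is shown below on a toy regression problem; the dimensions and the penalty λ = 10 are hypothetical illustrative values, not settings from our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-conditioned regression standing in for the WVK problem n = X @ xi:
# few observations (rows) relative to the number of parameters (columns).
K, p = 60, 40
X = rng.normal(size=(K, p))
xi_true = np.zeros(p)
xi_true[:5] = 1.0
n = X @ xi_true + rng.normal(scale=2.0, size=K)

def ridge(X, n, lam):
    """L2-regularized least squares: (X'X + lam*I)^{-1} X'n."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ n)

xi_ls = ridge(X, n, 0.0)   # ordinary least squares (lam = 0)
xi_l2 = ridge(X, n, 10.0)  # lam = 10 is an arbitrary illustrative value
print(np.linalg.norm(xi_ls - xi_true), np.linalg.norm(xi_l2 - xi_true))
```

Choosing λ in practice (e.g., by cross-validation) is exactly the nontrivial step alluded to above.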

Finally, our paradigm suggests an approach to determining the optimal design of moving bar experiments so as to maximize the accuracy of RF reconstruction. Given a fixed number of projections, the signal bandwidth that can be faithfully estimated is bounded above (Natterer, 2001). For the smoothing spline model, the number of control points per direction is bounded above by the number of moving bar projections multiplied by the time interval duration per direction. Therefore, in order to improve the estimation resolution of the RFs, it is necessary to increase the number of moving bar projections to meet the sampling criteria. Given a fixed number of trials, an experimental design can be chosen that balances the trade-off between accurate RF reconstruction and the number of directions and trials per direction required to achieve it (Figures 9 and 10).

Our findings suggest several directions for future investigation. We can directly compare V1 RF properties (size and orientation tuning) obtained using different stimulus paradigms, and examine the plasticity or sensitivity of V1 RFs to distinct visual stimuli (Pettet & Gilbert, 1992; DeAngelis et al., 1995; Wandell & Smirnakis, 2009). Since our moving-bar tomographic reconstruction framework is not limited by the assumption of a linear relationship between the visual stimuli and their neural responses, it can also be used to examine the nonlinear stimulus-response relationships of neurons in other visual areas, such as V2. Alternatively, the principle can be extended to estimate the spatio-temporal RF profile (Rust et al., 2005). In addition, instead of filtered back-projection, alternative computational approaches to tomographic reconstruction (Herman, 2009; Mao et al., 2010) can be used to improve the reconstruction accuracy and to reduce the minimum number of moving bar directions required. Finally, our current RF model is rather simplified and limited by several assumptions; it would be interesting to explore the possibility of estimating the spatiotemporal visual RF (Theunissen et al., 2001; Stanley, 2002; Ringach, 2004; Nishimoto et al., 2005) using the same or a modified experimental paradigm. These studies will be topics of future reports.

Finally, we also recognize some limitations of the moving bar stimulus paradigm. Given a bar width of 0.2°, the bar may be comparable in size to the full RF of some V1 cells and therefore too coarse to reconstruct the fine internal structure of the RF. In addition, at eccentricities of 2–5°, the fixation noise is comparable to the RF size, which may blur the reconstruction. Despite these inherent limitations of the original moving bar experiments, we believe that our proposed analysis approach provides an alternative computational solution for mapping visual RFs using moving bar stimuli.

In addition, it should be pointed out that in the current paper we estimate only the orientation tuning, not the direction tuning, of the V1 neurons. If V1 neurons are directionally selective (e.g., De Valois, Yund, & Hepler, 1982; Livingstone, 1998; Livingstone & Conway, 2003), then our approach would introduce some bias into the RF estimate. This is because the Radon transform is defined on a symmetric plane with orientation coverage [0, *π*) (Kak & Slaney, 1988; see also Appendix A), so orientation (rather than direction) selectivity is implicitly assumed. Despite this simplifying assumption (in addition to the assumed separability of spatial and orientation tunings), our approach still provides a useful tool for assessing visual RF properties (complementary to the STA and WVK approaches).

Since the moving bar experiment remains a classic paradigm for mapping visual RFs in awake monkeys, we believe that our proposed approach and our investigation of experimental design provide valuable information for neuroscientists. The approach can also be used to complement and compare against RF estimates obtained from other experimental paradigms based on random white noise stimuli or m-sequences.

We thank two anonymous reviewers for valuable comments. This work was supported by the US National Institutes of Health grant DP1-OD003646 (to E.N.B.) and the European Union FP7-1CT grant 240763 (to G.P.). Part of the preliminary results was presented at COSYNE'09, Salt Lake City, Utah.

The Radon transform (Radon, 1917) has been widely used in various inverse reconstruction problems, such as computerized tomography, nuclear magnetic resonance, optics, astronomy, and geophysics (Kak & Slaney, 1988; Deans, 1993). The common task in these problems is to reconstruct a 2D or 3D function from its lower-dimensional (1D or 2D) projections, which are treated as samples of the Radon transform.

Let *f*(*x*, *y*) be a continuous, band-limited function defined on the 2D Euclidean space $(x,y)\in{\mathbb{R}}^{2}$. The Radon transform of *f* along a direction *θ* ∈ [0, 2*π*], denoted as $\mathcal{R}(f)[t;\theta]$, is defined by a set of *line integrals* along the projection direction *θ* at a distance *t* from the origin. Mathematically, the line integral is written as

$$\mathcal{R}(f)[t;\theta ]={\int}_{-\infty}^{\infty}f(t,s)\mathrm{d}s$$

(A.1)

where (*t*, *s*) are defined by the rotated version of (*x*, *y*) as follows:

$$\left[\begin{array}{c}t\\ s\end{array}\right]=\left[\begin{array}{cc}cos\theta & sin\theta \\ -sin\theta & cos\theta \end{array}\right]\phantom{\rule{0.38889em}{0ex}}\left[\begin{array}{c}x\\ y\end{array}\right]$$

(A.2)

Here, *t* = *x* cos *θ* + *y* sin *θ* defines a line in the 2D plane, specified by an angle *θ* and a distance *t* from the origin. Therefore, the Radon transform $\mathcal{R}(f)[t;\theta]$ is a 1D line projection of *f*(*x*, *y*) for a given direction *θ*. The underlying mathematical theory for tomographic reconstruction is the 'Projection-Slice Theorem'. Let ${\mathcal{F}}_{1}$ and ${\mathcal{F}}_{2}$ denote the 1D and 2D Fourier transforms, respectively. The 2D Fourier transform of *f*(*x*, *y*) is written as

$${\mathcal{F}}_{2}(u,v)\{f(x,y)\}={\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}f(x,y){e}^{-2\pi i(ux+vy)}\mathrm{d}x\mathrm{d}y$$

(A.3)

where $i=\sqrt{-1}$ and (*u*, *v*) are the spatial frequencies. Letting *u* = *w* cos *θ* and *v* = *w* sin *θ*, which converts the Cartesian coordinates (*u*, *v*) of the frequency domain to polar coordinates (*w*, *θ*), we obtain

$${\mathcal{F}}_{2}(w,\theta )\{f(x,y)\}={\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}f(x,y){e}^{-2\pi iw(xcos\theta +ysin\theta )}\mathrm{d}x\mathrm{d}y$$

(A.4)

Because *t* = *x* cos *θ* + *y* sin *θ* (equation A.2), it follows that

$$\begin{array}{l}{\mathcal{F}}_{2}(w,\theta )\{f(x,y)\}={\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}f(t,s){e}^{-2\pi \mathit{iwt}}\mathrm{d}s\phantom{\rule{0.16667em}{0ex}}\mathrm{d}t\\ ={\int}_{-\infty}^{\infty}\left[{\int}_{-\infty}^{\infty}f(t,s)\mathrm{d}s\right]\phantom{\rule{0.16667em}{0ex}}{e}^{-2\pi \mathit{iwt}}\phantom{\rule{0.38889em}{0ex}}\mathrm{d}t\end{array}$$

(A.5)

From equation A.1, equation A.5 is further written as:

$$\begin{array}{l}{\mathcal{F}}_{2}(w,\theta )\{f(x,y)\}={\int}_{-\infty}^{\infty}\mathcal{R}(f)[t;\theta ]{e}^{-2\pi \mathit{iwt}}\mathrm{d}t\\ ={\mathcal{F}}_{1}\{\mathcal{R}\{f\}\}\end{array}$$

(A.6)

That is, the 2D Fourier transform of a function *f*(*x*, *y*) is identical to the 1D Fourier transform of its Radon transform $\mathcal{R}(f)$. Note that equation A.6 is defined in the polar coordinate system (*w*, *θ*) of the frequency domain. To reconstruct *f*(*x*, *y*), we apply the inverse 2D Fourier transform to equation A.6, which yields

$$\begin{array}{l}f(x,y)={\mathcal{F}}_{2}^{-1}\{{\mathcal{F}}_{1}\{\mathcal{R}(f)\}\}\\ ={\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}{\mathcal{F}}_{2}(u,v){e}^{2\pi i(ux+vy)}\mathrm{d}u\mathrm{d}v\end{array}$$

(A.7)

Using the identity *u* = *w* cos *θ*, *v* = *w* sin *θ*, and d*u*d*v* = *w*d*w*d*θ*, equation A.7 is rewritten as

$$f(x,y)={\int}_{0}^{2\pi}{\int}_{0}^{\infty}{\mathcal{F}}_{2}(w,\theta )\phantom{\rule{0.16667em}{0ex}}w\phantom{\rule{0.16667em}{0ex}}{e}^{2\pi iw(xcos\theta +ysin\theta )}\mathrm{d}w\mathrm{d}\theta $$

(A.8)

In light of the Fourier transform property ${\mathcal{F}}_{2}(w,\theta+\pi)={\mathcal{F}}_{2}(-w,\theta)$, equation A.8 is further written as

$$f(x,y)={\int}_{0}^{\pi}{\int}_{-\infty}^{\infty}{\mathcal{F}}_{2}(w,\theta)\,|w|\,{e}^{2\pi iw(x\cos\theta+y\sin\theta)}\,\mathrm{d}w\,\mathrm{d}\theta\stackrel{\text{def}}{=}{\int}_{0}^{\pi}{\mathcal{Q}}_{\theta}(x\cos\theta+y\sin\theta)\,\mathrm{d}\theta$$

(A.9)

where ${\mathcal{F}}_{2}(w,\theta)={\mathcal{F}}_{1}\{\mathcal{R}(f)[t;\theta]\}$, and

$${\mathcal{Q}}_{\theta}(t)={\int}_{-\infty}^{\infty}{\mathcal{F}}_{1}\{\mathcal{R}(f)[t;\theta]\}\,|w|\,{e}^{2\pi iwt}\,\mathrm{d}w$$

(A.10)

in which the frequency response |*w*| defines a so-called ‘Ramp’ filter. Hence, the reconstruction is viewed as a filtered back-projection process: a filtering operation (equation A.10) combined with a back-projection (equation A.9).
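As a quick numerical check of the Projection-Slice Theorem (equation A.6) underlying this procedure: for *θ* = 0, the 1D DFT of the column-sum projection of a discrete image equals the *v* = 0 slice of its 2D DFT exactly. A minimal NumPy verification:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=(64, 64))  # arbitrary discrete image f[y, x]

# Radon projection at theta = 0: integrate f over y for each x.
projection = f.sum(axis=0)

# Projection-Slice Theorem (equation A.6) at theta = 0: the 1D DFT of the
# projection equals the v = 0 slice of the 2D DFT of the image.
slice_2d = np.fft.fft2(f)[0, :]
slice_1d = np.fft.fft(projection)
print(np.max(np.abs(slice_2d - slice_1d)))
```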

In terms of discrete implementation of the inverse Radon transform, it is worth pointing out several additional important numerical issues that are often encountered in practice (some of which might not be crucial in the computerized tomography problem but are pivotal in the RF mapping problem).

- Ramp filter: The Ramp filter is originally defined in continuous time. In a discrete implementation, exact zero-frequency filtering cannot be achieved, so some very low-frequency components may be cancelled out. One way to alleviate this problem is to implement the Ramp filter in the time domain instead of the frequency domain, namely, to zero pad both the filter and the Radon transform $\mathcal{R}(f)[t;{\theta}_{j}]$ (Kak & Slaney, 1988) before applying the FFT; this avoids aliasing by replacing the circular convolution with a non-periodic convolution.
- DC leakage: If the 2D RF profile has a nonzero baseline, DC leakage will occur due to the discrete Fourier transform in ${\mathcal{F}}_{1}\{\mathcal{R}(f)[t;{\theta}_{j}]\}$. The issue becomes more severe with the finite projection effect (the fewer the projections, the more severe the problem). To tackle it, we use a practical solution. Because the inverse Radon transform involves only linear operations, the linear superposition principle holds. Therefore, we first estimate the DC component *c* (the global mean value) of the reconstructed RF image and apply the Radon transform to a constant DC image (of the same size), from which we obtain $\mathcal{R}(c)$; before applying the inverse Radon transform, we subtract $\mathcal{R}(c)$ from the original Radon transform; finally, we apply the inverse Radon transform to $\mathcal{R}(f)[t;{\theta}_{j}]-\mathcal{R}(c)$, from which $\tilde{f}(x,y)$ is obtained, and we estimate the RF by adding back the constant, $f(x,y)=\tilde{f}(x,y)+c$, for every pixel (*x*, *y*). In practice, this trick is quite effective in reducing the DC leakage effect.
- Boundary effect: In the discrete inverse Radon transform, because of finite approximation and interpolation, the boundary values of the reconstructed RF images are less reliable (typically overestimated). In practice, we may simply ignore the estimated boundary values and display only the central part of the image.
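Putting equations A.9-A.10 and the zero-padding remark above together, a minimal NumPy filtered back-projection can be sketched using a test function whose Radon transform is known in closed form: the indicator of a centered disk of radius *ρ*, whose projection is 2√(*ρ*² − *t*²) for every direction. The grid sizes, number of directions, and disk radius below are illustrative choices.

```python
import numpy as np

# Analytic Radon transform of the indicator of a disk of radius rho,
# identical for every direction: R(f)[t; theta] = 2*sqrt(rho^2 - t^2).
rho = 0.5
t = np.linspace(-1.5, 1.5, 301)
dt = t[1] - t[0]
p = 2.0 * np.sqrt(np.maximum(rho**2 - t**2, 0.0))

# Filtering step (equation A.10): multiply by the Ramp response |w| in the
# frequency domain, zero-padding to avoid circular-convolution aliasing.
npad = 2 * t.size
w = np.abs(np.fft.fftfreq(npad, d=dt))
q = np.real(np.fft.ifft(np.fft.fft(p, npad) * w))[:t.size]

# Back-projection step (equation A.9), discretized over J directions in [0, pi).
J = 180
N = 101
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
recon = np.zeros_like(X)
for theta in np.arange(J) * np.pi / J:
    tt = X * np.cos(theta) + Y * np.sin(theta)
    recon += np.interp(tt, t, q)   # q is direction-independent for the disk
recon *= np.pi / J

r = np.sqrt(X**2 + Y**2)
inside = recon[r < 0.8 * rho].mean()    # should be close to 1
outside = recon[r > 1.5 * rho].mean()   # should be close to 0
print(inside, outside)
```

With these scalings, no extra normalization constant is needed: the product of the sample spacing, the frequency spacing, and the FFT length is one, so the discrete filter approximates the continuous integral in equation A.10 directly.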

A visual RF can be viewed as a linear transfer function that maps the visual stimulus input to a neuronal response output. The RF characterizes the basic firing property of a neuron, either excitatory or inhibitory. In the experimental setup for mapping a 2D visual RF, we assume that the RF space has a size of *m* × *m* pixels.

Let *j* = 1, ···, *J* index the moving bar directions and *k* = 1, ···, *K* index the time bins. In the *j*-th direction and at the *k*-th time bin, let ${\mathbf{S}}_{j,k}\in{\mathbb{R}}^{1\times{m}^{2}}$ be the vectorized visual stimulus field input (a 1 × *m*^{2} row vector), and let *n _{j,k}* denote the corresponding observed spike count (neural response output). Let ${\mathbf{X}}_{j}={[{\mathbf{S}}_{j,1}^{\top},\ldots,{\mathbf{S}}_{j,K}^{\top}]}^{\top}\in{\mathbb{R}}^{K\times{m}^{2}}$ denote the concatenated matrix of all *K* stimulus vectors in that direction, and stack across directions

$$\mathbf{X}={[{\mathbf{X}}_{1}^{\top},\ldots,{\mathbf{X}}_{J}^{\top}]}^{\top}\in{\mathbb{R}}^{KJ\times{m}^{2}},\qquad\mathbf{n}={[{\mathbf{n}}_{1}^{\top},\ldots,{\mathbf{n}}_{J}^{\top}]}^{\top}\in{\mathbb{R}}^{KJ\times 1}.$$

The linear neural encoder relates the stimulus input with the neural response output via a linear regressor *ξ*

$$\mathit{n}=\mathit{X}\mathit{\xi},$$

(B.1)

where $\boldsymbol{\xi}\in{\mathbb{R}}^{{m}^{2}\times 1}$ is a parameter vector to be estimated. Upon re-matrization of **ξ** into an *m* × *m* matrix, we obtain the 2D RF estimate projected onto the *m* × *m* pixel space. Equation B.1 defines a linear representation between the input **X** and the output **n**.

To find a solution to the linear equation B.1, we can use the least-squares method, which yields the WVK (using only linear kernels) solution^{5}

$${\boldsymbol{\xi}}_{\text{WVK}}={({\mathbf{X}}^{\top}\mathbf{X})}^{-1}{\mathbf{X}}^{\top}\mathbf{n}.$$

(B.2)

The variance of the estimate *ξ*_{WVK} is

$$\mathit{Var}[{\boldsymbol{\xi}}_{\text{WVK}}]={\widehat{\sigma}}^{2}{({\mathbf{X}}^{\top}\mathbf{X})}^{-1},$$

(B.3)

where ${\widehat{\sigma}}^{2}=\frac{1}{KJ-{m}^{2}}{\|\mathbf{n}-\mathbf{X}{\boldsymbol{\xi}}_{\text{WVK}}\|}^{2}=\frac{1}{KJ-{m}^{2}}{(\mathbf{n}-\mathbf{X}{\boldsymbol{\xi}}_{\text{WVK}})}^{\top}(\mathbf{n}-\mathbf{X}{\boldsymbol{\xi}}_{\text{WVK}})$ (the denominator *KJ* − *m*^{2} denotes the statistical degrees of freedom).
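Equations B.2 and B.3 can be exercised on synthetic data; the dimensions below are toy values standing in for the *KJ* × *m*² stimulus matrix, not our experimental sizes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the stimulus design matrix: KJ time bins by m^2 pixels.
KJ, m2 = 500, 25
X = rng.normal(size=(KJ, m2))
xi_true = rng.normal(size=m2)
n = X @ xi_true + rng.normal(scale=0.5, size=KJ)  # linear-Gaussian responses

# Equation B.2: first-order WVK / least-squares solution.
XtX_inv = np.linalg.inv(X.T @ X)
xi_wvk = XtX_inv @ X.T @ n

# Equation B.3: residual variance with KJ - m^2 degrees of freedom,
# and the covariance of the estimate.
resid = n - X @ xi_wvk
sigma2_hat = resid @ resid / (KJ - m2)
var_wvk = sigma2_hat * XtX_inv
print(sigma2_hat)
```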

STA can be viewed as a reduced form of the first-order WVK method. Let *N* denote the total number of spikes, and let $\mathbf{D}(N)=N\mathbf{I}$ denote the corresponding (diagonal) scaling matrix. Then

$${\boldsymbol{\xi}}_{\text{STA}}=\mathbf{D}{(N)}^{-1}{\mathbf{X}}^{\top}\mathbf{n}=c\,{\mathbf{X}}^{\top}\mathbf{n},$$

(B.4)

where *c* = 1/*N* denotes a rate constant that multiplies the vector ${\mathbf{X}}^{\top}\mathbf{n}$ to yield the STA estimate. That is, the STA computes the normalized spike counts across time related to the stimulus; hence it can be interpreted as coding a 'preferred' stimulus of the neuron. In the so-called 'whitened STA' or 'rotated STA' (RSTA) (Paninski, 2003), the regressor vector is weighted by a full matrix rather than a constant: ${\boldsymbol{\xi}}_{\text{RSTA}}={\boldsymbol{\Sigma}}_{\mathbf{X}}^{-1}{\boldsymbol{\xi}}_{\text{STA}}$, where ${\boldsymbol{\Sigma}}_{\mathbf{X}}$ denotes the covariance matrix of the stimulus ensemble.

Next, in the context of moving bar experiments, we examine the STA and WVK estimates (given data collected from all moving bar directions) in relation to the individual estimates computed from the data of each direction. Put another way, can the STA or WVK estimate be decomposed into a linear sum of *J* estimates computed separately from each moving bar direction?

In the case of STA, let ${N}_{j}={\sum}_{k=1}^{K}{n}_{j,k}$ denote the total spike count in the *j*-th moving bar direction, and let $N={\sum}_{j=1}^{J}{N}_{j}$ and *c _{j}* = 1/*N _{j}*. Then

$$c=\frac{1}{{\sum}_{j=1}^{J}{N}_{j}}=\frac{1}{{\sum}_{j=1}^{J}{c}_{j}^{-1}}=\frac{1}{J}\frac{J}{{\sum}_{j=1}^{J}{c}_{j}^{-1}}.$$

(B.5)

Hence, *c* equals a 1/*J* fraction of the *harmonic mean* of the *c _{j}* obtained from all *J* directions. The STA estimate can then be written as

$${\boldsymbol{\xi}}_{\text{STA}}=c\sum_{j=1}^{J}{\mathbf{X}}_{j}^{\top}{\mathbf{n}}_{j}=c\sum_{j=1}^{J}{\boldsymbol{\xi}}_{j}\ne\frac{1}{J}\sum_{j=1}^{J}{c}_{j}{\boldsymbol{\xi}}_{j}$$

(B.6)

where ${\boldsymbol{\xi}}_{j}={\mathbf{X}}_{j}^{\top}{\mathbf{n}}_{j}$ denotes the individual (unnormalized) estimate from the *j*-th direction. Since the harmonic mean and the arithmetic mean are generally different in value, from equation B.4 we know that $c\ne\frac{1}{J}{\sum}_{j=1}^{J}{c}_{j}$. However, in the special case when the *N _{j}* (or equivalently the *c _{j}*) are identical for all *j*, the two means coincide and the decomposition holds.
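The harmonic-mean identity in equation B.5, and its failure to match the arithmetic mean for unequal trial counts, can be checked numerically; the per-direction spike counts *N _{j}* below are hypothetical.

```python
import numpy as np

# Hypothetical per-direction spike counts N_j for J = 4 directions.
N_j = np.array([50, 80, 120, 200])
J = len(N_j)
c_j = 1.0 / N_j

# Equation B.5: the overall STA constant c = 1/N is a 1/J fraction of the
# harmonic mean of the per-direction constants c_j.
c = 1.0 / N_j.sum()
harmonic_mean = J / np.sum(1.0 / c_j)
arithmetic_mean = c_j.mean()
print(c, harmonic_mean / J, arithmetic_mean / J)

# Equal counts make the two means coincide (the special case in the text).
N_eq = np.full(J, 100)
c_eq = 1.0 / N_eq
print(1.0 / N_eq.sum(), c_eq.mean() / J)
```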

Unlike the STA, the WVK estimate generally cannot be decomposed into a linear combination of the individual estimates from each direction:

$${\boldsymbol{\xi}}_{\text{WVK}}={\left(\sum_{j=1}^{J}{\mathbf{X}}_{j}^{\top}{\mathbf{X}}_{j}\right)}^{-1}\left(\sum_{j=1}^{J}{\mathbf{X}}_{j}^{\top}{\mathbf{n}}_{j}\right)=\sum_{j=1}^{J}{\left(\sum_{j^{\prime}=1}^{J}{\mathbf{X}}_{j^{\prime}}^{\top}{\mathbf{X}}_{j^{\prime}}\right)}^{-1}{\mathbf{X}}_{j}^{\top}{\mathbf{n}}_{j}\ne\frac{1}{J}\sum_{j=1}^{J}{({\mathbf{X}}_{j}^{\top}{\mathbf{X}}_{j})}^{-1}{\mathbf{X}}_{j}^{\top}{\mathbf{n}}_{j}$$

(B.7)

From simple linear algebra, the last two expressions are not equal in general, except in the special case when the matrices ${\mathbf{X}}_{j}^{\top}{\mathbf{X}}_{j}$ are identical for all *j*. In the moving bar experiment, if the bar length is greater than the diameter (maximum span) of the visual field, this special condition can be satisfied, since the area covered by the moving bar, ${\mathbf{X}}_{j}={[{\mathbf{S}}_{j,1}^{\top},\ldots,{\mathbf{S}}_{j,K}^{\top}]}^{\top}$, is exactly the same for each direction in a *circular* visual field (due to circular symmetry).

Note that in equation B.7 the matrix ${\mathbf{X}}_{j}^{\top}{\mathbf{X}}_{j}$ has to be of full rank for every direction *j*, which typically requires a large *K* per direction. Because of the least-squares fit, the elements of ${\mathbf{X}}_{j}{\boldsymbol{\xi}}_{\text{WVK}}$ are not always nonnegative (see e.g., Figure 5a, *rightmost column*), and the least-squares estimate might be sensitive to noise or outliers in ${\mathbf{X}}_{j}$, as reflected in its worst goodness-of-fit performance (Table 1).

The experimental procedures were approved by the National Committee on Animal Welfare in compliance with the guidelines of the European Community for the care and use of laboratory animals. Two adult female rhesus monkeys (*Macaca mulatta*) were used in this study. Neuronal spiking activity was recorded from awake, head-fixed monkeys in the opercular region of V1 (2–5° eccentricity) and, on some occasions, from the superior bank of the calcarine sulcus (8–12° eccentricity).

Each trial started with the appearance of a red fixation point (0.15°; luminance: 10 cd·m^{−2}), and the monkeys were trained to press a lever within 700 ms and to maintain their gaze within a virtual window (size: 1°) centered on the fixation point. The task ended with the fixation point randomly changing color from red to green between 2,500 and 4,000 ms after trial onset. To obtain a juice reward, the monkeys had to release the lever within 500 ms after the color change. Trials were discarded whenever fixation was interrupted. The monkeys' eye positions were tracked continuously by a search coil system (DNI and Crist Instruments) at a 500 Hz sampling rate.

Visual stimuli were generated as sequences of computer-generated bitmap images with an interface developed in LabVIEW (National Instruments), and were presented as movies (frame rate: 100 frames/s) using a standard graphics board (GeForce 6600-series, NVIDIA) controlled by ActiveSTIM (www.activestim.com). The CRT monitor (CM813ET, Hitachi) was gamma corrected and spanned a visual angle of 36° × 28°. The visual stimulus was a high-contrast light bar (luminance: 50 cd·m^{−2}; bar width: 0.2°) that moved with constant velocity (14.9°/s) across the visual field (21.8° × 21.8°). The bar stimulus was presented in random order in 16 different orientations separated evenly by 22.5°. Each trial lasted two seconds (during which the bar traveled inside the visual field), where *t* = 0 and *T* = 2 s corresponded to the moments when the bar's position overlaid the two corners of the visual field on the diagonal, and *t* = 1 s corresponded to the center of the field (Figure 1b).

Quartz-insulated tungsten-platinum electrodes (~80 *μ*m diameter, 0.3–1.0 MΩ impedance; Thomas Recording) were used to record extracellular neural activity from 3 to 5 sites in both superficial and deep layers of the striate cortex (digitally band-pass filtered, 0.7–6.0 kHz; Plexon). Neuronal spikes were detected by amplitude thresholding, set interactively based on online visualization of the spike waveforms. Spike events and corresponding waveforms were sampled at 32 kHz (spike waveform length: 1.2 ms). Single-unit activity was separated using interactive custom-designed spike-sorting software. Experimental trials were rejected if they contained artifacts, if the monkey did not maintain fixation, or if the monkey showed no response or incorrect behavior. After trial rejection, the data sets were balanced across conditions such that all recorded V1 neurons contained exactly 9 trials per direction.

^{1}A more realistic model for the V1 RF is the 2D Gabor filter (Daugman, 1980; Jones & Palmer, 1987), which has coupled or non-separable spatial and orientation tunings. However, applying the Radon transform to the non-separable Gabor filter model would make the analysis nontrivial (equation 2.1).

^{2}The number $1{,}414\approx 2000/\sqrt{2}$ arises from the fact that we treat the moving bar's 2-s traveling time (binned as a 2,000-dimensional vector) as the diagonal of the rectangular stimulus field (Figure 1b).

^{3}Alternatively, as suggested by one reviewer, one could apply the reverse correlation or STA method to identify the period between stimulus onset and cortical response over which the two are significantly correlated, and use that period as a rough estimate of the latency.
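A minimal numerical sketch of this suggestion (all names and the toy data are hypothetical; this is not the paper’s method):

```python
import numpy as np

def estimate_latency_bins(stimulus, spikes, max_lag):
    """Rough latency estimate: the lag (in bins) at which the binned
    spike train is maximally correlated with the stimulus time course."""
    corrs = [np.dot(stimulus[: len(stimulus) - lag], spikes[lag:])
             for lag in range(max_lag + 1)]
    return int(np.argmax(corrs))

# Toy check: spikes follow the stimulus with a pure 40-bin delay.
rng = np.random.default_rng(0)
stim = rng.normal(size=2000)
spk = np.zeros(2000)
spk[40:] = stim[:-40]
print(estimate_latency_bins(stim, spk, max_lag=100))  # -> 40
```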

^{4}Assuming circular symmetry, this is equivalent to sampling from 16 (or 32) evenly spaced directions in [0, *π*) with 8 trials per direction.

^{5}This is obtained by solving the Yule-Walker equation. Multiplying *X*^{⊤} to both sides of equation B.1, we approximate the auto-correlation and cross-correlation as *X*^{⊤}**X** and *X*^{⊤}**y**, respectively.
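The construction in this footnote can be illustrated with a small least-squares sketch: stacking lagged copies of a series into a design matrix *X* and solving the normal equations (*X*^{⊤}*X*)**a** = *X*^{⊤}**y** recovers the AR coefficients (the function and variable names below are illustrative; equation B.1 itself is not reproduced here):

```python
import numpy as np

def fit_ar(y, order):
    """Least-squares AR(order) fit via the normal equations
    (X^T X) a = X^T target, a finite-sample analogue of Yule-Walker."""
    n = len(y)
    # Column k holds y delayed by k + 1 bins, aligned with y[order:].
    X = np.column_stack([y[order - k - 1 : n - k - 1] for k in range(order)])
    target = y[order:]
    return np.linalg.solve(X.T @ X, X.T @ target)

# Toy check: an AR(1) process with true coefficient 0.8.
rng = np.random.default_rng(1)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.8 * y[t - 1] + rng.normal()
print(fit_ar(y, 1))  # close to [0.8]
```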

- Brown EN, Barbieri R, Ventura V, Kass RE, Frank LM. The time-rescaling theorem and its application to neural spike train analysis. Neural Comput. 2002;14:325–346. [PubMed]
- Brown EN, Barbieri R, Eden UT, Frank LM. Likelihood methods for neural data analysis. In: Feng J, editor. Computational Neuroscience: A Comprehensive Approach. London: CRC Press; 2003. pp. 253–286.
- Brown EN, Kass RE, Mitra PP. Multiple neural spike train data analysis: state-of-the-art and future challenges. Nat Neurosci. 2004;7:456–461. [PubMed]
- Chen X, Han F, Poo MM, Dan Y. Excitatory and suppressive receptive field subunits in awake monkey primary visual cortex (V1). Proc Natl Acad Sci USA. 2007;104:19120–19125. [PubMed]
- Crist RE, Li W, Gilbert CD. Learning to see: experience and attention in primary visual cortex. Nat Neurosci. 2001;4:519–525. [PubMed]
- Daugman JG. Two-dimensional spectral analysis of cortical receptive field profiles. Vision Res. 1980;20:847–856. [PubMed]
- de Boor C. A Practical Guide to Splines. Berlin: Springer-Verlag; 1978.
- de Ruyter van Steveninck RR, Bialek W. Coding and information transfer in short spike sequences. Proc R Soc Lond B. 1988;234:379–414.
- De Valois RL, Yund W, Hepler N. The orientation and direction selectivity of cells in macaque visual cortex. Vis Res. 1982;22:531–544. [PubMed]
- De Valois RL, Albrecht DG, Thorell LG. Spatial frequency selectivity of cells in macaque visual cortex. Vis Res. 1982;22:545–559. [PubMed]
- DeAngelis GC, Anzai A, Ohzawa I, Freeman RD. Receptive field structure in the visual cortex: does selective stimulation induce plasticity? Proc Natl Acad Sci USA. 1995;92:9682–9686. [PubMed]
- Deans SR. The Radon Transform and Some of Its Applications. New York: Dover Publications; 1993.
- Gabbiani F. Coding of time varying signals in spike trains of linear and half-wave rectifying neurons. Network. 1996;7:61–85.
- Haslinger R, Pipa G, Brown EN. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking. Neural Comput. 2010;22:2477–2506. [PMC free article] [PubMed]
- Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2. Berlin: Springer; 2009.
- Herman GT. Fundamentals of Computerized Tomography: Image Reconstruction from Projections. 2. Berlin: Springer; 2009.
- Hubel DH, Wiesel TN. Receptive fields of single neurons in the cat’s striate cortex. J Physiol. 1959;148:574–591. [PubMed]
- Jacobs AL, Fridman G, Douglas RM, Alam NM, Latham PE, Prusky GT, Nirenberg S. Ruling out and ruling in neural codes. Proc Natl Acad Sci USA. 2009;106:5936–5941. [PubMed]
- Jones JP, Palmer LA. The two-dimensional spatial structure of simple receptive fields in cat striate cortex. J Neurophysiol. 1987;58:1187–1211. [PubMed]
- Kak AC, Slaney M. Principles of Computerized Tomographic Imaging. New York: IEEE Press; 1988.
- Kass RE, Ventura V, Brown EN. Statistical issues in the analysis of neuronal data. J Neurophysiol. 2005;94:8–25. [PubMed]
- Leventhal AG, Thompson KG, Liu D, Zhou Y, Ault SJ. Concomitant sensitivity to orientation, direction, and color of cells in layers 2, 3, and 4 of monkey striate cortex. J Neurosci. 1995;15:1808–1818. [PubMed]
- Livingstone MS. Mechanisms of direction selectivity in Macaque V1. Neuron. 1998;20(3):509–526. [PubMed]
- Livingstone MS, Conway BR. Substructure of direction-selective receptive fields in Macaque V1. J Neurophysiol. 2003;89(5):2743–2759. [PMC free article] [PubMed]
- Mao Y, Fahimian BP, Osher SJ, Miao J. Development and optimization of regularized tomographic reconstruction algorithms utilizing equally-sloped tomography. IEEE Trans Image Process. 2010;19:1259–1268. [PubMed]
- Mardia KV. Statistics of Directional Data. London: Academic Press; 1972.
- McCullagh P, Nelder J. Generalized Linear Models. London: Chapman & Hall; 1989.
- Movellan JR. Tech Rep. University of California; San Diego: 2008. Tutorial on Gabor filter.
- Movshon JA, Thomson ID, Tolhurst DJ. Spatial summation in the receptive fields of simple cells in the cat’s striate cortex. J Physiol. 1978;283:53–77. [PubMed]
- Natterer F. The Mathematics of Computerized Tomography. Philadelphia, PA: SIAM Press; 2001.
- Nienborg H, Bridge H, Parker AJ, Cumming BG. Receptive field size in V1 neurons limits acuity for perceiving disparity modulation. J Neurosci. 2004;24:2065–2076. [PubMed]
- Nishimoto S, Arai M, Ohzawa I. Accuracy of subspace mapping of spatiotemporal frequency domain visual receptive fields. J Neurophysiol. 2005;93:3524–3536. [PubMed]
- Paninski L. Convergence properties of some spike-triggered analysis techniques. Network. 2003;14:437–464. [PubMed]
- Paninski L. Maximum likelihood estimation of cascade point-process neural encoding models. Network. 2004;15:243–262. [PubMed]
- Park M, Pillow JW. Receptive field inference with localized priors. PloS Comput Biol. 2011;7(10):e1002219. [PMC free article] [PubMed]
- Pawitan Y. In All Likelihood: Statistical Modelling and Inference Using Likelihood. New York: Oxford Univ Press; 2001.
- Pettet MW, Gilbert CD. Dynamic changes in receptive-field size in cat primary visual cortex. Proc Natl Acad Sci USA. 1992;89:8366–8370. [PubMed]
- Pillow JW, Ahmadian Y, Paninski L. Model-based decoding, information estimation, and change-point detection techniques for multi-neuron spike trains. Neural Computation. 2011;23:1–45. [PubMed]
- Pipa G, Chen Z, Neuenschwander S, Lima B, Brown EN. Efficient spike encoding for mapping visual receptive fields. Oral presentation in Computational and Systems Neuroscience (COSYNE’09); 2009. Abstract in Frontiers in Neuroscience.
- Radon J. Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten. Berichte Sächsische Akademie der Wissenschaften, Leipzig, Math–Phys Kl. 1917;69:262–267.
- Reid RC, Victor JD, Shapley RM. The use of m-sequences in the analysis of visual neurons: Linear receptive field properties. Vis Neurosci. 1997;14:1015–1027. [PubMed]
- Rieke F, Warland D, de Ruyter van Steveninck RR, Bialek W. Spikes: Exploring the Neural Code. Cambridge, MA: MIT Press; 1997.
- Ringach DL, Shapley RM. Reverse correlation in neurophysiology. Cognitive Science. 2004;28:147–166.
- Ringach DL, Shapley RM, Hawken MJ. Orientation selectivity in Macaque V1: diversity and laminar dependence. J Neurosci. 2002;22:5639–5651. [PubMed]
- Ringach DL. Mapping receptive fields in primary visual cortex. J Physiol. 2004;558:717–728. [PubMed]
- Rodieck RW. Quantitative analysis of cat retinal ganglion cell response to visual stimuli. Vision Res. 1965;5:583–601. [PubMed]
- Rust NC, Schwartz O, Movshon JA, Simoncelli EP. Spatiotemporal elements of macaque V1 receptive fields. Neuron. 2005;46:945–956. [PubMed]
- Shimokawa T, Shinomoto S. Estimating instantaneous irregularity of neuronal firing. Neural Computation. 2009;21:1931–1951. [PubMed]
- Stanley GB. Adaptive spatiotemporal receptive field estimation in the visual pathway. Neural Comput. 2002;14:2925–2946. [PubMed]
- Sun M, Bonds AB. Two-dimensional receptive-field organization in striate cortical neurons of the cat. Vis Neurosci. 1994;11:703–720. [PubMed]
- Theunissen FE, David SV, Singh NC, Hsu A, Vinje WE, Gallant JL. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network. 2001;12:289–316. [PubMed]
- Truccolo W, Eden UT, Fellow M, Donoghue JP, Brown EN. A point process framework for relating neural spiking activity to spiking history, neural ensemble and covariate effects. J Neurophysiol. 2005;93:1074–1089. [PubMed]
- Truccolo W, Hochberg LR, Donoghue JP. Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. Nat Neurosci. 2010;13:105–111. [PMC free article] [PubMed]
- Wandell BA, Smirnakis SM. Plasticity and stability of visual field maps in adult primary visual cortex. Nat Rev Neurosci. 2009;10:873–884. [PMC free article] [PubMed]
