Many attempts have been made to characterize latent structures in “texture spaces” defined by attentive similarity judgments. While an optimal description of perceptual texture space remains elusive, we suggest that the similarity judgments gained from these procedures provide a useful standard for relating image statistics to high-level similarity. In the present experiment, we ask subjects to group natural textures into visually similar clusters. We also represent each image using the features employed by three different parametric texture synthesis models. Given the cluster labels for our textures, we use linear discriminant analysis to predict cluster membership. We compare each model’s assignments to human data for both positive and contrast-negated textures, and evaluate relative model performance.
While a great deal of work has been done to characterize the nature of “pre-attentive” texture processing, comparatively little research effort has been directed towards the perception of texture under fully attentive viewing conditions. In particular, the question of what image features contribute to perceived similarity between pairs of textures has not been thoroughly investigated. Some attempts have been made to describe latent structures in psychological texture spaces using dimensional models [1–3], non-linear trajectories, or cluster-based analysis. In general, studies such as these tend towards the descriptive rather than the quantitative, with the end goal being to characterize the high-level properties that are captured by a particular axis, path, or cluster in the recovered texture space. In other cases, specific image features have been evaluated according to their ability to approximate human judgments of a particular high-level attribute such as “roughness” or “periodicity” [6,7]. While these results are useful for some applications, it is potentially dangerous to extrapolate from such findings to the more general similarity problem, as the texture properties isolated for investigation may not be important aspects of generic similarity judgments.
In the current study, we simultaneously characterize high-level texture similarity psychophysically and computationally. Our basic strategy is to build a dimensional model of texture space by collecting texture groupings from human observers via a card-sorting task. Within this space, we then use k-means clustering to determine a set of texture categories. By performing this second step, we are able to transform the task of judging similarity between pairs of textures into a multi-category classification task. Category membership for each texture is then treated as “ground truth” data that we can attempt to approximate with a simple linear classifier that can operate on any representation of the input images we wish. Currently, we have opted to examine the relative merits of the features employed by three different parametric texture synthesis models. To motivate our use of texture synthesis models as the subject of this study, we briefly discuss alternative texture descriptors used in other domains.
One particular domain that might seem to offer an interesting starting point for our analysis is the modeling of pre-attentive texture segmentation. In segmentation tasks, subjects are usually asked to report the location or orientation of a boundary that separates one region of homogeneous texture from another. Our idea of what features are calculated and compared to accomplish segmentation tasks such as this has evolved over the years to include broad orientation statistics, micropatterns within the texture array, and the current view that various spatial filters may provide the right outputs to match human performance [10–13]. Unfortunately, the relationship between pre-attentive segmentation and attentive similarity is uneasy at best. Segmentation is generally studied using artificial textures, where it is possible to finely control the extent to which one texture differs from another under some image feature. By contrast, almost all pairs of natural textures would be trivially easy to segment, despite the fact that human observers can usually provide a graded similarity judgment for arbitrary pairs of images. This suggests that the features that prove useful for describing segmentation performance may not be appropriate for characterizing attentive similarity.
An alternative would be to consider features that are used for texture recognition. There have been many independent investigations of the utility of various features for content-based retrieval of textures from complex images, as well as database match-to-query tasks [7,14–16]. One important advantage of such features is that they have generally been developed for use with real images. As such, these features are more capable of capturing the richness of natural textures and may thus be more useful for our purposes. However, it is important to point out that recognition and similarity are not interchangeable, despite being closely related. If one wishes to accurately classify a particular texture in an image, for example, it will be important to use representations that are invariant to changes in scale, illumination, and other sources of image variability. Similarity judgments are not necessarily invariant to these transformations, however. A particular texture viewed from far away may not be judged as highly similar to the same texture viewed close-up, for example. Any recognition algorithm designed to be invariant to a set of transformations on a given texture might have to be re-designed if similarity judgments are highly variable under the same manipulation. To avoid this potential difficulty, we do not consider texture recognition algorithms in the current study (though we cannot rule out their possible efficacy).
In general, the problem of approximating texture similarity judgments is complicated by the lack of any obvious principles governing the mapping between input images and observers’ output. By way of comparison, any rigid 3-D object is limited in its ability to take on different appearances by its geometric and photometric properties. Though the set of appearances such an object can exhibit may lie on a complicated manifold in image space, these appearances are ultimately constrained by a small number of true degrees of freedom such as pose, lighting angles, etc. Images of texture are far less constrained in this regard, since a particular texture may be recognizable from multiple distinct local image patches with unique geometry. Worse than this, judging similarity (rather than identity) is even more difficult since highly similar textures may share little in terms of true shape or material properties, resembling one another only in appearance. Thus, a collection of similar-looking textures may not necessarily lie on any particular manifold through image space, or be easily describable in terms of a limited number of degrees of freedom.
Given that texture similarity may be fundamentally dependent on appearance (rather than underlying form), we have attempted here to model similarity judgments using the image features that form the basis of three distinct texture synthesis models. Synthesizing a convincing texture is fundamentally dependent on discovering a small set of features that can capture texture appearance effectively. In general, there are no assumptions made about invariances that should be accounted for (as in texture recognition), nor is the texture description meant to account for a particular psychophysical task (such as texture segmentation). Instead, texture synthesis models are designed to measure the “ingredients” of a texture so that arbitrarily large amounts of that same texture can be constructed from noise images. The criterion for success is simply the appearance of the synthesized texture. A wide range of features have been employed in synthesis models, ranging from non-parametric models that use pixel-level or patch-level representations, to parametric methods incorporating V1-like filters [20–23]. Aside from being a useful graphics application of texture research, we have recently demonstrated that synthesis models provide a means for understanding what statistics are important for capturing the structure of various natural textures under pre-attentive conditions.
Texture synthesis models are thus a particularly useful starting point for modeling attentive texture similarity. Such models are built to address some of the same issues we encounter in understanding similarity, namely determining the “ingredients” of a texture that contribute most to its appearance. Good models are defined by their ability to accurately capture appearance with a set of candidate features. This gives us an intuitive means of guessing what features we expect to do a good job of approximating the human categorization judgments we collect. To the extent that a model measures the right things to construct a convincing synthetic image, we can expect that it is using features that are perceptually important to our observers. In turn, we would expect that these perceptually important features may provide a good basis for determining whether or not two textures are similar, and should therefore be grouped together. Thus, we would naively expect high-quality synthesis to imply good performance in a similarity task.
In the current study, we evaluated the performance of feature sets used in three distinct parametric models of texture synthesis. First, we used the power spectrum recovered from a texture to approximate its appearance. Second, we used the Heeger–Bergen analysis and synthesis algorithm to characterize textures in terms of oriented contrast energy at multiple spatial scales as measured by derivative-of-Gaussian features. Finally, we used the Portilla–Simoncelli algorithm, which augments basic measurements of multi-scale oriented contrast with higher-order measurements of edge co-occurrence across orientation, position, and scale [21,26,27]. These three algorithms represent a clear progression in synthesis quality over the span of approximately 15 years of research, making it easy to determine whether or not there is a clear relationship between synthesis quality and similarity performance. Also, the algorithms differ considerably in their basic primitives, incorporating features that capture global (power spectrum), local (Heeger–Bergen), and intermediate image structure (Portilla–Simoncelli). We discuss the details of each model in a later section.
We conducted our analysis on natural texture images. We also assessed agreement between model and experiment on both positive and contrast-negated textures. This manipulation is important from both a cognitive and computational standpoint. Negation often greatly disrupts human performance, despite the fact that many low-level aspects of the image are preserved. For example, faces are particularly difficult to recognize in photographic negative. This impairment has been attributed to the disruption of information vital for computing shape-from-shading, the disruption of surface pigmentation, or to an impaired ability to extract certain material properties like translucency. Given the interesting perceptual consequences of contrast negation, we expect that subjects’ judgments may change dramatically when images are negated. In particular, since subjects may be less able to identify the materials or objects that make up the texture, our models may be more able to accurately predict category membership in the negative case than the positive case.
In general, we do not expect to find profound agreement between human and model judgments. However, we feel that this analysis is potentially quite valuable nonetheless. Given the complexity of the problem it is worthwhile to see how much we can accomplish with feature sets that have proven extremely useful in another domain. Also, reformulating texture similarity as a categorization task may prove to be a useful tool for comparing the efficacy of texture representations beyond those discussed here. We continue by presenting the three synthesis algorithms in more detail, discussing the features used in each model to represent target textures.
As we stated above, the goal of texture synthesis algorithms is to describe a procedure for measuring enough statistics in a target image that a new image can be created that will strongly resemble the original texture. We consider three parametric texture synthesis algorithms in this study, each one based on a unique set of features. Here, we describe each algorithm, paying particular attention to the set of measurements used to describe each target texture.
This algorithm is the simplest one we will consider and represents one of the earliest attempts to create synthetic digital texture. In this procedure, target textures are subjected to a Fourier transform. The phase terms are discarded, and the power spectrum of the target image is used to describe its appearance. A new texture can be synthesized by creating a new phase image (perhaps taken from a Fourier-transformed noise image) and then carrying out an inverse Fourier transform using the target power spectrum and the newly created phase information. The resulting image is the synthetic texture. For our classification experiments, we down-sampled the original 512 × 512 pixel image by one octave (using a binomial filter) before calculating the power spectrum. This was done due to memory limitations on the classification procedure described in the Methods section.
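The whole procedure amounts to only a few lines of array code. The sketch below uses NumPy for illustration (the helper name `synthesize_power_spectrum` is ours, not the original implementation's):

```python
import numpy as np

def synthesize_power_spectrum(target, rng=None):
    """Synthesize a texture sharing the target's power spectrum.

    The target's Fourier amplitudes are kept; its phases are replaced
    by the phases of a white-noise image of the same size, and an
    inverse transform yields the synthetic texture.
    """
    rng = np.random.default_rng(rng)
    amplitude = np.abs(np.fft.fft2(target))   # amplitude spectrum of target
    noise = rng.standard_normal(target.shape)
    phase = np.angle(np.fft.fft2(noise))      # random phase field
    synthetic = np.fft.ifft2(amplitude * np.exp(1j * phase))
    return np.real(synthetic)                 # imaginary part is numerical noise
```

Because both the target and the noise image are real-valued, the recombined spectrum remains conjugate-symmetric, so the inverse transform is real up to floating-point error.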
The power-spectrum algorithm can produce good synthetic images of grainy, stochastic textures. It is not very useful for structured textures containing discrete elements or for non-homogeneous textures.
The second algorithm we consider is the Heeger–Bergen texture synthesis algorithm. This model measures local oriented contrast at multiple scales of the target image and then attempts to alter a noise image to have the same filter output statistics as the target, while maintaining the original pixel histogram of the target texture.
To be more specific, each target image is filtered at multiple scales with oriented directional derivative spatial filters. The expression for such a filter (a first-order derivative-of-Gaussian oriented at an angle θ) is as follows:
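One conventional form of such a filter, up to a normalization constant and assuming an isotropic Gaussian with spatial constant $\sigma$, is:

$$ f_\theta(x, y) = -\frac{x\cos\theta + y\sin\theta}{\sigma^{2}}\,\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right) $$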
Each filtered image (at one orientation and one spatial scale) is then compressed into a histogram. The target image is thus expressed as a set of sub-band histograms, each providing a summary of the filter outputs obtained by filtering the target texture at a particular scale and orientation. Furthermore, an intensity histogram of the target texture is also maintained. To create a synthetic version of the target texture, a noise image is initialized and then altered to have the same sub-band histograms as the target texture at each scale and orientation. Following this manipulation, the intensity histogram of the target is imposed on the synthetic image to remove any artifacts of the sub-band histogram matching process. These two histogram-matching steps are iterated until the amount of image change at each iteration is below a pre-determined threshold, or until a set number of iterations is complete. For our purposes, we used 17 × 17 pixel first-order derivative-of-Gaussian filters at four orientations (0°, 45°, 135°, 90°), with a spatial constant of 5 pixels. These filters were applied at four spatial scales by down-sampling the target image an octave at a time (again using a binomial filter) and filtering the down-sampled image with the 17 × 17 pixel oriented filter described above. The pixel intensity histogram and all sub-band histograms had 256 bins each.
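The histogram-matching operation at the heart of this iteration can be sketched as below. This is a simplified rank-based version acting on a single array; the published algorithm applies matching to each pyramid sub-band and to the pixel histogram in turn:

```python
import numpy as np

def match_histogram(source, target):
    """Remap source values so the result's histogram equals the target's.

    Rank-based matching: the k-th smallest source value is replaced by
    the k-th smallest target value, preserving the source's rank order.
    """
    order = np.argsort(source, axis=None)
    matched = np.empty(source.size, dtype=float)
    matched[order] = np.sort(target, axis=None)
    return matched.reshape(source.shape)
```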
The Heeger–Bergen algorithm is useful for creating synthetic images of a much wider range of target textures than the power-spectrum algorithm. However, textures with extended contours or large-scale structures are not well approximated by this technique due to its basic characterization of the image solely in terms of local contrast.
Finally, we present the Portilla–Simoncelli algorithm. This is a very complex model of texture synthesis, a full description of which is beyond the scope of this paper. The interested reader is referred to the original report of this algorithm for implementation details. Here, we briefly summarize the features measured in the analysis of a target texture.
This model functions in much the same way as the Heeger–Bergen algorithm, insofar as a noise image is manipulated to have the same statistics as a target texture. Furthermore, this algorithm is also based on a pyramid decomposition of the target texture into filtered images at multiple scales and orientations. Just as described above, target textures are down-sampled and filtered at multiple orientations using oriented derivative-of-Gaussian filters. Unlike the Heeger–Bergen algorithm, however, the filter outputs are not simply listed in a histogram. Instead, the co-occurrence of filter outputs across space, scale, and orientation is measured in multiple ways which we describe below.
The Portilla–Simoncelli model utilizes four large sets of features to generate novel texture images from a specified target. The first of these feature sets is a series of first-order constraints on the pixel intensity distribution derived from the target texture. The mean luminance, variance, kurtosis, and skew of the target are imposed on the new image, as well as the range of the pixel values. The skew and kurtosis of the low-resolution version of the image produced by the pyramid decomposition is also included here. Second, the local autocorrelation of the target image’s low-pass counterparts in the pyramid decomposition is measured and matched in the new image. Third, the correlation between neighboring filter magnitudes is measured. This set of statistics includes neighbors in space, orientation, and scale. Finally, cross-scale phase statistics are matched between the old and new images. This is a measure of the dominant local relative phase between coefficients within a sub-band, and their neighbors in the immediately larger scale.
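The first-order marginal constraints are straightforward to compute directly. A sketch using SciPy (the function name and dictionary keys here are illustrative, not the model's own notation):

```python
import numpy as np
from scipy import stats

def marginal_stats(img):
    """First-order pixel statistics of the kind the model imposes:
    mean, variance, skew, kurtosis, and the range of pixel values."""
    px = np.asarray(img, dtype=float).ravel()
    return {
        "mean": px.mean(),
        "variance": px.var(),
        "skew": stats.skew(px),
        "kurtosis": stats.kurtosis(px, fisher=False),  # Pearson convention
        "min": px.min(),
        "max": px.max(),
    }
```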
Since the Portilla–Simoncelli algorithm includes measurements of feature correlations across scale, orientation, and spatial location, it is capable of successfully capturing extended contours and large-scale inhomogeneities in the target. For our purposes, we use default settings of the original implementation of the model as described in the authors’ initial report (a MATLAB implementation of this code is available online at http://www.cns.nyu.edu/~lcv/texture/).
We conclude this section with a figure depicting the synthetic texture resulting from applying each of these models to the same target texture (Fig. 1). There are clearly dramatic differences in synthesis quality. In our classification experiment, we determine whether these clear differences in synthesis ability translate into comparable differences in the ability of each feature set to serve as the basis for predicting human judgments of texture similarity.
We continue by describing our method for obtaining clusters of similar-looking textures from human observers.
We characterized the perceptual similarity of a large group of natural textures via a set of clusters obtained from human observers during a sorting task. These clusters are used to assign labels to the textures comprising each group, which allowed us to evaluate each set of candidate features in terms of classification accuracy in a leave-one-out cross-validation test. Here we describe our method for obtaining these clusters. We carried out our analysis using both unaltered, positive images of natural textures and contrast-negated versions of the same images.
We recruited 48 subjects to participate in this task from the MIT undergraduate community. Subjects ranged in age from 18 to 40 years, and all reported normal or corrected-to-normal vision. All subjects were naïve to the purpose of the experiment. Twenty-four volunteers contributed to the sorting of the positive texture images, and the remaining 24 volunteers contributed to the sorting of the negative texture images. Within each group of 24 participants, 16 people carried out the initial sorting of “training images,” with the remaining eight people carrying out the classification of “test images.” This two-step process made the initial sorting task more tractable for our participants while still providing category data for a large number of textures.
One hundred and twelve textures taken from the Brodatz texture collection were used in this experiment. Thirty of these textures were set aside to serve as the “training images”; they have been used in previous studies of high-level texture similarity because they represent a convenient cross-section of the full Brodatz collection (Fig. 1). The original 8.5 in × 11 in images were scanned at a resolution of 72 pixels/inch. For easy handling, these were reduced in size to approximately 2 in × 3 in and printed on white cardstock at 1200 dots per inch. For computational analysis, a 512 × 512 pixel patch was taken from the center of each original image. Subjects carried out the sorting task on a large table under fluorescent illumination.
The Brodatz images employed in Experiment 1 were contrast-reversed in MATLAB. Original pixel values ranged from 0 to 255, and negation was accomplished by subtracting each original value from 255. Images of some of the textures used in the sorting task are displayed in Fig. 2.
Sixteen observers in each group (positive and negative textures) were given the stack of 30 “Training” cards and asked to form groups of similar-looking textures. They were told to use any visual criterion that they felt was relevant, but to refrain as much as possible from grouping textures together according to object identity. For example, a picture of tree bark viewed up close should not necessarily be put in the same group as a picture of a copse of trees just because both images contain tree parts. Rather, we suggested that subjects put textures into the same group only if they felt there was a true visual similarity between the images within a group. Subjects were free to form as many groups as they wished, and were free to take as much time as necessary to complete their sorting. Subjects typically completed the task in 20 min or less.
The groups of textures created by our observers were used to construct two matrices of pooled pair-wise texture similarity ratings, one for the positive textures and one for the negative textures. In each case, a 30 × 30 matrix of zeros was initialized. Then, for each group of textures formed by a subject, a “1” was placed in the ith row and jth column of the matrix if textures i and j were both present in the group. The binary matrices obtained from all subjects were summed, resulting in a similarity matrix where a high value at entry i, j indicated strong similarity between textures i and j. Each texture was considered to always be in a group with itself, so each entry along the main diagonal of this matrix was a “16” (the maximum possible similarity score across our 16 subjects).
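The pooling procedure can be sketched as follows, assuming each subject's sort is represented as a list of groups, with each group a list of texture indices (the function name is ours):

```python
import numpy as np

def pooled_similarity(sorts, n_textures):
    """Sum per-subject co-membership matrices into one similarity matrix.

    `sorts` holds one sort per subject; each sort is a list of groups,
    and each group a list of texture indices. Because every texture
    appears in exactly one group per subject, the diagonal ends up
    equal to the number of subjects.
    """
    sim = np.zeros((n_textures, n_textures), dtype=int)
    for groups in sorts:
        for group in groups:
            for i in group:
                for j in group:
                    sim[i, j] += 1
    return sim
```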
Since we needed to translate similarity ratings into distances, we subtracted each entry in this matrix from 16. Given this dissimilarity matrix, we positioned our 30 textures into a “psychological” similarity space through the use of a classical multi-dimensional scaling (MDS) algorithm .
Typically, MDS is used to recover the dimensions along which points are organized. However, we did not make any use of the resulting axes, other than to provide a means for positioning each texture in a space that respects the “distances” calculated across our subjects’ groupings. We used three dimensions to characterize these textures, following the example of Rao and Lohse . The subsequent clustering procedure was not highly sensitive to the number of included dimensions, however, meaning that this parameter does not likely require fine tuning.
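Classical MDS reduces to an eigendecomposition of the double-centered squared-dissimilarity matrix. A minimal NumPy sketch (illustrative only; any standard MDS implementation would serve):

```python
import numpy as np

def classical_mds(dissim, n_dims=3):
    """Embed points in n_dims dimensions from a symmetric dissimilarity matrix.

    Double-center the squared dissimilarities to recover a Gram matrix,
    then use the top eigenvectors (scaled by the square root of their
    eigenvalues) as coordinates.
    """
    d2 = np.asarray(dissim, dtype=float) ** 2
    n = d2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ d2 @ J                      # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_dims]      # keep the largest n_dims
    vals = np.clip(vals[idx], 0, None)         # clip small negative eigenvalues
    return vecs[:, idx] * np.sqrt(vals)
```

When the dissimilarities are exactly Euclidean, this recovers the original configuration up to rotation and reflection.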
We continued by using k-means clustering to discover clumps of textures that are similar to each other, as quantified by their position in the 3-D texture space defined by MDS. The k-means algorithm requires that the user specify a value of k, the number of clusters to be discovered in the data. Since there is no a priori way to know how many clusters we should be looking for in either case, we first conducted a simple graphical analysis of the scree plots constructed from clustering solutions on both groups of textures using values of k between 2 and 30. Graphs such as these are often used to determine intrinsic model complexity (see Ref. for some interesting examples). Based on these scree plots, a 7-cluster solution was selected for each group (see Figs. 3 and 4).
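A scree analysis of this kind can be reproduced by plotting within-cluster sum of squares (inertia) against k and looking for an elbow. A sketch using scikit-learn (our choice for illustration, not the original tooling):

```python
from sklearn.cluster import KMeans

def kmeans_scree(points, k_values, seed=0):
    """Within-cluster sum of squares (inertia) for each candidate k.

    Returns a dict mapping k to the inertia of the best of 10 restarts;
    plotting these values against k gives the scree curve.
    """
    return {
        k: KMeans(n_clusters=k, n_init=10, random_state=seed)
               .fit(points).inertia_
        for k in k_values
    }
```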
Why recover clusters rather than attempt to model the dissimilarity matrix itself? While this is possible in principle, we prefer the clustering analysis for several reasons. First, the dissimilarity matrix only represents the pooled data from multiple observers’ categorization judgments. Since individual subjects only provided categories, not paired similarity ratings, we restrict our models to the same judgment. Second, most pairs of textures will be highly dissimilar. Using categories rather than the full dissimilarity matrix allows us to ignore a very large number of texture comparisons that are likely not meaningful to the human observer. Finally, to model the paired similarity of all the textures in our database would require an intractably large number of trials for our observers. By using the similarity space generated by MDS solely for extracting clusters, we are able to rapidly collect category labels for all of our images. We continue by describing how the remaining textures in our database were categorized into the groups defined by training image sorting.
After clustering the training images, our next step was to determine how observers would assign the remaining “test” textures to these categories based on visual similarity. Volunteers that participated in this second portion of our task were presented with the “training images” laid out in their respective clusters. They were then given the remaining 82 “test images” and asked to place each new texture in front of the cluster they felt it best belonged to. New textures were placed face down to stop subjects from forming a “running average” of the clusters they were augmenting. If participants felt that a given texture could not be reasonably assigned to any cluster, they could discard the image without providing an assignment.
For each “test image” in the positive and negative groups, a cluster label was assigned according to the majority of observers’ votes. Images were discarded from further analysis if there was no majority across votes, or if the majority response was “no category.” Two textures were removed from both the positive and negative groups as a result. The full lists of cluster assignments for the positive and negative cases are listed in Tables 1 and 2.
Having obtained category labels for our full set of 110 textures, we used these labels as ground truth data for testing the efficacy of the features used in the three texture synthesis models discussed earlier. We continue by describing our classification procedure.
To compare the utility of each feature set, we used the labeled texture images to carry out multi-class classification. We chose to use relatively simple classifiers for this experiment, both because we have no a priori way to assess what methods would be most appropriate for this problem domain and also since we are particularly interested in relative model performance rather than absolute accuracy.
All of our candidate feature sets have very high dimensionality and substantial feature redundancy, so we first carried out principal components analysis (PCA) on the feature vectors obtained from each synthesis model. Classification was then carried out given these low-dimensional embeddings of the texture features. This is commonly done in many computer vision tasks, such as face recognition . At test, we also parametrically varied the number of principal components used to represent the data in each model to determine the relationship between accuracy and dimensionality for all models under consideration.
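The compression step can be sketched as follows, again using scikit-learn for illustration; the helper additionally reports the fraction of variance the retained components explain:

```python
from sklearn.decomposition import PCA

def compress_features(features, n_components):
    """Project feature vectors (one row per texture) onto their first
    n_components principal components; also return the total fraction
    of variance those components explain."""
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(features)
    return reduced, pca.explained_variance_ratio_.sum()
```

Varying `n_components` over a range of values reproduces the accuracy-versus-dimensionality analysis described above.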
For each set of features, we measured leave-one-out classification accuracy using discriminant analysis. For each test case, we determined the a posteriori probability P(Cj|x) that the test point ‘x’ belongs to the cluster Cj for all j clusters. We computed this probability by combining the likelihood term P(x|Cj) with the prior probability P(Cj) using Bayes’ Law.
The likelihood P(x|Cj) was calculated for each test point by fitting a multivariate normal probability density function to each cluster in feature space. Formally, the probability that a point at location x came from the cluster Cj is as follows:
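(The display equation is reconstructed here as the standard multivariate normal density, consistent with the definitions of μ and Σ given below.)

$$ P(\mathbf{x} \mid C_j) = \frac{1}{(2\pi)^{N/2}\,\lvert\Sigma\rvert^{1/2}} \exp\!\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\mathsf{T}}\,\Sigma^{-1}\,(\mathbf{x}-\boldsymbol{\mu})\right) $$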
where μ is the centroid of the cluster Cj and Σ is an N × N covariance estimate for that cluster. We calculated the covariance matrix in three distinct ways, described below:
The prior term P(Cj) was set by counting the number of training points belonging to each cluster. Cluster membership of each test point was decided according to a MAP rule, in which the cluster with the highest posterior probability was selected.
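Putting the likelihood and prior together, the MAP rule can be sketched with SciPy. This simplified version estimates a full covariance per cluster (only one of the variants used in the study) and adds a small ridge of our own to keep the matrix invertible:

```python
import numpy as np
from scipy.stats import multivariate_normal

def map_classify(x, clusters):
    """Assign test point x to the cluster with the highest posterior.

    `clusters` is a list of arrays, one per cluster, each holding that
    cluster's training points as rows. Priors come from cluster sizes;
    likelihoods come from a multivariate normal fit to each cluster.
    """
    n_total = sum(len(pts) for pts in clusters)
    posteriors = []
    for pts in clusters:
        mu = pts.mean(axis=0)
        # Per-cluster covariance, ridge-regularized for invertibility.
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(pts.shape[1])
        likelihood = multivariate_normal.pdf(x, mean=mu, cov=cov)
        posteriors.append(likelihood * len(pts) / n_total)
    return int(np.argmax(posteriors))
```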
For each synthesis model, we used leave-one-out accuracy as our measure of categorization performance. The earlier distinction between “training” and “test” images is no longer used. Instead, we categorized each texture using the labeled data from the remaining 109 images. Given the complexity of the task, our first question was whether or not any of these models performs better than expected by chance (~14% for a 7AFC task). Second, we asked whether any model proved better than the others at predicting human judgments. Our initial hypothesis was that texture synthesis models that are capable of synthesizing a wide range of textures with high quality should perform better than those that are very limited. Thus, we expected the Portilla–Simoncelli model to do best, followed by Heeger–Bergen and the power-spectrum model in that order. We also varied the number of principal components used to represent the data from 1 to 100. This allowed us to see how robust each feature set was to compression. Finally, using the positive and negative texture groups allowed us to investigate whether texture negation results in the formation of texture groups that are more amenable to classification via low-level image statistics. The results of this analysis for both the positive and negative textures are presented in Fig. 5.
First, examining the data from the positive textures we can see that all of our models perform better than chance. It is also apparent that the power-spectrum model does best, followed by the Portilla–Simoncelli model and lastly, the Heeger–Bergen model. In terms of how compression affects categorization, none of our models improve much when more than about 15 principal components are included. In particular, the power-spectrum model reaches its peak performance with a relatively small number of PCs.
Second, examining the performance with negative textures, we see that our hypothesis concerning overall improvement in performance following negation is not supported by these results. In general, we can see that accuracy is not appreciably different between the positive and the negative case. If anything, negative classification performance is a bit lower. While contrast negation substantially altered observers’ groupings, the groupings did not change in a way that made classification with the features used here any easier. Also, there are no clear differences in model performance in this case. Overall, we conclude that negation has profound effects on human similarity judgments, but not on the efficacy of these features to approximate those judgments.
Finally, performance is not substantially different across our three classifiers. Incorporating more detailed measurements of feature covariance is not helpful in these data sets, suggesting that mislabeled data points may generally be quite distant from their ‘parent’ density, or overlap substantially with other clusters.
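One way to illustrate the classifier comparison above is to contrast a linear discriminant, which pools covariance across classes, with a quadratic discriminant, which estimates a covariance matrix per class. The sketch below uses synthetic stand-in data, not the texture features themselves; the point is only the structural difference between the two discriminants.

```python
# Compare a pooled-covariance (linear) discriminant with a
# per-class-covariance (quadratic) discriminant on toy data.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Three well-separated Gaussian classes in 5 dimensions.
X = np.vstack([rng.normal(loc=c, size=(30, 5)) for c in range(3)])
y = np.repeat(np.arange(3), 30)

for clf in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 2))
```

When the per-class densities share a common shape, as in this toy example, the extra covariance parameters of the quadratic discriminant buy little, which is consistent with the pattern reported here.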
Human subjects bring a whole host of visual and cognitive processes to bear when assigning textures to clusters based on similarity, and so the level of performance achieved by our models is very encouraging. It is likely that even the most complete model of spatial vision would prove inadequate for modeling all the intricacies of these judgments, so being able to accurately predict human judgments at this level with such simple methods is remarkable. More importantly, we can see differences in performance between our three candidate models. This indicates that our reformulation of texture similarity as a multi-class categorization task is a useful methodology for comparing different feature sets. In particular, the result that the power-spectrum model is best able to match human judgments in the positive case is surprising. Of the three models proposed, the power-spectrum model is the least capable of successfully reconstructing an arbitrary target texture. Our basic hypothesis that successful synthesis should imply success at estimating similarity is contradicted. The superiority of the power-spectrum model in this experiment demonstrates that better classification performance does not depend on more accurate reconstruction of the image.
Finally, how might we achieve better performance? We could attempt to use a more powerful classifier than the discriminants we employed here. The use of a support-vector machine with a polynomial or Gaussian kernel, for example, might increase the accuracy of all of our models. However, before employing more complex methods for any one model in isolation, it may also prove useful to consider combining information from all three. The simplest way to do this is to combine the measurements made by all three models into one large input matrix. However, there are more sophisticated methods for combining a set of weak classifiers into a far more powerful classifier. This is commonly referred to as “boosting,” and has proven successful for a range of classification tasks, including face recognition.
Though the categorization judgments made by these three parametric models might be combined in some way to achieve better performance, it is likely that future efforts will also benefit from considering a wider range of features. For example, finding features that reflect the constraints imposed in non-parametric texture synthesis models may prove extremely useful. Fragment-based representations in particular have proven useful in object recognition. Defining “informative” fragments for a category of similar textures is likely to be quite challenging, but may be very powerful. We hope that by providing the category labels we obtained from our sorting task, other researchers will be encouraged to attempt modeling these data by other means.
The current results provide some interesting insights into the perception of similarity in natural textures. First of all, we have demonstrated that texture similarity can be conceived as a multi-class categorization task. This makes it possible to apply many established computer vision and statistical learning techniques to the problem of predicting perceived similarity between pairs of textures. Classification accuracy is a simple quantitative means of judging how well a particular model approximates human judgments. We have further demonstrated that it is possible to make meaningful comparisons between different feature sets using this approach. Second, our results are rather surprising in that the poorest model of texture synthesis provides the best approximation to human data. While the power spectrum of a texture is a relatively coarse measurement of image structure, it provides the most predictive power of any of our models. Finally, we have shown that when images are negated, human similarity judgments are substantially different, though the relative performance of the models we considered was not greatly affected. This may provide an important constraint to guide the search for more effective feature sets.
The author would like to thank Ann Breckencamp for all of her assistance in constructing stimuli and collecting similarity judgments for use here. Also, the observations and advice of Ted Adelson, Erin Conwell, Aude Oliva, Ruth Rosenholtz, Richard Russell, and Pawan Sinha proved invaluable. BJB is supported by a National Defense Science and Engineering Graduate fellowship.
BENJAMIN BALAS is currently a Ph.D. student in MIT’s Department of Brain and Cognitive Sciences, working in collaboration with Pawan Sinha on psychophysical and computational studies of high-level recognition. His recent work focuses on finding novel features for representing objects and textures for recognition, with particular interest in dynamic stimuli.