PMCCPMCCPMCC

Search tips
Search criteria 

Advanced

 
Logo of plosonePLoS OneView this ArticleSubmit to PLoSGet E-mail AlertsContact UsPublic Library of Science (PLoS)
 
PLoS One. 2010; 5(6): e10663.
Published online 2010 June 9. doi:  10.1371/journal.pone.0010663
PMCID: PMC2882939

Disambiguating Multi–Modal Scene Representations Using Perceptual Grouping Constraints

Teresa Serrano-Gotarredona, Editor

Abstract

In its early stages, the visual system suffers from a lot of ambiguity and noise that severely limits the performance of early vision algorithms. This article presents feedback mechanisms between early visual processes, such as perceptual grouping, stereopsis and depth reconstruction, that allow the system to reduce this ambiguity and improve early representation of visual information. In the first part, the article proposes a local perceptual grouping algorithm that — in addition to commonly used geometric information — makes use of a novel multi–modal measure between local edge/line features. The grouping information is then used to: 1) disambiguate stereopsis by enforcing that stereo matches preserve groups; and 2) correct the reconstruction error due to the image pixel sampling using a linear interpolation over the groups. The integration of mutual feedback between early vision processes is shown to reduce considerably ambiguity and noise without the need for global constraints.

Introduction

Both human and machine perception involve a progressive abstraction of visual information, from the raw signal provided by the eyes or the cameras towards symbolic, object–centric representations [1]. One problem endemic to visual perception is that each abstraction step requires the taking of some decision about the information, effectively interpreting it; the large amount of noise and ambiguity in the visual signal may lead to erroneous interpretations, as discussed by, e.g., Aloimonos and Shulman [2]. There exist several approaches to solve this problem. One is to design features that describe more closely the original signal, and therefore require less abstraction. However, the resulting representation only describes the appearance of image patches as well as image noise, and lacks a semantic description of shapes — useful, e.g., for grasping, robotic control, planning. Nonetheless, a large amount of work on signal processing and invariant feature descriptors [3] lead to significant progress for tasks like navigation [4] and object recognition [5]. An alternative is to extract abstract symbolic representations directly from the image. One notable attempt by Nevatia and colleagues [6], [7], makes use of a feature hierarchy for stereo reconstruction. Another notable class of systems is the model–based vision, where a large amount of world knowledge is available and is used to disambiguate and interpret the visual signal. One problem with the latter approach is that the large amount of ambiguity and noise present in images can lead an early extraction of symbolic features to fail, failures which are difficult to correct. The dilemma between those two approaches can be expressed in terms of the bias/variance dilemma in neural networks [8]. Namely, the use of sophisticated models in vision introduces more bias in the system, whereas signal based approaches lead to more variance.

In the present work, we attempt to address the above dilemma by proposing a gradual abstraction that postpones decision taking using mutual feedback between two mid–level visual processes, namely perceptual grouping and stereopsis, to reduce ambiguity and noise. Ambiguities addressed here include incorrect stereo matches and inaccurate 3D reconstructions. Moreover, properties of the local signal such as local estimates of orientation, phase and colour will also be stabilised by perceptual grouping mechanisms. This work makes use of a sparse symbolic scene representation based on multi–modal primitives [9]. In this work, the term ‘multi–modal’ stresses that the descriptors cover different visual modalities such as motion, orientation and colour; it is not meant to indicate different sensorial modalities. Primitives form a local feature vector containing multi–modal visual information covering appearance as well as geometric information, in 2D and 3D. Such multi–modal descriptors offer certain advantages for the representation of visual scenes. For example, they allow for the explicit formulation of visual semantics in terms of meaningful local descriptors and higher–order relations between them, such as motion, co–planarity and similarity of appearance (see, e.g., [10]). One property of symbolic representations is that the transfer of visual information to a symbolic level increases the predictiveness of visual events [11] and at the same time decreases the memory and bandwidth required to process and transfer information. Hence, in these representations, regularities between visual events can be efficiently used for disambiguation. Primitives–based visual representations are used in a variety of applications, covering, e.g., object learning [12] and grasping [13].

The contributions in this paper are threefold: first we propose a local perceptual grouping mechanism making full use of the multi–modal and semantic information carried by the visual primitives; second, we propose a stereo matching scheme for primitives, allowing for the reconstruction of the 3D equivalent of 2D primitives; third, we investigate how perceptual grouping reduces ambiguities in the reconstructed 3D representation. In the following, these contributions will be described in more detail and put into the context of related work.

This paper's first contribution is a perceptual grouping scheme making use of the multi–modal information carried by the primitives. Perceptual grouping can be divided into two tasks: 1) defining an affinity measure between primitives and using it to build a graph of the connectedness between primitives, and 2) extracting groups, which are the connected components of this graph. We will only define the affinity measure between primitives, and not extract the groups themselves explicitly, as we only need a primitive's local grouping information to apply the correction mechanisms proposed in this paper. Similar affinity measures have been proposed [14], [15], formalising a good continuation constraint, and Elder and Goldberg [16] included the intensity on each side of the contour into a Bayesian formulation of grouping. We go beyond this work by proposing a multi–modal similarity measure, composed of phase, colour and optical flow measurement, and combine it with a classical good continuation criterion forming a novel multi–modal definition of the affinity between primitives.

As a second contribution, this work extends the work by Krueger and Felsberg [17] by enriching the multi–modal stereo matching using local motion [18] and, more importantly, by evaluating statistically the importance of the different visual modalities for stereo matching using ground truth range data.

As a third contribution, we make use of perceptual groups of primitives to disambiguate stereo matching and correct the 3D scene reconstruction. Grouping allows for the interpolation of visual properties such as position, local orientation, phase and colour, and thus helps to improve local feature extraction. This paper studies how perceptual grouping information can be used to disambiguate stereopsis and 3D reconstruction using primitives. If we assume that image contours (2D) are likely to be the projection of 3D contours on the image, then we can expect all 3D contours to project as 2D contours on each camera plane (except in the case of partial occlusions). Conversely, this also implies that any contour in one image has a corresponding contour in the second image. We therefore propose a non–local external stereo confidence measure, which estimates how well a primitive's neighbours that belong to the same group agree with that primitive's putative stereo correspondences. This allows for discarding a large number of putative stereo correspondences, hence reducing the ambiguity of the stereo matching and scene reconstruction processes. Moreover, the interpolation of the curves described by groups of primitives is used to correct these primitives' geometric and appearance modalities.

The scheme presented in this paper is illustrated in Figure 1, where solid lines stand for forward dependencies and dashed lines for feedback mechanisms. The local symbolic representation is extracted from the images. From this representation, we extract perceptual groups (i.e., contours) and we use correspondences across a pair of stereo views of the scene to reconstruct a local and symbolic 3D representation of the scene, equivalent to the 2D image representations it is reconstructed from; this is the feedforward part of the scheme, represented with solid lines. Then, the perceptual grouping information is used to correct the 2D symbolic image description, the stereo matches, and the reconstructed 3D scene representation; this is the corrective part of the scheme, represented with dashed lines.

Figure 1
Summary of the scheme presented in this paper.

Methods

This section is structured as follows: first, the multi–modal primitives are described; second, distance measures for all modalities are proposed; third, the grouping mechanism is presented; fourth, the stereo matching scheme is discussed; then, a scheme for increasing stereo matching reliability from grouping information is described; finally, we present a scheme to correct 2D and 3D primitives' position and orientation by interpolating the curves described by groups of primitives.

2D primitives

Numerous feature detectors exist in the literature (see Mikolajczyk and Schmid [3] for a review). Any feature based approach can be divided into two complementary tasks: an interest point detector [19], [20] and a descriptor encoding information from a local patch of the image at this location, that can be based on histograms [3], [21], spatial frequency [22][24], local derivatives [25][27], steerable filters [28], or invariant moments [29]. In [3], these different descriptors have been compared, showing a best performance for SIFT–like descriptors (Scale Invariant Feature Transform [21]).

The primitives we will use in this work are local, multi–modal edge descriptors, described in Ref. [9]. In contrast to the above mentioned features, primitives focus on giving a semantically and geometrically meaningful description of the local image patch. The importance of such a semantic grounding of features for a general purpose vision front–end, and the relevance of edge–like structures for this purpose are discussed by Elder [30].

In the first step, an event map and the associated local phase are computed using the monogenic signal [31] — note that other signal processing could alternatively be used (e.g., steerable filters [28]). The 2D primitives are sparsely extracted at locations in the image that are most likely to contain events (edges or lines); these locations are detected using the local intrinsic dimension [32]. Sparseness is assured using a classical winner–take–all operation, which guarantees that the extracted primitives describe different image patches. Multi–modal information is gathered locally from the image, including the position An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e001.jpg of the centre of the patch, the orientation An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e002.jpg of the event, the phase An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e003.jpg of the signal at this point, the colour An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e004.jpg sampled over the image patch on both sides of the event, and the local optical flow An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e005.jpg computed using the classical Nagel algorithm [33] (the flow is disregarded for still images). The phase encodes the type of contrast transition across the event, e.g., dark to bright edge or dark line on bright background. See Ref. [22][24]. Consequently, a primitive is described by the multi–modal vector

equation image
(1)

The set of primitives describing an image is called image representation and written An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e007.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e008.jpg for images from the left and right camera. The image representation extracted from one image is illustrated in Figure 2. In the upper–left corner, panel A shows one image extracted from an indoor video sequence; panel B shows the result of a local filtering; and panel C shows the extracted primitives.

Figure 2
Illustration of the primitive extraction process from an indoor video sequence.

Note that these primitives are of lower dimensionality than, e.g., SIFT features (12 vs. 128) and can therefore suffer from a lesser distinctiveness (two unrelated primitives have a greater chance to have a similar aspect). Nonetheless, we will show in the results section that they are distinctive enough for a reliable stereo matching if the epipolar geometry of the cameras is known. The rich information carried by the 2D primitives can be used to reconstruct them in 3D, providing a more complete scene representation. Geometric meaning allows a description of proximate primitives in terms of perceptual grouping, as will be discussed in the following section.

Metrics of 2D primitives

In this section, we define metrics for each of the primitives' modalities. Those metrics will be used in the following for perceptual grouping of primitives and for stereo matching. Figure 3 illustrates how the distance measures defined here are combined. In the case of perceptual grouping (solid lines), proximity, collinearity and co–circularity measures between a pair of primitives are merged into a Geometric affinity, whereas the distances in phase, colour and optic flow form the Multi–modal affinity. The combination of those two form the overall affinity An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e009.jpg that is used to group 2D primitives. In the case of stereopsis (dashed lines) the orientation distance between the two primitives replaces the geometric criterion. Then the multi–modal similarity is computed from orientation, phase, colour and optic flow distances.

Figure 3
Illustration of the measures used in this paper and how they are combined.

Note that, in the context of perceptual grouping, the orientation difference is replaced with a more sensible interpretation of the good continuation constraint, combining proximity, collinearity and co–circularity; in contrast, the stereo similarity makes direct use of the orientation difference.

Orientation: If we consider two primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e010.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e011.jpg, respectively with the orientations An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e012.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e013.jpg, then their orientation distance is

equation image
(2)

The An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e015.jpg factor ensures that the orientation metric is between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e016.jpg, with 0 standing for parallel orientations, 0.5 for a 45 degrees angle and 1 for orthogonal orientations.

Phase: The phase metric An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e017.jpg is

equation image
(3)

The An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e019.jpg factor ensures that the phase metric is between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e020.jpg, with 0 standing for two primitives encoding the contrast transition (e.g., bright to dark edge), and 1 standing for opposite contrast (e.g., a dark line and a bright line).

Colour: The colour metric An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e021.jpg is

equation image
(4)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e023.jpg is defined in HSV space as

equation image
(5)

Because of the conical topology of the HSV space, the hue component An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e025.jpg is basically random for very low saturation An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e026.jpg, and saturation is random for low values of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e027.jpg. This equation discards hue information for low saturation, and saturation information for low value of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e028.jpg, and otherwise weights evenly the colour components. In Eq. 5, An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e029.jpg stands for the angular distance

equation image
(6)

and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e031.jpg (An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e032.jpg), An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e033.jpg (An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e034.jpg) and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e035.jpg (An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e036.jpg) are the hue, saturation and value components on the left (right) side of the primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e037.jpg.

Optic Flow: The optic flow An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e038.jpg metric is

equation image
(7)

Note that these metrics are the same used in Refs. [17], [18].

Perceptual grouping of 2D primitives

Since the 1930's, the Gestalt psychologists suggested a collection of axioms describing the way the human visual system binds together features in an image [34][36]. This process is generally called perceptual grouping and the Gestalt psychologists proposed that it is driven by properties like proximity, good continuation, similarity and symmetry, amongst others. More recently, psychophysical experiments measured the impact of different cues for perceptual grouping (see, e.g., Ref. [37]). Furthermore, Brunswik and Kamiya [38] postulated that these properties should be related to statistics of natural images. This was later confirmed by several studies [39][41].

We defined the primitives as local edge descriptors, and assumed that a group of primitives describes a contour in the image. The Gestalt rule of proximity implies that primitives that are closer to one another are most likely to lie on the same contour. According to the Gestalt rule of good continuation, image contours are expected to be continuous and smooth (small and constant local curvature); thus, two proximate primitives in a group are expected to be either nearly collinear, or co–circular. According to these rules, a strong inflexion in a contour will lead this contour to be described as two groups, joining at the inflection point. Furthermore, the position and orientation of primitives that are part of a group are the local tangents of the contour it describes. Finally, we would expect a contour's properties such as colour (on both sides) to change smoothly (or not at all) along this contour. This is formalised by the rule of similarity, which states that similar primitives (in terms of the colour, phase and optical flow modalities) are most likely to belong together.

The two first rules are joined into a Geometric constraint, that is combined with a multi–modal Appearance constraint into an overall affinity measure.

Geometric constraints

The first constraint we enforce during grouping stems directly from the symbolic quality of the primitives: primitives are local event descriptors and therefore, according to the good continuation law, they should be locally nearly collinear or co–circular to form a group. Effectively, we compute this constraint as a combination of proximity, collinearity and co–circularity measures.

If we consider two primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e040.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e041.jpg in An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e042.jpg, then the likelihood that they both describe the same contour An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e043.jpg can be formulated as a combination of three basic constraints on their relative position and orientation — see Figure 4.

Figure 4
Illustration of the values used for the collinearity computation.

Proximity: The proximity measure is given by

equation image
(8)

Here, An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e057.jpg stands for the radius of the primitive in pixels, and the quantity An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e058.jpg is the maximal distance between two primitives for them to be compared; more distant primitives will not be compared and therefore have a null similarity. The quantity An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e059.jpg stands for the distance (in pixels) separating the two primitives' centres. We found experimentally that An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e060.jpg proved to be a good value — i.e., grouped primitives are distant by five timed their size at most.

Collinearity: The collinearity measure is

equation image
(9)

Co–circularity: The co–circularity measure is

equation image
(10)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e063.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e064.jpg are the angles between the line joining the two primitives centres and the orientation of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e065.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e066.jpg, respectively (see figure 4).

Geometric affinity: The combination of those three criteria forms the geometric constraint:

equation image
(11)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e068.jpg is the geometric affinity between two primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e069.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e070.jpg. This affinity models the likelihood of a curve tangent to the lines defined by the two primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e071.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e072.jpg; we have An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e073.jpg for a perfect match.

Appearance constraints

Effectively, the more similar the modalities between two primitives are, the more likely are those two primitives part of the same event. Note that Elder and Goldberg [39] already proposed to use the intensity as a cue for perceptual grouping, yet here we use a combination of phase, colour, and optical flow modalities of the primitives to decide, using the value of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e074.jpg, if they describe the same event.

Appearance affinity: The appearance–based affinity is

equation image
(12)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e076.jpg is the relative weighting of the modality An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e077.jpg, with An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e078.jpg, and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e079.jpg refers to the metrics defined in equations 3, 4, and 7; the modality weights were all set to An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e080.jpg; Therefore, An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e081.jpg stands for a perfect match between two primitives. Because the geometric constraint models the relative orientation of two primitives in a manner more adapted to the problem of grouping line segments, the orientation metric is not part of the multi–modal constraint.

Overall affinity

We define this affinity from Equations (11) and (12), such that:

  1. two primitives complying poorly with the good continuation rule have an affinity close to zero; and
  2. two primitives complying with the good continuation rule, yet with strongly dissimilar modalities, will only have an average affinity.

Two primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e082.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e083.jpg form a link An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e084.jpg if they share a significant affinity (significant being set by a threshold on the overall affinity), and the confidence An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e085.jpg of this link is given by the overall affinity:

equation image
(13)

We found experimentally that applying a threshold of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e087.jpg yields a good grouping, as can be seen in Figure 5.

Figure 5
Illustration of the links extracted for different affinity thresholds.

This affinity is also a valid estimate of the likelihood for An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e091.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e092.jpg to be part of the same contour An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e093.jpg. In the following, we will consider that a link An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e094.jpg between two primitives exists if its confidence An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e095.jpg is large enough. We will call neighbourhood An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e096.jpg of a primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e097.jpg all primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e098.jpg such that An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e099.jpg is a link:

equation image
(14)

Figure 6 shows the links extracted, along with the different modal affinities. The links extracted for different thresholds An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e101.jpg on the affinity are shown in Figure 5. In the following, links are extracted only if An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e102.jpg. The lines in these figures describe strings of grouped primitives. One can see in these images that the major image contours are adequately described. This criterion is what is meant in the rest of the paper every time we refer to ‘groups’.

Figure 6
Illustration of the affinities between 2D primitives.

Stereopsis using 2D primitives

In this section, we extend the concept of multi–modal primitives to 3D: first, we define a local multi–modal matching function; then we define the 3D primitives.

Classical stereopsis [42], [43] allows for the reconstruction of 3D points from pairs of corresponding points in two stereo images. A review of stereo algorithms was presented by Brown et al. [44]. Dense two–frames stereo algorithms (i.e., matching each and every pixel in the first image with a pixel in the second) were also compared by Scharstein and Szeliski [45]. The present work differs from classical approaches insofar that symbolic multi–modal entities are matched, and reconstructed, rather than points. Although it is commonplace to use complex features (e.g., SIFT) for matching, only the locations in space are generally reconstructed, whereas the present work reconstructs a symbolic local interpretation in space. The proposed method is local and makes use of the epipolar constraint to limit the scope of the correspondence search.

If we consider a 2D primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e103.jpg in the left image An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e104.jpg, all 2D primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e105.jpg in the right image that lie nearby its epipolar line An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e106.jpg are considered as putative correspondences, written An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e107.jpg. The difference between the image coordinates of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e108.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e109.jpg is generally called the disparity. We will differentiate between the orthogonal distance from the centre of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e110.jpg to the epipolar line An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e111.jpg, called normal disparity, and the distance along this line, called tangential disparity. The normal disparity expresses how strictly the epipolar constraint is satisfied. A certain tolerance is required here due to the representation's sparseness. In the following all primitives with a normal disparity lower than An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e112.jpg times the primitives' size are considered. The tangential disparity has a direct relation with the depth of the reconstructed 3D primitive: a tangential disparity of zero means that the point is infinitely far, whereas larger disparities denote closer points.

Finally, one putative correspondence An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e113.jpg is chosen using a local winner–take–all scheme: all putative correspondences An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e114.jpg (in the right image) of a primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e115.jpg (in the left image) are competing against each other. The confidence in each of them is set to their similarity with the left primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e116.jpg, and the most similar correspondence is selected. This similarity measure is explained in the following section.

Multi–modal stereo similarity

The multi–modal distance between two primitives is defined as a linear combination of the modal distances between two primitives. This similarity is akin to the multi–modal affinity defined in Equation (12) with the addition of the orientation similarity, that is used here to replace the geometric constraint:

equation image
(15)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e118.jpg is the relative weighting of the modality An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e119.jpg, with An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e120.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e121.jpg. The performance of a winner–take–all stereo matching scheme based on this multi–modal similarity is evaluated on several stereo sequences in the results section.

Reconstruction of 3D primitives

We propose to reconstruct the 3D equivalent of a stereo pair of corresponding 2D primitives, hereafter called 3D primitives (An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e122.jpg) as encoded in the vector:

equation image
(16)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e124.jpg is the location in space, An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e125.jpg is the 3D orientation of the edge, An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e126.jpg is the phase across this edge, and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e127.jpg holds the local colour information on both sides of the contour. Figure 7 illustrates the reconstruction of a 3D primitive from a stereo pair of corresponding 2D primitives. A 2D primitive defines an image line, that back–projects as a 3D plane; the intersection between the two planes back–projected by the corresponding primitives provide a 3D line, onto which the 3D primitive lies. This line's orientation give the 3D primitive's orientation; its position is given by the intersection between the line back–projected by the first 2D primitive's position, and the plane back–projected by the corresponding 3D primitive. We refer to [46] for a complete discussion of the 3D primitives reconstruction.

Figure 7
Illustration of a 3D primitive reconstruction from a stereo pair of 2D primitives.

The reconstruction shown corresponds to a multi–modal winner–take–all matching (using equation (15)) with a similarity threshold set to An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e128.jpg.

Perceptual grouping of 3D primitives

In order to allow for reasoning in the 3D space, we extend the perceptual grouping defined for 2D primitives to the reconstructed 3D primitives.

Two 3D primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e129.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e130.jpg are linked An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e131.jpg, if and only if their projection in both image planes (respectively An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e132.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e133.jpg on the left image and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e134.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e135.jpg on the right) are linked (such that the two links An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e136.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e137.jpg both exist), according to the logical implication

equation image
(17)

This definition extends naturally the perceptual groups defined in the image domain to the 3D space.

Perceptual grouping constraints to improve stereopsis

In this section, we define a semi-global stereo matching function that is based on the expected consistency between grouping processes in the left and right image as well as the stereo matching process. We show that matching can be improved significantly by using such kind of context information. It also allows for the establishment of groups in 3D for which additional interpolation processes can be applied to further improve the precision of reconstruction.

Because the primitive–based image representation used in this work samples lines and step–edges, it carries redundant information along contours. This redundancy can be used for constraining the stereo matching problem, leading to the two following constraints:

  • (C1) Isolated primitives are likely to be unreliable: As primitives are extracted redundantly along the contours, conversely an isolated primitive is likely to be an artefact and hence isolated primitives can be neglected.
  • (C2) Stereo consistency over groups: If a set of primitives forms a contour in the first image, the correct correspondences of these primitives in the second image also form a contour (notwithstanding pathological cases).

In our representation, contour information is encoded by the link network that is the result of the perceptual grouping mechanism presented earlier; this is illustrated in Figure 8. In this figure, the orientation of the primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e139.jpg makes it the most similar (according to Equation (15)) to An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e140.jpg; hence, the stereo correspondence An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e141.jpg holds a higher confidence than, e.g., An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e142.jpg. However, the putative correspondence An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e143.jpg forms a group An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e144.jpg, thus preserving the group relation An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e145.jpg across stereo, whereas An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e146.jpg is not grouped with An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e147.jpg. Therefore, An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e148.jpg is more likely to be the true stereo correspondence of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e149.jpg.

Figure 8
The BSCE criterion.

Basic Stereo Consistency Event (BSCE)

Primitives represent local estimators of image contours; a constellation of primitives describes a contour as a whole. Such contours are consistent over stereo, with the notable exception of occlusion cases. As we have defined the likelihood for two primitives to describe the same contour as the affinity between these two primitives, we can rewrite the previous statement as:

Definition 1 Given two primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e160.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e161.jpg in the left image An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e162.jpg and their respective correspondences An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e163.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e164.jpg in the right image An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e165.jpg; if An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e166.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e167.jpg belong to the same group in An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e168.jpg, then An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e169.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e170.jpg should also be part of a group in An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e171.jpg.

The link conservation between a pair of primitives and the stereo correspondences thereof is called Basic Stereo Consistency Event (BSCE) [47]. This condition can then be used to test the validity of a stereo hypothesis. Consider a primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e172.jpg, a stereo hypothesis

equation image
(18)

and a 2D primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e174.jpg in the neighbourhood of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e175.jpg (as defined in Equation (14)), such that the two primitives share an affinity An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e176.jpg — see Equation (13). For this second primitive, a stereo correspondence An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e177.jpg with a confidence of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e178.jpg exists. We can now define an estimate of how well the stereo hypothesis An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e179.jpg reflects the BSCE by:

equation image
(19)

In other words: the BSCE between a primitive in the first image and one of its neighbours is high if they share a strong affinity and if both primitives' stereo correspondences in the second image also share a strong affinity; it is low if they share a strong affinity yet their stereo correspondences in the second image do not. This naturally extends the concept of group into the stereo domain.

Neighbourhood consistency confidence

Equation (19) tells us how a primitive's stereo correspondence is consistent with our knowledge of one of its neighbours' stereo correspondence. In this section we extend this definition to the whole primitive's neighbourhood. If we consider a primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e181.jpg and an associated stereo correspondence An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e182.jpg, we can integrate this BSCE confidence over the neighbourhood of the primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e183.jpg — as defined by Equation (14) —

equation image
(20)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e185.jpg is the size of the neighbourhood — i.e., the number of neighbours of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e186.jpg considered. We call this new confidence the external confidence in An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e187.jpg, as opposed to the internal confidence given by the multi–modal similarity between the primitives — Equation (15).

Correcting primitives using contextual knowledge

Although primitives are extracted with sub–pixel localisation, their actual accuracies vary to a large extent depending on local amounts of noise, blur and texture in the image. The primitives' position and orientation inaccuracy is amplified by stereo reconstruction [48] and can lead to large errors thereafter. Moreover, one fundamental drawback of stereo–based reconstruction of 3D shapes is that the reconstructed entities' precision decreases quickly with distance to the cameras, due to the images' finite pixel sampling [49], [50]. The symbolic quality of primitives, and groups of primitives, provides us with additional knowledge that can be used to reduce this uncertainty. Namely, groups of 3D primitives are reconstructed from pairs of 2D primitives that form a perceptual group in both stereo images, and as such, according to the grouping assumption, they describe a smooth and continuous contour of the scene (except in some pathological perspectives). This knowledge that the group as a whole should form a smooth contour can be used to correct the individual 3D primitives modalities. In this section, we propose a scheme for correcting 2D– and 3D primitives by locally interpolating the contours described by groups of primitives.

Triplets of primitives

If we consider three primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e188.jpg, An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e189.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e190.jpg, which belong to the same group, and if An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e191.jpg lies in between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e192.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e193.jpg — such that the Euclidean distances between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e194.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e195.jpg are both smaller than that between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e196.jpg — then we call An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e197.jpg a triplet. Formally,

equation image
(21)

Triplets of 3D primitives can be defined in the exact same manner in 3D space: as for the 2D case, a 3D triplet An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e199.jpg is constituted of a central primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e200.jpg linked to two supporting primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e201.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e202.jpg, such that the central primitive lies in between the two supporting primitives (i.e., the Euclidean distances between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e203.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e204.jpg are both smaller than An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e205.jpg). Formally,

equation image
(22)

These triplets are useful because it is possible to interpolate the curve between two primitives, and therefore, we can use the curve interpolated between the two supporting primitives of the triplet (An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e207.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e208.jpg) to correct the central primitive (An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e209.jpg).

Interpolation of modalities

We interpolate the curve between two (2D or 3D) primitives using Hermite polynomials [51]. These are convenient in this context as they allow for the interpolation of a curve from only two data points and the curve tangents at those points. Also, Hermite splines can be applied to interpolate 2D or 3D curves indifferently.

Position and orientation: The curve interpolated between two primitives An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e210.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e211.jpg, with positions An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e212.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e213.jpg, and local tangents (defined by the primitives' orientations) of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e214.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e215.jpg is defined as all the points An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e216.jpg in the image, with An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e217.jpg such that An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e218.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e219.jpg and

equation image
(23)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e221.jpg is the matrix formulation for the Hermite polynomials

equation image
(24)

Analogously for the orientation we have

equation image
(25)

Note that the exact same formulae are used for interpolating curves between 3D primitives, but applied to 3 dimensions instead of 2.

The other modalities are interpolated by assuming that these change linearly with An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e224.jpg between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e225.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e226.jpg:

Phase: The phase modality of the primitive interpolated for An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e227.jpg is computed as by

equation image
(26)

Colour: The colour of the interpolated primitive is computed using the following equation:

equation image
(27)

2D Primitive correction

We can then correct the extracted primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e230.jpg between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e231.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e232.jpg with the interpolated primitive An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e233.jpg. This is done for each modality An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e234.jpg using a weighted mean between the two values. For position and colour information An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e235.jpg, the corrected value An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e236.jpg is computed by

equation image
(28)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e238.jpg is the extracted modality value, An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e239.jpg is the value interpolated at An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e240.jpg between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e241.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e242.jpg, and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e243.jpg is the correction rate.

For orientation and phase An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e244.jpg, we have:

equation image
(29)

Note that in the case of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e245.jpg, we need to operate a switch of the primitive's interpretation of the orientation as defined in Ref. [9] before correcting the orientation, colour and phase.

The correction (in Equations 28 and 29) is applied for An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e247.jpg iterations, with a correction factor An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e248.jpg. This is evaluated on an artificial scene with precise 3D ground truth in the results section, and the results showed that a small number of iterations can already considerably improve accuracy.

3D primitive correction

In the 3D case, the primitives also suffer from the uncertainty that originates from the stereo matching and reconstruction processes. The 3D primitives' position in space is corrected to

equation image
(30)

and the orientation to

equation image
(31)

This correction is applied iteratively An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e251.jpg times, with a correction factor An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e252.jpg. Also in this case, the results section shows that a small number of iteration suffice to improve accuracy.

Results

This section contains an evaluation of the different mechanisms presented above. In order to evaluate the performance of the different algorithms, we used stereo video sequences generated from a high resolution images of a urban scenes, with the associated depth ground truth provided with range scanner.

The range scanner provided us with a single high–resolution image with associated range information, and therefore each pixel of the image is given by

equation image
(32)

where An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e254.jpg is the pixel's colour and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e255.jpg is the corresponding 3D point (according to the range scanner). For each image, we then define ten virtual pairs of stereo cameras with resolution An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e256.jpg, and used projective geometry to transform the original image pixels into the virtual cameras' images, then the colour of each pixel in the virtual images is linearly interpolated from the nearest 4 transformed points. The disparity between the two virtual stereo views is also linearly interpolated at all pixel positions — see Figure 9.

Figure 9
Illustration of how a sequence is generated from colour range images.

This offers realistic video sequences with an accurate 3D ground truth. Some images generated from three different range images are illustrated in Figure 10A, B and C; the dark blue areas (like the sky) correspond to where there was no range data available, and therefore the colour cannot be interpolated. No range data was available for sequence D, therefore we only have a qualitative evaluation on this sequence.

Figure 10
The four sequences on which we tested our approach.

Stereo Evaluation

We first assessed the performance of the stereo matching scheme using each modal distance individually, plus the proposed multi–modal distance. We used the sequences with ground truth in Figure 10A, B, C to evaluate quantitatively the efficiency of each measure for stereo matching. We considered that a match was correct if its disparity error with the ground truth was smaller than the 2D primitives' size — this ensures that no erroneous match is considered as correct.

Figure 11 shows the histogram distributions of the modal distances between primitive pairs satisfying the epipolar constraint — for all images in sequences A, B and C. All histograms show a separation between the distributions of correct (black) and false (white) correspondences. In the phase (Figure 11 top–right) and colour (Figure 11 bottom–left) histograms, the correct correspondences show a sharp peak at a modal distance of zero, whereas the false ones display an even distribution along all distances between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e259.jpg. In the orientation histogram (Figure 11 top–left), the large peak at zero distance for false correspondences is explainable by the presence of parallel structures in the image. Consequently, if one draws a horizontal line in the image, this line would cross parallel contours of very similar local orientation. The optical flow distribution shown in Figure 11 bottom–right has a peaked distribution centred at a distance of 0.1 for the correct correspondences, with a long tail until 0.6. The fact that the distribution peaks at 0.1 is explained by the projective difference in the optical flow between the two stereo images (the flow is likely to be similar, but not equal); this long tail is likely to be a consequence of the noisiness of optical flow data. The false correspondences also show a broad distribution around a modal distance of 0.3; the fact that the distribution is not centred at 0.5 is a consequence of statistical distributions of edges in natural images: horizontal and vertical edges are more likely, and therefore horizontal and vertical flow vectors are also more likely. In spite of this large overlap, optical flow distance is still better than chance for identifying correct stereo correspondences from erroneous ones — see ROC analysis in Fig. 12B: the optic flow curve is above the diagonal line that indicates chance performance in ROC curves. Figure 12A shows the multi–modal similarity histogram for correct and erroneous stereo matches. There is little overlap between the two distributions, showing that the multi–modal similarity is a good criterion for stereo matching.

Figure 11
Histograms of the modal distances.
Figure 12
Evaluation of the multi–modal stereo.

In order to evaluate the performance of each distance measure for the task of identifying correct stereo matches from erroneous ones, we drew the Receiver Operating Characteristic (ROC) curves for each of them. If we consider a set of putative stereo correspondences, provided that we have a distance measure for all of them and that we know from the disparity ground truth which ones are correct, it is possible to compute the ratios of correct and erroneous pairs of primitives with a distance below threshold, respectively called true and false positive rates. A ROC curve records the true positive rates against the false positive rates obtained when considering one distance measure for a sample of threshold values ranging from 0 to 1. Therefore, a random measurement would generate a nearly diagonal ROC curve, whereas a measurement that is very significant for the task would have a large area below its ROC curve. In Figure 12B, such ROC curves show the performance of the stereo matching. Each of the curves shows the performance when using each modal similarity, or the multi–modal similarity proposed in Equation (15). In this figure, we can see that the colour modality is a particularly strong discriminant for stereopsis. This is explained by the fact that the hue and saturation are sampled on each side of the edge, leading to a 4–dimensional modality (if we neglect the An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e260.jpg component and only keep the An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e261.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e262.jpg), whereas phase and orientation are only 1–dimensional and optical flow is 2–dimensional (albeit the aperture problem reduces it to one effective dimension: the normal flow). Moreover, those stereo pairs of images were interpolated from a single high–resolution image with range ground truth; thus, pixel colour is consistency is unaffected by illumination and therefore artificially high between left and right images. On the other hand the poor performance of the optic flow modality could be explained by the relative simplicity of the motion in this scene: a pure forward translation of the camera, with no moving objects. Therefore, we would expect the performance of individual modalities to vary depending on the scenario, and the robustness of the multi–modal constraint could be further enhanced by a contextual weighting. Nevertheless, in a variety of scenarios the use of a static weighting proved robust enough to obtain reliable stereopsis. These results show that (1) the similarity measures in all modalities are efficient (i.e., better than chance) indicators for stereo matching, (2) the multi–modal similarity yields a better classification.

External Confidence Threshold

In a second set of experiments, we evaluated the effect of setting a minimal threshold on the external confidence. The external confidence threshold was always applied in conjunction with a sensible threshold on the multi–modal similarity of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e263.jpg.

In Figure 13A, one can see that the correct (black) correspondences have mostly positive external confidences, while incorrect (white) ones have mainly negative values (large peak at An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e264.jpg). The small peak of correct correspondences for negative external confidence (near An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e265.jpg) is due to the few cases where most primitives on a contour have an erroneous correspondence, and therefore the few correct ones are strongly contradicted. The large values of erroneous correspondences with external confidences of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e266.jpg comes from repetitive structures in the image, that require more global considerations for disambiguation. Applying a threshold on the external confidence will remove stereo hypotheses that are inconsistent with their neighbourhood, and thus reduce the ambiguity of the stereo matching. Note that selecting a threshold of zero implies the removal of all the isolated primitives (see constraint C1) as an isolated primitive has an external confidence of zero by definition.

Figure 13
Evaluation of the external confidence.

Figure 13B shows ROC curves of the performance for varying thresholds on the multi–modal similarity. Each curve shows the performance for a different threshold (with threshold of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e267.jpg, and without threshold) applied to the external confidence prior to the ROC analysis. We can see from these results that applying a bias on the decision based on the external confidence is improving significantly the accuracy of the decision process. Depending on the type of selection process desired — very selective and reliable, or more lax, but yielding a denser set of correspondences — different thresholds can be chosen. The best overall improvement seems to be reached for a threshold of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e268.jpg over the external confidence (with a negligible difference in performance between An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e269.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e270.jpg). However, in the general case where a high reliability is required of the stereo matches, a small positive threshold of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e271.jpg is preferred (meaning discarding all primitives which are not part of a group) is preferred. Note that when a threshold is applied to the external confidence prior to the ROC analysis, the resulting curve does not reach the An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e272.jpg point of the graph. This is normal as the threshold already removes some stereo hypotheses even before the multi–modal confidence is considered.

Table 1 summarises the performance of the stereo matching scheme, with and without external confidence threshold (because the external confidence is within An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e273.jpg, a threshold of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e274.jpg is the same as no threshold at all), on all three sequences with ground truth, showing a consistent improvement in all scenes, although the actual magnitude of the improvement varies. Sequence A, for example, contains a lot of repetitive, parallel structures which the external confidence cannot help disambiguating.

Table 1
Performance of the stereopsis with and without external confidence threshold.

Figure 14 illustrates the effect qualitatively for the video sequence from Figure 10D. Figure 14a) shows the 3D primitives reconstructed with a threshold on external confidence of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e284.jpg. When comparing Figures 14A and 14B we can see that a large number of outliers has been discarded from the reconstructed 3D primitives, leading to a cleaner description of the scene.

Figure 14
Qualitative example of the effect of the external confidence threshold.

Interpolation

We evaluated the performance of the interpolation scheme, on two simple artificial sequences illustrated in Figure 15. In the case of 3D–interpolation we also evaluated the interpolation effect on the reconstructed 3D representation qualitatively. The interpolation scheme was applied for An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e289.jpg iterations, with a correction factor of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e290.jpg.

Figure 15
Illustration of the primitives extracted from two simple artificial sequences, featuring a triangle (left) and a circle (right).

2D interpolation Results

The results for localisation, orientation and phase over 10 iterations of the correction process are shown in Figure 16, for the triangle (full line) and the circle (dashed line) scenarios. The horizontal axis shows the number of iterations of the correction process and the vertical axis the mean error of the 2D primitives. Note that the error is measured in pixels for the localisation and in radians for the orientation and the phase.

Figure 16
Correction of the 2D primitives using interpolation.

This sub–pixel accuracy is naturally lower for the circle scene, which is due to the contour's curvature. As primitives are local line descriptors, they can describe curved contours but they assume low local curvature. Hence, as the sub–pixel accuracy is assuming this linear model, it is performing better with purely linear structures. Nonetheless, note that the accuracy is extremely high in both cases: less than one tenth of a pixel for the localisation and and less than one hundredth of a radian for the orientation — i.e., less than An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e291.jpg degrees.

Moreover, we note that interpolation leads to mixed results depending on the modality: we see a distinct improvement of the localisation for the triangle scene, but not for the circle scene. This is likely to be due to the use of Hermite interpolation, in two respects: first, Hermite interpolation makes use of the tangents' orientation in addition to their position; hence, the interpolated curve is sensitive to errors in orientation. Second, even if the Hermite polynomials are an efficient model for describing general curves, they do not allow a perfect interpolation of an arc; thus, interpolation at high curvature locations lead to a loss in precision. Nonetheless, the accuracy of the interpolated primitive itself is always better than the original (reconstructed by stereo).

Concerning orientation, we see a clear improvement of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e292.jpg radians for both objects (An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e293.jpg and An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e294.jpg for the triangle and circle). Phase shows a clear (although smaller) improvement in both cases; the triangle scenario sees an improvement of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e295.jpg (An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e296.jpg), whereas the circle scenario sees an improvement of An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e297.jpg (An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e298.jpg). The effect of phase correction is illustrated in Figure 17. This figure shows a detail of the primitives extracted on the circle scene; the phase is illustrated on the primitives by the green arrow, which orientation indicates the phase. In this case, horizontal indicates a full contrast edge structure, and vertical a full contrast line. Figure 17C and D show the phase before and after correction, where the dotted lines show the mean phase across the whole circle. Before correction, the phase of the central primitive differs significantly from the correct one, and it is closer to the dotted line after correction.

Figure 17
Illustration of the effect of phase correction in 2D.

3D primitives interpolation

This scheme was evaluated on the same triangle sequence as above (shown in Figure 15) and resulted in a reduction of the localisation error by An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e299.jpg; the orientation error was reduced by An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e300.jpg (see Table 2). When applying the same scheme to the circle scenario, the localisation error was reduced by An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e301.jpg; orientation error was reduced also by An external file that holds a picture, illustration, etc.
Object name is pone.0010663.e302.jpg (see Table 3 and Figure 18). Figure 19 shows the effect of this smoothing on selected details in an indoor scene.

Figure 18
Correction of 3D primitives.
Figure 19
Illustration of the effect of the correction of 3D primitives using interpolation.
Table 2
Effect of the correction process on the localisation and orientation in space of the primitives reconstructed from the triangle scenario.
Table 3
Effect of the correction process on the localisation and orientation in space of the primitives reconstructed from the circle scenario.

Discussion

In this paper, we presented several local operations on the visual primitives presented in Ref. [9], which produce a robust representation of visual scenes, some of them making use of the (still locally constrained) context.

First, we presented a simple algorithm to group primitives into contours. Contours were defined implicitly in terms of the pairwise relations between proximate 2D primitives. Note that an explicit description of the groups could easily be extracted from such an implicit definition using a variety of techniques, including: normalised [52] or average cuts [53], affinity normalisation [15], dynamic programming [54], probabilistic chaining [55], etc.

Second, we proposed to use the multi–modal similarity between 2D primitives to perform stereo matching between pairs of images. The stereo algorithm we used is purely local and therefore does not make use of global constraints (e.g., ordering constraint [56], figural continuity [57], etc.), or optimisation (e.g., dynamic programming [58], graph operations like maximal clique [59], etc.). Such global optimisations generally allow to improve significantly the performance of local stereo matching schemes, and therefore could be applied to this system to further improve the quality of stereo matching.

Third, we proposed a scheme integrating contextual information combining perceptual grouping and stereopsis to improve the reliability of the latter. The external confidence defined here is comparable to averaging over a local neighbourhood of a disparity gradient constraint along contours [60]. Also, in a similar way, Ohta and Kanade [56] proposed to apply inter–scanline consistency rules in addition to a more classical intra–scanline ordering constraint. Departing from those pixel–based constraints, the definition of the Basic Stereo Consistency Event (BSCE) allows to specify semantically which neighbours have positive and negative contributions to the confidence. It was shown that it could improve significantly the reliability of stereo matching.

Moreover, we showed that the same grouping relation can be used to interpolate contours between pairs of linked primitives. This was then used to correct primitives with the contour as interpolated from its neighbours. In 2D, we obtained a reduction by more than 30% of the orientation error, and more than 10% for the phase. When interpolating 3D primitives, we additionally found that the localisation error was reduced by more than 20%, and the orientation error by more than 15%. Therefore, this interpolation step proved to be a robust manner to improve the representation accuracy, both in 2D and 3D. Because the scheme is local, there is no a priori assumption that the whole contours comply with a certain mathematical description: we only assume that the contour is smooth between two proximate primitives, and model this using Hermite interpolation.

Finally, we showed that using such mutual feedback between mid–level, local processes allow to disambiguate them without need for additional contextual knowledge. Thereby, we provide a reliable 3D representation of the shapes in the scene that can then be used for higher level visual operations, where contextual knowledge may be available. This framework was used successfully to address a variety of robot vision tasks: e.g., grasping [13], ego–motion estimation [61], and learning of objects' shapes [12].

Acknowledgments

We thank the company RIEGL-UK Ltd. for the images with known ground truth used for sequence A, B and C.

Footnotes

Competing Interests: The authors have declared that no competing interests exist.

Funding: The work described in this paper was funded by the European projects PACOplus and IRFO. Florentin Woergoetter acknowledges funding by the Bernstein Center for Computational Neuroscience, Göttingen. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Oram M, Perrett D. Modeling visual recognition from neurobiological constraints. Neural Networks. 1994;7:945–972.
2. Aloimonos Y, Shulman D. Integration of Visual Modules — An Extension of the Marr Paradigm. 1989. Academic Press, London.
3. Mikolajczyk K, Schmid C. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2005;27:1615–1630. [PubMed]
4. Se S, Lowe D, Little J. Vision-based mobile robot localization and mapping using scale-invariant features. 2001. pp. 2051–2058. In: IEEE International Conference on Robotics and Automation. volume 2.
5. Lowe D. Object recognition from local scale-invariant features. 1999. pp. 1150–1157. In: Proceedings of the International Conference on Computer Vision (ICCV'99)
6. Mohan R, Nevatia R. Perceptual organization for scene segmentation and description. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1992;14:616–635.
7. Chung R, Nevatia R. Use of monucular groupings and occlusion analysis in a hierarchical stereo system. Computer Vision and Image Understanding. 1995;62:245–268.
8. Geman S, Bienenstock E, Doursat R. Neural networks and the bias/variance dilemma. Neural Computation. 1995;4:1–58.
9. Krüger N, Lappe M, Wörgötter F. Biologically motivated multi-modal processing of visual primitives. Interdisciplinary Journal of Artificial Intelligence and the Simulation of Behaviour, AISB Journal. 2004;1:417–427.
10. Baseski E, Pugeault N, Kalkan S, Kraft D, Wörgötter F, et al. A scene representation based on multi-modal 2d and 3d features. 2007. In: ICCV Workshop on 3D Representation for Recognition 3dRR-07.
11. König P, Krüger N. Perspectives: Symbols as self-emergent entities in an optimization process of feature extraction and predictions. Biological Cybernetics. 2006;94:325–334. [PubMed]
12. Kraft D, Pugeault N, Başeski E, Popović M, Kragic D, et al. Birth of the object: Detection of objectness and extraction of object shape through object action complexes. Special Issue on “Cognitive Humanoid Robots” of the International Journal of Humanoid Robotics. 2009;5:247–265.
13. Popović M, Kraft D, Bodenhagen L, Başeski E, Pugeault N, et al. A strategy for grasping unknown objects based on co–planarity and colour information. Robotic and Autonomous Systems accepted
14. Parent P, Zucker S. Trace inference, curvature consistency, and curve detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989;11:823–839.
15. Perona P, Freeman W. A Factorization Approach to Grouping. 1998. pp. 655–670. In: Proceedings of the 5th European Conference on Computer Vision (ECCV'98), LNCS 1406. volume 1.
16. Elder J, Goldberg R. Ecological statistics of Gestalt laws for the perceptual organization of contours. Journal of Vision. 2002;2:324–353. [PubMed]
17. Krüger N, Felsberg M. An explicit and compact coding of geometric and structural information applied to stereo matching. Pattern Recognition Letters. 2004;25:849–863.
18. Pugeault N, Krüger N. Multi–modal matching applied to stereo. Proceedings of the BMVC. 2003;2003:271–280.
19. Schmid C, Mohr R, Baukhage C. Evaluation of Interest Point Detectors. International Journal of Computer Vision. 2000;37:151–172.
20. Harris C, Stephens M. A combined corner and edge detector. 1988. pp. 147–151. In: Proceedings of the 4th Alvey Vision Conference.
21. Lowe D. Distinctive Image Features from Scale–Invariant Keypoints. International Journal of Computer Vision. 2004;60:91–110.
22. Kovesi P. Image features from phase congruency. Videre: Journal of Computer Vision Research. 1999;1:1–26.
23. Rodrigues J, du Buf J. Multi–scale keypoints in V1 and beyond: object segregation, scale selection, saliency maps and face detection. Biosystems. 2006;86:75–90. [PubMed]
24. Rodrigues J, du Buf J. Multi–scale lines and edges in V1 and beyond: brightness, object categorization and recognition, and consciousness. Biosystems. 2009;95:206–226. [PubMed]
25. Baumberg A. Reliable feature matching across widely separated views. 2000. pp. 774–781. In: Proceedings of the International Conference on Pattern Recognition.
26. Koenderink J, van Doorn A. Representation of Local Geometry in the Visual System. Biological Cybernetics. 1987;55:367–375. [PubMed]
27. Schaffalitzky F, Zisserman A. Multi–view matching for unordered image sets, or “how do I organize my holiday snaps?”. Lecture Notes in Computer Science. 2002;2350:414–431.
28. Freeman W, Adelson E. The design and use of steerable filters. IEEE transactions on Pattern Analysis and Machine Intelligence. 1991;13:891–906.
29. van Gool L, Moons T, Ungureanu D. Affine / Photometric Invariants for Planar Intensity Patterns. Lecture Notes In Computer Science. 1996;1064:642–651.
30. Elder J. Are edges incomplete? International Journal of Computer Vision. 1999;34:97–122.
31. Felsberg M, Sommer G. The monogenic signal. IEEE Transactions on Signal Processing. 2001;49:3136–3144.
32. Felsber M, Kalkan S, Krüger N. Continuous dimensionality characterization of image structure. Image and Vision Computing. 2009;27:628–636.
33. Nagel HH. On the estimation of optic flow: Relations between different approaches and some new results. Artificial Intelligence. 1987;33:299–324.
34. Koffka K. Principles of Gestalt Psychology. 1935. Lund Humphries, London.
35. Köhler K. Gestalt Psychology: An introduction to new concepts in psychology. New York: Liveright; 1947.
36. Wertheimer M, editor. Laws of Organsation in Perceptual Forms. 1935. Harcourt & Brace & Javanowitch, London.
37. Field D, Hayes A, Hess R. Contour integration by the human visual system: Evidence for a local “association field”. Vision Research. 1993;33:173–193. [PubMed]
38. Brunswik E, Kamiya J. Ecological cue–validity of ‘proximity’ and of other Gestalt factors. American Journal of Psychology. 1953;66:20–32. [PubMed]
39. Elder J, Goldberg R. Inferential reliability of contour grouping cues in natural images. Perception. 1998;27(Supplement)
40. Geisler W, Perry J, Super B, Gallogly D. Edge Co–occurrence in Natural Images Predicts Contour Grouping Performance. Vision Research. 2001;41:711–724. [PubMed]
41. Krüger N. Collinearity and parallelism are statistically significant second order relations of complex cell responses. Neural Processing Letters. 1998;8:117–129.
42. Faugeras O. Three–Dimensional Computer Vision. MIT Press; 1993.
43. Hartley R, Zisserman A. Multiple View Geometry in Computer Vision. Cambridge University Press; 2000.
44. Brown M, Burschka D, Hager G. Advances in Computational Stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2003;25:993–1008.
45. Scharstein D, Szeliski R. A taxonomy and evaluation of dense two–frame stereo correspondence algorithms. International Journal of Computer Vision. 2002;47:7–42.
46. Pugeault N. Early Cognitive Vision: Feedback Mechanisms for the Disambiguation of Early Visual Representation. 2008. Ph.D. thesis, University of Göttingen.
47. Pugeault N, Wörgötter F, Krüger N. Multi-modal scene reconstruction using perceptual grouping constraints. 2006. pp. 195–213. In: Proc. IEEE Workshop on Perceptual Organization in Computer Vision (in conjunction with CVPR'06)
48. Pugeault N, Kalkan S, Başeski E, Wörgötter F, Krüger N. Reconstruction uncertainty and 3D relations. 2008. pp. 186–193. In: Proceedings of Int. Conf. on Computer Vision Theory and Applications (VISAPP'08). volume 2.
49. Verri A, Torre V. Absolute depth estimate in stereopsis. Journal of the Optical Society of America. 1986;3
50. Wolff L. Accurate measurements of orientation from stereo using line correspondence. 1989. pp. 410–415. In: Proceedings of the IEEE Computer Vision and Pattern Recognition conference (CVPR'98)
51. Wikipedia. Cubic Hermite Spline. 2007. URL http://en.wikipedia.org/wiki/Cubic_Hermite_spline.
52. Shi J, Malik J. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000;22:888–905.
53. Sarkar S, Soundararajan P. Supervised learning of large perceptual organization: Graph spectral partitioning and learning automata. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000;22:504–525.
54. Sha'ashua A, Ullman S. Grouping contours by iterated pairing network. 1990. pp. 335–341. In: Neural Information Processing Systems (NIPS). volume 3.
55. Crevier D. A probabilistic method for extracting chains of collinear segments. Computer Vision and Image Understanding. 1999;76:36–53.
56. Ohta Y, Kanade T. Stereo by intra– and inter–scanline search using dynamic programming. IEEE transactions on Pattern Analysis and Machine Intelligence. 1985;7 [PubMed]
57. Mayhew J, Frisby J. Psychophysical and computational studies towards a theory of human stereopsis. Artificial Intelligence. 1981;17:349–385.
58. Lee SH, Leou JJ. A dynamic programming approach to line segment matching in stereo vision. Pattern Recognition. 1994;27:961–986.
59. Horaud R, Skordas T. Stereo correspondences through feature grouping and maximal cliques. IEEE transactions on Pattern Analysis and Machine Intelligence. 1989;11
60. Kim N, Bovik A. A contour–based stereo matching algorithm using disparity continuity. Pattern Recognition. 1988;21:505–514.
61. Pugeault N, Wörgötter F, Krüger N. Rigid body motion in an early cognitive vision framework. 2006. pp. 217–223. In: Proceedings of the IEEE SMC UK&RI Conference on Advances in Cybernetic Systems (AICS'06)

Articles from PLoS ONE are provided here courtesy of Public Library of Science