Weakly electric fish use active electrolocation for object detection and orientation in their environment, even in complete darkness. The African mormyrid Gnathonemus petersii can detect object parameters such as material, size, shape, and distance. Here, we tested whether individuals of this species can learn to identify 3-dimensional objects independently of the training conditions and independently of the object's position in space (rotation-invariance; size-constancy). Individual G. petersii were trained in a two-alternative forced-choice procedure to electrically discriminate between a 3-dimensional object (S+) and several alternative objects (S−). Fish were then tested on whether they could identify the S+ among novel objects and whether single components of the S+ were sufficient for recognition. Size-constancy was investigated by presenting the S+ together with a larger version at different distances. Rotation-invariance was tested by rotating the S+ and/or S− in 3D. Our results show that electrolocating G. petersii could (1) recognize an object independently of the S− used during training; when only single components of a complex S+ were offered, recognition of the S+ was affected to a degree that depended on which part was used. (2) Object size was detected independently of object distance, i.e. the fish showed size-constancy. (3) The majority of the fish tested recognized their S+ even if it was rotated in space, i.e. these fish showed rotation-invariance. (4) Object recognition was restricted to the near field around the fish and failed when objects were moved more than about 4cm away from the animals. Our results indicate that even in complete darkness our G. petersii were capable of complex 3-dimensional scene perception using active electrolocation.
Perception and context-independent recognition in a 3-dimensional environment call for complex neural computations. The hallmarks of these can be seen in the ability of some visually oriented animals to learn to identify 3-dimensional objects irrespective of distance-related changes of the retinal image size, i.e. size-constancy (Leibowitz, 1971; Douglas et al., 1988; Sawamura et al., 2005; Arnold et al., 2008), or of an object's rotation in space, called rotation-invariance (Vanrie et al., 2002; Spetch and Friedman, 2003; Köhler et al., 2005; Corballis et al., 2006; Kourtzi and DiCarlo, 2006). These abilities suggest that the animals establish some form of perspective-independent representation of an object's visual image in their brains, which allows object recognition even if the retinal image of the object has changed strongly. Here we tested whether similar aptitudes can be found in the non-visual object recognition of weakly electric fishes, which use active electrolocation for object detection.
Weakly electric fishes generate electrical fields around their bodies by emitting electric signals (electric organ discharges, EODs) with a specialized electric organ. The waveform and duration of single EODs are usually constant and changes can occur only slowly over several days (Carlson et al., 2000), while the EOD discharge rate depends on the behavioral context (von der Emde, 1992; Moller, 1995; Carlson, 2002). If an object is present near the fish, it causes distortions of the electrical field lines, which change the voltage pattern on the skin of the animal opposite the object. The changed pattern is detected by electroreceptor organs located all over the fish's skin. The local modulation of the electric field at an area of the skin caused by an object is called the ‘electric image’ of an object (Rasnow and Bower, 1997; Caputi et al., 1998; von der Emde et al., 1998; Migliaro et al., 2005). The detection and analysis of objects based on such images is called ‘active electrolocation’ (Lissmann and Machin, 1958; Bastian, 1994; von der Emde, 2006).
In mormyrid electric fishes, electric images are characterized by a center-surround (‘Mexican hat’) spatial profile (Caputi et al., 1998). For example, a good conductor produces an electric image with a center region where the local EOD amplitude increases, surrounded by a rim area where the amplitude decreases. The image of a non-conductor has the opposite appearance. As in vision, an object projects an image onto the sensory surface of the fish, which is sampled by the electroreceptor organs much as the retinal image is sampled by the rods and cones. However, in contrast to retinal images, the geometrical relations of the object are not preserved in the electrical image. In the case of the retina, the brain can sample the geometry of the retinal image and thereby obtain shape information about the object directly. This is not possible for the electrical image. Because there is no focusing mechanism, electric images are always blurred, or ‘out of focus’, and in this respect are fundamentally different from optical images. In addition, there is no one-to-one relationship between spatial object properties and image shape: electrical images are always strongly distorted compared to an optical projection of a 3-D object onto a 2-D surface. While optical images are mainly determined by an object's geometrical features, such as shape and size, electric images also depend strongly on parameters such as material properties, object depth, location along the fish's body, distance, and many more (Caputi and Budelli, 2006).
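The center-surround profile described above can be illustrated with a difference-of-Gaussians, a common idealization of such profiles; the function and all parameter values below are illustrative assumptions of ours, not measurements from the study:

```python
import math

def mexican_hat(x, a_center=1.0, s_center=1.0, a_surround=0.4, s_surround=2.5):
    """Difference-of-Gaussians idealization of the center-surround
    ('Mexican hat') amplitude profile of the electric image of a good
    conductor. x: position on the skin relative to the image center."""
    center = a_center * math.exp(-x ** 2 / (2 * s_center ** 2))
    surround = a_surround * math.exp(-x ** 2 / (2 * s_surround ** 2))
    return center - surround  # positive center, negative rim

profile = [mexican_hat(0.5 * i) for i in range(-10, 11)]
print(max(profile) > 0 and min(profile) < 0)  # → True
```

For a non-conductor, which has the opposite image polarity, the sign of the returned value would simply be flipped.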
Despite these problems, the weakly electric mormyrid fish Gnathonemus petersii is remarkably adept at recognizing objects of 3-dimensional spatial complexity during active electrolocation. When trained in two-alternative forced-choice (2AFC) experiments to discriminate between two objects, G. petersii can perceive object parameters such as volume, material, 3-dimensional shape, distance, size, and possibly many more (von der Emde and Fetz, 2007). These experiments revealed that the fish learn to pay attention to the relative differences between the two objects they have to discriminate. The results suggested that fish are able to link and assemble local features of an electrolocation pattern to construct a representation of an object, indicating the presence of feature extraction mechanisms by which the fish can solve complex object recognition tasks.
How flexible is this feature extraction mechanism, and does it allow object identification even if the electric image of an object has changed because the object has moved to a different location or is rotated in space? It was claimed that visual rotation-invariance requires complex neural computations, including in some cases mental rotation (Vanrie et al., 2002), even though simpler solutions, such as attending to characteristic aspects of the object, are also possible (Köhler et al., 2005; Eisenegger et al., 2007). Likewise, for size constancy the animal has to recognize the size of an object irrespective of its distance and the corresponding changes of the size of the electric image (Douglas et al., 1988). Because active electrolocation, in contrast to vision, lacks mechanisms to focus environmental images, phenomena like size-constancy, if present, are probably based on very different neural computations. In this study, we tested whether electric fish can learn to identify 3-dimensional objects independently of the training conditions and independently of the object's position in space, including rotational changes. We further explored how far in space the ability to recognize 3-dimensional objects extends.
Ten G. petersii (standard length: 12–15cm) were used in the experiments. The animals were kept in individual tanks (75×42×40cm³), which were also used for training and tests. The water temperature was 26±1°C, water conductivity was 100±5μS/cm, and the light–dark cycle was set to 12h:12h.
Each experimental tank was divided into two compartments (40×40cm² and 35×40cm²) by a plastic partition, which contained two gates (9×10cm², Figure 1). The smaller compartment was used as the living area, containing water plants and a plastic cylinder as a shelter. The second compartment was the experimental area. Behind each of the two gates an object was placed in such a way that the fish had to pass it to access the experimental area (Table 1). In some experiments a grid was placed between the gates and the object (see below). The distances between the objects and the gates were determined by using a scale (1mm resolution) placed under the transparent bottom of the experimental area.
Most experiments were conducted under a dim illumination of the experimental tanks by the room lights [<65lx, measured at the water surface with a spectrometer (International Light, Peabody, MA, USA, Model: RPS900R)]. Under these conditions G. petersii does not use vision for object discrimination but rather relies on active electrolocation (e.g. von der Emde and Fetz, 2007). However, some experiments were conducted in total darkness in order to test whether vision can contribute to object discrimination (see below).
The fish were trained in a food-rewarded 2AFC procedure to pass through the gate behind which a rewarded object (S+) was placed, and to avoid the gate with a non-reinforced negative object (S−). Animals had to discriminate between S+ and S−, which differed either in shape or in size (Table 1). Some fish were trained with a fixed pair of objects (S+/S−). Other fish were trained with a constant S+ and several different S− (see below). Before each trial, the S+ and S− were placed behind the left and right gates according to a pseudorandom schedule (Gellermann, 1933). Opening both gates simultaneously started a trial. Typically, the fish then swam towards the partition and inspected both objects before passing through one of the gates. Since the behavior of the fish during object inspection was identical to that already described in detail in von der Emde and Fetz (2007), we abstain from providing another description in this paper. The choice of the correct gate, i.e. the gate with the S+, was rewarded with a chironomid larva. After eating the reward, the fish had to swim back into the living compartment. False choices of a gate immediately resulted in chasing the fish back to the living compartment. Once the fish had returned to this compartment, the gates were closed and a new trial was prepared by the experimenter. On average, 40 trials per session (one session per day, 5 days a week) were conducted per fish.
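Gellermann-style schedules constrain left/right sequences so that the rewarded side is not predictable; a minimal sketch of generating such a schedule, assuming the standard constraints of equal left/right counts and no more than three identical sides in a row (function name and parameters are our own):

```python
import random

def gellermann_like_schedule(n_trials=40, max_run=3, seed=0):
    """Draw a left/right schedule with equal counts per side and no more
    than `max_run` identical sides in a row, by rejection sampling."""
    rng = random.Random(seed)
    half = n_trials // 2
    while True:
        seq = ['L'] * half + ['R'] * (n_trials - half)
        rng.shuffle(seq)
        run, ok = 1, True
        for prev, cur in zip(seq, seq[1:]):
            run = run + 1 if prev == cur else 1
            if run > max_run:
                ok = False
                break
        if ok:
            return seq

print(''.join(gellermann_like_schedule()))
```

Gellermann's original sequences impose a few further constraints (e.g. equal side frequencies within each half of a session), which could be added as extra rejection criteria in the same loop.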
Test trials were interspersed between the training trials once a fish performed with at least 70% correct choices on three consecutive days. Test trials were similar to learning trials, except that animals were neither rewarded nor punished, in order to avoid further learning and to test the preferences of the fish for the test objects based on previous learning. At the beginning of testing, a single test trial was interspersed into a session only after every third or fourth training trial. When a fish became more experienced with the procedures and learning was established more firmly, every second trial of a session was a test trial. This duty cycle of rewarded and unrewarded trials reduced frustration of the animals to a minimum and thus ensured high discrimination performance and motivation.
In all training and test trials we recorded which gate the fish passed. In addition, the latency, defined as the time from the moment the gates were opened until the fish had passed one of the gates with its whole body length, was scored.
In these experiments, two animals were trained to discriminate between a fixed S+ (‘little man’, consisting of a cone with a sphere on top) and six different S− (a small and a big cube, a small and a big cylinder, a small and a big prism). Objects were made of fired clay and varnished to seal them against water, thus giving them a high resistance similar to that of a stone. During training, the S− was exchanged randomly from trial to trial, including during the learning trials. This paradigm was chosen to force the fish to discriminate between objects based solely on the S+ rather than by avoiding a specific S−.
Test trials consequently assessed the preference of the fish for their S+ in comparison to all individual S− used during training. Three previously unpresented objects (donut, hexagon, and sector) were also tested. In a second series of test trials, several S− were presented together with either the cone or the sphere that constitute the two components of the S+. These tests aimed to elucidate whether the fish had learned specific parts of the complex S+ rather than taking the complex shape in toto as the rewarded stimulus.
One G. petersii was trained to discriminate between a small (2×2×2cm³, S+) and a large (3×3×3cm³, S−) metal cube. During training, the S+ was presented at a distance of 2cm and the S− at a distance of 3cm from the gates. In the following tests, the distances of both objects (measured to the object's edge facing the fish) from the gates were varied independently and the performance of the fish in recognizing the S+ was measured.
In order to test whether the fish are able to recognize previously learned objects after they had been rotated in space, four fish were trained with different combinations of objects. Fish 1 was trained to discriminate between ‘little man’ and several different S−, fish 3 was trained to discriminate between a pyramid (S+) and a cube (S−), fish 4 was trained to discriminate between an object shaped like the letter A (S+) and an object shaped like a mushroom (S−), and fish 5 was trained to discriminate between a cone (S+) and a pyramid (S−). Rotation of objects was achieved by attaching two nylon threads (thin fishing line) to each object (S+ and S−). The ends of these threads were connected to a wooden platform above the experimental area such that each object was dangling from its strings. By adjusting the length of the strings it was possible to present the object at a certain angular position. Most objects were rotated in steps of 45° around a horizontal axis running perpendicular to the dividing wall through the centroid of the objects. In three cases, when a cube or a pyramid was used, these objects were rotated around a horizontal axis through the centroid running parallel to the dividing wall. This resulted in the objects’ tips facing the approaching fish (see symbols under the right 90° columns in Figure 5B). The performance of the fish was tested at each rotation angle.
These experiments were performed with two animals in order to explore up to which distance object identification is possible. One fish was trained to discriminate between a metal pyramid (S+) and a metal cube. The second fish had to discriminate between a small (S+) and a large metal cube. Both objects were presented at equal distances from the gates, and this distance was varied from 1 to 8cm. In order to restrict the minimal distance between the fish and the objects, a widely perforated plastic mesh grid (10×13cm²) was placed at a distance of 0.5cm behind each gate in front of the object. This grid prevented the fish from swimming closer to the object before making a decision. In these experiments, object distance was taken as the distance of the object from the grid.
In order to test whether vision plays a role during object discrimination, several control experiments were conducted. To test for the influence of light when active electrolocation was impaired by objects that project only very weak electric images, one fish was trained to discriminate between a clay pyramid (S+) and a clay cube (S−) of similar volume. These objects were presented in two versions; one set of objects was varnished, sealing them against water, while a second set of objects remained unpainted. The unpainted objects were soaked with aquarium water and thus had a lower resistance than their varnished counterparts and therefore were more difficult to detect through active electrolocation by the fish. Distance measurements as described in Section ‘Distance Measurements’ were conducted either with a set of varnished or a set of unpainted clay objects. Other control experiments were performed to test whether vision augmented the phenomena of size constancy and rotational invariance.
In all control experiments, the discrimination ability between the same pairs of objects was tested both in complete darkness and with the lights on (<65lx). During the dark conditions (≪1lx visible light), the fish most likely could not see the objects and hence could only use active electrolocation. In order to monitor the behavior of the fish, the aquarium was illuminated by infrared light (>880nm, Elbex ELIR 1385/30), which is invisible to G. petersii (Ciali et al., 1997). The fish was observed with an infrared-sensitive video camera (DCR-PC120E, Sony Corporation, Japan) and visualized on a TV screen.
Tests for the significance of the differences between the choice frequencies obtained in test experiments and the results expected under random choice conditions (50%) were conducted using the Chi-square test (*P<0.05). Sigmoidal fits were obtained using Origin 7.0. Latency measurements were compared using ANOVA or the Kruskal–Wallis test, depending on their distribution.
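The chi-square comparison against the 50% chance level can be sketched as a goodness-of-fit test with one degree of freedom; the helper function below is our own illustration, not the analysis pipeline used in the study:

```python
def chi_square_vs_chance(n_correct, n_trials):
    """Chi-square goodness-of-fit statistic for observed correct/incorrect
    choice counts against the 50% level expected under random choice
    (1 degree of freedom); also returns whether P < 0.05."""
    expected = n_trials / 2.0
    observed = (n_correct, n_trials - n_correct)
    stat = sum((o - expected) ** 2 / expected for o in observed)
    return stat, stat > 3.841  # 3.841 = critical value for 1 df at P = 0.05

# e.g. 30 correct choices out of 40 unrewarded test trials (75%)
stat, significant = chi_square_vs_chance(30, 40)
print(stat, significant)  # → 10.0 True
```

With this criterion, roughly 26 or more correct choices out of 40 trials are needed before a preference is judged significantly different from chance.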
The experiments were carried out in accordance with the guidelines of the German Government and the University of Bonn for Animal Welfare and with the ‘Principles of animal care’, publication No. 86-23, revised 1985, of the US National Institutes of Health.
Fish 1 was trained to discriminate between a single S+, which consisted of a cone with a sphere on top (‘little man’), and six different S−. During training, the S− was randomly chosen from the six differently shaped objects, so that it was not predictable for the fish which S− would be presented as an alternative to the S+ in a given trial. It took the fish about 3 months of training to learn this task, which is considerably longer than in training experiments with just a single S− (training of less than 4 weeks; von der Emde and Fetz, 2007).
When the fish had reached the learning criterion of more than 70% correct choices, test trials were conducted. During these, the fish preferred its S+ no matter which of the S− was offered as an alternative (light grey columns in Figure 2). This preference for S+ was not changed when two previously unused and thus unfamiliar objects were presented with the S+ (dark grey columns in Figure 2). The fish kept its preference for the S+ with a similar percentage as in trials with a familiar S−.
In order to test whether the fish recognized the S+ based on specific components or only if the whole object was present, we offered components of the S+ (the cone as the lower part or the sphere as the upper part of ‘little man’) versus either of two novel objects. In addition, we tested two pyramids joined at their tips, in order to simulate the thin middle part of ‘little man’, against another new object. When the cone was offered, choice frequency dropped compared to the original S+ and the fish chose it in 70 or 59% (depending on the S−) of the trials (white columns in Figure 3B). In the case of the sphere (light grey columns in Figure 3B), the choice frequencies almost reached the values obtained with the original S+. For the two pyramids (dark column in Figure 3B), choice performance dropped again, but was still significantly different from chance level.
In order to test for size constancy, fish 2 was trained to discriminate between a small (S+) and a large (S−) cube. During training, the S+ was placed at a distance of 2cm and the S− at a distance of 3cm from the gates. Thus we traded off size against distance in such a way that the larger object produced a less intense image than it would have produced at the same distance as the smaller S+. This was necessary to guarantee that the fish discriminated the objects based on their physical size rather than based on the intensity of the electric images they cast on the fish's skin. After the fish had learned the training task, test trials were conducted during which the distances of both cubes were varied independently.
Figure 4A shows that the fish chose its S+ (the small cube) no matter which distance combination was offered. With increasing inter-object distances the performance decreased, but remained significant up to an object distance of 4cm. These results show that the fish always recognized the smaller object irrespective of the size of the electric image and the actual peak amplitudes in the image center. To make sure that the size of the electric image projected by the two objects varied strongly enough during our experiments, we measured the image sizes of the large and small cubes at distances of 1 and 2cm from the fish's skin. At 1cm distance, the horizontal image diameter of the small cube was 45.5±2.5mm and that of the large cube 48.2±3.7mm. The image diameter of the small cube increased to 55.8±3.6mm when it was moved to a distance of 2cm. This means that relying on image size would not have been a successful strategy for our fish in these experiments. Measurements of the peak amplitudes occurring in the center of the electric images in the presence of the large and the small cubes gave a similar picture. Moving the large cube from 1 to 2cm distance reduced the maximal amplitude modulation from 8.3 to 3.7%. Since the small cube evokes an amplitude modulation of 8.1% at 1cm and 2.6% at 2cm, the fish cannot rely on amplitude cues alone to solve the recognition task. Taken together, these results indicate the presence of size constancy in active electrolocation.
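The ambiguity of the amplitude cue can be made explicit with the modulation values reported above; a small sanity check (the dictionary layout is ours, the percentages are those given in the text):

```python
# measured peak amplitude modulations (%) in the electric-image center,
# keyed by (cube, distance in cm); values as reported in the text
modulation = {
    ('small', 1): 8.1, ('small', 2): 2.6,
    ('large', 1): 8.3, ('large', 2): 3.7,
}

# the small cube at 1cm modulates almost as strongly as the large cube
# at 1cm ...
assert abs(modulation[('small', 1)] - modulation[('large', 1)]) < 0.5
# ... and the large cube at 2cm falls between the small cube's two values,
# so peak amplitude alone does not identify the object
assert modulation[('small', 2)] < modulation[('large', 2)] < modulation[('small', 1)]
print('peak amplitude alone is ambiguous across distances')
```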
Control experiments testing for the influence of light (Figure 4B) revealed that the performance of the fish was almost identical when the lights were turned off. This means that vision did not augment active electrolocation during the size constancy experiments.
Several test series with rotated objects were conducted with four individual G. petersii, which were trained to discriminate between different object combinations. In all cases, fish were tested with rotated objects after they had successfully learned to discriminate between non-rotated objects.
Fish 1, which was trained initially to discriminate between the non-rotated ‘little man’ and several different S−, was also tested with rotated versions of its S+. Figure 5A shows that this fish still identified its S+ correctly at all rotation angles. Mean latencies of choosing between objects were always between 2.8 and 3.5s, and did not vary significantly with rotation angle (Figure 5A).
Fish 3 was trained to discriminate between a non-rotated pyramid (S+) and a non-rotated cube (S−). This fish was then tested with rotated versions of S+ and of S−. For each rotation angle of S+, the S− was presented at three different rotation angles. As Figure 5B shows, rotation of neither S+ nor S− had an influence on the choice frequencies of this fish. The fish always recognized its S+ in at least 80% of the cases. Similar to the data from fish 1, choice latencies did not vary systematically with rotation angle (Figure 5B). Control experiments conducted with fish 3 tested whether vision influenced object discrimination when rotated or non-rotated objects were used. In both cases, there was no difference in choice performance when the lights were turned off and the fish had to rely only on active electrolocation (Figure 5C).
Fish 4 (Figure 6A) was trained to discriminate between a non-rotated object shaped like the letter A (S+) and a non-rotated object shaped like a mushroom (S−). As in the experiments with fish 3, both objects were rotated and the fish had to discriminate between many combinations of rotation angles. Figure 6A shows that rotation did not influence choice performance or choice latency.
Fish 5 (Figure 6B) was trained to discriminate between a non-rotated cone (S+) and a non-rotated pyramid (S−). With these relatively similar objects, rotation proved to have an influence on choice performance in some cases. When only the S+ was rotated and not the S− (light grey columns in Figure 6B), fish 5 had no problems choosing the S+ with an accuracy of over 80%. However, in the opposite case, i.e. when only the S− was rotated, choice performance dropped until the fish chose the non-rotated cone in only 30% of the trials when the pyramid was rotated by 180° (dark grey columns in Figure 6B). When both objects were rotated by the same amount (medium grey columns in Figure 6B), choice performance was in between the other two cases. Despite the strong drop in choice performance in some cases, mean choice latencies did not vary systematically throughout the rotation experiment.
Up to what distance can G. petersii recognize an object and discriminate it from a differently shaped object? In order to answer this question, two fish had to discriminate between two objects, which were placed at different distances from their gates. Here, as in all other experiments, distance was defined as the distance between the gate and the edge of the object closest to the gate. Fish 2 had learned to discriminate between a small (S+) and a large cube (see also Figure 4). When the objects were placed at a distance of 2 or 3cm from their gates, choice performance was over 70% correct choices. However, at a distance of 4cm, performance dropped to 60% correct choices and then approached chance level at still larger distances (Figure 7A). The distance threshold, determined by fitting an exponential function to the data and measuring where it crossed the 70% line, was 3.9cm for fish 2. These results correspond to the results depicted in Figure 4, where performance also was reduced at an object distance of 4cm.
Fish 3, trained to discriminate between a non-rotated pyramid (S+) and a cube, gave very similar results to fish 2. Its performance was almost constant up to a distance of 3cm. At a distance of 4cm, however, discrimination ability dropped to about 65% and reached chance level at 5cm distance (Figure 7B). The distance threshold of this fish was 3.9cm.
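The threshold determination can be sketched by locating where performance crosses the 70% criterion; the data values below are illustrative rather than the measured ones, and linear interpolation stands in for the exponential fit used in the study:

```python
def distance_threshold(distances_cm, percent_correct, criterion=70.0):
    """Estimate where performance crosses the criterion level by linear
    interpolation between neighboring data points (a simple stand-in for
    the exponential fit used in the study)."""
    points = list(zip(distances_cm, percent_correct))
    for (d0, p0), (d1, p1) in zip(points, points[1:]):
        if p0 >= criterion > p1:  # first crossing from above to below
            return d0 + (p0 - criterion) * (d1 - d0) / (p0 - p1)
    return None

# illustrative values resembling the shape of Figure 7 (not measured data)
distances = [1, 2, 3, 4, 5, 6]
performance = [95, 90, 85, 60, 52, 50]
print(round(distance_threshold(distances, performance), 2))  # → 3.6
```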
During all experiments reported so far, the lights in the experimental room were turned on, and the fish might have used their eyes to discriminate between the objects. In order to test whether object discrimination was also possible in darkness, we conducted distance tests with fish 3 under light conditions and in complete darkness (infrared light only). Figure 8A shows that the performance of the fish was very similar under either condition. Even though there was no significant difference in the distance up to which discrimination was possible in light and dark conditions, there is a slight indication that at larger distances the fish performed slightly better when the lights were off (dark columns in Figure 8A).
In order to be detectable through active electrolocation, the impedance of an object has to differ from that of the surrounding water. To test whether object detection under light and dark conditions is still possible with objects that differ only little from the water, we constructed objects made of clay that were not varnished and therefore were soaked with aquarium water. The impedance of these objects was therefore more similar to that of the water than that of the varnished objects. Measurements revealed that the non-varnished objects produced an amplitude change in the electric image that was reduced by at least 50% compared to that caused by the varnished objects. When the fish was tested with such objects in darkness, its choice performance at a distance of 3cm was under 60% correct detections (dark columns in Figure 8B). When the lights were on, the same objects became visible to the fish, and at a distance of 3cm it could discriminate between the objects with just above 70% correct choices (light grey columns in Figure 8B).
Object recognition is of fundamental importance to most animals. While many animals, including humans, rely on their visual system for this task, nocturnally active animals have to use other senses (e.g. Burt de Perera, 2004). Weakly electric fishes mainly employ active electrolocation for orientation, foraging and many other tasks, during which they have to classify and identify a variety of objects. Invariant object recognition is only one prerequisite for successful interaction with the environment. A fish also needs to assess an object's position, size and relative rotational angle. In this study we show that the African pulse-fish G. petersii is able to do so, even when objects are encountered at previously unknown angles or distances. Object recognition through active electrolocation thus can be compared to visual object recognition of other fishes in several respects (Douglas et al., 1988; Ross and Plug, 1998; Schuster et al., 2004; Firzlaff et al., 2007).
Our results further substantiate the notion that active electrolocation is a near-field recognition system. Our animals could recognize object shape and size only up to a distance of about 4cm (Figures 4, 7, and 8). This is considerably shorter than what has been found for pure object detection, which might be possible up to a distance of one fish length (Moller, 1995; von der Emde et al., 2008). When trained in a distance discrimination task, G. petersii could judge distance differences of objects up to about 10cm away from the fish (von der Emde et al., 1998). Apparently, pure detection requires less amplitude change and fewer fine details of the electric images than the analysis of an object's shape as in the present study. However, despite the fact that perception of object features is only possible when the fish is quite close to the target, the fish can move around their environment at considerable speed. To a human observer watching G. petersii with an infrared camera in complete darkness, the animals appear no less agile than a fish orienting visually under light conditions.
Recently, the visual system of G. petersii has received some attention (Ulbricht et al., 2003; Wagner, 2007; Landsberger et al., 2008). African mormyrids have unusual eyes adapted to low light intensities in turbid environments. However, they might not use their eyes for object inspection and object recognition, as has been suggested by several authors (e.g. von der Emde et al., 2008). Over the past years, G. petersii and other mormyrid species were trained in several studies to detect and analyze objects. In all of these studies, control experiments revealed that the animals only used electrolocation and not vision to solve their tasks, even when light was present (reviewed in Landsberger et al., 2008; von der Emde et al., 2008). In this study, too, it turned out that object discrimination was not improved when vision was possible (Figures 4B, 5C, and 8). Our control experiments revealed that both size constancy (Figure 4B) and rotational invariance (Figure 5B) are most likely based only on active electrolocation and not on vision. When varnished clay objects were used, which resembled natural objects such as stones in their electrical properties, the fish's performance was even a little worse under light conditions than in darkness (Figure 8A). Similar results have been found in several previous studies (e.g. von der Emde and Fetz, 2007). Vision can only supplement electrolocation under extraordinary circumstances, for example when objects have to be detected that, like our unvarnished clay objects, project only very weak electric images onto the electroreceptive skin surface of the fish. Under these conditions, vision can provide a slight improvement of object recognition (Figure 8B). Size constancy and rotational invariance, however, apparently are not ‘extraordinary circumstances’: they are based only on active electrolocation and are not augmented by vision.
Former experiments have shown that when learning to discriminate between two objects in a forced-choice procedure, G. petersii not only learn to choose a particular S+ but they also learn to avoid the S− (von der Emde and Fetz, 2007). Fish paid attention to the relative differences between the two objects they had to discriminate. Apparently, fish were able to quantitatively determine several object features, such as shape, volume, material, and others, and to place each object into a multidimensional perceptual space. Choice behavior was determined by the overall perceptual distance of each object from the stored representations of S+ and S− in this space (Davison, 1983; von der Emde and Ronacher, 1994). Apparently, some object features were given more weight by the animals (volume, material) than others (shape). In addition, some parameters were spontaneously judged as negative (large volume, metal) by the fish, i.e. objects with these parameters were rejected in comparison to other objects. In contrast, other features were deemed positive (plastic, shape of S+) and the fish tended to prefer objects with these properties sometimes even without training. Positive or negative assignments depended on training, but also on existing, maybe inborn, preferences and aversions (von der Emde and Fetz, 2007).
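The choice rule described above can be sketched as a toy computation. The feature axes, weights, and numerical values below are purely illustrative assumptions, not measurements from this or the cited studies: each object is treated as a point in a weighted perceptual space, and the choice favors the candidate that lies closer to the stored representation of S+ and farther from that of S−.

```python
import numpy as np

# Illustrative feature axes and weights (assumptions, not data):
# volume and material are weighted more strongly than shape.
FEATURES = ["volume", "material", "shape"]
WEIGHTS = np.array([1.0, 1.0, 0.4])

def perceptual_distance(obj, reference):
    """Weighted Euclidean distance between two feature vectors."""
    return np.sqrt(np.sum(WEIGHTS * (obj - reference) ** 2))

def choose(candidate_a, candidate_b, s_plus, s_minus):
    """Pick the candidate whose overall position favors S+ over S-."""
    def score(obj):
        # Closer to S+ and farther from S- gives a lower (better) score.
        return perceptual_distance(obj, s_plus) - perceptual_distance(obj, s_minus)
    return "A" if score(candidate_a) < score(candidate_b) else "B"

s_plus  = np.array([0.2, 0.8, 0.5])    # stored representation of S+
s_minus = np.array([0.9, 0.1, 0.5])    # stored representation of S-
novel_1 = np.array([0.25, 0.75, 0.9])  # resembles S+ except in shape
novel_2 = np.array([0.85, 0.15, 0.5])  # resembles S-

print(choose(novel_1, novel_2, s_plus, s_minus))  # -> A
```

Because the shape axis carries less weight, the first novel object is still chosen even though its shape value disagrees with S+, mirroring the observation that some features dominate the decision.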
In the present study, we tested whether fish could also learn to choose an object irrespective of the alternative objects used during training. Training took considerably longer compared to the former studies (3 months versus 5–24 days; von der Emde and Fetz, 2007), because the fish had to memorize only one object's features (the S+) and did not, as in the former experiments, also attend to features of the S−. Immediate tests revealed performances of just over 70% correct (Figure 2), which improved to over 85% correct after an additional month of training (Figure 3A). With this type of training the fish recognized the S+ independently of the S−, because choice performance was very similar even when novel objects were offered that had not been used during training (Figure 2).
When recognizing S+, the fish use mainly prominent features of the learned object. When single parts of ‘little man’ were offered together with novel objects, the fish chose the upper part of S+ (sphere) at almost the same frequency as the complete S+ (Figure 3B). However, the lower part of S+ (cone) was also effective to some degree, because it still was preferred significantly over novel objects (Figure 3B). Thus, during learning the fish extract and memorize particular features out of several possible cues that are present in the learned stimulus and use them later for recognition. It will be interesting to investigate this in follow-up studies, since it indicates that these animals do not use a simplistic template-matching mechanism; rather, fish might classify complex objects based on specific parts and evaluate an object against an alternative one based on the relative match of the weighted sum of several specific parts, as suggested by the results shown in Figure 3B.
In the visual system, the effect of object size on object recognition and underlying neural substrates has been investigated in detail by several authors (for a review see, e.g. Logothetis and Sheinberg, 1996). Neurons in the inferior temporal cortex, for example, can exhibit object-size invariant responses (Ito et al., 1995). This is surprising, since objects of the same size can produce images of very different physical dimensions on the retina, when presented at different distances. Apparently, the human visual system can take viewing distance into account, when judging the size of an object (Arnold et al., 2008). Also many animals, including fishes, can visually judge the absolute size of objects regardless of changes in viewing distance and thus despite the resulting dramatic difference in the size of the retinal images (Douglas et al., 1988; Ross and Plug, 1998; Schuster et al., 2004).
Similar to the retinal image during vision, the size of the electric image that an object projects onto the electroreceptive surface of the animal changes with distance. In contrast to retinal images, however, the size of an electric image increases at larger distances (Caputi and Budelli, 1995; von der Emde et al., 1998). In addition to the increased image size, the peak amplitude in the center of the electric image decreases when an object moves away from the fish. Peak amplitude is an ambiguous cue, since it also depends on the resistance of the object (low-resistance objects cause higher amplitudes) and on the size of the object (larger objects cause higher amplitudes). Thus, neither the width nor the peak amplitude of an electric image alone is an object- or distance-invariant cue.
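This ambiguity can be made concrete with a deliberately crude toy model. The scaling laws below are assumptions chosen only to reproduce the qualitative trends stated above (amplitude rises with size and contrast but falls with distance; width grows with distance); they are not the measured image physics:

```python
# Toy electric-image model (illustrative assumption, not measured physics).
def peak_amplitude(a, chi, d):
    """Peak image amplitude for object size a, electrical contrast chi,
    and distance d: grows with size/contrast, falls steeply with distance."""
    return chi * a**3 / d**3

def image_width(a, d):
    """Image width: grows with both object size and distance."""
    return a + 2.0 * d

# A small, close object and a large, distant one can produce the same
# peak amplitude -> amplitude alone cannot identify the object.
print(peak_amplitude(a=1.0, chi=1.0, d=1.0))  # 1.0
print(peak_amplitude(a=2.0, chi=1.0, d=2.0))  # 1.0

# Width is equally ambiguous: it confounds size and distance.
print(image_width(a=1.0, d=1.0))  # 3.0
print(image_width(a=2.0, d=0.5))  # 3.0
```

In this sketch, no single measurement separates a small nearby object from a large distant one; only a combination of cues (or an independent distance estimate) can.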
Previous studies have shown that the fish have a tendency to spontaneously avoid large and low resistance objects, both of which cause strong amplitude changes within their electric images (von der Emde and Fetz, 2007). When training G. petersii to discriminate between two objects, the animals tend towards using amplitude as their primary cue for object discrimination. In the present study, in order to overcome this tendency and to tempt the fish into using object size rather than amplitude for discrimination between the large and the small cube, we placed the large cube 1cm further away than the small cube during training. This reduced the amplitude in the image center of the large cube and helped to ensure proper analysis of both objects by the fish. As a result, they used cube size rather than only amplitude for discrimination.
When during our tests the distances of the small and the large cubes were varied independently, neither size nor amplitude of the image could serve as a cue for recognizing the S+ (small cube). For the fish, both parameters changed unpredictably, with the small cube producing a larger or a smaller image (of higher or lower amplitude) than the large cube, depending on their relative distances. This was confirmed by measurements of image size and of maximal image amplitude. When close to the fish at 1cm, the image of the small cube was smaller and had a smaller amplitude than the image projected by the large cube at the same distance. At a distance of 2cm, the small cube's image was larger and had a lower amplitude than that of the large cube at 1cm. However, when the large cube was moved to 2cm, its maximal image amplitude became smaller than that of the small cube at 1cm. Nevertheless, at all distance combinations, G. petersii 2 recognized the small cube independently of its distance and independently of the distance of the larger cube. This also means that it recognized the small cube independently of image size and image amplitude. It follows that up to an object distance of about 4cm, the effect of size constancy is present in active electrolocation of this individual G. petersii. This performance can only be achieved if the animal also measures the distances of both cubes and takes them into account for a decision. Former studies have shown that G. petersii can indeed measure the distance of an object during active electrolocation (von der Emde et al., 1998; Schwarz and von der Emde, 2001). It therefore has all the prerequisites for size constancy.
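The inference required for size constancy can be sketched in the same toy framework (the amplitude law is an illustrative assumption, not the measured physics): once distance is known from an independent cue, the distance-dependence of the image amplitude can be compensated to recover a constant size estimate.

```python
# Toy size-constancy computation. Assumes the illustrative amplitude
# law A = chi * a**3 / d**3 with known electrical contrast chi; the
# independent distance estimate stands in for whatever distance cue
# the fish actually uses (cf. von der Emde et al., 1998).
def peak_amplitude(a, chi, d):
    return chi * a**3 / d**3

def estimate_size(peak_amp, distance, chi=1.0):
    """Invert the amplitude law once distance is known."""
    return (peak_amp * distance**3 / chi) ** (1.0 / 3.0)

a_true = 1.5
amp_near = peak_amplitude(a_true, 1.0, 1.0)  # same cube at 1 cm ...
amp_far  = peak_amplitude(a_true, 1.0, 2.0)  # ... and at 2 cm

# Raw amplitudes differ eightfold, yet the distance-compensated
# size estimate stays constant:
print(amp_near, amp_far)
print(estimate_size(amp_near, 1.0), estimate_size(amp_far, 2.0))
```

The point of the sketch is only that a distance measurement turns the ambiguous amplitude cue into a size-constant percept, which matches the behavioral conclusion drawn above.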
Additional control experiments testing for the influence of vision on size constancy were conducted, during which fish 2 discriminated between a large and a small cube under light conditions and in complete darkness. Figure 4B shows that with and without visible light, the fish's performance was almost identical at just above 70% correct choices. Since the distances of the objects in these experiments were relatively large (3cm), the fish would have benefited from the use of vision. Because this did not occur, we conclude that in our experiments, size constancy was based solely on active electrolocation without any augmentation by vision.
During visual object recognition, rotation of an object has an influence on its recognition. Several studies with various animals, including humans, have tested for rotational invariance by using a task during which the animals had to discriminate between a visual pattern and its mirror image. Shepard and Metzler (1971) found that the time it takes humans to discriminate between the image and the mirror-image of rotated figures is linearly dependent on the angular rotation of these figures. In addition to an increased latency, the error rate also increases with the rotation angle. The decrease in performance and increase in latency might be directly related to the effects of mental rotation, a time-consuming operation performed by the brain to match a retinal input to internal, previously stored representations (Shepard and Metzler, 1971; Jolicoeur, 1985; Tarr et al., 1998). Interestingly, some animals such as pigeons (Hollard and Delius, 1982; Delius and Hollard, 1995) and in some cases rhesus monkeys (Köhler et al., 2005) are able to discriminate between image and mirror-image of rotated stimuli at a constant latency, i.e. without an angle-dependent increase in latency. In these cases, rotational invariance might not have been based on mental rotation.
Other studies using a visual detection task, during which subjects had to respond to the presence of an animal in a natural scene, showed that human performance was surprisingly rotation invariant, as reaction times were similar and accuracy remarkably stable across orientations (Guyonneau et al., 2006). These results imply that mental rotation was not involved in this form of rapid object detection. An alternative may be that subjects are instead using local combinations of features that are indicative of the presence of the S+.
In our experiments with G. petersii, rotation of the objects in all but one case (see below) did not impair recognition. Moreover, response latencies did not depend on rotation angle in any of our tests (Figures 5A,B and 6A). These results suggest that object recognition might be rotation-invariant during active electrolocation, provided that objects possess certain characteristic features. As in the case of size constancy, our control experiments in complete darkness shown in Figure 5C suggest that rotational invariance is based only on active electrolocation and not on vision.
There was one exception to the finding of rotational invariance: when a fish had to discriminate between two very similar objects, a cone and a pyramid of the same height and the same base-diameter, choice frequency was strongly impaired after rotation. Interestingly, only rotation of the S− and not of the S+ compromised choice performance. When the S+ was rotated, choice performance did not change, while in the opposite case, i.e. rotation of the S−, depending on the rotation angle, the fish chose the rotated S− instead of the non-rotated S+ (Figure 6B). This shows that the fish based its decision for the S+ (pyramid) not only on the S+ but even more so on the S−. A very similar result was found by von der Emde and Fetz (2007), who showed that in 2AFC discrimination experiments, all fish learned not only to select their S+ but also to avoid the S−. Decisions were always based on both objects, and in some cases avoidance learning had a stronger influence than positive object selection.
The results reported here indicate that rotation invariance during active electrolocation was only present when two clearly different objects were used. If the S+, and probably also the S−, had clear and distinct features, the fish were able to recognize the objects after rotation. These features may be so simple and distinct that recognition does not depend on their angular orientation in space. For example, when discriminating between ‘little man’ and its alternatives (Figure 5A), the upper part of the S+ (sphere) was distinct, and a feature like this did not occur in any of the alternative objects used. In the same manner, the pointed peak of the pyramid used in Figure 5B was so distinct from the local features of the cube that no confusion between these objects could occur even after rotation. Finally, the discrimination between the letter A and the mushroom (Figure 6A) could have worked in a similar manner. However, with the object combination of pyramid and cone, things are somewhat different. Even though the fish can learn to distinguish between the two objects, discrimination breaks down when the objects are rotated (Figure 6B). In this case, the fish might have paid attention to subtle differences between the non-rotated objects, which apparently disappeared when the objects were rotated.
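Why part-based recognition can be orientation-independent is easy to illustrate in the abstract (this is a conceptual sketch, not a model of the fish's neural computation): a raw template match on a sensory "image" fails as soon as the object is rotated, whereas a descriptor built from a distinctive feature that does not change under rotation still matches.

```python
import numpy as np

# An asymmetric 'object' on a grid (illustrative stand-in for an image).
shape = np.array([
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
])
rotated = np.rot90(shape)  # the same object, rotated 90 degrees

# Naive template matching is defeated by rotation:
print(np.array_equal(shape, rotated))  # False

def descriptor(grid):
    """A rotation-invariant descriptor: the sorted distances of filled
    cells from the shape's centroid. Rotation permutes the cells but
    leaves this multiset of distances unchanged."""
    ys, xs = np.nonzero(grid)
    cy, cx = ys.mean(), xs.mean()
    return np.sort(np.hypot(ys - cy, xs - cx))

print(np.allclose(descriptor(shape), descriptor(rotated)))  # True
```

A distinctive part like the sphere of ‘little man’ plays the role of such an invariant descriptor; two objects that differ only in subtle orientation-dependent details, like the cone and the pyramid, offer no such feature, which is consistent with the breakdown described above.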
Detection of the S+ was relatively fast, and latency did not depend systematically on the rotation angle. This suggests that mental rotation was not involved in any of our experiments; at least, any rotation-dependent differences in latency were smaller than the resolution of our behavioral scoring. These results are similar to those reported by Guyonneau et al. (2006) (see above), where detection was also very fast and independent of the orientation of the objects.
When G. petersii learn to recognize an object during active electrolocation, do they pay attention to local features, such as edges or certain parts of an object, or do they learn to recognize the object as a whole? This is a general question of sensory perception that has been addressed in the literature for various animal models, in most cases using vision for object recognition (e.g. in Dyer et al., 2005). According to the ‘feature extraction model’, the animal extracts and memorizes particular cues out of several possible cues that are present in the learned stimulus (Srinivasan, 1994; Palmeri and Gauthier, 2004). Similar to some insects orienting visually, our fishes would recognize rewarded or non-rewarded stimuli by the presence of learned cues in a novel object, even if other cues disagree with those of the trained object (Lehrer and Campan, 2005).
The results of the present study support the presence of a feature extraction model during active electrolocation in G. petersii. The results obtained when using a single S+ and several S− show that the fish recognized certain parts of ‘little man’ and used them for recognition of the whole S+ (Figure 3). However, the upper part of ‘little man’ apparently was weighted more strongly than the lower part or the waist. The fact that size constancy exists argues for recognition of an individual object by certain cues, in this case object size (Figure 4). Finally, the rotation experiments strongly support the notion that rotation-invariant object recognition is based on certain ‘simple’ parts of objects, which can be quickly recognized even after rotation (Figures 5 and 6).
Recently, important advances in modeling and measuring electric images, i.e. the local distortions of the electric field caused by simple objects, have been made (Caputi et al., 2008; Engelmann et al., 2008). Only on the basis of precise knowledge of the physical properties of such images were important behavioral experiments, for example on distance determination, possible (von der Emde et al., 1998). However, future studies are needed to better characterize the information content of electric images, including a comparison of features that animals might make use of in behavioral tasks like those described in the present study. A very challenging aspect of this approach will be to include spatial and temporal correlations in the images, since in our experiments the fish were always moving with respect to the objects investigated. It thus is likely that animals make use of spatial and/or temporal correlations when evaluating and comparing different electric images. This potentially important aspect of correlations (Borst, 2007) certainly needs to be addressed both from a network perspective, i.e. an analysis of potential connections allowing correlation extraction, and from a neurophysiological and computational point of view.
Currently, the physical properties of electric images are thought to be encoded by the somatotopic neuronal population in the primary electrosensory station of the brain, the electrosensory lateral line lobe (ELL; Bastian and Zakon, 2005; Maler, 2009). Recent experiments argue for anatomical (Bacelo et al., 2008) as well as neuronal (Metzen et al., 2008) specializations within the ELL of these fishes, which can be regarded as computationally optimal either for fine-resolution analysis of electric images or for contrast-enhancing mechanisms. Apart from the important question of how information is represented and preserved neuronally, it will be a challenge to study how mental representations of specific objects are stored and maintained. To address such questions, studies of mid- to long-term neuronal activity patterns (e.g. c-fos or two-photon imaging) could be employed to learn how the neuronal representation of an object changes over time.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This study was supported by the German Research Foundation DFG (Em 43/11-1, 2, 3) and by the European Commission (FET, ANGELS, contract 231845).