Our interest in spatial visualization stems from applied research on ultrasound guidance of common surgical interventions, such as placing peripheral catheters or performing breast biopsies. Ultrasound imaging works on the same principle as sonar: Narrow beams of high-frequency sound waves are emitted sequentially from a transducer in a fan-shaped pattern to interrogate a region of space. When the sound waves encounter a sound-reflecting object, echoes return to the transducer with a delay that indicates the object's range. These signals are converted into a 2-D image, which shows the spatial locations of sound-reflecting objects in the scanned slice of the environment.
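The delay-to-range conversion described above can be sketched in a few lines of code. The function name and the nominal speed of sound in soft tissue (about 1540 m/s) are illustrative assumptions, not details of any particular scanner.

```python
# Illustrative sketch of the sonar principle: a round-trip echo
# delay determines the depth of a reflector.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, nominal value for soft tissue (assumption)

def echo_delay_to_depth(delay_s: float) -> float:
    """Return reflector depth in metres for a round-trip echo delay in seconds.

    The pulse travels to the reflector and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_SOUND_TISSUE * delay_s / 2.0

# A reflector about 3 cm deep returns an echo after roughly 39 microseconds:
depth_m = echo_delay_to_depth(39e-6)
print(round(depth_m * 100, 2))  # depth expressed in cm
```

Sequencing many such beams across the fan-shaped pattern, each yielding a column of echo ranges, produces the 2-D slice image.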
Typically, the slice is displayed on a local monitor, with important consequences: Several aspects of the display force the user to invoke cognitive mediation to map the ultrasound data to locations in external space. First, because the visual display cannot physically occupy the space in front of the transducer, it is displaced from the region being scanned. Second, the axes of the displayed image are aligned only with the transducer's frame of reference; they do not correspond to any fixed frame in external space. Specifically, the vertical axis of the image generally runs along the central ultrasound beam, while the horizontal axis is transverse to that beam. Hence, as the transducer is moved or tilted, the image on the screen corresponds to different locations and/or orientations. Third, the scale of the display is variable. The user can zoom to enlarge or reduce the screen image, with objective size indicated by a scale on the monitor. Although rescaling is convenient for viewing objects, it means that there is no consistent correspondence between size in display space and size in external space. Collectively, these factors mean that to build a representation of target location from the ultrasound image, the user must invoke spatial cognitive processes such as mental rotation, translation, and rescaling. This constitutes visualization, not direct perception.
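The mental operations just listed (rescaling, rotation, and translation) amount to a coordinate transform, which the user of a conventional display must perform cognitively. A minimal 2-D sketch follows; the function and parameter names are hypothetical, and a real system would track the transducer's full 3-D pose rather than a single tilt angle.

```python
import math

def image_to_world(px, py, metres_per_pixel, tilt_rad, probe_x, probe_y):
    """Map an image point (in pixels) to external 2-D coordinates.

    metres_per_pixel: current zoom setting of the display (variable scale)
    tilt_rad:         transducer orientation relative to the external frame
    probe_x, probe_y: transducer position in the external frame
    """
    # Rescale: undo the variable zoom of the monitor.
    x = px * metres_per_pixel
    y = py * metres_per_pixel
    # Rotate: image axes follow the transducer, not any fixed external frame.
    xr = x * math.cos(tilt_rad) - y * math.sin(tilt_rad)
    yr = x * math.sin(tilt_rad) + y * math.cos(tilt_rad)
    # Translate: the display is displaced from the region being scanned.
    return xr + probe_x, yr + probe_y
```

Because the scale, tilt, and position all change as the operator zooms and moves the probe, this mapping must be continually recomputed, which is precisely the cognitive burden the argument identifies.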
We have contrasted cognitively mediated and perceptually directed action by comparing conventional ultrasound to a novel ultrasound display that employs “augmented reality” to portray hidden targets at their physical 3-D locations (Stetten & Chib, 2001). This device uses a tool familiar to psychologists, the half-silvered mirror, to project an ultrasound image directly into the area being scanned (Fig. 1). A small video display is embedded in the handle of an ultrasound transducer, and a half-silvered mirror is mounted above its shaft. Light rays from the display that strike the mirror are reflected back toward the eyes of the user, providing normal visual distance cues, including convergence, accommodation, and stereopsis. This line of sight creates the same percept as if the rays came from the space imaged by the transducer itself. Because light rays also pass directly through the mirror (the defining property of half-silvering), the user sees not only the reflected ultrasound image but also the real world in front of the transducer.
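The optical geometry can be made concrete with a small sketch of reflection across a plane. The function and its arguments are hypothetical illustrations: each display point acquires a virtual image at its mirror reflection, and the device positions the display and mirror so that these reflections fall in the scanned region.

```python
def reflect_across_mirror(point, mirror_point, unit_normal):
    """Reflect a 3-D point across a plane representing the half-silvered mirror.

    point:        (x, y, z) of a pixel on the embedded display
    mirror_point: any (x, y, z) lying on the mirror plane
    unit_normal:  unit-length normal vector of the mirror plane
    """
    # Signed distance from the point to the mirror plane.
    d = sum((p - m) * n for p, m, n in zip(point, mirror_point, unit_normal))
    # The virtual image lies twice that distance away, on the far side.
    return tuple(p - 2.0 * d * n for p, n in zip(point, unit_normal))
```

In this simplified picture, the mirror maps the flat-panel image point by point onto the scan plane, which is why the reflected rays carry the same convergence, accommodation, and stereopsis cues as light from the scanned space itself.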
Fig. 1 The augmented-reality ultrasound visualization device. The ultrasound image on the flat-panel display is reflected in a half-silvered mirror and projected to the user's eyes as if it comes from the target area on the patient's body.
The result of the augmented-reality display is an illusion in which the ultrasound image appears to float within the space that is being scanned, at the precise location of the imaged data. We call this form of viewing in-situ, in contrast to the ex-situ visualization of the image on a conventional ultrasound display. In-situ viewing allows a representation of the location of ultrasonically detected targets to be encoded with basic perceptual processes; ex-situ visualization requires a cognitive overlay.