The presented approach concentrates on interaction rather than visualization. For visualization, VR, MIP, and MPR are used, as they are well-established techniques. Because radiologists are experienced and comfortable with reviewing a stack of 2D images, our system lets the user continue to focus on 2D views based on slices or slabs, but provides a new way of 3D navigation to arbitrarily define their position and orientation within the image stack.
As for currently deployed solutions, the use of a mouse or trackball, both limited to two degrees of freedom (DOF), is a major obstacle for navigating three-dimensional data. It forces the user to switch between different modes of interaction, eg, between one for rotating and one for translating. Defining a double- or triple-oblique slice, for example, often requires multiple steps and becomes a nonintuitive, time-consuming task. An acceptable solution should therefore utilize an input device with more than two DOF to provide maximum control when navigating through the volume image data set. But, as stated above, it should still be simple, cost-efficient, and resemble solutions currently deployed in radiology to maximize its acceptability and actual utility. For this reason, a desktop-based approach was chosen over an immersive environment. The standard (2D) computer mouse was extended to 3D with a motion-tracking sensor to allow for direct three-dimensional interaction. Beyond that, specialized hardware was avoided in favor of off-the-shelf PC technology.
The user is provided with two views, usually on two monitors. A 3D view (Fig. 1, left) shows the full data set visualized with VR, together with slices or slabs of selectable size and thickness and corresponding clip planes. For better orientation, a virtual model representing the input device and other objects, such as a volume-related coordinate system, can be added, and the edges of the slices/slabs can be color-coded. A 2D view shows the current slice or slab in plan view (Fig. 1, right). The slabs can be rendered using MIP (Fig. 2a–b), average intensity (ray sum) projection (Fig. 2c), or the standard (alpha blending) VR technique (Fig. 2d). The VR of the full data set in the 3D view primarily serves as orientation and allows the user to better assess the spatial location of a slice or slab; in the following it is referred to as the reference volume. To visualize multiple slices/slabs, the user can select the current one as a key image so that it remains in the 3D view until deleted.
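The three slab rendering modes can be sketched as follows. This is a minimal NumPy illustration, not the system's actual renderer; the function name, the intensity range, and the simple opacity transfer function used for alpha blending are assumptions for the sake of the example.

```python
import numpy as np

def composite_slab(slab, mode="mip"):
    """Collapse a slab (depth, height, width) of intensities in [0, 1]
    along the viewing axis. Illustrative sketch of the three slab
    rendering modes described in the text."""
    if mode == "mip":                      # maximum intensity projection
        return slab.max(axis=0)
    if mode == "average":                  # average intensity (ray sum) projection
        return slab.mean(axis=0)
    if mode == "vr":                       # standard front-to-back alpha blending
        out = np.zeros(slab.shape[1:])
        trans = np.ones(slab.shape[1:])    # accumulated transparency along each ray
        for layer in slab:                 # front-most layer first
            alpha = layer                  # toy transfer function: opacity = intensity
            out += trans * alpha * layer
            trans *= (1.0 - alpha)
        return out
    raise ValueError(f"unknown mode: {mode}")
```

A real renderer would apply a user-selectable transfer function for the alpha-blending case; equating opacity with intensity here only keeps the sketch short.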
Fig 1 The 3D view (left) shows the volume data set (volume rendered, here: head and neck CT scan), reconstructed slice(s) with optional clipping plane(s), and some basic orientation markers (here: volume-related coordinate system). The 2D view (right) shows the current slice or slab in plan view.
Fig 2 Exploring a head MRI data set with slabs. Maximum intensity projection in the 3D (a) and 2D view (b). 2D view of average intensity (ray sum) projection for the same slab (c). Standard (alpha blending) VR of a cube-like slab with corresponding clipping planes (d).
Figure 3 illustrates the basic interaction metaphor. A 3D mouse is used to define the position and orientation of the reference volume and of the slices or slabs. It works either as a conventional mouse (2D mode) or, when lifted from the desktop, simultaneously provides three translational and three rotational DOF, ie, six DOF (3D mode). In the latter case, the mouse's movement along the three spatial axes and its rotation around these axes are measured at the same time.
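The dual-mode behavior can be summarized in a small sketch. The type and function names are hypothetical; the text does not specify the tracker interface, only that the device reports two DOF on the desktop and six when lifted.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MouseSample:
    """One sample from the hybrid 2D/3D mouse (illustrative names).
    On the desktop only planar motion is meaningful; when lifted, the
    motion-tracking sensor additionally reports three translations and
    three rotations."""
    lifted: bool
    translation: np.ndarray   # (x, y, z) in tracker coordinates
    rotation: np.ndarray      # (rx, ry, rz) angles about the three axes

def degrees_of_freedom(sample: MouseSample) -> int:
    """Two DOF in 2D mode, six DOF once the device leaves the desktop."""
    return 6 if sample.lifted else 2
```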
Fig 3 Illustration of the basic user interaction metaphor. A 3D mouse is used to change position and orientation of slices or slabs, either in an absolute (a) or relative manner (b, e), and of the reference volume together with the slices/slabs in a relative manner (c, d).
In the 3D mode, the mouse can be used to position a slice or slab in either an absolute or a relative manner. In the first case (Fig. 3a), the mouse works like an ultrasound probe: the user examines a virtual volume situated on the desktop in a manner analogous to a real-time ultrasound examination. The reference volume is virtually located at a specific position in space, and the (virtual) location of the current slice or slab image is directly and constantly linked to the (real) location of the mouse. In the second case (Fig. 3b), the position and orientation of the slice are adjusted relative to the movement of the mouse, but only while the user is pressing a specific mouse button. While the button is held down, the slice or slab's position and orientation change according to the changes in the mouse's position and orientation. Because the absolute position and orientation of the mouse do not matter, the user can hold the mouse in any convenient position when starting to relocate the slice or slab. This relative positioning can also be applied to adjust the position and orientation of the reference volume together with all existing slices and slabs (Fig. 3c). The center of rotation is always the center of the volume or the slice/slab, respectively.
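The relative (clutched) mode amounts to applying the frame-to-frame change in the mouse pose to the slice pose while the button is held. A minimal sketch, with poses as (rotation matrix, translation) pairs and illustrative names not taken from the original system:

```python
import numpy as np

def apply_relative_motion(slice_pose, mouse_prev, mouse_now):
    """Relative positioning: while the button is held, the *change* in
    the mouse pose since the previous frame is applied to the slice.
    Each pose is a pair (R, t): R is a 3x3 rotation matrix, t a 3-vector.
    The rotation pivots about the slice's own center, as in the text,
    so only the orientation part is composed and the translation is a
    plain offset."""
    R_s, t_s = slice_pose
    R_prev, t_prev = mouse_prev
    R_now, t_now = mouse_now
    dR = R_now @ R_prev.T          # incremental mouse rotation
    dt = t_now - t_prev            # incremental mouse translation
    return dR @ R_s, t_s + dt      # rotate about the slice center, then shift
```

Because only the deltas enter, the absolute pose at which the user picks up the mouse is irrelevant, which is exactly what makes the clutched mode comfortable.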
In the 2D mode, apart from performing tasks related to the two-dimensional graphical user interface (GUI) as usual, the user can rotate the reference volume together with all existing slices and slabs with conventional mouse movements (Fig. 3d): moving the mouse forward or backward rotates the volume up or down, and moving it to the left or right rotates the volume to the left or right. In addition, the active slice/slab can be moved parallel to itself, with only one DOF, by moving the mouse forward and backward (Fig. 3e).
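The two 2D-mode mappings can be sketched as follows. The sign conventions and the sensitivity factors are assumptions; the text only fixes which mouse axis drives which motion.

```python
import numpy as np

def rotate_volume_2d(dx, dy, gain=0.01):
    """2D-mode rotation of the reference volume: forward/backward mouse
    motion (dy) tilts the volume up/down (rotation about the horizontal
    screen axis), left/right motion (dx) turns it left/right (rotation
    about the vertical axis). Returns the incremental 3x3 rotation."""
    ax, ay = gain * dy, gain * dx                  # angles in radians
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    return Ry @ Rx

def scroll_slice(center, normal, dy, step=1.0):
    """One-DOF parallel move: translate the active slice along its own
    unit normal by an amount proportional to forward/backward motion."""
    return np.asarray(center) + step * dy * np.asarray(normal)
```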
Allowing the user to position a slice or slab absolutely was motivated by the idea of adopting the well-known ultrasound paradigm to achieve an intuitive and easy-to-understand interaction. To support this (at least during the initial learning phase), the slice or slab can also be sector-shaped, like the images produced by a sector scanner, and the 2D view can simulate an ultrasound-like view (see below). In contrast, relative positioning provides a more flexible way of using the 3D mouse, potentially producing less fatigue and allowing for more accuracy in 3D navigation.
The combination of a 2D and a 3D mode allows for a smooth integration of conventional and 3D interaction, letting the user choose the most appropriate interaction mode for the current task without having to switch between input devices. For example, the user might use six DOF (lifting the mouse) to define the orientation and initial position of a slice and afterward use only one DOF (putting the mouse back on the desktop) to scroll forward and backward through slices parallel to the first one.
Figure 4 gives an example of how the user interacts with the system when defining oblique slices in the absolute manner. An ultrasound probe model represents the input device used to define the location of sector-shaped slices. The 3D view also shows two planes representing two real markers on the desktop, helping beginners understand the relationship between the virtual scene on the screen and the real-world interaction.
Fig 4 Example of how to use the system (screenshots of the 3D view, absolute mode, sector-shaped slice). The input device lies on the table (1). The user lifts the device to explore the data set (2–3). The user marks a slice as a key image and continues exploring the data set.
Several constraints can be applied to the six-DOF interaction. The active slice/slab can be automatically adjusted to the nearest orthogonal orientation, either relative to the volume coordinates to achieve axial, sagittal, or coronal orientations, or relative to an already defined key image to create additional parallel or perpendicular slices. The user can also select different clipping modes: clipping can be switched off, applied constantly to one specific side of the slice or slab, applied to both sides, or applied automatically to the side facing the viewer (ie, the part of the volume between the slice or slab and the viewpoint is removed).
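Two of these constraints lend themselves to short sketches: snapping an orientation to the nearest orthogonal one, and deciding which side the automatic clip plane should remove. Both functions are illustrative assumptions about how such constraints could be implemented, not the system's actual code.

```python
import numpy as np

def snap_to_orthogonal(R):
    """Snap a slice orientation (3x3 rotation matrix, columns = slice
    axes in volume coordinates) to the nearest axis-aligned orthogonal
    orientation, as used to force axial/sagittal/coronal views. Each
    axis is replaced by the signed coordinate axis it is closest to;
    degenerate near-45-degree ties are not handled in this sketch."""
    snapped = np.zeros((3, 3))
    for i in range(3):
        axis = R[:, i]
        j = int(np.argmax(np.abs(axis)))   # dominant component of this axis
        snapped[j, i] = np.sign(axis[j])
    return snapped

def clip_side_facing_viewer(slice_normal, to_viewer):
    """Automatic clipping mode: remove the part of the volume between
    the slice and the viewpoint, ie, clip on whichever side of the
    slice plane faces the viewer."""
    side = np.dot(slice_normal, to_viewer)
    return "+normal" if side > 0 else "-normal"
```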
The 2D view can be computed in two different ways. In the "normal plain view," the slice or slab is shown in the same basic orientation as the slice in the 3D view (Fig. 1, right, and Fig. 5c). In contrast, the "ultrasound-like view" carries the ultrasound metaphor further: the slice or slab is presented as if it were a 2D ultrasound image, ie, with a locally fixed orientation, so that the top of the 2D view always shows the same edge of the slice/slab (marked by a small cone) (Fig. 5d). Thus, users with experience in ultrasound imaging can produce images similar to those they are used to. In addition, this approach has the advantage that the view does not depend on the orientation of the reference volume in the 3D view. The first mode, on the other hand, ensures that the current slice/slab is not shown in an orientation different from the one in the 3D view (avoiding, for instance, that a part of the slice shown on the left side of the 3D view appears on the right side of the 2D view). A disadvantage of this approach is the dependence of the 2D view on the 3D view, which can cause the 2D view to flip suddenly when the reference volume is rotated or when slices/slabs oriented nearly perpendicular to the 3D view direction are rotated (Fig. 5c, iii vs iv).
Fig 5 Different ways of visualizing a volume data set (here: abdominal CT scan) and an oblique slice. a 3D view without clipping. b 3D view with clip plane. c 2D view of the slice. d Ultrasound-like view of the slice. Each slice has a reference edge that is marked by a small cone.