Real‐time volume data acquisition poses substantial challenges for the traditional visualization pipeline, where data enhancement is typically seen as a pre‐processing step. In the case of 4D ultrasound data, for instance, costly processing operations to reduce noise and to remove artefacts need to be executed for every frame. To enable the use of high‐quality filtering operations in such scenarios, we propose an output‐sensitive approach to the visualization of streaming volume data. Our method evaluates the potential contribution of all voxels to the final image, allowing us to skip expensive processing operations that have little or no effect on the visualization. As filtering operations modify the data values, which may in turn affect the visibility, our main contribution is a fast scheme to predict their maximum effect on the final image. Our approach prioritizes filtering of voxels with high contribution to the final visualization based on a maximal permissible error per pixel. With zero permissible error, the optimized filtering will yield a result that is identical to filtering of the entire volume. We provide a thorough technical evaluation of the approach and demonstrate it on several typical scenarios that require on‐the‐fly processing.
Rapid increases in processing power coupled with advances in ultrasound technology have enabled the real‐time capture of high‐resolution 3D volumes, hence making ultrasound a true 4D imaging modality. 4D ultrasonography is already an important imaging technique in obstetrics and cardiology and has an ever‐increasing range of applications. In echocardiography, 4D ultrasound is used to diagnose the functioning of the valve and in intra‐operative guidance it is used as a navigational aid for minimally invasive valve replacement. In obstetrics, 4D ultrasound offers many potential benefits in the prenatal diagnosis of neurological problems by enabling the assessment of grimacing, breathing movements, swallowing, mouthing, isolated eye‐blinking and revealing the orientation of the limbs [LV11]. Similarly, 4D ultrasound provides significant advantages in assessing fetal heart defects [Ion10]. Recent developments are even progressing toward portable 4D scanners, such as GE's Voluson i system, which open up new possibilities in point‐of‐care medicine, e.g. for emergency medicine in crisis zones.
Despite its advantages, the visualization of 4D ultrasound data is plagued by several challenges. Compared to other tomographic imaging modalities such as CT or MRI, ultrasound has a much lower signal‐to‐noise ratio and additionally suffers from various acoustic artefacts such as attenuation, focusing and interference [NPH*00]. In volume visualization, these artefacts are particularly problematic, since they can obscure relevant structures, and hence affect the clinical value of the resulting image. A significant body of research has been devoted to the investigation of filtering strategies for the removal of speckle noise and other artefacts, but effective methods typically have a considerable computational cost [SSHW*12]. For regular 3D data or prerecorded 4D sequences, filtering is regarded as a one‐time operation, and is thus not considered as performance‐critical. However, when data are streamed at 5–20 volumes per second during a live examination, a delay in processing or the skipping of frames is clinically not acceptable. Hence, high‐quality visualization of live 4D ultrasound raises a need to tackle this problem.
We propose a novel output‐sensitive method to address the challenge of time‐demanding filtering of 4D ultrasound data (see also Figure 1). Key to reducing the computational load of filtering operations is to give priority to regions of the volume depending on their contribution to the visualization. Typically, only a small fraction of all voxels will be visible, since they are either classified as transparent or occluded by other structures. However, in contrast to widely used culling techniques, which exploit visibility information for improving rendering performance, we need to account for the fact that filtering operations themselves, since they modify the underlying data values, influence the visibility and the appearance of regions in the volume. We give special attention to the following cases:
(a) Noise A (green) is partly visible in the original dataset (dark green) and it completely occludes object of interest B. (b) Data filtering removes the visible noise. Consequently, the object of interest B is now partially visible.
A 2D example: (a) raw data to be filtered with a simple 3 × 3 averaging kernel. (b) The ground truth, where all data points are filtered. (c) The result if we filtered only the visible (non‐transparent) data.
Fetal ultrasound dataset Anna: (a) raw data, (b) with all voxels filtered and (c) with all voxels that contribute to the visualization filtered (threshold 0%). (b) and (c) yield the same results. In (d), (e) and (f), we are omitting more voxels from filtering ...
We solve these non‐trivial problems by quickly identifying a voxel's impact on the visualization, taking into account its potential change in visibility and color after undergoing the filtering process. This paper extends our previous work on visibility‐driven filtering [SBVB14]. The original approach only considered the case where the visual result of the pipeline was identical to applying the filter to all voxels. In this paper, we generalize and extend the method by quantifying the visual impact of not filtering a voxel. This allows us to introduce a visual tolerance which can be used to prioritize the voxels to be filtered. We propose an optimization scheme that uses the estimated contribution of voxels to maximize the number of voxels excluded from processing while keeping the summed maximal possible error per pixel below a user‐specified threshold. Moreover, we extend our framework to filters that also contain negative values in their kernels.
The topic of visibility is concerned with the classification of all objects in a scene as either being partially visible or totally invisible. The task of rapidly identifying entirely invisible objects is especially important when interactive frame update rates are desired. The surveys by Bittner and Wonka [BW03] and Cohen‐Or et al. [COCSD03] provide a good introduction to basic techniques. Approaches include image‐space methods such as hierarchical z‐buffering and hierarchical occlusion maps, spatial subdivision (e.g. octrees, BSP trees, k‐d trees), portals, and potentially visible sets.
Another related concept is deferred shading, which moves expensive shading operations to later processing stages where they are only applied to those object and image regions that actually need to be processed. First introduced by Deering et al. [DWS*88], a common approach uses two rendering passes where the first pass simply outputs the image‐space depth and normals per fragment. The second pass then uses these values to compute the final image. The deferred shading pipeline allows for fast computation of differential surface properties, e.g. curvature descriptors. Hadwiger et al. [HSBG05] use deferred shading on isosurfaces to realize advanced shading effects such as ridge and valley lines and curvature‐based flow advection.
In volume visualization, exploiting visibility information can lead to massive speedups of the rendering process. Early ray termination terminates ray traversal when the accumulated opacity reaches a certain threshold. Levoy [Lev90] proposed a hierarchical subdivision of the volume for empty‐space skipping. A very common approach is the use of a min‐max octree in combination with a summed area table of the transfer function to quickly identify empty parts of the volume [LL94]. Boada et al. [BNS01] proposed a management policy, which uses a hierarchical approach in homogeneous regions and regions of low interest instead of a one‐texel‐per‐voxel logic on the entire dataset. Kraus and Ertl [KE02] introduced adaptive texture maps and applied them in volume rendering. Mora et al. [MJC02] proposed the use of hierarchical occlusion maps for hidden volume removal in an object‐order volume rendering algorithm.
The elimination of occluded but non‐transparent parts of the volume is particularly important in out‐of‐core techniques, where costly disk‐to‐main‐memory or main‐memory‐to‐GPU‐memory transfers need to be avoided. A recent state‐of‐the‐art report [BHP14] provides a good overview of large‐scale volume visualization techniques including those using visibility information. Most approaches employ a form of bricking where the volume is subdivided into smaller blocks [BHMF08, LLY06]. Gobbetti and Marton [GM05] introduced a hybrid space‐partitioning scheme based on triangles (leaf nodes) and voxels (inner nodes). Their multi‐resolution framework for interactive rendering of large and complex models employed visibility culling of the nodes. They coupled visibility culling, out‐of‐core construction and view‐dependent rendering to achieve interactive frame rates. Later, Gobbetti et al. [GMG08] presented an adaptive out‐of‐core method for rendering large scalar volumes, also exploiting visibility. Kniss et al. [KLF05] use the term deferred filtering to refer to a two‐pass volume rendering approach for compressed formats. In the first pass, a local subset of the data is reconstructed and the second pass can then exploit the GPU's native interpolation capabilities. Crassin et al. [CNLE09] suggested the use of information extracted from the previous frame to guide data streaming from slower memory units. For the visualization of petascale volume data, Hadwiger et al. [HBJP12] presented a visibility‐driven rendering method which employs a virtual memory architecture. They use visibility information to only fetch and reconstruct needed bricks from a multi‐resolution hierarchy. Fogal et al. [FSK13] presented a detailed analysis of ray‐guided approaches for large volume data visualization including an evaluation of the effects of common tunable parameters such as brick size.
A common limitation of all these approaches, with respect to our application, is that the volume is considered to be static, i.e. its values do not change from one frame to the next.
Westenberg et al. [WRW07] used attribute filtering, which allows for the selective removal of connected components, and a max‐tree data representation for fast interactive filtering of static volume data. Marton et al. [MGDG14] employed a deferred architecture for smoothing of sharp edges that may appear between blocks during block‐based rendering of large volume data. Jeong et al. [JBH*09] perform on‐demand filtering of electron microscopy data only for visible blocks. A local histogram‐based edge metric, which is used for the enhancement of tissue boundaries, is recalculated during the filtering pass only for visible blocks. However, this approach is not feasible for streaming volume data, where the entire volume is replaced continuously. In this paper, we present a solution that specifically addresses this scenario. Furthermore, our approach also handles the case where invisible parts of the volume become visible after filtering.
Progressive rendering is often either based on image‐ or ray sampling distance, and on system responsiveness constraints. Frey et al. [FSME14] introduced a space‐time error metric for progressive rendering where the spatial error describes the correctness of the pixel values while the temporal error describes the lag of the rendered image. Falk and Weiskopf [FW08] employed the principle of output sensitivity, i.e. the concept that the rendering cost should be primarily determined by the complexity of the output to the image plane, for the on‐the‐fly computation of 3D line integral convolution. Our approach is based on the same ideas, but we apply them to the filtering of time‐dependent volume data.
Only a few works have addressed the visualization of 4D streaming ultrasound data. The poster by Bruder et al. [BJE*11] shows how the Voreen volume rendering engine can be used to visualize real‐time 4D ultrasound data, and Elnokrashy et al. [EEH*09] present the basic pipeline setup for GPU‐based ultrasound rendering, but neither work discusses the important issue of filtering. Bronstad et al. [BATK12] used visibility information to locally adapt the opacity transfer function for 4D ultrasound data. Most related to our approach is later work by Elnokrashy et al. [EHH*12], where they restrict themselves to single isosurfaces and apply a smoothing filter to the z‐buffer. In contrast, our method enables true filtering of the scalar field in an integrated volume rendering pipeline for live streamed 4D ultrasound.
In the standard visualization pipeline, data enhancement (e.g. filtering) directly follows the acquisition stage. The visual mapping and rendering steps then operate on the enhanced data. The problem arises when in situ visualization of streamed data is required. In this case, for every rendered frame the data need to undergo an expensive processing operation which may be significantly more costly than the mapping/rendering stages themselves. Examples of such costly common data enhancement algorithms are iterative anisotropic diffusion filtering [PM87], iterative bilateral filtering [TM99], lowest‐variance filtering [SSHW*12] and vessel‐enhancing filtering [FNVV98].
The term output‐sensitive algorithm refers to an algorithm whose running time is dominated by the size of its output. In our context, the output of our pipeline consists of the visible color contributions to pixels on the screen, and we aim to exploit the fact that these contributions are not uniform across the voxels of the input volume to reduce the running time of the filtering stage. We observe that in most volume visualization applications, only a fraction of the data is displayed at a time. Transfer functions limit the range of displayed data values and large portions of the volume may be occluded by other structures. Additionally, particularly for ultrasound data, clipping planes or more advanced clipping geometries are commonly used to remove unwanted parts of the data [BBBV12]. Furthermore, when zooming in to investigate small details, large parts of the volume may simply lie outside of the viewing frustum.
Our output‐sensitive filtering scheme utilizes the visibility and color‐variation information in order to quantify the maximal possible contribution of a voxel to the image. Depending on a user‐defined threshold, voxels with zero and small contribution to the image will be excluded from processing. An overview of our approach is depicted in Figure 4. Our pipeline receives one volume of the ultrasound stream at a time. This volume can then undergo multiple sequential processing operations and the result of these operations is then displayed by a volume rendering algorithm. Our strategy to enable on‐the‐fly processing of live streamed volume data is to prioritize processing of those regions that affect the final displayed image the most, while not exceeding a predefined maximal permissible color error per pixel.
The pipeline for output‐sensitive data filtering: 4D data are streamed directly from the ultrasound scanner. In the first stage, a subset of voxels is tagged for processing based on their impact on the visualization. The neighborhood information ...
In order to determine the final impact at every position in the volume, we would have to perform the full processing and volume rendering steps. However, by making certain assumptions about the nature of the processing operation to be performed (see Section 3.1), we can develop methods to conservatively predict the impact of voxels on the visualization prior to the application of the filtering operation. In Section 3.2, we explain the concept of the impact of a voxel on the visualization as the potential error ε that would be introduced if this voxel were not filtered. We also provide a conservative estimate for this error that can be quickly evaluated using a set of simple lookup tables, described in Section 3.3. Then we apply a greedy approach, described in Section 3.4, that aims at maximizing the set of voxels to be excluded from processing, so that the sum of the error ε over all voxels that contribute to a pixel is below a user‐set threshold. Lastly, in Section 3.5 we discuss how the final working set of voxels is determined and filtered.
To predict the visual impact of a filtering operation, we need to make certain assumptions about its nature. Our input volume is a scalar‐valued volumetric function $f(\mathbf{p})$. In general, a filtering operation replaces the original value at a voxel position p with a new value $f'(\mathbf{p})$. For any such operation, we require that the filter has a finite support, given by a neighborhood $N(\mathbf{p})$, and that the filtered function value stays within the range of the original values in this neighborhood:

$$f'(\mathbf{p}) \in \left[ \min_{\mathbf{q} \in N(\mathbf{p})} f(\mathbf{q}),\; \max_{\mathbf{q} \in N(\mathbf{p})} f(\mathbf{q}) \right].$$
Both requirements are trivially true for any convolution with normalized non‐negative weights (mean, truncated Gaussian, etc.), but also hold for a wide range of other smoothing or denoising operations including nonlinear filters such as the median or bilateral filters. In the case of an edge‐detector that contains negative values in the operator mask, and that does not conform with this requirement, we can amend the estimated range of $f'(\mathbf{p})$ to the following:

$$f'(\mathbf{p}) \in \left[ w^{-} f_{\max}(\mathbf{p}) + w^{+} f_{\min}(\mathbf{p}),\; w^{-} f_{\min}(\mathbf{p}) + w^{+} f_{\max}(\mathbf{p}) \right],$$

where $w^{-}$ is the sum of all negative weights of the filter kernel, $w^{+}$ is the sum of all non‐negative weights, and $f_{\min}(\mathbf{p})$ and $f_{\max}(\mathbf{p})$ are the minimum and maximum values in the neighborhood $N(\mathbf{p})$. Without the above requirements, a processing operation could, in principle, be a random number generator and there would be no way to predict the resulting impact on the final image.
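As a sanity check, this amended range can be verified numerically. The following sketch (our illustrative Python, not part of the paper's OpenCL implementation; the kernel and function names are made up) confirms the bound for a kernel with negative weights:

```python
import numpy as np

# Empirically check the amended range bound: for a kernel w with
# w_neg = sum of negative weights and w_pos = sum of non-negative weights,
# the filtered value must lie in
#   [w_neg * f_max + w_pos * f_min,  w_neg * f_min + w_pos * f_max],
# where f_min/f_max are the neighborhood extrema.
def filtered_value_bounds(kernel, neighborhood):
    w_neg = kernel[kernel < 0].sum()
    w_pos = kernel[kernel >= 0].sum()
    f_min, f_max = neighborhood.min(), neighborhood.max()
    return w_neg * f_max + w_pos * f_min, w_neg * f_min + w_pos * f_max

rng = np.random.default_rng(0)
kernel = np.array([-1.0, 2.0, -1.0])   # Laplacian-like kernel with negative weights
for _ in range(500):
    nb = rng.uniform(0.0, 1.0, size=3)
    value = float(kernel @ nb)
    lo, hi = filtered_value_bounds(kernel, nb)
    assert lo - 1e-12 <= value <= hi + 1e-12
print("bound holds")
```

The bound follows because each non‐negative weight contributes at most $w_i f_{\max}$ and each negative weight at most $w_i f_{\min}$ (and conversely for the lower end).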
The idea behind output‐sensitive processing is to maximize the set of voxels excluded from processing to decrease the computational workload. The impact of a voxel at position p on the final image is equivalent to the maximal potential color error per pixel that would be induced by skipping the processing of this voxel. For each pixel, the accumulated error ε of the voxels that contribute to that pixel should stay below a user‐specified threshold T:

$$\sum_{s \in R \setminus P} \varepsilon(s) \le T,$$
where R is the set of voxels that contribute to a pixel and P is the set of voxels selected for processing. If $T = 0$, then only voxels that have no contribution to the pixel can be skipped (because they are completely transparent or fully occluded). If $T = 100\%$, then any change is permissible, i.e. a pixel can change from black to white or vice versa. Our goal is to minimize the set of voxels that will be processed and at the same time conform to this condition. We start the derivation of ε from the emission‐absorption volume rendering integral, which defines the light transport from the source at $s_0$ to the viewpoint D [EHK*06]:

$$I(D) = I_0\, e^{-\tau(s_0, D)} + \int_{s_0}^{D} q(s)\, e^{-\tau(s, D)}\, \mathrm{d}s, \qquad \tau(a, b) = \int_{a}^{b} \kappa(t)\, \mathrm{d}t.$$
The term $\tau(a, b)$ denotes the extinction between points a and b, and $e^{-\tau(a, b)}$ is the respective transmittance factor. The term $q(a)$ is the emission factor at point a. For simplicity, the following explanation will assume a background intensity of $I_0 = 0$. We also use short names for the emission and extinction in the original volume ($q$ and $\kappa$, respectively), and in the filtered volume ($\tilde{q}$ and $\tilde{\kappa}$). The set of voxels that contribute to a pixel is referred to as R, and P is the set of processed voxels. Hence, the set of skipped voxels is $S = R \setminus P$. We can then write the accumulated error per pixel as:

$$\varepsilon = \int_{S} \left| \tilde{q}(s)\, e^{-\tilde{\tau}(s, D)} - q(s)\, e^{-\tau(s, D)} \right| \mathrm{d}s,$$
i.e. the absolute difference between the filtered and unfiltered contributions to the pixel for all skipped voxels. Evaluating $\tilde{q}$ and $\tilde{\tau}$ would defy our goal of skipping processing operations. Therefore, we instead estimate an upper bound of the error. The estimation is based on the prediction of the maximal possible transmittance between s and the viewpoint in the filtered volume, i.e. $e^{-\tilde{\tau}_{\min}(s, D)}$, and a prediction of the maximal emission difference between the original and filtered volumes:

$$\varepsilon \le \int_{S} \max \left| \tilde{q}(s) - q(s) \right| \; e^{-\tilde{\tau}_{\min}(s, D)}\, \mathrm{d}s.$$
The proof of the inequality in Equation (6) is discussed in Appendix A. To arrive at a discrete representation of the error term, we use the standard numerical approximation of the transmittance term in the volume rendering integral [EHK*06]:

$$e^{-\tau(a, b)} \approx \prod_{i = a/\Delta s}^{b/\Delta s} e^{-\kappa(i \Delta s)\, \Delta s} = \prod_{i = a/\Delta s}^{b/\Delta s} \left(1 - \alpha_i\right),$$
where $\alpha_i = 1 - e^{-\kappa(i \Delta s)\, \Delta s}$, $\Delta s$ is the sampling distance, κ is the absorption function and α is the opacity. We use the notation $\alpha(a, b)$ for the accumulated opacity between two points a and b along a viewing ray, which can be recursively defined as:

$$\alpha(a, b) = \alpha(a, b - 1) + \left(1 - \alpha(a, b - 1)\right) \hat{\alpha}_b,$$
where $\hat{\alpha}_b$ is the corrected opacity according to the sampling distance $\Delta s$. For readability, we will use $v(s) = 1 - \alpha(0, s)$ to denote the visibility of a point s from the viewpoint; $v(s) = 0$ means that this point is invisible. We can then write our error term as:

$$\varepsilon(s) = \Delta C_{\max}(s)\; \tilde{v}_{\max}(s),$$
where $\Delta C_{\max}(s)$ is the maximal possible color difference, which is the discrete approximation of the maximal difference in emission, and $\tilde{v}_{\max}(s)$ denotes the maximum visibility after filtering. Our approach for obtaining $\Delta C_{\max}$ and $\tilde{v}_{\max}$ is detailed in the subsequent sections.
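The discrete visibility underlying this error term can be sketched as follows (illustrative Python with our own names, not the paper's code; the recursion is checked against its closed form):

```python
import numpy as np

# The recursively accumulated opacity alpha(0, i) equals the closed form
# 1 - prod_j (1 - alpha_hat_j), and the visibility in front of sample i is
# v(i) = 1 - alpha(0, i-1). The error of skipping sample s is then bounded
# by delta_c_max(s) * v_max(s).
def accumulate_front_to_back(alpha_hat):
    acc, out = 0.0, []
    for a in alpha_hat:
        acc = acc + (1.0 - acc) * a   # recursive definition
        out.append(acc)
    return np.array(out)

alpha_hat = np.array([0.1, 0.5, 0.2, 0.9])    # corrected per-sample opacities
acc = accumulate_front_to_back(alpha_hat)
closed = 1.0 - np.cumprod(1.0 - alpha_hat)
assert np.allclose(acc, closed)

# Visibility of each sample: fully visible at the first sample, then fading
v = np.concatenate(([1.0], 1.0 - acc[:-1]))
print(np.round(v, 4))
```

This also makes the later observation plausible that errors shrink with distance from the viewpoint, since the visibility factor decreases monotonically along the ray.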
Our error determination method requires the minimal and maximal opacity and the maximal color change that a voxel can attain after filtering. Based on our assumptions, we know that the voxel value after processing will lie within $[f_{\min}(\mathbf{p}), f_{\max}(\mathbf{p})]$, the value range of the neighborhood $N(\mathbf{p})$ of a voxel at position p. Unlike other approaches, which do not consider dynamically changing data, we cannot rely on any preprocessing. Hence, in order to obtain information about the voxel neighborhood, it needs to be recomputed for every new volume. We compute, for each position p in the volume, the minimum value $f_{\min}(\mathbf{p})$ and the maximum value $f_{\max}(\mathbf{p})$ within a given neighborhood determined by the support of the processing operation (i.e. the size of the kernel for convolution operations). We use an OpenCL kernel in a multi‐pass approach which exploits the fact that the min and max filters are separable. While this is not a cheap operation, it is still significantly less costly than the types of filtering operations we want to support.
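A minimal sketch of this separable min computation, with NumPy standing in for the OpenCL kernels (function names are ours, not the paper's):

```python
import numpy as np

# A (2r+1)^3 box minimum factors into three 1-D passes because the min
# filter is separable; edge replication clips the window at the borders.
def min_filter_1d(vol, radius, axis):
    """Running minimum along one axis, window clipped at the borders."""
    pad = [(0, 0)] * vol.ndim
    pad[axis] = (radius, radius)
    padded = np.pad(vol, pad, mode="edge")
    shifted = []
    for s in range(2 * radius + 1):
        idx = [slice(None)] * vol.ndim
        idx[axis] = slice(s, s + vol.shape[axis])
        shifted.append(padded[tuple(idx)])
    return np.minimum.reduce(shifted)

def box_min(vol, radius):
    """f_min(p): minimum over the (2*radius+1)^3 neighborhood of every voxel."""
    for axis in range(vol.ndim):
        vol = min_filter_1d(vol, radius, axis)
    return vol

# Verify the separable passes against a brute-force evaluation
def brute_min(vol, r):
    out = np.empty_like(vol)
    for idx in np.ndindex(vol.shape):
        window = tuple(slice(max(i - r, 0), i + r + 1) for i in idx)
        out[idx] = vol[window].min()
    return out

vol = np.random.default_rng(1).random((6, 5, 4))
assert np.array_equal(box_min(vol, 1), brute_min(vol, 1))
print("separable box min matches brute force")
```

The maximum $f_{\max}(\mathbf{p})$ is obtained analogously with `np.maximum.reduce`.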
To quickly determine the relevant opacity contributions for a voxel with a particular neighborhood, we precompute two 2D look‐up tables $\alpha_{\min}(i, j)$ and $\alpha_{\max}(i, j)$ for all combinations of possible data values:

$$\alpha_{\min}(i, j) = \min_{f \in [i, j]} \alpha(f), \qquad \alpha_{\max}(i, j) = \max_{f \in [i, j]} \alpha(f),$$
where i and j are in the value range of the volumetric dataset. For example, $i, j \in [0, 255]$ for a dataset with 8 bits per voxel. $\alpha_{\min}(i, j)$ and $\alpha_{\max}(i, j)$ are the minimum and maximum opacity values in the opacity transfer function for all values in the interval $[i, j]$. Consequently, the opacity of a processed voxel at position p will be in the range $[\alpha_{\min}(f_{\min}(\mathbf{p}), f_{\max}(\mathbf{p})),\; \alpha_{\max}(f_{\min}(\mathbf{p}), f_{\max}(\mathbf{p}))]$. An example of an opacity transfer function and its corresponding $\alpha_{\min}$ and $\alpha_{\max}$ tables is shown in Figure 5.
An opacity transfer function and its corresponding $\alpha_{\min}$ and $\alpha_{\max}$ tables. For example, $\alpha_{\min}(64, 192) = 0$, because 0 is the minimal value of the opacity transfer function in the range [64, 192]. ...
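A possible construction of the two tables can be sketched in Python under an assumed example transfer function (`tf_alpha` below is made up; the paper's tables are built on the GPU):

```python
import numpy as np

# Build alpha_min / alpha_max for an 8-bit value range: entry (i, j) stores
# the extremum of the opacity transfer function over the interval [i, j].
# Extending [i, j-1] by one value gives an O(n^2) dynamic-programming fill.
n = 256
tf_alpha = np.zeros(n)
tf_alpha[100:180] = np.linspace(0.0, 1.0, 80)  # arbitrary opacity ramp

alpha_min = np.zeros((n, n))
alpha_max = np.zeros((n, n))
for i in range(n):
    alpha_min[i, i] = alpha_max[i, i] = tf_alpha[i]
    for j in range(i + 1, n):
        alpha_min[i, j] = alpha_min[j, i] = min(alpha_min[i, j - 1], tf_alpha[j])
        alpha_max[i, j] = alpha_max[j, i] = max(alpha_max[i, j - 1], tf_alpha[j])

# A voxel with neighborhood extrema (f_min, f_max) has, after filtering,
# an opacity inside [alpha_min[f_min, f_max], alpha_max[f_min, f_max]].
print(alpha_min[64, 192], alpha_max[64, 192])  # → 0.0 1.0
```

With this example transfer function, the interval [64, 192] contains both fully transparent and fully opaque values, mirroring the $\alpha_{\min}(64, 192) = 0$ example from Figure 5.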
The quantification of the possible color change of a voxel is done in a similar fashion. We generate a third lookup table $\Delta C_{\max}(i, j)$, where i and j correspond to data values in the volume, and the elements store the maximum color difference between any two colors $C(k)$ and $C(l)$ with $k, l \in [i, j]$.
The calculation of $\Delta C_{\max}(i, j)$ is explained in Figure 6. We define a triangle embedded in 3D. Its vertices are the black color and the opacity‐premultiplied normalized RGB colors $C(i)$ and $C(j)$:

$$C(i) = \alpha(i)\, c(i), \qquad C(j) = \alpha(j)\, c(j).$$
Explanation of the calculation of $\Delta C_{\max}(i, j)$, the color difference between colors $C(i)$ and $C(j)$, on an example of a simplified color‐opacity transfer function. ...
$\Delta C_{\max}(i, j)$ is then the length of the longest edge of this triangle, normalized by a factor of $\sqrt{3}$. The normalization factor corresponds to the distance between the black and the white color in the RGB color space. The length of an edge is defined as the Euclidean norm of a vector in 3D space. In the proof that is given in Appendix A, we also use the fact that $\Delta C_{\max}(i, j) \le 1$.
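A single table entry can be sketched as follows (illustrative Python; the colors, opacities and function name are example values of ours):

```python
import numpy as np

# One Delta-C entry: the longest edge of the triangle spanned by black and
# the two opacity-premultiplied RGB colors, normalized by sqrt(3), the
# black-to-white distance in RGB space.
def delta_c(rgb_i, alpha_i, rgb_j, alpha_j):
    black = np.zeros(3)
    ci = alpha_i * np.asarray(rgb_i, dtype=float)  # premultiplied colors
    cj = alpha_j * np.asarray(rgb_j, dtype=float)
    edges = [np.linalg.norm(ci - black),
             np.linalg.norm(cj - black),
             np.linalg.norm(ci - cj)]
    return max(edges) / np.sqrt(3.0)

# Extreme case: opaque white vs. fully transparent -> maximal difference 1
print(delta_c([1, 1, 1], 1.0, [0, 0, 0], 0.0))   # → 1.0
```

Including the black vertex makes the bound conservative: even two identical semi‐transparent colors yield a nonzero entry, accounting for the possibility that the sample contributes nothing at all after filtering.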
All three look‐up tables ($\alpha_{\min}$, $\alpha_{\max}$ and $\Delta C_{\max}$) can be computed simultaneously. The computation is straightforward and only consumes a negligible amount of time when the transfer function is modified. This is conceptually similar to pre‐integrated volume rendering, which employs a lookup table for accumulated colors and opacities along a viewing ray [EKE01]. The tables can be stored as three channels of a single 2D texture. Then, during the determination of $\varepsilon$, the values of $\alpha_{\min}$, $\alpha_{\max}$ and $\Delta C_{\max}$ can be quickly retrieved for a voxel at position p via a single table‐lookup with $f_{\min}(\mathbf{p})$ and $f_{\max}(\mathbf{p})$. The value of $\alpha_{\min}$ can then be directly used in the accumulation of the maximum visibility $\tilde{v}_{\max}$, as detailed in the next section. The value of $\alpha_{\max}$ tells us whether the voxel is completely transparent after filtering, i.e. $\alpha_{\max} = 0$, and hence can be skipped without affecting the per‐pixel error.
Being able to evaluate the minimum and maximum opacity and the maximum color difference for any voxel, we can now proceed to identify a set of voxels that complies with Equation (3). As our aim is to reduce overall processing time, we use a greedy approach inspired by object‐order volume rendering algorithms such as shear‐warp factorization [LL94]. We traverse the volume in a slice‐by‐slice manner along the principal axis that is most aligned with the viewing direction and store per‐ray information in a view‐aligned intermediate image. Similar approaches have also been used in the precalculation of illumination information [RDR10].
For the computation of visibility, it would be natural to perform a front‐to‐back pass through the volume, i.e. starting with the slice that is closest to the viewpoint. However, we would like to accumulate the error ε during the same pass. In the example in Figure 7, we see that the front‐to‐back accumulation of ε reaches the threshold T significantly faster than back‐to‐front accumulation, i.e. $i_1 < i_2$, where $i_1$ and $i_2$ are the numbers of samples that can be accumulated (and hence skipped) before the threshold T is exceeded in front‐to‐back and back‐to‐front order, respectively:

$$i_1 = \max \left\{ n : \sum_{i=1}^{n} \varepsilon(s_i) \le T \right\}, \qquad i_2 = \max \left\{ n : \sum_{i=N-n+1}^{N} \varepsilon(s_i) \le T \right\}.$$
(a) Opacity profile of an example voxel ray and (b) the corresponding front‐to‐back accumulated opacity $\alpha(0, s)$. (c) The corresponding error factor. (d) Front‐to‐back accumulation and (e) back‐to‐front ...
From Equation (4), we see that with decreasing transmittance, we have decreasing visibility. If we disregard the factor $\Delta C_{\max}$ (Equation (9)), as in Figure 7, we observe that smaller errors are expected farther away from the viewpoint. In this respect, it pays off to accumulate errors back‐to‐front: in this way, a significantly smaller portion of voxels will be tagged for processing.
Since $\tilde{v}_{\max}$ depends on the front‐to‐back accumulated opacities, the challenge is to combine the computation of these terms and the back‐to‐front accumulation of ε in a single pass through the volume. Our solution is to express the front‐to‐back accumulated opacity $\alpha(0, s)$ as a function of the back‐to‐front accumulated opacity $\rho(s)$ and the total accumulated opacity along the ray $\alpha_{\text{total}}$:

$$1 - \alpha(0, s) = \frac{1 - \alpha_{\text{total}}}{1 - \rho(s)},$$
where $\rho(s)$ is recursively defined using the corrected opacities $\hat{\alpha}$ as:

$$\rho(s_i) = \hat{\alpha}_{i+1} + \left(1 - \hat{\alpha}_{i+1}\right) \rho(s_{i+1}),$$

i.e. $\rho(s_i)$ accumulates the opacity of all samples behind $s_i$.
We illustrate this procedure in Figure 8. A problem of division by zero will occur when $\rho(s) = 1$. This happens when the ray hits a sample with $\hat{\alpha} = 1$ or due to numerical imprecision. We refer to Appendix B and the MATLAB script in the supplementary material for details on how the division by zero can be circumvented. Using Equations (9) and (14), ε can be written as:

$$\varepsilon(s) = \Delta C_{\max}(s)\, \frac{1 - \alpha_{\text{total}}}{1 - \rho(s)}.$$
As the term $\alpha_{\text{total}}$ is only known once the back‐to‐front pass through the volume has been completed, we can instead express this as:

$$\varepsilon(s) = \left(1 - \alpha_{\text{total}}\right) \hat{\varepsilon}(s), \qquad \hat{\varepsilon}(s) = \frac{\Delta C_{\max}(s)}{1 - \rho(s)},$$

and accumulate $\hat{\varepsilon}$ during the pass.
We illustrate the accumulation in Figure 9: ρ and $\hat{\varepsilon}$ can be computed recursively on‐the‐fly in a single back‐to‐front pass, because they only depend on data already traversed (the grey corridor). Both ρ and $\hat{\varepsilon}$ are stored in an intermediate volume. Since the accumulated error term also depends on positions which have not been traversed yet (the blue corridor), we multiply $\hat{\varepsilon}$ with the correction term $(1 - \alpha_{\text{total}})$ in the next step, i.e. during the working‐set assignment. $\alpha_{\text{total}}$ is fetched from the intermediate volume, as explained in Figure 9. The greedy‐accumulation algorithm is given in pseudo‐code in Appendix C.
Back‐to‐front accumulation of ρ and $\hat{\varepsilon}$ employs the object‐order rendering concept via axis‐aligned slicing and ping‐pong accumulation buffers. The slicing axis is chosen to be closest to the view direction. ...
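The relationship between front‐to‐back and back‐to‐front accumulation can be verified numerically. The sketch below uses our own index conventions (the current sample is included in the front‐to‐back term and excluded from ρ) and illustrative names:

```python
import numpy as np

# Check the single-pass identity: 1 - alpha_ftb(s) = (1 - alpha_total) / (1 - rho(s)),
# so the visibility factor can be accumulated back-to-front and corrected by
# (1 - alpha_total) once the pass is finished.
alpha_hat = np.array([0.1, 0.5, 0.2, 0.6])     # corrected opacities, front to back
t = 1.0 - alpha_hat                            # per-sample transparencies

suffix = np.cumprod(t[::-1])[::-1]             # prod of t[j] for j >= i
rho = 1.0 - np.append(suffix[1:], 1.0)         # accumulate strictly behind sample i
alpha_total = 1.0 - suffix[0]                  # total opacity along the ray

ftb_direct = 1.0 - np.cumprod(t)               # front-to-back, including sample i
ftb_from_rho = 1.0 - (1.0 - alpha_total) / (1.0 - rho)
assert np.allclose(ftb_direct, ftb_from_rho)
print("identity holds")
```

The identity follows because transmittance is multiplicative along the ray: the product of the transmittance in front of a sample and the transmittance behind it equals the total transmittance. The division by zero at $\rho(s) = 1$ is visible here as well: a fully opaque sample behind s makes the denominator vanish.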
While the accumulated error $\varepsilon$ gives us information about which voxels have a sufficiently small summed impact on the final image, we also need to make sure that the results of the processing operation are correct. Many data enhancement techniques require information about the neighborhood, and many approaches are carried out in an iterative fashion, e.g. when filters are separable. If we rely only on the processing set M defined by the error threshold, iterations can obtain incorrect information from the neighborhood in later passes. This problem is sketched in Figure 10.
If data enhancement consists of one operation only, expansion of the original mask (M) is not necessary. If it runs in two iterations, values within the band of influence (BOI) must have been processed in the 1st iteration since the 2nd iteration relies ...
Our solution to this problem is to dilate the binary mask M defined by the error threshold to obtain a working set of voxels (WSV). Then we define a band of influence (BOI) such that BOI = WSV ∖ M, i.e. BOI ∩ M = ∅. For example, if a filtering operation has m iterations and, in each iteration, the operation requires a neighborhood of radius n, then the visibility mask must be dilated by a radius of m · n. Using the WSV for filtering calculations will ensure that the values of all voxels in M will be correct after the final iteration of the filter. In Figure 11, we compare WSVs for different thresholds and different transfer functions of a synthetic dataset (a box composed of a single scalar value). The WSVs were rotated to show that with increasing threshold, the WSV shrinks primarily from behind.
Comparison of the working set of voxels (WSV) for different thresholds and different transfer function setup. BOI has radius 5 voxels. We used a synthetic dataset—a box composed of a unique scalar value. The transfer function is therefore only ...
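The dilation rule can be illustrated in 1D: filtering only a mask dilated by m · n reproduces the ground truth on M after m iterations of a radius‐n filter (illustrative Python with made‐up names, not the paper's implementation):

```python
import numpy as np

# m iterations of a radius-n filter need the mask dilated by m*n: incorrect
# boundary values propagate inward by n per iteration, so M must lie at
# distance >= m*n from the working-set boundary.
def box_avg(signal, valid):
    """One radius-1 averaging pass, computed only where valid is True."""
    out = signal.copy()
    for i in range(1, len(signal) - 1):
        if valid[i]:
            out[i] = signal[i - 1 : i + 2].mean()
    return out

signal = np.random.default_rng(2).random(64)
m, n = 3, 1

# Ground truth: filter everything for m iterations
truth = signal.copy()
for _ in range(m):
    truth = box_avg(truth, np.ones(64, dtype=bool))

# Restricted run: mask M in the middle, working set dilated by m*n
M = np.zeros(64, dtype=bool); M[20:30] = True
wsv = np.zeros(64, dtype=bool); wsv[20 - m * n : 30 + m * n] = True
restricted = signal.copy()
for _ in range(m):
    restricted = box_avg(restricted, wsv)

assert np.allclose(truth[M], restricted[M])
print("values on M match ground truth")
```

A smaller dilation (e.g. radius m · n − 1) would let corrupted boundary values reach M by the last iteration, which is exactly the failure mode sketched in Figure 10.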
Our system was implemented in C++ using OpenCL for the realization of our processing pipeline. The volume rendering itself is performed in OpenGL. Our approach is not tied to any specific volume rendering algorithm, and in fact the current implementation of our pipeline flexibly integrates several different renderers including standard GPU‐based ray casting and slice‐based multidirectional occlusion volume rendering [SPBV10]. The benchmarks in this paper were performed on a PC with a 3.07 GHz CPU and an NVIDIA Quadro K6000 GPU with 12 GB of dedicated graphics memory, running Windows 7, unless explicitly mentioned otherwise.
In Figure 12, we compare the performance of different output‐sensitivity thresholds. As can be seen, the number of voxels in the WSV indeed decreases with growing threshold and consequently, the runtime of the filtering operations decreases as well. The most significant improvement of performance compared to processing the entire volume was achieved by excluding voxels with zero impact from processing (T=0). In the case of the cardiac and gallbladder ultrasound in Figure 12, the computation time decreased by 35% (from 30.7 ms to 20.0 ms) and 55% (from 277 ms to 126 ms), respectively. By increasing the sensitivity threshold, we can further improve the performance while still maintaining the main desired properties of the filtering operation. Increasing the sensitivity threshold has a more pronounced effect on highly translucent structures, e.g. the surrounding tissue of the gallbladder, than for more opaque objects such as in the cardiac ultrasound example.
Demonstration of filtering runtime and the relative size of the WSV compared to the volume size on two datasets: cardiac ultrasound (upper row) and gallbladder ultrasound (bottom row). In (a) and (g), we show snapshots of unprocessed streams. For the ...
The performance gain is linearly dependent on the percentage of voxels in the WSV relative to the entire volume. The dependency is also influenced by the size of the dataset and the type of the filter. In Figure 13, we profile the performance of the lowest‐variance filter [SSHW*12] as a function of the relative size of the WSV for ultrasound datasets of different sizes. This measurement was conducted on an NVIDIA GTX680 GPU with 2 GB dedicated memory. The horizontal lines show the constant time that is needed to process the full volume. Only if more than 80% of the data is visible, which is very uncommon for real‐world ultrasound data, would our output‐sensitive approach no longer pay off. The gallbladder dataset was intentionally upsampled to a grid of 256³ to show the performance on larger datasets, even though ultrasound datasets rarely have this size. In this case, the breaking point is around 60%. We also measured performance using other filters, e.g. Perona‐Malik anisotropic diffusion and bilateral filtering, and they showed a very similar trend to lowest‐variance filtering. The extra overhead of calculating the accumulated impact of voxels compared to calculating only visibility as described in our previous work [SBVB14] amounts to approximately 1%. Therefore, we can conclude that computation of the working set of voxels for output‐sensitive processing does not pose a significant performance overhead in the entire pipeline as compared to visibility‐driven processing.
We verified that our approach for culling invisible regions, i.e. an output‐sensitivity threshold of 0, is correct in the sense that it cannot affect the final image: using difference images and a formal proof, we showed that the visualizations are identical once the problems of reduced occlusion and increased opacity are solved [SBVB14]. We also calculated the image difference between Figure 1(b) (the ground truth) and 1(c) (T = 25%); the maximal color difference was in fact only 5.88%. For the difference between Figures 1(b) and 1(d) (T = 90%), we measured 10.85%. In addition, we measured the error in the visualizations displayed in Figure 12 and plot it in Figure 14. In all cases, the measured error is much smaller than the maximal permissible error, confirming that our estimator is very conservative.
On modern ultrasound scanners with a 4D cardiac probe, ultrasound volumes are acquired at rates of 10–15 volumes per second [PVMd12], depending on the spatial extent of the acquired volume (sector): the larger the sector, the larger the volume and the lower the acquisition rate. In our experience, larger sectors are acquired at approximately 15 volumes per second. Figure 12 shows that our method sustains even higher frame rates, which is a relevant step toward in situ high‐quality processing and visualization for current 3D ultrasound acquisition capabilities.
The presented approach was designed to support complex filtering operations for live streaming volume data. Clearly, for very simple filters (such as, trivially, the min and max filters themselves), the overhead of minimum/maximum computation does not pay off, but in those cases filtering the whole volume is easily possible in real time. Our goal was to enable the application of filters that normally would be too expensive in such a scenario.
Furthermore, our approach naturally becomes less effective if a high percentage of the voxels is tagged for processing; in this case, processing the full volume might even be faster. However, if we detect a high percentage of tagged voxels in one frame (e.g. due to a highly transparent transfer function and a low sensitivity threshold), we can simply disable our pipeline until the transfer function is modified again. This means that in most practical scenarios, the overall performance is never worse than processing the entire volume.
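This fallback can be sketched as a small controller. The class and method names, as well as the break‐even threshold, are illustrative assumptions rather than part of our actual implementation:

```python
class FilterModeController:
    """Disable WSV-based culling when it stops paying off, and
    re-enable it only once the transfer function changes again."""

    def __init__(self, break_even_fraction=0.8):
        self.break_even_fraction = break_even_fraction
        self.culling_enabled = True

    def on_transfer_function_changed(self):
        # A new transfer function may change visibility drastically,
        # so give output-sensitive processing another chance.
        self.culling_enabled = True

    def choose_mode(self, tagged_fraction):
        """Return 'wsv' to filter only tagged voxels, 'full' otherwise."""
        if self.culling_enabled and tagged_fraction > self.break_even_fraction:
            self.culling_enabled = False
        return "wsv" if self.culling_enabled else "full"
```

Once a frame exceeds the threshold, the controller keeps processing full volumes until the transfer function changes, avoiding repeated per‐frame WSV computations that would not pay off.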
Currently, there are some limitations in our implementation which we aim to address in future work.
Despite these limitations, we have shown that our approach already achieves a considerable reduction in processing times for realistic percentages of visible voxels and hence enables real‐time filtering and, consequently, high‐quality visualization of streaming volume data in cases where this was previously impossible. An interesting direction for future research is to investigate extended schemes for adjusting the processing time. Currently, the decision whether or not to apply a filtering operation is binary. However, we believe that a more fine‐grained approach, where the filter quality is adjusted based on the expected visual impact, could also prove useful.
In this paper, we presented a novel approach for integrated filtering and visualization of streaming volume data. Our method conservatively estimates visibility information to limit processing operations to regions of the volume that can potentially contribute to the final image, which can result in a considerable reduction of the processing load. We have demonstrated that the resulting pipeline is an important step toward high‐performance integrated filtering and visualization of streaming volume data such as 4D ultrasound.
Supplementary material:
Video S1
Matlab script related to Figure 8
Matlab example for circumventing zero divisions in Equation 14
This work has been supported by the MedViz lighthouse project IllustraSound and the ISADAF project (In‐Situ Adaptive Filtering, #229352/O70) co‐funded by the VERDIKT program of the Norwegian Research Council. The research was also partially supported by the Vienna Science and Technology Fund (WWTF) project VRG11‐010 and by the EC Marie Curie Career Integration Grant project PCIG13‐GA‐2013‐618680. Parts of this work have been made possible by the University of Bergen, the Bergen University Hospital, and Christian Michelsen Research AS through their strategic support of the MedViz program. The authors also thank GE Vingmed Ultrasound for the support and Matej Mlejnek for providing the dataset Anna. We also acknowledge the usage of Colorbrewer [BH15] for the color map in Figure 14.
Here we provide the proof of Equation (6). We first recall the notation: c(s) and c̃(s) represent the emission factors of the original and the processed voxel, respectively; they are 3D vectors if we work with RGB emission. The scalar T(s) is the transmittance between the viewpoint and s (the visibility of point s) in the original volume, and T̃(s) is the visibility of s in a fully processed volume. ε(s) is the maximal difference of the emission (color) at point s between the processed and the original volume, i.e. it satisfies ‖c̃(s) − c(s)‖ ≤ ε(s).

Proving Equation (6) is equivalent to proving

‖c̃(s)T̃(s) − c(s)T(s)‖ ≤ ε(s)T̃(s) + ‖c(s)‖·|T̃(s) − T(s)|.

Using the triangle inequality, it follows that

‖c̃(s)T̃(s) − c(s)T(s)‖ = ‖(c̃(s) − c(s))T̃(s) + c(s)(T̃(s) − T(s))‖ ≤ ‖c̃(s) − c(s)‖·T̃(s) + ‖c(s)‖·|T̃(s) − T(s)|.

Since ‖c̃(s) − c(s)‖ ≤ ε(s) and T̃(s) ≥ 0, the first term is bounded by ε(s)T̃(s), which yields the claimed inequality. Using ‖c(s)‖ ≤ 1 and T̃(s) ≤ 1, we obtain the simpler conservative bound ε(s) + |T̃(s) − T(s)|. This concludes the proof.
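As an informal sanity check, the following sketch perturbs random RGB emissions by at most ε per sample and verifies the triangle‐inequality bound ‖c̃T̃ − cT‖ ≤ εT̃ + ‖c‖·|T̃ − T| numerically, taking ‖·‖ as the per‐channel maximum. The variable names and the max‐norm choice are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
c   = rng.uniform(0.0, 1.0, (n, 3))   # original RGB emission
T   = rng.uniform(0.0, 1.0, n)        # visibility in the original volume
T_t = rng.uniform(0.0, 1.0, n)        # visibility in the processed volume
eps = rng.uniform(0.0, 0.2, n)        # maximal emission change per sample

# Processed emission: perturb each channel by at most eps, keep in [0, 1].
c_t = np.clip(c + rng.uniform(-1.0, 1.0, (n, 3)) * eps[:, None], 0.0, 1.0)

lhs = np.abs(c_t * T_t[:, None] - c * T[:, None]).max(axis=1)
rhs = eps * T_t + np.abs(c).max(axis=1) * np.abs(T_t - T)
assert np.all(lhs <= rhs + 1e-12)
```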
In Section 3.2, we introduced Equation 14, which expresses the front‐to‐back accumulated opacity via the back‐to‐front accumulated opacity. The denominator in this equation becomes zero when the back‐to‐front accumulated opacity behind a voxel saturates to 1; in this case, the front‐to‐back accumulated opacity cannot be computed using this equation. To circumvent this problem, we can additionally track slabs whose accumulated opacity reaches 1. We accumulate A[i,N] normally until it reaches 1; we then store this position, reset the accumulation buffer to zero and start accumulating again, storing further saturation positions in the same way. Let i1 and i2 denote two consecutive stored positions, with i1 closer to the viewer than i2. After the traversal, when doing the correction of a voxel at position p, we first check whether p is farther from the viewer than i2. If it is, the voxel is fully occluded by the saturated slab in front of it, so its visibility is zero and it requires no processing. If p is farther from the viewer than i1 but closer than i2, we must tag this voxel for processing, unless its corresponding accumulated opacity has also reached 1. If p is closer to the viewer than i1, we proceed with the correction as described in Section 4.
In the supplementary material, there is an example of a back‐to‐front ray traversal, in which the front‐to‐back accumulated opacities are calculated. The example uses an arbitrary opacity profile, and is implemented as a Matlab script.
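As a complement to that Matlab script, here is a compact Python sketch of the underlying relation, assuming Equation 14 takes the standard multiplicative form A[1,i] = 1 − (1 − A[1,N])/(1 − A[i+1,N]). Positions where the denominator vanishes are reported as undefined; those are exactly the cases handled by the slab tracking described above:

```python
import numpy as np

def front_to_back_from_back_to_front(alpha, eps=1e-7):
    """Recover front-to-back accumulated opacities from a single
    back-to-front traversal (index 0 is nearest the viewer).

    Returns (A_fb, undefined): A_fb[i] is the opacity accumulated over
    samples [0, i]; undefined[i] marks positions where the suffix behind
    sample i is fully opaque and the quotient cannot be evaluated.
    """
    n = len(alpha)
    # Back-to-front accumulation: A_bf[i] covers samples [i, n-1].
    A_bf = np.zeros(n + 1)  # A_bf[n] = 0 for the empty suffix
    for i in range(n - 1, -1, -1):
        A_bf[i] = alpha[i] + (1.0 - alpha[i]) * A_bf[i + 1]
    total = A_bf[0]
    A_fb = np.empty(n)
    undefined = np.zeros(n, dtype=bool)
    for i in range(n):
        denom = 1.0 - A_bf[i + 1]
        if denom < eps:
            undefined[i] = True  # zero denominator: needs slab tracking
            A_fb[i] = 1.0        # conservative fallback
        else:
            A_fb[i] = 1.0 - (1.0 - total) / denom
    return A_fb, undefined
```

For opacity profiles that never saturate, the result matches a direct front‐to‐back accumulation of the same samples.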
Here we give pseudocode for the greedy accumulation algorithm (Algorithm 1) and its integration into the output‐sensitive filtering pipeline (Algorithm 2).
Veronika Solteszova, Åsmund Birkeland, Sergej Stoppel, Ivan Viola, Stefan Bruckner.