IEEE Comput Graph Appl. Author manuscript; available in PMC 2010 September 8.
PMCID: PMC2935693
NIHMSID: NIHMS50161

Extracting and Representing the Cortical Sulci

Cortical sulci are convoluted regions between cortical folds, deeply embedded in the surface of the brain. Because they separate functionally distinct brain regions and provide a natural topological partition of the anatomy, sulci serve as a basis for structural analysis of the brain and thus receive significant attention in the neuroscience community.1,2 Sulcal features help define regions of the cortex and provide readily appreciated boundaries for teaching lobar and functional topography. However, the structural complexity of sulci poses a significant challenge for modeling, impeding rapid progress in quantitative morphological analysis.

Figure 1 shows a typical sagittal 2D cross-section of magnetic resonance (MR) brain data. (A sagittal section lies parallel to the 2D plane separating the two brain hemispheres.) Recent research3,4 has made great progress in identifying and representing sulci, but so far automatic extraction and mathematical representation remain elusive.

Figure 1
Human brain section showing deep sulci. Deep fissures in the brain, sulci separate functionally distinct regions. They are complex 3D surface boundaries that partition the brain’s anatomy. Here a 2D section from a postmortem brain specimen cuts ...

In this article we propose a robust voxel-coding methodology to automatically extract and parameterize the sulcal surfaces from volume data sets. We describe sulci as a collection of 2D contours generated from successive data slices. The contours are parameter curves of the sulci. The algorithm includes two phases: contour generation and sulcal surface generation. Phase 1 performs the segmentation, initial contour extraction, and final contour formation. The contours are 2D skeletons extracted from the segmented region. In contrast to those generated by traditional methods,4 our skeletons are one-pixel-thick curves starting with the exterior boundary, rather than a set of disconnected points. They thus provide desirable properties for further processing. The endpoints of each contour can be landmarks that represent it uniquely. In Phase 2, the integration between slices and parametric representation results from testing adjacency relationships of the contour endpoints within adjacent slices to resolve ambiguity in contour connections.

The algorithm has the following features:

  • Efficiency: It generates all sulcal surfaces as parametric representations at the same time.
  • Flexibility: The number of sulci of interest, and thus the output, can be controlled easily.
  • Simplicity: The implementation doesn’t require any complicated mathematical computation.
  • Automation: All the procedures occur without manual interaction.

Background

A number of methods model cortical features. Most focus on the outer cortical surface,5 composed of a pattern of fissures and folds named sulci and gyri, respectively. The complexity and variability of sulci have made them difficult to model. Professionals use the resulting models to calculate morphological characteristics in health and disease, and to teach the complex and variable topography of the human cortex. However, the specific characteristics of cerebral sulci, which extend deeply inside the brain, have become increasingly important in brain mapping. Most approaches model the sulci geometrically, as the medial surfaces of 3D regions consisting of cerebrospinal fluid (CSF), which fills the sulci, and the grey/white matter4,6—two observable tissue types in brain images.

The 3D skeleton extraction method7 offers a powerful voxel-based technique for medial surface generation, successfully applied in computer vision for image compression and reconstruction, and in visualization for endoscopic navigation.7,8 Two major skeleton-extraction methods exist, based on minimum distance coding7 and boundary peeling, respectively. Mangin et al.4 generated sulcal surfaces using a boundary-peeling-based skeleton method, and we employed a similar approach in generating the sulci using minimum distance.7 Unfortunately, these methods result in a set of identified voxels, not a structured mathematical representation.

Although a mathematical representation of the surface can be obtained by approximation approaches, such as least-squares methods, the real problem persists: there isn’t enough information to determine how many and what kind of surfaces are needed to match the sulcal voxel set’s complexity. Since modeling of the sulci is a priority in quantitative analysis of brain data, we prefer to generate cerebral sulci as parametric representations. Parametric representations better suit further analyses of sulcal geometry.

With this in mind, Vaillant and Davatzikos3,6 proposed an active contour model supporting the parametric representation of cortical sulci. Their model simulates the sweeping process of active contours from the outside edge of the cortex into the deep folds by employing a deformation mechanism and the medial surface concept. Deformable models provide an efficient method for simulating physical processes.9 Their efficiency results from establishing a certain potential field or force field. The active contour model strictly depends on calculating a center-of-mass force. Furthermore, the model generates the initial active contours based on a complicated curvature map of the outer cortical surface. As Vaillant et al.3 pointed out, defining the mass function proves a complex problem, often requiring manual interaction.

Our goal was to create a simple, efficient method to extract sulcal surfaces and output them as parametric representations without manual interaction. In this article we introduce a pixel-coding method, called voxel-coding, that employs both skeleton extraction and deformable model mechanisms.

The core idea behind our method is that we represent the extension of sulci from the cortex’s exterior boundary into the deep extents with a minimum distance voxel-coding technique, similar to region growing or point propagation. Our method codes the pixels within the grown regions by how far away from the starting point they lie.

Contour generation

Contour generation employs a series of 2D slices. The algorithm has three steps: data preprocessing, extraction of initial contours, and final contour generation.

Data preprocessing

Three-dimensional MRI brain data sets consist of a sequence of 2D slices that present cross-sections in an arbitrary plane. All operations to generate contours are performed on slices. Data preprocessing segments the images, explicitly generating regions where we can extract 2D sulcal curves—contours.

Data preprocessing begins with removal of the scalp (Figure 2a). Next, the algorithm identifies regions of high intensity. Selecting high-intensity pixels with a threshold alone yields regions that are either disconnected because of noise or coalesced into a large, nonspecific region impossible to separate into individual components. Therefore, we apply a Gaussian filter several times to dilate the image prior to selecting a threshold for segmentation. After preprocessing, a threshold identifies the high-intensity pixels (Figure 2b).
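
As a rough illustration of this preprocessing step, the sketch below (Python, assuming NumPy and SciPy are available) smooths a scalp-stripped slice with repeated Gaussian filtering and then applies a global threshold. The filter width, number of passes, and threshold value are placeholder choices, not values prescribed by the article.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_slice(slice_img, sigma=1.0, passes=3, threshold=48.5):
    """Smooth a scalp-stripped 2D MR slice repeatedly, then threshold it.

    Sketch only: sigma and passes are illustrative choices, and the
    threshold (borrowed from the T1 example in the Results section)
    is data dependent.
    """
    smoothed = slice_img.astype(float)
    for _ in range(passes):
        smoothed = gaussian_filter(smoothed, sigma=sigma)
    high = smoothed >= threshold
    # Follow the labeling used later in the text: high-intensity pixels
    # become 0-pixels; all remaining pixels become the 1-pixels of interest.
    return np.where(high, 0, 1).astype(np.uint8)
```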

Figure 2
A series of generated images. (a) The result after scalp removal and (b) the result after preprocessing on (a). The color range from blue, green, dark red, to light red corresponds to pixels in increasing order of value. (c) Local maximum points with ...

The algorithm sets the high-intensity pixels selected by the threshold to zero in intensity, called 0-pixels, and sets the remaining pixels to one, called 1-pixels. We’re only interested in 1-pixels. We call two 1-pixels adjacent if they’re at least 6-connected, meaning they share at least a grid corner; we call a 1-pixel a boundary pixel if it’s 6-connected with a 0-pixel (for example, the red pixels in Figure 2e). Furthermore, we call a boundary pixel an exterior or interior boundary pixel depending on whether there exists a ray from it that doesn’t intersect any 1-pixel. The exterior boundary consists of a set of all exterior boundary pixels, while the interior boundary encompasses the set of all interior boundary pixels. The exterior boundary in Figure 2c consists of all the white pixels.
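
The exterior/interior distinction can be made concrete with a small sketch. Rather than the ray test described above, the following Python fragment flood-fills the 0-pixel background from the image border and classifies boundary 1-pixels by whether they touch that background; the 8-connected neighborhood is an assumption standing in for the adjacency definition used in the text.

```python
import numpy as np
from collections import deque

# An 8-connected neighborhood, used here as a stand-in for the
# adjacency definition in the text.
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def classify_boundaries(mask):
    """Split the boundary 1-pixels of a binary slice into exterior and
    interior sets.

    Instead of the ray test described in the text, this sketch
    flood-fills the 0-pixel background from the image border: 1-pixels
    touching that background are exterior boundary pixels, while
    1-pixels touching only enclosed 0-pixels are interior boundary pixels.
    """
    h, w = mask.shape
    outside = np.zeros((h, w), dtype=bool)
    queue = deque()
    for r in range(h):
        for c in range(w):
            if mask[r, c] == 0 and (r in (0, h - 1) or c in (0, w - 1)):
                outside[r, c] = True
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in NEIGHBORS:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] == 0 and not outside[nr, nc]:
                outside[nr, nc] = True
                queue.append((nr, nc))
    exterior, interior = set(), set()
    for r in range(h):
        for c in range(w):
            if mask[r, c] != 1:
                continue
            zero_nbrs = [(r + dr, c + dc) for dr, dc in NEIGHBORS
                         if 0 <= r + dr < h and 0 <= c + dc < w
                         and mask[r + dr, c + dc] == 0]
            if not zero_nbrs:
                continue
            if any(outside[nr, nc] for nr, nc in zero_nbrs):
                exterior.add((r, c))
            else:
                interior.add((r, c))
    return exterior, interior
```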

For later use, we also define the following: A connected sequence of points is a sequence of 1-pixels in which each element and its immediate successor are adjacent. The length of a sequence of points is the total number of points in the sequence. Given a group of connected sequences of points satisfying the condition that the longest sequence S and every other sequence share a subsequence of points starting from their endpoints, we refer to S as the tree trunk or trunk sequence, and to all other sequences as its branches or branch sequences. Hereafter, we consider only the longest sequence and the part of any branch sequence not shared with the trunk sequence.

Extraction of initial contours

Given a preprocessed image, we wish to derive a sulcal contour as a curve starting with the exterior boundary and extending deeply into the brain. This implementation employs the voxel-coding scheme.

Voxel-coding uses a recursive layer-by-layer coding and information search process, including four factors: a starting boundary, coding region, code criteria, and feature extraction. The starting boundary consists of a set of seeds. The propagation of seeds depends on identifying a connected region—the coding region.

Our voxel-coding technique uses distance from the original seeds to create a value field, as follows: First, the seeds—the pixels positioned on the exterior boundary—get a value of two (not one, in order to distinguish uncoded 1-pixels). The adjacent 1-pixels (6-connected) get a value of three. Recursively, the uncoded pixels adjacent to pixels with the value of n receive a value of n + 1. This process continues until no adjacent 1-pixels remain uncoded. Obviously, the larger the value of a pixel, the deeper the pixel’s location relative to the starting point.
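
The coding rule amounts to a breadth-first traversal of the 1-pixel region. The sketch below illustrates it under the same assumptions as the earlier fragments (binary slice labels and an 8-connected neighborhood):

```python
from collections import deque

def voxel_code(mask, seeds):
    """Breadth-first distance coding of a binary slice.

    mask: 2D array of 0s and 1s; seeds: iterable of (row, col) 1-pixels,
    here the exterior boundary pixels. Returns an integer field in which
    seeds hold 2, their uncoded 1-pixel neighbors 3, and so on; 1-pixels
    that are never reached keep the value 1.
    """
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    field = mask.astype(int)
    h, w = field.shape
    frontier = deque()
    for r, c in seeds:
        field[r, c] = 2
        frontier.append((r, c))
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in neighbors:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and field[nr, nc] == 1:
                field[nr, nc] = field[r, c] + 1
                frontier.append((nr, nc))
    return field
```

Seeding the same routine with the interior boundary pixels rather than the exterior ones produces the second minimum distance field used later for contour deformation.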

Figure 2b shows the result of voxel-coding where different colors indicate the pixel value distribution. The voxel-coding propagation process resembles region growing, but with an important difference: the grown region is encoded in pixel or voxel form according to specific criteria, such as a distance transform indicating the encoded elements’ distance from the starting seeds. The code of the elements within the region provides a specific potential field or approximate minimum distance field. We can extract useful features based on such a field, for example, centerline generation, shortest path extraction,7 and so forth.

We can extract the initial contours easily on the basis of the generated value field. First, we find all the local maximum points by simply comparing the adjacent points’ values (Figure 2c). Each local maximum point serves to generate an initial contour—a sequence of points—obtained as follows: Take the local maximum point, having a value of n, as the first point of the sequence. Select the second point as one of the adjacent points with the value n − 1 (this point must exist according to the coding rules). Recursively, suppose we choose the ith point with a value of n − (i − 1); its successor should be one of its adjacent points with the value of n − i. This process continues until we reach the exterior boundary. The generated sequence decreases strictly monotonically. Figure 2c illustrates local maximum points, and Figure 2d gives the corresponding initial contours.
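
The following sketch illustrates this descent: it collects the local maxima of the coded field and walks each one back to the exterior boundary by stepping, at every move, to a neighbor whose code is exactly one smaller. It assumes the field produced by the voxel-coding sketch above.

```python
def local_maxima(field):
    """Return coded pixels whose value is not exceeded by any neighbor."""
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    h, w = field.shape
    maxima = []
    for r in range(h):
        for c in range(w):
            if field[r, c] < 2:
                continue
            if all(field[r + dr, c + dc] <= field[r, c]
                   for dr, dc in neighbors
                   if 0 <= r + dr < h and 0 <= c + dc < w):
                maxima.append((r, c))
    return maxima

def initial_contour(field, start):
    """Trace a contour from a local maximum down to a value-2 seed pixel."""
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    h, w = field.shape
    r, c = start
    contour = [start]
    value = field[r, c]
    while value > 2:
        for dr, dc in neighbors:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and field[nr, nc] == value - 1:
                r, c, value = nr, nc, value - 1
                contour.append((r, c))
                break
        else:
            # No lower-valued neighbor; cannot happen under the coding rule.
            break
    return contour
```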

The generated initial contours have several advantages. The value of the first point of an initial contour determines the number of points and therefore its length. Different local maximum points correspond to different initial contours. However, once two such sequences share a common point, their remaining points must coincide, since the same criterion governs each subsequent point search. These properties provide a very efficient way of controlling the number of initial contours and of removing tiny branch sequences. For example, we can prune very short contours, much like cutting tree branches, if we’re only interested in the longer ones; alternatively, we can separate main trunk sequences from their branches.

Generating the value field and extracting the initial contours are two opposite processes: the former propagates codes inward from the boundary, while the latter retrieves a path back out. The initial contour consists of a sequence of adjacent points and is ready for parameterization. However, it isn’t centered relative to the boundary—the topic of the next section.

Final contour generation

To move all the initial contours toward the centerlines, we adopt a deformable curve process, again using a voxel-coding technique. The process differs from the one above in that we use all the pixels located on the interior boundary, rather than the exterior boundary, as the starting points. These pixels get a value of two. We then follow the same scheme to generate another minimum distance field.

The deformable curve process recursively propagates pixels starting with the initial contours. Each iteration includes three steps. In Step 1, we move the contours one voxel size to adjacent voxels with larger values along the direction normal to the curve. After Step 1, previously adjacent points in the curve may have moved to the same point, so we retain only one point and remove the other coincident ones for further processing. In Step 2, we adjust the position of an already moved point p by checking whether it lags behind its adjacent points by more than a user-given distance threshold; such lag occurs due to the influence of noise. If so, we replace point p by the midpoint of the adjacent points.

Generally, after the above two steps, the distance between adjacent points in the sequence may have increased. Hence, we need Step 3 to add points; this step performs a linear interpolation between points. The iteration process continues until no further move happens. Figure 2e shows the result of applying the simple deformation strategies described here to Figure 2d, and Figure 2f shows the final contours superimposed on the corresponding MR slice.
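
A heavily simplified sketch of this deformation loop appears below. Step 1 here greedily moves each point to the adjacent pixel with the largest code in the interior-seeded field (a stand-in for the paper's move along the curve normal), Step 2 drops coincident points, and Step 3 refills gaps by linear interpolation; the noise-correction midpoint test is omitted for brevity, and max_iters is a hypothetical safeguard.

```python
def deform_contour(contour, interior_field, max_iters=100):
    """Pull an initial contour toward the centerline, simplified sketch.

    contour: list of (row, col) points; interior_field: the second
    voxel-coded field, seeded from the interior boundary.
    """
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    h, w = interior_field.shape
    pts = list(contour)
    for _ in range(max_iters):
        moved = []
        changed = False
        # Step 1 (simplified): move each point toward a larger code.
        for r, c in pts:
            best_val, best_pt = interior_field[r, c], (r, c)
            for dr, dc in neighbors:
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and interior_field[nr, nc] > best_val:
                    best_val, best_pt = interior_field[nr, nc], (nr, nc)
            if best_pt != (r, c):
                changed = True
            moved.append(best_pt)
        # Step 2: keep only one copy of consecutive coincident points.
        deduped = [p for i, p in enumerate(moved) if i == 0 or p != moved[i - 1]]
        # Step 3: refill gaps wider than one pixel by linear interpolation.
        refined = [deduped[0]]
        for p, q in zip(deduped, deduped[1:]):
            gap = max(abs(q[0] - p[0]), abs(q[1] - p[1]))
            for t in range(1, gap):
                refined.append((round(p[0] + (q[0] - p[0]) * t / gap),
                                round(p[1] + (q[1] - p[1]) * t / gap)))
            refined.append(q)
        pts = refined
        if not changed:
            break
    return pts
```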

Sulcal surface integration and parametric representation

We can interpret cerebral sulcal surfaces as 3D geometrical surfaces originating from the exterior boundary—the surface of the cerebral cortex—and extending deeply into the brain. The contours generated so far are 2D. Furthermore, each final contour’s endpoint, the point that lies on the exterior boundary, determines the origin of the curve. We can assume that two contours are adjacent as sampled parametric curves if their endpoints are adjacent; we regard endpoints as adjacent if they lie closest to each other among all contours’ endpoints. In addition, each contour lacks branches, because of its properties discussed in the subsection “Extraction of initial contours.”

On the basis of the above analyses, we can use the following scheme to generate sulcal surfaces: Let S_i be the ith image (i = 0, 1, …, N), where N + 1 equals the number of images in the volume, and let C(S_i) = {C_0^i, C_1^i, …, C_k^i, …, C_n^i} denote all the contours generated in slice S_i. Our reconstruction algorithm starts with the curve C_0^0 in S_0. To generate a surface, we take an unprocessed curve C_k^i as the first parametric curve from S_i. (Assume we have already considered all the curves in S_0 through S_{i−1}, and the curves C_0^i through C_{k−1}^i in S_i, for previous surface generation.) We then search for the next parametric curve in S_{i+1} by comparing the endpoint of a candidate curve C_j^{i+1} with C_k^i’s endpoint. If the distance lies below a given threshold, we take C_j^{i+1} as the next parametric curve of the active surface and flag it so that we need consider it no further; a recursive search then continues from C_j^{i+1} (note, not from C_k^i). Otherwise, we check other curves. If no curve in S_{i+1} matches C_k^i, no further contour can be integrated into the active surface. This process continues until all the contours have become starting curves or constituent curves of sulcal surfaces.
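
The matching scheme can be sketched as follows, under two stated assumptions: each contour's last point is taken to be its exterior-boundary endpoint (as the tracing sketch above produces), and the hypothetical max_endpoint_gap plays the role of the distance threshold. Where the text accepts any sufficiently close endpoint, this version picks the closest one.

```python
def integrate_surfaces(contours_per_slice, max_endpoint_gap=3.0):
    """Group 2D contours from successive slices into sulcal surfaces.

    contours_per_slice: list over slices; each entry is a list of
    contours, each contour a list of (row, col) points whose last point
    is assumed to be its exterior-boundary endpoint. Returns a list of
    surfaces, each a list of parametric curves from consecutive slices.
    """
    used = [[False] * len(cs) for cs in contours_per_slice]
    surfaces = []
    for i, contours in enumerate(contours_per_slice):
        for k, contour in enumerate(contours):
            if used[i][k]:
                continue
            used[i][k] = True
            surface = [contour]
            cur, slice_idx = contour, i
            # Chain forward while the next slice holds an unused contour
            # whose endpoint is close enough to the current endpoint.
            while slice_idx + 1 < len(contours_per_slice):
                candidates = contours_per_slice[slice_idx + 1]
                best_j, best_d = None, max_endpoint_gap
                for j, cand in enumerate(candidates):
                    if used[slice_idx + 1][j]:
                        continue
                    d = ((cur[-1][0] - cand[-1][0]) ** 2 +
                         (cur[-1][1] - cand[-1][1]) ** 2) ** 0.5
                    if d <= best_d:
                        best_j, best_d = j, d
                if best_j is None:
                    break
                used[slice_idx + 1][best_j] = True
                surface.append(candidates[best_j])
                cur, slice_idx = candidates[best_j], slice_idx + 1
            surfaces.append(surface)
    return surfaces
```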

By comparison, traditional methods leave ambiguities in reconstructing 3D surfaces based on 2D contours, due to a branching problem. Once we’ve obtained all the parametric curves, a variety of interpolation methods produce a mathematical representation.10

Results and comparison with manually derived surface models

We tested our algorithms using three data sets. Experiments then determined the degree of agreement between automatically extracted and manually derived 3D surface models.

3D display results

We used data from three different human brains to test our algorithms. Figures 3 through 6 show surfaces derived from a 3D T1-weighted fast SPGR (spoiled gradient echo) MRI data set, acquired at 256 × 256 × 124 resolution on a General Electric Signa 1.5T clinical scanner. We used a repetition time and echo time of 14.3/3.2 ms, a flip angle of 35 degrees, and a 25-cm field of view (FOV). Contiguous 1-mm slices were acquired covering the entire brain. Figures 3 and 4 show the wireframe and solid frame displays of the sulcal surfaces generated in the right hemisphere. Figures 5 and 6 represent sulcal surfaces generated in the left hemisphere in wireframe and solid frame, respectively. Figures 3 through 6 show only sulcal surfaces with a parameter curve length exceeding 20. The intensity threshold for sulcal surface segmentation is 48.5. Figure 7 shows sulcal surfaces with a length of at least 15 pixels extracted from six slices of a 256 × 256 × 50 resolution T2-weighted 8-bit MRI data set from a young normal subject. The threshold is 75. Figure 8 shows the results from a cryosection data set with a threshold of 118.0 and a length threshold of 30. To generate the cryosection data set, we sectioned a postmortem brain specimen and imaged it at 1,024 × 1,024 × 1,300 resolution.

Figure 3
Sulcal surfaces extracted from slices 205 to 212 (right hemisphere) of a sagittally sampled T1-weighted MRI data set. All major medial wall sulci are represented (cingulate, central, and parieto-occipital sulci, and their major secondary branches). We ...
Figure 4
The counterpart of Figure 3 in solid frame.
Figure 5
Sulcal surfaces extracted from slices 178 to 183 (left hemisphere) of the same T1 data set. In this brain hemisphere, although an anatomical variant occurs (a doubled cingulate sulcus), both of its components are correctly extracted. By contrast with ...
Figure 6
The counterpart of Figure 5 with multiple slices in solid frame, shown from another viewpoint.
Figure 7
Sulcal surfaces extracted from slices 001 to 006 (left hemisphere) of a T2-weighted MRI data set. Despite differences in the signal characteristics (T2-weighted versus T1-weighted MRI), all major parietal, paracentral, and temporal lobe sulci, including ...
Figure 8
Sulcal surfaces extracted from slices 980 to 988 (left hemisphere) of a sagittally sampled color cryosection data set. Despite considerable differences in the imaging process (24-bit color RGB image data rather than 8-bit grayscale MRI data), all major ...

For clarity, we intentionally increased the interval between image slices. For reference, we also show one cross-section with red points indicating the boundary for the deformable curve model and blue points indicating the potential field. Generating sulcal surfaces took 14 seconds for Figures 3 and 4, 10 seconds for Figures 5 and 6, 1 second for Figure 7, and 18 seconds for Figure 8, measured when running on a Silicon Graphics Power Onyx R10000 CPU with Infinite Reality graphics.

Comparisons

We performed experiments to determine the degree of agreement between automatically extracted and manually derived 3D surface models. Summary results of these experiments appear in Figure 9 (next page). We manually traced specific sulci in corresponding serial MRI sections using an interactive contouring program and converted the resulting outlines into parametric mesh format using previously published algorithms.1 We traced each sulcus a total of six times, then compared surface models obtained in multiple trials with each other and with automatically extracted models of the same sulcus.

Figure 9
Comparison of automatically extracted and manually derived sulcal models. We derived automatically extracted and manually derived parametric mesh models of the precentral sulcus from the 3D MRI scan of a randomly selected subject. A map of the 3D distances ...

Morphometric data

Using algorithms developed for use in morphometric projects, we determined measures of surface curvature, 3D surface area, extent, and fractal dimension (an index of surface complexity) for each parametric mesh model. Means and standard deviations of these surface descriptors appear in Figure 9f. We selected precentral (I) and cingulate (II) sulci because, in addition to their functional significance, they represent deep sulcal surfaces with a comparatively simple and a relatively complex morphology, respectively.

Figure 9 illustrates the strong agreement between manual and automatically derived models. We calculated a map of the 3D distances between the automatic and a randomly selected manually derived surface model (Figure 9a). Only the external limit of the sulcus, where it reaches the external cortical surface, departs more than 0.5 mm (1 pixel) from the manually derived model. Figure 9b shows the inherent variability in the set of six manual tracings of the same sulcus. This variability map encodes the root mean square deviation of the mesh models obtained in multiple trials from an average surface derived from the same data. Parallel studies have shown manually derived models to be reproducible to within 0.5 to 1.0 mm in 3D MRI data.

Probabilistic validation

Statistical validation of automated against manual methods requires careful design, since both approaches include error. We therefore assessed the automated algorithm within a probabilistic framework incorporating a measure of the degree of manual reproducibility for each region of a sulcus. Clearly, using an automated algorithm proves acceptable when the resulting models are statistically indistinguishable from manually derived models of the same structure. We modified a pattern recognition algorithm2 to detect whether structures extracted automatically were statistically distinguishable from those obtained manually in multiple trials. We modeled surface variations as a multivariate anisotropic random field, estimating the covariance tensor from the manual data. Figure 9c shows that only the medial limit of the automatically extracted precentral sulcus was distinguishable from manual data (p < 0.05). The rest of the structure exhibited strong agreement with manual data, with a spatial accuracy better than 0.5 mm everywhere (Figure 9a).

We found similar results for the cingulate sulcus, a structure that displays greater structural complexity (Figure 9f). In this case, the 3D distance between automated and manually derived models of the cingulate never exceeded 0.5 mm, except at the posterior limit of the structure (Figure 9d). We also calculated inherent variability when tracing the structure in multiple trials, shown in Figure 9e. Strong agreement between manually and automatically derived models, in addition to the high reproducibility of the automated approach, suggests that the algorithm may offer substantial advantages in morphometric applications analyzing regional brain structure.

Conclusions

Our efficient voxel-coding methodology for sulcal surface extraction doesn’t require manual interaction in generating parametric representations. Sulcal characteristics motivated the methodology—the structures are convoluted ribbons starting at the external limit of the cerebral cortex and extending deeply into the brain. Voxel-coding successfully simulates these features by combining a variety of techniques. 2D generation of the contours avoids a complicated 3D search and computation.

We employed voxel-coding for both initial contour extraction and contour deformation. The properties of the initial contours provide a robust principle for unambiguous sulcal surface reconstruction. Numerous comparisons with manually derived surface models showed strong agreement between manually and automatically derived models.

Because sulcal boundaries separate functionally significant regions of the cortex, identifying them efficiently helps communicate differences among human brains. The voxel-coding techniques presented here provide a potential method for feature extraction and geometric modeling from volume data. Research is under way to apply these techniques to 3D volumes instead of 2D images for direct extraction of 3D sulcal surfaces and other cerebral components. Doing so preserves the advantages of the current methods—efficiency, flexibility, simplicity, and automation—and lets us model more complicated cases.

Acknowledgments

Special thanks to Colin Holmes for sharing his experience, providing related information, and communicating his understanding of cerebral sulci, and to Parvati Dev for careful proofreading and suggestions. This work has been supported by the National Science Foundation DBI 9601356 and the National Institutes of Health NCRR RR05956, NS38753, and RR13642.

Biographies

Yong Zhou is Research Assistant Professor at the University of California, Los Angeles. He received his MS in mathematics from Zhejiang University in 1991 and his PhD in computer science from Tsinghua University, China, in 1995. His research interests include computer graphics, volume visualization, geometrical modeling, computer vision, image processing, and applications to medical images. He is a member of the IEEE Computer Society.

Paul Thompson is Assistant Professor of Neurology at the University of California, Los Angeles. He received an MA in mathematics from Oxford University, England, and a BA in Greek and Latin languages, also from Oxford University. He was a Fulbright Scholar and a Fellow of the Howard Hughes Medical Institute from 1993 to 1998, and received his PhD in neuroscience from UCLA in 1998. His awards include the SPIE Medical Imaging Best Paper Award in 1997 and the Di Chiro Outstanding Scientific Paper Award in 1998. His research interests include brain mapping in human populations, mathematical algorithms for image analysis, anatomical modeling, brain atlas construction, image registration, pathology detection, and visualization of brain structure and function.

Arthur W. Toga is a Professor of Neurology at the UCLA School of Medicine. He is the editor-in-chief of the journal NeuroImage. He is director of the Laboratory of Neuro Imaging, associate director of the Brain Mapping Division of the Neuropsychiatric Institute, and assistant chairman of the Department for Research Affairs. His research interests include metabolic brain mapping, cognitive neuroscience, and brain structure visualization and modeling. Toga received his MS and PhD degrees from St. Louis University. He is a member of the ACM and the IEEE Computer Society.


Footnotes

Current sulcal modeling techniques depend on manual interpretation. We propose a robust voxel-coding method to automatically extract and parameterize the sulcal surfaces from volume data sets.

References

1. Thompson PM, Schwartz C, Toga AW. High-Resolution Random Mesh Algorithms for Creating a Probabilistic 3D Surface Atlas of the Human Brain. NeuroImage. 1996 March;3(1):19–34. [PubMed]
2. Thompson PM, et al. Detection and Mapping of Abnormal Brain Structure with a Probabilistic Atlas of Cortical Surfaces. J of Computer Assisted Tomography. 1997;21(4):567–581. [PubMed]
3. Vaillant M, Davatzikos C, Bryan RN. Finding 3D Parametric Representations of the Deep Cortical Folds. Proc. IEEE Workshop on Mathematical Methods in Biomedical Image Analysis; IEEE Computer Society Press, Los Alamitos, Calif. June 1996; pp. 151–159.
4. Mangin JF, et al. Automatic Construction of an Attributed Relational Graph Representing the Cortex Topography Using Homotopic Transformations. SPIE 2299; SPIE, Bellingham, Washington. 1994. pp. 110–121.
5. Payne BA, Toga AW. Distance Field Manipulation of Surface Models. IEEE Computer Graphics and Applications. 1992 Jan;12(1):65–71.
6. Vaillant M, Davatzikos C. Finding Parametric Representations of the Cortical Sulci Using an Active Contour Model. Medical Image Analysis. 1997;1(4):295–315. [PubMed]
7. Zhou Y, Kaufman A, Toga AW. 3D Skeleton and Centerline Generation Based on An Approximate Minimum Distance Field. The Visual Computer. 1998;14(7):303–314.
8. Hong L, et al. Virtual Voyage: Interactive Navigation in the Human Colon. Proc. of Siggraph 97; ACM Press, New York. Aug. 1997; pp. 27–34.
9. Terzopoulos D, Fleischer K. Deformable Models. The Visual Computer. 1988;4(6):306–331.
10. Farin G. Curves and Surfaces for Computer Aided Geometric Design. Academic Press; Boston: 1990.