Improving the effectiveness of spatial shape feature classification from 3D lidar data is highly relevant because such classification is widely used as a fundamental step towards higher-level scene understanding challenges in autonomous vehicles and terrestrial robots. In this sense, computing neighborhoods for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation, where points in each non-overlapping voxel of a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures, as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular, and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors, as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of the voxel-based neighborhood.
Three-dimensional (3D) lidar sensors are a key technology for navigation, localization, mapping and scene understanding in novel ground vehicle systems such as autonomous cars, search and rescue robots, and planetary exploration rovers. One major limitation regarding the use of lidar technology in these challenging applications is the time and computational resources required to process the dense point clouds generated by these sensors.
Classification techniques involving point clouds are used extensively and can be categorized in many ways. For instance, airborne sensors can use elevation and flatness characteristics to classify roof surfaces and urban objects [5,6,7], whereas terrestrial scans are affected by obstructions and varying point density. Furthermore, algorithms have been proposed to identify particular object types, such as vehicles, buildings or trees [9,10], or to classify geometric primitives at point level. In this sense, while some methods segment the cloud before classifying points within the resulting clusters [12,13], others perform classification directly on scan points. Moreover, different machine learning descriptors have been considered (e.g., histograms [4,8] and conditional random fields [14,15]). In particular, many solutions rely on supervised learning classifiers such as Support Vector Machines (SVM) [12,16,17,18], Gaussian Processes (GP) [19,20], or Gaussian Mixture Models (GMM) [11,21,22,23].
This work focuses on improving the effectiveness, both in computational load and accuracy, of supervised learning classification of spatial shape features (i.e., tubular, planar or scatter shapes) obtained from covariance analysis. This is very relevant because classification of primitive geometric features is largely used as a fundamental step towards higher level scene understanding problems. For instance, classifying points into coarse geometric categories such as vertical or horizontal has been proposed as the first layer of a hierarchical methodology to process complex urban scenes. Furthermore, classification of scan points prior to segmentation is useful to process objects with unclear boundaries, such as ground, vegetation and tree crowns. In this sense, spatial shape features can describe the shape of objects for later contextual classification. Thus, classification of spatial shape features based on principal component analysis (PCA) is a constituent process in recent scene processing methods [4,13,18,23,25].
Many classification techniques are point-wise in that they compute features for every point in a cloud by using the points within its local neighborhood, the support region. The k-Nearest Neighbors (KNN) algorithm can produce irregular support regions whose volume depends on the varying sampling density of objects and surfaces from terrestrial scans. For example, KNN has been used to compare the performance of several classifiers and to classify surfaces as planar or non-planar. The KNN support volume can be limited by setting a fixed radius bound. Furthermore, ellipsoidal support regions of adaptive sizes, denoted as super-voxels, can be built iteratively based on point characteristics [14,27]. Other point-wise classification techniques adopt regular support regions by searching for all neighbors within a given radius [8,9,11,20]. In general, point-wise techniques imply a high computational load. This is why some authors have proposed downsampling techniques to reduce the amount of data in the raw point cloud [14,28].
Grid representations and voxels have also been considered to speed up point cloud classification. In some solutions, grids serve to segment points prior to point-wise classification. For instance, one method computes segmentation by projecting non-ground points on a 2D grid, while another uses voxels for defining groups of points that are later classified with a Neural Network (NN) supervised learning method. Some authors have proposed computing features for points in a voxel by considering support regions defined by neighboring voxels. One approach computes PCA for each voxel with a support region defined by its 26-neighbors; descriptors are used both for segmentation (i.e., voxel clusters) and for later classification of the set of points within a cluster. In another approach, the feature vector for each voxel is obtained from a support region that includes a number of surrounding voxels; in this case, features are not employed for classification but for mapping voxels to a color space used for segmentation. Neither of these approaches computes features to classify points within a voxel.
In a previous work, we proposed an NN supervised learning formalism for classification of spatial shape features in lidar point clouds. Our interest was to use this classification method for object segmentation and for the construction of 2D occupancy grids for autonomous navigation. In order to reduce the computational load of the NN classifier, we implemented a computationally simple voxel-based neighborhood approach where all points in each non-overlapping voxel of a regular grid were assigned to the same class by considering features within a support region defined only by the voxel itself. That work presented promising classification results in a natural environment according to visual validation by a human expert. These preliminary results demand further analysis of the NN method with performance metrics and considering other types of environments and sensors. More importantly, it would be interesting to generalize voxel-based neighborhood so that it can be used with other supervised classifiers.
This paper extends that work by addressing these questions. In particular, we analyze the NN classification method by proposing a new general framework for implementing and comparing different supervised learning classifiers that develops the voxel-based neighborhood concept. This original contribution defines offline training and online classification procedures as well as five alternative PCA-based feature vector definitions. We focus on the spatial shape classes usually found in the literature: scatter, tubular, and planar. In addition, we evaluate the feasibility of the voxel-based neighborhood concept for classification of terrestrial scene scans by implementing our NN method and three other classifiers commonly found in scene classification applications: SVM, GP, and GMM. A comparative performance analysis has been carried out with experimental datasets from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood.
The rest of the paper is organized as follows. The next section reviews supervised learning methods that will be considered in the comparative analysis. Then, Section 3 proposes a general voxel-based neighborhood approach for supervised learning classification of spatial shape features. Section 4 describes the experimental setup and methodology for performance analysis offered in Section 5, which discusses results for different classifiers and feature vector definitions. The paper closes with the conclusions section.
This section briefly reviews supervised learning methods that have been used in the literature for point cloud scene classification: SVM, GP, GMM, and NN.
The purpose of SVM learning is to find a hyperplane that separates the dataset into a discrete predefined set of classes consistent with labeled training patterns. When patterns are not linearly separable, SVM transforms original data into a new space and uses a kernel function for classification. SVM has shown good generalization even with a reduced training dataset, although its performance can be significantly affected by parametrization. Apart from the definition of the kernel function, SVM uses a box constraint, which is a parameter that controls the maximum penalty imposed on margin-violating observations and contributes to prevent overfitting.
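As a minimal illustration of these ideas (an assumed scikit-learn sketch, not the parametrization used in this paper), the following trains an RBF-kernel SVM on a toy dataset that is not linearly separable; the box constraint corresponds to the `C` parameter:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two toy classes that are not linearly separable: an inner and an outer ring.
r = np.concatenate([rng.uniform(0.0, 1.0, 50), rng.uniform(2.0, 3.0, 50)])
theta = rng.uniform(0.0, 2 * np.pi, 100)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
y = np.concatenate([np.zeros(50), np.ones(50)])

# The RBF kernel maps the data into a space where the rings become separable;
# the box constraint C bounds the penalty on margin violations.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)
acc = clf.score(X, y)
```

Because the rings are separated by a clear radial gap, the RBF kernel separates them almost perfectly regardless of the exact value of C.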
SVM has been applied to classify urban point clouds into ground, and planar and non-planar points on the ground . In this application, every point is evaluated together with its KNN based on covariance analysis that uses a linear combination of eigenvalues and a Radial Basis kernel function. Furthermore, the same kernel function with SVM has been applied to lidar data in intelligent vehicles to detect vegetation  and to classify clusters of points as urban objects [4,12,16].
GP is a generalization of the Gaussian probability distribution that can be interpreted as a Bayesian version of the SVM method. Each class is modeled as a GP where a covariance function (kernel) is trained to estimate its nonparametric underlying distribution. The problem of learning in GP is exactly the problem of finding suitable parameters (called hyperparameters) for the covariance and mean functions that best model the training input dataset. Generally, the GP method requires defining the following: the number of function evaluations, a covariance function, an inference method, a mean function, a likelihood function, and the initialization of the hyperparameters.
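A hedged sketch of this learning problem with scikit-learn (the covariance function and data here are illustrative assumptions, not the setup used in the paper): the kernel hyperparameters are fitted by maximizing the marginal likelihood of the training data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy 1D binary problem: label is 1 for inputs above 5.
X = np.linspace(0.0, 10.0, 20).reshape(-1, 1)
y = (X.ravel() > 5.0).astype(int)

# The covariance function; its hyperparameters (scale, length scale) are the
# quantities optimized during training via the log marginal likelihood.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gpc = GaussianProcessClassifier(kernel=kernel).fit(X, y)

pred = gpc.predict([[1.0], [9.0]])
```

The fitted classifier assigns the two test inputs to the low and high classes, respectively.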
GP has been applied for real-time ground segmentation by considering the relative height of lidar points from a land vehicle. Moreover, a combination of GP and SVM with PCA has been proposed to classify terrain as traversable or non-traversable by computing two features representing texture and slope for every point.
A GMM is a probabilistic model that uses a mixture of Gaussian probability distributions to represent subpopulations within a population. In the case of more than two classes, a different GMM is inferred for each class. Then, the learning algorithm tunes the weight, mean, and covariance matrices of a mixture of Gaussian components for each GMM. The training process finds, for each GMM, the mixture parameters that maximize the likelihood of the training data.
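The per-class scheme can be sketched as follows (an illustrative toy example, not the configuration used in the paper): one GMM is fitted by EM for each class, and a new pattern is assigned to the class whose mixture yields the highest log-likelihood.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy 3D feature samples for two hypothetical classes.
Xa = rng.normal(0.0, 0.3, (100, 3))
Xb = rng.normal(2.0, 0.3, (100, 3))

# One GMM per class; GaussianMixture fits weights, means, and covariances by EM.
gmm_a = GaussianMixture(n_components=2, random_state=0).fit(Xa)
gmm_b = GaussianMixture(n_components=2, random_state=0).fit(Xb)

# Classification: pick the class whose mixture scores the pattern highest.
x_new = np.array([[1.9, 2.1, 2.0]])
scores = [gmm_a.score_samples(x_new)[0], gmm_b.score_samples(x_new)[0]]
label = int(np.argmax(scores))  # 0 -> class a, 1 -> class b
```

Here the new pattern lies near the second class mean, so its mixture gives the higher log-likelihood.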
Lalonde et al. used GMM with the expectation maximization (EM) algorithm to classify lidar points into scatter, planar, and tubular classes according to saliency features. GMM has also been applied with color and spatial features for pixel-wise segmentation of road images and for object and background classification in point clouds.
The multi-layer perceptron (MLP) is a type of NN commonly used in supervised learning. Implementing an MLP requires a definition of the network topology (i.e., the number of layers and neurons), the transfer function in every layer, the back-propagation learning algorithm, and the learning constant.
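For illustration, a minimal scikit-learn MLP shows how these four choices map to parameters (the topology, transfer function, solver, and learning constant below are illustrative assumptions, not the configuration used in this paper):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Topology: one hidden layer of 10 neurons; transfer function: tanh;
# learning algorithm: stochastic gradient descent (back-propagation);
# learning constant: the initial learning rate.
mlp = MLPClassifier(hidden_layer_sizes=(10,), activation="tanh",
                    solver="sgd", learning_rate_init=0.05,
                    max_iter=3000, random_state=0)
mlp.fit(X, y)
acc = mlp.score(X, y)
```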
MLPs have been used to classify urban objects from non-ground points distributed within point clusters and voxels. Furthermore, we proposed an MLP formalism for classifying spatial shape features from natural environments. Besides, the problem of classifying vehicles represented as point clouds has been addressed with a combination of NN and genetic algorithms.
This section proposes a voxel-based geometric pattern classification approach which can be generally used by supervised learning methods. General offline training and online classification procedures are detailed. Moreover, five alternative feature vector definitions are given to classify voxels as three spatial shape classes: scatter, tubular, and planar. Furthermore, data structures are proposed for the implementation of the point cloud and the input dataset.
In general, classifiers produce a score to indicate the degree to which a pattern is a member of a class. For an input space with N patterns, the input dataset is defined as D = {(x_i, y_i)}, with i = 1, …, N, where x_i is the ith input pattern and y_i represents one of the target classes, with y_i ∈ {ω_1, …, ω_K}. The components of x_i are computed according to a feature vector definition. Supervised learning needs a training dataset whose input patterns have been previously labeled with their corresponding target classes.
In this work, the goal is to classify scene points into three classes (i.e., K = 3): {ω_1, ω_2, ω_3}, where ω_1, ω_2, and ω_3 correspond to scatter, tubular, and planar shapes, respectively. By using voxel-based neighborhood, all points within a voxel are assigned to the same class. With this aim, the point cloud in Cartesian coordinates is voxelized into a 3D grid of regular cubic voxels of edge E. Edge size depends on the scale of the spatial shapes to be detected in the point cloud. Only those voxels containing more points than a threshold ρ are considered to be significant for classification. Thus, the size N of the input dataset is the number of significant voxels.
General training and classification procedures particularized for voxel-based neighborhood are shown in Figure 1. Training is an offline process that has to be done once for a given classifier, whereas classification is performed online for each new point cloud. The training procedure produces a multi-class classifier configuration consisting of a set of classifiers that will be used in the classification procedure. Moreover, the choice of a feature vector definition and a particular classification method must be the same for the training and classification procedures.
A data structure V is defined to contain the input dataset. When all target-class values in V have been set, either manually or automatically, it is considered a “classified V”. An implementation of V is described in Section 3.4.
The training procedure (see Figure 1a) uses a point cloud in Cartesian coordinates where the geometric classes must be represented and discernible. After voxelization, the N significant voxels in the 3D grid are manually labeled with their corresponding class by a human supervisor. Then, a classified V data structure is built from the labeled voxels by computing the input patterns for a particular choice of feature vector definition (e.g., one of the definitions proposed in Section 3.3). Training is performed for a given classification method with its particular parameters, where a different configuration is inferred for each class. The output of the training procedure is the trained classifier configuration.
The goal of the online classification procedure (see Figure 1b) is to classify a new point cloud. The voxelized point cloud is used to create the V data structure, with input patterns computed with the same feature vector definition as in the training procedure. In the classification step, the trained classifier configuration given by the training procedure completes the classified V by assigning to each voxel the class with the highest classifier score. With voxel-based neighborhood, the classification for each voxel is inherited by all points within its limits.
The local spatial distribution of all the points within a voxel is obtained as a decomposition in the principal components of the covariance matrix from point Cartesian coordinates. These principal components or eigenvalues are sorted in ascending order as λ1 ≤ λ2 ≤ λ3.
This definition takes into account that scatterness has no dominant direction (λ1 ≈ λ2 ≈ λ3), tubularness shows alignment in one dominant direction (λ3 ≫ λ2), and planarness has two dominant directions (λ2 ≈ λ3 ≫ λ1).
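These saliency relations can be checked numerically. The sketch below (with synthetic voxel contents as a stand-in for real scan points) computes the ascending eigenvalues of the covariance of the points in a voxel:

```python
import numpy as np

def voxel_eigenvalues(points):
    """Ascending eigenvalues of the covariance matrix of the points in one voxel."""
    cov = np.cov(points.T)          # 3x3 covariance of Cartesian coordinates
    return np.linalg.eigvalsh(cov)  # eigvalsh returns eigenvalues in ascending order

rng = np.random.default_rng(3)
# Scatter-like voxel: no dominant direction.
scatter = rng.normal(0.0, 1.0, (500, 3))
# Tubular-like voxel: points spread along a single axis.
tubular = np.column_stack([rng.normal(0.0, 0.01, (500, 2)),
                           rng.uniform(-1.0, 1.0, 500)])

lam_s = voxel_eigenvalues(scatter)  # lam_s[0] ~ lam_s[1] ~ lam_s[2]
lam_t = voxel_eigenvalues(tubular)  # lam_t[2] >> lam_t[1]
```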
Nevertheless, classifier convergence and performance can be affected by the definition and scaling of the feature vector. Thus, variants of Equation (1) based on the normalization and linear combination of eigenvalues could improve the performance of a particular classifier. In particular, five feature vector definitions are considered in this work:
In the normalized definitions, the overline over a value c denotes normalization of this value in [0, 1] with respect to a 95% confidence interval. This normalization is computed as follows:
where the normalization bounds are given by the rounded integer number of significant voxels that form the central 95% of the distribution of c.
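One plausible reading of this normalization, sketched below under the assumption that the central 95% interval is delimited by the 2.5% and 97.5% percentiles and that values outside it are clipped, is:

```python
import numpy as np

def normalize_95(c):
    """Normalize values of c to [0, 1] w.r.t. the central 95% of its distribution.

    Hedged reconstruction: bounds are the 2.5% and 97.5% percentiles over the
    significant voxels, and out-of-interval values are clipped to [0, 1].
    """
    lo, hi = np.percentile(c, [2.5, 97.5])
    return np.clip((c - lo) / (hi - lo), 0.0, 1.0)

c = np.array([0.0, 1.0, 2.0, 3.0, 100.0])  # one outlier dominates the raw range
c_bar = normalize_95(c)
```

The outlier is clipped to 1.0 instead of compressing the rest of the values towards zero, which is the point of normalizing over a confidence interval rather than the full range.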
The input patterns are computed by using the selected feature vector definition with the eigenvalues given by the covariance matrix corresponding to the points within the ith significant voxel.
In order to represent the input dataset, the classification data structure V must be related to a list of Cartesian point cloud coordinates C. In particular, efficient access to the list of points within each voxel is required both to compute the input patterns and to inherit the classification by scan points. With this purpose, this section proposes two data structures that implement the point cloud C and the input dataset V, respectively.
Then, C is defined as a sorted list of all scan points, where the jth element has the following data:
The structure V that implements the input dataset is defined as a list of N elements, where the ith element corresponds to a significant voxel and contains:
The computation of these data structures is as follows. First, all scan points in C are indexed with their corresponding voxel index, which is also used to sort the list. After that, if there are more than ρ consecutive elements in C with the same index number, then a new entry for that voxel is created in V. After voxel classification, points in C with the same voxel index inherit the target class of the corresponding voxel in V. Points in non-significant voxels will remain unclassified (i.e., with a null value in the target class field).
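The construction of C and V described above can be sketched as follows (a simplified Python reconstruction; field names and the dictionary representation are illustrative, not the authors' implementation):

```python
import numpy as np

def build_structures(points, E, rho):
    """Build the C and V structures: index points by voxel, sort the list,
    and create a V entry for every voxel with more than rho points."""
    idx = np.floor(points / E).astype(np.int64)            # 3D voxel index per point
    order = np.lexsort((idx[:, 2], idx[:, 1], idx[:, 0]))  # sort points by voxel index
    points, idx = points[order], idx[order]
    # C: sorted list of scan points with voxel index and (initially null) class.
    C = [{"xyz": p, "voxel": tuple(v), "label": None} for p, v in zip(points, idx)]
    # V: one entry per significant voxel (more than rho points).
    V = []
    uniq, counts = np.unique(idx, axis=0, return_counts=True)
    for v, n in zip(uniq, counts):
        if n > rho:
            V.append({"voxel": tuple(v), "n_points": int(n), "label": None})
    return C, V

# Example: 20 points inside one voxel of edge E = 0.5 m, plus 1 isolated point.
pts = np.vstack([np.full((20, 3), 0.2), [[5.0, 5.0, 5.0]]])
C, V = build_structures(pts, E=0.5, rho=10)
```

Only the populated voxel becomes a V entry; the isolated point stays in C with a null label, matching the behavior described above for non-significant voxels.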
This section describes the training and evaluation datasets, the parametrization of classifiers, and the methodology used for the comparative performance analysis offered in Section 5.
Classification has been applied to three evaluation point clouds obtained with representative sensors and illustrative of natural and urban environments:
As for the training procedure, a different point cloud has been considered:
Evaluation and training point clouds have been voxelized with empirically determined values of the edge size E and the significance threshold ρ (see Section 3.1). Table 1 summarizes voxelization and hand labeling of experimental point clouds (evaluation datasets have also been hand labeled to evaluate classification performance). The table presents the resulting number of voxels and points included in the corresponding V structures, as well as the percentage of voxels for each class after hand labeling. In the Urban dataset, most voxels have been labeled as planar because the floor and building walls dominate the scene. Conversely, in the Natural_1 and Natural_2 voxelized point clouds, a majority of the voxels are scatter or tubular due to bushes, trunks, and treetops.
The parametrization of the SVM classifier is the following:
The parameters used for the GP classifier are:
In GMM, the parameters are:
The proposed NN based classifier uses the following configuration:
In addition, the training process of the NN must be stopped at an appropriate iteration to avoid overfitting. This iteration is found by the early stopping method of training , in which the training dataset is split into an estimation subset (80% of the training set) and a validation subset (the remaining 20%). More details of the configuration and implementation of the NN classifier can be found in .
The performance of the classifiers will be compared by using classification statistical measures for each class. In particular, confusion matrices along with a multi-class extension of the Matthews Correlation Coefficient (MCC) have been considered.
In a classification problem with K target classes, a confusion matrix is the K × K square matrix M whose (i, j)th entry, m_ij, is the number of elements of true class i that have been assigned to class j by the classifier. Therefore, an ideal classifier would yield a diagonal M. In this case, elements are points from significant voxels. Furthermore, in order to achieve a clear comparison between different datasets, normalized confusion matrices can be defined. Each element in the normalized confusion matrix is obtained as m̂_ij = 100 · m_ij / Σ_k m_ik, so that the sum of the elements in every row is 100.
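The row normalization can be sketched as:

```python
import numpy as np

def normalized_confusion(M):
    """Row-normalize a confusion matrix so each true-class row sums to 100."""
    return 100.0 * M / M.sum(axis=1, keepdims=True)

# Hypothetical 3-class confusion matrix (rows: true class, columns: predicted).
M = np.array([[50,  0,  0],
              [10, 30, 10],
              [ 0,  5, 45]])
Mn = normalized_confusion(M)
```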
MCC summarizes the confusion matrix into a single value in the [−1, 1] range, where 1 represents a perfect classification and −1 extreme misclassification.
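A common multi-class extension of MCC is Gorodkin's R_K statistic, which can be computed directly from the confusion matrix; the sketch below follows that formulation (which may differ in detail from the exact extension used in the paper):

```python
import numpy as np

def mcc_multiclass(M):
    """Multi-class MCC (Gorodkin's R_K) from a K x K confusion matrix M."""
    t = M.sum()            # total number of elements
    c = np.trace(M)        # correctly classified elements
    p = M.sum(axis=0)      # predicted-class totals (column sums)
    s = M.sum(axis=1)      # true-class totals (row sums)
    num = c * t - np.dot(s, p)
    den = np.sqrt(t**2 - np.dot(p, p)) * np.sqrt(t**2 - np.dot(s, s))
    return num / den if den else 0.0

perfect = np.diag([10, 20, 30])        # ideal diagonal confusion matrix
uniform = np.full((3, 3), 5)           # classifier no better than chance
```

A diagonal matrix yields 1, and a uniform matrix (chance-level classification) yields 0, matching the range described above.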
This section discusses experimental results where the voxel-based approach proposed in Section 3 has been applied to the NN classifier and other supervised learning classifiers: SVM, GP, and GMM. First, all classifiers are compared with a representative feature vector definition. Then, an experimental analysis is performed to select an appropriate feature vector definition for each classifier. The section also includes a discussion of computation times as well as a comparison with a point-wise neighborhood classifier.
The evaluation datasets described in Section 4.1 have been used to compare the performance of the four classifiers trained with the feature vector definition given by Lalonde et al. Table 2 presents the MCC and normalized confusion matrix for each classifier in all evaluation datasets. Regarding MCC, the NN classifier achieves the best results in all datasets. The GMM classifier obtains the second best performance, whereas SVM and GP get poor results. In particular, SVM never classifies patterns as the tubular class, as indicated by null values in the second column of its confusion matrices for all datasets. Similarly, GP classifies most points (over 90%) as the planar class. These results indicate that this feature vector definition performs poorly with some classifiers.
This section offers an experimental analysis to find a suitable feature vector definition for each classifier. With this purpose, all classifiers have been trained with the five feature vector definitions described in Section 3.3 using the Garden dataset. Table 3 summarizes this analysis by showing the corresponding MCC values. These results indicate that GP and SVM are strongly affected by the choice of the feature vector, while GMM offers good results for all definitions. In this sense, the NN method achieves better results with the non-normalized definitions, which can be explained by the nonlinear qualities of the MLP. All in all, the best scores have been obtained with one definition for NN, another shared by GMM and GP, and a third for SVM (see Table 3). These definitions have been selected as the most appropriate choice for each classifier.
Comparative results with the corresponding selections are given in Table 4. Regarding MCC, the NN classifier maintains the best results in all datasets. In addition, GP becomes the second best, clearly improving with respect to Table 2 (where it obtained the worst performance), which denotes the importance of an appropriate selection of the feature vector definition. As for the confusion matrices, it can be noted that the tubular class is the most difficult to classify (as indicated by low values of its diagonal elements). In this difficult class, NN consistently outperforms all other classifiers and reaches its highest rate of true positives in the Natural_2 dataset.
Figure 6, Figure 7 and Figure 8 illustrate the application of our NN classifier with the voxel-based neighborhood approach for the three evaluation datasets. These classification results show good accordance with the ground truth (i.e., hand labeled) values given in Figure 2, Figure 3 and Figure 4.
Table 5 presents execution times corresponding to a Matlab (R2015b, MathWorks, Natick, MA, USA) implementation of the classifiers running on a Core i7 processor with a clock frequency of 3.7 GHz and 16 GB of RAM. Computation of data structure V is common for all classifiers. Then, total computation time is obtained by adding the time for V computation to the training process time (in the offline procedure) or to the classification process time (in the online procedure).
V computation time includes voxelization as well as calculation of covariance matrices and their associated eigenvalues for every voxel. This value is proportional to the number of voxels in the data structure, which is greater for the Urban dataset (see Table 1).
Table 5 shows that GP requires much more computation time, for both training and classification, than the rest of the classifiers. For offline training, the times for the training process, which offer considerable differences between the four classifiers, are greater than the time required for V computation. As for online classification, GMM, NN and SVM achieve classification times that are significantly faster than V computation, so their total computation times are similar and close to that value. Since the best classification performance in Table 4 was achieved by NN, it can be concluded that NN accomplishes an outstanding compromise between performance and computation time.
Performance of voxel-based neighborhood has also been compared against point-wise neighborhood. In particular, the experimental datasets have been processed with a point-wise GMM classifier using the configuration of Lalonde et al., with a support region defined by a radius of 0.5 m. Classification performance and computation times are presented in Table 6 and Table 7, respectively.
Regarding classification performance, Table 6 presents the MCC and normalized confusion matrix for point-wise GMM in all evaluation datasets. Comparing the values of Table 6 against the first row of Table 4, it can be appreciated that performance results are very similar. In particular, voxel-based neighborhood outscores the point-wise method in the Natural_2 and Urban datasets.
Total computation time is the sum of neighborhood computation and training/classification times, which are given as two separate rows in Table 7. In this case, most of the time is used for neighborhood computation. The comparison of this table with Table 5 shows that computation times for voxel-based neighborhood are dramatically reduced with respect to point-wise neighborhood.
In general, these results indicate that voxel-based neighborhood classification achieves a dramatic improvement in computation time with respect to point-wise neighborhood, while no relevant differences in performance can be appreciated. Furthermore, voxel-based NN has accomplished better classification performance with the experimental datasets.
Many point cloud classification problems targeting real-time applications such as autonomous vehicles and terrestrial robots have received attention in recent years. Among these problems, improving the effectiveness of spatial shape features classification from 3D lidar data remains a relevant challenge because it is largely used as a fundamental step towards higher level scene understanding solutions. In particular, searching for neighboring points in dense scans introduces a computational overhead for both training and classification.
In this paper, we have extended our previous work, in which we devised a computationally simple voxel-based neighborhood approach for preliminary experimentation with a new neural network (NN) classification model. Promising results demanded deeper analysis of the NN method (using performance metrics and different environments and sensors), as well as a generalization of voxel-based neighborhood that could be implemented and tested with other supervised classifiers.
The originality of this work is a new general framework for supervised learning classifiers to reduce the computational load based on a simple voxel-based neighborhood definition where points in each non-overlapping voxel of a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution comprises offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes.
Moreover, the feasibility of this approach has been evaluated by implementing four types of supervised learning classifiers found in scene processing methods: our NN model, support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). An experimental performance analysis has been carried out using real scans from both natural and urban environments and two different 3D rangefinders: a tilting Hokuyo and a Riegl. The major conclusion from this analysis is that voxel-based neighborhood classification greatly improves computation time with respect to point-wise neighborhood, while no relevant differences in scene classification accuracy have been appreciated. Results have also shown that the choice of suitable features can have a dramatic effect on the performance of classification approaches. All in all, classification performance metrics and processing time measurements have confirmed the benefits of the NN classifier and the feasibility of the voxel-based neighborhood approach for terrestrial lidar scenes.
One additional advantage of processing each non-overlapping cell by using points from only that same cell is that this favors parallelization. Developing a parallel version of the proposed method to improve online classification time with multi-core computers will be addressed in future work. Furthermore, it will also be interesting to adapt the method for incremental update of classification results with consecutive scans.
This work was partially supported by the Spanish project DPI2015-65186-R and the Andalusian project P10-TEP-6101-R. The authors are grateful to anonymous reviewers for their valuable comments.
The voxel-based neighborhood approach was developed by V. Plaza-Leiva. The Neural Network classifier was developed by J.A. Gomez-Ruiz. The writing of the manuscript and the design and analysis of experiments have been done by A. Mandow, J.A. Gomez-Ruiz and V. Plaza-Leiva. The work was conceived within research projects led by A. García-Cerezo and A. Mandow.
The authors declare no conflict of interest.