

Sensors (Basel). 2017 March; 17(3): 621.

Published online 2017 March 19. doi: 10.3390/s17030621

PMCID: PMC5375907

Changshan Wu, Academic Editor and Shawn (Shixiong) Hu, Academic Editor

Department of Earth and Space Science and Engineering, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada; Email: jwjung00@gmail.com (J.J.); Email: yjwa@yorku.ca (Y.J.)

Received 2017 January 22; Accepted 2017 March 1.

Copyright © 2017 by the authors.

Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

With rapid urbanization, highly accurate and semantically rich 3D virtualization of building assets becomes increasingly critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been made to automatically reconstruct building models at city scale from remotely sensed data. However, developing a fully automated photogrammetric computer vision system enabling the massive generation of highly accurate building models remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is to regularize the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy building boundary information in a progressive manner. This study covers a full chain of 3D building modeling, from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on the segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method.
The performance of the proposed method is tested on the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark datasets. The results show that the proposed method can robustly produce accurate regularized 3D building rooftop models.

A key problem that we address in this paper is the reconstruction of a 3D geometric model of a building rooftop from remotely sensed data such as airborne laser point clouds. The representation that we follow for 3D rooftop models draws on ideas from geometric modeling used in photogrammetry and Geographical Information Science (GIS). In this representation scheme, a 3D rooftop is modeled with either primitive geometric elements (i.e., points, lines, planes and objects) or primitive topological elements (i.e., vertices, edges, faces, and edge-groups (rings of edges on faces)). Typically, both primitive geometric and topological elements are used together for representing 3D rooftop models (e.g., CityGML and Esri ArcGIS’s shapefile). CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models [1].

In CityGML, 3D rooftop models can be represented differently according to the level-of-detail (LoD). A prismatic rooftop model that is a height extrusion of a building footprint is defined as LoD1 in CityGML, while LoD2 requires a detailed representation of the primitive geometric and topological elements of a 3D rooftop model. An important aspect of GIS-driven 3D model representation is that the reconstructed model elements should correspond to semantically meaningful spatial entities used in architecture, civil engineering and urban planning: for instance, the reconstructed geometric elements represent roof lines (ridges and eaves), roof planes (gables and hips), vents, windows, doors, wall columns, chimneys, etc. Thus, a photo-realistic reconstructed rooftop model can be used for assisting human decisions on, but not limited to, asset management, renovation planning, energy consumption, and evacuation planning. As discussed in Rottensteiner et al. [2], city-scale building models will provide an important means to manage urban infrastructure more effectively and safely, addressing critical issues related to rapid urbanization. In this study, we aim to reconstruct LoD2 models of rooftops from remotely sensed data.

Traditionally, 3D rooftop models have been derived through user interaction on photogrammetric workstations (e.g., multiple-view plotting or mono-plotting technology). This labor-intensive model generation process is tedious and time-consuming, and thus not suitable for reconstructing rooftop models at city scale. As an alternative, great research efforts have been made over the last two decades to develop machine-intelligent algorithms that reconstruct photo-realistic rooftop models in a fully automated manner [3]. Recently, airborne light detection and ranging (LiDAR) scanners have become one of the primary data acquisition tools, enabling rapid capture of targeted environments in 3D with high density and accuracy. Owing to these advantages, state-of-the-art technologies for automatically reconstructing 3D rooftop models using airborne LiDAR data have been proposed by many researchers [2,3,4,5,6]. However, only limited success in controlled environments has been reported, and an error-free rooftop modeling algorithm has yet to be achieved [2].

In general, 3D rooftop models are derived automatically from 3D LiDAR point clouds by: (1) extracting the primitive geometric elements, namely “modeling cues”; and (2) recovering the primitive topological elements among the modeling cues. A critical problem hindering the automation of 3D rooftop model generation is that many portions of the object (rooftop) are unknown, or are recovered with errors, for the following reasons:

- Irregular point distribution: Despite its ability to acquire highly accurate and dense 3D point clouds over rooftops, airborne LiDAR also has limitations. The sensor transmits collimated laser pulses through an electro-optical scanner and computes the location of each scatterer, i.e., the surface reflecting the transmitted laser energy, by measuring the range between transmitter and scatterer with known position and orientation of the laser scanner. The size of the beam footprint and the spacing between adjacent laser points on the ground are determined by the flying height of the airborne platform and the scanning frequency. In addition, returns whose peak energy falls below a pre-defined threshold, due to absorption or an unfavorable surface angle relative to the scanning pose, are discarded. All these system variables together produce an irregular distribution of laser points over the targeted object surface. Consequently, the modeling cues are often generated with errors, fragmented, or completely missing. These errors have a negative impact on the derivation of the topological elements, and thus on the accuracy of rooftop model generation.
- Occlusion: Like other sensors, airborne LiDAR suffers from difficulties in capturing a complete view of objects due to occlusions. A disadvantageous viewing angle between the laser beam direction and the object pose may prevent laser beams from illuminating certain object surfaces, where no laser points are then generated. In theory, airborne LiDAR can penetrate foliage; however, the amount of returned laser energy varies depending on tree species, maturity, seasonal effects and the relative angle between the laser beam and the leaf surface. A weak reflected energy is neglected and produces no laser points over areas of roofs where trees grow nearby. In addition, in urban areas, buildings are occluded by adjacent buildings located in the path between the sensor and the surface to be surveyed. These negative effects cause errors in recovering the primitive topological elements for reconstructing the rooftop model.
- Unreliable data analysis: Several point cloud analysis algorithms are applied to detect building objects, classify non-roof objects (e.g., trees, roof superstructures, etc.), segment roof planar patches, extract corner and line primitives, and recover the primitive topological elements (e.g., boundary tracing, edge-linking, etc.). The performance of these algorithms varies with data resolution, scene complexity and noise; they may produce errors, which have a negative effect on recovering both the modeling cues and the topological elements.

As discussed previously, the aforementioned factors lead to errors that prevent the modeling cues from being recovered well enough to generate an error-free rooftop model. Typically, knowledge of the rooftop object of interest (e.g., roof type, structure, number of roof planes, etc.) is unavailable. Thus, recovering all the primitive topological elements accurately in an error-free geometric model is a very challenging vision task. To address this issue, many researchers have introduced modeling constraints to compensate for the limitations of erroneous modeling cues [7,8,9,10]. These constraints are used as prior knowledge of targeted rooftop structures: (1) for constructing the modeling cues to conform to Gestalt laws (i.e., parallelism, symmetry, and orthogonality), and linking fragmented modeling cues in the frame of perceptual grouping; and (2) for determining the optimal parametric rooftop model fitting part of a rooftop object through trial-and-error model selection from a given primitive model database. We refer to these modeling constraints as an “explicit regularity” imposed on rooftop shape, as the definition of the regularity is fully and clearly described. However, only a few explicit regularity terms are applicable in practice, and the shapes of real rooftops are often too complex to be reconstructed with those limited constraints.

In this paper, we focus on a data-driven modeling approach to reconstruct 3D rooftop models from airborne LiDAR data by introducing flexible regularity constraints that can be adjusted to given objects in the recovery of modeling cues and topological elements. The regularity terms used in this study represent a regular pattern of line orientations and the linkage between adjacent lines. In contrast to “explicit regularity”, we refer to this as “implicit regularity” because its pattern is not directly expressed, but found from the given data and object (rooftop). This implicit regularity is used as a constraint for changing the geometric properties of the modeling cues and the topological relations among adjacent modeling cues to conform to a regular pattern found in the given data. This data-adaptive regularization allows us to reconstruct more complex rooftop models.

In this paper, we describe a pipeline for 3D rooftop model reconstruction from airborne LiDAR data. First, to gain computational efficiency, we decompose a rooftop object into a set of homogeneous point clusters based on height similarity and plane similarity, from which the modeling cues of line and plane primitives are extracted. Second, the topological elements among the modeling cues are recovered by iteratively partitioning and merging the given point space with line primitives extracted at a global scale, using the Binary Space Partitioning (BSP) technique. Third, errors in the modeling cues and topological elements are implicitly regularized by removing erroneous vertices or rectifying geometric properties to conform to the globally derived regularity. This implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The proposed parameter estimators provide optimal weight values that adapt to building properties such as size, shape, and the number of boundary points. The proposed rooftop model generation pipeline was developed based on our previous work reported in [11]. We extend that work by proposing data-adaptive parameter estimation, conducting an extensive performance evaluation, and carrying out the engineering work needed to implement a computationally efficient modeling pipeline.

Numerous building reconstruction algorithms have been published over the past two decades. Although it is difficult to classify these methods into clear-cut categories, several criteria can be used: the data source used (single vs. multiple sources), the data processing strategy (data-driven (or generic) vs. model-driven (or parametric)), and the amount of human interaction (manual, semi-automatic, or fully automated) [12]. Of those, classifying existing methods into data-driven or model-driven approaches provides good insight for understanding and developing 3D building model reconstruction algorithms.

In model-driven approaches, 3D building models are reconstructed by fitting parameterized primitives to data. This is possible because many buildings in rural and suburban areas have common shapes, either as whole buildings or as building roof parts. Common roof shapes such as flat, gable, and hip roofs are considered standard primitives for representing building rooftop structures. Simple buildings can be well represented as regularized building models using pre-defined parameterized primitives, even with low-density data and missing data. However, complex and arbitrarily shaped buildings are difficult to model using a basic set of primitives. In addition, selecting the proper primitive from a set of primitives is not an easy task. To address these limitations, Verma et al. [8] presented a parametric modeling method to reconstruct relatively complex buildings by combining simple parametric roof shapes categorized into four types of simple primitives. In their study, a roof-topology graph is constructed to represent the relationships among the planar patches of the approximate roof geometry. The constructed roof-topology graph is decomposed into sub-graphs, each representing a simple parametric roof shape, and then the parameters of the primitives are determined by fitting the LiDAR data. Although they decomposed complex buildings into simple building parts, many building parts still cannot be explained by their four simple shape primitives. Similarly, Milde et al. [13] reconstructed 3D building models by matching sub-graphs of the region adjacency graph (RAG) with five basic roof shapes and then combining them using three connectors. Kada and McKinley [14] decomposed the building footprint into cells, which provide the basic building blocks. Three types of roof shapes are defined: basic, connecting, and manual shapes.
Basic shapes consist of flat, shed, gabled, hipped, and Berliner roofs, while connecting shapes are used to connect the roofs of sections with specific junction shapes. The parameterized roof shapes of all cells are determined from the normal directions of the LiDAR points. The entire 3D building model is represented by integrating the parameterized roof elements with the neighboring pieces. Although a high level of automation is achieved, the method still requires manual work to adjust cell parameters and to model more complex roof shapes such as mansard, cupola, and barrel roofs, as well as some detail elements. Lafarge et al. [15] reconstructed building models from a digital surface model (DSM) by combining generic and parametric methods. Buildings are considered assemblages of 3D parametric blocks from a library. After extracting 2D building supports, 3D parametric blocks are placed on the 2D supports using a Gibbs model, which controls both the block assemblage and the fit to the data. The optimal configuration of 3D blocks is determined in a Bayesian framework. They noted, as future work, that the optimization step needs to be improved to achieve both higher precision and shorter computing time. Based on a predefined primitive library, Huang et al. [10] conducted generative modeling to reconstruct roof models that fit the data. The library provides three groups comprising 11 types of roof primitives, whose parameters consist of position, contour, and shape parameters. Building roofs are represented as one primitive or an assemblage of primitives, allowing primitives to overlap. For combining primitives, they derived combination and merging rules that consider both vertical and horizontal intersections. Reversible Jump Markov Chain Monte Carlo (RJMCMC) with a specified jump mechanism is used to select roof primitives and sample their parameters.
Although they have shown the potential and flexibility of their method, several issues remain to be solved: (1) uncertainty and instability of the reconstructed building model; (2) influence of prior knowledge and scene complexity on the completeness of the reconstruction; and (3) heavy computation time.

In contrast with model-driven approaches, data-driven approaches make no assumptions regarding building shapes, so they can in theory handle all kinds of buildings. However, they may cause considerable deformations due to sensitivity to surface fluctuations and outliers in the data. In addition, they require a regularization step during the reconstruction process. In general, the generic approach starts by extracting building modeling cues such as surface primitives, step lines, intersection lines, and outer boundary lines, followed by reconstructing the 3D building model. The segmentation procedure for extracting surface primitives divides a given data set into homogeneous regions. Classical segmentation algorithms such as region growing [16,17] and RANSAC [18] can be used for segmenting building roof planes. In addition, Sampath and Shan [19] conducted eigenanalysis for each roof point within its Voronoi neighborhood, and then adopted the fuzzy k-means approach to cluster the planar points into roof segments based on their surface normals. They then separated the clusters into parallel and coplanar segments based on their distance and connectivity. Lafarge and Mallet [20] extracted geometric shapes such as planes, cylinders, spheres, or cones to identify the roof sections by fitting points to various geometric shapes, and then proposed a method for arranging both the geometric shapes and the other urban components by propagating point labels based on an MRF. Yan et al. [21] proposed a global solution for roof segmentation, in which an initial segmentation is optimized by minimizing a global energy function consisting of the distances of LiDAR points to the initial planes, the spatial smoothness between data points, and the number of planes.
After segmenting points into homogeneous surface primitives, modeling cues such as intersection lines and step lines can be extracted based on the geometric and topological relationships of the segmented roof planes. Intersection lines are easily obtained by intersecting two adjacent planes or segmented point sets, while step lines are extracted at roof plane boundaries with abrupt height discontinuities. To extract step lines, Rottensteiner et al. [16] detected edge candidate points and then extracted step lines from an adjustment considering edge points within a user-specified threshold. In addition, Sohn et al. [22] proposed a step line extractor, called the Compass Line Filter (CLF), for extracting straight lines from irregularly distributed LiDAR points. Although the outer boundary is one type of step line, it is handled as a separate process in many data-driven approaches. Some researchers delineated initial boundary lines from building boundary points using alpha shapes [23], ball-pivoting [8], and contouring algorithms [24]. The initial boundary was then simplified or regularized; detailed reviews of boundary simplification and regularization are given in the following paragraphs. Once all building modeling cues are collected, 3D building models are reconstructed by aggregating the modeling cues. To reconstruct topologically and geometrically correct 3D building models, Sohn et al. [22] proposed the BSP technique, which progressively partitions a building region into homogeneous binary convex polygons. Rau and Lin [25] proposed a line-based roof model reconstruction algorithm, namely TIN-Merging and Reshaping (TMR), to reconstruct topology with geometric modeling. Oude Elberink and Vosselman [26], and Perera and Maas [27] used a roof topology graph to preserve roof topology. In the latter, roof corners are geometrically modeled using the shortest closed cycles and the outermost cycle derived from the roof topology graph.

Detection of the building boundary is an intermediate step in 3D building reconstruction, although it is not required by all building reconstruction algorithms. Generally, the initial boundaries extracted from irregular LiDAR points have a jagged shape with a large number of vertices. Thus, a simplification or regularization process is required to delineate plausible building boundaries with certain regularities such as orthogonality, parallelism, and symmetry. Various techniques for the regularization of building boundaries have been proposed in the literature [28]. In most methods, the boundary detection process starts by extracting boundary points from segmented points. From the extracted boundary points, initial building boundaries are generated by tracing boundary points, followed by a simplification or regularization process that improves the initial boundary. The easiest way to improve the initial boundary is to simplify it by removing vertices while preserving relevant points. The well-known Douglas–Peucker (DP) algorithm [29] is widely recognized as the most visually effective line simplification algorithm. The algorithm starts with the two points separated by the longest distance and recursively adds the vertex farthest from the current line whenever its distance exceeds a given threshold. However, the performance of the algorithm depends entirely on the threshold used and is substantially affected by outliers. Another approach extracts straight lines from boundary points using the Hough Transform [30] or RANSAC [31]. The extracted lines are then connected at their intersections to generate closed outer boundary lines. However, Brenner [28] pointed out that these methods require additional steps because small building edges may be missed.

On the other hand, the regularization process imposes certain regularities when the initial boundary is simplified. Vosselman [7] assumed that building outlines are along or perpendicular to the main direction of a building. After defining the position of a line by the first two boundary points, the line is updated using the succeeding boundary points until the distance of a point to the line exceeds some bound. The next line starts from this point in a direction perpendicular to the previous line. A similar approach was proposed by Sampath and Shan [9]. They grouped points on consecutive edges with similar slopes and then applied a hierarchical least squares solution to fit parametric lines representing the building boundary.

Some methods are based on the model hypothesis and verification approach. Ameri [32] introduced Feature Based Model Verification (FBMV) for the modification and refinement of polyhedral-like building objects. In this approach, geometric and topological model information is imposed on the FBMV process as external and internal constraints, which consider linearity for straightening consecutive lines, connectivity for establishing topology between adjacent lines, orthogonality, and co-planarity. Weighted least squares minimization is then adopted to produce a well-regularized description of a building model. Weidner and Förstner [33] adopted the MDL concept to regularize noisy building boundaries. For four local consecutive points, ten different hypothetical models are generated with respect to regularization criteria. Then MDL, which depends on the mutual fit of the data and model and on the complexity of the model, is used to find the optimal regularity of the local configuration. Jwa et al. [34] extended the MDL-based regularization method by proposing new implicit hypothesis generation rules and by re-designing the model complexity terms, where line directionality, inner angle and number of vertices are considered as geometric parameters. Furthermore, Sohn et al. [11] used the MDL-based concept to regularize topologies within the rooftop model. Zhou and Neumann [35] introduced global regularities in building modeling to reflect the orientation and placement similarities among 2.5D elements, which consist of planar roof patches and roof boundary segments. In their method, roof-roof, roof-boundary, and boundary-boundary regularities are defined and then integrated into a unified framework.

Figure 1 shows the overall workflow implemented for generating 3D building rooftop models from airborne LiDAR point clouds in which individual buildings have been detected. The method consists of three main parts: (1) modeling cue extraction; (2) topology element reconstruction; and (3) regularization. In the modeling cue extraction, roof element clusters, lines (intersection and step lines), and outer boundaries are extracted from a set of laser points labeled as single building objects (i.e., building-labeled points) (Section 2.1). Then, the topological relations among the modeling cues are established by BSP (Section 2.2). Finally, an implicit regularization process is applied to outer building boundaries and rooftop polygons (Section 3). The regularization process is based on the framework of MDL in combination with HAT optimization. Note that the regularization process is conducted twice: once for regularizing building outer boundaries, which represent LoD1 models, and again for rooftop models, which represent LoD2 models. Two types of weight parameters in the MDL-based objective function are automatically determined by Min-Max optimization and the entropy-based parameter estimation method, respectively (Section 4).

The first step towards generating 3D building models from LiDAR data is to gather the evidence of building structures (i.e., primitive geometric elements). Planes and lines are recognized as the most important evidence for interpreting building structures, because 3D building rooftop models are mainly represented by planar roof faces and edges. The two types of modeling cues (planar and linear) have different properties and can be extracted separately from the LiDAR points. In Section 2.1.1, building points are sequentially segmented into homogeneous clusters, first based on height similarity and then based on plane similarity. In Section 2.1.2, linear modeling cues are extracted using the boundary points of the homogeneous clusters.

Roof element clustering segments the building-labeled points into homogeneous rooftop regions with a hierarchical structure. A building rooftop in an urban area is a combination of multiple stories, each of which consists of flat and sloped planes of various shapes. Directly extracting homogeneous regions from the entire set of building points may be difficult due to the high degree of shape complexity. To reduce the complexity, the building-labeled points are decomposed into homogeneous clusters by sequentially applying height similarity and then plane similarity.

In the height clustering step, the rooftop region $R=\left\{{p}_{i}|i=1,2,\dots ,n\right\}$ of *n* building-labeled points is divided into *m* height clusters $\left\{{S}_{1},{S}_{2},\dots ,{S}_{m}\right\}$. Height similarity at each point is measured over its adjacent neighboring points in a Triangulated Irregular Network (TIN). The point with the maximum height is first selected as a seed point, and then a conventional region growing algorithm adds neighboring points to the corresponding height cluster under a certain threshold (${\delta}_{h}=1\mathrm{m}$). This process is repeated until all building rooftop points are assigned to one of the height clusters. As a result, the height clusters satisfy $R={\cup}_{i=1}^{m}{S}_{i}$, ${S}_{i}\cap {S}_{j}=\left\{\right\}$, $\forall i\ne j$. Note that each height cluster may consist of one or more different roof planes.
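As an illustration, the height clustering step can be sketched as greedy region growing over an adjacency graph. The paper derives adjacency from the TIN; in this sketch the adjacency is passed in as a plain dictionary, and the function and argument names are ours, not from the paper:

```python
from collections import deque

def height_clusters(points, neighbors, delta_h=1.0):
    """Greedy region growing on a neighbor graph (e.g., TIN adjacency).

    points    : list of (x, y, z) tuples
    neighbors : dict mapping point index -> iterable of adjacent indices
    delta_h   : height-similarity threshold (1 m in the paper)
    Returns a list of clusters, each a list of point indices.
    """
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        # Seed: the unassigned point with maximum height.
        seed = max(unassigned, key=lambda i: points[i][2])
        cluster, queue = [seed], deque([seed])
        unassigned.discard(seed)
        while queue:
            cur = queue.popleft()
            for nb in neighbors[cur]:
                # Grow only while adjacent heights stay within delta_h.
                if nb in unassigned and abs(points[nb][2] - points[cur][2]) <= delta_h:
                    unassigned.discard(nb)
                    cluster.append(nb)
                    queue.append(nb)
        clusters.append(cluster)
    return clusters
```

Because the seed is always the highest remaining point, clusters are produced in descending height order, mirroring the story-by-story decomposition described above.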

In the plane clustering step, each height cluster is decomposed into *k* plane clusters $\Pi =\left\{{\pi}_{1},{\pi}_{2},\dots ,{\pi}_{k}\right\}$ based on a plane similarity criterion. The well-known random sample consensus (RANSAC) algorithm is adopted to obtain reliable plane clusters, as suggested in previous studies [18,36]. The process starts by randomly selecting three points as seed points to generate a candidate plane. After a fixed number of random sampling iterations, the plane with the maximum number of inliers within a user-defined tolerance distance *ζ* (*ζ* = 0.1 m) from the estimated plane is selected as the best plane. Points assigned in a previous iteration are excluded from subsequent iterations. The process continues until all points of the height cluster are assigned to plane clusters. Figure 2b,c shows examples of height clusters and plane clusters, respectively, where different colors represent different clusters.
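A minimal sketch of the RANSAC core of this step, assuming plain Python lists of 3D tuples. The 0.1 m tolerance follows the paper; the function name, iteration count, and seeded RNG are illustrative choices:

```python
import random

def ransac_plane(points, tol=0.1, iters=200, rng=None):
    """Fit a dominant plane with RANSAC; returns (plane, inlier_indices).

    points : list of (x, y, z) tuples
    tol    : inlier distance tolerance (0.1 m in the paper)
    plane  : (a, b, c, d) with unit normal (a, b, c) and a*x+b*y+c*z+d = 0
    """
    rng = rng or random.Random(0)
    best_plane, best_inliers = None, []
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        # Plane normal from the cross product of two edge vectors.
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0]]
        norm = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        a, b, c = (n[i] / norm for i in range(3))
        d = -(a*p1[0] + b*p1[1] + c*p1[2])
        inliers = [i for i, p in enumerate(points)
                   if abs(a*p[0] + b*p[1] + c*p[2] + d) <= tol]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (a, b, c, d), inliers
    return best_plane, best_inliers
```

In the full pipeline this would be called repeatedly per height cluster, removing each plane's inliers before the next call, until all points are assigned.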

Once the building-labeled points are segmented into homogeneous clusters with a hierarchical structure, linear modeling cues are extracted from them. To reduce the complexity of the modeling cue extraction process, we divide linear modeling cues into three types: (1) outer boundaries of height clusters; (2) intersection lines; and (3) step lines within each height cluster.

At the boundaries of height clusters, two adjacent planes have a large height discontinuity; thus, the outer boundaries of height clusters can be recognized as step lines. However, distinguishing between the outer boundaries of height clusters and the step lines within each height cluster reduces ambiguity in the topology recovery process (Section 2.2). In addition, the outer boundaries of height clusters can serve to generate the LoD1 model. For these reasons, we extract the outer boundaries of height clusters separately. The process starts by detecting the boundary points of height clusters, i.e., points adjacent to a neighboring height cluster in the TIN structure. After selecting a starting boundary point, the next boundary point is determined by surveying the neighboring boundary points connected to the previous boundary point in the TIN structure and selecting the boundary point that appears first in an anti-clockwise direction. The process continues until the boundary is closed. The closed boundary is then regularized by the MDL-based regularization method described in Section 3.

An intersection line candidate is derived from each pair of adjacent roof planes. A candidate is accepted as a valid intersection line if it separates the point sets of the two planes and if a sufficient number of points lies close to the generated line.

For step lines, boundary points of plane clusters that belong neither to outer boundaries nor to intersection lines are considered candidate points. Given a sequence $D=\left\{{c}_{1},{c}_{2},\dots ,{c}_{l}\right\}$ of *l* candidate points, step lines are extracted in a manner similar to the Douglas–Peucker (DP) algorithm. The process starts with a straight line ($\overline{{c}_{1}{c}_{l}}$) connecting the first and last points of the sequence and then recursively adds candidate points whose distance to the current line exceeds a user-defined tolerance (0.5 m). Each of the resulting line segments is considered a step line. Figure 3 gives examples of each type of linear modeling cue.
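The recursive DP-style splitting above can be sketched as follows (an illustrative recursion with our own helper names; the 0.5 m tolerance matches the text). The returned vertex indices delimit the step-line segments:

```python
# Douglas-Peucker-style extraction of step-line segments from candidate points.
import numpy as np

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b (2D)."""
    ab, ap = b - a, p - a
    denom = np.linalg.norm(ab)
    if denom < 1e-12:
        return np.linalg.norm(ap)
    return abs(ab[0] * ap[1] - ab[1] * ap[0]) / denom

def simplify(points, tol=0.5):
    """Indices of kept vertices; each consecutive pair defines one step line."""
    def rec(i, j):
        if j <= i + 1:
            return [i, j]
        # farthest candidate point from the segment (i, j)
        d = [point_line_dist(points[k], points[i], points[j])
             for k in range(i + 1, j)]
        k = int(np.argmax(d)) + i + 1
        if d[k - i - 1] > tol:
            left = rec(i, k)
            return left[:-1] + rec(k, j)   # keep the split point only once
        return [i, j]
    return rec(0, len(points) - 1)
```

For an L-shaped point sequence, only the corner and the two endpoints survive the simplification.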

Once all modeling cues are collected, topological relations among the modeling cues are constructed by the BSP technique. In computer science, BSP is a hierarchical partitioning method for recursively subdividing a space into convex sets with hyperlines. Sohn et al. [22] used BSP to recover topological relations of 3D building rooftop planes. We adopt this method to reconstruct a topologically and geometrically correct 3D building rooftop model from incomplete modeling cues. The topology recovery process consists of a partitioning step and a plane merging step. In the partitioning step, a hierarchical binary tree is generated by dividing a parent region into two child regions with hyperlines (linear modeling cues). The partitioning optimum is achieved by maximizing a partitioning score comprising planar homogeneity, geometric regularity, and edge correspondence [22]. In the plane merging step, adjacent roof planes with similar normal vector angles are merged by applying a user-defined threshold. The merging process continues until no plane pair passes the co-planar similarity test. Once all polygons are merged, the 3D building rooftop model is reconstructed by collecting the final leaf nodes of the BSP tree. Figure 4 shows the results of the partitioning step, the merging step, and the corresponding 3D rooftop model.

As mentioned before, recovering error-free 3D rooftop models from erroneous modeling cues is a challenging task. Geometric constraints such as parallelism, symmetry, and orthogonality can be explicitly used as prior knowledge on rooftop structures to compensate for the limitations of erroneous modeling cues. However, explicitly imposing such constraints limits the ability to describe the complex buildings that appear in reality. In this study, we propose an implicit regularization in which regular patterns of building structures are not directly expressed but are implicitly imposed on reconstructed building models, providing flexibility for describing more complex rooftop models. The proposed regularization is conducted by HAT optimization in an MDL framework. Possible hypotheses are generated by incorporating regular patterns present in the given data, and MDL is used as the criterion for selecting an optimal model among them. The MDL concept for model selection is introduced in Section 3.1, while Section 3.2 introduces the method for hypothesis generation.

The MDL, proposed by Rissanen [37], is a method for inductive inference that provides a generic solution to the model selection problem [38]. The MDL is based on the idea of transmitting data as a coded message, where the coding uses some prearranged set of parametric statistical models. The full transmission has to include not only the encoded data values but also the coded model parameter values [39]. Thus, the MDL combines model complexity and model closeness as follows:

$$DL=\lambda \mathcal{L}\left(D|H\right)+\left(1-\lambda \right)\mathcal{L}\left(H\right)$$

(1)

where $\mathcal{L}\left(D|H\right)$ indicates the goodness-of-fit of observations *D* given a model *H*, while $\mathcal{L}\left(H\right)$ represents how complex the model *H* is. $\lambda $ is a weight parameter for balancing the model closeness and the model complexity. Assuming that the optimal model representing the data has the minimal description length, the model selection process converges a model *H* to the optimal model *H** as follows:

$${H}^{*}=\underset{H\in \Phi}{\mathrm{argmin}}\left\{\lambda \mathcal{L}\left(D|H\right)+\left(1-\lambda \right)\mathcal{L}\left(H\right)\right\}$$

(2)

The first term in Equation (1) is optimized for good data attachment to the corresponding model. Assuming that the irregular distribution of data $D=\left\{{x}_{1},\dots ,{x}_{n}\right\}$ with *n* measurements, caused by random errors, follows a Gaussian distribution $x\sim N\left(\mu ,{\sigma}^{2}\right)$ with expectation $\mu $ and variance ${\sigma}^{2}$, its density function can be represented as $P(x)=\frac{1}{\sigma \sqrt{2\pi}}{e}^{-\frac{{\left(x-\mu \right)}^{2}}{2{\sigma}^{2}}}$. Using this statistical model of the data, the degree of fit between a model and the data can be measured by $\mathcal{L}(D|\mu ,{\sigma}^{2})$, and the model closeness term can be rewritten in logarithmic form as follows:

$$L(D|\mu ,{\sigma}^{2})=-lo{g}_{2}P(D)=-\left(lo{g}_{2}{e}^{-\frac{{\displaystyle \sum}{\left(x-\mu \right)}^{2}}{2{\sigma}^{2}}}+nlo{g}_{2}\frac{1}{\sigma \sqrt{2\pi}}\right)=\frac{1}{2ln2}{\displaystyle \sum}{\left(\frac{x-\mu}{\sigma}\right)}^{2}+nlo{g}_{2}\sigma +\frac{n}{2}lo{g}_{2}2\pi $$

(3)
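Equation (3) can be checked numerically: the code length $-lo{g}_{2}P(D)$ of *n* Gaussian measurements equals the closed form term by term. The measurement values below are illustrative, not from the paper:

```python
# Numerical sanity check of Equation (3):
#   -log2 P(D) = (1/(2 ln 2)) * sum(((x-mu)/sigma)^2)
#                + n*log2(sigma) + (n/2)*log2(2*pi)
import math

x = [1.2, 0.7, 1.9, 1.1]          # example measurements (illustrative values)
mu, sigma = 1.0, 0.5
n = len(x)

# left-hand side: -log2 of the product of Gaussian densities
log_p = sum(-((xi - mu) ** 2) / (2 * sigma ** 2)
            - math.log(sigma * math.sqrt(2 * math.pi)) for xi in x)
lhs = -log_p / math.log(2)

# right-hand side: the three closed-form terms of Equation (3)
rhs = (sum(((xi - mu) / sigma) ** 2 for xi in x) / (2 * math.log(2))
       + n * math.log2(sigma) + n / 2 * math.log2(2 * math.pi))

assert abs(lhs - rhs) < 1e-9
```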

In Equation (3), the last two terms can be ignored under the assumption that all hypotheses share the same $\sigma $. Thus, the equation simplifies to:

$$\mathcal{L}\left(D|H\right)=\frac{\Omega}{2ln2}$$

(4)

where $\Omega $ is the weighted sum of the squared residuals between a model *H* and a set of observations *D*, that is ${\left[D-H\right]}^{T}\left[D-H\right]$ in matrix form.

The second term in Equation (1) encodes the model complexity. In this study, the model complexity is described by three geometric factors: (1) the number of vertices ${N}_{v}$; (2) the number of identical line directions ${N}_{d}$; and (3) the inner angle transition ${N}_{\angle \theta}$. Using these three factors, a model is preferred if its polygon has a small number of vertices and a small number of identical line directions, and if its inner angle transitions are smooth or orthogonal.

Suppose that ${N}_{v}$, ${N}_{d}$, and ${N}_{\angle \theta}$ describe an initial model, while ${N}_{v}^{\prime}$, ${N}_{d}^{\prime}$, and ${N}_{\angle \theta}^{\prime}$ describe a hypothetical model generated from the initial model. To measure the description length for the number of vertices, we start by deriving the probability that a vertex is randomly selected from the given model, $P(v)=\frac{1}{{N}_{v}}$, which can be expressed in bits as $lo{g}_{2}\left({N}_{v}\right)$. Since a hypothetical model generated by the hypothesis generation process has ${N}_{v}^{\prime}$ vertices, its description length is ${N}_{v}^{\prime}lo{g}_{2}\left({N}_{v}\right)$. Similarly, the probability for the number of identical line directions ${N}_{d}$ is $P(d)=\frac{1}{{N}_{d}}$, expressed in bits as $lo{g}_{2}\left({N}_{d}\right)$. Considering the required number of line directions ${N}_{d}^{\prime}$, the description length for identical line directions is ${N}_{d}^{\prime}lo{g}_{2}\left({N}_{d}\right)$. To define line directions, we adopt the compass line filter (CLF) suggested by Sohn et al. [22], as shown in Figure 5. The CLF consists of eight filtering lines with different slopes $\left\{{\theta}_{i}:i=1,\dots ,8\right\}$, equally spaced in steps of 22.5°. The representative angle for each slope, ${\theta}_{i}^{REP}$, is calculated by a weighted average of angles that takes the summed line length of each CLF slope into account.

Lastly, the description length for the inner angle transition is measured by assigning a penalty value to quantized inner angles. As shown in Equation (5), the penalty values ${\gamma}_{i=0,1,2}$ are heuristically determined to take the minimum value of 0 (i.e., favorable inner angle) if the inner angle $\angle \theta $ is close to 90° or 180°, while the maximum value of 2 (i.e., unfavorable inner angle) is assigned to very acute inner angles, because acute inner angles between two consecutive building vectors rarely appear in reality. Thus, the probability for ${N}_{\angle \theta}$ can be derived from an inner angle falling into one of the quantized angle bins, $P\left(\angle \theta \right)=\frac{1}{{N}_{\angle \theta}}$, expressed in bits as $lo{g}_{2}\left({N}_{\angle \theta}\right)$. For a hypothesized model, the cost imposed by the penalty values is ${{\displaystyle \sum}}_{k=1}^{{N}_{v}^{\prime}}{\gamma}_{i=0,1,2}$, and its description length is calculated as ${N}_{\angle \theta}^{\prime}lo{g}_{2}\left({N}_{\angle \theta}\right)$.

$${\gamma}_{i=0,1,2}=\left\{\begin{array}{cl}0 & \mathrm{if}\ 78.75°\le \angle \theta \le 101.25°\ \mathrm{or}\ 168.75°\le \angle \theta \le 180°\\ 1 & \mathrm{if}\ 11.25°<\angle \theta <78.75°\ \mathrm{or}\ 101.25°<\angle \theta <168.75°\\ 2 & \mathrm{if}\ 0°<\angle \theta \le 11.25°\end{array}\right.$$

(5)
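Equation (5) can be transcribed directly as a small function (angles in degrees; the function name is ours):

```python
# Inner-angle penalty of Equation (5): 0 for near-orthogonal or near-straight
# angles, 1 for intermediate angles, 2 for very acute angles.
def angle_penalty(theta):
    if 78.75 <= theta <= 101.25 or 168.75 <= theta <= 180.0:
        return 0
    if 11.25 < theta < 78.75 or 101.25 < theta < 168.75:
        return 1
    if 0.0 < theta <= 11.25:
        return 2
    raise ValueError("inner angle must lie in (0, 180] degrees")
```

Note that 3 of the 16 quantized 11.25° bins carry zero penalty (78.75°–101.25° and 168.75°–180°), which is the source of the 3/16 probability used in Section 4.2.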

As a result, the description length for the model complexity $\mathcal{L}\left(H\right)$ is obtained by summing the three geometric factors as follows:

$$\mathcal{L}\left(H\right)={W}_{v}{N}_{v}^{\prime}lo{g}_{2}{N}_{v}+{W}_{d}{N}_{d}^{\prime}lo{g}_{2}{N}_{d}+{W}_{\angle \theta}{N}_{\angle \theta}^{\prime}lo{g}_{2}{N}_{\angle \theta}$$

(6)

where ${W}_{v}$, ${W}_{d}$, and ${W}_{\angle \theta}$ are weight values for each sub-factor in the model complexity.
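Equations (1), (4), and (6) combine into a single description-length evaluation per hypothesis. A sketch with our own variable names (primed counts of the hypothesis versus unprimed counts of the initial model), not the authors' implementation:

```python
# Description length of a hypothesized model: Equations (1), (4), and (6).
import math

def description_length(residuals, n_v, n_d, n_ang, n_v0, n_d0, n_ang0,
                       lam=0.5, w_v=1.0, w_d=1.0, w_ang=1.0):
    """residuals: model-to-data residuals; n_* : counts of the hypothesis
    (primed in the text); n_*0 : counts of the initial model (unprimed)."""
    # Equation (4): closeness = Omega / (2 ln 2), Omega = sum of squared residuals
    omega = sum(r * r for r in residuals)
    closeness = omega / (2 * math.log(2))
    # Equation (6): complexity of the hypothesis encoded against the initial model
    complexity = (w_v * n_v * math.log2(n_v0)
                  + w_d * n_d * math.log2(n_d0)
                  + w_ang * n_ang * math.log2(n_ang0))
    # Equation (1): weighted combination of the two terms
    return lam * closeness + (1 - lam) * complexity
```

Within the HAT loop, the hypothesis with the minimal value of this function would be kept, in line with Equation (2).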

The hypothesis generation process proposes a set of possible hypotheses under a given configuration of a rooftop model (or building boundary). Suppose a rooftop model consists of a polygon ${\Pi}_{A}=\left\{{v}_{1},{v}_{2},{v}_{3},{v}_{4},{v}_{5},{v}_{6},{v}_{7}\right\}$ and a polygon ${\Pi}_{B}=\left\{{v}_{3},{v}_{4},{v}_{5},{v}_{8},{v}_{9},{v}_{10}\right\}$, where ${v}_{3}$, ${v}_{4}$, and ${v}_{5}$ are common vertices of both polygons (Figure 6a). The task is to generate possible hypotheses at a given vertex considering the configuration of the rooftop model. The process starts by defining an Anchor Point (*AP*), a Floating Point (*FP*), and a Guide Point (*GP*), and then deriving a Floating Line (*FL* = [*AP*, *FP*]) and a Guiding Line (*GL* = [*GP*, *FP*]). The role of *AP* is to define the origin of the line to be changed (*FL*); *FP* is the point to be moved, while *GP* is used to generate *GL*, which guides the movement of *FP*. Hypotheses are generated by moving *FP* along the *GL* with *AP* as the origin of *FL*. The orientation of *FL* is determined by the representative angles of the CLF, which consists of eight directions as shown in Figure 5. Different cases of hypothesis generation arise: (1) depending on the relative direction of *FP* with respect to *AP* (forward (clockwise) or backward (anti-clockwise)); (2) depending on whether a vertex is removed (removal or non-removal); and (3) depending on whether *FP* is a vertex shared by two or more adjacent polygons (common vertex or non-common vertex). For the reader's understanding, some cases are explained as follows:

- Case 1 (forward, non-removal, and non-common vertex): As shown in Figure 6b, ${v}_{1}$ and ${v}_{2}$ are assigned as *AP* (blue circle) and *FP* (red point), respectively. Hypotheses are generated by moving *FP* along the *GL*, where red circles represent new possible positions of ${v}_{2}$.
- Case 2 (backward, non-removal, and non-common vertex): As shown in Figure 6c, ${v}_{3}$ and ${v}_{2}$ are assigned as *AP* and *FP*, respectively. In contrast to Case 1, *FP* is located in the backward direction of *AP*.
- Case 3 (backward, removal, and non-common vertex): As shown in Figure 6d, after removing ${v}_{2}$ (green point), ${v}_{3}$ and ${v}_{1}$ are assigned as *AP* and *FP*, respectively. New hypotheses are generated by moving ${v}_{1}$.
- Case 4 (forward, non-removal, common vertex): As shown in Figure 6e, ${v}_{2}$ and ${v}_{3}$ are assigned as *AP* and *FP*, respectively. ${v}_{3}$ is a common vertex of ${\Pi}_{A}$ and ${\Pi}_{B}$; because the position of ${v}_{3}$ changes, the shapes of both polygons change.
- Case 5 (forward, removal, common vertex): As shown in Figure 6f, ${v}_{2}$ and ${v}_{4}$ are assigned as *AP* and *FP*, respectively. After ${v}_{3}$ is removed, ${v}_{4}$ is assigned as *FP* so that the position of ${v}_{4}$ is changed.

In the MDL-based objective function, two types of weight parameters determine the relative importance of the sub-terms. One is the weight parameter ($\lambda $) balancing the model closeness and the model complexity in Equation (1). The other is the set of weight parameters (${W}_{v},{W}_{d},{W}_{\angle \theta}$) for the three sub-terms of the complexity term in Equation (6). In previous research [11], these weight parameters were set to empirically determined constants for all building models ($\lambda $ = 0.5 and ${W}_{v}={W}_{d}={W}_{\angle \theta}=1$). However, buildings in reality have different shapes and sizes. In addition, the density of LiDAR points varies with the data acquisition settings and flight height. These properties, which vary across individual buildings, may unbalance the model closeness and model complexity values. For instance, when a building shape is very simple and the number of observations is very large, the closeness value is much larger than the complexity value, so the optimization may be dominated by variations of the model closeness. Thus, the weight parameters have to be tuned in an automated manner by considering the properties of each individual building. To automatically determine proper weight values, we propose two weighting methods: (1) a Min-Max weighting method (Section 4.1); and (2) an Entropy-based weighting method (Section 4.2). The Min-Max weighting method balances the model closeness and the model complexity, while the Entropy-based weighting method determines the weight values for the three sub-terms of the complexity term.

The proposed MDL-based objective function consists of two conflicting terms: the model closeness term $\mathcal{L}\left(D|H\right)$ and the model complexity term $\mathcal{L}\left(H\right)$, as shown in Equation (1). $\lambda $ is a weight parameter that affects the modeling result: the smaller the value of $\lambda $, the simpler the optimal model. In contrast, a larger value of $\lambda $ emphasizes goodness-of-fit to the data, causing an under-simplified model (or over-fitting problem) (see Figure 7). To automatically estimate an appropriate weight value, we adopt the Min-Max criterion [40], which minimizes the possible loss while maximizing the gain. In this study, the Min-Max principle amounts to minimizing the cost value *DL* for each $\lambda $ and maximizing the contributions from both terms, thereby finding the optimal ${\lambda}^{*}\in \left[0,1\right]$. This avoids the scenario in which one of the two terms dominates because of an excessively low or high value of $\lambda $. To achieve this goal, the “Min” operator first finds the optimal model for each $\lambda $ using Equation (2). Considering the boundary conditions, $\mathcal{L}\left(H\right)$ at $\lambda =0$ and $\mathcal{L}\left(D|H\right)$ at $\lambda =1$ correspond to zero. Then, $\mathcal{L}\left(D|H\right)$ and $\mathcal{L}\left(H\right)$ are each normalized using the min-max normalization method as follows:

$${z}_{i}=\frac{{x}_{i}-min\left(x\right)}{max\left(x\right)-min\left(x\right)}$$

(7)

where ${z}_{i}$ is the normalized value of the *i*th variable ${x}_{i}$, and $min\left(x\right)$ and $max\left(x\right)$ are the minimum and maximum values of variable *x*. After the total *DL* value is computed from the normalized $\mathcal{L}\left(D|H\right)$ and $\mathcal{L}\left(H\right)$ for each $\lambda $, the “Max” operator derives the optimal weight value ${\lambda}^{*}$ by selecting the worst scenario, i.e., the maximum *DL*. Figure 7 shows an example of the Min-Max weighting method. As shown in Figure 7a, when $\lambda $ is close to 0, a simple model is selected as the optimal model. As $\lambda $ gets larger, the optimal model becomes more complex because the *DL* value is increasingly affected by the closeness term. In this example, 0.4 is selected as the best $\lambda $ because it produces the maximum *DL* value.
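The Min-Max selection can be sketched as follows, assuming the “Min” step has already produced, for each candidate $\lambda $, the closeness and complexity values of its optimal model (hypothetical helper names, not the authors' implementation):

```python
# Min-Max weighting: normalize both terms with Equation (7), then pick the
# lambda whose total DL is maximal (the "worst scenario").
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def select_lambda(lambdas, closeness, complexity):
    """closeness[i], complexity[i]: terms of the optimal model at lambdas[i]."""
    zc = min_max_normalize(closeness)
    zh = min_max_normalize(complexity)
    dl = [l * c + (1 - l) * h for l, c, h in zip(lambdas, zc, zh)]
    return lambdas[dl.index(max(dl))]
```

With closeness decreasing and complexity increasing in $\lambda $, as in Figure 7, the maximum *DL* is attained at an intermediate value.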

Prior to determining the weight parameter $\lambda $, we estimate the weight values of geometric parameters forming the complexity term $\mathcal{L}\left(H\right)$ in Equation (6). The $\mathcal{L}\left(H\right)$ consists of three geometric terms including the number of vertices, the number of identical line directions and the inner angle transition.

In multi-attribute decision making, the entropy weighting method, one of the objective weighting methods, is used to determine appropriate weights for attributes [41]: the greater the entropy of an attribute, the smaller that attribute's weight. We adopt the entropy weighting method to determine the relative importance of the three geometric terms in Equation (6). In information theory, entropy is a measure of uncertainty about attributes drawn from data and is characterized as follows:

$$E(X)=-{\displaystyle \sum}_{i=1}^{n}p({x}_{i}){\mathrm{log}}_{2}\text{}p({x}_{i})$$

(8)

The basic formulation can be rewritten to calculate entropy in the existence of two possibilities *p* and *q* = 1 − *p* as follows:

$$E=-\left(p{\mathrm{log}}_{2}\text{}p+q{\mathrm{log}}_{2}\text{}q\right)$$

(9)

where *p* represents the event that the current hypothesized parameter set belongs to the class of optimal model parameters, and *q* indicates the complementary event. In this study, a probability for each term in Equation (6) is derived by calculating the probability that each geometric factor of a given model converges to the optimal model. The optimal model in terms of model complexity, according to the definition discussed in Section 3.1, is a rectangle: the number of vertices is four, the number of identical line directions is two, and no inner angle incurs a penalty. Thus, the probability that four vertices are randomly selected from ${N}_{v}$ vertices is one over the number of four-element combinations of ${N}_{v}$, $p\left(v\right)=1/{C}_{4}^{{N}_{v}}$. Similarly, the probability that two identical line directions are selected from ${N}_{d}$ identical line directions is $p\left(d\right)=1/{C}_{2}^{{N}_{d}}$. The probability of an inner angle incurring no penalty in Equation (5) is 3/16. Because all inner angles must incur no penalty in the optimal model, the probability for ${N}_{\angle \theta}$ is $p\left(\angle \theta \right)={N}_{v}\times 3/16$. The estimated probabilities are converted into entropies using Equation (9). The weight parameters for the three sub-terms are then determined as suggested in previous studies [41,42]:

$${W}_{v}=\frac{1-E\left(v\right)}{3-\left(E\left(v\right)+E\left(d\right)+E\left(\angle \theta \right)\right)},\text{}{W}_{d}=\frac{1-E\left(d\right)}{3-\left(E\left(v\right)+E\left(d\right)+E\left(\angle \theta \right)\right)},\text{}{W}_{\angle \theta}=\frac{1-E\left(\angle \theta \right)}{3-\left(E\left(v\right)+E\left(d\right)+E\left(\angle \theta \right)\right)}$$

(10)
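The entropy weighting of Equations (9) and (10) can be sketched as follows (our own function names; the sketch assumes counts small enough that the probability formulas of the text stay below 1, and uses the $0\cdot lo{g}_{2}0=0$ convention for degenerate probabilities):

```python
# Entropy-based weights for the three complexity sub-terms, Equations (9)-(10).
import math

def binary_entropy(p):
    """Two-outcome entropy E = -(p*log2(p) + q*log2(q)), q = 1 - p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0                # 0*log2(0) convention for degenerate cases
    q = 1.0 - p
    return -(p * math.log2(p) + q * math.log2(q))

def complexity_weights(n_v, n_d):
    """Weights (W_v, W_d, W_angle) from the probabilities given in the text."""
    p_v = 1.0 / math.comb(n_v, 4)      # four vertices out of N_v
    p_d = 1.0 / math.comb(n_d, 2)      # two directions out of N_d
    p_a = n_v * 3.0 / 16.0             # as given in the text; valid while < 1
    e = [binary_entropy(p) for p in (p_v, p_d, p_a)]
    denom = 3.0 - sum(e)
    return tuple((1.0 - ei) / denom for ei in e)   # Equation (10)
```

By construction the three weights are non-negative and sum to one, so high-entropy (uninformative) factors receive small weights.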

The performance of the proposed method was evaluated on the ISPRS benchmark datasets provided by ISPRS WGIII/4 [43]. The benchmark consists of three sub-regions (Area 1, Area 2, and Area 3) of the Vaihingen dataset and two sub-regions (Area 4 and Area 5) of the Toronto dataset (Figure 8). The Vaihingen dataset was acquired with a Leica ALS50 system at an altitude of 500 m above ground level in August 2008. Ten strips overlap at a rate of 30%, and the average point density is approximately 6.7 points/m^{2} (~0.39 m point spacing). The 3D positional accuracy is approximately ±10 cm. The Vaihingen dataset contains typical European building types with various shapes, including gable roofs, hip roofs, and their mixed structures. The Toronto dataset was acquired with Optech's ALTM-ORION M system at an altitude of 650 m in 2009. The test area includes six strips with an average point density of about 6 points/m^{2} (~0.41 m point spacing). The dataset contains scene characteristics representative of a modern North American mega city, including a mixture of low- and high-story buildings and a complex cluster of high-rise buildings. For both datasets, reference building models were generated by manual stereo plotting. A more detailed description of the data can be found in [43]. To extract the building points, we applied the classification method described in [44].

The ISPRS benchmark project on urban classification and 3D building modeling led by ISPRS WGIII/4 provides evaluation metrics to assess the results obtained from the latest state-of-the-art algorithms for building detection and 3D building reconstruction [2]. The ISPRS evaluation metrics were designed to measure the performance of individual algorithms by comparing multiple evaluation indices, including the confusion matrix (area-based and object-based), topological analysis among roof planes, and geometric accuracy (RMSE). Thus, the ISPRS metrics are used to evaluate our proposed method. In addition, we added two shape similarity measures (Hausdorff distance and turning function distance) and an angle-based evaluation index to evaluate other aspects of the reconstructed building models. The Hausdorff distance measures shape similarity between reference models and algorithmic models by taking the maximum distance among the minimum distances measured between each vertex of the two model datasets [45]. In contrast to the RMSE, which assesses the average difference between two models, the Hausdorff distance can measure the maximum shape difference caused by over-simplification and under-simplification without any pre-defined proximity criterion. The turning function distance represents a cumulative measure of the angles through which a polygonal curve turns [46]. It enables the direct measurement of turning pattern similarity between reference and algorithmic models and can thus measure the resemblance between two models at a global scale. Additionally, an angle-based evaluation index measures the difference between the main orientation of a building in the reference dataset and that in the results produced by an algorithm. The main orientation of a building model is determined by analyzing the frequency of building lines over the eight direction zones generated by the CLF.
Table 1 summarizes the evaluation indices used in this paper.
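The vertex-based Hausdorff distance described above can be sketched as follows (an illustrative implementation over vertex lists; function names are ours):

```python
# Vertex-based Hausdorff distance: for each vertex, the distance to the closest
# vertex of the other model; the maximum over both directions is reported.
import math

def directed_hausdorff(a, b):
    """max over vertices of a of the distance to the closest vertex of b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two vertex sets a and b."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

Because it keeps the worst-matched vertex, a single over- or under-simplified corner dominates the score, which is exactly the behavior contrasted with the averaging RMSE.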

Evaluations using the confusion matrix were applied under three conditions: (a) the area-based method applied to outer building boundaries; and the object-based method applied (b) to all roof planes and (c) to roof planes with an area larger than 10 m^{2} (Table 2).

In the area-based evaluation (Table 2a), the completeness, correctness, and quality of the reconstructed building models are 91.5%, 97.4%, and 89.2%, respectively. The results indicate that most of the resulting building models properly overlap the corresponding reference building models. The error rate for the completeness is larger than that for the correctness. This is because most boundary points derived from irregularly distributed points are not located exactly on the real building outline but often feature a small offset from it. These inexact observations cause a boundary displacement that is generally directed toward the inside of the building. As a result, a reconstructed building model tends to be shrunken compared with the reference building model, which increases the *FNs* and decreases the *TPs*, degrading the completeness.

In the object-based evaluation, a roof plane in one dataset was considered a true positive if a certain minimum percentage of its area (50% overlap) is covered by a roof plane in the other dataset. While the completeness, correctness, and quality for all roof planes are 79.5%, 96.0%, and 77.3%, respectively (Table 2b), these values increase to 93.8%, 96.9%, and 91.3% if only large roof planes (>10 m^{2}) are considered (Table 2c). The results indicate that small roof planes were not detected as reliably by our proposed method. This is mainly caused by the small number of points on small roof planes, which makes it difficult to extract sufficient modeling cues for reconstructing rooftop models. Figure 9 clearly shows the effect of roof plane size. When only roof planes with an area smaller than 5 m^{2} are considered, the completeness is considerably low for all five datasets. In particular, the completeness values for Area 2 (Figure 9b) and Area 5 (Figure 9e) were 26.3% and 37.4%, respectively. We observed that buildings in these two regions have many small objects on their roofs that are represented in the reference building rooftop models.

Object-based evaluation as a function of the roof plane area: (**a**) Area 1; (**b**) Area 2; (**c**) Area 3; (**d**) Area 4; and (**e**) Area 5.

As shown in Table 2, the area-based evaluations show that similar levels of model quality were achieved for both the Vaihingen dataset and the Toronto dataset. However, the object-based evaluations indicate that the model quality for the Vaihingen dataset is better than that for the Toronto dataset. This is mainly related to segmentation errors, which occur more often in complex scenes. We observed that many roof planes in the Toronto dataset were under-segmented by the merging of adjacent clusters. As a result, building rooftop models generated from under-segmented clusters lowered the completeness.

In addition, we compared our evaluation results with those of other algorithms reported in [2], where area-based evaluation results were not reported (Table 3). The object-based evaluation results (Table 3a) demonstrate that our method outperforms the other building reconstruction algorithms, except for BNU, in terms of completeness and quality. In particular, when roof planes with an area larger than 10 m^{2} were considered, our proposed method showed more accurate results. BNU, which outperforms our method, was assessed only for Area 3; with regard to robustness, our proposed method outperforms BNU. The correctness of our method is better than the average of all other evaluated methods. Considering that the correctness is above 90% for all compared methods except MON and FIE, the correctness of our method is sufficiently high. The advantage of our method is further demonstrated on the Toronto dataset, which consists of complex buildings: only three participants submitted results for the Toronto dataset, and our method achieved the best results for all indices.

Geometric errors in planimetry and in height were assessed using the RMSE. The RMSE measures the Euclidean distance in two ways: (1) from a vertex in the reconstructed rooftop model to its closest vertex in the reference model; and (2) from a vertex in the reference model to its closest vertex in the reconstructed rooftop model. Note that only distances shorter than a certain tolerance (<3 m) were considered, as introduced by [2].

The average RMSEs of planimetric distances for the Vaihingen dataset and the Toronto dataset are 0.76 m and 0.96 m, respectively. As shown in Table 3b, this geometric accuracy is better than the average geometric accuracy of the building models reconstructed by the other algorithms. Figure 10 shows the cumulative histogram of geometric accuracy in RMSE over the five sub-regions. Overall, more than 70% of the evaluated vertices are located within an RMSE of 1.25 m. In most test regions, the RMSE of reference vertices (Figure 10b) is better than the RMSE of extracted vertices (Figure 10a). The reason is that the proposed method produces under-simplified models with redundant vertices (i.e., more vertices than the reference model). Note that only the closest vertex within the tolerance distance (<3 m) was used to calculate the RMSE. Thus, the RMSE of extracted vertices, which include redundant vertices, tends to be worse than that of reference vertices.

The cumulative histogram of geometrical errors: (**a**) RMSE of extracted vertices with respect to reference vertices; and (**b**) RMSE of reference vertices with respect to extracted vertices.

The Hausdorff distance was applied to 2D outer boundaries and to 3D roof planes with *1:1* correspondence, respectively (Table 4b). The average Hausdorff distances for 2D outer boundaries and for 3D roof planes are 1.81 m and 1.17 m, respectively. The results show that the maximum distance between the vertices of the reference rooftop models and the extracted rooftop models is expected to be less than roughly twice the RMSE of our proposed method. In addition, the average Hausdorff distance for 2D outer boundaries is larger than that for 3D roof planes. This is mainly caused by the topological relations between roof planes. As shown in Figure 11, two roof planes that share a common edge in the reference models (or in the extracted models) were represented as separate roof planes in the extracted models (or reference models). These differing topological relations caused large shape differences in the outer boundary representation.

Examples of large Hausdorff distances for 2D outer boundaries (red: reference; green: extracted rooftop model).

The turning function distance, which measures how similar two shapes are, was applied to outer building boundaries and to roof planes with 90% overlap, respectively. Roughly, when the value is smaller than approximately 0.03, the two corresponding shapes are very similar under visual inspection; when the value is larger than approximately 0.05, the shapes are considerably dissimilar (Figure 12). Over the five sub-regions, the average turning function distances are 0.042 for 2D outer boundaries and 0.033 for 3D roof planes, respectively (Table 4c). Although the turning function distance does not come with a specific acceptability range for building rooftop models, our results can be compared with the examples given in Figure 12. The comparison indicates that the building rooftop models reconstructed by the proposed method achieve acceptable shape similarity to the reference building rooftop models under visual inspection. As with the Hausdorff distance, the turning function distance for 2D outer boundaries is larger than that for 3D roof planes due to the differing topologies and representations of the rooftop models.

Approximate ranges of turning function distance (blue: reference, red: extracted model): (**a**) 0.016; (**b**) 0.055; and (**c**) 0.105.

To evaluate the quality of the model orientation, the angle difference was measured as the difference between the dominant orientations of the reconstructed rooftop models and the reference rooftop models. Table 4a shows the angle differences for the five sub-regions; the average angle differences are 1.17° for 2D outer boundaries and 0.91° for 3D roof planes, respectively. Note that the main angles for the outer boundary and for the 3D roof planes can differ because the main angle is determined separately for each. The orientation error was entirely caused by the representative angles of the CLF, which were used to represent a regular pattern of line orientation. The representative angles of the CLF were calculated from all initial boundary lines connecting the boundary points of individual building models without any prior knowledge of building orientations. Thus, a large orientation error can accidentally arise in small building models if the angles of the boundary lines are distorted by the local distribution of boundary points.

Additionally, topology relations were assessed by comparing the overlap area between reference rooftop planes and extracted rooftop planes. Table 5 reports the number of instances of *1:1*, *1:M*, *N:1*, and *N:M* relations: more than 63% of roof planes are matched with *1:1* relations, 22% have *N:1* relations, 7% have *1:M* relations, and 8% have *N:M* relations. The topology errors are mainly caused by incorrect segmentation and incomplete modeling cues. In particular, the relatively high number of *N:1* relations is caused by under-segmentation and by superstructures on roofs, which often occur in complex scenes. Accordingly, *N:1* relations were observed more frequently in the Toronto dataset.
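The *1:1*/*1:M*/*N:1*/*N:M* classification can be illustrated by building a bipartite match graph between reference planes (rows) and extracted planes (columns) and inspecting its connected components. The sketch below is an illustration under assumptions: the 0.5 overlap threshold and the exact matching criterion are hypothetical and may differ from the evaluation protocol used in the paper.

```python
import numpy as np
from collections import deque

def topology_relations(overlap, threshold=0.5):
    """Count 1:1, 1:M, N:1 and N:M relations among matched plane groups.

    overlap[i, j] is the overlap ratio between reference plane i and
    extracted plane j; pairs at or above `threshold` are treated as matched.
    Reference planes with no match at all are left uncounted.
    """
    match = np.asarray(overlap) >= threshold
    seen_ref, seen_ext = set(), set()
    counts = {"1:1": 0, "1:M": 0, "N:1": 0, "N:M": 0}
    for start in range(match.shape[0]):
        if start in seen_ref or not match[start].any():
            continue
        # BFS over the bipartite graph to collect one connected component
        refs, exts = {start}, set()
        queue = deque([("ref", start)])
        seen_ref.add(start)
        while queue:
            side, k = queue.popleft()
            if side == "ref":
                for j in np.flatnonzero(match[k]):
                    if j not in seen_ext:
                        seen_ext.add(j); exts.add(j); queue.append(("ext", j))
            else:
                for i in np.flatnonzero(match[:, k]):
                    if i not in seen_ref:
                        seen_ref.add(i); refs.add(i); queue.append(("ref", i))
        key = ("1" if len(refs) == 1 else "N") + ":" + ("1" if len(exts) == 1 else "M")
        counts[key] += 1
    return counts
```

Under-segmentation (several reference planes covered by one extracted plane) produces a column with multiple matches and is therefore counted as *N:1*, matching the interpretation given above.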

To evaluate the effect of the weight parameters in the MDL-based objective function, we compared building models generated with fixed weight parameters against models generated with the proposed weighting methods. Area-based evaluations using the confusion matrix and shape-based indices were applied. The confusion-matrix evaluation shows an increase of 1.3% in completeness, a decrease of 0.7% in correctness, and an increase of 0.6% in quality when the proposed weighting methods were used (Table 6). For the Hausdorff distance and the turning function distance, improvements of 0.44 m and 0.003 were achieved, respectively (Table 7). While the confusion-matrix results and the turning function distance show only slight improvements, the Hausdorff distance improves considerably for all sub-regions except Area 3. The largest improvements across all evaluation methods were observed in Area 4, where a relatively large number of local-scale shape differences between extracted and reference models had been observed. Figure 13 shows an example where a local-scale shape difference is reduced by the proposed weighting methods. With fixed weight parameters, the lower part of the building model (red circle) was under-simplified (Figure 13c). This is related to the number of boundary points and the degree of model complexity: a large number of observations produced a relatively high closeness value compared with the complexity value, creating an imbalance between the two terms because fixed weight parameters do not account for the properties of an individual building model. In contrast, the closeness and complexity terms were balanced by the flexible weight parameters (Figure 13d). As shown in Table 6, Table 7 and Figure 13, applying flexible weight values has a positive effect on preserving shapes similar to the reference rooftop models.

Effect of flexible weight parameters: (**a**) boundary points; (**b**) reference building model; (**c**) building model generated with fixed weight parameters; and (**d**) building model generated with flexible weight parameters.
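The Entropy-based weighting mentioned here follows the general Shannon-entropy scheme of [41,42]: criteria whose normalized scores vary more across the alternatives (i.e., have lower entropy) carry more discriminating information and receive larger weights. The sketch below implements that generic scheme only; it is not the paper's exact MDL formulation of the closeness and complexity terms.

```python
import numpy as np

def entropy_weights(scores):
    """Entropy-based weights for criteria (columns) over alternatives (rows).

    Assumes strictly positive scores. A criterion that is uniform across all
    alternatives has maximum entropy and receives zero weight; criteria with
    more spread receive proportionally larger weights.
    """
    x = np.asarray(scores, dtype=float)
    p = x / x.sum(axis=0, keepdims=True)   # normalize each criterion column
    k = 1.0 / np.log(len(x))               # scale entropy into [0, 1]
    entropy = -k * np.sum(p * np.log(p), axis=0)
    d = 1.0 - entropy                      # degree of divergence per criterion
    return d / d.sum()
```

For example, a column of identical scores contributes nothing, while a column with a strong spread absorbs nearly all the weight; this mirrors how the flexible weighting rebalances the closeness and complexity terms per building instead of using one fixed trade-off.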

Figure 14 visualizes reconstructed rooftop models for representative buildings of the five sub-regions. Visual inspection indicates that the proposed building reconstruction method can robustly provide accurate, regularized 3D building rooftop models in both simple and complex scenes. Figure 15 shows all reconstructed building rooftop models over our test datasets.

Reconstructed building models with complex roof structure: (**a**) image; (**b**) LiDAR point clouds; and (**c**) perspective view of the reconstructed 3D building model.

In this study, we proposed an automatic 3D building reconstruction method that covers the full chain of rooftop modeling. Building-labeled points were segmented into homogeneous clusters with a hierarchical structure, which enables explicit interpretation of the building rooftop configuration. To effectively gather evidence of the rooftop structure, three types of linear modeling cues, namely intersection lines, step lines, and boundaries, were extracted separately according to their characteristics. In the proposed method, regularization is the most important process: it implicitly imposes geometric regularities on reconstructed rooftop models based on the MDL principle. In the MDL framework, finding a regularized rooftop model was cast as a model selection problem. The best model was selected by minimizing the *DL* value among competing hypotheses generated by a newly designed hypothesis generation process. To automatically control the weight parameters, a Min-Max based weighting method and an Entropy-based weighting method were proposed. The experimental results showed that the proposed method provides qualitatively and quantitatively well-regularized 3D building rooftop models. More specifically, the results are summarized as follows:

- The proposed method provided a robust solution for 3D rooftop modeling regardless of scene complexity, e.g., typical European-style structures with relatively simple building shapes as well as complex clusters of high-rise buildings. This is achieved by the hierarchical clustering of building rooftop points. Even where modeling cues were incompletely extracted, the BSP method produced geometrically and topologically correct rooftop models.
- Evaluation results using the confusion matrix showed that the proposed method outperforms other building reconstruction algorithms. However, object-based evaluation results indicated that our method has a limitation in extracting small rooftops. This is a common problem in data-driven approaches because it is difficult to extract modeling cues from a small number of roof points. One possible solution is to combine the data-driven method with a model-driven method, exploiting their complementary properties.
- The proposed weighting methods have a positive effect on the building regularization process. Results for the Hausdorff distance showed that the values are considerably improved when flexible weight parameters are applied in the MDL objective function. In particular, shape deformation (under-simplified or over-simplified models) at a local scale was reduced by the proposed method.
- Angle-based evaluation shows that the method has a 1.17° difference compared to the reference. However, the main orientations of building models in this study were determined without any prior knowledge; thus, accidentally large orientation errors can occur for small buildings. One possible solution is to use image data, which can explicitly provide the orientation of a building model.
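The completeness, correctness, and quality figures referred to throughout the evaluation follow the standard area-based definitions used in the ISPRS benchmark, computed from the true positive (TP), false positive (FP), and false negative (FN) areas of the confusion matrix. A minimal helper makes the relationship explicit:

```python
def area_based_quality(tp, fp, fn):
    """Standard area-based metrics from confusion-matrix areas (pixels or m^2).

    completeness: fraction of the reference area that was reconstructed
    correctness:  fraction of the reconstructed area that is correct
    quality:      combined measure penalizing both omissions and commissions
    """
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    return completeness, correctness, quality

# e.g., 80 m^2 correctly reconstructed, 10 m^2 spurious, 20 m^2 missed
c, r, q = area_based_quality(tp=80.0, fp=10.0, fn=20.0)
```

Note that quality is always bounded above by both completeness and correctness, which is why the 0.6% quality gain in Table 6 is smaller than the 1.3% completeness gain.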

In the current study, 3D point clouds obtained by airborne LiDAR were used as the primary information source for automating the reconstruction of rooftop models. However, the proposed method is also applicable to photogrammetric point clouds generated by various dense matching technologies [53]. As future work, we will investigate the impact of photogrammetric point clouds on the quality of the reconstructed 3D rooftop models, and thus seek an optimal solution that makes the proposed method robust to point clouds of varying quality. In addition, the impact of the accuracy of building detection will need to be examined, especially in relation to occlusions caused by vegetation and adjacent buildings.

This research was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery and the NSERC DATA Analytics and Visualization (DAV) CREATE program. We thank ISPRS WG III/4 benchmark team led by Dr. Franz Rottensteiner at Leibniz University of Hannover, who provided valuable data and evaluated our results for supporting this research. We also thank Woo-Sug Cho from Inha University, who provided insights and expertise that assisted the research.

The following are available online at www.mdpi.com/1424-8220/17/3/621/s1, Video S1: Sequential results of implicit regularization for 3D rooftop reconstruction.


Author Contributions

G.S. conceived and designed the research work; J.J. and Y.J. contributed to the implementation of the proposed methods, the experiments, and the data analysis; and J.J., Y.J. and G.S. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

1. Kolbe T.H., Gröger G., Plümer L. Geo-Information for Disaster Management. Springer; Berlin/Heidelberg, Germany: 2005. CityGML-interoperable access to 3D city models; pp. 883–899.

2. Rottensteiner F., Sohn G., Gerke M., Wegner J.D., Breitkopf U., Jung J. Results of the ISPRS benchmark on urban object detection and 3D building reconstruction. ISPRS J. Photogramm. Remote Sens. 2014;93:256–271. doi: 10.1016/j.isprsjprs.2013.10.004. [Cross Ref]

3. Haala N., Kada M. An update on automatic 3D building reconstruction. ISPRS J. Photogramm. Remote Sens. 2010;65:570–580. doi: 10.1016/j.isprsjprs.2010.09.006. [Cross Ref]

4. Musialski P., Wonka P., Aliaga D.G., Wimmer M., van Gool L., Purgathofer W. A survey of urban reconstruction; Proceedings of the Eurographics 2012; Cagliari, Italy. 13–18 May 2012.

5. Wang R. 3D building modeling using images and LiDAR: A review. Int. J. Image Data Fusion. 2013;4:273–292. doi: 10.1080/19479832.2013.811124. [Cross Ref]

6. Tomljenovic I., Höfle B., Tiede D., Blaschke T. Building extraction from airborne laser scanning data: An analysis of the state of the art. Remote Sens. 2015;7:3826–3862. doi: 10.3390/rs70403826. [Cross Ref]

7. Vosselman G. Building reconstruction using planar faces in very high density height data; Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science; Munich, Germany. 8–10 September 1999; pp. 87–92.

8. Verma V., Kumar R., Hsu S. 3D building detection and modeling from aerial LiDAR data; Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; New York, NY, USA. 17–22 June 2006; pp. 2213–2220.

9. Sampath A., Shan J. Building boundary tracing and regularization from airborne LiDAR point clouds. Photogramm. Eng. Remote Sens. 2007;73:805–812. doi: 10.14358/PERS.73.7.805. [Cross Ref]

10. Huang H., Brenner C., Sester M. A generative statistical approach to automatic 3D building roof reconstruction from laser scanning data. ISPRS J. Photogramm. Remote Sens. 2013;79:29–43. doi: 10.1016/j.isprsjprs.2013.02.004. [Cross Ref]

11. Sohn G., Jwa Y., Jung J., Kim H.B. An implicit regularization for 3D building rooftop modeling using airborne data; Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Melbourne, Australia. 25 August–1 September 2012; pp. 305–310.

12. Vosselman G., Maas H.-G., editors. Airborne and Terrestrial Laser Scanning. Taylor & Francis; New York, NY, USA: 2010.

13. Milde J., Zhang Y., Brenner C., Plümer L., Sester M. Building reconstruction using a structural description based on a formal grammar; Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science; Beijing, China. 3–11 July 2008; pp. 227–232.

14. Kada M., McKinley L. 3D Building reconstruction from LiDAR based on a cell decomposition approach; Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science; Paris, France. 3–4 September 2009; pp. 47–52.

15. Lafarge F., Descombes X., Zerubia J., Pierrot-Deseilligny M. Structural approach for building reconstruction from a single DSM. IEEE Trans. Pattern Anal. Mach. Intell. 2010;32:135–147. doi: 10.1109/TPAMI.2008.281. [PubMed] [Cross Ref]

16. Rottensteiner F., Trinder J., Clode S., Kubik K. Automated delineation of roof planes from LiDAR data; Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science; Enschede, The Netherlands. 12–14 September 2005; pp. 221–226.

17. Kada M., Wichmann A. Sub-surface growing and boundary generalization for 3D building reconstruction; Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Melbourne, Australia. 25 August–1 September 2012; pp. 233–238.

18. Tarsha-Kurdi F., Landes T., Grussenmeyer P. Extended RANSAC algorithm for automatic detection of building roof planes from LiDAR data. Photogram. J. Finl. 2008;21:97–109.

19. Sampath A., Shan J. Segmentation and reconstruction of polyhedral building roofs from aerial LiDAR point clouds. IEEE Trans. Geosci. Remote Sens. 2010;48:1554–1567. doi: 10.1109/TGRS.2009.2030180. [Cross Ref]

20. Lafarge F., Mallet C. Creating large-scale city models from 3d-point clouds: A robust approach with hybrid representation. Int. J. Comput. Vis. 2012;99:69–85. doi: 10.1007/s11263-012-0517-8. [Cross Ref]

21. Yan J., Shan J., Jiang W. A global optimization approach to roof segmentation from airborne LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2014;94:183–193. doi: 10.1016/j.isprsjprs.2014.04.022. [Cross Ref]

22. Sohn G., Huang X., Tao V. Using a binary space partitioning tree for reconstructing polyhedral building models from airborne LiDAR data. Photogramm. Eng. Remote Sens. 2008;74:1425–1438. doi: 10.14358/PERS.74.11.1425. [Cross Ref]

23. Dorninger P., Pfeifer N. A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors. 2008;8:7323–7343. doi: 10.3390/s8117323. [PMC free article] [PubMed] [Cross Ref]

24. Zhou Q.Y., Neumann U. Fast and extensible building modeling from airborne LiDAR data; Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems; Irvine, CA, USA. 5–7 November 2008.

25. Rau J.Y., Lin B.C. Automatic roof model reconstruction from ALS data and 2D ground plans based on side projection and the TMR algorithm. ISPRS J. Photogramm. Remote Sens. 2011;66:s13–s27. doi: 10.1016/j.isprsjprs.2011.09.001. [Cross Ref]

26. Oude Elberink S., Vosselman G. Building reconstruction by target based graph matching on incomplete laser data: Analysis and limitations. Sensors. 2009;9:6101–6118. doi: 10.3390/s90806101. [PMC free article] [PubMed] [Cross Ref]

27. Perera S., Maas H.G. Cycle graph analysis for 3D roof structure modelling: Concepts and performance. ISPRS J. Photogramm. Remote Sens. 2014;93:213–226. doi: 10.1016/j.isprsjprs.2014.04.017. [Cross Ref]

28. Brenner C. Building extraction. In: Vosselman G., Maas H.-G., editors. Airborne and Terrestrial Laser Scanning. Whittles Publishing; Scotland, UK: 2010.

29. Douglas D., Peucker T. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Can. Cartogr. 1973;10:112–122. doi: 10.3138/FM57-6770-U75U-7727. [Cross Ref]

30. Morgan M., Habib A. Interpolation of LiDAR data and automatic building extraction; Proceedings of ACSM-ASPRS 2002 Annual Conference; Denver, CO, USA. 12–14 November 2002.

31. Fischler M.A., Bolles R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM. 1981;24:381–395. doi: 10.1145/358669.358692. [Cross Ref]

32. Ameri B. Ph.D. Thesis. University of Stuttgart; Stuttgart, Germany: 2000. Automatic Recognition and 3D Reconstruction of Buildings through Computer Vision and Digital Photogrammetry.

33. Weidner U., Förstner W. Towards automatic building extraction from high resolution digital elevation models. ISPRS J. Photogramm. Remote Sens. 1995;50:38–49. doi: 10.1016/0924-2716(95)98236-S. [Cross Ref]

34. Jwa Y., Sohn G., Cho W., Tao V. An Implicit geometric regularization of 3D building shape using airborne LiDAR data; Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science; Beijing, China. 3–11 July 2008; pp. 69–76.

35. Zhou Q.Y., Neumann U. 2.5D building modeling by discovering global regularities; Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Providence, RI, USA. 16–21 June 2012; pp. 326–333.

36. Ameri B., Fritsch D. Automatic 3D building reconstruction using plane-roof structures; Proceedings of the American Society for Photogrammetry and Remote Sensing Conference; Washington, DC, USA. 21–26 May 2000.

37. Rissanen J. Modeling by the shortest data description. Automatica. 1978;14:465–471. doi: 10.1016/0005-1098(78)90005-5. [Cross Ref]

38. Grünwald P. A tutorial introduction to the minimum description length principle. In: Grünwald P., Myung I.J., Pitt M., editors. Advances in Minimum Description Length: Theory and Applications. MIT Press; Cambridge, MA, USA: 2005. pp. 3–81.

39. Davies R.H., Twining C.J., Cootes T.F., Waterton J.C., Taylor C.J. A minimum description length approach to statistical shape modeling. IEEE Trans. Med. Imaging. 2002;21:525–537. doi: 10.1109/TMI.2002.1009388. [PubMed] [Cross Ref]

40. Gennert M.A., Yuille A.L. Determining the optimal weights in multiple objective function optimization; Proceedings of the Second International Conference on Computer Vision; Tampa, FL, USA. 5–8 December 1988; pp. 87–89.

41. Lotfi F.H., Fallahnejad R. Imprecise Shannon’s entropy and multi Attribute decision making. Entropy. 2010;12:53–62. doi: 10.3390/e12010053. [Cross Ref]

42. Zou Z., Yun Y., Sun J. Entropy method for determination of weight of evaluating indicators in fuzzy synthetic evaluation for water quality assessment. J. Environ. Sci. 2006;18:1020–1023. doi: 10.1016/S1001-0742(06)60032-6. [PubMed] [Cross Ref]

43. ISPRS ISPRS Test Project on Urban Classification and 3D Building Reconstruction and Semantic Labeling. [(accessed on 15 December 2016)]. Available online: http://www2.isprs.org/commissions/comm3/wg4/tests.html.

44. Sohn G., Jwa Y., Kim H.B. Automatic powerline scene classification and reconstruction using airborne LiDAR data; Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Melbourne, Australia. 25 August–1 September 2012; pp. 167–172.

45. Huttenlocher D.P., Klanderman G.A., Rucklidge W.J. Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993;15:850–863. doi: 10.1109/34.232073. [Cross Ref]

46. Arkin E., Chew L.P., Huttenlocher D.P., Kedem K., Mitchell J.S.B. An efficiently computable metric for comparing polygonal shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1991;13:209–215. doi: 10.1109/34.75509. [Cross Ref]

47. Awrangjeb M., Zhang C., Fraser C.S. Automatic extraction of building roofs using LIDAR data and multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2013;83:1–18. doi: 10.1016/j.isprsjprs.2013.05.006. [Cross Ref]

48. Oude Elberink S., Vosselman G. Quality analysis on 3D building models reconstructed from airborne laser scanning data. ISPRS J. Photogramm. Remote Sens. 2011;66:157–165. doi: 10.1016/j.isprsjprs.2010.09.009. [Cross Ref]

49. Xiong B., Oude Elberink S., Vosselman G. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds. ISPRS J. Photogramm. Remote Sens. 2014;93:227–242. doi: 10.1016/j.isprsjprs.2014.01.007. [Cross Ref]

50. Perera S., Nalani H.A., Maas H.G. An automated method for 3D roof outline generation and regularization in airborne laser scanner data; Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Melbourne, Australia. 25 August–1 September 2012; pp. 281–286.

51. Bulatov D., Häufel G., Meidow J., Pohl M., Solbrig P., Wernerus P. Context-based automatic reconstruction and texturing of 3D urban terrain for quick-response tasks. ISPRS J. Photogramm. Remote Sens. 2014;93:157–170. doi: 10.1016/j.isprsjprs.2014.02.016. [Cross Ref]

52. Zhang W., Grussenmeyer P., Yan G., Mohamed M. Primitive-based building reconstruction by integration of LiDAR data and optical imagery; Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science; Calgary, AB, Canada. 29–31 August 2011; pp. 7–12.

53. Shahbazi M., Sohn G., Théau J., Ménard P. Revisiting intrinsic curves for efficient dense stereo matching; ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Prague, Czech Republic. 12–19 July 2016; pp. 123–130.

Articles from Sensors (Basel, Switzerland) are provided here courtesy of **Multidisciplinary Digital Publishing Institute (MDPI)**
