

Sensors (Basel). 2017 February; 17(2): 242.

Published online 2017 February 3. doi: 10.3390/s17020242

PMCID: PMC5336074

Angelo Maria Sabatini, Academic Editor

Received 2016 September 27; Accepted 2017 January 17.

Copyright © 2017 by the authors.

Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).


This paper presents a novel structure-from-motion methodology using 2D lidars (Light Detection And Ranging). In 3D space, 2D lidars do not provide sufficient information for pose estimation. For this reason, additional sensors have been used along with the lidar measurement. In this paper, we use a sensor system that consists of only 2D lidars, without any additional sensors. We propose a new method of estimating both the 6D pose of the system and the surrounding 3D structures. We compute the pose of the system using line segments of scan data and their corresponding planes. After discarding the outliers, both the pose and the 3D structures are refined via nonlinear optimization. Experiments with both synthetic and real data show the accuracy and robustness of the proposed method.

Two-dimensional lidars (Light Detection And Ranging) have been among the most important sensors in many robotic applications because of their accuracy and robustness in measuring distance. A 2D lidar provides two-dimensional distance measurements on its own scanning plane. Many robotic systems [1,2,3,4] that require localization or even simultaneous localization and mapping (SLAM) have successfully utilized 2D lidars as their essential sensor. In recent years, many low-cost depth cameras have become popular (e.g., Microsoft Kinect, Intel RealSense, etc.) and have been used for various applications, such as photometric vision [5,6], object recognition [7,8] and odometry estimation [9,10,11]. However, 2D lidars are still preferred because of their long range and robustness to lighting conditions.

Compared to depth cameras, one of the key drawbacks of 2D lidars is that they provide only two-dimensional distance measurements at a time. A 2D lidar usually consists of a laser range finder and a rotating mirror that measures the distances of scene points on a scan plane. Two-dimensional localization and mapping [12] is a popular application of 2D lidars, which is useful for wheeled robots and ground vehicles. It works under the assumption that the motion of the lidar is two-dimensional, i.e., that the measurements have been collected at a consistent height. Once this assumption is broken by a tilting motion of the lidar, there is no way to estimate the six degree-of-freedom (DOF) motion of the lidar.

Several approaches have been proposed to handle 2D lidars in 3D space. The first approach is to fuse additional motion information with the lidar measurements. Many systems utilize inertial measurement units (IMUs) to compensate for the tilting motion of the sensor [13,14,15]. However, this sensor fusion mostly aims to make the measurements of the tilted sensor match the 2D map, so that the compensated measurements can be used for 2D applications.

The second and most preferred solution is to build a 3D lidar system. A single 2D lidar on a rotating stage provides 3D measurements around the sensor [16,17,18]. This 3D scanning device has become an essential sensor for outdoor and automated systems such as self-driving cars [19,20,21,22]. Most of the successful systems participating in the Defense Advanced Research Projects Agency (DARPA) Urban Challenge [23], which requires self-driving capability in urban environments, have been equipped with a 3D lidar system. Zhang and Singh [24] rotated a 2D lidar to obtain depth information. The advantage of their work is that the whole 3D structure is captured from a static pose, while the proposed system must move. To use their system, however, the motor must also be calibrated with high accuracy because their work assumes that the rotation speed is known. Moreover, their algorithm may diverge in the optimization process because it must account for the system's motion during a single 3D scan.

The final approach is to use visual information along with the lidar measurements [25]. Because vision sensors provide abundant information about the scene, fusing the visual information with the distance measurements of the 2D lidars enables 6D pose estimation. Pose estimation using cameras is related to 'structure-from-motion (SFM)', one of the most popular problems in computer vision. However, it requires many distinguishable visual features, and camera-based SFM fails if there are few visual features, e.g., in homogeneous areas.

In this paper, we propose a novel methodology for estimating the 6D pose of a system that consists of only 2D lidars. It is basically similar to the SFM framework, but is purely based on the measurements of the 2D lidars. Instead of visual features corresponding to a 3D point cloud, we utilize line segments of the scan data corresponding to the planar structures of a target scene. The pose of the system is initialized and refined using a number of line–plane correspondences, and the 3D structures of the target scene are also updated via a nonlinear refinement process. The proposed algorithm is verified using real data. It successfully estimated the 3D structures and the 6D poses of the sensors without any additional sensors or motion constraints.

The flow chart of the proposed method is shown in Figure 1. It is divided into two steps: the initialization step and the SFM step. In the first step, we assume that the parameters of the three planes scanned by the 2D lidars are known. Line–plane correspondences are designated manually, and the sensor pose is estimated using these correspondences (Section 3). The pose estimation is iterated until new planes are detected (Section 4.2). If we detect a new plane, we refine both the sensor poses and the plane parameters by minimizing the sum of the squared Euclidean distances between the scan data and the corresponding planes (Section 4.3). The SFM process after the initialization step is similar to that of the initialization step, except that the designation of line–plane correspondences is done automatically using the previous pose information (Section 4.1). After initializing the sensor pose, both the sensor poses and the plane parameters are refined via nonlinear optimization, even when a new plane is not detected.

This section presents the algorithm of estimating the pose of a sensor system that consists of 2D lidars. It is assumed that the system is fully calibrated. The sensor system and its own coordinate system are referred to as the ‘sensor’ and ‘sensor coordinate system’, respectively, for the rest of this paper.

In this section, we show that the sensor pose is represented with two lines on two planes. Let us assume that we have a plane in the world coordinate system with known parameters, $\mathrm{\Pi}={\left[{\mathbf{u}}^{\top}\phantom{\rule{4pt}{0ex}}d\right]}^{\top}$, where $\mathbf{u}$ and *d* denote the unit normal vector and the constant term of the plane, respectively. If it is scanned by a 2D lidar, the intersecting line of the plane and the lidar’s scan plane appears in the scan data. The line can be defined by two points ${\mathbf{p}}^{\prime}$ and ${\mathbf{q}}^{\prime}$ in the sensor coordinate system. They are also represented as $\mathbf{p}=\mathbf{R}{\mathbf{p}}^{\prime}+\mathbf{t}$ and $\mathbf{q}=\mathbf{R}{\mathbf{q}}^{\prime}+\mathbf{t}$ in the world coordinate system, where $\left[\mathbf{R}\phantom{\rule{4pt}{0ex}}\mathbf{t}\right]$ is the sensor-to-world transformation, i.e., the sensor pose. The direction vector $\mathbf{v}\equiv \mathbf{p}-\mathbf{q}$ of the line in the world coordinate system becomes simply

$$\mathbf{v}=\mathbf{R}{\mathbf{v}}^{\prime},$$

(1)

where ${\mathbf{v}}^{\prime}\equiv {\mathbf{p}}^{\prime}-{\mathbf{q}}^{\prime}$ is the direction vector of the line in the sensor coordinate system.

The points $\mathbf{p}$ and $\mathbf{q}$ must lie on the plane Π, and this gives

$$\begin{array}{cc}& {\mathrm{\Pi}}^{\top}\left[\begin{array}{c}\mathbf{p}\\ 1\end{array}\right]={\mathbf{u}}^{\top}(\mathbf{R}{\mathbf{p}}^{\prime}+\mathbf{t})+d=0,\hfill \end{array}$$

(2)

$$\begin{array}{c}{\mathbf{u}}^{\top}\mathbf{R}({\mathbf{p}}^{\prime}-{\mathbf{q}}^{\prime})={\mathbf{u}}^{\top}\mathbf{R}{\mathbf{v}}^{\prime}=0.\hfill \end{array}$$

(3)

Once we have multiple line–plane correspondences and the known rotation $\mathbf{R}$, the translation $\mathbf{t}$ is simply estimated using Equation (2) via a linear estimation. Thus, we focus on the estimation of the rotation matrix $\mathbf{R}$.
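For a fixed rotation, Equation (2) yields one linear equation in $\mathbf{t}$ per line endpoint, so the translation follows from a small linear system. The following numpy sketch illustrates this linear estimation; the function name and data layout (planes as `(u, d)` pairs, lines as endpoint pairs) are our own assumptions, not the authors' code:

```python
import numpy as np

def estimate_translation(R, planes, lines):
    """Linear estimation of t from Eq. (2): u^T (R x + t) + d = 0 holds
    for each endpoint x of a scan line lying on its corresponding plane.
    planes: list of (u, d) with u a unit normal; lines: list of (p, q)
    endpoint pairs in the sensor frame, in corresponding order."""
    A, b = [], []
    for (u, d), (p, q) in zip(planes, lines):
        for x in (p, q):
            A.append(u)                      # row: u^T t = -(u^T R x + d)
            b.append(-(u @ (R @ x) + d))
    t, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return t
```

With three planes whose normals span 3D space, the stacked 6x3 system has full column rank, so the least-squares solution is unique.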

We first show that the rotation $\mathbf{R}$ can be decomposed into the two parts, one computed using the direction vectors in the world coordinate system and the other computed using the direction vectors in the sensor coordinate system.

Let us assume that we have two scan lines ${\mathbf{l}}_{1}$ and ${\mathbf{l}}_{2}$ corresponding to two planes ${\mathrm{\Pi}}_{1}$ and ${\mathrm{\Pi}}_{2}$, respectively, as shown in Figure 2. The direction vector of ${\mathbf{l}}_{n}$ is defined as ${\mathbf{v}}_{n}$ in the world coordinate system, and ${\mathbf{v}}_{n}^{\prime}$ in the sensor coordinate system. It should be noted that the plane parameters ${\mathrm{\Pi}}_{n}$ are defined only in the world coordinate system.

Parameterization of the rotation matrix using two line angles, ${\theta}_{1}$ and ${\theta}_{2}$, on the scene planes. The sensor-to-world rotation matrix $\mathbf{R}$ is expressed as two rotation matrices ${\mathbf{R}}_{iw}$ and ${\mathbf{R}}_{is}$.

We define an intermediate coordinate system whose *x*-*z* plane coincides with the plane spanned by ${\mathbf{v}}_{1}$ and ${\mathbf{v}}_{2}$ and whose *x*-axis coincides with ${\mathbf{v}}_{1}$. The rotation ${\mathbf{R}}_{iw}$ from the intermediate coordinate system to the world coordinate system becomes

$${\mathbf{R}}_{iw}=\left[\begin{array}{ccc}{\mathbf{r}}_{iw}^{1}& {\mathbf{r}}_{iw}^{2}& ({\mathbf{r}}_{iw}^{1}\times {\mathbf{r}}_{iw}^{2})\end{array}\right],$$

(4)

where

$${\mathbf{r}}_{iw}^{1}=\frac{{\mathbf{v}}_{1}}{|{\mathbf{v}}_{1}|},\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}{\mathbf{r}}_{iw}^{2}=\frac{{\mathbf{v}}_{1}\times {\mathbf{v}}_{2}}{|{\mathbf{v}}_{1}\times {\mathbf{v}}_{2}|}.$$

(5)

Similarly, the rotation ${\mathbf{R}}_{is}$ from the intermediate coordinate system to the sensor coordinate system is

$${\mathbf{R}}_{is}=\left[\begin{array}{ccc}{\mathbf{r}}_{is}^{1}& {\mathbf{r}}_{is}^{2}& ({\mathbf{r}}_{is}^{1}\times {\mathbf{r}}_{is}^{2})\end{array}\right],$$

(6)

where

$${\mathbf{r}}_{is}^{1}=\frac{{\mathbf{v}}_{1}^{\prime}}{|{\mathbf{v}}_{1}^{\prime}|},\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}{\mathbf{r}}_{is}^{2}=\frac{{\mathbf{v}}_{1}^{\prime}\times {\mathbf{v}}_{2}^{\prime}}{|{\mathbf{v}}_{1}^{\prime}\times {\mathbf{v}}_{2}^{\prime}|}.$$

(7)

Thus, the rotation $\mathbf{R}$ from the sensor coordinate system to the world coordinate system is simply

$$\mathbf{R}={\mathbf{R}}_{iw}{\mathbf{R}}_{is}^{\top}.$$

(8)

Note that ${\mathbf{R}}_{iw}$ depends only on ${\mathbf{v}}_{n}$, while ${\mathbf{R}}_{is}$ depends only on ${\mathbf{v}}_{n}^{\prime}$. The direction vectors ${\mathbf{v}}_{n}^{\prime}$, and hence ${\mathbf{R}}_{is}$, are already known from the sensor measurements. To estimate the rotation matrix completely, we therefore only need the two corresponding direction vectors ${\mathbf{v}}_{1}$ and ${\mathbf{v}}_{2}$ in the world coordinate system.
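The decomposition of Equations (4)–(8) can be checked numerically. The sketch below (illustrative helper names, not the authors' implementation) builds the intermediate frame from a pair of direction vectors in each coordinate system and composes the two rotations:

```python
import numpy as np

def intermediate_frame(v1, v2):
    """Rotation whose columns are the intermediate axes of Eqs. (4)-(7):
    x-axis along v1, y-axis along v1 x v2, z-axis completing the frame."""
    r1 = v1 / np.linalg.norm(v1)
    r2 = np.cross(v1, v2)
    r2 = r2 / np.linalg.norm(r2)
    return np.column_stack([r1, r2, np.cross(r1, r2)])

def rotation_from_two_lines(v1_w, v2_w, v1_s, v2_s):
    """Eq. (8): sensor-to-world rotation R = R_iw R_is^T from two line
    directions given in the world frame (v*_w) and sensor frame (v*_s)."""
    return intermediate_frame(v1_w, v2_w) @ intermediate_frame(v1_s, v2_s).T
```

Because the cross product commutes with rotations, both frames differ exactly by the sensor-to-world rotation, which Equation (8) recovers.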

Now, we represent the direction vectors ${\mathbf{v}}_{n}$ in terms of angles ${\theta}_{n}$, which will be referred to as ‘line angles’ for the rest of this paper. We define the coordinate system of each plane ${\mathrm{\Pi}}_{n}$ so that its *y*-axis coincides with the plane normal ${\mathbf{u}}_{n}$. Then, the direction vector in the plane coordinate system can be expressed as

$${\mathbf{v}}_{n}^{\left\{n\right\}}={\left[\begin{array}{ccc}\mathrm{cos}{\theta}_{n}& 0& \mathrm{sin}{\theta}_{n}\end{array}\right]}^{\top},$$

(9)

where ${\mathbf{v}}_{n}^{\left\{k\right\}}$ denotes the direction vector ${\mathbf{v}}_{n}$ in the ${\mathrm{\Pi}}_{k}$-coordinate system. The direction vector ${\mathbf{v}}_{n}$ in the world coordinate system is then ${\mathbf{v}}_{n}={\mathbf{R}}_{{\mathrm{\Pi}}_{n}}{\mathbf{v}}_{n}^{\left\{n\right\}}$, where ${\mathbf{R}}_{{\mathrm{\Pi}}_{n}}$ is the fixed rotation matrix from the ${\mathrm{\Pi}}_{n}$-coordinate system to the world coordinate system. The rotation from the sensor to the world coordinate system is thus parameterized by two line angles, one for ${\mathbf{v}}_{1}^{\left\{1\right\}}$ and the other for ${\mathbf{v}}_{2}^{\left\{2\right\}}$, on two different planes.

One may wonder how to set up the plane coordinate system given only the plane normal. There is indeed one-dimensional freedom in choosing the *x*- and *z*-axes, which can be set arbitrarily. Because the line angle is computed in the plane coordinate system, a different choice of coordinate axes simply induces a correspondingly different line angle. Thus, the estimated rotation is invariant to the selection of the plane coordinate systems.

In the following section, we will derive how to estimate the line angle ${\theta}_{n}$ in the scene plane.

As mentioned in Section 3.1, the sensor-to-world transformation is obtained once the two line angles ${\theta}_{1}$ and ${\theta}_{2}$, each in its own plane coordinate system, are known. To estimate the angles, we use the invariance of the angle between direction vectors: the angle between two direction vectors in the sensor coordinate system is preserved in the world or plane coordinate systems. The inner product ${\alpha}_{2}$ between the two direction vectors is directly measurable in the sensor coordinate system and is calculated in the ${\mathrm{\Pi}}_{2}$-coordinate system as

$${\alpha}_{2}={{\mathbf{v}}_{2}^{\prime}}^{\top}{\mathbf{v}}_{1}^{\prime}={\mathbf{v}}_{2}^{\left\{2\right\}\top}\left(\mathbf{A}{\mathbf{v}}_{1}^{\left\{1\right\}}\right),$$

(10)

where $\mathbf{A}$ is a rotation matrix from ${\mathrm{\Pi}}_{1}$-coordinate to ${\mathrm{\Pi}}_{2}$-coordinate, i.e., $\mathbf{A}={\mathbf{R}}_{{\mathrm{\Pi}}_{2}}^{\top}{\mathbf{R}}_{{\mathrm{\Pi}}_{1}}$, which is known.

Equation (10) contains two unknown variables ${\theta}_{1}$ and ${\theta}_{2}$ in ${\mathbf{v}}_{1}^{\left\{1\right\}}$ and ${\mathbf{v}}_{2}^{\left\{2\right\}}$, respectively. Thus, it is not possible to solve it directly with only a single constraint Equation (10). This is resolved by introducing another line ${\mathbf{l}}_{3}$ on plane ${\mathrm{\Pi}}_{3}$. By adding one line, we have one more unknown ${\theta}_{3}$ and two more constraint equations from the in-between angles as

$$\begin{array}{c}{\alpha}_{3}={{\mathbf{v}}_{3}^{\prime}}^{\top}{\mathbf{v}}_{1}^{\prime}={\mathbf{v}}_{3}^{\left\{3\right\}\top}\left(\mathbf{B}{\mathbf{v}}_{1}^{\left\{1\right\}}\right),\hfill \end{array}$$

(11)

$$\begin{array}{c}{\alpha}_{1}={{\mathbf{v}}_{3}^{\prime}}^{\top}{\mathbf{v}}_{2}^{\prime}={\mathbf{v}}_{3}^{\left\{3\right\}\top}\left(\mathbf{C}{\mathbf{v}}_{2}^{\left\{2\right\}}\right),\hfill \end{array}$$

(12)

where $\mathbf{B}={\mathbf{R}}_{{\mathrm{\Pi}}_{3}}^{\top}{\mathbf{R}}_{{\mathrm{\Pi}}_{1}}$ and $\mathbf{C}={\mathbf{R}}_{{\mathrm{\Pi}}_{3}}^{\top}{\mathbf{R}}_{{\mathrm{\Pi}}_{2}}$ are the rotation matrices from ${\mathrm{\Pi}}_{1}$- to ${\mathrm{\Pi}}_{3}$-coordinate system and from ${\mathrm{\Pi}}_{2}$- to ${\mathrm{\Pi}}_{3}$-coordinate system, respectively. Now, we have three trigonometric equations with three unknowns. The geometric relation is shown in Figure 3.

Three lines on three planes. From the three known angles among the lines, we derive three equations in ${\theta}_{1}$, ${\theta}_{2}$ and ${\theta}_{3}$. $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ indicate the known rotation matrices from the ${\mathrm{\Pi}}_{1}$- to the ${\mathrm{\Pi}}_{2}$-, from the ${\mathrm{\Pi}}_{1}$- to the ${\mathrm{\Pi}}_{3}$-, and from the ${\mathrm{\Pi}}_{2}$- to the ${\mathrm{\Pi}}_{3}$-coordinate systems, respectively.

The elements of $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ at the *i*-th row and the *j*-th column are denoted by ${a}_{ij}$, ${b}_{ij}$ and ${c}_{ij}$, respectively. Equations (10) and (11) are expressed in terms of ${\theta}_{n}$:

$$\begin{array}{c}{k}_{1}{c}_{2}+{k}_{2}{s}_{2}={\alpha}_{2},\hfill \end{array}$$

(13)

$$\begin{array}{c}{k}_{3}{c}_{3}+{k}_{4}{s}_{3}={\alpha}_{3},\hfill \end{array}$$

(14)

where we define

$$\begin{array}{c}{c}_{n}\equiv \mathrm{cos}\,{\theta}_{n},\hfill \end{array}$$

(15)

$$\begin{array}{c}{s}_{n}\equiv \mathrm{sin}\,{\theta}_{n},\hfill \end{array}$$

(16)

$$\begin{array}{c}{k}_{1}\equiv {a}_{11}{c}_{1}+{a}_{13}{s}_{1},\hfill \end{array}$$

(17)

$$\begin{array}{c}{k}_{2}\equiv {a}_{31}{c}_{1}+{a}_{33}{s}_{1},\hfill \end{array}$$

(18)

$$\begin{array}{c}{k}_{3}\equiv {b}_{11}{c}_{1}+{b}_{13}{s}_{1},\hfill \end{array}$$

(19)

$$\begin{array}{c}{k}_{4}\equiv {b}_{31}{c}_{1}+{b}_{33}{s}_{1}.\hfill \end{array}$$

(20)
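As a numerical sanity check, Equation (13) with the coefficients of Equations (17) and (18) can be verified against the inner product of Equation (10). The plane orientations and line angles below are illustrative values we chose for the check, not data from the paper:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Illustrative plane-to-world rotations and line angles (assumed values).
R_pi1, R_pi2 = np.eye(3), rot_x(0.7)
A = R_pi2.T @ R_pi1                            # Pi_1- to Pi_2-coordinates
th1, th2 = 0.4, 1.0
v1 = np.array([np.cos(th1), 0, np.sin(th1)])   # Eq. (9), Pi_1 frame
v2 = np.array([np.cos(th2), 0, np.sin(th2)])   # Eq. (9), Pi_2 frame
alpha = v2 @ A @ v1                            # inner product of Eq. (10)

k1 = A[0, 0] * np.cos(th1) + A[0, 2] * np.sin(th1)   # Eq. (17)
k2 = A[2, 0] * np.cos(th1) + A[2, 2] * np.sin(th1)   # Eq. (18)
lhs = k1 * np.cos(th2) + k2 * np.sin(th2)            # left side of Eq. (13)
```

Expanding $\mathbf{v}_2^{\{2\}\top}(\mathbf{A}\mathbf{v}_1^{\{1\}})$ with the zero middle components of Equation (9) leaves exactly the four matrix elements that appear in ${k}_{1}$ and ${k}_{2}$, which is what the check confirms.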

From Equation (13) and ${c}_{2}^{2}+{s}_{2}^{2}=1$, we obtain ${c}_{2}$ and ${s}_{2}$ as functions of ${\theta}_{1}$:

$$\begin{array}{c}{c}_{2}=\frac{{\alpha}_{2}{k}_{1}\pm {k}_{2}\sqrt{{k}_{1}^{2}+{k}_{2}^{2}-{\alpha}_{2}^{2}}}{{k}_{1}^{2}+{k}_{2}^{2}},\hfill \end{array}$$

(21)

$$\begin{array}{c}{s}_{2}=\frac{{\alpha}_{2}{k}_{2}\mp {k}_{1}\sqrt{{k}_{1}^{2}+{k}_{2}^{2}-{\alpha}_{2}^{2}}}{{k}_{1}^{2}+{k}_{2}^{2}}.\hfill \end{array}$$

(22)

Similarly, we derive ${c}_{3}$ and ${s}_{3}$ as functions of ${\theta}_{1}$ from Equation (14) and ${c}_{3}^{2}+{s}_{3}^{2}=1$:

$$\begin{array}{c}{c}_{3}=\frac{{\alpha}_{3}{k}_{3}{\pm}^{\prime}{k}_{4}\sqrt{{k}_{3}^{2}+{k}_{4}^{2}-{\alpha}_{3}^{2}}}{{k}_{3}^{2}+{k}_{4}^{2}},\hfill \end{array}$$

(23)

$$\begin{array}{c}{s}_{3}=\frac{{\alpha}_{3}{k}_{4}{\mp}^{\prime}{k}_{3}\sqrt{{k}_{3}^{2}+{k}_{4}^{2}-{\alpha}_{3}^{2}}}{{k}_{3}^{2}+{k}_{4}^{2}}.\hfill \end{array}$$

(24)

The two sets of double signs, (±, ∓) and (${\pm}^{\prime}$, ${\mp}^{\prime}$), are independent. Substituting Equations (21)–(24) into Equation (12) gives

$${k}_{11}-{k}_{12}=\mp \left({\pm}^{\prime}2({k}_{7}{k}_{8}+{k}_{9}{k}_{10})\sqrt{{k}_{5}{k}_{6}}\right),$$

(25)

where we define that

$$\begin{array}{c}{k}_{5}\equiv {k}_{1}^{2}+{k}_{2}^{2}-{\alpha}_{2}^{2},\hfill \end{array}$$

(26)

$$\begin{array}{c}{k}_{6}\equiv {k}_{3}^{2}+{k}_{4}^{2}-{\alpha}_{3}^{2},\hfill \end{array}$$

(27)

$$\begin{array}{c}{k}_{7}\equiv {\alpha}_{3}\left({c}_{11}{k}_{2}{k}_{3}-{c}_{13}{k}_{1}{k}_{3}+{c}_{31}{k}_{2}{k}_{4}-{c}_{33}{k}_{1}{k}_{4}\right),\hfill \end{array}$$

(28)

$$\begin{array}{c}{k}_{8}\equiv {\alpha}_{2}\left({c}_{11}{k}_{1}{k}_{4}+{c}_{13}{k}_{2}{k}_{4}-{c}_{31}{k}_{1}{k}_{3}-{c}_{33}{k}_{2}{k}_{3}\right),\hfill \end{array}$$

(29)

$$\begin{array}{c}{k}_{9}\equiv {\alpha}_{3}{\alpha}_{2}\left(-{c}_{11}{k}_{1}{k}_{3}-{c}_{13}{k}_{2}{k}_{3}-{c}_{31}{k}_{1}{k}_{4}-{c}_{33}{k}_{2}{k}_{4}\right)+{\alpha}_{1}\left({k}_{1}^{2}+{k}_{2}^{2}\right)\left({k}_{3}^{2}+{k}_{4}^{2}\right),\hfill \end{array}$$

(30)

$$\begin{array}{c}{k}_{10}\equiv {c}_{11}{k}_{2}{k}_{4}-{c}_{13}{k}_{1}{k}_{4}-{c}_{31}{k}_{2}{k}_{3}+{c}_{33}{k}_{1}{k}_{3},\hfill \end{array}$$

(31)

$$\begin{array}{c}{k}_{11}\equiv {k}_{5}{k}_{7}^{2}+{k}_{6}{k}_{8}^{2},\hfill \end{array}$$

(32)

$$\begin{array}{c}{k}_{12}\equiv {k}_{9}^{2}+{k}_{5}{k}_{6}{k}_{10}^{2}.\hfill \end{array}$$

(33)

Squaring both sides of Equation (25) to remove the radical, we obtain a 16th-order equation in the two variables ${c}_{1}$ and ${s}_{1}$. For convenience of calculation, we convert it into a 16th-order polynomial equation in a single variable. We multiply the squared Equation (25) by $1/{s}_{1}^{16}$, so that every term of the equation can be expressed as

$$\begin{array}{c}\hfill h{\left(\frac{{c}_{1}}{{s}_{1}}\right)}^{{\sigma}_{1}}{\left(\frac{1}{{s}_{1}^{2}}\right)}^{{\sigma}_{2}},\hfill \\ \hfill ({\sigma}_{1}\ge 0,{\sigma}_{2}\ge 0,{\sigma}_{1}+2{\sigma}_{2}=16),\hfill \end{array}$$

(34)

where *h* is a coefficient and ${\sigma}_{1}$ and ${\sigma}_{2}$ are non-negative integers. Substituting $1+({c}_{1}^{2}/{s}_{1}^{2})$ for $1/{s}_{1}^{2}$ (which follows from ${c}_{1}^{2}+{s}_{1}^{2}=1$) and expanding each term binomially, the squared Equation (25) becomes a polynomial equation of degree ${\sigma}_{1}+2{\sigma}_{2}=16$ in ${c}_{1}/{s}_{1}$:

$$\begin{array}{c}h{\left(\frac{{c}_{1}}{{s}_{1}}\right)}^{{\sigma}_{1}}{\left(1+\frac{{c}_{1}^{2}}{{s}_{1}^{2}}\right)}^{{\sigma}_{2}}=h\sum _{m=0}^{{\sigma}_{2}}\binom{{\sigma}_{2}}{m}{\left(\frac{{c}_{1}}{{s}_{1}}\right)}^{{\sigma}_{1}+2m}.\end{array}$$

(35)

From the 16th-order polynomial equation, we obtain up to 16 candidates for ${c}_{1}/{s}_{1}$, which is equal to $1/\mathrm{tan}\,{\theta}_{1}$, giving candidates of ${\theta}_{1}$ in the range from $-\pi/2$ to $\pi/2$. Because $\mathrm{tan}({\theta}_{1}+\pi)$ is equal to $\mathrm{tan}\,{\theta}_{1}$, the number of candidates is doubled. They are substituted into Equations (21)–(24) to calculate ${\theta}_{2}$ and ${\theta}_{3}$. Finally, we obtain the combinations of the three angles ${\theta}_{1}$, ${\theta}_{2}$ and ${\theta}_{3}$ that satisfy Equations (3) and (12).
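The closed forms (21)–(24) can be exercised numerically: for a candidate ${\theta}_{1}$ and each of the four sign combinations, recover $({c}_{2},{s}_{2})$ and $({c}_{3},{s}_{3})$ and evaluate the residual of Equation (12). This brute-force check (with illustrative plane orientations and ground-truth angles of our own choosing; the paper instead solves the polynomial once) confirms that the true ${\theta}_{1}$ zeroes the residual for one sign combination:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def dir_vec(th):
    # Eq. (9): line direction in its own plane frame
    return np.array([np.cos(th), 0.0, np.sin(th)])

def solve_cs(k_a, k_b, alpha, sign):
    """Eqs. (21)-(24): (cos, sin) solving k_a*c + k_b*s = alpha, c^2+s^2 = 1."""
    D = k_a * k_a + k_b * k_b
    r = np.sqrt(max(D - alpha * alpha, 0.0))  # clamp for infeasible candidates
    return (alpha * k_a + sign * k_b * r) / D, (alpha * k_b - sign * k_a * r) / D

# Illustrative plane-to-world rotations and ground-truth line angles.
R1, R2, R3 = np.eye(3), rot_x(0.7), rot_y(1.1)
A, B, C = R2.T @ R1, R3.T @ R1, R3.T @ R2
th1, th2, th3 = 0.4, 1.0, -0.6
a12 = dir_vec(th2) @ A @ dir_vec(th1)   # inner product of Eq. (10)
a13 = dir_vec(th3) @ B @ dir_vec(th1)   # inner product of Eq. (11)
a23 = dir_vec(th3) @ C @ dir_vec(th2)   # inner product of Eq. (12)

def min_residual(th1_cand):
    """Smallest Eq. (12) residual over the four sign combinations."""
    c1, s1 = np.cos(th1_cand), np.sin(th1_cand)
    k1, k2 = A[0, 0]*c1 + A[0, 2]*s1, A[2, 0]*c1 + A[2, 2]*s1  # Eqs. (17)-(18)
    k3, k4 = B[0, 0]*c1 + B[0, 2]*s1, B[2, 0]*c1 + B[2, 2]*s1  # Eqs. (19)-(20)
    best = np.inf
    for s in (+1, -1):
        c2, s2 = solve_cs(k1, k2, a12, s)
        for t in (+1, -1):
            c3, s3 = solve_cs(k3, k4, a13, t)
            res = abs(np.array([c3, 0, s3]) @ C @ np.array([c2, 0, s2]) - a23)
            best = min(best, res)
    return best
```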

From the solutions of the rotation matrix $\mathbf{R}$ calculated in Section 3.2, we obtain the corresponding solutions of the translation vector $\mathbf{t}$ using a least-squares method with singular value decomposition (SVD):

$$\left[\begin{array}{cc}{\mathbf{u}}_{n}^{\top} & {\mathbf{u}}_{n}^{\top}\mathbf{R}{\mathbf{p}}_{n}^{\prime}+{d}_{n}\\ {\mathbf{u}}_{n}^{\top} & {\mathbf{u}}_{n}^{\top}\mathbf{R}{\mathbf{q}}_{n}^{\prime}+{d}_{n}\end{array}\right]\left[\begin{array}{c}\mathbf{t}\\ 1\end{array}\right]=\mathbf{0},\phantom{\rule{1em}{0ex}}n=1,2,3.$$

(36)

All of the estimated poses $\mathbf{R}$ and $\mathbf{t}$ satisfy the given conditions, Equation (3) and Equations (10)–(12), but some of them are physically non-realizable. To choose only physically realizable poses, we adopt the physical constraint that the sensor must be located in front of all of the planes.

A plane divides 3D space into two sides, a front side and a back side. In our implementation, we set the normal direction of each plane to point toward its back side (the sign of ${d}_{n}$ is determined simultaneously). Since the sensor must be located in front of the planes, we require

$${\mathrm{\Pi}}_{n}^{\top}\left[\begin{array}{c}\mathbf{t}\\ 1\end{array}\right]={\mathbf{u}}_{n}^{\top}\mathbf{t}+{d}_{n}<0.$$

(37)

Multiple pose candidates may meet this condition; in our experience, there are two physically realizable poses in most cases, and they cannot be distinguished from the given correspondences alone. The true 6D sensor pose can be further selected using an additional line–plane correspondence or, for continuous sequences, the assumption that the pose of the sensor does not change much between consecutive scans.
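The realizability test of Equation (37) amounts to a simple filter over the pose candidates. A minimal sketch, assuming planes stored as `(u, d)` pairs and candidates as `(R, t)` pairs (our own data layout, not the authors' code):

```python
import numpy as np

def physically_realizable(candidates, planes):
    """Keep poses whose position lies in front of every plane (Eq. (37)):
    u_n . t + d_n < 0 for all planes n."""
    return [(R, t) for (R, t) in candidates
            if all(u @ t + d < 0 for (u, d) in planes)]
```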

The whole process of the proposed method is described in Algorithm 1.

**Algorithm 1** 6-DOF Pose Estimation of a Lidar System Using Three Lines

INPUT: (a) six points on the three scan lines (two on each line) in the sensor coordinate system; (b) parameters of the three planes in the world coordinate system; (c) one-to-one correspondences between the scan lines and the planes.

OUTPUT: sensor-to-world transformation, $\mathbf{R}$ and $\mathbf{t}$.

1. Compute ${\alpha}_{n}$ using the three line direction vectors ${\mathbf{v}}_{n}^{\prime}={\mathbf{p}}_{n}^{\prime}-{\mathbf{q}}_{n}^{\prime}$ in the sensor coordinate system ($n$ = 1–3).
2. Compute the three rotation matrices $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ using the plane parameters in the world coordinate system.
3. Compute the coefficients ${k}_{1}$ to ${k}_{12}$.
4. Compute the coefficients of the 16th-order polynomial equation using ${k}_{1}$ to ${k}_{12}$.
5. Solve the polynomial equation to obtain the candidates for ${\theta}_{1}$.
6. Obtain combinations of ${\theta}_{1}$, ${\theta}_{2}$ and ${\theta}_{3}$ from Equations (21)–(24).
7. Keep the combinations of ${\theta}_{1}$, ${\theta}_{2}$ and ${\theta}_{3}$ that satisfy Equations (3) and (12).
8. Determine the solutions of the sensor-to-world transformation, $\mathbf{R}$ and $\mathbf{t}$, that meet the conditions of Section 3.3.

As we mentioned before, the line-plane correspondences in the initialization step are designated manually. After the initialization, the correspondences are designated automatically using the previous pose information. A new scan line obtained from the sensor system can be defined by two points, ${\mathbf{p}}^{\prime}$ and ${\mathbf{q}}^{\prime}$, in the sensor coordinate system. We transform the points into the world coordinate system using the previous pose $\left[{\mathbf{R}}_{pre}\phantom{\rule{4pt}{0ex}}{\mathbf{t}}_{pre}\right]$, which gives

$$\begin{array}{cc}\widehat{\mathbf{p}}\hfill & ={\mathbf{R}}_{pre}{\mathbf{p}}^{\prime}+{\mathbf{t}}_{pre},\hfill \\ \widehat{\mathbf{q}}\hfill & ={\mathbf{R}}_{pre}{\mathbf{q}}^{\prime}+{\mathbf{t}}_{pre}.\hfill \end{array}$$

(38)

We compute the distance between the two transformed endpoints and every plane and find the minimum value ${l}_{min}$:

$$\begin{array}{c}{l}_{min}=\underset{i=1\sim M}{min}\sqrt{{({\mathbf{u}}_{i}^{\top}\widehat{\mathbf{p}}+{d}_{i})}^{2}+{({\mathbf{u}}_{i}^{\top}\widehat{\mathbf{q}}+{d}_{i})}^{2}},\hfill \end{array}$$

(39)

where *M* is the number of planes. If ${l}_{min}$ is smaller than a user-defined threshold, we designate the pair of the line and the plane as a line–plane correspondence. Lines whose ${l}_{min}$ is larger than the threshold do not correspond to any plane; they will be referred to as 'non-assigned lines' for the rest of this paper.
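Equations (38) and (39) translate directly into a nearest-plane assignment. A sketch with assumed argument conventions (planes as `(u, d)` pairs; function name hypothetical):

```python
import numpy as np

def assign_line(p_s, q_s, R_pre, t_pre, planes, threshold):
    """Assign a scan line to a plane. Endpoints p_s, q_s (sensor frame)
    are transformed with the previous pose (Eq. (38)); the line gets the
    plane with the smallest distance l_min of Eq. (39), or None if
    l_min is not below the threshold (a non-assigned line)."""
    p = R_pre @ p_s + t_pre
    q = R_pre @ q_s + t_pre
    dists = [np.hypot(u @ p + d, u @ q + d) for (u, d) in planes]
    i = int(np.argmin(dists))
    return i if dists[i] < threshold else None
```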

Non-assigned lines accumulate as the structure-from-motion process is repeated. If the number of non-assigned lines exceeds a user-defined number, we detect new planes using these lines. In this paper, this number was set to 30, determined empirically. We select two lines to generate a plane candidate and measure the distances between the candidate and the remaining non-assigned lines. Lines with a distance smaller than a user-defined threshold are classified as inliers of the plane candidate. Among the plane candidates generated by all combinations of non-assigned lines, we choose the one with the largest number of inliers as a new plane.
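The plane-detection step above can be sketched as an exhaustive pair search: hypothesize a plane from the four endpoints of two lines, count inlier lines, and keep the best hypothesis. Names and data layout below are our assumptions, not the authors' implementation:

```python
import numpy as np

def plane_from_points(pts):
    """Least-squares plane (u, d) through an (n, 3) point array via SVD."""
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    u = Vt[-1]                         # direction of smallest variance
    return u, -u @ c

def detect_plane(lines, inlier_bound=0.05):
    """lines: list of (2, 3) endpoint arrays of non-assigned scan lines.
    Returns the plane hypothesis with the most inlier lines (both
    endpoints within inlier_bound of the plane), plus those inliers."""
    best_plane, best_inliers = None, []
    for a in range(len(lines)):
        for b in range(a + 1, len(lines)):
            u, d = plane_from_points(np.vstack([lines[a], lines[b]]))
            inliers = [l for l in lines
                       if np.max(np.abs(l @ u + d)) < inlier_bound]
            if len(inliers) > len(best_inliers):
                best_plane, best_inliers = (u, d), inliers
    return best_plane, best_inliers
```

The 0.05 m inlier bound mirrors the value reported in Section 5.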

All variables in the whole sequence are refined via nonlinear optimization. The cost function *f* for the optimization is the sum of the squared Euclidean distances between each plane and the corresponding lidar points in the world coordinate system:

$$f({\mathrm{\Pi}}_{1}\sim {\mathrm{\Pi}}_{M},{\mathbf{R}}_{1}\sim {\mathbf{R}}_{N},{\mathbf{t}}_{1}\sim {\mathbf{t}}_{N})=\sum _{i=1}^{M}\sum _{j\in {A}_{i}}\sum _{\mathbf{p}\in {B}_{i,j}}{\left({\mathrm{\Pi}}_{i}^{\top}\left[\begin{array}{c}{\mathbf{R}}_{j}\mathbf{p}+{\mathbf{t}}_{j}\\ 1\end{array}\right]\right)}^{2},$$

(40)

where *N* is the number of scans of sensor data. ${A}_{i}$ is the set of pose indices that scan the plane ${\mathrm{\Pi}}_{i}$, and ${B}_{i,j}$ is the set of points that lie on the plane ${\mathrm{\Pi}}_{i}$ and belong to the *j*-th pose. We adopt the Levenberg-Marquardt [26] algorithm for the nonlinear optimization.
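The cost of Equation (40) is straightforward to evaluate. A hedged numpy sketch, assuming a data layout (our own) where `points[i][j]` holds the sensor-frame points of plane *i* observed from pose *j*:

```python
import numpy as np

def sfm_cost(planes, poses, points):
    """Eq. (40): sum of squared point-to-plane distances.
    planes: list of (u, d) pairs; poses: list of (R, t) pairs;
    points[i][j]: iterable of sensor-frame points on plane i from pose j."""
    f = 0.0
    for i, (u, d) in enumerate(planes):
        for j, (R, t) in enumerate(poses):
            for p in points[i][j]:
                f += (u @ (R @ p + t) + d) ** 2
    return f
```

In practice a cost of this form is handed, as residuals, to a Levenberg-Marquardt solver (e.g., `scipy.optimize.least_squares`) over the stacked plane and pose parameters.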

The refinement could also be achieved by conventional registration methods such as iterative closest point (ICP) [27,28]. However, the nonlinear optimization is better suited than registration for three reasons:

- the correspondences between points and planes do not change during the optimization process;
- the overlap between a single scan and the point cloud is very narrow (several lines on two scanning planes);
- the nonlinear optimization utilizes the Jacobian, while registration does not.

We evaluated the performance of the proposed pose estimation and SFM algorithms through several experiments. In every experiment, the error of an estimated pose $\mathbf{T}$ is defined via the residual transformation $\delta \mathbf{T}={\mathbf{T}}_{ref}^{-1}\mathbf{T}$ between the reference transformation ${\mathbf{T}}_{ref}$ and the estimated pose. The rotation error is measured as the rotation angle of the axis–angle (Rodrigues [29]) representation of the rotation part of $\delta \mathbf{T}$, and the translation error is measured as the norm of the translation part of $\delta \mathbf{T}$.
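This error metric is compact when poses are kept as 4x4 homogeneous transforms; the angle of the residual rotation follows from $\mathrm{trace}(\mathbf{R}) = 1 + 2\,\mathrm{cos}\,\theta$. A sketch (hypothetical helper, not the authors' evaluation code):

```python
import numpy as np

def pose_error(T_ref, T_est):
    """Rotation (rad) and translation errors of the residual transform
    dT = T_ref^{-1} T_est between two 4x4 homogeneous transforms."""
    dT = np.linalg.inv(T_ref) @ T_est
    dR, dt = dT[:3, :3], dT[:3, 3]
    cos_angle = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_angle), np.linalg.norm(dt)
```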

We performed two experiments, using synthetic and real data, to evaluate the performance of the proposed pose estimation algorithm. For the synthetic experiment, we generated a set of data consisting of a sensor-to-world transformation, three plane parameters and six points on the planes. The transformation is generated randomly with a maximum rotation angle of 50 degrees and a maximum translation of four meters along each axis. The plane parameters are also generated randomly, with a maximum distance of five meters from the world origin. The points are generated in the interval of $[-4,\phantom{\rule{4pt}{0ex}}4]$ meters along each axis from the world origin: two of the three coordinates are generated randomly, while the third is computed from the plane parameters so that the points lie exactly on the planes. We added white noise of varying magnitude to the points and compared the result of the proposed algorithm to the ground-truth sensor-to-world transformation. We performed 1000 trials and computed the mean and standard deviation of the rotation and translation errors, shown in Figure 4. The measurement noise on the horizontal axis indicates the interval in which the noise is generated. Both the rotation error and the translation error increase as the measurement noise increases. Although the proposed algorithm does not utilize many points to average out the noise, the result is not very sensitive to the noise of the scan data. In real-world cases, moreover, techniques such as line fitting may further reduce the effect of noise.

Experimental results using synthetic data. The value *x* of 'Measurement noise' (horizontal axis) means that the noise added to the points is generated in the interval of $[-x,\phantom{\rule{4pt}{0ex}}x]$.

For the real-data experiment, we designed a hand-held sensor system that consists of two 2D lidars (UTM-30LX, Hokuyo, Osaka, Japan) and three cameras (Flea3, Point Grey, Richmond, BC, Canada, $640\times 480$ pixels). The lidars point at the ceiling, two of the cameras point at the sides of the lidars (see Figure 5), and one camera points in the forward direction of the system. The cameras are only used to acquire color information. The maximum scan rate of the system is about 20 Hz. The system is fully calibrated: the camera intrinsic parameters using [30], lidar-to-camera using [31,32] and between the two lidars using [33].

Hand-held sensor system for experimental validation. It consists of two 2D lidars pointed at the ceiling, two cameras pointed at the sides of the lidars, and one camera pointed in the forward direction of the system. We use only the two 2D lidars for pose estimation.

We captured three non-parallel checkerboard patterns in three different configurations (see Figure 6) to test the performance of the proposed method in various scene structures. For each configuration, the relative poses among the planes are computed using a number of images captured by two cameras with known intrinsic parameters. The ground plane is scanned by both lidars, while each of the other two is captured by one lidar and one camera. The images captured by the two cameras are used to compute the 'reference' relative poses between the planes and the sensor system. Among all of the scans, we selected 351 scans whose images do not include serious motion blur (the sensor was in motion while the data was captured). The projection errors of the checkerboard corners in the selected scans are smaller than one pixel.

The distributions of the differences between the poses computed using laser scans (proposed) and checkerboard images (reference) are displayed in Figure 7; they are estimated by the kernel density estimation method. We show the error distributions for the three different configurations. Most scans have rotation errors smaller than four degrees and translation errors smaller than four millimeters. The means and standard deviations of the pose errors are given in Table 1. The experimental results show that the proposed method performs robustly in various plane configurations.
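
One standard way to obtain such rotation errors (we assume this metric; the paper does not spell out its formula) is the angle of the relative rotation $R_{\text{ref}}^\top R_{\text{est}}$, recovered from its trace:

```python
import math

def rotation_error_deg(R_est, R_ref):
    """Angle (degrees) of the relative rotation R_ref^T · R_est,
    via theta = arccos((tr(R) - 1) / 2)."""
    # (R_ref^T @ R_est)[i][j] = sum_k R_ref[k][i] * R_est[k][j]
    R_rel = [[sum(R_ref[k][i] * R_est[k][j] for k in range(3))
              for j in range(3)] for i in range(3)]
    tr = R_rel[0][0] + R_rel[1][1] + R_rel[2][2]
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))  # clamp against rounding
    return math.degrees(math.acos(c))

def rot_z(deg):
    """Rotation about the z-axis, for a quick sanity check."""
    a = math.radians(deg)
    return [[math.cos(a), -math.sin(a), 0.0],
            [math.sin(a),  math.cos(a), 0.0],
            [0.0, 0.0, 1.0]]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```

For example, comparing a 4-degree rotation about z against the identity reference yields an error of 4 degrees, matching the upper end of the rotation errors reported above.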

The proposed SFM algorithm can be used in many applications, especially localization and mapping. All processes except data collection (i.e., motion estimation, structure from motion, and nonlinear optimization) were performed offline. We set two geometric parameters for the experiments: only lines longer than 0.5 m were used when extracting lines from the lidar scan points, and the inlier bound of the planes was set to 0.05 m. These parameters were determined empirically. For the first experiment, we scanned a small room continuously with the sensor system and estimated the pose of each scan. As shown in Figure 8a, three perpendicular planes, ${\mathrm{\Pi}}_{1}$, ${\mathrm{\Pi}}_{2}$ and ${\mathrm{\Pi}}_{3}$, are used as references for the initialization step. It should be noted that we extracted four lines from the two lidar scans (lines corresponding to ${\mathrm{\Pi}}_{1}$ and ${\mathrm{\Pi}}_{3}$ from one lidar, and lines corresponding to ${\mathrm{\Pi}}_{2}$ and ${\mathrm{\Pi}}_{3}$ from the other) to determine a unique solution for each scan, as mentioned in Section 3.3. In Figure 8b, the scanned line segments are displayed in the colors of their corresponding planes. The results shown in Figure 8c verify that the proposed method gives accurate pose information for the system. Six hundred and fifty scans were accumulated, and eight planes were detected based on the estimated poses to reconstruct the 3D structure of the room.
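
The two empirical thresholds amount to simple geometric predicates; a sketch of how they might be applied (the function names are illustrative, not the authors' implementation):

```python
import math

MIN_LINE_LENGTH = 0.5     # meters: shorter extracted lines are discarded
PLANE_INLIER_BOUND = 0.05  # meters: max point-to-plane distance for inliers

def keep_line(p, q):
    """Accept an extracted line segment only if it is long enough."""
    return math.dist(p, q) >= MIN_LINE_LENGTH

def point_plane_distance(point, n, d):
    """Distance |n·x + d| / ||n|| from a point to the plane n·x + d = 0."""
    num = abs(sum(ni * xi for ni, xi in zip(n, point)) + d)
    return num / math.sqrt(sum(ni * ni for ni in n))

def is_inlier(point, n, d):
    """Scan points within the bound are assigned to the plane."""
    return point_plane_distance(point, n, d) <= PLANE_INLIER_BOUND
```

For instance, a 0.4 m segment would be rejected while a 0.6 m segment is kept, and a scan point 4 cm from a fitted plane would count as an inlier while one 6 cm away would not.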

3D reconstruction results of the proposed method: (**a**) real environment; (**b**) scanned line segments displayed in the colors of their corresponding planes; and (**c**) 3D reconstruction results obtained by accumulating scan data.

The size of the room measured from the reconstruction result is given in Table 2 and compared to that measured by a laser distance meter (GLM 80, Bosch, Stuttgart, Germany).
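
Assuming opposite walls are reconstructed as parallel planes $n \cdot x + d = 0$, a room dimension can be read directly off the two plane offsets. This is our illustration of the geometry, not necessarily how Table 2 was produced:

```python
import math

def wall_distance(n, d1, d2):
    """Distance between the parallel planes n·x + d1 = 0 and n·x + d2 = 0:
    |d1 - d2| / ||n||. Handles a non-unit normal n."""
    return abs(d1 - d2) / math.sqrt(sum(ni * ni for ni in n))
```

For example, two walls sharing the (non-unit) normal $(0, 0, 2)$ with offsets $0$ and $-6$ are $3$ m apart, since the normalization divides the offset gap by $\|n\| = 2$.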

For the second experiment, we captured scan data while passing through a stairway. Figure 9 shows the 3D reconstruction result of the target scene. As shown in Figure 10d, the environment is very challenging for vision-based methods because it does not contain enough visual features. However, the proposed method does not suffer from such homogeneous areas because it requires only structural information, not visual features. Thus, it gives an accurate 3D reconstruction result, as shown in Figure 9a,b. In Figure 9c, the locations of the sensor system (red dots) are overlaid on the 3D reconstruction result. The scanned line segments are also displayed in the colors of their corresponding planes in Figure 9d. In this experiment, we detected 23 planes in total. The steps themselves were not detected as planes because the stairs were scanned vertically: the scanning plane was perpendicular to the longer side of the steps, so the lines on them were shorter than the threshold (0.5 m in our experiment). In Figure 10, we obtained the color information of the scan data from the images, for visualization only. As shown in Figure 10a–c, we can recognize the shapes of both the handrail and the stairs. In this experiment, the sensor system moved about 25 m (down the stairway) in 125 s while capturing 3500 frames of scan data. This demonstrates the potential of the proposed method for lidar-only mapping and navigation.

Reconstruction results: (**a**,**b**) 3D reconstruction results using the proposed method; (**c**) estimated trajectory (red dots) of the sensor system with 3500 scans down the stairs; and (**d**) scanned line segments displayed in colors of corresponding planes.

In this paper, we have presented a novel structure-from-motion methodology using 2D lidars. The proposed algorithm uses only 2D range data, without any additional sensors. We have proposed a pose estimation method using three line-plane correspondences. To estimate the line angles, we have used the angles between the measured lines and derived a 16th-degree polynomial equation. We have also proposed a new structure-from-motion process using 2D lidar data. After discarding outliers, both the pose and the 3D structures were refined via nonlinear optimization. The experiments using real data validate that the proposed algorithm provides accurate and robust results in real environments.

Several directions could improve the proposed system and method. The proposed system uses two 2D lidars to obtain three lines lying on three different planes with a minimal number of sensors; we will try other sensor configurations to find the optimal line information. In this paper, we performed 3D reconstructions using the proposed method, but the results could be made more accurate by applying recent loop-closure techniques based on scene matching. Additional sensors, such as a GPS, an IMU, or even cameras, could be attached to improve the accuracy of the motion estimation.

This work was supported by the Technology Innovation Program (No. 2017-10069072) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).

Author Contributions

D.-G. Choi designed and developed the algorithms for pose estimation and structure from motion; Y. Bok designed the overall methodology and experiments; J.-S. Kim conceived and designed the pose estimation algorithm; I. Shim contributed the implementation and experiments of the structure-from-motion algorithm; I.S. Kweon discussed the weaknesses of the system while it was being implemented and tested.

Conflicts of Interest

The authors declare no conflict of interest.

1. Newman P., Cole D., Ho K. Outdoor SLAM using visual appearance and laser ranging; Proceedings of the IEEE International Conference on Robotics and Automation; Orlando, FL, USA. 15–19 May 2006; pp. 1180–1187.

2. Nüchter A., Lingemann K., Hertzberg J., Surmann H. 6D SLAM with approximate data association; Proceedings of the 12th International Conference on Advanced Robotics; Seattle, WA, USA. 17–20 July 2005; pp. 242–249.

3. Cole D.M., Newman P.M. Using laser range data for 3D SLAM in outdoor environments; Proceedings of the IEEE International Conference on Robotics and Automation; Orlando, FL, USA. 15–19 May 2006; pp. 1556–1563.

4. Choi D.G., Shim I., Bok Y., Oh T.H., Kweon I.S. Autonomous homing based on laser-camera fusion system; Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems; Vilamoura, Portugal. 7–12 October 2012; pp. 2512–2518.

5. Han Y., Lee J.Y., Kweon I.S. High quality shape from a single RGB-D image under uncalibrated natural illumination; Proceedings of the IEEE International Conference on Computer Vision; Sydney, Australia. 1–8 December 2013; pp. 1617–1624.

6. Yu L.F., Yeung S.K., Tai Y.W., Lin S. Shading-based shape refinement of RGB-D images; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Portland, OR, USA. 23–28 June 2013; pp. 1415–1422.

7. Lai K., Bo L., Ren X., Fox D. Consumer Depth Cameras for Computer Vision. Springer; London, UK: 2013. RGB-D object recognition: Features, algorithms, and a large scale benchmark; pp. 167–192.

8. Schwarz M., Schulz H., Behnke S. RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features; Proceedings of the IEEE International Conference on Robotics and Automation; Seattle, WA, USA. 26–30 May 2015; pp. 1329–1335.

9. Kerl C., Sturm J., Cremers D. Dense visual SLAM for RGB-D cameras; Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems; Vilamoura, Portugal. 7–12 October 2012; pp. 2100–2106.

10. Kerl C., Sturm J., Cremers D. Robust odometry estimation for RGB-D cameras; Proceedings of the IEEE International Conference on Robotics and Automation; Karlsruhe, Germany. 6–10 May 2013; pp. 3748–3754.

11. Whelan T., Johannsson H., Kaess M., Leonard J.J., McDonald J. Robust real-time visual odometry for dense RGB-D mapping; Proceedings of the IEEE International Conference on Robotics and Automation; Karlsruhe, Germany. 6–10 May 2013; pp. 5724–5731.

12. Wolf D.F., Sukhatme G.S. Mobile robot simultaneous localization and mapping in dynamic environments. Auton. Robots. 2005;19:53–65. doi: 10.1007/s10514-005-0606-4. [Cross Ref]

13. Buckley S., Vallet J., Braathen A., Wheeler W. Oblique helicopter-based laser scanning for digital terrain modelling and visualisation of geological outcrops. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008;37:1–6.

14. Hesch J.A., Mirzaei F.M., Mariottini G.L., Roumeliotis S.I. A Laser-aided Inertial Navigation System (L-INS) for human localization in unknown indoor environments; Proceedings of the IEEE International Conference on Robotics and Automation; Anchorage, AK, USA. 3–8 May 2010; pp. 5376–5382.

15. Bosse M., Zlot R., Flick P. Zebedee: Design of a spring-mounted 3-D range sensor with application to mobile mapping. IEEE Trans. Robot. 2012;28:1104–1119. doi: 10.1109/TRO.2012.2200990. [Cross Ref]

16. Patz B.J., Papelis Y., Pillat R., Stein G., Harper D. A practical approach to robotic design for the DARPA Urban Challenge. J. Field Robot. 2008;25:528–566. doi: 10.1002/rob.20251. [Cross Ref]

17. Bosse M., Zlot R. Continuous 3D scan-matching with a spinning 2D laser; Proceedings of the IEEE International Conference on Robotics and Automation; Kobe, Japan. 12–17 May 2009; pp. 4312–4319.

18. Sheehan M., Harrison A., Newman P. Experimental Robotics. Springer; Berlin/Heidelberg, Germany: 2014. Automatic self-calibration of a full field-of-view 3D n-laser scanner; pp. 165–178.

19. Montemerlo M., Becker J., Bhat S., Dahlkamp H., Dolgov D., Ettinger S., Haehnel D., Hilden T., Hoffmann G., Huhnke B., et al. Junior: The Stanford entry in the Urban Challenge. J. Field Robot. 2008;25:569–597. doi: 10.1002/rob.20258. [Cross Ref]

20. Leonard J., How J., Teller S., Berger M., Campbell S., Fiore G., Fletcher L., Frazzoli E., Huang A., Karaman S., et al. A perception-driven autonomous urban vehicle. J. Field Robot. 2008;25:727–774. doi: 10.1002/rob.20262. [Cross Ref]

21. Markoff J. Google cars drive themselves, in traffic. The New York Times. Oct 9, 2010.

22. Shim I., Choi J., Shin S., Oh T.H., Lee U., Ahn B., Choi D.G., Shim D.H., Kweon I.S. An autonomous driving system for unknown environments using a unified map. IEEE Trans. Intell. Transp. Syst. 2015;16:1999–2013. doi: 10.1109/TITS.2015.2389237. [Cross Ref]

23. Buehler M., Iagnemma K., Singh S. The DARPA Urban Challenge: Autonomous Vehicles in City Traffic. Volume 56 Springer; Berlin/Heidelberg, Germany: 2009.

24. Zhang J., Singh S. LOAM: Lidar odometry and mapping in real-time; Proceedings of the Robotics: Science and Systems Conference (RSS); Berkeley, CA, USA. 12–16 July 2014; pp. 109–111.

25. Zhang J., Singh S. Visual-lidar odometry and mapping: Low-drift, robust, and fast; Proceedings of the IEEE International Conference on Robotics and Automation; Seattle, WA, USA. 26–30 May 2015; pp. 2174–2181.

26. Marquardt D.W. An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 1963;11:431–441. doi: 10.1137/0111030. [Cross Ref]

27. Besl P.J., McKay N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992;14:239–256. doi: 10.1109/34.121791. [Cross Ref]

28. Chen Y., Medioni G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992;10:145–155. doi: 10.1016/0262-8856(92)90066-C. [Cross Ref]

29. Belongie S. Rodrigues’ Rotation Formula. From MathWorld—A Wolfram Web Resource, Created by Eric W. Weisstein. 1999. [(accessed on 15 June 2003)]. Available online: http://mathworld.wolfram.com/RodriguesRotationFormula.html.

30. Zhang Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000;22:1330–1334. doi: 10.1109/34.888718. [Cross Ref]

31. Zhang Q., Pless R. Extrinsic calibration of a camera and laser range finder (improves camera calibration); Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems; Sendai, Japan. 28 September–2 October 2004; pp. 2301–2306.

32. Bok Y., Choi D.G., Kweon I.S. Extrinsic calibration of a camera and a 2D laser without overlap. Robot. Auton. Syst. 2016;78:17–28. doi: 10.1016/j.robot.2015.12.007. [Cross Ref]

33. Choi D.G., Bok Y., Kim J.S., Kweon I.S. Extrinsic calibration of 2-D lidars using two orthogonal planes. IEEE Trans. Robot. 2016;32:83–98. doi: 10.1109/TRO.2015.2502860. [Cross Ref]
