Sensors (Basel). 2016 June; 16(6): 838.
Published online 2016 June 8. doi:  10.3390/s16060838
PMCID: PMC4934264

A Global Calibration Method for Widely Distributed Cameras Based on Vanishing Features

Fabrizio Lamberti, Academic Editor

Abstract

This paper presents a global calibration method for widely distributed vision sensors arranged in ring-topologies. A planar target with two mutually orthogonal groups of parallel lines is provided for each camera. Firstly, the relative pose of each camera and its corresponding target is found from the vanishing points and lines. Next, an auxiliary camera is used to find the relative poses between neighboring pairs of calibration targets. Then the relative pose from each target to the reference target is initialized by the chain of transformations, followed by nonlinear optimization based on the constraint of ring-topologies. Lastly, the relative poses between the cameras are found from the relative poses of the calibration targets. Synthetic data, simulation images and real experiments all demonstrate that the proposed method is reliable and accurate. The accumulated error caused by multiple coordinate transformations can be adjusted effectively by the proposed method. In the real experiment, eight targets are located in an area of about 1200 mm × 1200 mm; the accuracy of the proposed method is about 0.465 mm when the number of coordinate transformations reaches its maximum. The proposed method is simple and can be applied to different camera configurations.

Keywords: widely-distributed vision sensors, global calibration, parallel line, vanishing points, vanishing lines, pose estimation

1. Introduction

Vision sensors have the advantages of flexibility and high precision. Distributed vision sensors (DVS) are widely used because of their wider fields of view (FOVs). Calibration is an important step for most camera applications. Calibration of a DVS typically has two stages: intrinsic calibration, which can be done separately for each camera, and global calibration, which calculates the relative poses between the camera frames and the global coordinate frame (GCF). The information extracted from each camera can then be integrated into the GCF. Generally, the coordinate frame of the reference camera is selected as the GCF. However, vision sensors are usually widely distributed to obtain better coverage. As two adjacent cameras usually have small or no overlapping FOV, the global calibration of a DVS becomes of prime importance.

A DVS can be calibrated with high-precision 3D measurement equipment. Lu et al. [1] constructed a measurement system and achieved calibration of a non-overlapping DVS using two theodolites and a planar target. Calibration methods that avoid expensive equipment have also been investigated. Peng et al. [2] proposed an approach that omits the translation vectors between cameras, owing to the loss of depth information during camera projection [3]. It assumes that all the cameras have approximately the same optical center. However, this assumption is not appropriate when the relative distances between cameras are not small in comparison with the distances to the captured scene. In addition, feature detection and matching, such as the scale invariant feature transform (SIFT) [4], is not reliable in environments with insufficient texture because of the lack of distinctive feature points [2].

Global calibration methods for a DVS with overlapping FOV cannot be applied in the case of non-overlapping FOV. Most global calibration methods for a DVS with non-overlapping FOV are based on mirror reflections [5,6,7], rigidity constraints of calibration targets [8,9,10], movements of the platform [11] or an auxiliary camera [12]. For general distributed vision sensors, it is hard to ensure that each camera has a clear sight of the targets through mirror reflections, especially in complex environments. Liu et al. [9] proposed a global calibration method that places multiple targets in front of the vision sensors at least four times. Bosch et al. [10] used a poster to determine the relative poses of multiple cameras in two steps. It requires that different parts of the poster be observed by at least two cameras at the same time, so that SIFT features can be utilized. However, it is not flexible to use a long one-dimensional target [8], rigidly connected targets [9] or a large-area poster [10] for the calibration of widely distributed cameras. Pagel [11] achieved extrinsic calibration of a multi-camera rig with non-overlapping FOV by moving the platform. However, it requires that at least two adjacent targets be visible and that two cameras see a target at the same time. Sun et al. [12] used an auxiliary camera to observe all the sphere targets. However, all the targets can hardly be observed by one camera at the same time owing to the wide distribution of the vision sensors.

Structure from motion [13] solves problems similar to those in the global calibration of a DVS. The difference is that global calibration transforms local coordinate frames into the GCF, while structure from motion estimates the locations of 3D points from multiple images [14]. Fitzgibbon et al. [15] recover the 3D scene structure together with the 3D camera positions from a closed image sequence. Compared with open sequences, a closed image sequence contains additional constraints. Zhang et al. [16] propose an incremental motion estimation algorithm to deal with long image sequences.

Generally, one calibration target is selected as the reference target. Compared with employing the auxiliary camera to capture all the targets in one image, capturing neighboring pairs of targets is more suitable for widely distributed cameras. The relative poses between neighboring pairs of targets can be solved separately. The relative pose between each target and the reference target can then be obtained by chainwise coordinate transformations. However, the error accumulates with the increasing number of transformations. When a DVS is used to provide a view of the surrounding scene, as in [2,11], the vision sensors are usually configured in ring-topologies to obtain better coverage of the surroundings: the first sensor adjoins the last to form a closed chain. Thus a closed image sequence of neighboring targets can be acquired by the auxiliary camera.

Line features are more stable than point features in detection and matching [17]. The principle of perspective projection indicates that an infinite scene line is mapped onto the image plane as a line terminating in a vanishing point. Vanishing points and vanishing lines are distinguishing features of perspective projection [18]. Xu et al. [19] proposed a pose estimation method based on the vanishing lines of a T-shaped target. Wang [20] used a target with three equally spaced parallel lines to estimate the rotation matrix by moving the target to at least three different positions. Wei et al. [21,22] calibrated a line-structured vision sensor and a binocular vision sensor using a planar target with several parallel lines. Two mutually orthogonal groups of parallel lines are common in urban environments, such as crossroads and facades of buildings, and they can be used as calibration targets. Even if they are absent from the scene, targets with two mutually orthogonal groups of parallel lines can be employed.

In this paper, we focus on the calibration of widely distributed cameras in ring-topologies. A planar target with two mutually orthogonal groups of parallel lines is allocated to each camera. The vanishing line of the target plane is obtained from two vanishing points. Then the relative pose between each camera and its corresponding target is initialized and refined based on vanishing features and the known line length. A closed image sequence of neighboring pairs of calibration targets is acquired by repeated operations of the auxiliary camera. Then the relative poses between two adjacent targets can be obtained and the transformation matrix from each target to the reference target is initialized in a chainwise manner. In order to adjust the accumulated error due to the chain of transformations, a global calibration method is proposed to optimize relative poses of the targets based on the constraint of the ring-type structure. Finally, using the targets as media, the optimal relative poses between each camera and the reference camera are obtained.

The rest of the paper is organized as follows: preliminary work is introduced in Section 2. The proposed global calibration method is described in Section 3. Accuracy analysis of different factors’ effects is given in Section 4. Synthetic data, simulation images and real data experiments are carried out in Section 5. The conclusions are given in Section 6.

2. Preliminaries

2.1. Coordinate Frame Definition

In this paper, the camera coordinate frame is used as the vision sensor coordinate frame. Assuming the DVS consists of M cameras, CkCF (1 ≤ k ≤ M) denotes the coordinate frame of camera k. AiCF (1 ≤ i ≤ M) denotes the coordinate frame of the auxiliary camera when it captures two adjacent targets (i, j). The origins of CCF and ACF are fixed at the respective optical centers. IkCF (1 ≤ k ≤ M) denotes the image coordinate frame of camera k in pixels. The origin of ICF is fixed at the center of the image plane.

As shown in Figure 1, the target is constructed of two mutually orthogonal groups of parallel lines with known line length L1 and line spacing L2. Pmk and lmk denote the mth corner point and the mth feature line of target k, 1 ≤ m ≤ 6. TkCF (1 ≤ k ≤ M) denotes the coordinate frame of target k. l6k and l1k coincide with the x-axis and the z-axis of the target, respectively. The y-axis is decided by the right-hand rule. ECF represents the ground coordinate frame. The origin of ECF is fixed on the ground. Plane xeoeze lies in the ground plane. The y-axis of ECF is decided by the right-hand rule.

Figure 1
Planar target with two mutually orthogonal groups of parallel lines: (a) Planform of target k; (b) Perspective projection of target k onto the image plane.

2.2. Measurement Model

In this paper, a two-dimensional image point is denoted by p=[u,v]T, a three-dimensional spatial point by P=[X,Y,Z]T. p˜ and P˜ are the corresponding homogeneous points, p˜=[pT,1]T, P˜=[PT,1]T. The projection of a spatial point in TCF onto the image plane is described as:

s\tilde{p} = [K \mid 0_{3\times 1}]\, T\, \tilde{P}, \quad K = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \quad T = \begin{bmatrix} R_{3\times 3} & t_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix}, \quad R = \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix}
(1)

where K is the intrinsic parameter matrix, fx and fy are the equivalent focal lengths in the horizontal and vertical directions, respectively, and (u0, v0) is the principal point. T denotes the transformation matrix between targets and cameras, R is a 3 × 3 rotation matrix and t is a 3 × 1 translation vector. The rotation matrix can be expressed in terms of the Y-X-Z Euler angles: yaw angle φ, pitch angle θ and roll angle ϕ:

R(\varphi,\theta,\phi) = \begin{bmatrix} \cos\phi\cos\varphi + \sin\theta\sin\phi\sin\varphi & \cos\theta\sin\phi & \cos\varphi\sin\theta\sin\phi - \cos\phi\sin\varphi \\ \cos\phi\sin\theta\sin\varphi - \cos\varphi\sin\phi & \cos\theta\cos\phi & \sin\phi\sin\varphi + \cos\phi\cos\varphi\sin\theta \\ \cos\theta\sin\varphi & -\sin\theta & \cos\theta\cos\varphi \end{bmatrix}
(2)
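A minimal Python/NumPy sketch of the camera model of Equations (1) and (2) is given below. It assumes the passive (coordinate-transform) convention for the elementary rotations, in the spirit of [27]; the function names and example values are illustrative only.

```python
import numpy as np

def euler_yxz_to_R(yaw, pitch, roll):
    """Y-X-Z Euler angles (yaw, pitch, roll) -> rotation matrix, cf. Equation (2).
    Elementary rotations are taken in the passive (coordinate-transform) convention;
    this convention is an assumption of the sketch."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0.0, -sy], [0.0, 1.0, 0.0], [sy, 0.0, cy]])   # yaw about y
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, sp], [0.0, -sp, cp]])   # pitch about x
    Rz = np.array([[cr, sr, 0.0], [-sr, cr, 0.0], [0.0, 0.0, 1.0]])   # roll about z
    return Rz @ Rx @ Ry

def project(K, R, t, P):
    """Pinhole projection of Equation (1): a target-frame point P -> pixel coordinates."""
    q = K @ (R @ P + t)
    return q[:2] / q[2]

# Illustrative values (only fx = fy = 796.44 is taken from the paper's experiments).
K = np.array([[796.44, 0.0, 512.0], [0.0, 796.44, 384.0], [0.0, 0.0, 1.0]])
R = euler_yxz_to_R(np.deg2rad(10.0), np.deg2rad(-40.0), np.deg2rad(5.0))
t = np.array([100.0, 50.0, 1500.0])
print(project(K, R, t, np.array([250.0, 0.0, 0.0])))
```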

In this paper, definitions of the transformation matrices are shown in Table 1.

Table 1
Definition of the transformation matrices.

3. The Principle of Global Calibration

The principle of global calibration is shown in Figure 2. In this paper, we choose camera 1 as the reference camera as well as target 1 as the reference target. The main process of the proposed global calibration method works as follows:

  1. Intrinsic calibration is done separately for each camera using the J. Bouguet Camera Calibration Toolbox based on Zhang’s calibration method [23,24]. The intrinsic parameters are treated as fixed and the cameras’ poses remain unchanged during the calibration.
  2. Place a planar target in each camera’s FOV. The symmetry axis of each target is set to approximately point toward its corresponding camera. Image Ik denotes target k captured by camera k. An image sequence I = {Ik | 1 ≤ k ≤ M} is obtained.
  3. Use the auxiliary camera to capture neighboring pairs of targets. As shown in Figure 2a, image I˜i denotes two adjacent targets (i, j) captured by the auxiliary camera. A closed image sequence I˜ = {I˜i | 1 ≤ i ≤ M} is acquired, where j = i + 1 if i < M and j = 1 if i = M.
  4. All the images are rectified to compensate for the cameras’ distortion based on the intrinsic calibration results. The linear equation of each parallel line in the image plane can be obtained from the feature points extracted by Steger’s method [25].
  5. Compute the transformation matrix Tktc based on the undistorted image sequence I.
  6. Compute the transformation matrix Tijtt of two adjacent targets based on the undistorted image sequence I˜.
  7. Calculate the initial value of Tk1tt (2 ≤ k ≤ M) by multiple coordinate transformations, as shown in Figure 2b. Then Tk1tt (2 ≤ k ≤ M) is refined by the global nonlinear optimization.
  8. Compute the transformation matrix Tk1cc (2 ≤ k ≤ M). The calibration is completed.
Figure 2
(a) Two adjacent targets (i, j) captured by the auxiliary camera; (b) The principle of the global calibration.

3.1. Solving Tktc

3.1.1. Feature Extraction

The feature points on a line can be extracted by Steger’s method [25], denoted by pi = [ui, vi]T, where 1 ≤ i ≤ s and s is the number of feature points. The projection of a line onto the image plane is also a straight line. The equation of a line in the image can be expressed as au + bv + c = 0.

Let A = [p1, p2, …, ps]T and w = [1, 1, …, 1]T, where A is an s × 2 matrix and w is an s × 1 vector. The relation between a, b and c can then be obtained by the least-squares method:

\begin{bmatrix} a \\ b \end{bmatrix} = -c\,(A^{T}A)^{-1}A^{T}w
(3)

Thus, the linear equation of line lmk can be found by the above method, where 1 ≤ m ≤ 6. Then the coordinates of p˜mk are obtained from line intersections. As shown in Figure 1b, two vanishing points v1 and v2 can be found by lines l1k, l3k and lines l5k, l6k, respectively. Then the linear equation of the vanishing line is obtained, denoted by:

a˜u+b˜v+c˜=0
(4)
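A possible NumPy realization of the line fitting of Equation (3) and of the vanishing-point and vanishing-line construction is sketched below. Fixing the scale of the homogeneous line with c = 1 is our normalization (it excludes lines through the pixel origin); all names are illustrative.

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of a*u + b*v + c = 0 to feature points, cf. Equation (3).
    points: (s, 2) array of sub-pixel feature points on one target line."""
    A = np.asarray(points, dtype=float)              # s x 2 matrix of [u_i, v_i]
    w = np.ones(len(A))                              # s x 1 vector of ones
    c = 1.0                                          # fixes the free scale of the line
    a, b = -c * np.linalg.solve(A.T @ A, A.T @ w)
    return np.array([a, b, c])                       # homogeneous line [a, b, c]

def intersection(line1, line2):
    """Intersection of two image lines in homogeneous coordinates (cross product);
    used both for the corner points and for the vanishing points v1, v2."""
    return np.cross(line1, line2)

def vanishing_line(v1, v2):
    """Line through the two vanishing points, i.e. [a~, b~, c~] of Equation (4)."""
    return np.cross(v1, v2)

# Typical use: v1 = intersection(fit_line(pts_l1), fit_line(pts_l3))
#              v2 = intersection(fit_line(pts_l5), fit_line(pts_l6))
#              a_t, b_t, c_t = vanishing_line(v1, v2)
```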

3.1.2. Computing the Vanishing Line

As shown in Figure 1b, two groups of parallel lines converge at vanishing points v1 and v2 in the image plane, respectively. The line crossing v1 and v2 is the vanishing line. The equations of two non-parallel lines in TkCF are:

aix+ciz+di=0, y=0,(i=1,2)
(5)

where a1c2 − a2c1 ≠ 0.

Let V˜1 and V˜2 be the points at infinity of the two lines, V˜i = [−ci, 0, ai, 0]T, i = 1, 2 (the elaboration is given in Appendix A). We have:

siv˜i=[Κ|03×1]TktcV˜i (i=1,2)
(6)

where K denotes the intrinsic parameter matrix of camera k.

Combining Equations (1) and (6), we have:

\tilde{v}_i = \left[\, f_x\,\frac{-c_i R_{11} + a_i R_{13}}{-c_i R_{31} + a_i R_{33}} + u_0,\;\; f_y\,\frac{-c_i R_{21} + a_i R_{23}}{-c_i R_{31} + a_i R_{33}} + v_0,\;\; 1 \,\right]^{T}
(7)

Vanishing line l can be computed by l=v˜1×v˜2. With Equation (7), we have:

l = \begin{bmatrix} f_y(R_{23}R_{31} - R_{21}R_{33}) \\ f_x(R_{11}R_{33} - R_{13}R_{31}) \\ f_x f_y(R_{21}R_{13} - R_{11}R_{23}) + f_x v_0(R_{13}R_{31} - R_{11}R_{33}) + f_y u_0(R_{21}R_{33} - R_{23}R_{31}) \end{bmatrix}
(8)

Combining Equations (2) and (8), the linear equation of the vanishing line is expressed as:

\frac{\sin\phi}{f_x}(u - u_0) + \frac{\cos\phi}{f_y}(v - v_0) - \tan\theta = 0
(9)

3.1.3. Computing the Rotation Matrix of Tktc

Combining Equations (4) and (9), the roll angle ϕ and pitch angle θ can be obtained:

\begin{cases} \phi = \tan^{-1}\!\big[\, f_x\tilde{a} / (f_y\tilde{b}) \,\big] \\[4pt] \theta = \tan^{-1}\!\Big( -\dfrac{\tilde{c}}{\tilde{a}}\dfrac{\sin\phi}{f_x} - \dfrac{\sin\phi}{f_x}u_0 - \dfrac{\cos\phi}{f_y}v_0 \Big) \end{cases}
(10)
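The recovery of the roll and pitch angles from the fitted vanishing line can be sketched as follows. The arctan2 form and the guard against a small ã are our choices, not part of the paper; variable names are illustrative.

```python
import numpy as np

def roll_pitch_from_vanishing_line(vline, K):
    """Roll and pitch of the target plane from the vanishing line [a~, b~, c~],
    following Equations (9) and (10)."""
    a, b, c = vline
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    roll = np.arctan2(fx * a, fy * b)          # tan(roll) = fx*a / (fy*b)
    # Scale factor mapping [a~, b~, c~] onto the coefficients of Equation (9);
    # pick the better-conditioned of the two equivalent expressions.
    lam = np.sin(roll) / (fx * a) if abs(fx * a) > abs(fy * b) else np.cos(roll) / (fy * b)
    pitch = np.arctan(-lam * c - u0 * np.sin(roll) / fx - v0 * np.cos(roll) / fy)
    return roll, pitch
```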

Vanishing points are determined by the directions of the parallel lines [18], so we have:

v˜i=Kdi(i=1,2)
(11)

where di is the 3 × 1 direction vector of the line in CCF.

l1k and l6k coincide with the z-axis and the x-axis of TkCF, respectively. Thus:

\begin{cases} d_1/\|d_1\| = R\,[0\;\;0\;\;1]^{T} \\ d_2/\|d_2\| = R\,[1\;\;0\;\;0]^{T} \end{cases}
(12)

According to the orthogonality constraint of a rotation matrix, the rotation matrix R of Tktc can be obtained:

R = \begin{bmatrix} \dfrac{d_2}{\|d_2\|} & \dfrac{d_1 \times d_2}{\|d_1\|\,\|d_2\|} & \dfrac{d_1}{\|d_1\|} \end{bmatrix}
(13)
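The rotation recovery of Equations (11)-(13) can be sketched as below. The final projection onto the nearest rotation matrix (to absorb noise-induced non-orthogonality) is our addition, and the sketch assumes the back-projected directions already point so that the target lies in front of the camera; names are illustrative.

```python
import numpy as np

def rotation_from_vanishing_points(v1, v2, K):
    """Rotation of the target w.r.t. the camera from two vanishing points,
    cf. Equations (11)-(13).  v1: vanishing point of the lines parallel to the
    target z-axis; v2: vanishing point of the lines parallel to the x-axis."""
    d1 = np.linalg.solve(K, v1)          # direction of the z-axis lines in CCF (up to scale)
    d2 = np.linalg.solve(K, v2)          # direction of the x-axis lines in CCF (up to scale)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    r_y = np.cross(d1, d2)               # y-axis by the right-hand rule, middle column of Equation (13)
    r_y /= np.linalg.norm(r_y)
    R = np.column_stack((d2, r_y, d1))   # columns: images of the target x, y, z axes
    U, _, Vt = np.linalg.svd(R)          # absorb small non-orthogonality caused by noise
    return U @ Vt
```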

3.1.4. Computing Translation Vector of Tktc

P7k is a virtual point in the target plane. Since the vector P1kP2k is perpendicular to d2, the projections of P1kP7k and P2kP7k onto the vector d2 are equal. We have:

d2TP1kP7k=d2TP2kP7k
(14)

Combining Equations (1) and (14), we have:

\frac{z_1}{z_2} = \frac{d_2^{T} K^{-1} \tilde{p}_2^{\,k}}{d_2^{T} K^{-1} \tilde{p}_1^{\,k}}
(15)

where z1 and z2 are the z coordinates of P1k and P2k in CkCF, respectively. p˜1k and p˜2k are known coordinates of the corner points.

Besides, the length of P1kP2k is known:

\big\| z_1 K^{-1}\tilde{p}_1^{\,k} - z_2 K^{-1}\tilde{p}_2^{\,k} \big\| = L_1
(16)

Combining Equations (15) and (16), z1 and z2 can be found; thus the coordinates of P1k in CkCF are obtained. In addition, since P1k is the origin of TkCF, the translation vector t of Tktc can be obtained:

t = z_1 K^{-1}\tilde{p}_1^{\,k}
(17)
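A direct translation of Equations (15)-(17) into NumPy might look as follows; the assumption that the target lies in front of the camera (z2 > 0) and all names are ours.

```python
import numpy as np

def translation_from_known_length(p1, p2, d2, K, L1):
    """Translation of the target origin, cf. Equations (15)-(17).
    p1, p2: homogeneous image coordinates of the corners P1 and P2 (the segment
    P1P2 has known length L1 and is perpendicular to d2); d2: direction of the
    target x-axis in camera coordinates."""
    q1 = np.linalg.solve(K, p1)                 # K^{-1} p1
    q2 = np.linalg.solve(K, p2)                 # K^{-1} p2
    ratio = (d2 @ q2) / (d2 @ q1)               # z1 / z2 from Equation (15)
    z2 = L1 / np.linalg.norm(ratio * q1 - q2)   # Equation (16) with z1 = ratio * z2, z2 > 0
    z1 = ratio * z2
    return z1 * q1                              # Equation (17): P1 is the origin of TkCF
```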

3.1.5. Nonlinear Optimization

Let P˜mk(1m6) be the homogeneous coordinate of Pmk in TkCF. Let p˜mk be the corresponding coordinate in the image Ik. We have:

smkp˜mk=[Κ|03×1]TktcP˜mk
(18)

Assuming that the image points are corrupted by independently and identically distributed Gaussian noise, the maximum likelihood estimate is obtained by minimizing the sum of squared distances between the observed feature lines and the re-projected corner points. Tktc (1 ≤ k ≤ M) are refined separately by minimizing the following function with the Levenberg-Marquardt algorithm [26]:

f(\Omega) = \sum_{m=1}^{6}\big[\, d^{2}(\tilde{p}_m^{\,k}, l_m^{k}) + d^{2}(\tilde{p}_m^{\,k}, l_n^{k}) \,\big]
(19)

n = \begin{cases} m-1, & \text{if } m \ge 2 \\ 6, & \text{if } m = 1 \end{cases}
(20)

where Ω = Tktc; lmk and lnk denote the projections of lines lmk and lnk onto the image Ik, respectively; and d(·) denotes the distance from a point to a line. R of Tktc is parameterized using the Rodrigues formula [27].
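One way to implement this refinement is with SciPy's Levenberg-Marquardt solver and a rotation-vector (Rodrigues) parameterization, as sketched below. The 0-based indexing, the data layout and all function names are our assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_line_distance(p, line):
    """Signed distance from an image point p = (u, v) to the line a*u + b*v + c = 0."""
    a, b, c = line
    return (a * p[0] + b * p[1] + c) / np.hypot(a, b)

def refine_pose(rvec0, t0, K, corners_target, observed_lines, line_pairs):
    """Refine one T_k^tc by minimizing Equation (19).
    corners_target: (6, 3) corner coordinates P_m in the target frame;
    observed_lines: the 6 fitted image lines l_m;
    line_pairs: for each corner, the pair (m, n) of lines meeting at it, cf. Equation (20)."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = []
        for (m, n), P in zip(line_pairs, corners_target):
            q = K @ (R @ P + t)
            p = q[:2] / q[2]                       # re-projected corner point
            res.append(point_line_distance(p, observed_lines[m]))
            res.append(point_line_distance(p, observed_lines[n]))
        return res
    sol = least_squares(residuals, np.hstack([rvec0, t0]), method='lm')
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```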

3.2. Initializing Tk1tt

Generally, target pair (i, j) is visible in the image I˜i, where:

j = \begin{cases} i+1, & \text{if } i \le M-1 \\ 1, & \text{if } i = M \end{cases}
(21)

As shown in Figure 2a, Tiita and Tjita are the transformation matrices from target i and target j to the auxiliary camera, respectively. Tiita and Tjita can be initialized and refined separately by the methods described in Section 3.1. Then the initial value of Tijtt can be calculated by:

T_{ij}^{tt} = (T_{ji}^{ta})^{-1}\, T_{ii}^{ta}
(22)

The initial value of Tk1tt (2 ≤ k ≤ M) can be obtained by the minimum number of chainwise coordinate transformations:

T_{k1}^{tt} = \begin{cases} \big[\, T_{k-1,k}^{tt}\, T_{k-2,k-1}^{tt} \cdots T_{2,3}^{tt}\, T_{1,2}^{tt} \,\big]^{-1}, & \text{if } k \le M/2 \\[4pt] T_{M,1}^{tt}\, T_{M-1,M}^{tt} \cdots T_{k+1,k+2}^{tt}\, T_{k,k+1}^{tt}, & \text{if } k > M/2 \end{cases}
(23)
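The pairwise relation of Equation (22) and the chainwise initialization of Equation (23) can be sketched as follows (4 × 4 homogeneous matrices, dictionaries keyed by 1-based target indices; this data layout is our assumption):

```python
import numpy as np

def pairwise_transform(T_i_to_aux, T_j_to_aux):
    """Equation (22): relative pose from target i to its neighbour j, both poses
    having been estimated in the same auxiliary-camera image."""
    return np.linalg.inv(T_j_to_aux) @ T_i_to_aux

def chain_initialization(T_pair, M):
    """Equation (23): initialize T_k1 (target k -> target 1) from the
    neighbouring-pair transforms.  T_pair[i] is T_{i,j}^tt with j = i+1 for
    i = 1..M-1, and T_pair[M] is T_{M,1}^tt."""
    T_k1 = {1: np.eye(4)}
    for k in range(2, M + 1):
        T = np.eye(4)
        if k <= M / 2:
            for i in range(k - 1, 0, -1):        # T_{k-1,k} ... T_{1,2}, then invert
                T = T @ T_pair[i]
            T_k1[k] = np.linalg.inv(T)
        else:
            for i in range(M, k - 1, -1):        # T_{M,1} T_{M-1,M} ... T_{k,k+1}
                T = T @ T_pair[i]
            T_k1[k] = T
    return T_k1
```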

3.3. Global Calibration of the Targets

According to the camera model, we have:

\begin{cases} s_m^{ii}\,\tilde{p}_m^{ii} = [K \mid 0_{3\times 1}]\, T_{ji}^{ta}\,(T_{j1}^{tt})^{-1}\, T_{i1}^{tt}\, \tilde{P}_m^{i} \\[4pt] s_m^{ji}\,\tilde{p}_m^{ji} = [K \mid 0_{3\times 1}]\, T_{ii}^{ta}\,(T_{i1}^{tt})^{-1}\, T_{j1}^{tt}\, \tilde{P}_m^{j} \end{cases}
(24)

where K denotes the intrinsic matrix of the auxiliary camera; p˜mii and p˜mji denote the reprojections of Pmi and Pmj onto the image I˜i, respectively.

Assuming that the image points are corrupted by independently and identically distributed Gaussian noise, Tk1tt (2 ≤ k ≤ M) can be optimized by minimizing the following function with the Levenberg-Marquardt algorithm [26]:

f(\Omega) = \sum_{i=1}^{M}\sum_{m=1}^{6}\big[\, d^{2}(\tilde{p}_m^{ii}, l_m^{ii}) + d^{2}(\tilde{p}_m^{ii}, l_n^{ii}) + d^{2}(\tilde{p}_m^{ji}, l_m^{ji}) + d^{2}(\tilde{p}_m^{ji}, l_n^{ji}) \,\big]
(25)

where Ω = (T2,1tt, …, TM,1tt) and T1,1tt = I4×4; lmii and lmji denote the projections of lines lmi and lmj onto the image I˜i, respectively. R of Tk1tt (2 ≤ k ≤ M) are parameterized by the Rodrigues formula. A good starting point for the optimization is provided by Equations (22) and (23). (m, n) and (i, j) are subject to Equations (20) and (21), respectively.
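The essential ingredient of Equations (24) and (25) is that each target's corners are re-projected into the auxiliary image through the reference target, so that every pair image constrains the whole ring. A sketch of the two routed transforms is given below; the resulting re-projected corners feed the same point-to-line residuals as in Section 3.1.5 and are minimized jointly over all Tk1tt. The dictionary layout is our assumption.

```python
import numpy as np

def routed_transforms(T_k1, T_aux, i, M):
    """The two transforms appearing in Equation (24) for auxiliary image i.
    T_k1[k]: current estimate of target k -> target 1 (T_k1[1] is the identity);
    T_aux[(k, i)]: refined pose of target k in auxiliary image i (T_{ki}^ta)."""
    j = i + 1 if i < M else 1
    # target i -> target 1 -> target j -> auxiliary camera i
    T_route_i = T_aux[(j, i)] @ np.linalg.inv(T_k1[j]) @ T_k1[i]
    # target j -> target 1 -> target i -> auxiliary camera i
    T_route_j = T_aux[(i, i)] @ np.linalg.inv(T_k1[i]) @ T_k1[j]
    return T_route_i, T_route_j
```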

3.4. Solving Tk1cc

After the global calibration of the targets, the transformation matrix from each camera to the reference camera can be found:

T_{k1}^{cc} = T_{1}^{tc}\, T_{k1}^{tt}\, (T_{k}^{tc})^{-1} \quad (2 \le k \le M)
(26)

where Tktc (1 ≤ k ≤ M) are the results of Equation (19), and Tk1tt (2 ≤ k ≤ M) are the optimization results of Equation (25).
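Equation (26) is a single chain of 4 × 4 transforms; a short sketch (with dictionaries of homogeneous matrices, names ours) is:

```python
import numpy as np

def camera_to_reference(T_tc, T_k1_tt, k):
    """Equation (26): pose of camera k expressed in the reference camera frame.
    T_tc[k]: refined target-k -> camera-k transform; T_k1_tt[k]: target-k -> target-1."""
    return T_tc[1] @ T_k1_tt[k] @ np.linalg.inv(T_tc[k])
```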

4. Accuracy Analysis of Different Factors’ Effects

In this section, analysis of several factors’ effects on the accuracy of the proposed method is performed by synthetic data experiments. The auxiliary camera’s intrinsic parameters are fx = fy = 512, u0 = 512, v0 = 384. The image resolution is 1024 pixel × 768 pixel.

The cameras’ positions are represented by the coordinates of the cameras’ origins in ECF. The cameras’ orientations are denoted by the Euler angles (φ, θ, ϕ) from ECF to CCF. Targets are placed on the ground for convenience. The targets’ positions are represented by the coordinates of the targets’ origins in ECF. The targets’ orientations are denoted by the yaw angle φ from the positive z-axis of ECF to the symmetry axis of the target.

dR and dt denote the 2-norms of the differences between the computed and true rotation vectors and translation vectors, respectively. The RMS errors of dR and dt are used to evaluate the accuracy. The number of points emulating a feature line is equal to the line length in pixels. Gaussian noise with zero mean and different noise levels is added to the image coordinates of the points of the feature lines. The analysis of each factor’s effect is given below.
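Our reading of these error metrics, in NumPy/SciPy form (function name ours), is:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_errors(T_est, T_true):
    """dR and dt of Section 4: 2-norms of the rotation-vector difference and of the
    translation difference between an estimated and a ground-truth pose (4x4 matrices)."""
    r_est = Rotation.from_matrix(T_est[:3, :3]).as_rotvec()
    r_true = Rotation.from_matrix(T_true[:3, :3]).as_rotvec()
    dR = np.linalg.norm(r_est - r_true)
    dt = np.linalg.norm(T_est[:3, 3] - T_true[:3, 3])
    return dR, dt
```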

4.1. Accuracy vs. the Pitch Angle of Camera Relative to the Target

The image sequence I˜ is acquired by the auxiliary camera. The pitch angle of the auxiliary camera relative to the target plane is one of the factors affecting the calibration accuracy. In this experiment, two adjacent targets are captured by the auxiliary camera at different pitch angles. The targets’ positions in ECF are [−450, 0, 300]T and [450, 0, 300]T, respectively. The yaw angles of the targets relative to ECF are −18° and 18°, respectively. Two targets are symmetric about the plane yeoeze. The optical axis of the auxiliary camera lies in the symmetry plane yeoeze. The error of T1,2tt obtained by Equation (22) is used to evaluate the effect of the pitch angle. Gaussian noise with σ = 0.2 pixel is added. L1 = 500 mm, L2 = 200 mm. For each level of pitch angle θ, 100 independent trials are performed.

From Figure 3, the RMS errors of rotation and translation are roughly U-shaped. When θ→−90°, the optical axis of the auxiliary camera is perpendicular to the target plane; the vanishing points approach infinity, which leads to higher errors. When θ→0°, the number of extracted feature points decreases, which also leads to higher errors. It is therefore ideal to capture target pairs at about θ = −40°.

Figure 3
Error vs. the pitch angle of camera relative to the target plane: (a) RMS error of rotation vs. the pitch angle; (b) RMS error of translation vs. the pitch angle.

4.2. Accuracy vs. the Yaw Angle Difference between Two Adjacent Targets

In this experiment, Δφ denotes the yaw angle difference between the symmetry axes of two adjacent targets. Δφ varies according to the cameras’ distribution. We also use the error of T1,2tt calculated by Equation (22) to evaluate the effect of Δφ.

The positions of the two targets are the same as those in Section 4.1, while Δφ varies from 0° to 85°. The two targets remain symmetric about the plane yeoeze and the auxiliary camera lies in the symmetry plane. Gaussian noise with σ = 0.2 pixel is added. L1 = 500 mm, L2 = 200 mm. For each level, 100 independent trials are performed.

From Figure 4, both the rotation and translation errors rise with increasing Δφ. When Δφ > 80°, the errors increase sharply. This is because when Δφ→90°, one group of parallel lines of each target becomes parallel to the image plane, so the vanishing points approach infinity, which leads to large errors. It is therefore necessary to avoid Δφ→90° during the calibration.

Figure 4
Error vs. the yaw angle difference between two adjacent targets: (a) RMS error of rotation vs. the yaw angle difference; (b) RMS error of translation vs. the yaw angle difference.

4.3. Accuracy vs. the Distance of Parallel Lines

In this experiment, we also use the error of T1,2tt obtained by Equation (22) to evaluate the effect of the parallel line distance. The poses of the targets are the same as those in Section 4.1. The pitch angle of the auxiliary camera relative to the target plane is set to −40°. L1 = 500 mm, and L2 varies from 100 mm to 400 mm. Gaussian noise with different levels is added to the image points. For each distance level, 100 independent trials are performed.

From Figure 5, it can be seen that the error increases linearly with the noise level and decreases with increasing distance between the parallel lines. This is because the difference among the slopes of the intersecting lines grows as the distance between the parallel lines increases, and the calculation error of the vanishing points is inversely proportional to this difference, as proven in [21].

Figure 5
Error vs. the distance of parallel lines: (a) RMS error of rotation vs. the distance; (b) RMS error of translation vs. the distance.

5. Experimental Results

5.1. Experiment of a Use-Case

Numerous situations require a system that provides a real-time view of the surroundings [28]. One of the typical cases is the operations on aerial vehicles. In this experiment, eight cameras are used to simulate a DVS mounted on an unmanned aerial vehicle (UAV), as shown in Figure 6. The proposed method is compared with other typical methods by both synthetic data and images simulated by 3ds Max software.

Figure 6
Eight cameras mounted on UAV.

The intrinsic parameters of the eight cameras are fx = fy = 796.44, u0 = 512, v0 = 384. The intrinsic parameters of the auxiliary camera and the image resolution are the same as those in Section 4. The positions and orientations of the cameras are listed in Table 2. Each target is placed on the ground in its corresponding camera’s FOV. All the targets have the same size, L1 = 500 mm, L2 = 200 mm.

Table 2
The positions and orientations of the cameras.

5.1.1. Description of the Calibration Methods

There are many calibration methods for multiple cameras. Here five typical methods are described as follows and summarized in Table 3:

Table 3
Summary of the calibration methods

Method 1: This method is similar to the proposed method, except that corner points p˜mk are extracted by the corner extraction engine of the J. Bouguet Camera Calibration Toolbox [23], rather than the intersections of feature lines.

Method 2: This method is similar to the proposed method, except that planar checkerboards with 12 × 12 grids are used as the calibration targets. The side length of each square is 50 mm.

Method 3: The calibration targets and the extraction of corner points p˜mk are the same as in the proposed method. Instead of capturing neighboring target pairs, the auxiliary camera captures all the targets in one image frame, so the relative poses between the targets can be computed directly.

Method 4: This method is similar to Method 3, except that corner points p˜mk are obtained by the corner extraction method used in Method 1.

Method 5: This method is similar to Method 3, except that planar checkerboards of Method 2 are used as the calibration targets.

In order to illustrate the effect of the global calibration, two sub-methods are considered, called the chainwise calibration method and the global calibration method. The only difference between the two sub-methods is whether the global optimization of Section 3.3 is used. Tk1tt (2 ≤ k ≤ M) of the chainwise method are obtained directly from multiple coordinate transformations by Equation (23), while the global calibration method applies the additional global optimization of Equation (25).

5.1.2. Synthetic Data Experiment

In this experiment, the RMS error of Tk1cc (2 ≤ k ≤ 8) is used to evaluate the accuracy. Gaussian noise with σ = 0.2 pixel is added. For each method, 100 independent trials are performed. From Figure 7, the proposed method outperforms Methods 1–5. Figure 7a,b show that the error accumulates with the coordinate transformations and peaks at camera 5, owing to the maximum number of transformations. The methods based on the constraint of ring-topologies can effectively reduce the accumulated error, especially for cameras that are far away from the reference.

Figure 7Figure 7
Calibration error of each method. (a,b) RMS errors of rotation vector and translation vector of the proposed method, Method 1 and 2; (c,d) RMS errors of rotation vector and translation vector of Methods 3–5.

Methods 3–5 do not suffer from the accumulated error issue because all the targets are visible in one image frame. However, owing to the limited image resolution, the accuracy of the pose estimation decreases as the number of targets observed in one image increases.

Compared with line-feature algorithms, point-feature algorithms are more sensitive to image noise. Figure 7 shows that Methods 1 and 4 perform worse than the other methods. Further discussion is given in Section 5.3.

5.1.3. Accuracy vs. the Image Noise Level

In this experiment, the RMS error of Tk1cc (2 ≤ k ≤ 8) is used to evaluate the effect of the noise level. The synthetic data are the same as those in Section 5.1.2. Gaussian noise with σ from 0.0 to 1.0 pixel is added. For each noise level, 100 independent trials are performed.

From Figure 8, the RMS error increases linearly with the noise level. It also shows that the proposed method is superior to other methods. If the noise level of the real DVS is less than 0.5 pixels, the RMS errors of rotation and translation of all the cameras are less than 0.05 deg and 1.1 mm, respectively, which is acceptable for common applications.

Figure 8
Calibration error vs. the noise level: (a) RMS error of rotation vs. the image noise; (b) RMS error of translation vs. the image noise.

5.1.4. Experiments Based on Simulation Images

As shown in Figure 9, we use the 3ds Max software to simulate the image sequences. The parameters of the cameras and the targets are the same as those in Section 5.1.2. The feature lines are obtained from feature points extracted by Steger’s method [25]. The error of Tk1cc (2 ≤ k ≤ 8) is used to evaluate the accuracy.

Figure 9
Simulation images: (a) Target 1 captured by camera 1; (b) Target 2 and Target 3 captured by the auxiliary camera.

Figure 10 shows that errors of rotation and translation accumulate with the increasing times of coordinate transformations. The proposed method can reduce the accumulated error due to multiple coordinate transformations. Further discussion is given in Section 5.3.

Figure 10
Calibration error of each method: (a,b) Errors of rotation vector and translation vector of the proposed method, Methods 1 and 2; (c,d) Errors of rotation vector and translation vector of Methods 3, 4 and 5.

5.2. Real Data Experiment

As shown in Figure 11a, eight targets are located in an area of about 1200 mm × 1200 mm. As the relative poses of cameras with non-overlapping FOV are mainly determined by the relative poses of the targets, the RMS errors of the point-pair distances between the eight targets are used as the calibration errors in the real experiments.

Figure 11
Global calibration of eight targets: (a) Eight targets and the auxiliary camera in the real experiment; (b) The auxiliary camera captures eight checkerboards in one image frame.

The distance between the point pair pmk and pml is computed according to the calibration result and is called the measurement distance, dm. The targets are also calibrated in the same way by a calibrated Canon 60D digital camera, and the distances of the same point pairs obtained in this manner are used as the ground truth dt, owing to its relatively high accuracy.

The distance error is computed as Δd = dm − dt. For the proposed method and Methods 1, 3 and 4, the RMS error of Δd(P11P1k), Δd(P21P2k) and Δd(P61P6k) is used to evaluate the accuracy. For Methods 2 and 5, five point pairs are randomly selected and the RMS of their distance errors is computed as the calibration error.

The auxiliary camera uses a 1/3-in Sony CCD image sensor (ICX673) with a 3.6 mm lens. The image resolution is 720 pixel × 432 pixel. The target parameters are L1 = 135 mm and L2 = 70 mm. The image resolution of the Canon camera is 1920 pixel × 1280 pixel. The intrinsic parameters of the sensors are calibrated using Bouguet’s calibration toolbox [23], as shown in Table 4.

Table 4
Intrinsic parameters of the vision sensors

Figure 12 shows that the proposed method achieves the best accuracy. The RMS errors of the point-pair distances between target 5 and the reference target for the proposed method and Methods 1–5 are 0.465 mm, 0.828 mm, 3.94 mm, 3.83 mm, 1.92 mm and 21.6 mm, respectively. The proposed method is thus superior to the other methods.

Figure 12
The distance error of each method. (a) RMS errors of the proposed method, Methods 1 and 2; (b) RMS errors of Methods 3, 4 and 5.

5.3. Discussion

Owing to the limited image resolution, the accuracy of the pose estimation decreases as the number of targets observed in one image increases. There is therefore a trade-off between the available features of the target projections and the accumulated error from the chain of transformations. The experimental results show that the accumulated error can be effectively adjusted by exploiting the constraint of ring-topologies. For a vision sensor such as the Sony CCD sensor, capturing all the targets in one image is not a wise choice: the benefit of directly calculating the relative poses of the targets is cancelled out by the rise in feature extraction errors.

Moreover, it is not convenient to capture all the targets in some applications, because the auxiliary camera has to be far away from the widely distributed targets. As shown in Figure 11b, in order to capture all the checkerboards in one image frame, the targets had to be pasted on the wall.

The results on the simulation images show that the accuracy of Methods 2 and 5 is close to, or even better than, that of the proposed method. However, Methods 2 and 5 achieve the worst accuracy in the real experiments. Figure 13 shows that the simulation images are very sharp and clear, which greatly benefits the corner extraction of the checkerboards; real images are not so ideal.

Figure 13
Simulation image and real image. (a) Checkerboards simulated by software; (b) Checkerboards captured by the auxiliary camera.

These results indicate that the proposed method is accurate and robust, especially when dealing with real images, whereas Methods 2 and 5 are not stable with respect to image quality. However, there is a gap between the results on synthetic data, simulation images and real experiments, for which there are several possible reasons.

Firstly, there is measurement error during feature extraction. In our method, the line extraction algorithm is a commonly used method with acceptable accuracy and good generality; line extraction algorithms with higher accuracy would help to improve the calibration accuracy and will be studied further in the future. Secondly, the targets used in the real experiments are printed on paper; they may not be strictly planar, which also leads to measurement errors.

In addition, the Sony CCD vision sensor is not a professional high-precision vision sensor; it is usually used in security cameras and radio-controlled vehicles. High-resolution vision sensors could be used to improve the accuracy.

6. Conclusions

In this article, we have developed a new global calibration method for vision sensors in ring-topologies. Line-based calibration targets are placed in each camera’s FOV. Firstly, the relative poses of the cameras and targets are initialized and refined based on vanishing features and the known line length. Next, in order to overcome the small or non-existent overlapping FOV between adjacent cameras, an auxiliary camera is used to capture neighboring targets. The relative poses of the targets are initialized in a chainwise manner, followed by nonlinear optimization that minimizes the squared distances between the observed feature lines and the re-projected corner points. The transformation matrix between each camera and the reference camera is then determined.

The factors that affect the calibration accuracy are analyzed by synthetic data experiments. Synthetic data, simulation images and real data experiments all demonstrate that the proposed method is accurate and robust to image noise. The accumulated error can be adjusted effectively based on the constraint of ring-topologies. The real data experiments indicate that the measurement accuracy of the farthest camera using the proposed method is about 0.465 mm in an area of about 1200 mm × 1200 mm.

The poses of the targets need not be known in advance and can be adjusted according to the distribution of the cameras. The targets do not need to be placed in different positions; a single placement is enough. Our method is simple and flexible and can be applied to different configurations of multiple cameras. It is well suited for the on-site calibration of widely distributed cameras.

In this paper, we focus on the calibration of a DVS in ring-topologies, which provides an additional constraint. When dealing with a DVS in open-topologies, accumulated errors cannot be adjusted in this way. In addition, vanishing points approach infinity when the feature lines are parallel to the image plane, which leads to higher errors, so the angle between the parallel lines and the image plane should be kept within a suitable range.

Restricted by hardware conditions, experiments using eight sensors mounted on a UAV are temporarily lacking. We plan to apply our method to the calibration of multiple vision sensors mounted on a vehicle in the future. Methods based on feature lines found in indoor and outdoor environments, instead of planar targets, will also be investigated.

Acknowledgments

This work is supported by the Industrial Technology Development Program under Grant B1120131046.

Abbreviations

The following abbreviations are used in this manuscript:

DVS: Distributed vision sensors
FOV: Field of view
GCF: Global coordinate frame
SIFT: Scale-invariant feature transform
CCF: Camera coordinate frame
ACF: Auxiliary camera coordinate frame
ICF: Image coordinate frame
TCF: Target coordinate frame
ECF: Ground coordinate frame
RMS: Root mean square
CCD: Charge-coupled device

Appendix A

A plane in a 3D space can be represented by an equation ax+by+cz+d=0. Thus, a plane may be represented by the vector p=[a,b,c,d]T. A 3D spatial point with homogeneous coordinates x=[x1,x2,x3,x4]T lies in the plane p if and only if xTp=0.

Homogeneous vectors [x1, x2, x3, x4]T with x4 ≠ 0 correspond to finite points in ℝ3. The points whose last coordinate is x4 = 0 are known as points at infinity. The set of points at infinity can be written as x∞ = [x1, x2, x3, 0]T. Note that x∞ lies in the plane at infinity, denoted by the vector p∞ = [0, 0, 0, 1]T, because x∞Tp∞ = 0.

From Equation (5), note that [−ci, 0, ai, 0][ai, 0, ci, di]T = 0 and [−ci, 0, ai, 0][0, 1, 0, 0]T = 0; thus the line aix + ciz + di = 0, y = 0 intersects the plane at infinity in the point at infinity [−ci, 0, ai, 0]T.

Author Contributions

The work presented in this paper was done in collaboration among all authors. Xiaolong Wu conceived the method, designed the experiments and wrote the paper. Sentang Wu was the project leader and was in charge of direction and supervision. Zhihui Xing and Xiang Jia performed the experiments and analyzed the data. All authors discussed the results together and reviewed the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lu R.S., Li Y.F. A global calibration method for large-scale multi-sensor visual measurement systems. Sens. Actuators A Phys. 2004;116:384–393. doi: 10.1016/j.sna.2004.05.019. [Cross Ref]
2. Peng X.M., Bennamoun M., Wang Q.B., Ma Q., Xu Z.Y. A low-cost implementation of a 360 degrees vision distributed aperture system. IEEE Trans. Circuits Syst. Video Technol. 2015;25:225–238. doi: 10.1109/TCSVT.2014.2335832. [Cross Ref]
3. Bazargani H., Laganiere R. Camera calibration and pose estimation from planes. IEEE Instrum. Measur. Mag. 2015;18:20–27. doi: 10.1109/MIM.2015.7335834. [Cross Ref]
4. Lowe D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004;60:91–110. doi: 10.1023/B:VISI.0000029664.99615.94. [Cross Ref]
5. Kumar R.K., Ilie A., Frahm J.M., Pollefeys M. Simple calibration of non-overlapping cameras with a mirror; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Anchorage, AK, USA. 23–28 June 2008; pp. 1–7.
6. Hesch J.A., Mourikis A.I., Roumeliotis S.I. Algorithmic Foundation of Robotics VIII. Springer; Berlin, Germany: 2009. Mirror-based extrinsic camera calibration; pp. 285–299.
7. Takahashi K., Nobuhara S., Matsuyama T. A new mirror-based extrinsic camera calibration using an orthogonality constraint; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Providence, RI, USA. 16–21 June 2012; pp. 1051–1058.
8. Liu Z., Zhang G.J., Wei Z.Z., Sun J.H. Novel calibration method for non-overlapping multiple vision sensors based on 1D target. Opt. Lasers Eng. 2011;49:570–577. doi: 10.1016/j.optlaseng.2010.11.002. [Cross Ref]
9. Liu Z., Zhang G.J., Wei Z.Z., Sun J.H. A global calibration method for multiple vision sensors based on multiple targets. Measur. Sci. Technol. 2011;22:125102. doi: 10.1088/0957-0233/22/12/125102. [Cross Ref]
10. Bosch J., Gracias N., Ridao P., Ribas D. Omnidirectional underwater camera design and calibration. Sensors. 2015;15:6033–6065. doi: 10.3390/s150306033. [PMC free article] [PubMed] [Cross Ref]
11. Pagel F. Calibration of non-overlapping cameras in vehicles; Proceedings of the 2010 IEEE Intelligent Vehicles Symposium (IV); San Diego, CA, USA. 21–24 June 2010; pp. 1178–1183.
12. Sun J.H., He H.B., Zeng D.B. Global calibration of multiple cameras based on sphere targets. Sensors. 2016;16:14. doi: 10.3390/s16010077. [PMC free article] [PubMed] [Cross Ref]
13. Ullman S. The interpretation of structure from motion. Proc. R. Soc. Lond. B Biol. Sci. 1979;203:405–426. doi: 10.1098/rspb.1979.0006. [PubMed] [Cross Ref]
14. Szeliski R. Computer Vision: Algorithms and Applications. Springer Science & Business Media; London, UK: 2010.
15. Fitzgibbon A.W., Zisserman A. Computer Vision—ECCV'98. Springer; Berlin, Germany: 1998. Automatic camera recovery for closed or open image sequences; pp. 311–326.
16. Zhang Z., Shan Y. Incremental motion estimation through modified bundle adjustment; Proceedings of the 2003 International Conference on Image Processing; Barcelona, Spain. 14–17 September 2003.
17. Ly D.S., Demonceaux C., Vasseur P., Pégard C. Extrinsic calibration of heterogeneous cameras by line images. Mach. Vis. Appl. 2014;25:1601–1614. doi: 10.1007/s00138-014-0624-3. [Cross Ref]
18. Hartley R., Zisserman A. Multiple View Geometry in Computer Vision. 2nd ed. Cambridge University Press; Cambridge, UK/New York, NY, USA: 2003.
19. Xu G.L., Qi X.P., Zeng Q.H., Tian Y.P., Guo R.P., Wang B.A. Use of land’s cooperative object to estimate UAV’s pose for autonomous landing. Chin. J. Aeronaut. 2013;26:1498–1505. doi: 10.1016/j.cja.2013.07.049. [Cross Ref]
20. Wang X.L. Novel calibration method for the multi-camera measurement system. J. Opt. Soc. Korea. 2014;18:746–752. doi: 10.3807/JOSK.2014.18.6.746. [Cross Ref]
21. Wei Z.Z., Shao M.W., Zhang G.J., Wang Y.L. Parallel-based calibration method for line-structured light vision sensor. Opt. Eng. 2014;53:033101. doi: 10.1117/1.OE.53.3.033101. [Cross Ref]
22. Wei Z., Liu X. Vanishing feature constraints calibration method for binocular vision sensor. Opt. Express. 2015;23:18897–18914. doi: 10.1364/OE.23.018897. [PubMed] [Cross Ref]
23. Bouguet J.-Y. Camera Calibration Toolbox for Matlab. [(accessed on 28 March 2016)]. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html.
24. Zhang Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000;22:1330–1334. doi: 10.1109/34.888718. [Cross Ref]
25. Steger C. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998;20:113–125. doi: 10.1109/34.659930. [Cross Ref]
26. Moré J.J. Numerical Analysis. Springer; Berlin, Germany: 1978. The levenberg-marquardt algorithm: Implementation and theory; pp. 105–116.
27. Diebel J. Representing attitude: Euler angles, unit quaternions, and rotation vectors. Matrix. 2006;58:1–35.
28. Rose M.K., Chamberlain J., LaValley D. Real-Time 360° Imaging System for Situational Awareness. SPIE; Orlando, FL, USA: 2009.
