Precise calibration of linear camera equipped with cylindrical lenses using a radial basis function-based mapping technique

Haiqing Liu,1 Linghui Yang,1,* Yin Guo,2 Ruifen Guan,1 and Jigui Zhu1

1 State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
2 State Key Laboratory of Precision Measurement Technology and Instruments, Tsinghua University, Beijing 100084, China
* [email protected]

Abstract: The linear camera equipped with cylindrical lenses has prominent advantages in high-precision coordinate measurement and dynamic position-tracking. However, the serious distortion of the cylindrical lenses limits the application of this camera. To overcome this obstacle, a precise two-step calibration method is developed. In the first step, a radial basis function-based (RBF-based) mapping technique is employed to recover the projection mapping of the imaging system by interpolating the correspondence between incident rays and image points. For an object point in 3D space, the plane passing through the object point in the camera coordinate frame can be calculated accurately by this technique. The second step is the calibration of the extrinsic parameters, which realizes the coordinate transformation from the camera coordinate frame to the world coordinate frame. The proposed method has three advantages. First, the method (a black-box calibration) remains effective even if the distortion is high and asymmetric. Second, the coupling between extrinsic parameters and other parameters, which normally occurs and may lead to the failure of calibration, is avoided, because the method simplifies the pinhole model so that only extrinsic parameters appear in the simplified model. Third, the nonlinear optimization, which is widely used to refine camera parameters, is better conditioned, since fewer parameters are needed and a more accurate initial iteration value is estimated. Both simulated and real experiments have been carried out with good results. ©2015 Optical Society of America OCIS codes: (150.1488) Calibration; (120.0120) Instrumentation, measurement, and metrology; (150.6910) Three-dimensional sensing; (150.0155) Machine vision optics.

References and links
1. J. Draréni, S. Roy, and P. Sturm, “Plane-based calibration for linear cameras,” Int. J. Comput. Vis. 91(2), 146–156 (2011).
2. W. Cuypers, N. Van Gestel, A. Voet, J.-P. Kruth, J. Mingneau, and P. Bleys, “Optical measurement techniques for mobile and large-scale dimensional metrology,” Opt. Lasers Eng. 47(3), 292–300 (2009).
3. V. Macellari, “CoSTEL: a computer peripheral remote sensing device for 3-dimensional monitoring of human motion,” Med. Biol. Eng. Comput. 21(3), 311–318 (1983).
4. Basler, “Basler vision technologies,” (2014), http://www.baslerweb.com/.
5. C. A. Luna and M. Mazo, “Calibration of line-scan cameras,” IEEE Trans. Instrum. Meas. 59(8), 2185–2190 (2010).
6. L. Ai, F. Yuan, and Z. Ding, “Measurement of spatial object's exterior attitude based on linear CCD,” Chin. Opt. Lett. 6(7), 505–509 (2008).
7. F. Gazzani, “Performance of a 7-parameter DLT method for the calibration of stereo photogrammetric systems using 1-D transducers,” J. Biomed. Eng. 14(6), 476–482 (1992).
8. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Autom. 3(4), 323–344 (1987).
9. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
10. C. Ricolfe-Viala and A. J. Sanchez-Salmeron, “Camera calibration under optimal conditions,” Opt. Express 19(11), 10769–10775 (2011).
11. C. Ricolfe-Viala, A. J. Sanchez-Salmeron, and E. Martinez-Berti, “Accurate calibration with highly distorted images,” Appl. Opt. 51(1), 89–101 (2012).
12. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992).
13. M. D. Grossberg and S. K. Nayar, “The raxel imaging model and ray-based calibration,” Int. J. Comput. Vis. 61(2), 119–137 (2005).
14. P. Sturm and S. Ramalingam, “A generic concept for camera calibration,” in Computer Vision-ECCV 2004 (Springer, 2004), pp. 1–13.
15. Z. Xiang, X. Dai, and X. Gong, “Noncentral catadioptric camera calibration using a generalized unified model,” Opt. Lett. 38(9), 1367–1369 (2013).
16. W. Li and Y. Li, “Generic camera model and its calibration for computational integral imaging and 3D reconstruction,” J. Opt. Soc. Am. A 28(3), 318–326 (2011).
17. T. Bothe, W. Li, M. Schulte, C. von Kopylow, R. B. Bergmann, and W. P. Jüptner, “Vision ray calibration for the quantitative geometric description of general imaging and projection optics in metrology,” Appl. Opt. 49(30), 5851–5860 (2010).
18. N. Gonçalves and H. Araújo, “Estimating parameters of noncentral catadioptric systems using bundle adjustment,” Comput. Vis. Image Underst. 113(1), 11–28 (2009).
19. R. Franke, “Scattered data interpolation: tests of some methods,” Math. Comput. 38(157), 181–200 (1982).
20. A. Bauer, S. Vo, K. Parkins, F. Rodriguez, O. Cakmakci, and J. P. Rolland, “Computational optical distortion correction using a radial basis function-based mapping method,” Opt. Express 20(14), 14906–14920 (2012).
21. O. Cakmakci, S. Vo, H. Foroosh, and J. Rolland, “Application of radial basis functions to shape description in a dual-element off-axis magnifier,” Opt. Lett. 33(11), 1237–1239 (2008).
22. G. Wang, Y. J. Li, and H. C. Zhou, “Application of the radial basis function interpolation to phase extraction from a single electronic speckle pattern interferometric fringe,” Appl. Opt. 50(19), 3110–3117 (2011).
23. B. Ergun, T. Kavzoglu, I. Colkesen, and C. Sahin, “Data filtering with support vector machines in geometric camera calibration,” Opt. Express 18(3), 1927–1936 (2010).
24. M. A. Golberg, C. S. Chen, and H. Bowman, “Some recent results and proposals for the use of radial basis functions in the BEM,” Eng. Anal. Bound. Elem. 23(4), 285–296 (1999).
25. E. J. Kansa, “Multiquadrics—a scattered data approximation scheme with applications to computational fluid-dynamics—I surface approximations and partial derivative estimates,” Comput. Math. Appl. 19(8), 127–145 (1990).
26. E. J. Kansa and Y. C. Hon, “Circumventing the ill-conditioning problem with multiquadric radial basis functions: applications to elliptic partial differential equations,” Comput. Math. Appl. 39(7), 123–137 (2000).
27. B. Fornberg and G. Wright, “Stable computation of multiquadric interpolants for all values of the shape parameter,” Comput. Math. Appl. 48(5), 853–867 (2004).

1. Introduction

The linear camera equipped with cylindrical lenses (LCEWCL) is different from traditional linear cameras (e.g., the line-scan camera [1]), which are equipped with spherical lenses. The LCEWCL consists of a linear CCD and cylindrical lenses; the linear CCD records a 1D image projected by the cylindrical lenses. The LCEWCL is used to detect the position of an active light spot, as in [2,3]. Unlike the traditional linear cameras, whose field-of-view (FOV) is a 2D plane, the FOV of the LCEWCL is a 3D space. One LCEWCL determines a plane passing through the light spot, as detailed in Section 2. With three LCEWCLs arranged in front of the measurement field, the 3D coordinate of the light spot can be reconstructed by the multi-plane constraint. The LCEWCL has prominent advantages in high-precision coordinate measurement and dynamic position-tracking, for two reasons. On the one hand, the 1D image processing for the LCEWCL is easier and quicker than the processing of 2D images, including the 2D image from a 2D camera and the full image from the line-scan camera [1]. On the other hand, a linear CCD has a better resolution and a higher frame rate than a planar CCD; existing linear cameras embed sensors of up to 12288 pixels and deliver 1D images at an outstanding frame rate of 140 kHz [4]. Camera calibration is a necessary step to extract metric information from image information when the acquired images are meant for metrology purposes. For the traditional linear cameras, many calibration methods have been developed, such as [1,5] (to cite a few).


Nevertheless, there are few reports on the calibration of the LCEWCL. The reason is that the image in the LCEWCL is highly distorted and the distortion type is not known a priori. In [6], the LCEWCL is calibrated by the 7-parameter direct linear transformation (7DLT) method, neglecting lens distortion; the seven parameters are functions of the intrinsic and extrinsic parameters. When high accuracy is required, more parameters should be added to model lens distortion, as in [7]. Due to the high and asymmetric distortion of the cylindrical lenses, the conventional methods suffer from low accuracy, and how to calibrate the LCEWCL accurately remains challenging. However, the calibration techniques for 2D cameras are mature and can provide direct guidance. Classical calibration methods for 2D cameras recover the projection mapping of the imaging system with the pinhole model plus supplemental polynomial corrections [8,9]. A linear approach followed by nonlinear minimization [10,11] is normally used to infer the parameters of the pinhole model, which include intrinsic parameters, extrinsic parameters and distortion parameters. The linear approach provides an initial approximation of the camera parameters, and the nonlinear minimization then refines these initial parameters. For the LCEWCL, however, the classical calibration methods have significant drawbacks. First, owing to the high distortion of the cylindrical lenses, the linear approach provides an inaccurate initial approximation. Second, existing lens distortion models can hardly correct the distortion of cylindrical lenses, which is high and asymmetric. Finally, the nonlinear minimization easily converges to a local solution because of the above two drawbacks [12]. Therefore, the classical calibration methods are not suitable for the LCEWCL. In the last decade, new generic calibration approaches have been developed [13,14]. The generic approaches are based on the fact that all imaging systems perform a mapping from incoming scene rays to photosensitive elements on the image detector. In [13], this mapping is described by a set of virtual sensing elements called raxels, and special structured-light patterns are used to extract the raxel parameters. In [14], a more generic method is presented, in which the images of a planar calibration object are used to determine the mapping between rays and pixels. The generic camera calibration methods have been applied to a wide variety of central and non-central 2D cameras [15–18]. For the LCEWCL, however, the generic calibration method has not been studied yet, and an accurate calibration method is still absent. In this paper, we propose a precise calibration method for the LCEWCL based on the generic calibration approach. The algorithm has two independent steps. The first step recovers the projection mapping of the imaging system using the RBF-based mapping technique, by which the imaging system is considered a black box and the calibration task is converted into setting up the correspondence between incident rays and the corresponding image points. The technique remains effective even if the lens distortion is high and asymmetric, thanks to the black-box calibration. For a light spot within the FOV, the plane passing through the light spot in the camera coordinate frame can be computed accurately owing to the excellent approximation properties of the multiquadric RBF [19].
From a practical point of view, this plane should be transformed to the world coordinate frame to realize the 3D coordinate measurement by the multi-plane constraint. Consequently, the second step, the extrinsic parameter calibration, realizes this coordinate transformation. In principle, the proposed method could be a generic and accurate camera calibration approach. Both simulated and real experiments validate its effectiveness. The rest of the paper is organized as follows. Some preliminaries about the LCEWCL are given in Section 2. The first step, the RBF-based mapping technique, is presented in Section 3. The second step, the extrinsic parameter calibration, is introduced in Section 4. Both simulated and real experiments are carried out to validate the proposed method in Section 5. Finally, the main conclusions and future plans are summarized in Section 6.


2. Preliminaries

The LCEWCL consists of two components, a linear CCD and cylindrical lenses. As with most imaging lenses, the cylindrical lens assembly is made of multiple lens elements. Figure 1 shows the optical layout of the LCEWCL as displayed in the optical design software ZEMAX. In the YZ view, the rays of the whole FOV are compressed to intersect the linear CCD, as shown in Fig. 1(a); rays with different Y-direction incident angles but the same X-direction incident angle are projected to the same image point. In the XZ view, the rays are focused on the CCD to guarantee the image quality, as shown in Fig. 1(b); rays with different X-direction incident angles are projected to different image points detected by the CCD. Therefore, one LCEWCL determines one angle, i.e., one plane passing through the object point.

Fig. 1. Optical layout of the camera lens with a 40 × 40 deg FOV and an f-number of 19. The focal length is 38 mm. The position of the aperture stop (AS) is indicated above. (a) YZ view, (b) XZ view.

In such a situation, the projection mapping model of the LCEWCL is put forward, as shown in Fig. 2(a). The linear CCD is perpendicular to the nodal axis of the cylindrical lens. The illuminated photo-element of the linear CCD identifies an angle α, and thereby a plane lying along the nodal axis of the cylindrical lens and containing the light spot. The distribution of the lens distortion along the transverse axis of the linear CCD is displayed in Fig. 2(b). The distortion is computed from the Grid Distortion data in ZEMAX. The distortion of the cylindrical lenses reaches ten pixels at the edge of the FOV; moreover, the effects of manufacturing defects and misalignment are not included, so the real distortion is even higher and more asymmetric than the one shown in this figure. As described in the introduction, this high and asymmetric distortion limits the application of classical calibration methods.

Fig. 2. (a) Projection mapping model of the LCEWCL. (b) Lens distortion distribution.
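As a concrete reading of this geometry, the sketch below (an illustration of ours, not code from the paper) builds the unit normal, in the camera frame, of the plane implied by a detected angle α; the plane contains the lens nodal axis (the y_c axis) and satisfies x_c = z_c tan α, the same relation later written as Eq. (16).

```python
import numpy as np

def plane_normal_from_alpha(alpha_rad):
    """Unit normal of the plane x_c*cos(a) - z_c*sin(a) = 0, i.e. the
    locus x_c = z_c*tan(a), which contains the nodal (y_c) axis.
    Illustrative helper only."""
    return np.array([np.cos(alpha_rad), 0.0, -np.sin(alpha_rad)])
```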

3. Principle of the RBF-based mapping technique

For a single LCEWCL, the purpose of calibration is to obtain the key angle α (as shown above), which determines the plane passing through the light spot.


The RBF-based mapping technique is employed to realize this purpose. First, a brief review of the RBF interpolation algorithm used in this study is given.

3.1 Brief review of the RBF interpolation

The RBF is a primary tool for interpolating multidimensional scattered data. It has been successfully applied in many areas because of its excellent approximation properties and ease of implementation [20–23]. A brief synopsis of the interpolation algorithm follows. For a given set of n scattered points x_i (i = 1, 2, \ldots, n) and the corresponding function values f_i, construct an approximation function z(x) that passes through all scattered points using a radial basis \phi_i(x) and a polynomial basis p_j(x):

z(x) = \sum_{i=1}^{n} w_i \phi_i(x) + \sum_{j=1}^{m} c_j p_j(x) = A^T(x)W + P^T(x)C, \quad (1)

where w_i and c_j are the weight coefficients of \phi_i(x) and p_j(x), respectively, and m is the number of polynomial terms. The vectors are defined as:

W^T = [w_1, w_2, \ldots, w_n], \quad C^T = [c_1, c_2, \ldots, c_m], \quad A^T(x) = [\phi_1(x), \phi_2(x), \ldots, \phi_n(x)], \quad P^T(x) = [p_1(x), p_2(x), \ldots, p_m(x)]. \quad (2)

The radial basis \phi_i(x) is a function of the distance r_i:

\phi_i(x) = \phi_i(r_i) = \phi_i(x, y), \quad (3)

r_i = [(x - x_i)^2 + (y - y_i)^2]^{1/2}. \quad (4)

The polynomial basis p_j(x) consists of the monomial terms:

P^T(x) = [1, x, y, x^2, xy, y^2, \ldots]. \quad (5)

Because the interpolation passes through all scattered points, the interpolation condition z(x_k) = f_k holds at the kth point, giving n equations in the coefficients w_i and c_j:

f_k = z(x_k, y_k) = \sum_{i=1}^{n} w_i \phi_i(x_k, y_k) + \sum_{j=1}^{m} c_j p_j(x_k, y_k), \quad k = 1, 2, \ldots, n. \quad (6)

To guarantee that the interpolation is unique [24], additional constraints are imposed:

\sum_{i=1}^{n} w_i p_j(x_i, y_i) = 0, \quad j = 1, 2, \ldots, m. \quad (7)

Combining Eq. (6) and Eq. (7), the constraint relationship is expressed in matrix form:

\begin{bmatrix} A & P \\ P^T & 0 \end{bmatrix} \begin{bmatrix} W \\ C \end{bmatrix} = \begin{bmatrix} Z \\ 0 \end{bmatrix}, \quad (8)

where Z is the vector of function values,


Z = [f_1, f_2, f_3, \ldots, f_n]^T, \quad (9)

A is the n × n matrix on the unknowns W,

A = \begin{bmatrix} \phi_1(x_1, y_1) & \phi_2(x_1, y_1) & \cdots & \phi_n(x_1, y_1) \\ \phi_1(x_2, y_2) & \phi_2(x_2, y_2) & \cdots & \phi_n(x_2, y_2) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_1(x_n, y_n) & \phi_2(x_n, y_n) & \cdots & \phi_n(x_n, y_n) \end{bmatrix}, \quad (10)

and P is the n × m matrix on the unknowns C,

P = \begin{bmatrix} p_1(x_1, y_1) & p_2(x_1, y_1) & \cdots & p_m(x_1, y_1) \\ p_1(x_2, y_2) & p_2(x_2, y_2) & \cdots & p_m(x_2, y_2) \\ \vdots & \vdots & \ddots & \vdots \\ p_1(x_n, y_n) & p_2(x_n, y_n) & \cdots & p_m(x_n, y_n) \end{bmatrix}. \quad (11)

Because distance is directionless, \phi_k(x_i, y_i) = \phi_i(x_k, y_k), so the matrix A is symmetric. A unique solution is obtained if the inverse of the augmented matrix exists:

\begin{bmatrix} W \\ C \end{bmatrix} = \begin{bmatrix} A & P \\ P^T & 0 \end{bmatrix}^{-1} \begin{bmatrix} Z \\ 0 \end{bmatrix}. \quad (12)

Once the coefficient matrix [W, C]^T is found, the approximation function from Eq. (1) can be used to estimate the value of the function at any point.
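To make the linear algebra of Eqs. (8)-(12) concrete, here is a minimal numpy sketch (an illustration of ours; the function names and the choice of a multiquadric basis with a linear polynomial tail are our assumptions) that assembles the augmented system, solves for W and C, and evaluates z(x) at a query point.

```python
import numpy as np

def mq(r, chi):
    """Multiquadric basis phi(r) = (r^2 + chi^2)^(1/2), cf. Eq. (14)."""
    return np.sqrt(r * r + chi * chi)

def rbf_fit(pts, vals, chi=1.0):
    """Solve the augmented system of Eqs. (8)-(12) for the RBF weights W
    and the coefficients C of a linear tail P = [1, x, y] (m = 3)."""
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = mq(dist, chi)                          # symmetric n x n, Eq. (10)
    P = np.hstack([np.ones((n, 1)), pts])      # n x 3, Eq. (11)
    K = np.block([[A, P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(K, np.concatenate([vals, np.zeros(3)]))
    return sol[:n], sol[n:]                    # W, C

def rbf_eval(x, pts, W, C, chi=1.0):
    """Evaluate z(x) of Eq. (1) at a query point x = (x, y)."""
    r = np.linalg.norm(pts - x, axis=1)
    return mq(r, chi) @ W + np.array([1.0, *x]) @ C

# Example: interpolate a smooth test surface from 25 scattered samples.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(25, 2))
vals = np.sin(3.0 * pts[:, 0]) + np.cos(2.0 * pts[:, 1])
W, C = rbf_fit(pts, vals)
print(rbf_eval(np.array([0.5, 0.5]), pts, W, C))
```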

3.2 Setting up the correspondence

The RBF-based mapping technique requires setting up the correspondence between incident rays and the corresponding image points. This correspondence is really a discrete sampling of the continuous correspondence, denoted by (α_i, β_i) ↔ u_i for the ith sample point. α_i and β_i are the horizontal and vertical angles of the 3D ray in the camera coordinate frame, and u_i is the corresponding image coordinate. The establishment of (α_i, β_i) ↔ u_i is illustrated in Fig. 3. The horizontal angle α_i is provided by the 1D rotation stage; the vertical angle β_i is provided by the spherically-mounted reflector (SMR) of the laser tracker, which is brought to the appointed position by the motion stage. The concrete procedure is as follows.

Fig. 3. Schematic of setting up the correspondence (α_i, β_i) ↔ u_i for the ith sample point.

(1) Adjust the position of the camera with a 6D adjusting stage until n1 is parallel to l and the AS lies on l. n1 is the normal vector of reference plane 1; l is the rotary axis of the 1D rotation stage; the AS is the aperture stop of the camera, a physical hole 2 mm in diameter. The reference planes are physical planes. The positions of the AS, n1 and l can be measured by the laser tracker.

(2) Establish the camera coordinate frame oc−xcyczc. Take the AS as the origin oc. The xc-axis and yc-axis pass through oc and are parallel to n2 (the normal vector of reference plane 2) and n1, respectively; the zc-axis is determined by the right-hand rule.

(3) Move the SMR to the appointed position with the motion stage and record its 3D coordinate (x, y, z) in oc−xcyczc with the laser tracker. Then replace the SMR with the light spot, a spherical component 1.5 inches in diameter, the same size as the SMR. Both the SMR and the light spot can be put on a magnetic nest that moves up and down freely with the motion stage. With (x, y, z), the incident angle β_i is calculated as β_i = arctan(y/z).

(4) At each appointed position of the above step, rotate the camera around the yc-axis and acquire a line of image at every horizontal interval. After that, the correspondence consisting of a list of sample points (α_i, β_i, u_i) is set up. u_i is obtained by real-time image processing, and α_i is calculated as α_i = θ + arctan(x/z), where θ comes from the precise 1D rotation stage, whose minimum rotation interval is 1 arcsec and rotation accuracy is 0.3 arcsec (see the sketch after this list). Although the above procedure is time-consuming (acquiring 31² sample points consumes 3 hours), it runs automatically under an interface program written in Visual C++.
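The angle arithmetic of steps (3) and (4) fits in a few lines. The sketch below is a hypothetical helper of ours, not the authors' interface program; it assumes θ and the SMR coordinate are already expressed in the conventions above.

```python
import numpy as np

def sample_point(theta_deg, smr_xyz, u):
    """Form one (alpha, beta, u) sample from a rotation-stage reading
    theta (degrees), the SMR coordinate (x, y, z) in the camera frame,
    and the measured image coordinate u."""
    x, y, z = smr_xyz
    beta = np.degrees(np.arctan2(y, z))               # beta = arctan(y / z)
    alpha = theta_deg + np.degrees(np.arctan2(x, z))  # alpha = theta + arctan(x / z)
    return alpha, beta, u
```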

3.3 Completing the RBF-based mapping

Now an approximation function is constructed to pass through the above sample points (α_i, β_i, u_i) (i = 1, 2, \ldots, N_sample) by RBF interpolation, where N_sample is the number of sample points. Commonly used radial bases include the Gaussian, the thin-plate spline and the multiquadric (MQ). In this paper we choose the 2D MQ, \phi(r) = (r^2 + \chi^2)^{1/2}, as the radial basis; Kansa [25] showed that the MQ is an excellent approximation to a 2D surface. The purpose of the interpolation is to obtain the key angle α; therefore, α is the output, and u and β are the inputs. If the polynomial basis p_j(x) has the form P^T(x) = [1, x, y], the approximation function from Eq. (1) is rewritten as:

\alpha(x) = \sum_{i=1}^{N_{sample}} w_i \phi_i(x) + \sum_{j=1}^{3} c_j p_j(x) = A^T(x)W + P^T(x)C, \quad x = (u, \beta), \quad (13)

\phi_i(x) = (r^2 + \chi^2)^{1/2} = [(u - u_i)^2 + (\beta - \beta_i)^2 + \chi^2]^{1/2}, \quad (14)

where χ is a shape parameter chosen heuristically for the specific application. The accuracy of the RBF interpolation depends on χ, and for some values of χ the interpolation may become ill-conditioned [26]. Additionally, u and β should be normalized to the same scale for computational convenience [20]. The coefficients W and C in Eq. (13) are solved by Eq. (12) from the sample points (α_i, β_i, u_i). Once learned, these coefficients completely define the projection mapping of the LCEWCL. After that, for a light spot within the FOV, the target value α can be computed by Eq. (13) provided its image coordinate u and the other incident angle β (i.e., the inputs of Eq. (13)) are known. u is an observable value; how to obtain β during the extrinsic parameter calibration and the coordinate measurement is introduced in subsections 4.2 and 5.3, respectively. Here the first step of the proposed calibration method is finished.
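Putting subsections 3.1-3.3 together, a minimal sketch of the mapping fit follows. It reuses the hypothetical rbf_fit and rbf_eval helpers from the sketch in subsection 3.1; the min-max normalization of u and β and the default χ = 5 reflect the choices discussed above and in Section 5, but the details are our assumptions.

```python
import numpy as np
# Builds on rbf_fit / rbf_eval from the sketch in subsection 3.1.

def fit_alpha_map(u, beta, alpha, chi=5.0):
    """Fit the MQ-RBF mapping alpha(u, beta) of Eqs. (13)-(14) and
    return a query function. u (pixels) and beta (degrees) are first
    normalized to [0, 1], since they live on very different scales."""
    lo = np.array([u.min(), beta.min()])
    hi = np.array([u.max(), beta.max()])
    pts = (np.column_stack([u, beta]) - lo) / (hi - lo)
    W, C = rbf_fit(pts, alpha, chi)

    def alpha_of(u_q, beta_q):
        xq = (np.array([u_q, beta_q]) - lo) / (hi - lo)
        return rbf_eval(xq, pts, W, C, chi)

    return alpha_of
```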

4. Extrinsic parameters calibration

After the first step, the target angle α, and thereby the plane passing through the light spot in the camera coordinate frame, is obtained. From a practical viewpoint, this plane should be transformed to the world coordinate frame. In such a situation, the extrinsic parameter calibration is carried out to realize this coordinate transformation.

4.1 Coordinate measurement based on the multi-plane constraint

A typical application, 3D coordinate measurement, is employed to explain the extrinsic parameter calibration process. As described previously, one LCEWCL determines a plane passing through the light spot. With three LCEWCLs arranged in front of the measurement field, the 3D coordinate of the light spot can be reconstructed by the multi-plane constraint, as shown in Fig. 4. However, these three planes are expressed in their own camera coordinate frames. In order to reconstruct the 3D coordinate of the light spot by the multi-plane constraint, the extrinsic parameter calibration is carried out to realize the coordinate transformation from the camera coordinate frames to the world coordinate frame. The calibration of the extrinsic camera parameters can be completed easily by placing several control points whose 3D coordinates are provided by the laser tracker. A detailed explanation follows.

Fig. 4. Schematic of 3D coordinate measurement based on multi-plane constraint.

4.2 Simplifying the pinhole model

The proposed method simplifies the pinhole model as follows. The laser tracker coordinate frame is defined as the world coordinate frame O−XYZ. The camera coordinate frames oci−xciycizci (i = 1, 2, 3) were established in the procedure of setting up the correspondence. The extrinsic parameter calibration processes of the three cameras are independent, so we take camera 1 as an example. The relationship between O−XYZ and oc1−xc1yc1zc1 is expressed as:


\begin{bmatrix} x_{c1} \\ y_{c1} \\ z_{c1} \end{bmatrix} = [R \;\; t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \quad (15)

where (X, Y, Z, 1)^T are the homogeneous coordinates of the control point in O−XYZ, and (x_{c1}, y_{c1}, z_{c1})^T are the corresponding coordinates in oc1−xc1yc1zc1. [R, t] are the rotation and translation relating oc1−xc1yc1zc1 to O−XYZ. For each control point, the plane passing through the control point in oc1−xc1yc1zc1 is formulated as:

\tan\alpha = x_{c1} / z_{c1}. \quad (16)

Substituting Eq. (15) into Eq. (16) gives

\tan\alpha = \frac{r_{11}X + r_{12}Y + r_{13}Z + t_1}{r_{31}X + r_{32}Y + r_{33}Z + t_3}, \quad (17)

where α is computed by the RBF-based mapping technique, for which the incident angle β of the control point must be known. Therefore, we establish the camera coordinate frame again in the same way as oc1−xc1yc1zc1 was established (i.e., with the same AS and reference planes), and the new camera coordinate frames are denoted by oi−xiyizi (i = 1, 2, 3). With the 3D coordinate of the control point in o1−x1y1z1, the angle β is found, and the angle α can then be calculated by Eq. (13). The unknown parameters in Eq. (17) are only the extrinsic parameters [R, t]; no intrinsic or distortion parameters appear. Consequently, the proposed method simplifies the pinhole model. As a result, the coupling between extrinsic parameters and other parameters (e.g., intrinsic parameters and distortion parameters), which normally occurs in classical calibration methods and may lead to the failure of calibration [10–12], is avoided. The optimization computation of the simplified pinhole model is expounded in the next subsection.

4.3 Computing the simplified pinhole model

A nonlinear minimization approach is used to compute the simplified pinhole model. Assume that n control points P_i (i = 1, 2, \ldots, n) are denoted by (X_i, Y_i, Z_i) in O−XYZ and by α_i in oc1−xc1yc1zc1, respectively. According to Eq. (17), we have:

F_i = X_i r_{11} + Y_i r_{12} + Z_i r_{13} + t_1 - \tan\alpha_i X_i r_{31} - \tan\alpha_i Y_i r_{32} - \tan\alpha_i Z_i r_{33} - \tan\alpha_i t_3 = 0. \quad (18)

The unknown [R, t] to be determined includes 8 parameters: r_{11}, r_{12}, r_{13}, t_1, r_{31}, r_{32}, r_{33}, t_3. From Eq. (18), each control point gives one equation; therefore, [R, t] can be determined by solving the linear equations provided by eight or more control points. However, because of the coefficient errors of Eq. (18), caused by the 3D coordinate measurement error of the control points and the mapping error of the angle α, the rotation matrix R determined in this way does not in general satisfy the orthogonality constraints. Thus, a nonlinear approach is adopted to determine [R, t]. The rotation matrix R satisfies the following orthogonality constraints:


f_1 = r_{11}^2 + r_{12}^2 + r_{13}^2 - 1 = 0, \quad f_2 = r_{31}^2 + r_{32}^2 + r_{33}^2 - 1 = 0, \quad f_3 = r_{11}r_{31} + r_{12}r_{32} + r_{13}r_{33} = 0. \quad (19)

According to Eq. (18) and Eq. (19), [R, t] can be solved by minimizing the following function:

E = \sum_{i=1}^{n} F_i^2 + M \sum_{j=1}^{3} f_j^2, \quad (20)

where M is the penalty factor. Minimizing Eq. (20) is a nonlinear problem, which is solved with the Levenberg-Marquardt algorithm. It requires an initial iteration value, which can be obtained as follows. From Eq. (18), the plane equations provided by n control points are expressed in matrix form:

AX = 0, \quad (21)

where

A = \begin{bmatrix} X_1 & Y_1 & Z_1 & 1 & -\tan\alpha_1 X_1 & -\tan\alpha_1 Y_1 & -\tan\alpha_1 Z_1 & -\tan\alpha_1 \\ X_2 & Y_2 & Z_2 & 1 & -\tan\alpha_2 X_2 & -\tan\alpha_2 Y_2 & -\tan\alpha_2 Z_2 & -\tan\alpha_2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ X_n & Y_n & Z_n & 1 & -\tan\alpha_n X_n & -\tan\alpha_n Y_n & -\tan\alpha_n Z_n & -\tan\alpha_n \end{bmatrix}

is an n × 8 matrix and X = [r_{11}, r_{12}, r_{13}, t_1, r_{31}, r_{32}, r_{33}, t_3]^T is the initial iteration value. The closed-form solution of Eq. (21) is given by the eigenvector of A^T A associated with the smallest eigenvalue. In comparison with traditional nonlinear optimization, the one used in this paper is better conditioned for two reasons. On the one hand, the number of unknown parameters is smaller because only extrinsic parameters appear in the simplified pinhole model. On the other hand, since the RBF has excellent approximation properties, the initial iteration value determined by the proposed method is more accurate than one derived from conventional methods. Consequently, the proposed method reduces the risk of the nonlinear minimization finishing in a local solution.
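The whole of this subsection (linear initialization by the smallest eigenvector, then Levenberg-Marquardt on the penalized objective of Eq. (20)) fits in a short sketch. This is our illustration under stated assumptions, using SciPy's least_squares rather than the authors' implementation; the names and the value of M are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_extrinsic(XYZ, alpha, M=1.0e3):
    """Estimate x = [r11, r12, r13, t1, r31, r32, r33, t3] from n >= 8
    control points, following Eqs. (18)-(21). XYZ is an n x 3 array of
    world coordinates; alpha holds the mapped angles (radians)."""
    n = len(alpha)
    ta = np.tan(alpha)
    # Coefficient matrix of the homogeneous system A x = 0, Eq. (21).
    A = np.column_stack([XYZ, np.ones(n), -ta[:, None] * XYZ, -ta])
    # Initial value: eigenvector of A^T A for the smallest eigenvalue,
    # rescaled so the two rotation rows have roughly unit norm.
    _, V = np.linalg.eigh(A.T @ A)
    x0 = V[:, 0]
    x0 = x0 / np.sqrt(np.linalg.norm(x0[0:3]) * np.linalg.norm(x0[4:7]))
    sqM = np.sqrt(M)

    def residuals(x):
        F = A @ x                            # plane residuals, Eq. (18)
        r1, r3 = x[0:3], x[4:7]
        f = sqM * np.array([r1 @ r1 - 1.0,   # orthogonality, Eq. (19)
                            r3 @ r3 - 1.0,
                            r1 @ r3])
        return np.concatenate([F, f])        # minimizes E of Eq. (20)

    return least_squares(residuals, x0, method="lm").x
```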

5. Experiments and results

The RBF-based mapping technique is tested on both computer simulation and real data by evaluating the angle mapping accuracy in subsections 5.1 and 5.2, respectively. Additionally, a coordinate measurement based on the multi-plane constraint is carried out to verify the whole calibration process by evaluating the 3D coordinate measurement accuracy in subsection 5.3.

5.1 Verification of the RBF-based mapping technique by computer simulation

To verify the RBF-based mapping technique, the imaging process through the system was simulated in ZEMAX. An incident ray denoted by its incident angles (α, β) passes through the cylindrical imaging system and imprints an image point u on the linear CCD. The scattered-point data (α, β, u) are derived from the Grid Distortion data in ZEMAX; enough scattered points (up to 101²) can be obtained within the desired FOV. Some scattered points are used as sample points to construct the approximation function by RBF interpolation.


The others are used as test points to evaluate the performance of the RBF-based mapping technique. For a set of 41² sample points filling the whole FOV (40 × 40 degrees), the coefficient matrix [W, C]^T was calculated by Eq. (12). The approximation function from Eq. (13) was then used to estimate the function value at the 101² test points within the whole FOV, as illustrated in Fig. 5(a). The RBF-mapping-computed value (α_RBF) is compared with the true value from ZEMAX (α_ZEMAX). The comparison result, Δα = α_RBF − α_ZEMAX, is shown in Fig. 5(b), yielding errors below 1.4 arcsec across a high density of test points. The values of Δα at the edge, and especially the corners, of the FOV are much larger than those at other locations, which is due to the inherent edge effects of RBF interpolation [20]. Therefore, the whole FOV should be larger than the working FOV (in this paper, 36 × 36 degrees). In other words, the sample points should fill the whole FOV to train the approximation function, and the test points (or measurement points) should remain within the working FOV to obtain nodal support.

Fig. 5. (a) Set of sample points and test points corresponding to the red points and blue points, respectively. Not all points are drawn for clarity; (b) Angle mapping error for the test points.

The following experiments give a quantitative analysis of how the number of sample points N_sample and the shape parameter χ affect the accuracy of the RBF-based mapping technique. The mean value and standard deviation of the error |α_RBF − α_ZEMAX| are indicated by error bars. Figure 6(a) shows the error for different N_sample. The error decreases as N_sample increases until about 1000 sample points, beyond which there is no gain in accuracy from adding sample points. As more sample points are added, they become densely packed and a point is reached at which the error eventually grows; more technically, the RBF interpolation becomes computationally unstable for large N_sample, which leads to an ill-conditioning problem [27]. For a subset of 31² sample points, the mean value and standard deviation of the error are 0.09 arcsec (corresponding to 0.003 pixels) and 0.07 arcsec. These results indicate that the RBF-based mapping technique recovers the projection mapping of the cylindrical imaging system well. Figure 6(b) shows the error for different shape parameters χ. The interpolation should be computationally efficient but not too sensitive to a small change of χ. The figure shows good performance when χ varies from 0.1 to 7; the optimal value is 5, and the sensitivity to a small change of χ is low.


Fig. 6. The error of RBF-based mapping with respect to different configurations by computer simulation: (a) N sample and (b) χ .

5.2 Verification of the RBF-based mapping technique by real data

The experimental platform for setting up the real correspondence between incident rays and the corresponding image points is shown in Fig. 7. As mentioned in subsection 3.2, the real correspondence, a set of sample points (α, β, u), can be set up automatically after adjusting the position of the camera and establishing the camera coordinate frame.

Fig. 7. Experimental platform for setting up the correspondence. (1) linear camera, (2) laser tracker, (3) light spot, (4) 1D rotation stage, (5) 6D adjusting stage, (6) motion stage, (7) SMR.

The effect of N_sample and χ on the angle mapping accuracy is now examined with real data. Angle mapping accuracy with respect to N_sample: N_sample sample points (α, β, u) are used to train the approximation function, and 73² scattered points are used to validate the RBF-based mapping technique. As shown in Fig. 8(a), the mean value and standard deviation of the error |α_RBF − α_real| decrease as N_sample increases until 31² sample points, beyond which there is no gain in accuracy from adding sample points, consistent with the simulation results. For a subset of 31² sample points, the mean value and standard deviation of the error are 1.9 arcsec (corresponding to 0.05 pixels) and 1.5 arcsec. This experiment serves to find an optimal value of N_sample that balances calibration time against calibration accuracy. It also provides a way of setting up the real correspondence for any single-viewpoint camera with a linear or planar CCD.


Angle mapping accuracy with respect to χ: As displayed in Fig. 8(b), the RBF-based mapping technique shows high accuracy when χ varies from 0.1 to 7; the sensitivity to a small change of χ is low, and the computation is efficient.

Fig. 8. The error of RBF-based mapping with respect to different configurations by real data: (a) N sample and (b) χ .

5.3 Verification of the whole calibration process by coordinate measurement

As shown in Fig. 9, the 3D coordinate measurement system consists of a light spot and three linear cameras which are fixed on a tripod for mobile and large-scale metrology as in [2].

Fig. 9. Photo of 3D coordinate measurement system. Measurement results are compared with the laser tracker. (1) camera 1, (2) camera 2, (3) camera 3, (4) light spot, (5) laser tracker.

The 3D coordinate of an arbitrary measurement point P_m is measured based on the multi-plane constraint as follows. (1) An initial 3D coordinate in O−XYZ is computed by the 7DLT method [6]. (2) According to this initial 3D coordinate, the incident angles β_i (i = 1, 2, 3) of P_m for the three cameras are calculated from the rotations and translations relating o_i−x_iy_iz_i (i = 1, 2, 3) to O−XYZ. (3) From the angles β_i and the corresponding image points u_i, the angles α_i are calculated by the proposed RBF-based mapping technique. (4) With α_i (i = 1, 2, 3), the multi-plane constraint is established by Eq. (17) and a new 3D coordinate of P_m in O−XYZ is computed (this plane-intersection step is sketched below). Finally, take the new coordinate as the initial value and execute steps (2)-(4) again to obtain the final 3D coordinate. The 3D coordinates of the measurement points are compared with the laser tracker (Leica AT901 laser tracker, with an accuracy of 15 μm + 6 μm/m). The coordinate comparison is performed in a 1200 mm × 1200 mm × 1200 mm volume 3 m away from the middle camera. The distance between the two nearest cameras is 0.75 m. The error distributions of the 3D coordinate measurement along the X, Y and Z directions using the two calibration methods (the proposed method and the 7DLT method) are displayed in Figs. 10(a)-10(c), respectively. The 3D coordinate measurement error of the proposed method is much smaller than that of the 7DLT method. Numerical statistical results are summarized in Table 1. The measurement accuracy along the Y and Z directions is superior to that along the X direction (the depth direction of the multi-plane constraint, i.e., the direction of the optical axis of the middle camera). This is because the proposed method is essentially based on multi-angle intersection, which provides a weak constraint in the depth direction and makes the accuracy there more sensitive to that of the angle measurement. To sum up, the obvious improvement in 3D coordinate measurement accuracy verifies that the proposed calibration method is feasible and effective with good accuracy.
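A sketch of the plane-intersection step (4) follows; it is our illustration, assuming each camera's extrinsic 8-vector from subsection 4.3 and the mapped angles α_i are available, and it solves the three plane equations of Eq. (17) in the least-squares sense.

```python
import numpy as np

def intersect_planes(extrinsics, alphas):
    """Reconstruct the light-spot coordinate from the plane constraints
    of Eq. (17), one per camera. extrinsics[i] is the 8-vector
    [r11, r12, r13, t1, r31, r32, r33, t3] of camera i, and alphas[i]
    is the angle from the RBF mapping (radians)."""
    N, b = [], []
    for x, a in zip(extrinsics, alphas):
        ta = np.tan(a)
        r1, t1, r3, t3 = x[0:3], x[3], x[4:7], x[7]
        N.append(r1 - ta * r3)   # plane normal in O-XYZ
        b.append(ta * t3 - t1)   # offset: (r1 - tan(a)*r3) . P = ta*t3 - t1
    # Least-squares intersection of the (here three) planes.
    return np.linalg.lstsq(np.asarray(N), np.asarray(b), rcond=None)[0]
```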

Fig. 10. 3D coordinate error distribution for 60 measurement points using different calibration methods: the 7DLT method and the proposed method. (a)-(c): the 3D coordinate measurement error along the X, Y and Z directions, respectively.

Table 1. Summary of statistical results for the 3D coordinate measurement error using different calibration methods. Coordinate discrepancies are shown as maximum values.

Calibration method     Number of points   ΔX (mm)   ΔY (mm)   ΔZ (mm)   σΔX (mm)   σΔY (mm)   σΔZ (mm)
7DLT method            60                 7.82      3.77      2.65      3.15       1.46       1.21
The proposed method    60                 0.43      0.21      0.16      0.19       0.09       0.07

6. Conclusions and future plan

A precise two-step calibration method for the LCEWCL has been presented. In the first step, the RBF-based mapping technique is employed to recover the projection mapping of the LCEWCL by interpolating the correspondence between incident rays and image points. For a light spot within the FOV, the plane passing through the light spot in the camera coordinate frame can be calculated accurately. The second step is the calibration of the extrinsic parameters, which realizes the coordinate transformation from the camera coordinate frame to the world coordinate frame. In principle, the proposed method can be applied to calibrate any single-viewpoint camera even if its distortion is high and asymmetric. The proposed method simplifies the pinhole model so that only extrinsic parameters appear in the simplified model; therefore, the coupling between extrinsic parameters and other parameters is avoided. Additionally, the nonlinear optimization used to compute the parameters of the simplified pinhole model is better conditioned, since fewer parameters are needed and a more accurate initial iteration value is obtained thanks to the good performance of the RBF-based mapping technique. Both computer simulation and real data have been used to validate the RBF-based mapping technique by evaluating the angle mapping accuracy; the mean angle mapping errors are 0.09 arcsec and 1.9 arcsec, respectively. A coordinate measurement experiment based on the multi-plane constraint was also carried out to verify the whole calibration process by evaluating the 3D coordinate measurement accuracy. Compared with the conventional method, the proposed method reduces the maximum 3D coordinate errors along the X, Y and Z directions from 7.82 mm, 3.77 mm and 2.65 mm to 0.43 mm, 0.21 mm and 0.16 mm, respectively. The experimental results demonstrate that the proposed method is effective and that the LCEWCL calibrated by it satisfies most large-scale metrology and dynamic position-tracking applications requiring sub-millimeter accuracy. In future work, a beam-splitter prism will first be introduced into the optical system. The new camera will consist of two orthogonal linear CCDs, which detect the two incident angles of the 3D ray. The two angles can be calculated by the proposed RBF-based mapping technique and correspond to two planes passing through the light spot; one more constraint is thus added for each camera, which is beneficial to the measurement accuracy. Secondly, the application of the proposed method to the calibration of 2D cameras for vision measurement will be studied.

Acknowledgments

This work is supported by the National Natural Science Funds for Distinguished Young Scientists (51225505), National Natural Science Foundation of China (51405338) and the National High-technology Research & Development Program of China (863 Program, 2012AA041205). We thank Shimin Zhu for designing the cylindrical lenses of the linear camera. We also thank Mengmeng Xu for providing the Grid Distortion data employed in simulation experiment.
