
Optical Engineering 52(10), 104106 (October 2013)

Improving calibration accuracy of structured light systems using plane-based residual error compensation

Dong Han
Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Milano 20133, Italy
and Tianjin University, School of Electronic Information Engineering, Tianjin 300072, China
E-mail: [email protected]

Antonio Chimienti
National Research Council, Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni, Torino 10129, Italy

Giuseppe Menga
Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino 10129, Italy

Abstract. The performance of a structured light system (SLS) depends highly on its calibration, especially in applications such as surface metrology and product quality control, where high accuracy is required. Motivated by building a real-time, highly accurate flatness measurement system, we propose a plane-based residual error compensation algorithm for improving the calibration accuracy of SLSs. Following a highly accurate procedure of geometric calibration using circular control points, the proposed algorithm enforces the planar constraint on the three-dimensional reconstruction of the circular control points, which are projected on a perfect plane, to further reduce the residual calibration error. Our method compensates for the largest proportion of the residual error, in most cases the model error of the lens distortion in the system. Experimental results show that the compensation elevates the calibration accuracy to a higher level: the planarity error is reduced by 70%, and this accuracy is comparable to that of a well-reputed industrial mechanical measuring machine. © 2013 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.OE.52.10.104106]

Subject terms: structured light system; calibration; compensation; triangulation. Paper 131090 received Jul. 18, 2013; revised manuscript received Sep. 19, 2013; accepted for publication Sep. 30, 2013; published online Oct. 22, 2013.

1 Introduction
In industrial production, it is often necessary to examine the planarity of mechanical pieces for quality control. The traditional way of examining a batch of products consists of random batch sampling and testing the samples with coordinate measuring machines (CMMs). The drawback of this procedure is twofold: on one hand, random sampling does not guarantee the qualification of every piece in the batch; on the other hand, the touch-trigger probes in measuring machines make the measurement process extremely slow. Therefore, motivated by building a real-time, in-process flatness measurement system for full quality control, we turned to an active stereo vision system, the structured light system (SLS).

SLSs have been used extensively in high-precision three-dimensional (3-D) measurements,1–3 such as shape measurement of turbine blades and precise planarity measurement. A camera and a projector are the two indispensable components of such systems. The projector projects a user-defined pattern onto the object in the field of view, and the camera captures the object-distorted pattern, which reveals the texture and depth information of the object. According to the mechanism by which the pattern is projected, SLSs can be classified into two broad categories: single-shot and multiple-shot. The projection and acquisition of patterns in a single-shot SLS can happen in a fraction of a second, making this kind of SLS very suitable for performing on-line tasks. However, although it is faster than a CMM, an SLS can only provide a measuring accuracy comparable to that of a CMM when proper calibration is


performed. In this article, we focus on improving the calibration accuracy of SLSs, especially for planarity measurement.

2 Related Work
The calibration of SLSs has been a hot research topic in recent years, and numerous calibration methods have been proposed. The calibration methods mainly differ in the way of performing bundle adjustment and building point correspondences between two views. Sadlo et al.4 use a stratified approach in which the calibration of the projector depends on the 3-D points recovered by a precalibrated camera. In such an approach, a small error in the camera calibration results in a much bigger error in the projector calibration. This problem is solved in Ref. 5 by simultaneously calibrating the camera and the projector using bundle adjustment, and a better result is achieved.

Finding accurate point correspondences is of great importance for accurate calibration. Most of the current flexible calibration methods project phase-shift patterns to acquire point correspondences because of their convenience. Some of them6–8 also use calibration boards to assist the calibration, while Ref. 9 abandons the calibration board and resorts to decomposition of the radial fundamental matrix to calibrate the two devices simultaneously. The problem with phase-shift-based methods is that they depend on photometry to achieve subpixel accuracy and are not accurate when there are sharp edges or dark regions. Therefore, many researchers choose to project geometric patterns, such as chessboards or circular control points, to minimize the photometric impact. In this category, while projecting a chessboard pattern10,11 is more popular, using circular control points12,13 is more accurate. The reason for that is obvious: while occupying the




same area, the detection of circle centers is less error prone than the detection of chessboard corners.

So far, all the calibration systems have a similar structure, which includes point correspondence establishment and camera and projector model fitting. We seek to further improve the calibration accuracy by adding compensation to this existing structure. In this article, in addition to a highly accurate SLS calibration using circular control points, a plane-based residual error compensation scheme is proposed to refine the initial calibration. The compensation scheme enforces the planar constraint on the reconstructed circular control points to rectify inaccuracies of the initial system calibration. A preliminary version of this compensation algorithm was sketched in Ref. 14. The proposed method is easy to apply, and high accuracy is achieved using only off-the-shelf devices, without the assistance of expensive and sophisticated mechanical equipment.

The rest of the article is organized as follows: Sec. 3 describes the initial calibration procedure, including the system model, calibration target, data acquisition, initialization, and optimization. The residual error is then analyzed and the compensation scheme is described in Sec. 4. Section 5 shows the experimental results. Some discussion is provided in Sec. 6. Section 7 concludes the article.

3 Initial Calibration of the SLS
The initial system calibration is based on observing and projecting circular control points. The system is illustrated in Fig. 1. For clarity of development, the subscripts "W," "C," and "P" are used in the following to distinguish between the world, camera, and projector coordinate systems, whereas the superscripts "c" and "p" distinguish between data or parameters for camera and projector calibration, respectively.
3.1 Camera Model
The camera is modeled as a common pinhole camera integrated with radial and tangential lens distortion.15 The pinhole model describes the projection relationship between a 3-D point $X_W = [X, Y, Z, 1]^T$ in the world reference frame and its corresponding image point $x_C = [x, y, 1]^T$. The relationship is

$$\lambda x_C = A [R \mid t] X_W, \tag{1}$$

where $\lambda$ is a nonzero arbitrary scale factor and $[R \mid t]$ is the $3 \times 4$ joint rotation and translation matrix that aligns the world coordinate system with the camera coordinate system. $A$ is the matrix of intrinsic parameters of the form

$$A = \begin{bmatrix} f_x & 0 & p_x \\ 0 & f_y & p_y \\ 0 & 0 & 1 \end{bmatrix}, \tag{2}$$

where $f_x$, $f_y$ are the camera focal lengths and $(p_x, p_y)$ is the principal point. In practice, a point is not projected to the point $x_C$ predicted by the model but to $x^d_C = [x^d_C, y^d_C, 1]^T$ due to lens distortion. To compensate for the lens distortion, the pinhole model is extended with three radial lens distortion coefficients, $k_1$, $k_2$, $k_3$, and two tangential distortion coefficients, $p_1$, $p_2$. Let $x'_C = [x', y', 1]^T$ be the world point $X_W$ described in the camera coordinate frame, i.e., $\lambda' x'_C = [R \mid t] X_W$. The distorted image point is $x^d_C = A x''_C$, where $x''_C = [x'', y'', 1]^T$,

$$\begin{bmatrix} x'' \\ y'' \end{bmatrix} = \begin{bmatrix} (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\,x' + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2) \\ (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\,y' + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' \end{bmatrix}, \tag{3}$$

and

$$r^2 = x'^2 + y'^2. \tag{4}$$

To lighten the notation in the following sections, we group the camera intrinsic parameters into a single vector $\Phi^c = \{f^c_x, f^c_y, p^c_x, p^c_y, k^c_1, k^c_2, k^c_3, p^c_1, p^c_2\}$ and the extrinsic parameters into $\Theta^c = \{r^c_{xyz}, t^c\}$, where $r^c_{xyz}$ is the three-element vector of a minimal representation of the rotation matrix $R$ following the Rodrigues formula.16 In the calibration process, the camera model is used to describe the projection of the circle centers $X^c_W$ on the calibration target plane to the centers $x^c_C$ on the image plane. For convenience, we denote this projection as a function such that $x^c_C = D_c(X^c_W; \Phi^c, \Theta^c)$.
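As a concrete illustration, the projection chain of Eqs. (1) to (4) can be sketched in a few lines of numpy. The function below is a minimal stand-in written for this article's notation, not the authors' implementation; all numeric values in the check are hypothetical.

```python
import numpy as np

def project_point(X_w, R, t, A, dist):
    """Project a 3-D world point through the pinhole + distortion model.

    R, t align the world frame with the camera frame (Eq. 1), A is the
    intrinsic matrix (Eq. 2), dist = (k1, k2, k3, p1, p2) as in Eq. (3).
    """
    k1, k2, k3, p1, p2 = dist
    Xc = R @ X_w + t                          # world -> camera frame
    xp, yp = Xc[0] / Xc[2], Xc[1] / Xc[2]     # normalized coordinates x', y'
    r2 = xp ** 2 + yp ** 2                    # Eq. (4)
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xpp = radial * xp + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp ** 2)  # Eq. (3)
    ypp = radial * yp + p1 * (r2 + 2 * yp ** 2) + 2 * p2 * xp * yp
    u = A @ np.array([xpp, ypp, 1.0])         # x_C^d = A x''_C
    return u[:2]

# Sanity check with hypothetical intrinsics and zero distortion.
A = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
uv = project_point(np.array([0.1, 0.2, 0.0]), np.eye(3),
                   np.array([0.0, 0.0, 1.0]), A, (0, 0, 0, 0, 0))
```

With zero distortion the chain reduces to the plain pinhole model, so the check above lands at the expected pixel.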

Fig. 1 System model. (a) System components and (b) geometrical model.


3.2 Projector Model
The projector can be modeled as an inverse camera6 and, therefore, the camera model developed above applies as well. One can obtain the projector intrinsic parameters $\Phi^p = \{f^p_x, f^p_y, p^p_x, p^p_y, k^p_1, k^p_2, k^p_3, p^p_1, p^p_2\}$ and the extrinsic parameters $\Theta^p = \{r^p_{xyz}, t^p\}$. Since the projector cannot capture images by itself, a regular grid of circles is projected interleaved between the black circles on the calibration target plane and is viewed by the camera. The centers of the projected circles in the world coordinate frame are computed through the known parameters of the calibrated camera, i.e., $x^p_W = D_c^{-1}(x^p_C; \Phi^c, \Theta^c)$. The relationship between these circle centers and their corresponding centers in the projector image is described by the projector model. We also denote this relationship as a function $x^p_P = D_p(x^p_W; \Phi^p, \Theta^p) = D_p(x^p_C; \Phi^c, \Theta^c, \Phi^p, \Theta^p)$.

3.3 Calibration Target
The calibration target is designed as a square grid of equally spaced circular control points. The diameter of the circles and the spacing among them are chosen to leave room for the interleaved projection of small circles by the projector (shown as dotted circles in Fig. 2). These projected circles enable the simultaneous calibration of the camera and the projector. We choose circular control points instead of the standard chessboard pattern because they provide better results. This is understandable if the following two points are considered: (1) when occupying the same region in an image, the center of a circle is more accurately detected, since it benefits from a longer contour compared with the edges of a chessboard, and (2) the sensitivity to rotation and translation is reduced because of its isotropic shape. The accuracy comparison between using the circle centers and the chessboard corners is given in Sec. 4.
3.4 Data Acquisition
In order to calibrate the system, a certain number of poses of the calibration target must be viewed by the system. For each pose, two series of data are collected, corresponding to the printed markers and the projected circles, respectively. The first, which consists of the centers of the printed markers and their projections in the camera image, is used to make an initial calibration of the camera. The second, which includes the centers of the projected white circles and their

corresponding center points in the projector image, is used for both the projector initial calibration and the bundle adjustment. The centers of the printed markers in the camera image are easily obtained by efficient contour extraction17 and ellipse fitting. However, to project each white circle onto the blank space between its four neighboring black markers, a homography between the target plane and the projector image plane must be calculated. An approximate homography calculated without lens distortion compensation suffices for this purpose, because the homography is only employed to avoid interference between the projected circles and the printed ones. The homography relationships are illustrated in Fig. 3. Three steps are necessary to obtain the target-to-projector homography:

1. Compute the target-to-camera homography H_wc from the circle centers x^c_W on the pattern target and their corresponding centers x^c_C in the camera image, using the method described in Ref. 15.
2. Compute the camera-to-projector homography H_cp. To collect point correspondences, horizontal and vertical lines are projected, and their intersections are recorded in both the projector image and the camera image. The lines in the camera image are detected using the algorithm from Ref. 18.
3. Compute the target-to-projector homography H_wp by multiplying the previous homographies, H_wp = H_cp H_wc. This is the final homography that enables us to obtain the central coordinates of the circles to be projected.

It is clear that the ellipse/circle centers x^p_C in the camera image and the synthetic circle centers x^p_P in the projector image are corresponding points. After repeating the above steps for each pose of the target, the data acquisition is complete.

3.5 Initialization
A bundle adjustment scheme is adopted to calibrate both the camera and the projector simultaneously.
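Before moving to the initialization, the three-step homography chain of Sec. 3.4 can be sketched with synthetic data. `estimate_homography` below is a plain DLT estimate standing in for the method of Ref. 15, and the ground-truth homographies are hypothetical values chosen only to make the example self-contained.

```python
import numpy as np

def estimate_homography(src, dst):
    """Plain DLT estimate of H with dst ~ H @ src; points are (N, 2) arrays."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = flattened H
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to (N, 2) points in inhomogeneous coordinates."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:]

# Hypothetical ground-truth homographies standing in for real calibration data.
H_wc_true = np.array([[1.1, 0.02, 5.0], [0.03, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
H_cp_true = np.array([[0.9, -0.01, 12.0], [0.02, 1.05, 7.0], [-1e-4, 1e-4, 1.0]])

target = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 25], [25, 75]], float)
cam = apply_h(H_wc_true, target)
H_wc = estimate_homography(target, cam)                    # step 1
H_cp = estimate_homography(cam, apply_h(H_cp_true, cam))   # step 2
H_wp = H_cp @ H_wc                                          # step 3
```

The chained `H_wp` maps target-plane coordinates straight into the projector image, which is all that is needed to place the projected circles between the printed ones.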
To initialize the optimization, the camera and projector are precalibrated separately using the method of Zhang.19 The camera is precalibrated using the circle centers xcW of all poses of the calibration target and their corresponding projections xcC in the camera image, whereas the projector is precalibrated using the correspondences between the projected ellipse centers xpW on the calibration target and their corresponding circle centers xpP in the projector image. Different from the procedure adopted to estimate the homography Hwc, Hcp , the precalibration estimates the radial and tangential distortions

Fig. 2 Calibration pattern. The black circles are preprinted, whereas the dotted markers are projected by the projector.



Fig. 3 Homographies between target, camera, and projector.


for both the camera and the projector. The initialization yields the intrinsic parameters of the camera and the projector, along with the poses relative to the target.

3.6 Bundle Adjustment
For a total number of $I$ target poses, the bundle adjustment employs the Levenberg–Marquardt algorithm,20 which minimizes the reprojection error in both the camera and the projector simultaneously. It searches for the best solution of the system parameters $\Psi = \{\Phi^c, \Phi^p, \Theta^{pc}\}$ in a combined calibration volume. The objective function of the minimization is

$$E(\Psi) = \sum_{k=1}^{I} \left[ \sum_{i=1}^{M} \left\| x^c_{C(ik)} - D_c\left(X^c_{W(ik)}; \Phi^c, \Theta^c_k\right) \right\|^2 + \sum_{j=1}^{N} \left\| x^p_{P(jk)} - D_p\left(x^p_{C(jk)}; \Phi^c, \Theta^c_k, \Phi^p, \Theta^p_k\right) \right\|^2 \right], \tag{5}$$

where $M$ is the number of printed markers and $N$ is the number of projected circles in each image. $\Theta^{pc} = \{r^{pc}, t^{pc}\}$ is the best estimated pose of the projector relative to the camera, computed using the corresponding $\Theta^c_k$ and $\Theta^p_k$.

4 Residual Error Compensation
Unlike common open-loop calibration system designs, we propose a closed-loop design to further improve the accuracy. In the new design, the residual calibration error is rectified in a compensation step. To this end, the composition of the calibration error is first analyzed, and then the dominant component is canceled by the proposed compensation algorithm.

4.1 System Error Analysis
The residual calibration error is evaluated by examining the planarity of the reconstruction of a perfect planar surface at a random pose. A grid of $S \times T$ circles is projected on the planar surface and their centers are triangulated. A plane $\pi$ is fitted to the 3-D points by minimizing the sum of Euclidean distances between every reconstructed point and the plane. The mean $\mu_d$ of the distances should be 0, and the standard deviation $\sigma_d$ shows the error.
This error encompasses several aspects: first, nonplanarity of the calibration target and the uncertainty of target measurement; second, correspondence error introduced by the ellipse center localization algorithm; third, bad estimation of the projector lens distortion; and fourth, inevitable white noise. Ideally, the error distribution across the measuring area should be uniform, which means the system is so well optimized that white noise is the only source of error. In that case, the error stays unchanged when the measuring area is expanded; otherwise, the reconstructed plane is generally a quadratic surface and the error increases quadratically as the measuring area increases. As long as the distribution is not uniform, better performance can be achieved by improving any of the first three aspects that are responsible for the systematic error.

In an SLS, there is usually larger lens distortion in a common commercial projector than in the camera. This nonlinear distortion of the projector is not perfectly predicted by a distortion model based on an even-order polynomial. In fact, the model falls into a dilemma: a fourth-order polynomial model is not enough to reduce the distortion error to a satisfactory level, whereas a sixth-order model results in an over-fitting problem. Therefore, we believe that the bad estimation of the lens distortion, especially the projector lens distortion, is the dominant component of the error, and that other errors, such as calibration target nonplanarity and circle center localization error, do not affect the system accuracy as substantially as the lens distortion does.21
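The planarity evaluation described above can be sketched as follows. One simplification to note: the paper fits $\pi$ by minimizing the sum of Euclidean distances, whereas the SVD fit below minimizes the sum of squared distances, a common stand-in; the synthetic quadratic bow mimics the residual-distortion shape discussed above, and all numbers are hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane through (N, 3) points via SVD.

    Returns a unit normal n and offset d such that n @ p + d = 0 for
    points p on the plane; the plane passes through the centroid.
    """
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                        # direction of smallest spread
    return n, -n @ centroid

def planarity_error(points):
    """Mean mu_d and standard deviation sigma_d of signed point-to-plane distances."""
    n, d = fit_plane(points)
    dist = points @ n + d
    return dist.mean(), dist.std()

# Reconstructed 15 x 17 grid with a small quadratic bow, mimicking the
# residual-lens-distortion error shape (synthetic units of mm).
u, v = np.meshgrid(np.linspace(-60, 60, 15), np.linspace(-60, 60, 17))
pts = np.c_[u.ravel(), v.ravel(), 300 + 1e-5 * (u.ravel() ** 2 + v.ravel() ** 2)]
mu_d, sigma_d = planarity_error(pts)
```

Because the fitted plane passes through the centroid, `mu_d` is zero up to round-off, so `sigma_d` alone characterizes the planarity error, as in the text.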

4.2 Compensation Scheme
Following the above assumption, a compensation scheme that takes advantage of a perfectly planar metallic surface is proposed to further rectify the projector distortion. The geometrical principle is illustrated in Fig. 4. Suppose that D, E, and F are three circle centers projected on the perfect planar surface ON by projector P. GD′, HE′, and IF′ are the ray directions predicted by the projector model. Due to the residual lens distortions, the actual light paths are GD, HE, and IF instead of GD′, HE′, and IF′, respectively. Here, it is reasonable to assume that the camera is perfectly calibrated, since it usually has little lens distortion. Therefore, if triangulation is performed, the reconstructed points are D′, E′, and F′, which are the intersections between the camera rays and the rays predicted by the projector model. The reconstructed points deviate from the true circle centers D, E, F, and the amount of deviation is related to the amount of residual lens distortion in the projector. In our compensation algorithm, we propose to intersect the camera rays with the fitting plane O₁N₁, which is calculated from the noisy reconstruction D′, E′, F′, to provide the best estimates of the true circle centers. These estimates are denoted by D₁, E₁, and F₁. The angle between PD₁ and PD′ is the rectification for the residual distortion of the direction represented by point D′, and the rectification for the other points follows a similar procedure. The detailed implementation procedure for compensation is summarized in Algorithm 1. The discrete compensation values are interpolated using B-splines to construct a continuous compensation vector field for the whole projector plane. Once constructed, the compensation vector field can be used to rectify future measurements of projector correspondence points. For a clear demonstration, a discrete point compensation map is shown in Fig. 5.
The arrow length at each point shows the compensation magnitude, and its direction is determined by the ratio between the x and y components. After adding the residual error compensation step, a new procedure for testing the planarity of a surface is described as follows:


Fig. 4 The geometrical principle of the compensation scheme.


5 Experimental Results

Algorithm 1 Calibration error compensation algorithm.

1. Calculate the fitting plane $\pi$: $ax + by + cz + d = 0$ from the noisy reconstructed points $x'_i$, $i = 1, \cdots, m$, where $m$ is the number of projected circle centers.

2. Intersect the camera rays with $\pi$ using
$$x_i = D'^{-1}_c(x^c_{iC}; \Phi^c), \qquad x_i \cdot n_\pi = 0,$$
where $n_\pi = [a, b, c, d]^T$ is the normal vector of plane $\pi$ in homogeneous form, and $D'^{-1}_c(x^c_{iC}; \Phi^c)$ is the mathematical description of backprojecting a camera image point to a ray in 3-D space.

3. Project $x_i$ into the projector image according to the projector model to get $x^{comp}_P = D_p(x_i; \Phi^p, \Theta^p)$.

4. The compensation for the projector distortion in the direction represented by the point $x^p_P$ is the difference between the compensated point and the original point, i.e., $[e^T_P, 0]^T = x^{comp}_P - x^p_P$, where $e_P = [e_x, e_y]^T$ contains the compensation values for the $x$ and $y$ coordinates, respectively.

1. Project the predesigned circle grid onto the surface with the projector and capture the projected pattern with the camera.
2. Compensate the correspondence points in the projector image with the pre-estimated compensation vector field.
3. Reconstruct the 3-D points from the camera points and the corresponding compensated projector points.
4. Calculate the planarity of the surface.
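Steps 2 to 4 of Algorithm 1 amount to a ray-plane intersection followed by a reprojection. The sketch below uses an ideal pinhole as a distortion-free stand-in for the projector model $D_p$; all numeric values (intrinsics, baseline, plane depth, residual offset) are hypothetical.

```python
import numpy as np

def intersect_ray_plane(origin, dirs, n, d):
    """Step 2: intersect rays origin + s*dir with the plane n @ x + d = 0."""
    s = -(n @ origin + d) / (dirs @ n)        # one scale factor per ray
    return origin + s[:, None] * dirs

def project_pinhole(X, K, R, t):
    """Ideal pinhole projection, a distortion-free stand-in for D_p."""
    Xc = X @ R.T + t                           # world -> projector frame
    x = Xc[:, :2] / Xc[:, 2:]
    return x @ K[:2, :2].T + K[:2, 2]

# Hypothetical setup: camera at the origin, projector offset by ~250 mm
# (the order of magnitude of the baseline in Sec. 5.1).
K_p = np.array([[800.0, 0.0, 400.0], [0.0, 800.0, 300.0], [0.0, 0.0, 1.0]])
R_p, t_p = np.eye(3), np.array([-250.0, 0.0, 0.0])
n, d = np.array([0.0, 0.0, 1.0]), -300.0       # fitted plane pi: z = 300 mm

# Camera rays toward three noisy reconstructed points D', E', F'.
rays = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.0, 0.1, 1.0]])
X_est = intersect_ray_plane(np.zeros(3), rays, n, d)   # estimates D1, E1, F1

# Steps 3-4: reproject the estimates and subtract the originally measured
# projector points (synthetic here) to obtain the compensation vectors e_P.
x_comp = project_pinhole(X_est, K_p, R_p, t_p)
x_meas = x_comp + np.array([0.4, -0.2])        # pretend ~0.4-px residual distortion
e_P = x_comp - x_meas
```

In the real system the `e_P` samples would then be interpolated over the projector plane to give the continuous compensation vector field of Sec. 4.2.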

5.1 Experimental Setup
The system used to provide the experimental results consists of a grayscale AVT-Dolphin camera with a 12.5-mm lens and a NEC NP50 DLP projector. The length of the baseline is about 250 mm, and the angle between the two devices is about 75 deg. The calibration volume is about 120 × 120 × 100 mm³.

5.2 Reprojection Error of Initial Calibration
The initial calibration is conducted using 18 different poses of the calibration target over the entire calibration volume. The projector projects circular points onto each pose of the calibration target, and the camera captures images containing both the printed target and the projected pattern. Before performing compensation, we evaluate the root mean square error (RMSE) of the reprojection for the initial calibration, and two comparisons are made in terms of RMSE. The first is a comparison between different calibration targets, i.e., chessboard and circular points. The width of a square in the chessboard pattern is chosen to be the same as the distance between circle centers in the circular point pattern. The projector always projects circles onto the calibration target: if the pattern is a chessboard, they are projected onto the white squares; if the pattern is circular points, they are projected as described in Sec. 4.1. The comparison results are shown in Table 1. From the table we can see that, by using the circular pattern instead of the chessboard pattern, the calibration accuracy is improved by about six to eight times.

In the second comparison, shown in Table 2, we compare the RMSE of our method (using circular point patterns) with state-of-the-art SLS calibration methods in the current literature. The methods we selected for comparison are good representatives, in terms of RMSE, of the two general categories of finding point correspondences mentioned in Sec. 2. It is shown that our result is comparable to, if not better than, most of the methods.
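The per-axis reprojection RMSE reported in Tables 1 and 2 reduces to a one-line computation over observed-versus-predicted point pairs; a minimal helper, written only for illustration:

```python
import numpy as np

def reprojection_rmse(observed, predicted):
    """Per-axis RMSE (x and y, in pixels) between observed image points and
    the points predicted by the calibrated model; inputs are (N, 2) arrays."""
    err = np.asarray(observed, float) - np.asarray(predicted, float)
    return np.sqrt((err ** 2).mean(axis=0))

# Tiny synthetic example: a 1-px error in x on one of two points.
rmse = reprojection_rmse([[0, 0], [1, 1]], [[1, 0], [1, 1]])
```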
5.3 Planarity Error of Compensated Calibration
In order to assess the overall accuracy of the system, we examine the planarity error of the reconstruction of a perfect planar surface, following the procedure of system error analysis described in Sec. 4. The planar surface is a rectangular metallic piece with a guaranteed planarity of 5.9 μm, a value that was confirmed before our experiments on the

Table 1 Reprojection root mean square error (RMSE) comparison of different patterns.

[Table 1 compares the reprojection RMSE of the camera and the projector (x and y, in pixels) for the chessboard-corner and circle-center patterns, together with the percentage of RMSE reduction obtained by using circle centers; the numeric entries were not recovered from the source.]

Fig. 5 Regional vector field of projector distortion compensation.


Table 2 Accuracy comparison with state-of-the-art methods.

[Table 2 lists the reprojection RMSE (pixels) of Zhang and Huang,6 Ashdown and Sato,22 W. Gao et al.,23 Li et al.,7 Ouellet et al.,13 Ma et al.,8 and our method; the numeric entries were not recovered from the source.]

Hexagon CMM machine. The pose of the metallic surface is different from the one from which the error compensation vector field was constructed. The measured area is about 120 × 120 mm², and the planar object is placed at ∼300 mm from the system. A grid of 15 × 17 circles is projected, and the projector points are rectified using the precalculated compensation values before triangulation. The resultant standard deviation of the planarity error is a fairly small number, 9.86 μm, which is one third of the uncompensated value. We also investigate the relationship between the planarity error and the measuring area and compare it with the uncompensated data. The result is illustrated in Fig. 6. Here, we change the measuring area merely by extracting a subregion of the projected circles, not by adjusting the position of the plane. The figure shows that the compensation dramatically reduces the planarity error, especially when the measuring area is large. In fact, the error is reduced by 72% when the measurement area reaches 120 × 120 mm². One also notices that the planarity error after compensation stays almost the same as the measuring area increases. This stability is a sign of the systematic error being almost purely white, which is demonstrated by the distribution of the planarity error in the measuring area, shown in Fig. 7(a). It shows a distribution of nearly white noise;

Fig. 6 Planarity error comparison before and after compensation.


however, in comparison, a clear quadratic shape and a higher error amplitude are observed in Fig. 7(b), which is the distribution of the uncompensated error obtained in the same setup as Fig. 7(a). Note that the planarity error is a signed value, where a negative value means that the reconstructed point is below the fitting plane, whereas a positive value means that it is above the fitting plane.

Finally, we compare our planarity error with the state of the art. Tsi et al.24 used a Gray code plus line-shift method to reconstruct a planar surface and achieved a relatively small planarity error at that time. More recent results were obtained by Li et al.,7 Ma et al.,8 and Ouellet et al.13 These results, together with the measurement from a CMM, are included in the comparison. However, one slight problem with the comparison is that the planarity error provided by any SLS is an optimized result over its own configuration space. The difference in working volume makes it difficult to compare the accuracy of various SLSs in a unified framework. Fortunately, we have observed that for an SLS whose working volume profile is a square, the planarity error is proportional to the width of the square. This rule allows us to compare the planarity errors of different SLSs. The comparison result is shown in Fig. 8. We can see that our planarity error is far below those of the other methods and is close to the value provided by the DEA PIONEER measuring machine of Hexagon Metrology, which is declared to have an accuracy of 3 μm on point coordinate measurements.25

6 Discussion
The error compensation vector field depends only on the ray direction represented by the pixel position in the projector image, not on the poses of the planes from which the vector field is calculated. The reason for this is that the extent to which a ray is distorted depends only on the ray direction and the lens properties, not on the external plane positions.
Rotation and translation tests were conducted to examine the robustness of the proposed compensation algorithm. A reference plane at a random pose is used to compute the compensation vector field, and the same vector field is applied to compensate measurements of planes at other poses. These various poses are obtained by rotating and translating the reference plane. The test results are shown in Fig. 9. We can see from the figure that the planarity error remains virtually the same regardless of the difference in plane poses. This means that even though the rotation and translation information is not included when constructing the vector field, the calibration accuracy is not substantially impaired. We also list in the figure the uncompensated results to allow a comparison.

In the development of our algorithm, we have assumed that the residual error comes mostly from the projector lens distortion. However, one may argue that it is actually the combination of all the errors that exist in the system. Such errors may come not only from the residual lens distortion but also from other sources, such as the nonplanarity of the calibration artifact, equipment vibration during movement, variation of the temperature, etc. In this sense, we can conclude from the experimental results that, within an appropriate working volume, the proposed method compensates for the calibration inaccuracies, which include lens distortions and other noise present in the system. This leads to



Fig. 7 The error distribution of a reconstructed plane (a) with compensation and (b) without compensation.

Fig. 8 Planarity error comparison with state-of-the-art methods.

Fig. 9 Rigid motion test for the compensation vector field. (a) Compensation vector field translation test and (b) compensation vector field rotation test.

a further practical application in the industrial production line: sparing maintenance personnel the trouble of recalibrating the whole SLS when it merely suffers from minor accuracy degradation, which may be due either to small vibrations during production or to a poor initial calibration. A single shot of a planar surface, taken in a fraction of a second, should rectify the calibration degradation.

7 Conclusion and Future Work
In this article, we present a highly accurate approach for the calibration of an SLS composed of one camera and one projector. The contribution of this work, driven by a direct analysis of the calibration error, is the introduction of a calibration compensation framework to further reduce the systematic error caused by the residual distortion. The systematic error is



compensated for by introducing a perfect planar surface into the scene, assuming that, as the error analysis highlights, the distortion of the projector is the main error source. Although the compensation algorithm is primarily targeted at rectifying the residual lens distortion in the system, we have demonstrated that the effectiveness of the algorithm goes even beyond that: our algorithm is able to compensate for composite error sources, including the lens distortion, the nonplanarity of the calibration targets, and the inaccuracies of the image processing algorithms involved. Despite the use of off-the-shelf devices, experimental results show that the compensated planarity error is smaller than that of most existing state-of-the-art SLS calibration methods and is close to the value provided by a well-reputed mechanical measuring machine. Other issues that will help improve the calibration accuracy of the system and will enable other applications may include:

the use of a better calibration pattern made of a metallic planar plate with circular points, instead of a printed paper pasted on a planar surface; • the adoption of a more accurate point correspondence algorithm, e.g., projecting the contours instead of the ellipse centers; • the use of a more accurate positioning device for the calibration target to prevent the formation of small vibrations; • the extension to dense 3-D reconstruction. An interesting evolution may encompass the on-the-fly recalibration of the extrinsic parameters (camera projector relative pose) that takes into account the small movement caused by the fact that the devices work in a dirty environment as a production plant may; the recalibration can done by using the same planar support used for error compensation. Acknowledgments This work is supported by China Scholarship Council. We wish to thank Mr. Giuseppe Pettiti of Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni (IEIIT) of the National Research Council (CNR) of Italy for providing the experimental environment and support. We would also want to thank HEXAGON company in Turin for providing the machine measuring data.

References

1. J. Geng, "Structured-light 3D surface imaging: a tutorial," Adv. Opt. Photonics 3(2), 128–160 (2011).
2. J. Pagès et al., "Overview of coded light projection techniques for automatic 3D profiling," in Proceedings of IEEE International Conference on Robotics and Automation, pp. 133–138, IEEE, Taipei, Taiwan (2003).
3. F. Chen, G. M. Brown, and M. Song, "Overview of three-dimensional shape measurement using optical methods," Opt. Eng. 39(1), 10–22 (2000).
4. F. Sadlo et al., "A practical structured light acquisition system for point-based geometry and texture," in Proceedings of the Eurographics Symposium on Point-Based Graphics, pp. 89–98, IEEE, Stony Brook, New York (2005).
5. R. Legarda-Sáenz, T. Bothe, and W. P. Jüptner, "Accurate procedure for the calibration of a structured light system," Opt. Eng. 43(2), 464–471 (2004).
6. S. Zhang and P. S. Huang, "Novel method for structured light system calibration," Opt. Eng. 45(8), 083601 (2006).
7. Z. Li et al., "Accurate calibration method for a structured light system," Opt. Eng. 47(5), 053604 (2008).
8. S. Ma et al., "Flexible structured-light-based three-dimensional profile reconstruction method considering lens projection-imaging distortion," Appl. Opt. 51(13), 2419–2428 (2012).
9. S. Yamazaki, M. Mochimaru, and T. Kanade, "Simultaneous self-calibration of a projector and a camera using structured light," in Computer Vision and Pattern Recognition Workshops, pp. 60–67, IEEE Computer Society, Colorado Springs (2011).
10. T. Li, F. Hu, and G. Zheng, "Geometric calibration of a camera-projector 3D imaging system," in Proc. of 10th International Conference on Virtual Reality Continuum and Its Applications in Industry (VRCAI '11), pp. 187–194, ACM, New York (2011).
11. S. Fernandez and J. Salvi, "Planar-based camera-projector calibration," in 7th International Symposium on Image and Signal Processing and Analysis (ISPA), pp. 633–638, IEEE Computer Society, Croatia (2011).
12. A. Ben-Hamadou et al., "Flexible calibration of structured-light systems projecting point patterns," Comput. Vision Image Underst. 117(10), 1468–1481 (2013).
13. J.-N. Ouellet, F. Rochette, and P. Hébert, "Geometric calibration of a structured light system using circular control points," in The Fourth International Symposium on 3D Data Processing, Visualization and Transmission (2008).
14. D. Han, A. Chimienti, and G. Menga, "Accuracy improvement in structured light system calibration using plane based residual error compensation," in 4th European Workshop on Visual Information Processing, pp. 154–159, IEEE Computer Society, Paris (2013).
15. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, New York (2004).
16. Y. Ma et al., An Invitation to 3-D Vision, Springer, New York (2003).
17. A. Cumani, "Efficient contour extraction in color images," in Proc. 3rd Asian Conf. on Comp. Vision (ACCV), Vol. I, pp. 582–589, Springer Berlin Heidelberg, Hong Kong (1998).
18. G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools (2000).
19. Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," in Proc. 7th IEEE Int. Conf. Computer Vision, Vol. 1, pp. 666–673, IEEE, Kerkyra, Greece (1999).
20. W. H. Press et al., Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, New York (1992).
21. D. Douxchamps and K. Chihara, "High-accuracy and robust localization of large control markers for geometric camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 376–383 (2009).
22. M. Ashdown and Y. Sato, "Steerable projector calibration," in Proceedings of Computer Vision and Pattern Recognition Workshops (CVPRW '05), Vol. 3, p. 98, IEEE Computer Society, San Diego, California (2005).
23. W. Gao, L. Wang, and Z.-Y. Hu, "Flexible method for structured light system calibration," Opt. Eng. 47(8), 083602 (2008).
24. M.-J. Tsai and C.-C. Hung, "Development of a high-precision surface metrology system using structured light projection," Measurement 38(3), 236–247 (2005).
25. Hexagon Metrology, "DEA PIONEER brochure," http://www (October 28 2013).

Dong Han received his bachelor's degree in electronic information engineering and an MS in information and signal processing from Tianjin University, China, in 2007 and 2009, respectively. He is now a PhD student in the Department of Electronics, Information and Bioengineering at Politecnico di Milano. His research interests include computer vision, pattern recognition, and machine learning, especially in computer-integrated manufacturing, human motion recognition, and 3-D reconstruction.

Antonio Chimienti graduated in information science (summa cum laude) from the University of Turin, Italy, in 1987. Since 1988, he has been with the Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni (IEIIT) of the National Research Council (CNR) of Italy, where he is currently a senior researcher. His former research activity was in the video coding area, in which he developed algorithms for vector quantization, wavelet coding, motion compensation, error resilience, and error concealment. He has also worked on the protection of video streaming over packet networks and satellite links. In 2002, he joined the Artificial Vision group, working on image processing and image understanding for applications in the cultural heritage field. Currently, his research interests focus on computer vision in the field of noncontact metrology: geometric and colorimetric measurements, scene monitoring, and 3-D reconstruction of surfaces for applications in robotics, environment monitoring, and cultural heritage management.

Giuseppe Menga received his Laurea degree in electronic engineering from Politecnico di Torino, Turin, Italy, in 1967. He was a fellowship researcher at the University of Colorado in 1973 and a senior postdoctoral associate at the NASA Ames Research Center from 1974 to 1975. From 1970 to 1980, he was an associate professor at Politecnico di Torino, and since 1980, he has been a full professor of automatic control there. At the international level, he has been a member of the Advisory Committee and a technical editor of the IEEE Robotics and Automation Society; for the same society, he served as general chairman of the International IEEE Robotics and Automation Conference held, for the first time in its history outside the USA, in Nice, France, in 1992. As a professional activity, in 1978 he founded SYCO, a consulting company in the fields of robotics, automation, and software engineering.