The International Journal of Advanced Manufacturing Technology https://doi.org/10.1007/s00170-018-1739-x

ORIGINAL ARTICLE

Determination of optimal measurement configurations for self-calibrating a robotic visual inspection system with multiple point constraints

Chengyi Yu 1 & Xiaobo Chen 1 & Juntong Xi 2

Received: 24 September 2017 / Accepted: 6 February 2018
© Springer-Verlag London Ltd., part of Springer Nature 2018

Abstract  In this paper, we propose an algorithm to determine optimal measurement configurations for self-calibrating a robotic visual inspection system with multiple point constraints. The algorithm aims to improve the calibration accuracy of the robotic visual inspection system. To do so, a pre-calibration of the robotic visual inspection system is needed to obtain the hand-eye and robot exterior relationships required to implement the inverse kinematic algorithm. The candidate measurement configurations under one point constraint are obtained using the inverse kinematic algorithm for the robotic visual inspection system, and DETMAX is then applied to determine a given number of optimal measurement configurations from the candidates. Particle swarm optimization is used to optimize the positions of the multiple constraint points one by one. To verify the efficiency of the proposed approach, an experimental evaluation is conducted on a robotic visual inspection system.

Keywords: Optimal measurement configurations · Robotic visual inspection system · Self-calibration · In-line calibration

Electronic supplementary material  The online version of this article (https://doi.org/10.1007/s00170-018-1739-x) contains supplementary material, which is available to authorized users.

* Juntong Xi [email protected]
Chengyi Yu [email protected]
Xiaobo Chen [email protected]

1 School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
2 The State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, China

1 Introduction

In the automotive field, the deployment of robots has been increasing since the 1970s. Most of the time, robots are designed for repetitive work such as spray painting, picking, placing, and welding; consequently, manufacturers specify repeatability rather than absolute accuracy. However, a robotic visual inspection system designed for in-line inspection of body-in-white (BIW) parts requires the collected measurement data to be expressed in the part coordinate frame and compared with a nominal CAD model. In other words, the measuring task depends on the absolute accuracy of the robotic visual inspection system. Similar to robot calibration, the robotic visual inspection system also needs to be calibrated to improve its absolute accuracy. A robotic visual inspection system consists of an industrial robot and an optical sensor mounted at the flange of the robot, as shown in Fig. 1. In a real production environment, a robotic visual inspection system used for in-line inspection of BIW must be calibrated continuously and automatically in order to ensure its accuracy during the production process. The calibration is necessary, for instance, to compensate for expansion or contraction of the robot's links due to self-heating or ambient temperature changes, mechanical wear, and the replacement of the optical sensor. According to the method of acquiring the calibration data, robot calibration can be classified into two categories: robot calibration using external measurement devices [1–6] and robot self-calibration by imposing physical constraints [7–11]. Robot calibration methods using external measurements share the following shortcomings: they are time-consuming, are difficult to operate, need a lot of human intervention, and are unsuitable for in-line calibration in a real production environment.

Fig. 1 Schematic diagram of a robotic visual inspection system

Robot self-calibration is designed to overcome the above limitations and can be used on a production line [12]. Similarly, the robotic visual inspection system should be self-calibrated to make the calibration technique automatic, time-saving, and convenient to implement. Four spherical calibration targets are placed within the measurement envelope of the robotic visual inspection system to self-calibrate it in-line without operator intervention, and the positions of the spherical calibration targets' centers are used to identify the kinematic parameters of the inspection system. Consequently, the self-calibration of the robotic visual inspection system is implemented by imposing multiple point constraints, because the optical sensor coordinate frame is constrained by the four spherical centers during the self-calibration process. As is known, to minimize the impact of unmodeled errors and measurement errors, the measurement configurations should be selected in an optimal fashion [13–15]. In order to measure the goodness of measurement configurations, five observability indexes [13, 16–20] have been proposed. The higher the observability index value, the stronger the attribution of the position errors to the parameter errors, which means that the effects of unmodeled errors and measurement noise are less significant. As a result, better parameter estimation is achieved, which leads to greater accuracy improvement. An additional challenge is to choose the right observability index, and several researchers have addressed this issue. On the one hand, based on mathematical analysis, Sun and Hollerbach [20] considered the third observability index O3 to be the best choice. On the other hand, Horne and Notash [21] did not recommend that choice, based on a simulation study they performed on the kinematic calibration of a 2-DOF serial robot.

Zhou et al. [22] found that the fourth observability index O4 is the best option when joint stiffness is included in the calibration model, again based on a simulation study. To shed more light on the issue of choosing the right index, Joubair et al. [14, 15] performed a much more extensive simulation study and experimental verification, and found that the first observability index O1 seems to be the most appropriate option for all the studied robots. How to select the optimal measurement configuration set is another question to be answered, and optimization algorithms are necessary to address it. In the literature, many optimization algorithms have been applied to this problem through different approaches, which can be divided into two classes: choosing n measurement configurations among N candidate configurations, and choosing n measurement configurations from the robot workspace. DETMAX, which belongs to the first class, is the most common algorithm [14, 15, 23]. To reduce the complexity of computing the first observability index with DETMAX, Sun and Hollerbach [24] proposed an exchange-add-exchange algorithm to compute the determinant of the identification Jacobian matrix J iteratively. To select n measurement configurations from the robot workspace, many intelligent algorithms have been proposed, such as simulated annealing [23], genetic algorithms [22, 25], tabu search [26], and a hybrid optimal method [27]. The existing optimal measurement configuration selection methods mainly assume that the robot can reach each configuration during the robot calibration process. However, this may not hold during a self-calibration process


such as the in-line self-calibration of a robotic visual inspection system with multiple point constraints. In this paper, a method for determining optimal measurement configurations for the self-calibration of a robotic visual inspection system with multiple point constraints is presented. First, a pre-calibration of the robotic visual inspection system is proposed to obtain the hand-eye relationship and the robot exterior relationship of the system. Then a candidate configuration set is obtained using an inverse kinematic algorithm based on screw theory given one point constraint, and DETMAX is used to select a given number of optimal configurations from the candidate set. Particle swarm optimization (PSO) [28] is used to optimize the position of the spherical center. Finally, the four positions of the spherical centers are determined one by one, each taking the previously determined spherical centers into account. An experiment is carried out to verify the proposed method. The remainder of this paper is organized as follows: Section 2 briefly introduces the system construction; Section 3 presents the self-calibration method for the robotic visual inspection system; Section 4 explains the methodology used for selecting optimal measurement configurations for self-calibration with multiple point constraints; experimental validation is conducted in Section 5; and the paper ends with concluding remarks in Section 6.

2 System design and experimental setup

2.1 Optical sensor construction

A robotic visual inspection system consists of an industrial robot and an optical sensor fixed on the robot hand, as shown in Fig. 1. As shown in Fig. 2, the optical sensor mainly comprises a CCD camera (Basler acA1300/30 um) with a 16-mm lens, a laser line projector (wavelength of 730 nm, beam width ≤ 1 mm), and a one-mirror galvanometer element. The field of view is 27 × 37 × 10 mm. The working principle is as follows: the incoming laser stripe plane projected by the laser line projector hits the reflective mirror, and the light stripe formed by the intersection of the outgoing laser stripe plane with the object's surface is captured by the fixed camera. The 3D characteristic information of the object's surface can be derived from the 2D distorted structured-light stripe images after 3D reconstruction. The reflected laser stripe plane scans across the object's surface as the galvanometer element rotates, so the 3D information of the object's whole surface can be reconstructed. The measuring process for the spherical calibration target (diameter of 30 mm) is shown in Fig. 3. In Fig. 3a, points on the spherical surface are extracted in the ROI (region of interest) when the laser line scans across the calibration target.

Fig. 2 Schematic diagram of the optical sensor (camera, laser projector, one-mirror galvanometer element, laser stripe plane, object, and sensor axes XC, YC, ZC)

Figure 3b shows the 3D point cloud of the measured sphere after 3D reconstruction, and the spherical center can be obtained using nonlinear least-squares sphere fitting [29]. More details about the optical sensor can be found in [30].
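As an illustration of this fitting step, a minimal Python/SciPy sketch is given below, assuming the known 30-mm target diameter is held fixed (a free-radius variant would simply add the radius as a fourth unknown); the function name and initial-guess heuristic are illustrative, not the implementation of [29].

```python
import numpy as np
from scipy.optimize import least_squares

def fit_sphere_center(points, radius=15.0):
    """Estimate the center of a fixed-radius sphere from a 3D point cloud.

    points : (N, 3) array of surface points expressed in the sensor frame.
    radius : known sphere radius in mm (30-mm diameter target assumed).
    """
    def residuals(center):
        # radial error of every point with respect to the candidate center
        return np.linalg.norm(points - center, axis=1) - radius

    center0 = points.mean(axis=0)              # rough initial guess: centroid
    sol = least_squares(residuals, center0, method="lm")
    return sol.x

# Synthetic check: noisy points on a sphere centered at (10, 20, 130) mm
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([10.0, 20.0, 130.0]) + 15.0 * dirs + rng.normal(scale=0.02, size=(500, 3))
print(fit_sphere_center(pts))                  # ≈ [10, 20, 130]
```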

2.2 Experimental setup

In order to calibrate the robotic visual inspection system in-line without operator intervention, four spherical calibration targets are placed within the measurement envelope of the system, as shown in Fig. 4. The locations of these calibration targets should not be collinear; their accurate locations are determined using the proposed method, and the targets are then placed with the help of a laser tracker. So as not to affect the yield of the BIW line, the robotic visual inspection system measures the calibration targets in the intervals between clamping operations of the BIW. Consequently, to balance efficiency and calibration accuracy, each calibration target is measured at six different configurations, giving 24 measurements that create 72 equations to identify the kinematic parameters. Here, the laser tracker is used to place the four calibration targets at the required positions and to verify the accuracy of the robotic visual inspection system after self-calibration.

Fig. 3 The measuring process of the calibration target. a The processed gray image. b The fitted sphere

Fig. 4 Schematic diagram of the experimental setup

3 Self-calibration method for robotic visual inspection system

A robotic visual inspection system involves three coordinate transformation relationships, as shown in Fig. 1: the hand-eye relationship ($^{T}H_{S}$), the robot kinematic relationship ($^{B}H_{T}$), and the robot exterior position relationship ($^{P}H_{B}$). They form a coordinate transformation chain from the sensor coordinate frame to the part coordinate frame, and the collected measurement data can be expressed in the part coordinate frame once the three relationships are calibrated, referred to as hand-eye calibration, robot calibration, and robot exterior calibration, respectively.

3.1 System mathematic model

Here, the mathematic model of the robotic visual inspection system is built mainly on the MDH model so that the above three relationships can be calibrated simultaneously, avoiding error propagation. Furthermore, the redundant parameters are analyzed and eliminated in the system mathematic model to derive an MDH model without redundancy. The most popular method for developing a kinematic model is the one proposed by Denavit and Hartenberg [31] (the D-H model), which establishes a link coordinate system on each of the joint axes and then represents the relationship between two consecutive link coordinate systems by means of homogeneous transformation matrices. The homogeneous transformation matrix between the (i−1)th and ith coordinate systems is given in Eq. (1):

$$^{i-1}H_{i} = \begin{bmatrix} C\theta_i & -S\theta_i C\alpha_i & S\theta_i S\alpha_i & a_i C\theta_i \\ S\theta_i & C\theta_i C\alpha_i & -C\theta_i S\alpha_i & a_i S\theta_i \\ 0 & S\alpha_i & C\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{1}$$

where $a_i$, $\alpha_i$, $d_i$, and $\theta_i$ are generally named the link length, link twist, link offset, and joint angle, respectively. $C\theta_i$ denotes $\cos\theta_i$, $S\theta_i$ denotes $\sin\theta_i$, and so on. As pointed out by Hayati [32], the D-H model does not satisfy the continuity requirement in the case of two consecutive parallel or nearly parallel joints, which causes numerical instability during the identification process. In order to avoid this singularity problem, the MDH model [32] was proposed. The MDH model adds a small rotation $\beta$ about the y-axis to the D-H model while setting the link offset $d$ to zero. For two parallel or nearly parallel consecutive joints, the homogeneous transformation matrix is:

$$^{i-1}H_{i}^{m} = \begin{bmatrix} C\theta_i C\beta_i - S\theta_i S\alpha_i S\beta_i & -S\theta_i C\alpha_i & C\theta_i S\beta_i + S\theta_i S\alpha_i C\beta_i & a_i C\theta_i \\ S\theta_i C\beta_i + C\theta_i S\alpha_i S\beta_i & C\theta_i C\alpha_i & S\theta_i S\beta_i - C\theta_i S\alpha_i C\beta_i & a_i S\theta_i \\ -C\alpha_i S\beta_i & S\alpha_i & C\alpha_i C\beta_i & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2}$$
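For reference, the two transform templates of Eqs. (1) and (2) written out in NumPy (a sketch, not the authors' code):

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Standard D-H homogeneous transform between consecutive link frames, Eq. (1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def mdh_transform(a, alpha, beta, theta):
    """Hayati-style MDH transform for (nearly) parallel joints, Eq. (2):
    the link offset d is set to zero and a small rotation beta about y is added."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([
        [ct * cb - st * sa * sb, -st * ca, ct * sb + st * sa * cb, a * ct],
        [st * cb + ct * sa * sb,  ct * ca, st * sb - ct * sa * cb, a * st],
        [              -ca * sb,       sa,                ca * cb,    0.0],
        [                   0.0,      0.0,                    0.0,    1.0],
    ])
```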

For the robotic visual inspection system shown in Fig. 5, the optical sensor is attached to the flange of the robot and the robot is fixed to a concrete base, so the relationship between the sensor coordinate frame {S} and the tool coordinate frame {T} is constant but unknown, and so is the relationship between the robot's base coordinate frame {B} and the part coordinate frame {P}. The poses of the sensor coordinate frame and the robot's base coordinate frame with respect to the {O5} coordinate frame and the part coordinate frame are expressed as $^{5}H_{S}$ and $^{P}H_{1}$, respectively. Here, {Oi} is short for the {Oi} coordinate frame for convenience.

Fig. 5 Schematic representation of a robotic visual inspection system with the D-H convention (sensor coordinate frame {S}, tool coordinate frame {T}, robot base coordinate frame {B}, part coordinate frame {P}, and link frames {O1}–{O5})

$$^{5}H_{S} = {^{5}H_{T}}\,{^{T}H_{S}} = \mathrm{transl}(x_1, y_1, z_1)\,\mathrm{rotz}(\gamma_1)\,\mathrm{roty}(\varphi_1)\,\mathrm{rotx}(\varepsilon_1) \tag{3}$$

$$^{P}H_{1} = {^{P}H_{B}}\,{^{B}H_{1}} = \mathrm{transl}(x_2, y_2, z_2)\,\mathrm{rotz}(\gamma_2)\,\mathrm{roty}(\varphi_2)\,\mathrm{rotx}(\varepsilon_2)\,{^{Bm}H_{1}} \tag{4}$$

$$^{Bm}H_{1} = \begin{bmatrix} 1 & 0 & 0 & a_1 \\ 0 & C\alpha_1 & -S\alpha_1 & 0 \\ 0 & S\alpha_1 & C\alpha_1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{5}$$

where $\varepsilon_i$, $\varphi_i$, and $\gamma_i$ represent the rotational part of each coordinate frame transformation, while $x_i$, $y_i$, and $z_i$ represent the translational part. It should be noticed that there is a difference between the two equations, because the sensor coordinate frame rotates around the z-axis of the local reference coordinate frame ({O5}), whereas {O1} rotates around a fixed axis in the part coordinate frame {P}, as shown in Fig. 5. As a result, more than six identified parameters are needed to represent the pose of {O1} with respect to {P} during the motion of the robot. As shown in Fig. 5, {B} only needs to satisfy the condition that its z-axis coincides with the axis of the 0th link. Consequently, {Bm}, whose x-axis and z-axis coincide with the x-axis of {O1} and the z-axis of {B} respectively, is substituted for {B}. As a result, in the pose of {O1} with respect to {Bm}, denoted by $^{Bm}H_{1}$, only two kinematic parameters (the link length $a_1$ and the link twist $\alpha_1$, while the link offset $d_1$ and the encoder zero-point offset $\Delta\theta_1$ are set to zero) need to be identified, and the pose of {Bm} with respect to {P} is represented using six parameters. There are in total 14 identified parameters in Eqs. (3) and (4), obtained by eliminating four redundant kinematic parameters in $^{5}H_{T}$ and two redundant kinematic parameters in $^{B}H_{1}$, respectively; these redundant parameters cannot be distinguished from the parameters in $^{T}H_{S}$ and $^{P}H_{B}$, so it is reasonable to eliminate them from the kinematic model in advance. Finally, the pose of the sensor coordinate frame with respect to the part coordinate frame can be represented by

$$^{P}H_{S} = {^{P}H_{1}}\,{^{1}H_{2}^{m}}\,{^{2}H_{3}}\,{^{3}H_{4}}\,{^{4}H_{5}}\,{^{5}H_{S}} \tag{6}$$

where $^{1}H_{2}^{m}$ is expressed as in Eq. (2), because the axes of joint 2 and joint 3 are nominally parallel, while the homogeneous transformation matrices for the other pairs of consecutive joints are expressed as in Eq. (1). It should be noted that there are only 30 identified parameters, without any redundant parameters, in the kinematic model of the robotic visual inspection system. Moreover, a numerical method [33, 34] is used in Section 5 to validate that there are no redundant parameters in the proposed mathematical model.
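For illustration, the elementary transforms used in Eqs. (3)–(5) and the chain composition of Eq. (6) can be written as the following Python sketch; the factor functions and parameter names are placeholders for the parameterization of Section 3.1, not the authors' implementation, and the middle factors of Eq. (6) would be built with the dh_transform/mdh_transform helpers above evaluated at the joint readings.

```python
import numpy as np
from functools import reduce

def transl(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rotx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]], dtype=float)

def roty(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]], dtype=float)

def rotz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)

def compose(*Ts):
    """Left-to-right product of homogeneous transforms, as in Eq. (6)."""
    return reduce(np.matmul, Ts)

def hand_eye(x1, y1, z1, gamma1, phi1, eps1):
    """Eq. (3): ^5H_S parameterized by three translations and three rotations."""
    return compose(transl(x1, y1, z1), rotz(gamma1), roty(phi1), rotx(eps1))

def exterior(x2, y2, z2, gamma2, phi2, eps2, a1, alpha1):
    """Eqs. (4)-(5): ^PH_1 as an exterior transform times the reduced first-link transform."""
    Bm_H_1 = np.array([[1, 0, 0, a1],
                       [0, np.cos(alpha1), -np.sin(alpha1), 0],
                       [0, np.sin(alpha1),  np.cos(alpha1), 0],
                       [0, 0, 0, 1]], dtype=float)
    return compose(transl(x2, y2, z2), rotz(gamma2), roty(phi2), rotx(eps2), Bm_H_1)

# Eq. (6): ^PH_S = ^PH_1 · ^1H^m_2 · ^2H_3 · ^3H_4 · ^4H_5 · ^5H_S, where the middle
# factors come from dh_transform / mdh_transform evaluated at the joint readings:
# P_H_S = compose(exterior(...), H_1m2, H_23, H_34, H_45, hand_eye(...))
```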

3.2 Parameter identification

To balance efficiency and calibration accuracy, each calibration target is measured at six different configurations, giving 24 measurements that create 72 equations to identify the kinematic parameters. Based on the nonlinear least-squares sphere fitting algorithm, the coordinates of the centers of the calibration targets with respect to the sensor coordinate frame can be determined. As indicated in Eq. (6), the coordinates of the centers of the calibration targets can then be obtained in the part coordinate frame, as shown in Eq. (7):

$$P_i = \left({^{P}H_{S}}\right)_i C_i \tag{7}$$

where $C_i$ is the coordinate of the center of the calibration target with respect to the sensor coordinate frame at the ith robot position, $\left({^{P}H_{S}}\right)_i$ is the corresponding homogeneous transformation matrix giving the pose of the sensor coordinate


frame with respect to the part coordinate frame at the ith robot position, and $P_i$ is the coordinate of the center of the calibration target with respect to the part coordinate frame at the ith robot position. The kinematic parameters are identified by minimizing the summed square of the 3 × 1 positional error vectors $\Delta P_i$ associated with the m measurement configurations. The objective function is given as

$$E = \sum_{i=1}^{m} \left[\Delta P_i\right]^{T} \left[\Delta P_i\right] \tag{8}$$

where $\Delta P_i$ is expressed by

$$\Delta P_i = \left[\delta x, \delta y, \delta z\right]^{T} = P_i - P_i^{n} \tag{9}$$

where $P_i^{n}$, measured by the laser tracker in advance, is the actual position of the spherical center with respect to the part coordinate frame, and $\delta x$, $\delta y$, and $\delta z$ are the position errors in the x, y, and z directions, respectively. For robustness and efficiency of the identification, this paper employs the Levenberg-Marquardt (LM) method [35] to identify the kinematic parameters.
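As a sketch of how the identification of Eqs. (7)–(9) can be driven by the LM method, the residual stacking below can be handed to SciPy's Levenberg-Marquardt solver; forward_model is a hypothetical routine that builds $^{P}H_{S}$ of Eq. (6) from a parameter vector and the joint readings.

```python
import numpy as np
from scipy.optimize import least_squares

def identification_residuals(params, configs, centers_sensor, centers_nominal):
    """Stack the 3x1 position errors of Eq. (9) over all measurement configurations.

    params          : vector of the 30 kinematic parameters being identified
    configs         : list of joint-angle vectors (one per measurement)
    centers_sensor  : sphere centers C_i measured in the sensor frame
    centers_nominal : laser-tracker reference centers P_i^n in the part frame
    """
    res = []
    for q, C, Pn in zip(configs, centers_sensor, centers_nominal):
        P_H_S = forward_model(params, q)        # hypothetical: composes Eq. (6)
        P = (P_H_S @ np.append(C, 1.0))[:3]     # Eq. (7), homogeneous coordinates
        res.append(P - Pn)                      # Eq. (9)
    return np.concatenate(res)                  # 3m residuals for Eq. (8)

# Levenberg-Marquardt minimization of Eq. (8), starting from nominal parameters params0:
# sol = least_squares(identification_residuals, params0, method="lm",
#                     args=(configs, centers_sensor, centers_nominal))
```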

4 Algorithm for selecting the measurement configurations with multiple point constraints

The existing optimal measurement configuration selection methods solve two classes of problems: choosing n measurement configurations among N candidate configurations, and choosing n measurement configurations from the robot workspace. However, in the self-calibration of the robotic visual inspection system, the sensor coordinate frame is constrained by four points during the calibration process.

4.1 Pre-calibration of the system

The robotic visual inspection system should be pre-calibrated for the following inverse kinematic problem. The locations of the calibration targets should not be collinear and should further encompass a relatively large volume to guarantee calibration accuracy. Moreover, the accurate locations of these targets with respect to the part coordinate frame are measured by a laser tracker in advance. Each target is measured at six different measurement configurations, giving 24 measurements that establish 72 equations to identify the 30 kinematic parameters. After calibrating the system, $^{5}H_{S}$ is obtained, and $^{T}H_{S}$ can then be calculated as indicated in Eq. (10):

$$^{T}H_{S} = {^{T}H_{5}}\,{^{5}H_{S}} \tag{10}$$

where $^{T}H_{5}$, the inverse of the matrix $^{5}H_{T}$, represents the pose of {O5} with respect to {T}. The matrix $^{5}H_{T}$ is calculated using the D-H model with the robot's nominal parameters.

4.2 Inverse kinematic algorithm for the system

Assume one calibration target center is located at $P_w = (x_w, y_w, z_w)$ with respect to the robot base coordinate frame, and sample the orientation of the sensor coordinate frame at an interval of 5 degrees. The optical sensor cannot measure the target from underneath, because the robot might collide with the calibration target, so orientation samples whose z-axis direction is positive are eliminated from the sample set. The ideal condition is that the target center lies on the z-axis of the optical sensor, and the z value can be estimated from the working distance of the sensor and the radius of the sphere: adding the working distance of 120 mm and the sphere radius of 15 mm and then subtracting half of the field of view in the z direction (5 mm) gives an ideal z value of 130 mm. The position of the sensor coordinate frame with respect to the robot base frame can then be calculated as follows:

$$\tilde{P}_w = {^{B}H_{S}}\,\tilde{P}_s \tag{11}$$

where $\tilde{P}_w = [x_w\ \ y_w\ \ z_w\ \ 1]^{T}$ is the homogeneous coordinate of the spherical target center in the robot base coordinate frame {B}; $^{B}H_{S}$ represents the pose of the sensor coordinate frame {S} with respect to the robot base frame {B}, with its rotational part selected from the sample set; and $\tilde{P}_s = [0\ \ 0\ \ z_s\ \ 1]^{T}$ is the nominal homogeneous coordinate of the calibration target center in the sensor coordinate frame {S}. The rotational part is chosen from the sample set and the translational part is calculated as follows:

$$t_1 = x_w - r_3 z_s,\qquad t_2 = y_w - r_6 z_s,\qquad t_3 = z_w - r_9 z_s \tag{12}$$

where $r_3$, $r_6$, and $r_9$ are the elements of the third column of the rotational part.

The pose of the tool frame {T} with respect to the robot base frame {B} is then given by

$$^{B}H_{T} = {^{B}H_{S}}\,{^{S}H_{T}} \tag{13}$$

where $^{S}H_{T}$, the inverse of the matrix $^{T}H_{S}$, represents the pose of {T} with respect to {S}. Once $^{B}H_{T}$ is obtained from Eq. (13), the corresponding joint angles can be calculated based on screw theory [36]. It should be noted that the actual pose corresponding to the obtained joint angles differs slightly from the nominal pose, because the inverse kinematic problem is solved using the nominal kinematic parameters. Fortunately, thanks to the optical sensor's gauge range (27 × 37 mm) and depth of view (10 mm), the calibration target center


can still be measured even if the actual position deviates from the nominal one by several millimeters.
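A sketch of this candidate-generation step under the stated assumptions (target placed on the sensor z-axis at 130 mm, orientations sampled beforehand) is given below; the orientation sampler and the screw-theory IK routine are stand-ins, not the authors' implementation.

```python
import numpy as np

def candidate_configurations(Pw, R_samples, S_H_T, zs=130.0, ik_solver=None):
    """Generate candidate measurement configurations for one target center (Section 4.2).

    Pw        : target center (x_w, y_w, z_w) in the robot base frame
    R_samples : iterable of 3x3 rotation matrices for the sensor orientation
                (5-degree sampling, upward-looking poses already discarded)
    S_H_T     : inverse of the pre-calibrated hand-eye transform ^T H_S
    zs        : nominal target depth on the sensor z-axis
                (120 mm working distance + 15 mm radius - 5 mm half depth of view)
    ik_solver : screw-theory inverse kinematics routine returning joint angles
                for a desired ^B H_T, or None if unreachable (placeholder)
    """
    Pw = np.asarray(Pw, dtype=float)
    configs = []
    for R in R_samples:
        B_H_S = np.eye(4)
        B_H_S[:3, :3] = R
        # Eq. (12): translation putting the center at (0, 0, zs) in the sensor frame
        B_H_S[:3, 3] = Pw - R[:, 2] * zs
        B_H_T = B_H_S @ S_H_T                   # Eq. (13)
        q = ik_solver(B_H_T) if ik_solver else None
        if q is not None:
            configs.append(q)
    return configs
```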

4.3 Select optimal measurement configurations with one point constraint

All five observability indexes are calculated based on the identification Jacobian matrix J, which represents the relationship between the position errors $\Delta P_{3n}$ at the measurement configurations and the parameter errors $\Delta X_{m}$. The identification Jacobian matrix J can be estimated using the error model [37] and is given as follows:

$$\Delta P_{3n} = J_{3n \times m}\,\Delta X_{m} \tag{14}$$

where n is the number of measurement configurations and m is the number of identified parameters. The singular values $\sigma_i$ (i = 1, …, m) of J can be computed using singular value decomposition and arranged in descending order. Here, the first observability index $O_1$ is chosen as the criterion for selecting optimal measurement configurations. The index $O_1$ is defined by

$$O_1 = \frac{(\sigma_1 \sigma_2 \cdots \sigma_m)^{1/m}}{\sqrt{n}} \tag{15}$$
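A compact sketch of Eq. (15): the index is the geometric mean of the singular values of J divided by the square root of the number of configurations (the logarithmic form avoids overflow; a full-column-rank J is assumed).

```python
import numpy as np

def observability_o1(J):
    """First observability index O1 of Eq. (15) for an identification Jacobian J (3n x m)."""
    n = J.shape[0] // 3                          # number of measurement configurations
    sigma = np.linalg.svd(J, compute_uv=False)   # singular values, descending order
    m = len(sigma)
    # geometric mean of the singular values divided by sqrt(n); assumes sigma > 0
    return np.exp(np.mean(np.log(sigma))) / np.sqrt(n)
```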

The problem of selecting n optimal measurement configurations with one point constraint falls into the first category, so DETMAX is used to select n optimal measurement configurations from the candidate measurement configurations. References [14, 15, 24] give a detailed description of the DETMAX algorithm.
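The classical DETMAX algorithm alternates adding the configuration that most improves the criterion and deleting the one whose removal hurts it least (see [14, 15, 24]). The sketch below is a simplified pairwise-exchange variant in that spirit, reusing observability_o1 from above; jacobian_fn is a placeholder for the error-model Jacobian of [37], and extra holds configurations already fixed for previously placed targets.

```python
import numpy as np

def select_configurations(candidates, n, jacobian_fn, extra=()):
    """Greedy exchange selection of n configurations maximizing O1 (DETMAX-like sketch)."""
    rng = np.random.default_rng(0)
    selected = list(rng.choice(len(candidates), size=n, replace=False))

    def score(idx):
        chosen = [candidates[i] for i in idx] + list(extra)
        return observability_o1(jacobian_fn(chosen))

    improved = True
    while improved:
        improved = False
        for pos in range(n):                       # try exchanging each selected member
            best_i, best_s = selected[pos], score(selected)
            for i in range(len(candidates)):
                if i in selected:
                    continue
                trial = selected.copy()
                trial[pos] = i
                s = score(trial)
                if s > best_s:
                    best_i, best_s, improved = i, s, True
            selected[pos] = best_i
    return [candidates[i] for i in selected]
```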

4.4 Optimize positions of spherical target centers using PSO

The PSO method is a population-based, self-adaptive searching optimization method first introduced in 1995 [38]. Owing to its high efficiency of convergence and its simplicity, PSO is one of the most promising algorithms in the field of global optimization and has a wide range of applications. The mathematical formulation and algorithm description of PSO can be found in [28]. The fitness function of the PSO is expressed as follows:

$$f(X) = -O_1 \tag{16}$$

where X represents the position of the calibration target center and $O_1$ is the first observability index shown in Eq. (15). The PSO algorithm is used to determine the optimal positions of the calibration target centers one by one.

4.5 Summary

The candidate measurement configurations with one point constraint can be obtained using the inverse kinematic algorithm for the system, and a given number of optimal measurement configurations is then selected from the candidate set. The positions of the calibration target centers are determined one by one, and the optimal measurement configurations associated with the first point constraint are taken into account when optimizing the following calibration target centers, and so on. To sum up, the procedure of the proposed algorithm for selecting the optimal measurement configurations with multiple point constraints is described in Fig. 6 and consists of the following steps:

Step 1: Generate an initial position of the calibration target center. An initial position of the calibration target center is generated within the workspace of the system by PSO.

Step 2: Generate the candidate measurement configurations. Sample the sensor's orientational part and calculate its translational part. A set of tool poses with respect to the robot base frame is estimated using the sensor poses and the pre-calibrated kinematic parameters. Then the inverse kinematic algorithm based on screw theory is applied to obtain a set of candidate measurement configurations.

Step 3: Calculate the fitness. The DETMAX algorithm is applied to select n optimal measurement configurations from the candidate measurement configurations. Here, the optimal measurement configurations corresponding to the previously determined spherical target centers are included as additional measurement configurations when the fitness function (observability index) is calculated.

Step 4: Adjust the position of the calibration target center. The position of the calibration target center is optimized using PSO; steps 2 and 3 are repeated, and the calibration target center together with its optimal measurement configurations is iteratively optimized by the PSO algorithm. A code sketch of the resulting fitness evaluation is given below.
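The fitness evaluation referenced in Step 4 can be pieced together from the earlier sketches (candidate_configurations, select_configurations, observability_o1); the infeasible-position penalty and the argument layout are illustrative choices, and the function can be handed to any PSO implementation.

```python
import numpy as np

def target_fitness(xy, z_fixed, fixed_configs, R_samples, S_H_T, jacobian_fn, ik_solver, n=6):
    """Fitness of one candidate target position for the PSO outer loop, Eq. (16).

    xy            : (x, y) position of the calibration target being optimized
    z_fixed       : its predetermined z coordinate
    fixed_configs : optimal configurations already fixed for previously placed targets
    """
    Pw = np.array([xy[0], xy[1], z_fixed])
    cands = candidate_configurations(Pw, R_samples, S_H_T, ik_solver=ik_solver)
    if len(cands) < n:                       # target not reachable at enough poses
        return np.inf
    best = select_configurations(cands, n, jacobian_fn, extra=fixed_configs)
    # f(X) = -O1, evaluated on the selected plus the already-fixed configurations
    return -observability_o1(jacobian_fn(best + list(fixed_configs)))
```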

5 Experimental verification

In this section, an experiment is conducted on a robotic visual inspection system to verify the efficiency of the proposed method. The experimental setup in the laboratory is shown in Fig. 7. A Fanuc robot (M-710iC/50), mounted on an iron base (1500 × 1500 × 30 mm), is used as part of the robotic visual inspection system. The optical sensor described in Section 2 is mounted at the flange of the robot. Four spherical calibration targets are placed in the measurement envelope of the robotic visual inspection system to identify the kinematic parameters. A laser tracker (API T3) is used to place the four spherical calibration targets at the required

positions and to verify the accuracy of the system after calibration.

Fig. 6 Flowchart of the proposed optimal measurement configuration selection algorithm (generate an initial target position; generate a set of sensor poses and measurement configurations from the pre-calibration; select optimal configurations using DETMAX, including additional configurations from previously placed targets; adjust the target position with PSO; repeat until the stopping criterion is met)

Fig. 7 The experimental setup under laboratory conditions

5.1 Pre-calibration of the system

The locations of the four spherical calibration targets should not be collinear and should encompass a relatively large volume to guarantee calibration accuracy. Each calibration target is measured by the optical sensor at six different measurement configurations, giving in total 24 measurements that create 72 equations to identify the 30 kinematic parameters. The positions of the four calibration target centers with respect to the robot base frame are listed in Table 1, and six measurement configurations are randomly selected at each calibration target. The first observability index $O_1$ of the pre-calibration configurations is 0.0472. The errors between the identified robot kinematic parameters and the nominal parameters are illustrated in Fig. 8. The kinematic errors are caused by errors in the manufacturing and assembly process, so they are very small. However, six kinematic parameters (the link offset $d_1$, the encoder zero-point offset $\Delta\theta_1$, the link length $a_5$, the link twist $\alpha_5$, the link offset $d_5$, and the encoder zero-point offset $\Delta\theta_5$) are redundant: they cannot be separated from the robot exterior parameters and the hand-eye parameters, so these six redundant kinematic parameters are not included in Fig. 8. The rank of the identification Jacobian matrix is 30 and equals the number of identified parameters. Consequently, there are no redundant parameters in the self-calibration model according to the numerical check.

Table 1 Positions of four spherical calibration targets for pre-calibration

Target no.    x/mm        y/mm        z/mm
P1            −647.14     1254.27     −289.37
P2            1742.87     1138.35     −260.87
P3            1428.57     −1347.92    102.31
P4            −704.45     −1265.87    680.54

5.2 Select optimal measurement configurations using the proposed method

The optimal measurement configurations with four point constraints are determined by the proposed method. It should be noted that the four calibration targets (made of temperature-invariant silicon carbide) were procured from Perceptron Inc., so the z values of the four calibration targets are fixed in advance and only two variables (x, y) are optimized by the PSO algorithm. Moreover, the calibration targets should not be placed on the iron base, in order to guarantee the stability of the target positions in spite of temperature changes. This constraint is implemented by separating the feasible region into two regions Q1 and Q2, as shown in Fig. 9, and then adding the following nonlinear constraints to the PSO algorithm, as given in Eqs. (17) and (18):

$$Q_1:\quad (x - x_1)(x - x_2)(y - y_1)(y - y_2) < 0 \tag{17}$$

$$Q_2:\quad (x - x_1)(x - x_2) > 0 \ \ \text{and}\ \ (y - y_1)(y - y_2) > 0 \tag{18}$$

Fig. 8 Errors between the identified robot's kinematic parameters and its nominal parameters (parameter errors in mm/degree versus parameter index; angle errors in degrees, length errors in mm) for the pre-calibration configurations and the proposed method

Fig. 9 Diagrammatic sketch of the feasible region (regions Q1 and Q2 defined by the base corners (x1, y1) and (x2, y2))

where $(x_1, y_1)$ is the position of the upper-left corner of the base and $(x_2, y_2)$ is the position of the lower-right corner of the base in the robot base coordinate frame. The swarm size of the PSO is set to 50, and the optimization results are shown in Fig. 10. The positions of the four calibration targets are listed in Table 2, and the observability index $O_1$ of the determined optimal measurement configurations is 0.086. The four spherical calibration targets are placed with the help of the API laser tracker. The 24 optimal measurement configurations are then used to calibrate the robotic visual inspection system as described in Section 3. The errors between the identified robot kinematic parameters and the nominal parameters obtained using the optimal measurement configurations are also illustrated in Fig. 8. To verify the efficiency of the proposed method, the calibration accuracy of the pre-calibration method and of the proposed method are compared. The difference between the two is that the pre-calibration method uses 24 randomly selected measurement configurations, whereas the proposed method uses 24 optimal measurement configurations, to identify the kinematic parameters. One hundred positions of the spherical calibration target are placed uniformly in the workspace of the robotic visual inspection system, and the Euclidean errors are calculated to evaluate the accuracy of the system [15]:

$$\delta_D = \left\| P_{\mathrm{Measurement}} - P_{\mathrm{Nominal}} \right\| = \left( \delta_x^2 + \delta_y^2 + \delta_z^2 \right)^{0.5} \tag{19}$$

where $P_{\mathrm{Measurement}}$ is the position measured by the robotic visual inspection system, $P_{\mathrm{Nominal}}$ is the position measured by the API laser tracker, and $\delta_x$, $\delta_y$, $\delta_z \in \mathbb{R}$ are the deviations in the x, y, and z directions.
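A direct transcription of the region test of Eqs. (17)–(18), which a PSO implementation could apply as a nonlinear constraint or penalty, is sketched below; the corner arguments follow the definition above.

```python
def in_feasible_region(x, y, x1, y1, x2, y2):
    """Check Eqs. (17)-(18): the target must lie off the iron base.

    Q1 holds when exactly one coordinate lies within the base span (the other outside);
    Q2 holds when both coordinates lie outside the base span. Either case is feasible.
    (x1, y1) and (x2, y2) are the upper-left and lower-right base corners.
    """
    q1 = (x - x1) * (x - x2) * (y - y1) * (y - y2) < 0
    q2 = (x - x1) * (x - x2) > 0 and (y - y1) * (y - y2) > 0
    return q1 or q2
```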

Fig. 10 Optimal results using the proposed method

To illustrate the accuracy improvement, a comparison of the Euclidean position errors between the pre-calibration method and the proposed method is presented in Fig. 11. As Fig. 11 shows, the proposed method yields a better accuracy improvement, and its root-mean-square error is 18.60% lower than that of the pre-calibration method.

Fig. 11 Comparison of accuracy (Euclidean position error versus validation position index) between the pre-calibration method and the proposed method

6 Conclusions

In this paper, a method is proposed to determine the optimal measurement configurations for self-calibrating a robotic visual inspection system with multiple point constraints. To do so, a pre-calibration of the inspection system is needed to obtain the hand-eye and robot exterior relationships required to implement the inverse kinematic algorithm. The candidate measurement configurations with one point constraint are obtained using the inverse kinematic algorithm for the system, and DETMAX is applied to determine a given number of optimal measurement configurations from the candidates. PSO is used to optimize the positions of the multiple constraint points one by one. Moreover, an experiment on a robotic visual inspection system

is performed and confirms that the proposed method can improve the self-calibration accuracy with multiple point constraints. In addition, the idea behind the proposed method can serve as a reference whenever robot self-calibration is needed.

Funding information  This work is supported by the National Natural Science Foundation of China (51575354), the National Key Technology Research and Development Program of the Ministry of Science and Technology of China (2012BAF12B01, 973 Program 2014CB046604), and the Shanghai Municipal Science and Technology Project (16111106102).

Table 2 Optimal positions of four spherical calibration targets

Target no.    x/mm        y/mm        z/mm
P1            1072.73     1394.23     −288.43
P2            −823.53     806.94      −261.44
P3            −1207.93    −1377.56    101.86
P4            946.27      −1043.51    682.61

References

1. Driels MR, Swayze LW, Potter LS (1993) Full-pose calibration of a robot manipulator using a coordinate-measuring machine. Int J Adv Manuf Technol 8(1):34–41
2. To M, Webb P (2012) An improved kinematic model for calibration of serial robots having closed-chain mechanisms. Robotica 30:963–971. https://doi.org/10.1017/S0263574711001184
3. Wang HX, Shen SH, Lu X (2012) A screw axis identification method for serial robot calibration based on the POE model. Ind Robot Int J 39(2):146–153. https://doi.org/10.1108/01439911211201609
4. Santolaria J, Conte J, Gines M (2013) Laser tracker-based kinematic parameter calibration of industrial robots by improved CPA method and active retroreflector. Int J Adv Manuf Technol 66(9–12):2087–2106. https://doi.org/10.1007/s00170-012-4484-6
5. Wang Z, Maropoulos PG (2016) Real-time laser tracker compensation of a 3-axis positioning system—dynamic accuracy characterization. Int J Adv Manuf Technol 84(5–8):1413–1420. https://doi.org/10.1007/s00170-015-7820-9
6. Zeng YF, Tian W, Li DW, He XX, Liao WH (2017) An error-similarity-based robot positional accuracy improvement method for a robotic drilling and riveting system. Int J Adv Manuf Technol 88(9–12):2745–2755. https://doi.org/10.1007/s00170-016-8975-8
7. Gong CH, Yuan JX, Ni J (2000) A self-calibration method for robotic measurement system. J Manuf Sci Eng Trans ASME 122(1):174–181. https://doi.org/10.1115/1.538916
8. Khalil W, Besnard S, Lemoine P (2000) Comparison study of the geometric parameters calibration methods. Int J Robot Autom 15(2)
9. Meng Y, Zhuang HQ (2007) Autonomous robot calibration using vision technology. Robot Comput Integr Manuf 23(4):436–446. https://doi.org/10.1016/j.rcim.2006.05.002
10. Du GL, Zhang P (2013) Online robot calibration based on vision measurement. Robot Comput Integr Manuf 29(6):484–492. https://doi.org/10.1016/j.rcim.2013.05.003
11. Yin SB, Guo Y, Ren YJ, Zhu JG, Yang SR, Ye SH (2014) Real-time thermal error compensation method for robotic visual inspection system. Int J Adv Manuf Technol 75(5–8):933–946. https://doi.org/10.1007/s00170-014-6196-6
12. Chen X, Xi J (2018) Simultaneous and on-line calibration of a robot-based inspecting system. Robot Comput Integr Manuf 49:349–360. https://doi.org/10.1016/j.rcim.2017.08.006
13. Borm J-H, Meng C-H (1991) Determination of optimal measurement configurations for robot calibration based on observability measure. Int J Robot Res 10(1):51–63
14. Joubair A, Bonev IA (2013) Comparison of the efficiency of five observability indices for robot calibration. Mech Mach Theory 70:254–265
15. Joubair A, Tahan AS, Bonev IA (2016) Performances of observability indices for industrial robot calibration. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 2477–2484
16. Menq C-H, Borm J-H, Lai JZ (1989) Identification and observability measure of a basis set of error parameters in robot calibration. J Mech Transm Autom Des 111(4):513–518. https://doi.org/10.1115/1.3259031
17. Driels MR, Pathre US (1990) Significance of observation strategy on the design of robot calibration experiments. J Robot Syst 7(2):197–223
18. Nahvi A, Hollerbach JM, Hayward V (1994) Calibration of a parallel robot using multiple kinematic closed loops. In: Proceedings of the 1994 IEEE International Conference on Robotics and Automation, pp 407–412
19. Nahvi A, Hollerbach JM (1996) The noise amplification index for optimal pose selection in robot calibration. In: Proceedings of the 1996 IEEE International Conference on Robotics and Automation, pp 647–654
20. Sun Y, Hollerbach JM (2008) Observability index selection for robot calibration. In: IEEE International Conference on Robotics and Automation, pp 831–836
21. Horne A, Notash L (2009) Comparison of pose selection criteria for kinematic calibration through simulation. Springer, Berlin
22. Zhou J, Kang HJ, Ro YS (2010) Comparison of the observability indices for robot calibration considering joint stiffness parameters. Commun Comput Inf Sci 93:372–380
23. Daney D (2002) Optimal measurement configurations for Gough platform calibration. In: Proceedings of the 2002 IEEE International Conference on Robotics and Automation (ICRA '02), vol 1, pp 147–152
24. Sun Y, Hollerbach JM (2008) Active robot calibration algorithm. In: IEEE International Conference on Robotics and Automation, pp 1276–1281
25. Zhuang H, Wu J, Huang W (1996) Optimal planning of robot calibration experiments by genetic algorithms. In: Proceedings of the 1996 IEEE International Conference on Robotics and Automation, pp 981–986
26. Daney D, Papegay Y, Madeline B (2005) Choosing measurement poses for robot calibration with the local convergence method and tabu search. Int J Robot Res 24(6):501–518. https://doi.org/10.1177/02783649053185
27. Huang C, Xie C, Zhang T (2008) Determination of optimal measurement configurations for robot calibration based on a hybrid optimal method. In: International Conference on Information and Automation, pp 789–793
28. Perez R, Behdinan K (2007) Particle swarm approach for structural design optimization. Comput Struct 85(19):1579–1588
29. Sun WJ, Hill M, McBride JW (2008) An investigation of the robustness of the nonlinear least-squares sphere fitting method to small segment angle surfaces. Precis Eng 32(1):55–62. https://doi.org/10.1016/j.precisioneng.2007.04.008
30. Yu C, Chen X, Xi J (2017) Modeling and calibration of a novel one-mirror galvanometric laser scanner. Sensors 17(1):164
31. Denavit J (1955) A kinematic notation for lower-pair mechanisms based on matrices. Trans ASME J Appl Mech 22:215–221
32. Hayati SA (1983) Robot arm geometric link parameter estimation. In: Proceedings of the 22nd IEEE Conference on Decision and Control, pp 1477–1483
33. Everett LJ, Suryohadiprojo AH (1988) A study of kinematic models for forward calibration of manipulators. In: Proceedings of the 1988 IEEE International Conference on Robotics and Automation, pp 798–800
34. Joubair A, Bonev IA (2014) Kinematic calibration of a six-axis serial robot using distance and sphere constraints. Int J Adv Manuf Technol 77(1–4):515–523
35. Moré JJ (1978) The Levenberg-Marquardt algorithm: implementation and theory. In: Numerical analysis, vol 630. Springer, pp 105–116
36. Chen Q, Zhu S, Zhang X (2015) Improved inverse kinematics algorithm using screw theory for a six-DOF robot manipulator. Int J Adv Robot Syst 12(10):140. https://doi.org/10.5772/60834
37. Yin SB, Ren YJ, Zhu JG, Yang SR, Ye SH (2013) A vision-based self-calibration method for robotic visual inspection systems. Sensors 13(12):16565–16582
38. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol 4, pp 1942–1948