
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 63, NO. 4, APRIL 2016

Robust Online Model Predictive Control for a Constrained Image-Based Visual Servoing

Amir Hajiloo, Student Member, IEEE, Mohammad Keshmiri, Student Member, IEEE, Wen-Fang Xie, Senior Member, IEEE, and Ting-Ting Wang

Abstract—This paper presents an online image-based visual servoing (IBVS) controller for a 6-degrees-of-freedom (DOF) robotic system based on the robust model predictive control (RMPC) method. The controller is designed considering the robotic visual servoing system's input and output constraints, such as robot physical limitations and visibility constraints. The proposed IBVS controller avoids the inverse of the image Jacobian matrix and hence can solve problems that are intractable for the classical IBVS controller, such as large displacements between the initial and the desired positions of the camera. To verify the effectiveness of the proposed algorithm, real-time experimental results on a 6-DOF robot manipulator with eye-in-hand configuration are presented and discussed.

Index Terms—Image-based visual servoing (IBVS), model predictive controller (MPC), robotics, visual servoing.

I. INTRODUCTION

VISUAL servoing has been used extensively in robotics as a solution to make machines faster and more dexterous [1]. It also refers to the use of computer vision data to control the motion of a robot in many applications, such as robotic assembly [2], unmanned aerial vehicles [3], robotized endoscope tracking [4], and ultrasound probe guidance [5]. Typical visual servoing schemes can be classified into three categories: 1) position-based visual servoing (PBVS); 2) image-based visual servoing (IBVS); and 3) hybrid visual servoing [1], [6]. The main idea of a visual servoing system is illustrated in Fig. 1. The system consists of two separate controllers: the visual servoing controller and the robot controller. The visual servoing block uses a control command to generate a velocity screw as the control input for the robotic system, which leads to the desired joint velocity. The robot controller takes the signal produced by the visual servoing block as its desired path and drives the robot to follow that path [7], [8].

Manuscript received March 31, 2015; revised July 28, 2015; accepted October 31, 2015. Date of publication December 22, 2015; date of current version March 8, 2016. This work was supported in part by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC), and in part by the National Natural Science Foundation of China under Grant 61403122. A. Hajiloo and W.-F. Xie are with the Department of Mechanical and Industrial Engineering, Concordia University, Montréal, QC H3G 1M8, Canada (e-mail: [email protected]). M. Keshmiri is with the Department of Computer Science, McGill University, Montréal, QC H3A 0G4, Canada. T.-T. Wang is with the Department of Mechanical and Electrical Engineering, Hohai University, Changzhou 213022, China. Digital Object Identifier 10.1109/TIE.2015.2510505

The classical IBVS uses a set of geometric features, such as points, segments, or straight lines in the image plane, as image features [9]. The controller is designed using the inverse (or pseudo-inverse) of the image Jacobian matrix to obtain the control errors in Cartesian space, and normally a proportional control law is applied to achieve local convergence to the desired visual features. This proportional controller is very easy to implement; however, it may behave unsatisfactorily because of the difficulty of constraint handling. Also, the stability of the system is only guaranteed in a region around the desired position, and there may exist image singularities and image local minima, leading to IBVS performance degradation. Moreover, if the error between the initial position and the desired one is large, then, under the visibility constraint, the camera motion may suffer from loss of features, conflict with the robot's physical limitations, or even lead to servoing failure. Hence, numerous published works focus on improving the control performance and overcoming the visibility problem [10], [11].

To address the problem of image singularities, finding suitable visual features, such as polar [12], cylindrical [13], and spherical [14] coordinate systems and moment features [15], is a good solution. In [16], the authors used the Takagi–Sugeno fuzzy framework to model IBVS and could thereby handle the singularity. However, these methods still do not address the constraints explicitly, which is crucial in real-system control design [17]. Some advanced control techniques have also been applied to visual servoing controller design [18]–[21]. A switching controller is proposed in [18] to realize a large-displacement grasping task. In [19], a robust fuzzy gain-scheduled visual servoing scheme with sampling-time uncertainties is reported. A model predictive control (MPC) method based on a discrete-time visual servoing model is introduced in [20] to obtain convergence of the robot motion by nonlinear constrained optimization. Another predictive control scheme for constrained IBVS is proposed in [21], where the authors solve a finite-horizon open-loop constrained optimization problem and demonstrate the results in simulation. A robust model predictive control (RMPC) scheme based on a polytopic model of IBVS with a fixed depth parameter has been proposed in [22]; there, however, the optimization time for calculating the control signal exceeds the real system sampling time, so the controller is not implemented online and cannot be applied to the real system. To the best of our knowledge, very few experimental results have been obtained on this topic. The major motivation of this paper is to apply real-time MPC to IBVS, which is a fast dynamic system.


Fig. 1. General visual servoing approach [7].

MPC has mainly been applied in the process industries, where its practical application has been very successful [23]. However, the major drawback of MPC is the long computational time required to solve the optimization problem, which often exceeds the sampling interval in real-time situations [24]. In order to make MPC implementable in practice for fast dynamic systems, the optimization problem must be solved within the time dictated by the sampling period of the system. The underlying goal of this work is to design an online RMPC that allows explicit incorporation of plant uncertainties and constraints when system parameters vary over a given range.

In this paper, based on the chosen image features (image points) and the discretized model of the image Jacobian matrix, an RMPC law is formulated for IBVS. Using the discretized relationship between the time derivative of the image features and the camera velocity, a discrete-time model of the visual servoing system is obtained. Over the whole workspace, the image Jacobian matrix varies with the bounded parameters of the image point coordinates and the object depth; therefore, it is considered a linear parameter-varying (LPV) model. A polytopic model of the discrete-time visual servoing system is obtained from the LPV model using the tensor product (TP) model transformation described in [25] and [26]. Hence, the robust control signal can be calculated at every sampling instant by performing convex optimization involving linear matrix inequalities (LMIs) in MPC [27]. Using the RMPC law, robot workspace limitations, visibility constraints, parametric uncertainties, and actuator limitations can easily be posed as inequality constraints on the output and the input of the visual servoing model. Since the proposed IBVS controller avoids the direct inverse of the image Jacobian matrix, some intractable problems of the classical IBVS controller, such as a large displacement from the initial pose to the desired pose of the camera, can be solved by this method. At the same time, the visual features are kept in the image plane even when both the initial and the desired image features are close to the field-of-view (FOV) boundary. The real-time experimental results demonstrate the effectiveness of this method.

The paper is organized as follows. In Section II, the IBVS model is established to predict the future behavior of the system. Then, an online MPC algorithm for IBVS is given in Section III. In Section IV, experiments on an eye-in-hand 6-degrees-of-freedom (DOF) robot illustrate the effectiveness of the proposed approach. Finally, conclusions and future work are given in Section V.

II. LPV VISUAL SERVOING MODEL

In this work, MPC is used to control the IBVS system of a robotic setup consisting of a 6-DOF manipulator with a camera installed on its end-effector. The target object is assumed to be stationary with respect to the robot's reference frame. The constrained infinite-horizon optimal controller design is based on an optimization technique in which the current control action is obtained by minimizing a cost function online [28]. The cost function includes the current measurement, the prediction of the future image states, and the current and future control signals based on a discrete-time model of the system [29]. The purpose of measuring the states and considering them at each time step is to compensate for unmeasured disturbances and model uncertainty [30].

To control the system using MPC, we need a model with which the future behavior of the image feature vector can be predicted. The relationship between the time variation of the image feature vector of the predictor $\dot{s}_m$ and the camera velocity screw $V_c$ can be written as [1]

$$\dot{s}_m = L_s V_c \qquad (1)$$

in which $V_c$ is the control signal, i.e., the camera velocity screw written as $V_c = [v_{cx}, v_{cy}, v_{cz}, \omega_{cx}, \omega_{cy}, \omega_{cz}]^T$. Also, $L_s \in \mathbb{R}^{\kappa \times 6}$ is known as the image Jacobian or the interaction matrix. According to [1], this matrix is written as follows:

$$L_s = \begin{bmatrix} -\dfrac{1}{Z} & 0 & \dfrac{x}{Z} & xy & -(1+x^2) & y \\[4pt] 0 & -\dfrac{1}{Z} & \dfrac{y}{Z} & 1+y^2 & -xy & -x \end{bmatrix} \qquad (2)$$

where $x$ and $y$ are the projected coordinates of the feature position in the camera frame and $Z$ is the depth of the feature with respect to the camera frame. Usually, the depth parameter $Z$ in the image Jacobian matrix is assumed to be known [31]. However, in the monocular eye-in-hand configuration, it is difficult to measure the depth online, so this parameter can be considered an uncertain variable that varies over a given range. Therefore, all the parameters in the image Jacobian matrix are time varying, and $L_s$ is a function of the vector of time-varying parameters $q(t) = \{x, y, Z\}$. Here, $q(t) \in \Omega$ is an element of the closed hypercube $\Omega = [x_m, x_M] \times [y_m, y_M] \times [Z_m, Z_M] \subset \mathbb{R}^3$, in which $x_m$, $x_M$, $y_m$, and $y_M$ are the minimum and maximum ranges of the image point coordinates, and $Z_m$ and $Z_M$ are the minimum and maximum depths between the object and the camera, respectively.

In order to apply the controller in real time, we need to trade some accuracy for a lower computational load at each sampling time. Therefore, instead of using the LPV model directly, its TP model is used for the control design [26]. For the considered time-varying parameter vector $q(t)$, a convex combination in polytopic vertex form of the image Jacobian matrix can be obtained for the LMI-based RMPC controller design. The first step of obtaining the TP model is to discretize the given LPV model over the transformation space $\Omega$; the resulting TP model is consequently only valid in $\Omega$ [26].

To apply the TP transformation to the LPV model, an $N$-dimensional hyper-rectangular equidistant grid is generated over the closed hypercube $\Omega$ as follows:

$$g_{n,m_n} = a_n + \frac{b_n - a_n}{M_n - 1}(m_n - 1), \qquad n = 1, \dots, N \qquad (3)$$

where $N$ is the total number of time-varying parameters in the image Jacobian matrix (the dimension of $\Omega$), which is equal to 3. Also, $a_n$ and $b_n$ are the minimum and maximum of the closed hypercube on each dimension, respectively, given by $a_1 = x_m$, $a_2 = y_m$, $a_3 = Z_m$ and $b_1 = x_M$, $b_2 = y_M$, $b_3 = Z_M$. The grid line locations satisfy $a_n \le g_{n,m_n} \le b_n$, $m_n = 1, \dots, M_n$, where $M_n$ is the number of grid lines on the $n$th dimension [26]. Then, the discretized LPV model $B^d(q(t))$ is obtained by sampling over the grid points in $\Omega$ as follows:

$$L_{m_1,m_2,m_3} = L_s(g_{m_1,m_2,m_3}) \qquad (4)$$
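For concreteness, the sketch below assembles the discretized tensor of (4) from the interaction matrix of (2) sampled on the equidistant grid of (3). This is an illustrative Python sketch, not the authors' implementation (the paper uses MATLAB/YALMIP); the bounds and grid resolution mirror the values later reported in Section IV-A, and all function names are ours.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one point feature, Eq. (2);
    (x, y) are normalized image coordinates, Z is the depth in the camera frame."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

# Hypercube Omega = [xm, xM] x [ym, yM] x [Zm, ZM] and grid sizes M_n;
# the numbers follow Section IV-A (100 grid lines per dimension).
bounds = [(-0.4, 0.4), (-0.4, 0.4), (0.2, 0.6)]
M = (100, 100, 100)

# Eq. (3): equidistant grid lines g_{n,m_n} on each dimension.
grids = [np.linspace(a, b, m) for (a, b), m in zip(bounds, M)]

# Eq. (4): sample the interaction matrix over the grid, giving the
# discretized tensor L^D of shape M1 x M2 x M3 x 2 x 6.
LD = np.empty(M + (2, 6))
for i, x in enumerate(grids[0]):
    for j, y in enumerate(grids[1]):
        for k, Z in enumerate(grids[2]):
            LD[i, j, k] = interaction_matrix(x, y, Z)
```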

Fig. 2. Experimental setup.

TABLE I. CAMERA PARAMETERS

so the size of the discretized model $L^D$ (superscript "D" denotes "discretized") is $M_1 \times M_2 \times M_3 \times 2 \times 6$; i.e., $M_1 \times M_2 \times M_3$ different image Jacobian matrices are obtained within the domain $\Omega$, each of which represents an image Jacobian matrix at a specific time. The corresponding image Jacobian matrix becomes

$$L^D = \sum_{m_1=1}^{M_1} \sum_{m_2=1}^{M_2} \sum_{m_3=1}^{M_3} \prod_{n=1}^{3} w_{n,m_n}(q_n)\, L_{m_1,m_2,m_3} \qquad (5)$$

where $w_{n,m_n}(q_n)$ is the weighting function value evaluated at the discretized values $q_n = g_{n,m_n}$ over the $n$th-dimension interval $[a_n, b_n]$. Based on (5), the $(N+2)$-dimensional coefficient tensor $L^D \in \mathbb{R}^{M_1 \times M_2 \times M_3 \times 2 \times 6}$ is constructed from linear time-invariant (LTI) vertex systems $L_{m_1,m_2,m_3} \in \mathbb{R}^{2 \times 6}$. In order to have a convex TP model, the weighting functions for all $q \in \Omega$ should satisfy the following conditions:

$$w_{n,m}(q_n) \ge 0, \quad \forall n, m, q_n \qquad (6)$$

$$\sum_{m_n=1}^{M_n} w_{n,m_n}(q_n) = 1, \quad \forall n, q_n. \qquad (7)$$

Conditions (6) and (7) imply that the TP model is of the non-negativeness (NN) and sum-normalization (SN) type, respectively. The next step of TP transformation modeling is to execute the higher-order singular value decomposition (HOSVD) for each $n$-mode of $L^D$, $n = 1, \dots, N$. To reduce the computational load of the control design, we make a trade-off between complexity and accuracy, so we can discard some nonzero singular values in the HOSVD (the so-called reduced HOSVD). The error between the exact tensor $L^D$ and the reduced one $\hat{L}^D$ can be approximated as follows [32]:

$$\epsilon = \|L^D - \hat{L}^D\|_2 = \sum_{i=1}^{R} \sigma_i \qquad (8)$$

where $\sigma_i$ are the discarded singular values and $R$ is their number. The extracted reduced TP model can now be used to design the MPC.
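The reduction step just described can be sketched as follows: take the SVD of each parameter-mode unfolding of $L^D$, keep the leading left singular vectors, and project the tensor onto them. This is a minimal numpy sketch of the truncated (reduced) HOSVD only; the SN/NN normalization of the weighting functions required by (6) and (7) is a further TP-transformation step not shown here, and the function names are ours.

```python
import numpy as np

def mode_unfold(T, n):
    """n-mode matricization of tensor T (mode n becomes the rows)."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def reduced_hosvd(T, ranks):
    """Truncated HOSVD over the first len(ranks) (parameter) modes of T.
    Returns the core tensor, the kept factor matrices, and the discarded
    singular values per mode, which enter the error bound of Eq. (8)."""
    core, factors, discarded = T, [], []
    for n, r in enumerate(ranks):
        U, s, _ = np.linalg.svd(mode_unfold(T, n), full_matrices=False)
        factors.append(U[:, :r])
        discarded.append(s[r:])
        # Project mode n of the running core onto the kept singular vectors.
        core = np.moveaxis(np.tensordot(factors[n].T, core, axes=([1], [n])), 0, n)
    return core, factors, discarded
```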

III. CONTROLLER DESIGN

To design the MPC, we use a discrete-time model instead of the continuous-time dynamic model given in (1). The discrete-time state-space model of each feature point can be expressed as

$$s_m(t+1) = I s_m(t) + L V_c(t) \qquad (9)$$

where $s_m = [x_m, y_m]^T$ is the projected position of each feature point in the camera frame. Also, $L = h L_s(t)$ is the discrete-time image Jacobian matrix and $h$ is the sampling time. In (9), $I$ is the $2 \times 2$ identity matrix, which is fixed; only the matrix $L$ is a function of the time-varying parameters.

It is well known that a unique camera pose can theoretically be obtained by using at least four image points. Hence, we choose four feature points, indexed $m = 1, \dots, 4$. To design the MPC, we define the image feature error at sample time $k$ as $e(k) = s(k) - s_d$, in which $s = [s_1, s_2, s_3, s_4]^T$. Also, $s_d$ is the desired feature vector acquired from the image of the stationary object taken at the robot target pose. The underlying goal of the RMPC design is to find a control law for the system input $V_c$ so that the image feature error $e(k)$ defined at sampling time $k$ is steered to zero within a desirable time period. The control signal is defined as the linear state feedback $V_{c,k} = F_k e_k$ that minimizes an upper bound on the worst-case infinite-horizon quadratic cost at sampling time $k$

$$J_k = \max_{Z \in \mathcal{Z}} \sum_{i=0}^{\infty} \left( e_{k+i|k}^T Q\, e_{k+i|k} + V_{c,k+i|k}^T R\, V_{c,k+i|k} \right) \qquad (10)$$

where $Q \ge 0$ and $R > 0$ are weighting matrices that let the designer trade off between a small control signal (large $R$) and a fast response (large $Q$). The Lyapunov function $V(e) = e_k^T P_k e_k$ with $P_k > 0$, defined at sampling time $k$, is an upper bound on the worst-case cost if the following inequality holds at all vertices [33]:

$$e_{k+1|k}^T P\, e_{k+1|k} - e_{k|k}^T P\, e_{k|k} \le -e_{k|k}^T Q\, e_{k|k} - V_{c,k|k}^T R\, V_{c,k|k}. \qquad (11)$$

Fig. 3. SNNN-type weighting functions of the reduced TP model of 8 LTI systems. (a) w1. (b) w2. (c) w3.

It can be seen that, by summing both sides of inequality (11) from $i = 0$ to $\infty$ and inserting the linear feedback $u_k = F e_k$, a matrix inequality can be obtained as follows:

$$(I + L_k F)^T P (I + L_k F) - P \le -Q - F^T R F \qquad (12)$$

where $L_k$ is the discrete-time Jacobian matrix at sampling time $k$.
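To make the step from (11) to (12) explicit (this intermediate reasoning is standard in the RMPC literature, e.g., [33], [35], and is added here for clarity): in error coordinates, (9) gives $e_{k+1|k} = e_{k|k} + L_k V_{c,k|k}$, so the feedback yields the closed-loop dynamics below, and requiring (11) for every error state forces the bracketed matrix to be negative semidefinite, which is exactly (12).

```latex
e_{k+1|k} = (I + L_k F)\, e_{k|k}
\;\Longrightarrow\;
e_{k|k}^T \left[ (I + L_k F)^T P (I + L_k F) - P + Q + F^T R F \right] e_{k|k} \le 0
\quad \forall\, e_{k|k}.
```

Conversely, summing (11) over $i = 0, 1, \dots$ with $e \to 0$ gives $J_k \le e_{k|k}^T P\, e_{k|k}$, which is the worst-case cost bound $\gamma_k$ minimized in (15) below.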

According to Boyd et al. [34], by applying a congruence transformation $P = X^{-1}$, defining $Y = FX$, and using a Schur complement, we can find an LMI in $X$ and $Y$ as follows:

$$\begin{bmatrix} X & * & * & * \\ X + L_k Y & X & * & * \\ X & 0 & Q^{-1} & * \\ Y & 0 & 0 & R^{-1} \end{bmatrix} \ge 0 \qquad (13)$$

where the symbol $*$ represents a symmetric structure in the LMI. This LMI is only valid for one model, but for our system, which is a TP model, the LMI should hold for all possible models, i.e., at all vertices. In order to solve the LMI online and consider all of the possible vertices, we define the control signal at time $k$ as $u_{k+i|k} = F_k e_{k+i|k}$, which is used for the future as well. The control signal is obtained by minimizing the upper bound on the worst-case value of the quadratic objective function, considered as $\gamma_k = e_{k|k}^T P_k e_{k|k}$. The state feedback gain matrix $F_k$ is obtained at each sampling time $k$ by

$$F_k = Y_k X_k^{-1} \qquad (14)$$

where $X_k = \gamma_k P_k^{-1} > 0$ and $Y_k$ are obtained from the solution of the following semidefinite program [35]:

$$\min_{\gamma_k, X_k, Y_k} \gamma_k \qquad (15)$$

subject to

$$\begin{bmatrix} X_k & * & * & * \\ X_k + L_d^j Y_k & X_k & * & * \\ X_k & 0 & \gamma_k Q^{-1} & * \\ Y_k & 0 & 0 & \gamma_k R^{-1} \end{bmatrix} \ge 0 \qquad (16)$$

$$\begin{bmatrix} 1 & x_{k|k}^T \\ x_{k|k} & X_k \end{bmatrix} \ge 0 \qquad (17)$$

where $j = 1, 2, \dots, L$ ($L$ is the number of vertices). To ensure that the constraints are satisfied, we can define constraints on the input and output as follows [36]:

$$\begin{bmatrix} v_{\max}^2 I & Y_k \\ Y_k^T & X_k \end{bmatrix} \ge 0 \qquad (18)$$

$$\begin{bmatrix} y_{\max}^2 I & X_k + L_d^j Y_k \\ (X_k + L_d^j Y_k)^T & X_k \end{bmatrix} \ge 0 \qquad (19)$$

where $v_{\max}$ and $y_{\max}$ are the upper bounds for the input and output, respectively. Under the above closed-loop feedback law, the solution of the optimization in (15) can be obtained using the LMI technique, which stabilizes the LPV system and steers the state variables to zero. At each sampling time, an optimal upper bound on the worst-case performance cost over the infinite horizon is obtained by forcing a quadratic function of the state to decrease by at least the worst-case stage cost at each prediction step. Such online step-by-step optimization leads to an asymptotically stable evolution. In order to implement the MPC, we use the YALMIP toolbox, which is designed for modeling and solving convex and nonconvex optimization problems [27]. Using this toolbox, we are able to perform the online optimization and obtain the control signal at each sampling time.
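As an illustration of one iteration of (14)-(19), the following sketch solves the vertex LMIs and recovers the feedback gain. It is an assumed re-implementation, not the authors' code: Python/cvxpy with the SCS solver stands in for the MATLAB/YALMIP implementation used in the paper, the function name rmpc_step is hypothetical, and the shapes are assumptions consistent with the paper (stacked error of four image points, n = 8; 6-DOF velocity screw, m = 6; each vertex Jacobian stacked to 8 x 6).

```python
import numpy as np
import cvxpy as cp

def rmpc_step(e, Ld_vertices, Q, R, v_max, y_max):
    """One sampling instant of the vertex-LMI RMPC, Eqs. (15)-(19).
    e           -- stacked image feature error, shape (8,)
    Ld_vertices -- discrete-time vertex Jacobians L_d^j, each shape (8, 6)
    Returns the camera velocity screw V_c = F e with F from Eq. (14)."""
    n, m = 8, 6
    X = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))
    g = cp.Variable(nonneg=True)                        # gamma_k
    Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R)
    Znn, Znm = np.zeros((n, n)), np.zeros((n, m))

    cons = [cp.bmat([[np.eye(1), e.reshape(1, -1)],
                     [e.reshape(-1, 1), X]]) >> 0,      # Eq. (17)
            cp.bmat([[v_max ** 2 * np.eye(m), Y],
                     [Y.T, X]]) >> 0]                   # Eq. (18), input bound
    for Ld in Ld_vertices:                              # j = 1, ..., L vertices
        A = X + Ld @ Y                                  # X_k + L_d^j Y_k
        cons += [cp.bmat([[X, A.T, X, Y.T],
                          [A, X, Znn, Znm],
                          [X, Znn, g * Qi, Znm],
                          [Y, Znm.T, Znm.T, g * Ri]]) >> 0,   # Eq. (16)
                 cp.bmat([[y_max ** 2 * np.eye(n), A],
                          [A.T, X]]) >> 0]                    # Eq. (19), output bound

    cp.Problem(cp.Minimize(g), cons).solve(solver=cp.SCS)    # any SDP solver
    F = Y.value @ np.linalg.inv(X.value)                # Eq. (14)
    return F @ e
```

At run time, e and the eight vertex Jacobians (scaled by the sampling time h) would be refreshed and rmpc_step re-solved at every sampling instant, matching the online scheme described above.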


TABLE II. INITIAL (I) AND DESIRED (D) LOCATIONS OF FEATURE POINTS IN PIXELS

Fig. 4. Results for Test 1. (a) Feature trajectory in image plane. (b) Robot joint angles. (c) Camera 3-D trajectory.

IV. EXPERIMENTAL RESULTS

In this section, the proposed controller is tested on a 6-DOF Denso robot. The experimental setup consists of a VS-6556G Denso robot and a camera mounted on the end-effector (Fig. 2). The robot communicates with its controller at a frequency of 1 kHz. A CCD camera is used as the vision system and is mounted on the robot's end-effector. The camera characteristics are given in Table I.

Fig. 5. Results for Test 2. (a) Feature trajectory in image plane. (b) Robot joint angles. (c) Camera 3-D trajectory.


The camera capture rate is 30 frames/s. The object is stationary in the workspace. The visual servoing task is completed when the image features match the desired features. In this work, four different tests with different strategies have been performed to validate the algorithm.

A. TP Model Transformation

In order to find the discretized model, we generate the hyper-rectangular N-dimensional space grid and use the TP model transformation. One hundred equidistant grid lines are considered on each dimension for the discretization. Therefore, a $100 \times 100 \times 100 \times 2 \times 6$ tensor of the system is obtained. The values of the parameters in $\Omega$ are considered as $x_m = -0.4$ m, $x_M = 0.4$ m, $y_m = -0.4$ m, $y_M = 0.4$ m, $Z_m = 0.2$ m, and $Z_M = 0.6$ m. After applying the HOSVD along each of the three parameter dimensions of the system tensor, the nonzero singular values in each dimension are obtained as follows: $\sigma_{1,1} = 51.15$, $\sigma_{2,1} = 51.15$, $\sigma_{3,1} = 53.02$, $\sigma_{1,2} = 7.23$, $\sigma_{2,2} = 7.23$, $\sigma_{3,2} = 4.00$, $\sigma_{1,3} = 0.13$, $\sigma_{2,3} = 0.13$. To reduce the computational load of the control design, we make a trade-off between complexity and accuracy: we keep the first two singular values of the first and second dimensions and all the nonzero singular values of the third dimension.
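Assuming the sketches from Section II (interaction_matrix, the tensor LD, and reduced_hosvd) are in scope, the reduction just described corresponds to ranks (2, 2, 2), and the discarded singular values give the bound used in (20) below; the printed value is only what one would expect under the paper's grid and bounds, not a verified reproduction.

```python
# Keep 2 components in the x- and y-modes and all (2) nonzero components
# in the Z-mode, i.e., ranks (2, 2, 2) -> 2 x 2 x 2 = 8 LTI vertex systems.
core, factors, discarded = reduced_hosvd(LD, ranks=(2, 2, 2))
err_sq = sum(float(np.sum(s ** 2)) for s in discarded)
print(err_sq)  # expected near sigma_{1,3}^2 + sigma_{2,3}^2 ~ 0.034, cf. Eq. (20)
```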


The error between the full-rank tensor $L^D$ and the reduced one $\hat{L}^D$, obtained by discarding the singular values $\sigma_{1,3}$ and $\sigma_{2,3}$, can be approximated by

$$\|L^D - \hat{L}^D\|_2^2 \le \sigma_{1,3}^2 + \sigma_{2,3}^2 \approx 0.034. \qquad (20)$$

Therefore, the results show that the system in (9) can be approximately given in the HOSVD-based canonical polytopic model form with a minimum of $2 \times 2 \times 2 = 8$ LTI vertex models. In order to have a convex TP model that satisfies the LMI control design conditions, we use SN- and NN-type weighting functions [26]. The weighting functions are illustrated in Fig. 3. The LTI system matrices of the polytopic TP model are

$$L_{1,1,1} = \begin{bmatrix} 0.08 & 0 & 0.03 & 0.01 & 0.59 & -0.01 \\ 0 & 0.08 & 0.03 & -0.59 & -0.01 & 0.01 \end{bmatrix}$$

$$L_{1,1,2} = \begin{bmatrix} -0.13 & 0 & -0.05 & 0.01 & 0.59 & -0.01 \\ 0 & -0.13 & -0.05 & -0.59 & -0.01 & 0.01 \end{bmatrix}$$

$$L_{1,2,1} = \begin{bmatrix} 0.08 & 0 & 0.03 & -0.04 & 0.59 & 0.09 \\ 0 & 0.08 & -0.20 & 0.40 & 0.04 & 0.01 \end{bmatrix}$$

$$L_{1,2,2} = \begin{bmatrix} -0.13 & 0 & -0.05 & -0.04 & 0.59 & 0.09 \\ 0 & -0.13 & 0.33 & 0.40 & 0.04 & 0.01 \end{bmatrix}$$

$$L_{2,1,1} = \begin{bmatrix} 0.08 & 0 & -0.20 & -0.04 & -0.40 & -0.01 \\ 0 & 0.08 & 0.03 & -0.59 & 0.04 & -0.09 \end{bmatrix}$$

$$L_{2,1,2} = \begin{bmatrix} -0.13 & 0 & 0.33 & -0.04 & -0.40 & -0.01 \\ 0 & -0.13 & -0.05 & -0.59 & 0.04 & -0.09 \end{bmatrix}$$

$$L_{2,2,1} = \begin{bmatrix} 0.08 & 0 & -0.20 & 0.24 & -0.40 & 0.09 \\ 0 & 0.08 & -0.20 & 0.40 & -0.24 & -0.09 \end{bmatrix}$$

$$L_{2,2,2} = \begin{bmatrix} -0.13 & 0 & 0.33 & 0.24 & -0.40 & 0.09 \\ 0 & -0.13 & 0.33 & 0.40 & -0.24 & -0.09 \end{bmatrix}.$$

The image Jacobian (interaction) matrix $L$ at each sampling time can then be written as

$$L(t) = \sum_{i=1}^{2} \sum_{j=1}^{2} \sum_{k=1}^{2} w_{1,i}(x(t))\, w_{2,j}(y(t))\, w_{3,k}(Z(t))\, L^d_{i,j,k} \qquad (21)$$

where $w_{1,i}$, $i = 1, 2$, $w_{2,j}$, $j = 1, 2$, and $w_{3,k}$, $k = 1, 2$ are the weighting functions shown in Fig. 3. The interaction matrix (21) varies in a polytope (the convex hull of the vertices) that satisfies the convexity conditions given in (6) and (7).
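A minimal sketch of how (21) is evaluated at run time: given the eight vertex matrices above (stored, say, in a nested list vertices[i][j][k]) and SN/NN weighting functions, the current interaction matrix is a convex combination of the vertices. The piecewise-linear weights below are an illustrative stand-in satisfying (6) and (7); the paper's actual weights are the SNNN functions of Fig. 3 produced by the TP transformation.

```python
import numpy as np

def linear_weights(q, q_min, q_max):
    """Two non-negative weights on [q_min, q_max] that sum to one (SN/NN);
    an illustrative stand-in for the SNNN weighting functions of Fig. 3."""
    t = np.clip((q - q_min) / (q_max - q_min), 0.0, 1.0)
    return np.array([1.0 - t, t])

def interpolate_L(x, y, Z, vertices, bounds):
    """Eq. (21): convex combination of the 2 x 2 x 2 vertex systems
    L^d_{i,j,k} at the current parameter values (x, y, Z)."""
    w = [linear_weights(x, *bounds[0]),
         linear_weights(y, *bounds[1]),
         linear_weights(Z, *bounds[2])]
    L = np.zeros((2, 6))
    for i in range(2):
        for j in range(2):
            for k in range(2):
                L += w[0][i] * w[1][j] * w[2][k] * vertices[i][j][k]
    return L
```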

B. Results and Analysis

The maximum control input of the camera velocity screw in (18), $v_{\max}$, is limited to 0.25 m/s for the translational speed and 0.25 rad/s for the rotational speed. The YALMIP toolbox is used to solve the optimization problem in (10). Using the proposed method and this toolbox, the experiments are done in real time, and the computational time of the optimization problem is less than the sampling time (0.04 s). There is scant research on developing real-time MPC techniques for IBVS control; for example, both [22] and [37] present only simulation results. In this work, however, all experiments have been carried out in real time. In order to have a fast convergence response, we choose the elements of the matrix $Q$ to be larger than those of the matrix $R$ as follows:

$$Q = 10 \times I_{8 \times 8}, \quad R = I_{6 \times 6} \qquad (22)$$

where $I_{n \times n}$ is the identity matrix. The experimental results of the four different tests are given as follows.

Test 1

In the first test, the initial and desired features are chosen such that a 90° rotation about the camera's center is required to complete the task. The initial and desired locations of the features are given in Table II. The results of this test are given in Fig. 4.

Fig. 4(a) shows the trajectory of the features in the image plane. The trajectory starts from the initial positions, indicated by triangles, and ends at the positions indicated by circles. This figure shows how the controller takes the features to their desired values without any unnecessary motions. A similar test was performed in [7]; the comparison between the two results shows that considering the constraints in the controller improves the trajectory of the features in the image plane. The joint angle changes during the visual servoing task are shown in Fig. 4(b). Finally, the 3-D trajectory of the robot end-effector in space is shown in Fig. 4(c).

Test 2

In the second test, a long-distance visual servoing task is performed. The initial and desired locations of the features are relatively far from each other, as shown in Table II. The results of this test are given in Fig. 5.

One of the drawbacks of the classical IBVS controller is that it cannot keep the end-effector inside the workspace when a long-distance task is required. The MPC controller provides better results for such tasks because of its prediction algorithm; it prevents reaching the limits of the workspace during the operation. The results of Test 2 show how the MPC controller succeeds in completing the task. The sequence of the result figures is the same as in Test 1.

Test 3

In Test 3, another long-distance task is performed in which the features move close to the FOV limit. The initial and desired locations of the features are given in Table II. Performing the same test using a classical IBVS controller causes the features to leave the FOV [7]. The IBVS controller rotates the features while taking them to the desired positions, whereas the proposed MPC controller avoids rotating the features in order to respect the system constraints; the rotation of the features happens only when they are close to their desired positions. The results of this test are given in Fig. 6.

Fig. 6. Results for Test 3. (a) Feature trajectory in image plane. (b) Robot joint angles. (c) Camera 3-D trajectory.


Test 4

In Test 4, a visual servoing task is prepared that requires a complicated motion in 3-D space and involves all six DOF. The initial and desired locations of the features are given in Table II. The results for this test are given in Fig. 7. They show how the proposed MPC controller manages to reach the desired target while keeping the image features and the robot within limits.

Fig. 7. Results for Test 4. (a) Feature trajectory in image plane. (b) Robot joint angles. (c) Camera 3-D trajectory.

According to the obtained results, the camera undergoes a different translational movement in the Z direction from the initial pose to the final pose in each test. Fig. 7(c) shows that the camera moves from Z = 0.25 m to Z = 0.4 m, which is a long vertical translation. In much of the literature, such as [7], a constant depth of the object with respect to the camera is assumed for simplicity. This assumption can degrade the performance of the controller unless the controller is designed to be robust against the uncertainty. Therefore, we design a robust MPC using a robust optimization method in which Z is considered an uncertain bounded variable, so the controller can deal with the time-varying depth of the object. The results of Test 4 demonstrate that it is far better to consider a variable depth instead of a fixed one.

V. CONCLUSION

In this paper, an online MPC-based IBVS controller was developed based on the discretized model of the image Jacobian matrix. The control signal is obtained by minimizing a cost function based on the error in the image plane, and it provides stability and convergence of the robot motion. The constraints due to actuator limitations and visibility can be taken into account using the MPC strategy. The experimental results on a 6-DOF eye-in-hand visual servoing system have demonstrated the effectiveness of the proposed method, and the experiments have been carried out in a truly real-time fashion. The ability of MPC to keep the system within the desired limits increases the chances of success of visual servoing tasks compared with basic visual servoing controllers.

ACKNOWLEDGMENT

The authors would like to thank Quanser Company for support with the experimental setup.

REFERENCES

[1] F. Chaumette and S. Hutchinson, "Visual servo control. I. Basic approaches," IEEE Robot. Autom. Mag., vol. 13, no. 4, pp. 82–90, Dec. 2006. [2] Y. Wang, H. Lang, and C. de Silva, "Visual servo control and parameter calibration for mobile multi-robot cooperative assembly tasks," in Proc. IEEE Int. Conf. Autom. Logist. (ICAL'08), Sep. 2008, pp. 635–639. [3] N. Guenard, T. Hamel, and R. Mahony, "A practical visual servo control for an unmanned aerial vehicle," IEEE Trans. Robot., vol. 24, no. 2, pp. 331–340, Apr. 2008. [4] L. Ott, F. Nageotte, P. Zanne, and M. de Mathelin, "Robotic assistance to flexible endoscopy by physiological-motion tracking," IEEE Trans. Robot., vol. 27, no. 2, pp. 346–359, Apr. 2011.


[5] R. Mebarki, A. Krupa, and F. Chaumette, “2-D ultrasound probe complete guidance by visual servoing using image moments,” IEEE Trans. Robot., vol. 26, no. 2, pp. 296–306, Apr. 2010. [6] F. Chaumette and S. Hutchinson, “Visual servo control. II. Advanced approaches [tutorial],” IEEE Robot. Autom. Mag., vol. 14, no. 1, pp. 109– 118, Mar. 2007. [7] M. Keshmiri, W.-F. Xie, and A. Mohebbi, “Augmented image-based visual servoing of a manipulator using acceleration command,” IEEE Trans. Ind. Electron., vol. 61, no. 10, pp. 5444–5452, Oct. 2014. [8] D.-H. Park, J.-H. Kwon, and I.-J. Ha, “Novel position-based visual servoing approach to robust global stability under field-of-view constraint,” IEEE Trans. Ind. Electron., vol. 59, no. 12, pp. 4735–4752, Dec. 2012. [9] Y. Fang, X. Liu, and X. Zhang, “Adaptive active visual servoing of nonholonomic mobile robots,” IEEE Trans. Ind. Electron., vol. 59, no. 1, pp. 486–497, Jan. 2012. [10] N. Garcia-Aracil, E. Malis, R. Aracil-Santonja, and C. Perez-Vidal, “Continuous visual servoing despite the changes of visibility in image features,” IEEE Trans. Robot., vol. 21, no. 6, pp. 1214–1220, Dec. 2005. [11] A. Remazeilles, N. Mansard, and F. Chaumette, “A qualitative visual servoing to ensure the visibility constraint,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2006, pp. 4297–4303. [12] P. Corke, F. Spindler, and F. Chaumette, “Combining cartesian and polar coordinates in IBVS,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS’09), Oct. 2009, pp. 5962–5967. [13] M. Iwatsuki and N. Okiyama, “A new formulation of visual servoing based on cylindrical coordinate system with shiftable origin,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2002, vol. 1, pp. 354–359. [14] R. Fomena and F. Chaumette, “Improvements on visual servoing from spherical targets using a spherical projection model,” IEEE Trans. Robot., vol. 25, no. 4, pp. 874–886, Aug. 2009. [15] F. Chaumette, “Image moments: A general and useful set of features for visual servoing,” IEEE Trans. Robot., vol. 20, no. 4, pp. 713–723, Aug. 2004. [16] I. Siradjuddin, L. Behera, T. McGinnity, and S. Coleman, “Image-based visual servoing of a 7-DOF robot manipulator using an adaptive distributed fuzzy PD controller,” IEEE/ASME Trans. Mechatronics, vol. 19, no. 2, pp. 512–523, Apr. 2014. [17] W. Sun, H. Gao, and O. Kaynak, “Adaptive backstepping control for active suspension systems with hard constraints,” IEEE/ASME Trans. Mechatron., vol. 18, no. 3, pp. 1072–1079, Jun. 2013. [18] W.-F. Xie, Z. Li, X.-W. Tu, and C. Perron, “Switching control of image-based visual servoing with laser pointer in robotic manufacturing systems,” IEEE Trans. Ind. Electron., vol. 56, no. 2, pp. 520–529, Feb. 2009. [19] B. Kadmiry and P. Bergsten, “Robust fuzzy gain scheduled visualservoing with sampling time uncertainties,” in Proc. IEEE Int. Symp. Intell. Control, Sep. 2004, pp. 239–245. [20] C. Lazar and A. Burlacu, “Visual servoing of robot manipulators using model-based predictive control,” in Proc. 7th IEEE Int. Conf. Ind. Informat. (INDIN’09), Jun. 2009, pp. 690–695. [21] G. Allibert, E. Courtial, and F. Chaumette, “Predictive control for constrained image-based visual servoing,” IEEE Trans. Robot., vol. 26, no. 5, pp. 933–939, Oct. 2010. [22] T. T. Wang, W. F. Xie, G. D. Liu, and Y. M. Zhao, “Quasi-min-max model predictive control for image-based visual servoing with tensor product model transformation,” Asian J. Control, vol. 17, no. 2, pp. 402–416, 2015. [23] S. Qin and T. A. 
Badgwell, "A survey of industrial model predictive control technology," Control Eng. Pract., vol. 11, no. 7, pp. 733–764, 2003. [24] R. Milman and E. Davison, "Evaluation of a new algorithm for model predictive control based on non-feasible search directions using premature termination," in Proc. 42nd IEEE Conf. Decis. Control, Dec. 2003, vol. 3, pp. 2216–2221. [25] P. Baranyi, "TP model transformation as a way to LMI-based controller design," IEEE Trans. Ind. Electron., vol. 51, no. 2, pp. 387–400, Apr. 2004. [26] P. Baranyi, Y. Yam, and P. Várlaki, Tensor Product Model Transformation in Polytopic Model-Based Control. New York, NY, USA: Taylor & Francis, 2014. [27] J. Lofberg, "YALMIP: A toolbox for modeling and optimization in MATLAB," in Proc. 2004 IEEE Int. Symp. Comput. Aided Control Syst. Des., Sep. 2004, pp. 284–289. [28] T. Besselmann, J. Lofberg, and M. Morari, "Explicit model predictive control for linear parameter-varying systems," in Proc. 47th IEEE Conf. Decis. Control (CDC'08), Dec. 2008, pp. 3848–3853.


[29] T. Besselmann, J. Lofberg, and M. Morari, “Explicit MPC for LPV systems: Stability and optimality,” IEEE Trans. Autom. Control, vol. 57, no. 9, pp. 2322–2332, Sep. 2012. [30] L. Grune and J. Pannek, Nonlinear Model Predictive Control: Theory and Algorithms. New York, NY, USA: Springer, 2011. [31] M. Keshmiri and W. F. Xie, “Augmented imaged based visual servoing controller for a 6 DOF manipulator using acceleration command,” in Proc. 51st IEEE Conf. Decis. Control (CDC), Dec. 2012, pp. 556–561. [32] P. Baranyi, D. Tikk, Y. Yam, and R. J. Patton, “From differential equations to PDC controller design via numerical transformation,” Comput. Ind., vol. 51, no. 3, pp. 281–297, Aug. 2003. [33] D. Mayne and H. Michalska, “Receding horizon control of nonlinear systems,” IEEE Trans. Autom. Control, vol. 35, no. 7, pp. 814–824, Jul. 1990. [34] S. P. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory. Philadelphia, PA, USA: SIAM, 1994, vol. 15. [35] A. Hajiloo and W. F. Xie, “The stochastic robust model predictive control of shimmy vibration in aircraft landing gears,” Asian J. Control, vol. 17, no. 2, pp. 476–485, Mar. 2015. [36] W. Sun, Y. Zhao, J. Li, L. Zhang, and H. Gao, “Active suspension control with frequency band constraints and actuator input delay,” IEEE Trans. Ind. Electron., vol. 59, no. 1, pp. 530–537, Jan. 2012. [37] G. Allibert, E. Courtial, and F. Chaumette, “Predictive control for constrained image-based visual servoing,” IEEE Trans. Robot., vol. 26, no. 5, pp. 933–939, Oct. 2010.

Amir Hajiloo (S’12) received the M.Sc. and the Ph.D. (first class) degrees in mechanical engineering from Guilan University, Rasht, Iran, in 2006 and 2012, respectively. Currently, he is working toward the Ph.D. degree in control and robotics in the Department of Mechanical and Industrial Engineering, Concordia University, Montreal, QC, Canada. His research interests include nonlinear control, robotics, real-time model predictive control, optimal robust control, complex dynamic systems modeling, and multiobjective controller design.

Mohammad Keshmiri (S'12) received the B.Sc. and M.Sc. degrees in mechanical engineering from Isfahan University of Technology (IUT), Isfahan, Iran, in 2006 and 2009, respectively, and the Ph.D. degree in mechatronics from Concordia University, Montreal, QC, Canada, in 2014. He was an active member of the Dynamic and Robotic Research Group, IUT, and was involved in several projects of the group. He is currently a Postdoctoral Fellow with the Department of Computer Science, McGill University, Montreal, QC, Canada. He is also working as a Robotic Researcher with Robotmaster. His research interests include robotics and control, computer vision, nonlinear systems, visual servoing, system identification, and artificial intelligence.

Wen-Fang Xie (SM'12) received the Master's degree in flight control from Beihang University, Beijing, China, and the Ph.D. degree in intelligent process control from Hong Kong Polytechnic University, Hung Hom, Hong Kong, in 1991 and 1999, respectively. She became an Assistant Professor with Concordia University, Montreal, QC, Canada, in 2003, and was promoted to Associate Professor in 2008. She has been a Professor with the Department of Mechanical and Industrial Engineering, Concordia University, since 2014. Her research interests include nonlinear control and identification in mechatronics, visual servoing, model predictive control, neural networks, and advanced process control and system identification.

Ting-Ting Wang received the Ph.D. degree in control theory and control engineering from Jiangnan University, Wuxi, China, in 2012. She was a Visiting Researcher with Concordia University, Montreal, QC, Canada, in 2010 and 2011. She is currently a Lecturer with the Department of Mechanical and Electrical Engineering, Hohai University, Changzhou, China. Her research interests include visual servoing, image processing, artificial intelligent control, and mobile robotics.