Robust Grasping Manipulation of Deformable Objects

Shinichi Hirai, Tatsuhiko Tsuboi, and Takahiro Wada†

Dept. of Robotics, Ritsumeikan Univ., Kusatsu, Shiga 525-8577, Japan
†Dept. of Intelligent Mechanical Systems, Kagawa Univ., Takamatsu, Kagawa 761-0396, Japan
E-mail: [email protected]
http://www.ritsumei.ac.jp/se/~hirai/

Abstract

A simple but robust control law for the grasping manipulation of deformable objects is presented. In the handling of deformable objects, grasping and manipulation must be performed simultaneously despite uncertainties in the handling process. We propose a control law that performs grasping and manipulation of deformable objects simultaneously with the help of a realtime vision system.

1 Introduction

Handling of deformable soft objects arises in many industrial fields, including the food and recycling industries. Many handling operations on deformable objects are still performed by humans, and their automation is required to guarantee the cleanliness and the quality of the products. In the handling of rigid objects, grasping and manipulation can be performed independently: grasping of a rigid object requires the control of grasping forces, while manipulation of a rigid object reduces to the control of its position and orientation. In the handling of deformable objects, on the other hand, grasping and manipulation interfere with each other. Control of grasping forces deforms the object, which may change its location. Moreover, manipulation of a deformable object requires controlling the deformation of the object as well as its location; contact between the fingers and the object may then be lost and grasping may fail, owing to the deformation at the fingertips. Therefore, in the handling of deformable objects, grasping and manipulation must be performed simultaneously. This simultaneous operation is referred to as grasping manipulation of deformable objects. In this article, we will establish a simple and robust control law for the grasping manipulation of deformable objects. Manipulation of deformable objects has been studied over the past decade [1, 2]. Most studies deal with manipulation and grasping separately. A sequence of manipulation and grasping for the unfolding of clothes has been studied in [3], but it is specific to unfolding and cannot be applied to other operations. A control law for the positioning of deformable objects has been proposed in [4], where grasping is out of scope. The mechanics of contact between fingers and an

object has been investigated in [5], but operation control is not studied there. The goal of this article is to establish a simple but robust control law for the grasping manipulation of deformable objects. First, we will describe grasping manipulation in a systematic manner. Second, we will derive a control law for the manipulation of deformable objects. Next, we will develop a vision algorithm to detect the locations of positioned points during the grasping manipulation. Finally, we will evaluate the control law for the grasping manipulation experimentally.

Figure 1: Grasping manipulation of deformable object

2 Grasping Manipulation of Deformable Objects

Manipulation of a deformable object requires controlling the deformation of the object as well as its location, whereas manipulation of a rigid object reduces to controlling its location alone. Let us select a finite number of points on a deformable object so that the motion and the deformation of the object can be described in a coherent manner. By selecting an appropriate point set, we can specify the translational and the rotational motion of the object by the motion of the selected points. Moreover, the motion of the selected points describes a certain range of its deformation. Figure 1 shows a two-dimensional grasping manipulation of a deformable object. Three points are selected to describe the planar motion of the object. The selected points are referred to as positioned points of a deformable object. Consequently, the motion and the deformation of a deformable object can be specified by the motion of its positioned points.

Figure 2: Control scheme for grasping manipulation of deformable object (a CCD camera and tactile sensors send vision and tactile signals to the controller, which issues motion commands to the mechanical fingers grasping the deformable object)

Grasping manipulation of a deformable object should be performed robustly despite uncertainties in the environment. For example, the object may deform unexpectedly, and its mechanical properties, including stiffness and damping, may vary during a manipulation process. Mechanical fingers make surface contact with a deformable object, as shown in Figure 1. The area of the contact region and the force distribution within it may not be measurable during the process. Moreover, the relative locations between the individual fingers and the object may change due to slip and rolling of the fingers. To cope with these uncertainties, we will introduce external sensors and derive a control law that is robust against the uncertainties. The locations of the positioned points should be measured to control the motion and the deformation of a deformable object, and the grasping forces at the individual fingertips must be measured to maintain the grasp of the object. Thus, we introduce a vision sensor and tactile sensors, as illustrated in Figure 2. The vision system detects the locations of the positioned points, and the tactile sensors, which are mounted at the individual fingertips, measure the grasping forces applied to the object by the individual fingers. The motion of the mechanical fingers is then determined from the vision and tactile sensor signals so that the grasping manipulation can be performed successfully.

3 Robust Control Law for Object Manipulation

In grasping manipulation of a deformable object, multiple points should be guided simultaneously, and the grasping forces at the individual fingertips should be maintained by controlling the motion of the mechanical fingers. The motion of the individual mechanical fingers and that of the positioned points interfere with one another. This implies that the grasping manipulation of a deformable object is a multi-input, multi-output operation with coupling between inputs and outputs. Thus, a model of a deformable object is indispensable for determining the motion of the mechanical fingers. On the other hand, it is difficult to build an exact model of a deformable object, since its deformation may be

nonlinear and often shows hysteresis. The goal of this research is not to build an exact model of a deformable object but to establish a control law for the grasping manipulation of a deformable object. The object model can be simple and may involve uncertainties as long as the control law is robust against the discrepancy between the actual object and the object model. Thus, we will build a simple but essential model of a deformable object and construct a robust control law for grasping manipulation based on this simple model. Mechanical fingers make surface contact with a deformable object. We describe the surface contact between a finger and the object by a representative point inside each contact region. These points are referred to as manipulated points. Recall that the location and the deformation of a deformable object are denoted by a set of positioned points. The behavior of the object is then described by the grasping forces at the manipulated points and the motion of the positioned points. Let us formulate the static behavior of a deformable object. For simplicity of modeling, we employ a lattice modeling technique. Assume that the object deforms in a two-dimensional plane. Let us describe the object by a set of lattice points and springs connecting the lattice points. The lattice structure should be chosen so that the positioned points and the manipulated points lie on lattice points. Let O-xy be a coordinate system on the two-dimensional plane, and let p_{i,j} = [x_{i,j}, y_{i,j}]^T be the position vector of the (i, j)-th lattice point with respect to this coordinate system. The shape of a deformable object can then be described by the set of position vectors. Let r_p be a collective vector consisting of the coordinates of the positioned points; this vector is referred to as the positioned variable vector. Similarly, let r_m be the vector consisting of the coordinates of the manipulated points, referred to as the manipulated variable vector. Lattice points other than the positioned and manipulated points are referred to as non-target points, and the vector consisting of their coordinates is denoted by r_n and referred to as the non-target variable vector. Figure 3 illustrates a lattice model of the object corresponding to the grasping manipulation shown in Figure 1. The positioned variable vector, the manipulated variable vector, and the non-target variable vector are then given as follows, respectively:

r_p = [x_{1,1}, y_{1,1}, x_{2,1}, y_{2,1}, x_{2,2}, y_{2,2}]^T,
r_m = [x_{2,0}, y_{2,0}, x_{0,1}, y_{0,1}, x_{2,3}, y_{2,3}]^T,
r_n = [x_{0,0}, y_{0,0}, x_{1,0}, y_{1,0}, …, x_{3,3}, y_{3,3}]^T.
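To make this bookkeeping concrete, the following minimal sketch groups the lattice indices of Figure 3 into positioned, manipulated, and non-target points and stacks their coordinates into r_p, r_m, and r_n. Python/NumPy is used here purely for illustration (the paper only mentions a C implementation); the flat-index convention and the helper names are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

# Hypothetical index bookkeeping for the 4x4 lattice of Figure 3.
W, H = 4, 4                                   # lattice size assumed from the example

def idx(i, j):
    """Flat index of lattice point (i, j) (convention chosen for this sketch)."""
    return i * H + j

pos_idx = [idx(1, 1), idx(2, 1), idx(2, 2)]   # positioned points
man_idx = [idx(2, 0), idx(0, 1), idx(2, 3)]   # manipulated points
non_idx = [k for k in range(W * H) if k not in pos_idx + man_idx]  # non-target points

def collect(points, indices):
    """Stack the [x, y] coordinates of the selected lattice points into one vector."""
    return np.concatenate([points[k] for k in indices])

# With points an (W*H, 2) array of lattice coordinates p_{i,j} = [x_{i,j}, y_{i,j}]^T:
# r_p, r_m, r_n = (collect(points, pos_idx),
#                  collect(points, man_idx),
#                  collect(points, non_idx))
```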

Assume that the deformable object is in an equilibrium state, and let δr_p, δr_m, and δr_n be the deviations of the positioned, manipulated, and non-target variable vectors around the equilibrium. Recall that no external forces are applied to the positioned points and the non-target points. The equilibrium equations at the positioned and non-target points are thus described collectively as

A δr_p + B δr_m + C δr_n = 0,    (1)

where A, B, and C are stiffness matrices, which can be computed from the object model.
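As an illustration of where A, B, and C come from, the sketch below assembles the stiffness matrix of a linear spring lattice under a small-deformation assumption and extracts the three blocks of eq. (1). The spring topology, the uniform stiffness value, and the function names are assumptions made for this sketch; they are not the authors' implementation.

```python
import numpy as np

def lattice_stiffness(points, springs, k=1.0):
    """Assemble the (2N x 2N) stiffness matrix of a 2-D spring lattice.
    points : (N, 2) array of rest positions of the lattice points
    springs: list of (a, b) index pairs joined by a linear spring of stiffness k
    Small deformations are assumed: each spring contributes k * n n^T blocks,
    where n is the unit vector along the spring at rest."""
    N = len(points)
    K = np.zeros((2 * N, 2 * N))
    for a, b in springs:
        d = np.asarray(points[b], float) - np.asarray(points[a], float)
        n = d / np.linalg.norm(d)
        Ke = k * np.outer(n, n)                     # 2x2 element stiffness block
        for i, j, s in [(a, a, 1), (a, b, -1), (b, a, -1), (b, b, 1)]:
            K[2 * i:2 * i + 2, 2 * j:2 * j + 2] += s * Ke
    return K

def equilibrium_blocks(K, pos_idx, man_idx, non_idx):
    """Extract A, B, C of eq. (1).  The equilibrium rows are those of the
    positioned and non-target points (no external forces act there); the
    columns are grouped by positioned / manipulated / non-target coordinates."""
    rows = [2 * i + c for i in pos_idx + non_idx for c in (0, 1)]
    def cols(idxs):
        return [2 * i + c for i in idxs for c in (0, 1)]
    A = K[np.ix_(rows, cols(pos_idx))]
    B = K[np.ix_(rows, cols(man_idx))]
    C = K[np.ix_(rows, cols(non_idx))]
    return A, B, C
```

With the three positioned, three manipulated, and ten non-target points of Figure 3, the block [B C] obtained this way is square, as required by eq. (2) below.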

Figure 3: Positioned points and manipulated points on object model

Equation (1) yields

[δr_m; δr_n] = −[B  C]^{−1} A δr_p,    (2)

where [u; v] denotes the stacked column vector of u and v. Let us formulate an iterative control law for object manipulation. Let r_p^k, r_m^k, and r_n^k be the positioned, manipulated, and non-target variable vectors at the k-th iteration, respectively, and let r_p^d be the desired values of the positioned variables. We regard δr_p in eq. (2) as the positioning error r_p^d − r_p^k, δr_m as the difference between the next and current values of the manipulated variables, r_m^{k+1} − r_m^k, and δr_n as the difference r_n^{k+1} − r_n^k. Then we have the following iterative control law:

[r_m^{k+1}; r_n^{k+1}] = [r_m^k; r_n^k] − α [B  C]^{−1} A (r_p^d − r_p^k),    (3)

where α is a scaling factor. The vector r_p^k contains the positioned variables, which can be obtained using the vision system. The upper half of eq. (3) controls the motion of the manipulated variables, while the lower half estimates the values of the non-target variables. Note that all quantities on the right-hand side of eq. (3) can be evaluated at the k-th iteration. Thus, the iterative control law for the manipulation of a deformable object is given by eq. (3), and the manipulation can be achieved by the control system illustrated in Figure 2.
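A minimal sketch of one iteration of eq. (3) is given below, again in Python/NumPy for illustration only (the actual controller is implemented in C). The function name, the default scaling factor, and the use of a linear solve in place of an explicit inverse are assumptions of this sketch.

```python
import numpy as np

def grasp_manipulation_step(A, B, C, r_p, r_p_des, r_m, r_n, alpha=0.5):
    """One iteration of the control law of eq. (3).
    r_p     : positioned variables measured by the vision system at iteration k
    r_p_des : desired positioned variables r_p^d
    r_m     : current manipulated variables (finger-side coordinates)
    r_n     : current estimate of the non-target variables
    Returns (r_m^{k+1}, r_n^{k+1}); the first is the command sent to the
    fingers, the second is only an internal estimate."""
    BC = np.hstack((B, C))
    # [delta_r_m; delta_r_n] = -alpha [B C]^{-1} A (r_p^d - r_p^k), cf. eq. (2)
    delta = -alpha * np.linalg.solve(BC, A @ (r_p_des - r_p))
    m = r_m.size
    return r_m + delta[:m], r_n + delta[m:]
```

Solving the linear system at every iteration, rather than inverting [B C] once, keeps the sketch usable even if the coarse stiffness model is updated during the manipulation.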

4 Video-frame Rate Detection of Position and Orientation

The locations of the positioned points must be detected during the grasping manipulation of deformable objects, so realtime detection of these locations is essential. Note that the regions around the positioned points may translate or rotate during the grasping manipulation. In this section, we present a video-frame rate vision algorithm that detects the position and the orientation of planar motion objects based on the one-sided Radon transform. First, let us briefly explain the one-sided Radon transform.

Figure 4: Integral paths in (a) Radon transform and (b) one-sided Radon transform

Let O-xy be a coordinate frame fixed on a grayscale image, and let g(x, y) be the pixel value at a point (x, y) on the image. Let us integrate the pixel value g(x, y) along a line whose distance from the coordinate origin is ρ and whose angle from the x-axis is θ, as illustrated in Figure 4-(a). This integral depends on the two parameters ρ and θ and is denoted by R[g](ρ, θ). Namely,

R[g](ρ, θ) = ∫_{−∞}^{∞} g(ξ cos θ − ρ sin θ, ξ sin θ + ρ cos θ) dξ.    (4)

The integral R[g](ρ, θ) is referred to as the Radon transform. The Radon transform computes integrals along lines. A line coincides with itself after a rotation by π around the foot of the perpendicular from the coordinate origin, which makes it difficult to distinguish a rotation by angle α from a rotation by angle α + π. We therefore introduce integrals along half-lines instead of lines, so that rotations by α and by α + π can be distinguished. Namely, let us integrate the pixel value g(x, y) along a half-line whose distance from the coordinate origin is ρ and whose angle from the x-axis is θ, as illustrated in Figure 4-(b):

U[g](ρ, θ) = ∫_{0}^{∞} g(ξ cos θ − ρ sin θ, ξ sin θ + ρ cos θ) dξ.    (5)

This integral is referred to as the one-sided Radon transform. We have developed several algorithms that detect the position and the orientation of planar motion objects using the properties of the one-sided Radon transform [6]. The detection is achieved by comparing a sample image and an input image. Let g_sample be a sample image and g_input an input image, let U_sample(ρ, θ) and U_input(ρ, θ) be their one-sided Radon transforms, and let G_sample and G_input be the gravity centers of the object in the sample image and in the input image, respectively. The simplest algorithm is summarized as follows:

Algorithm
Step 1 Compute the gravity centers G_sample and G_input.

Step 2 Compute U_sample(0, θ) and U_input(0, θ) around the individual gravity centers.
Step 3 Find α that satisfies U_sample(0, θ − α) ≡ U_input(0, θ) for all θ.

Note that the one-sided Radon transform at ρ = 0 is a function of period 2π, while the Radon transform at ρ = 0 is a function of period π. Namely, the one-sided Radon transform U(0, θ) usually differs from U(0, θ + π). This implies that we can distinguish a rotation by angle α from a rotation by angle α + π via one-dimensional matching between the two one-sided Radon transforms U_sample(0, θ) and U_input(0, θ). Thus, the rotation angle can be computed through the above algorithm.
Let us implement the proposed algorithm on a computer. A grayscale image is captured by a CCD camera and sent to a PC in NTSC format. Let W and H be the width and the height of the image, respectively. A grayscale image is given by a set of pixel values g_{i,j} at lattice points P_{i,j}, where i ∈ [0, W − 1] and j ∈ [0, H − 1]. The pixel value g(x, y) at an arbitrary point (x, y) is then computed by interpolating the pixel values at the lattice points around that point.
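The following sketch shows one way Steps 1 and 2 could be realized numerically: the gravity center is the intensity-weighted centroid, and U(0, θ) is approximated by summing bilinearly interpolated pixel values along half-lines through that center. The sampling step, the number of directions, and the function names are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def gravity_center(img):
    """Step 1: intensity-weighted gravity center (x, y) of a grayscale image."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return np.array([(xs * img).sum() / total, (ys * img).sum() / total])

def bilinear(img, x, y):
    """Pixel value g(x, y) by bilinear interpolation of the lattice values g_{i,j}."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1] +
            (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def one_sided_profile(img, center, n_theta=360, step=1.0):
    """Step 2: one-sided Radon transform U(0, theta) around the gravity center,
    approximated by summing interpolated pixel values along half-lines."""
    h, w = img.shape
    U = np.zeros(n_theta)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for k, theta in enumerate(thetas):
        xi = 0.0
        while True:
            x = center[0] + xi * np.cos(theta)
            y = center[1] + xi * np.sin(theta)
            if not (0.0 <= x < w - 1 and 0.0 <= y < h - 1):
                break                      # the half-line has left the image
            U[k] += bilinear(img, x, y) * step
            xi += step
    return U
```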

Table 1: Measurements of rotational angle

(a) triangle
act.    meas.   error
0°      0°      0°
30°     30°     0°
60°     61°     1°
90°     89°     −1°

(b) square
act.    meas.   error
0°      −1°     −1°
30°     30°     0°
60°     60°     0°

(c) curved
act.    meas.   error
0°      −1°     −1°
30°     29°     −1°
60°     58°     −2°
90°     85°     −5°
120°    114°    −6°
150°    140°    −10°
180°    171°    −9°
210°    206°    −4°
240°    239°    −1°
270°    266°    −4°
300°    297°    −3°
330°    326°    −4°

Figure 5: Radon transform and one-sided Radon transform ((a) image, (b) Radon, (c) one-sided Radon)

Figure 6: Planar objects in experiments ((a) triangle, (b) square, (c) curved)


The developed algorithm is implemented on a PC with a Pentium III 450 MHz processor running Vine Linux. The program is written in C and compiled with GCC 2.95.2. Let us first demonstrate the difference between the Radon transform and the one-sided Radon transform using the image shown in Figure 5-(a). The Radon transform of this image and its one-sided Radon transform are shown in Figure 5-(b) and (c), respectively. Since the object is symmetric about the coordinate origin, the Radon transform is symmetric with respect to the θ-axis, i.e., the line ρ = 0, while the one-sided Radon transform is not symmetric with respect to this axis. Let us then demonstrate the proposed algorithm.

Figure 7: Successive images in planar motion of curved object

Figure 8: Computed position and orientation of curved object ((a) position: x, y [pixel] vs. time [frame]; (b) orientation: rotation angle [degree] vs. time [frame])

Figure 9: Desired motion of positioned points ((a) pattern 1 through (f) pattern 6)

At step 3 of this algorithm, we compute an error function defined as

S(τ) = ∫_{0}^{2π} {U_sample(0, θ − τ) − U_input(0, θ)}² dθ.

This function reaches its minimum value of 0 at τ = α. Thus, we compute the minimum value S_sample = S(τ_sample). If the minimum value S_sample is less than a predetermined threshold, we decide that the object in the input image coincides with the object in the sample image and that the rotation angle α is given by τ_sample. Let us evaluate the computation of the rotational angle using the three planar objects shown in Figure 6. Actual and computed angles for the three objects are listed in Table 1. The measured angles are computed by taking the rotational symmetry of the polygons into account. It turns out that the rotational angle of the polygons can be detected with an accuracy of 1°. Table 1-(c) shows the measurements for the curved object shown in Figure 6-(c). As demonstrated there, the proposed algorithm can also be applied to curved objects, although the accuracy of the detected rotational angle degrades to about 10°. Note that the algorithm uses U(0, θ) alone and does not use U(ρ, θ) at ρ ≠ 0, which causes the error in the detection of the rotational angle. Figure 7 shows successive images of the motion of a curved object. The images are captured at the video-frame rate. The object moves on an air floating table. Since the friction between the object and the table is negligible, the object moves with constant velocity and constant angular velocity. The position and the orientation computed from these images are plotted in Figure 8. From the computed result, we find that the velocity and the angular velocity of the object are almost uniform. This implies that the proposed algorithm can compute the position and the

orientation of an object correctly. Moreover, we have found that steps 1, 2, and 3 take 4 ms, 8 ms, and 4 ms on average, respectively. Thus, we conclude that the algorithm is capable of detecting the position and the orientation of an object at the video-frame rate.
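Step 3, together with the error function S(τ) above, amounts to a one-dimensional circular matching of the two profiles. A minimal discrete sketch, in the same illustrative Python/NumPy setting and with assumed function names, is given below.

```python
import numpy as np

def match_rotation(U_sample, U_input, threshold):
    """Step 3: one-dimensional matching of the one-sided Radon profiles U(0, theta).
    S(tau) is evaluated on the discrete theta grid by circularly shifting U_sample;
    the minimizer gives the rotation angle, and the minimum value is compared with
    a threshold to decide whether the two objects coincide."""
    n = len(U_sample)
    S = np.array([np.sum((np.roll(U_sample, k) - U_input) ** 2) for k in range(n)])
    k_min = int(np.argmin(S))
    alpha = 2.0 * np.pi * k_min / n        # estimated rotation angle [rad]
    same_object = bool(S[k_min] < threshold)
    return alpha, same_object
```

Chaining this with the gravity-center and profile helpers sketched earlier reproduces the complete three-step procedure for one sample/input image pair.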

5 Experiments on Deformable Object Manipulation

In this section, we evaluate the control law proposed in the previous section experimentally. The locations of the positioned points are detected using a CCD camera and the vision algorithm developed in Section 4. The resolution of the vision system is 0.72 mm. Three 2-DOF fingers manipulate a deformable polyester sponge, 90 mm long, 90 mm wide, and 30 mm thick. The positioned points and the manipulated points are located as shown in Figure 3. Thus, the initial values of the positioned variable vector and of the manipulated variable vector are given respectively as

r_p = [30.0, 30.0, 60.0, 30.0, 60.0, 60.0]^T,
r_m = [60.0, 0.0, 0.0, 30.0, 90.0, 60.0]^T.

Let us perform the manipulative operations illustrated in Figure 9. Figure 9-(a) describes the translation of the object. Figure 9-(b) shows its translation and shrinkage, while Figure 9-(c) shows its translation and expansion. Figure 9-(d) describes the rotation of the object, Figure 9-(e) its shrinkage, and Figure 9-(f) its combined translation and rotation. The desired locations of the positioned points are then given as follows, respectively:

pattern 1: r_p^d = [35.0, 30.0, 65.0, 30.0, 65.0, 60.0]^T,
pattern 2: r_p^d = [35.0, 30.0, 62.0, 30.0, 62.0, 60.0]^T,
pattern 3: r_p^d = [32.0, 30.0, 65.0, 30.0, 65.0, 60.0]^T,
pattern 4: r_p^d = [27.6, 32.8, 57.2, 27.6, 62.4, 57.2]^T,
pattern 5: r_p^d = [32.0, 32.0, 58.0, 32.0, 58.0, 58.0]^T,
pattern 6: r_p^d = [32.6, 37.2, 62.2, 32.6, 67.4, 62.2]^T.

Figure 10: Behavior of positioned points and manipulated points ((a) pattern 1 through (f) pattern 6; axes: x, y [mm]; legend: manipulated points, positioned points, desired locations)

We evaluate the performance of the manipulative operations by the error norm ‖r_p^d − r_p‖. We have examined the performance over 10 trials for each manipulative operation. The trials show that the error norm converges to less than 1.0 mm in patterns 1, 4, and 6, and to less than 2.0 mm in patterns 2 and 5. The norm does not converge in pattern 3; note that pattern 3, an expansion, cannot be performed by merely pushing with the fingers. Figure 10 shows representative behavior of the positioned points and the manipulated points in the individual operations. We have found that the feasible operations are achieved within several iterations.

6 Concluding Remarks

We have proposed an iterative control law for the grasping manipulation of deformable objects. The control law has been derived from a coarse object model. Moreover, we have developed a realtime vision algorithm that detects the position and the orientation of planar motion objects based on the one-sided Radon transform. The developed algorithm is applied to the detection of the locations of the positioned points during the grasping manipulation. We have shown that the translation, the rotation, and the deformation of a planar soft object can be performed using the proposed control law and the developed vision system. A control law for stable grasping must still be derived so that the operation can be performed without losing the contact between the object and the individual fingers; we have already derived such a control law for stable grasping of a one-dimensional deformable object. Future issues include 1) theoretical analysis of the robustness of the proposed control law, 2) experimental

evaluation of the control law for stable grasping, and 3) extension of the control law for stable grasping to the grasping manipulation of 2D and 3D objects.

References
[1] Taylor, P. M. et al., Sensory Robotics for the Handling of Limp Materials, Springer-Verlag, 1990.
[2] Henrich, D. and Wörn, H., eds., Robot Manipulation of Deformable Objects, Springer-Verlag, Advanced Manufacturing Series, 2000.
[3] Ono, E. et al., Strategy for Unfolding a Fabric Piece by Cooperative Sensing of Touch and Vision, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 441-445, 1995.
[4] Hirai, S. and Wada, T., Indirect Simultaneous Positioning of Deformable Objects with Multi Pinching Fingers Based on Uncertain Model, Robotica, Millennium Issue on Grasping and Manipulation, Vol. 18, pp. 3-11, 2000.
[5] Xydas, N., Bhagavat, M., and Kao, I., Study of Soft-Finger Contact Mechanics Using Finite Elements Analysis and Experiments, Proc. IEEE Int. Conf. on Robotics and Automation, Vol. 3, pp. 2179-2184, San Francisco, 2000.
[6] Tsuboi, T., Masubuchi, A., Hirai, S., Yamamoto, S., Ohnishi, K., and Arimoto, S., Video-frame Rate Detection of Position and Orientation of Planar Motion Objects using One-sided Radon Transform, Proc. IEEE Int. Conf. on Robotics and Automation, Seoul, May 2001.