Neptune: Assistive Robotic System for Children with Motor Impairments

Pavan Kanajar, Isura Ranatunga, Jartuwat Rajruangrabin, Dan O. Popa, Fillia Makedon
Heracleia Human-Centered Computing Laboratory
University of Texas at Arlington, Arlington, TX 76010

{pavan.kanajar, isura.ranatunga, jartuwat.rajruangrabin}@mavs.uta.edu, {popa, makedon}@uta.edu

ABSTRACT
This paper describes Neptune, a mobile manipulator designed as an assistive device for the rehabilitation of children with special needs, such as those suffering from Cerebral Palsy. Neptune consists of a mobile robot base and a 6DOF robotic arm, and it is interfaced to users via a Wii Remote, an iPad, a Neural Headset, a camera, and pressure sensors. These interfaces allow patients, therapists, and operators to interact with the robot in multiple ways, as may be appropriate in assistive scenarios such as direct physical interaction with the iPad, arm positioning exercises through the WiiMote, and remote navigation and object retrieval through the environment via the Neural Headset. In this paper we present an overview of the system and discuss its future uses in the rehabilitation of children with CP.

Categories and Subject Descriptors
I.2.9 [ARTIFICIAL INTELLIGENCE]: Robotics – Commercial robots and applications, Sensors, Operator interfaces.

General Terms
Algorithms, Performance, Design, Human Factors.

Keywords
Human-robot interaction, mobile manipulator, assistive device.

1. INTRODUCTION
Cerebral palsy (CP) is an umbrella term encompassing a group of non-progressive, non-contagious motor conditions that cause physical disability in human development, chiefly in the various areas of body movement [6]. Cerebral palsy is caused by damage to the motor control centres of the developing brain and can occur during pregnancy, during childbirth, or after birth up to about age three. The resulting limits in movement and posture cause activity limitation and are often accompanied by disturbances of sensation, depth perception and other sight-based perceptual problems, communication ability, and sometimes even cognition; some forms of CP may also be accompanied by epilepsy.


Of the many types and subtypes of CP, none has a known cure. Past work has proposed using programmable touch-screen games to treat CP conditions. These games target metrics such as response delay, score, stamina/duration of play, and accuracy of hand/motor motion, and daily game sessions have been shown to improve the performance of children with CP [7]. The iPad has been used extensively for entertainment, letting the user browse, share, and enjoy the web and other services on a larger screen than other mobile devices. The inclusion of a touch screen and acceleration sensors has made the iPad an interesting device for user interaction. Previous work has shown that children with CP do better after a series of iPad game sessions lasting 20 to 40 minutes.

Game interfaces have also become more advanced and user friendly, and they are commonly used as tools for enhanced human-robot interaction. One popular game controller is the Wii remote from Nintendo, which can recognize user motions and also provides traditional controller keys [1-2]. The WiiMote has been used to recognize human gestures in real time, where the user holds the remote and makes a gesture in order to trigger a response action. Gesture recognition can be performed with the Wii using either the accelerometer data or the IR camera-tracking data [1]. The work in [2] shows that human motion models can be used to achieve better precision than existing tracking methods, sufficient for simpler tasks. An experiment demonstrated that very intuitive control can be achieved: novice subjects could control a robot arm through simple tasks after just a few minutes of practice and minimal instructions.

The Brain-Computer Interface (BCI) is a system that allows individuals with severe neuromuscular disorders to communicate and control devices using their brain waves [3]. BCI has also been used as a tool to interact with robots. The interface is based on the analysis of EEG signals, from which specific mental activities are extracted. The work in [3] demonstrates a BCI robot application in which the user helps the robot in pick-and-place tasks. The robot keeps track of the objects manipulated, but has no prior information about the user's intention.

Finally, researchers have proposed the use of robotics for the treatment and diagnosis of a variety of motor and cognitive impairments, including limb rehabilitation, cardiac conditions, and mental health [8-11]. Treatment with a robot involves either a) direct physical interaction, in which the user directs a robotic device by pushing [9, 11], b) involvement from a distance through a remote control device [10], or c) robotic response to the user using distance sensing such as visual feedback [12-14].

The Neptune robotic system is intended to speed up the rate of improvement during rehabilitation exercises for children with CP. One goal is to make it easier for therapists to administer treatment sessions, for instance by having the robot aid during the session by holding the iPad and adjusting its position from time to time. As a result, more sessions can be conducted by patients in their homes, leading, we postulate, to better treatment results. Another goal of the device is to allow recognition of selected child hand motions through the WiiMote, and of head/face motions through the BCI interface. Beneficial motions can then be rewarded through verbal or visual playback.


In this paper we describe Neptune, a mobile manipulator designed as an assistive device for the rehabilitation of children with special needs, such as those suffering from Cerebral Palsy. Neptune consists of a mobile robot base and a 6DOF robotic arm, and it is interfaced to users via a Wii Remote, an iPad, a Neural Headset, a camera, and pressure sensors. These interfaces have been selected to allow patients, therapists, and operators to interact with the robot in ways appropriate to assistive scenarios, such as direct physical interaction with the iPad, arm positioning exercises through the WiiMote, and remote navigation and object retrieval through the environment with the Neural Headset.

Figure 1. Neptune overall system components: the patient interacts through the Wii Remote, EPOC neuroheadset, iPad, camera, and force sensors, connected via Bluetooth and WiFi to the Neptune mobile manipulator (with encoders) and to a PC that runs the control software and records the results.

2. SYSTEM DESCRIPTION
An overall system diagram of Neptune is shown in Figure 1, and a picture of the system is shown in Figure 2. The Neptune robot consists of a mobile robot base, the LABO-3™, and a 6 DOF robotic arm, the Harmonic Arm™. The base is a versatile autonomous mobile robot platform with a 2-wheel differential drive with zero turning radius, 10 InfraRed (IR) sensors, 2 bumpers, and a 1.6 GHz CPU running Red Hat Linux. The arm consists of 5 articulated joints and a gripper, driven by high-precision harmonic drives, and is capable of quick turns at 90 degrees/s. The arm is interfaced to the base or to an external PC using Ethernet. Interaction algorithms run either on the base PC using ROS (Robot Operating System) or on an external Windows laptop via a Visual Studio application integrating the different sensor/actuator components.

Figure 2. Neptune Robot with mobile base and 6DOF arm, and a kinematic description of the arm’s first 5 DOFs.
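To illustrate how an interaction algorithm can be organized on the base PC, the following is a minimal ROS (rospy) node sketch. The topic names and message types are our own illustrative assumptions, not the actual Neptune software interfaces, and the callback simply forwards a net pushing force to the arm command topic.

```python
#!/usr/bin/env python
# Minimal sketch of a Neptune-style interaction node under ROS.
# Topic names and message types are illustrative assumptions only.
import rospy
from std_msgs.msg import Float64MultiArray

class InteractionNode(object):
    def __init__(self):
        # Publisher for arm commands (hypothetical topic name).
        self.arm_pub = rospy.Publisher('/neptune/arm_cmd',
                                       Float64MultiArray, queue_size=10)
        # Subscriber for the four Flexi Force readings (hypothetical topic name).
        rospy.Subscriber('/neptune/force', Float64MultiArray, self.on_force)

    def on_force(self, msg):
        # Forward the net front/back pushing force to the arm controller;
        # the real system feeds this through the EKF + impedance scheme of Sec. 3.2.
        net_force = (msg.data[0] + msg.data[1]) - (msg.data[2] + msg.data[3])
        self.arm_pub.publish(Float64MultiArray(data=[net_force]))

if __name__ == '__main__':
    rospy.init_node('neptune_interaction')
    InteractionNode()
    rospy.spin()
```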

In addition to the robotic hardware, we have custom-interfaced the following devices:

1) Wii™ Remote: A standard Wii remote is a handheld device containing buttons, a 3-axis accelerometer, and a high-resolution, high-speed IR camera. It has a ±3 g sensitivity range, 8 bits per axis, and a 100 Hz update rate. The Wii remote has 12 buttons, and it is interfaced to the robot using a wireless Bluetooth connection. The connection uses a Broadcom 2042 chip, which Broadcom designed for devices that conform to the Bluetooth Human Interface Device standard, such as keyboards and mice.

2) EPOC™ Headset: The EPOC is a commercially available BCI device that includes 14 electrodes placed directly on the user's head. The headset must first be trained to recognize what kind of thought pattern corresponds to a certain action, and it can measure the following inputs:

• Emotions like excitement, boredom, meditation, and frustration can currently be classified.
• Facial expressions: eyelid and eyebrow positions, eye position in the horizontal plane, smiling, and laughing can currently be detected.

3) iPad™: The iPad is a tablet computer designed and developed as a platform for audio-visual media, including music, games, etc. The display responds to two other sensors: an ambient light sensor that adjusts screen brightness, and a 3-axis accelerometer that senses the iPad orientation and switches between portrait and landscape modes. The iPad is mounted on the arm end-effector.

4) Flexi Force™ sensors: These are ultra-thin, flexible printed circuits which can be easily integrated onto surfaces and allow physical pushing forces to be measured. The active sensing area is a 0.375 in. diameter circle at the end of the sensor. Four Flexi Force units have been installed on the iPad, two on the front side and two on the back side, as shown in Figure 3.

Figure 3. Neptune with iPad and Flexi Force sensors mounted on the front (left and right) and back (top and bottom).
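For completeness, the sketch below shows one way a raw Flexi Force reading could be converted to an approximate force in Newtons. The ADC scaling and the two-point calibration constants are illustrative assumptions, not manufacturer values or the calibration used on Neptune.

```python
# Hypothetical two-point calibration for one Flexi Force channel:
# ADC counts are assumed to vary roughly linearly with applied force
# over the range of interest. All constants are illustrative only.
RAW_AT_0N = 12      # ADC counts with no load (assumed)
RAW_AT_10N = 890    # ADC counts with a 10 N reference load (assumed)

def counts_to_newtons(raw_counts):
    """Linearly interpolate an ADC reading to an approximate force in N."""
    span = RAW_AT_10N - RAW_AT_0N
    return max(0.0, 10.0 * (raw_counts - RAW_AT_0N) / span)

# Example: a mid-scale reading maps to roughly half the reference load.
print(counts_to_newtons(450))   # ~5 N
```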

3. INTERACTION ALGORITHMS
In this section we describe four types of algorithms that have been implemented on the Neptune system; they are experimentally demonstrated in Section 4.

3.1 Remote Control of Neptune via the WiiMote
Raw Wii data from the device is captured and processed by the CPU to generate corresponding arm motion commands. The pitch and yaw angles of the Wii remote are calculated and subsequently applied to the joint motors. The pitch, yaw, and roll information is used to move the arm as follows (a minimal code sketch of this mapping is given after the list):
• Pitch motion is generated by three joints (joints 2, 3, and 4). A third of the desired pitch angle is applied to each of these joints, while their range is restricted to prevent the end-effector from crashing into the ground.
• Yaw motion is accomplished via joint 1.
• Roll motion is accomplished via joint 5.
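The following is a minimal sketch of the joint mapping described above. The joint limits, units, and command interface are assumptions for illustration; the real Neptune limits are not reproduced here.

```python
import math

# Assumed joint limits (radians) used to keep the end-effector off the ground.
PITCH_JOINT_LIMITS = (-1.2, 1.2)

def wii_angles_to_joints(pitch, yaw, roll):
    """Map WiiMote pitch/yaw/roll (radians) to the 5 arm joint targets."""
    third = pitch / 3.0
    clamp = lambda v: max(PITCH_JOINT_LIMITS[0], min(PITCH_JOINT_LIMITS[1], v))
    return {
        1: yaw,            # yaw handled entirely by joint 1
        2: clamp(third),   # pitch split equally across joints 2, 3, and 4
        3: clamp(third),
        4: clamp(third),
        5: roll,           # roll handled by the wrist joint 5
    }

# Example: a 30-degree pitch gesture spreads ~10 degrees over joints 2-4.
print(wii_angles_to_joints(math.radians(30), 0.0, 0.0))
```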

For gesture recognition we process the acceleration data from the remote; user hand gestures such as following a line or a circle were detected. The gesture recognition system uses a database of gestures, whose entries contain acceleration data on X, Y, and Z sampled every 30 ms. The initial data set used to input the various hand gestures into the system forms the training set. The user then creates several sets of the same gestures with variations, in order to account for differences when the gestures are performed by different persons. Around ten to thirteen variations were recorded in the gesture training database. A neural network with 100 neurons in the hidden layer was trained using standard backpropagation with a learning rate of 0.01. The sampled data is fed to this network, and the mean squared error is used as the degree of match to recognize the gesture.
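As a rough sketch of this kind of gesture recognizer, the code below trains a small multi-layer perceptron on fixed-length windows of X/Y/Z acceleration samples. The window length, label set, and the use of a classifier (rather than the mean-squared-error matching described above) are simplifying assumptions for illustration, using scikit-learn.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each training example is a fixed-length window of accelerometer samples
# (X, Y, Z every 30 ms) flattened into one feature vector. The window length
# and labels below are illustrative assumptions, and the training data here
# is random stand-in data, not recorded gestures.
WINDOW = 40
X_train = np.random.randn(26, WINDOW * 3)
y_train = ['circle'] * 13 + ['line'] * 13

clf = MLPClassifier(hidden_layer_sizes=(100,),   # 100 hidden neurons, as in the paper
                    solver='sgd',                # plain backpropagation
                    learning_rate_init=0.01,
                    max_iter=2000)
clf.fit(X_train, y_train)

# At run time, a newly sampled window is classified the same way.
new_window = np.random.randn(1, WINDOW * 3)
print(clf.predict(new_window))
```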

3.2 Force Feedback Control
The typical physical interaction between a human and a robot (HRI) is achieved using a force-torque sensor mounted on the robot wrist. However, these sensors are expensive, not always available, and they do not provide a direct measurement of the interaction between the robot's links or payload and the environment. Several researchers have worked on estimating the interaction force from proprioceptive sensors only, for instance interacting with a robot manipulator without any force sensor by means of Extended Kalman Filtering (EKF). A drawback of this approach is the presence of a "dead-zone" inside the manipulator's friction envelope, which means that small interaction forces will not elicit a manipulator response. In our previous work [15], we proposed a combination of 1D force feedback sensing and joint encoder data, fused using the EKF, together with impedance control. We applied this scheme to Neptune by combining the force measurements from the 4 Flexi Force units placed on the iPad display with the joint encoder information in the EKF framework. Interaction parameters can be set via the impedance block and are adjusted to make the display motion more responsive or more accurate.

Specifically, the relationship between the pushing force and the resulting end-effector position is defined as

M ẍ + B ẋ + K x = F,   (1)

in which F is the measured pushing force along the x direction, and M, B, and K are the desired mass, damping, and stiffness (spring) parameters set by the user. The overall control scheme is depicted in Figure 4, with more details about this algorithm presented in [15].

Figure 4. Overall Neptune force feedback control system diagram including a Kalman Filter, an Impedance Controller, and Computed-Torque Control (from [15]).
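A discrete-time sketch of the impedance relation in (1) is given below. The M, B, K values and the time step are illustrative assumptions, not the tuned Neptune gains, and only a single axis is integrated.

```python
# Discrete-time integration of M*x_dd + B*x_d + K*x = F along one axis.
# M, B, K and the time step are illustrative values only.
M, B, K = 0.7, 5.0, 0.0      # kg, N*s/m, N/m (K = 0 -> no spring force)
DT = 0.01                    # 10 ms control period (assumed)

def impedance_step(x, x_dot, force):
    """Advance the virtual impedance one step given the measured push force."""
    x_ddot = (force - B * x_dot - K * x) / M
    x_dot += x_ddot * DT
    x += x_dot * DT
    return x, x_dot           # new position setpoint sent to the arm

# Example: a constant 2 N push gradually displaces the iPad along x.
x, v = 0.0, 0.0
for _ in range(100):
    x, v = impedance_step(x, v, 2.0)
print(round(x, 3), "m after 1 s of pushing")
```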

3.3 Visual Servo Control
Visual servo control refers to the use of computer vision to control the servo motors of the robot. The vision data can be acquired from a camera mounted on the robot, where the motion of the robot causes camera motion, or from a camera fixed in the workspace, e.g. watching the robot from a fixed vantage point. In the Neptune system, we used the "eye-in-hand" camera/end-effector configuration, with a camera mounted side by side with the iPad (and, in future upgrades with the iPad 2, the camera will be part of the iPad). Typically, vision sensing and manipulation are combined in a feedback loop, also referred to as "look and move" [12-14]. The goal of a vision-based control technique is to minimize an error e(t), which is typically given by

e(t) = s(m(t), a) − s*.   (2)

This formulation is quite general, and it incorporates a wide variety of approaches, as we will see below. The parameters in (2) are defined as follows: the vector m(t) is a set of image measurements (e.g., the image coordinates of interest points or the image coordinates of the centroid of an object). These image measurements are used to compute a vector of k visual features s(m(t), a), in which a is a set of parameters representing potential additional knowledge about the system. The vector s* contains the desired values of the features. Here we consider the case of a fixed goal pose and a motionless target, i.e., s* is constant, and changes in s depend only on the camera motion as the manipulator is actuated.

In the case of Neptune, we need to control the motion of the camera using the 5 DOF Harmonic Arm, and we employ image-based visual servo control (IBVS), which designs a robot velocity controller to minimize the error in the image features. Let the spatial velocity of the camera be denoted by v_c = (ν_c, ω_c), where ν_c is the instantaneous linear velocity of the origin of the camera frame and ω_c is the instantaneous angular velocity of the camera frame. In our case, the relationship between ṡ and v_c is given simply as

ṡ = L_s v_c,   (3)

where L_s is called the interaction matrix related to s. Using (2) and (3), we immediately obtain the relationship between the camera velocity and the time variation of the error:

ė = L_e v_c,   (4)

where L_e is a constant approximation of the interaction matrix (computed at the initial pose). Considering v_c as the input to the robot controller, we can ensure an exponential decoupled decrease of the error to zero by setting

v_c = −λ L_e⁺ e,   (5)

where L_e⁺ is the Moore-Penrose pseudoinverse of L_e.
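The sketch below applies the control law (5) to a single point feature, using the classical point-feature interaction matrix from the visual servoing literature [12]. The feature depth Z and the normalized image coordinates are assumptions for illustration; only the gain λ = 0.08 comes from the Neptune experiments in Section 4.

```python
import numpy as np

LAMBDA = 0.08   # gain used in the Neptune experiments (Section 4)

def point_interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix for a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y * y, -x * y,       -x],
    ])

def ibvs_velocity(s, s_star, L_e):
    """Camera velocity v_c = -lambda * pinv(L_e) * e, a 6-vector (linear, angular)."""
    e = np.asarray(s) - np.asarray(s_star)
    return -LAMBDA * np.linalg.pinv(L_e) @ e

# Example: drive a point seen at (0.1, -0.05) toward the image center.
s0, s_star = [0.1, -0.05], [0.0, 0.0]
L_e = point_interaction_matrix(s0[0], s0[1], Z=1.0)   # constant approx. at the initial pose
print(ibvs_velocity(s0, s_star, L_e))
```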

3.4 Control of Mobile Robot with Neuroheadset

Using the neuroheadset, facial gestures such as blinking, looking left or right, and smiling can be used to control the mobile robot. The API provided with the device uses a Hidden Markov Model to train and recognize facial expressions. Event handlers are attached to a facialExpression event to register changes in the detected facial expression, and the status of each expression is stored as true or false. Various actions, such as moving the mobile robot forward, can then be performed based on the gestures read, e.g., if blink is true. Similarly, different expressions can be used to command the robot to navigate the environment. Finally, gyroscope data from the headset provides information about the head pose of the user wearing it. The user can then make recognizable gestures such as nodding, making a circle, etc. A gesture recognition system based on a neural network, similar to the one used for the WiiMote, was employed.
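The following is a hedged sketch of the expression-to-motion mapping described above. The expression flag names and velocity values are placeholders; they do not reproduce the Emotiv SDK callbacks or the actual Neptune command interface.

```python
# Map boolean facial-expression flags (as produced by the headset SDK's
# expression events) to simple differential-drive commands. Field names
# and velocity values are illustrative placeholders, not the Emotiv API.
def expression_to_drive(expr):
    """expr: dict of expression flags, e.g. {'blink': True, 'look_left': False}."""
    v, w = 0.0, 0.0                  # linear (m/s), angular (rad/s)
    if expr.get('blink'):
        v = 0.2                      # blink -> move forward
    if expr.get('look_left'):
        w = 0.5                      # glance left -> turn left
    if expr.get('look_right'):
        w = -0.5                     # glance right -> turn right
    if expr.get('smile'):
        v, w = 0.0, 0.0              # smile -> stop
    return v, w

print(expression_to_drive({'blink': True, 'look_right': True}))
```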

4. EXPERIMENTAL RESULTS
The algorithms described in Section 3 have been implemented on the Neptune robotic system, and sample experimental results are presented here.

Figure 5 shows the raw user interaction forces measured using the Flexi Force sensors mounted on the iPad during a typical physical interaction session. The user interaction force is used to actuate the robot in order to adjust the screen orientation for the user. Figure 6 shows the 3D iPad screen movement corresponding to the physical force interaction, based on the desired impedance parameters (a virtual mass similar to the weight of the iPad, and no desired spring force). The user can also interact with the robot via the Wii remote to set the iPad screen position.

Figure 5. Flexi Force sensor measurements (sensors 1-4), measuring pushing force (N) over time (s).

Figure 6. Trajectory of the robot end-effector (X, Y, Z in mm) based on the force sensor readings from Figure 5 and force feedback control.

The WiiMote gesture recognition discussed in the previous section has been tested and implemented for detecting circular user hand motions, as shown in Table 1, with an average match percentage that can clearly be used to distinguish the gesture.

Table 1. Match percentage for the circle gesture using a Neural Network with 100 neurons in the hidden layer.

Testing set   Neural Net Size   Match (X) percentage   Match (Z) percentage
Set 1         100               81.18                  96.89
Set 2         100               86.94                  90.80
Set 3         100               75.23                  67.80
Set 4         100               65.79                  93.13
Set 5         100               70.80                  73.70

Figure 7 shows the acceleration measured along the x, y, and z axes by the Wii remote over the period of interaction. The Wii remote acceleration data is used to compute the roll and pitch angles, which are then applied to the joint motors of the robot. Figure 8 shows the variations of the joint angles corresponding to the Wii remote acceleration. Finally, Figure 9 shows the 3D movement of the iPad screen attached to the robot end-effector over the corresponding interaction time.

Figure 7. Wii remote acceleration data (g) along X, Y, and Z resulting from user motion.

Figure 8. Joint angles (radians) for joints 1-4 fed to the robot, based on the user Wii data from Figure 7 and the WiiMote interaction algorithm.

Figure 9. Trajectory of the robot end-effector (X, Y, Z in mm) based on the Wii data from Figure 7.

The Neural Headset interface can be used in Expressiv mode, which recognizes facial expressions. Figure 10 shows a typical response from the user while smiling, quantitatively characterized through the extent of the smile on a scale of 0 to 1.

Figure 10. Smile extent during a typical user session with Neptune's Neural Headset.

For the visual servoing experiments, a rectangular, red-colored box was tracked by the eye-in-hand camera mounted on the robot end-effector. The centroid of the object was extracted from the video stream, and the servoing algorithm resulted in the object being centered in the camera view, as shown in Figures 11 (a) and (b), using λ = 0.08 and a constant 6×6 approximation of the interaction matrix.

Figure 11. Tracking error (pixels) during centering of an object by visual servoing along the image coordinates X (a) and Y (b).

5. CONCLUSION
In this paper we described Neptune, a novel robotic system designed to assist children with Cerebral Palsy. The system consists of a mobile manipulator, multi-modal interface hardware, and several interaction algorithms designed to make this assistive system more interactive and useful. Experimental results showing interaction data collected through a Wii remote, a neural headset, a camera, and force sensors were presented. The goal of the work presented here was to bring the system to the stage where it can begin interacting with patients.

Since children, including those suffering from CP, stand to benefit from game therapy, we are excited to use iPad therapeutic games in conjunction with the robot in the near future. We plan to test the robot with CP patients during clinical trials with collaborators from Cook Children's Northeast Rehabilitation Center and the University of North Texas Health Sciences Center, both located in the Dallas-Fort Worth region of the United States. Finally, we expect that much of the fine-tuning of the Neptune interaction parameters will occur in response to actual therapeutic outcomes. Additional technological challenges that we plan to address in the near future include reducing the time delay between the base and the arm by upgrading the communication to a real-time CAN protocol.

6. ACKNOWLEDGEMENTS
This work was supported by US National Science Foundation Grants #CPS 1035913 and #CNS 0923494. The authors wish to thank Isao Saito and Eric Becker for their generous support and help with the experiments. The second author also acknowledges the NSF Scholar Award to attend the Doctoral Consortium of PETRA 2011.

7. REFERENCES
[1] Lee, J.C. 2008. Hacking the Nintendo Wii Remote. IEEE Pervasive Computing, vol. 7, no. 3, pp. 39-45, July-Sept. 2008.
[2] Smith, C., and Christensen, H.I. 2009. Wiimote robot control using human motion models. In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pp. 5509-5515, 10-15 Oct. 2009.
[3] Waytowich, N., Henderson, A., Krusienski, D., and Cox, D. 2010. Robot application of a Brain Computer Interface to Staubli robots – early stages. World Automation Congress, TSI Press, 2010.
[4] Barbosa, A.O.G., Achanccaray, D.R., and Meggiolaro, M.A. 2010. Activation of a mobile robot through a brain computer interface. In Proc. IEEE International Conference on Robotics and Automation (ICRA), pp. 4815-4821, 3-7 May 2010.
[5] Tapus, A., Tapus, C., and Mataric, M. 2009. The role of physical embodiment of a therapist robot for individuals with cognitive impairments. In Proc. 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), pp. 103-107, Sept. 27-Oct. 2, 2009.
[6] Ashwal, S., Russman, B.S., Blasco, P.A., Miller, G., Sandler, A., Shevell, M., and Stevenson, R. 2004. Practice Parameter: Diagnostic assessment of the child with cerebral palsy: Report of the Quality Standards Subcommittee of the American Academy of Neurology and the Practice Committee of the Child Neurology Society. Neurology, vol. 62, pp. 851-863, 2004.
[7] Makedon, F., Le, Z., Huang, H., Becker, E., and Kosmopoulos, D. 2009. An event driven framework for assistive CPS environments. ACM SIGBED Review (ISSN 1551-3688), Special Issue, ACM Special Interest Group on Embedded Systems.
[8] Tapus, A., Tapus, C., and Mataric, M. 2009. The role of physical embodiment of a therapist robot for individuals with cognitive impairments. In Proc. 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), pp. 103-107, Sept. 27-Oct. 2, 2009.
[9] Tsai, B.-C., Wang, W.-W., Hsu, L.-C., Fu, L.-C., and Lai, J.-S. 2010. An articulated rehabilitation robot for upper limb physiotherapy and training. In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), pp. 1470-1475, 18-22 Oct. 2010.
[10] Kang, K.I., Freedman, S., Mataric, M.J., Cunningham, M.J., and Lopez, B. 2005. A hands-off physical therapy assistance robot for cardiac patients. In Proc. 9th International Conference on Rehabilitation Robotics (ICORR 2005), pp. 337-340, 28 June-1 July 2005.
[11] Ikemoto, S., Minato, T., and Ishiguro, H. 2008. Analysis of physical human-robot interaction for motor learning with physical help. In Proc. 8th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2008), pp. 67-72, 1-3 Dec. 2008.
[12] Chaumette, F., and Hutchinson, S. 2006. Visual servo control, Part I: Basic approaches. IEEE Robotics & Automation Magazine, vol. 13, no. 4, pp. 82-90, Dec. 2006.
[13] Chaumette, F., and Hutchinson, S. 2007. Visual servo control, Part II: Advanced approaches. IEEE Robotics & Automation Magazine, vol. 14, no. 1, pp. 109-118, March 2007.
[14] Feddema, J.T., and Mitchell, O.R. 1989. Vision-guided servoing with feature-based trajectory generation. IEEE Transactions on Robotics and Automation, vol. 5, pp. 691-700, Oct. 1989.
[15] Rajruangrabin, J., and Popa, D.O. 2007. Enhancement of manipulator interactivity through compliant skin and extended Kalman filtering. In Proc. IEEE Conference on Automation Science and Engineering (CASE), September 2007.