19th IEEE International Symposium on Robot and Human Interactive Communication Principe di Piemonte - Viareggio, Italy, Sept. 12-15, 2010

Interacting in time and space: Investigating human-human and human-robot joint action

Glasauer S., Huber M., Basili P., Knoll A., and Brandt T.

Abstract— When we have to physically interact with a robot, the benchmark for natural and efficient performance is our experience of daily interactions with other humans. Despite significant advances in human-robot interaction, this goal is still far away. While considerable progress has been made in various areas, ranging from improved hardware and safety measures to better sensor systems, research on the basic mechanisms of interaction and their technical implementation is still in its infancy. In the following, we give an overview of our own work aimed at improving human-robot interaction and joint action. When humans jointly collaborate to achieve a common goal, the actions of each partner need to be properly coordinated to ensure a smooth and efficient workflow. This includes the timing of the actions but also, in the case of physical interaction, their spatial coordination. We thus first investigated how a simple physical interaction, a hand-over task between two humans without verbal communication, is achieved. Our results with a human as receiver and both humans and robots as delivering agents show that both the position and the kinematics of the partner's movement are used to increase the confidence in predicting the hand-over in time and space. These results underline that, for successful joint action, the robot must act predictably for the human partner. However, in a more realistic scenario, robot and human constitute a dynamic system in which each agent predicts and reacts to the actions and intentions of the other. We therefore investigated and implemented an assembly task in which the robot acts as an assistant for the human. Using a Bayesian estimation framework, the robot predicts assembly duration in order to deliver the next part just in time.

Manuscript received March 31, 2010. This work was supported by the DFG cluster of excellence CoTeSys (Projects 147, 410, 411a). S. Glasauer, M. Huber, and P. Basili are with the University Clinic Munich, Center for Sensorimotor Research, 81377 Munich, Germany (phone: +49-89-7095-4839; e-mail: [email protected]). A. Knoll is with the Department of Computer Science, Technical University Munich, Germany. T. Brandt holds the Chair for Clinical Neurosciences at the University of Munich, 81377 Munich, Germany.
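As an illustrative aside, a minimal sketch of the kind of Bayesian duration prediction mentioned in the abstract might look as follows. The paper does not give its implementation at this point; the conjugate Gaussian model, the parameter values, and all identifiers below are our assumptions for illustration only.

    # Hypothetical sketch: just-in-time part delivery via Bayesian
    # estimation of a worker's assembly duration. The Gaussian model
    # and all names are illustrative assumptions, not the authors' code.

    class DurationEstimator:
        """Conjugate Gaussian update of the mean assembly duration."""

        def __init__(self, prior_mean, prior_var, obs_var):
            self.mean = prior_mean   # prior belief about mean duration [s]
            self.var = prior_var     # uncertainty of that belief [s^2]
            self.obs_var = obs_var   # trial-to-trial variability [s^2]

        def update(self, observed_duration):
            # Standard Gaussian posterior update with known observation noise.
            k = self.var / (self.var + self.obs_var)  # Kalman-like gain
            self.mean += k * (observed_duration - self.mean)
            self.var *= (1.0 - k)

        def predicted_duration(self):
            return self.mean

    def seconds_until_fetch(estimator, robot_fetch_time):
        """Wait time before fetching so the part arrives at predicted completion."""
        return max(0.0, estimator.predicted_duration() - robot_fetch_time)

    est = DurationEstimator(prior_mean=20.0, prior_var=25.0, obs_var=9.0)
    for d in [18.2, 21.5, 19.7]:   # durations of completed assembly steps [s]
        est.update(d)
    print(seconds_until_fetch(est, robot_fetch_time=6.0))

With each observed assembly step, the posterior sharpens, so the delivery time converges towards the individual worker's pace.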

I. INTRODUCTION

Robotic research is developing rapidly [1] so that, based on current technological capabilities, humans can not only share the same workspace with robots but also begin to employ the robot as a useful supporting assistant at home and at work. One of the main issues of intuitive human-robot interaction is the predictability of the robot's actions and goals. The behavior of the robotic partner must be easily and intuitively understood in an interaction context and should not require training and adaptation on the human side. Moreover, robots have to be efficient and safe to be accepted by humans as interaction partners.
Empirically we know that these aspects are fundamental when humans interact. In order to design such behavior, we first have to investigate how humans achieve safe, efficient, and intuitive interactions. Only detailed knowledge about perception, prediction, and planning of joint actions among human individuals will enable us to build artificial systems with which we can interact intuitively and easily.

While human arm movements such as reaching, grasping, and pointing have been investigated thoroughly in laboratory contexts [2][3], there is only little knowledge about the mechanisms of manual joint action. Performing actions alone and performing them together with a partner or assistant are different conditions and therefore lead to different performance and behavior of both partners [4]. Joint action requires both action coordination at the level of motor control [5] and consideration of the partner's actions. Several studies have investigated action observation and its mapping onto motor representations [6], learning from observation, the involvement of the CNS (central nervous system) and the mirror neuron system in action observation [7], and the understanding of goal-directed actions [8]. However, here we are interested in joint interaction where people work together and have to react to the actions of their partners with reciprocal actions. The critical question during interpersonal interaction is how well the interacting partners can plan and adjust their actions jointly [4].

One of the basic examples of an everyday joint action between humans is the handing over of objects. The ultimate goal for a successful human-robot hand-over scenario is, therefore, to achieve the same performance as between humans. That implies a natural and intuitive interaction, which does not require extensive training and happens at the right time and at the right place. The requirement of spatial and temporal coordination is common to many joint-action scenarios in which the goal and the distribution of workload are well known to all participants. In this context, we examined action coordination in hand-over tasks with the aim of answering how hand-over is executed by humans and how it can be transferred to robotic systems.

II. TIMING OF A HAND-OVER

Hand-over often occurs with both participants sitting or standing at a preset spatial position, e.g., when receiving an item at a sales counter or in a manufacturing scenario. In such a case, the positions of the participants are fixed and the hand-over requires only arm and hand movement.


Figure 1. Hand-over experiments. Left: human-human (Experiments 1, 2, and 3); middle: human-humanoid robot (Experiments 4a and 4b); right: human-industrial robot (Experiments 5a and 5b).

Only a few studies so far have dealt with passing-over tasks. All of them investigated only two-dimensional planar movements, in which objects were delivered on the surface of a table located between the two interacting subjects. One study argued for differences in the grasping strategies of the receiving subjects based on their movement trajectories [9]. Another work used potential-field-based trajectory planning to simulate human-like trajectories when replacing the delivering subject with a robot [10]. Further studies investigated human preferences concerning the movement velocity of their partners, yielding very low values for preferred peak velocity [11]. However, in all these studies timing was not explicitly considered.

A. Human-Human Hand-over Experiment

In our experimental setup, hand-over is unconstrained and takes place in 3D space (Fig. 1, left). Objects were handed over by a human experimenter to a human test subject (Fig. 1, left; see also [12]). Each test subject received 6 cubes. Hand and body movements were measured by motion-tracking devices. Data analysis focused on reaction times and spatial hand-over positions. Reaction time was defined as the duration from lifting the cube until the receiving subjects reacted by lifting their hand to grasp the cube.
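For illustration, a minimal sketch of how such a reaction time could be extracted from motion-tracking data is given below. The paper only states the definition; the velocity-threshold onset criterion, the sampling-rate handling, and all identifiers are our assumptions, not the analysis pipeline actually used.

    import numpy as np

    # Hypothetical sketch: reaction time from tracked 3D positions of the
    # cube and the receiver's hand, sampled at fs [Hz]. The onset criterion
    # (speed exceeding a fixed threshold) is our assumption.

    def movement_onset(positions, fs, threshold=0.05):
        """Index of the first sample where speed exceeds threshold [m/s].

        positions: (N, 3) array of tracked 3D positions [m].
        """
        velocity = np.gradient(positions, 1.0 / fs, axis=0)  # numerical derivative
        speed = np.linalg.norm(velocity, axis=1)
        above = np.nonzero(speed > threshold)[0]
        return int(above[0]) if above.size else None

    def reaction_time(cube_positions, hand_positions, fs):
        """Reaction time [s]: cube lift-off until the receiving hand starts moving."""
        t_cube = movement_onset(cube_positions, fs)
        t_hand = movement_onset(hand_positions, fs)
        if t_cube is None or t_hand is None:
            return None
        return (t_hand - t_cube) / fs

In practice the threshold would be tuned to the noise level of the tracking system, and onsets would typically be verified visually against the raw trajectories.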
