ISSI-0068-0806

Interactive Humanoid Robots for a Science Museum

Masahiro Shiomi, Takayuki Kanda, Hiroshi Ishiguro, and Norihiro Hagita

Abstract— This paper reports on a field trial with interactive humanoid robots at a science museum, where visitors are encouraged to study and develop an interest in science. In the trial, each participating visitor wore an RFID tag while looking around the museum's exhibits. Information obtained from the RFID tags was used to direct the robots' interaction with the visitors. The robots autonomously interacted with visitors via gestures and utterances resembling the free play of children [1]. In addition, they guided visitors around several exhibits and explained the exhibits based on sensor information. The robots received high evaluations from visitors during the two-month trial. We also conducted an experiment during the field trial to compare in detail the effects of exhibit-guiding and free-play interaction under three operating conditions. The results revealed that the combination of free-play interaction and exhibit-guiding positively affected visitors' experiences at the science museum.

Index Terms— Robotics, field trial, interactive robot, intelligent systems, science museum.

I. INTRODUCTION

Our objective is to develop an intelligent communication robot that operates in large-scale daily environments, such as museums, to support people through interactions involving body movements and speech. We selected a humanoid robot to achieve this objective because its physical structure enables it to interact with people using human-like body movements such as shaking hands, greeting, and pointing. Such interactions are more likely to be understood by both adults and children than interaction through an electronic interface such as a touch panel or buttons. In addition, a human-like body is useful for naturally holding people's attention [2]. We expect human-like interaction to be important for improving the perceived friendliness of the robot toward people.

To behave intelligently during an interaction, the robot requires many types of information about its environment and the people with whom it is interacting. For example, it is reasonable to suppose that people interacting with the robot would expect some human-like interaction, such as being greeted by name. It is difficult, however, for a robot to obtain such information in a daily environment. In particular, in a large-scale daily environment such as a museum, even simple functions such as person identification are very difficult, because the robot's sensing ability is likely to be degraded by the presence and movement of large numbers of people as well as by unfavorable illumination and background conditions.

To achieve our objective, we integrate autonomous robotic systems with ubiquitous sensors that support the robots. In this approach, ubiquitous sensors monitor the environment, acquire rich sensory information, and process it, enabling an autonomous robot to interact freely with people by utilizing the sensory data received from the sensors. For example, a robot can behave based on personal information such as a visitor's name and movement history, obtained via distributed RFID tags and readers. This enables the robot to provide more pertinent information during interaction, such as recommendations based on each visitor's movement history. Moreover, ceiling-mounted cameras supply accurate coordinates that are used for localizing and navigating the robot. With this approach, we explore the potential of communication robots.

(All authors are with the ATR Intelligent Robotics and Communication Laboratories, 2-2-2 Hikaridai, Keihanna Science City, Kyoto, Japan; e-mail: [email protected]. In addition, M. Shiomi and H. Ishiguro are with Osaka University, Suita City, Osaka, Japan.)

II. RELATED WORK

Table 1 compares our work with previous work on communication robots from four points of view: purpose, experimental environment, details of interaction, and functions. Aibo and Paro [3] have animal-like appearances (a dog and a seal, respectively). These robots provide entertainment or mental care to people through a human-pet style of interaction, and both studies indicated the effectiveness of interaction between people and pet-like robots.


TABLE 1
Various field experiments with interactive robots

Purpose                              Location        Human-like    Using personal   Person            Navigation
                                                     interaction   information      identification
Mental care [3]                      Hospital        –             –                –                 –
Language education [4]               School          ✓             ✓                ✓                 –
Assistant [5]                        Nursing homes   –             –                –                 ✓
Guidance & navigation [6]            Museum          –             –                –                 ✓
Guidance & navigation [7]            Expo            –             –                ✓                 ✓
Interaction & guidance [this paper]  Museum          ✓             ✓                ✓                 ✓

Robovie [4] was used to assist language education in an elementary school. This research detailed the importance of using personal information in interaction. However, Robovie interacted with only a limited group of people, so it is not clear how a robot should operate in large-scale environments visited by a wide variety of people. Nursebot [5], RHINO [6], and RoboX [7] are traditional mobile robots whose developers designed robust navigation functions for daily environments. In particular, RoboX and RHINO guided thousands of people in large-scale environments. Although these works demonstrate the effectiveness of robust navigation in interactions between people and robots, their interaction functions are quite different from human-like interaction. To create an intelligent communication robot that can operate in a large-scale environment, we consider it important to investigate the effectiveness of human-like interaction as well as to develop robust functions. Therefore, we designed our robots to interact with people using human-like bodies and personal information obtained via ubiquitous sensors.


Figure 1. Map of the fourth floor of the Osaka Science Museum

III. EXPERIMENTAL ENVIRONMENT

A. General Settings

Seventy-five exhibits are positioned on the fourth floor of the Osaka Science Museum, which visitors can freely explore. Figure 1 shows a map of the fourth floor. Generally, visitors walk counterclockwise from the entrance to the exit. The width and length of the Osaka Science Museum are 84 m and 42 m, respectively.

B. Experimental Settings

We installed the humanoid robots, RFID tag readers, infrared cameras, and video cameras in the Osaka Science Museum for the experiments. Visitors could freely interact with our robots, just as with the other exhibits. Typically, visitors progressed through the following steps:
• If a visitor decides to register as part of our project, personal data such as name, birthday, and age (under 20 or not) are gathered at the reception desk (Fig. 1, point A). The visitor receives a tag at the reception desk. The system binds those data to the ID of the tag and automatically produces a synthetic voice for the visitor's name.
• The visitors can freely experience the exhibits in the Osaka Science Museum as well as interact with our robots. Four robots were placed at positions B, C, and D on the fourth floor, as shown in Fig. 1. After finishing, visitors return their tags at the exit (Fig. 1, point E).

IV. SYSTEM CONFIGURATION

Our intelligent, interactive system at the Osaka Science Museum consists of four humanoid robots and ubiquitous sensors. Figure 2 gives an overview of the system: the upper part represents the ubiquitous sensors, and the lower part represents the robots (detailed in Section IV-C). The robots behaved based on information from their own sensors as well as information from the ubiquitous sensors. The ubiquitous sensors record the movements and positions of visitors on the fourth floor of the Osaka Science Museum via their RFID tags. These sensors are also used to identify visitors and to estimate the correct coordinates of the robots. The visitors' information is used in interactions between robots and visitors, and the interactions themselves were recorded in a database via recording cameras. Generally, the robots behaved as follows:
- One robot served as a guide to the exhibits.
- Two stationary robots explained the exhibits.
- As visitors prepared to leave, one robot greeted them by name, asked them to return their RFID tags, and said goodbye.
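The reception-desk registration described above can be sketched as a simple binding from tag IDs to visitor records. All class, field, and function names here are illustrative assumptions, not the deployed system's API:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Visitor:
    """Personal data gathered at the reception desk."""
    name: str
    birthday: str        # e.g. "07-21"; format is an assumption
    under_20: bool       # age was recorded only as under 20 or not

class Reception:
    """Binds each visitor's data to the ID of the RFID tag they receive."""
    def __init__(self) -> None:
        self._by_tag: Dict[int, Visitor] = {}

    def register(self, tag_id: int, visitor: Visitor) -> None:
        self._by_tag[tag_id] = visitor
        # The deployed system would also synthesize a voice clip
        # for the visitor's name at this point.

    def lookup(self, tag_id: int) -> Optional[Visitor]:
        """Robots query this to greet a detected visitor by name."""
        return self._by_tag.get(tag_id)

reception = Reception()
reception.register(42, Visitor("Yamada", "07-21", under_20=True))
print(reception.lookup(42).name)   # Yamada
```

A robot that detects tag 42 with its reader can then look up the bound record and address the visitor by name.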


Figure 2. An overview of our system.

A. Database

All ubiquitous sensor data and robot sensor data were recorded in the database with a time stamp; personal information and movement histories were also recorded. These data were used for deciding the robots' behavior, interacting with visitors, and analyzing the results of our experiments. The recorded data are as follows:
1) From reception: IDs of the visitors' RFID tags, registered visitors' personal information, and the times of tag registration and return.
2) From RFID tag readers: the times when visitors approached particular exhibits and estimates of visitors' positions.
3) From all ubiquitous sensors and robots: each sensor's output.
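A minimal sketch of this time-stamped recording scheme, using an in-memory SQLite table as a stand-in for the central database (the table and column names are assumptions):

```python
import sqlite3
import time

# In-memory stand-in for the central database described above.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE sensor_events (
    ts REAL, sensor TEXT, tag_id INTEGER, detail TEXT)""")

def record(sensor: str, tag_id: int, detail: str) -> None:
    """Every sensor output is stored with a time stamp."""
    db.execute("INSERT INTO sensor_events VALUES (?, ?, ?, ?)",
               (time.time(), sensor, tag_id, detail))

def movement_history(tag_id: int):
    """A visitor's movement history is a time-ordered query by tag ID."""
    return db.execute(
        "SELECT ts, sensor, detail FROM sensor_events "
        "WHERE tag_id = ? ORDER BY ts", (tag_id,)).fetchall()

record("reader_B", 42, "approached 'magnetic power'")
record("reader_C", 42, "approached telescope")
print(len(movement_history(42)))  # 2
```

Both the robots' behavior selection and the later analysis of the experiment can then be driven by queries over the same event log.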


B. Ubiquitous sensors embedded in the environment

On the fourth floor of the Osaka Science Museum, we installed 20 RFID tag readers (SpiderIIIA, RF-CODE), including the two fitted to the Robovies, three infrared cameras, and four video cameras. All sensor data were sent to the central database through an Ethernet network. The following subsections describe each type of sensor.

1) RFID tag readers

We used active RFID tags, a technology that enables easy identification of individuals: detection is unaffected by occlusions, the detection area is wide, and the distance between a tag reader and a tag can be roughly estimated. These benefits make the technology suitable for large environments. The RFID tag readers record the time when each tagged visitor approached particular exhibits and estimate the visitors' positions. We placed the readers around particular exhibits to detect whether visitors approached them; Figure 1 shows the positions of the readers.

2) Infrared cameras

We placed an infrared LED on top of each Robovie and attached infrared cameras to the ceiling in order to correct the robot's position (to accurately navigate a robot in a crowded museum). The system produces binary images from the infrared cameras and detects bright areas. It then calculates absolute coordinates from the weighted center of the detected area and sends them to the database. The infrared camera positions are shown in Fig. 1. The distance between the floor and the ceiling is about 4 m. Images from an infrared camera are 320 by 240 pixels, and one pixel covers about 1 cm² of floor area.
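The weighted-center calculation used to localize a Robovie from an infrared camera image might look like the following sketch. The brightness threshold and the pixel-to-centimeter mapping are assumptions, and a real deployment would also add each camera's fixed offset to obtain museum-wide coordinates:

```python
import numpy as np

# The infrared cameras yield 320x240 images; the robot's LED appears as a
# bright blob, and its weighted center gives the robot's position.
CM_PER_PIXEL = 1.0   # one pixel covers about 1 cm of floor (assumed mapping)

def robot_position(image: np.ndarray, threshold: int = 200):
    """Return (x, y) floor coordinates in cm of the bright blob, or None."""
    ys, xs = np.nonzero(image > threshold)
    if xs.size == 0:
        return None                      # LED not visible to this camera
    weights = image[ys, xs].astype(float)
    cx = float(np.average(xs, weights=weights)) * CM_PER_PIXEL
    cy = float(np.average(ys, weights=weights)) * CM_PER_PIXEL
    return cx, cy

img = np.zeros((240, 320), dtype=np.uint8)
img[100:104, 150:154] = 255              # simulated LED blob
print(robot_position(img))               # (151.5, 101.5)
```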


3) Video cameras

The video camera positions are also shown in Fig. 1. The output images of each video camera were recorded on a PC and used to analyze the data generated during the experiment.

C. Humanoid robots

We used two types of humanoid robot: Robovie and Robovie-M. This section provides details of each robot.

1) Robovie

Hardware: "Robovie" is an interactive humanoid robot characterized by its human-like physical expressions and its various sensors. We used humanoid robots because a human-like body is useful for naturally holding people's attention [2]. Its height is 120 cm and its diameter is 40 cm. The robot has four DOFs in each of its two arms, three DOFs in its head, and a mobile platform. It can synthesize and produce a voice through a speaker. We also attached an RFID tag reader to Robovie, enabling it to identify individuals around it [4]. Because each Robovie carries an RFID tag reader, the Robovies serve not only as interactive robots but also as part of the sensor system.

Situated modules: Robovie's software architecture comprises situated modules, a situated module controller, and episode rules [8], which together produce consistent interactive behaviors. Interactive behaviors are designed with knowledge about the robot's embodiment obtained from cognitive experiments and then implemented as situated modules with situation-dependent sensory data processing for understanding complex human behaviors. Situated modules realize interactive behaviors such as calling a visitor's name, shaking hands, greeting, explaining exhibits, and guiding visitors around exhibits. In our system, the situated modules were designed to use both the robot's own sensors and the ubiquitous sensors. The relationships among behaviors are implemented as rules governing execution order (named "episode rules") to maintain a consistent context for communication. The situated module controller selects a situated module based on the episode rules and sensory data. Figure 3 shows an example of episode rules and scenes from an interaction between a robot and a visitor. The episode rules were designed so that the robots behave depending on the data recorded in the database and the outputs of the robots' sensors. In this example, the robot explores the environment with "Explore," then detects a visitor's tag. This triggers a reactive transition governed by episode rule 1: the robot calls the visitor's name by executing the situated module "Call name" (Fig. 3, top-center). After executing "Call name," it starts talking about exhibits by executing "Talk about exhibits" (Fig. 3, top-right). This sequential transition is caused by episode rule 2, which has a higher priority than episode rule 3.
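A minimal sketch of how a situated module controller might select modules from priority-ordered episode rules, mirroring the Figure 3 example (the rule tuples and module names are illustrative, not the actual rule syntax of [8]):

```python
# Episode rules map (current module, sensor event) to a next module and are
# checked in priority order; lower number means higher priority.
EPISODE_RULES = [
    # (priority, current_module, event, next_module)
    (1, "Explore",   "tag_detected", "Call name"),            # reactive rule
    (2, "Call name", "done",         "Talk about exhibits"),  # sequential rule
    (3, "Call name", "done",         "Explore"),              # lower priority
]

def select_module(current: str, event: str) -> str:
    """Pick the next situated module via the highest-priority matching rule."""
    matching = [r for r in EPISODE_RULES if r[1] == current and r[2] == event]
    if not matching:
        return current                   # no rule fires; keep current behavior
    return min(matching, key=lambda r: r[0])[3]

print(select_module("Explore", "tag_detected"))   # Call name
print(select_module("Call name", "done"))         # Talk about exhibits
```

As in the paper's example, rule 2 outranks rule 3, so finishing "Call name" leads to "Talk about exhibits" rather than back to "Explore".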

Figure 3. An illustration of episode rules and scenes of an interaction.

The number of situated modules developed to date has reached 230: 110 are interactive
behaviors such as hand-shaking, hugging, playing paper-rock-scissors, exercising, greeting, and engaging in short conversation; 40 are idling behaviors such as scratching its head and folding its arms; 20 are moving-around behaviors such as pretending to patrol an area; 50 are explaining and guiding behaviors for exhibits; and 20 are talking behaviors based on information from an RFID tag. The number of episode rules describing the relationships among these modules exceeds 800.

2) Robovie-M

"Robovie-M" is a humanoid robot characterized by its human-like physical expressions. We chose a height of 29 cm for this robot. Robovie-M has 22 DOFs and can perform two-legged locomotion, bow its head, and do a handstand. Since it was not originally equipped to speak, we used a personal computer and a pair of speakers to give it a voice. The behavior of each Robovie-M is decided by the behavior of a Robovie or by output from the RFID tag readers.

V. ROBOT BEHAVIOR

A. Locomotive robot

We used a Robovie as the locomotive robot, which moved around parts of the environment, interacted with visitors, and guided them to exhibits. Each robot's behavior is automatically decided by episode rules based on data from its sensors and the database. The behavior can be divided into four types:

1) Interaction with humans: childlike interaction

The robot can engage in childlike behavior such as handshaking, hugging, and playing the game of "rock, paper, scissors." Moreover, it has reactive behaviors such as avoidance and gazing at a touched part of its body, as well as patient behaviors such as solitary playing and
moving back and forth. Figure 4-a shows interaction scenes between Robovies and visitors.

2) Interaction with humans: using information from RFID tags

The robots can detect RFID tag signals around themselves using their RFID tag readers, which allows them to obtain visitors' personal data from the tag IDs. Each robot can greet visitors by name, wish them a happy birthday, and so on. In addition, the system records the time that visitors spend on the fourth floor of the Osaka Science Museum, and the robots can behave according to that time.

3) Guiding people to exhibits: human guidance

The robot can guide people to four kinds of exhibits, choosing the target at random. Figures 4-b and 4-c show an example of this behavior. When bringing visitors to the telescope, the robot says, "I am taking you to an exhibit, please follow me!" (b-1), approaches the telescope (b-2, 3), suggests that the visitor look through it, and then talks about its inventor (b-4).

4) Guiding people to exhibits: using information from RFID tags

The RFID tags' data are also used in interaction. We used the amount of time that visitors spent near an exhibit to judge whether they had tried it. For example, when an RFID-tagged visitor has stayed around the "magnetic power" exhibit longer than a predefined time, the system assumes that the visitor has already tried it, and the robot says, "Yamada-san, thank you for trying 'magnetic power.' What did you think of it?" If the system assumes that the visitor has not tried it, the robot instead asks, "Yamada-san, you didn't try 'magnetic power.' It's really fun, so why don't you give it a try?"

B. Robots that talk with each other

Two stationary robots (Robovie and Robovie-M, Fig. 4-d-1,2) can casually talk about the exhibits as humans do, with accurate timing, because they are synchronized with each other using
an Ethernet network. Robovie's behavior was decided by episode rules based on data from its RFID tag reader and the database; Robovie also controlled the timing of Robovie-M's motion and speech. The topic itself is determined from the RFID tag data: by knowing a visitor's previous course of movement through the museum, the robots can try to interest the visitor in an exhibit he or she overlooked by starting a conversation about that exhibit.

C. A robot bidding farewell

This robot is positioned near the exit and, after reading their RFID tags, says goodbye to departing visitors. It also reorients visitors who have lost their way on the tour by examining the visitor's movement history and time spent on the fourth floor of the Osaka Science Museum, as recorded by the system. If visitors walk clockwise, they immediately encounter this robot at the beginning and are pointed in the right direction. Figure 4-d-3 shows a scene with this robot.
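The dwell-time judgment described in Section V-A can be sketched as follows; the threshold value is a hypothetical stand-in for the paper's "predefined time," and the function name is illustrative:

```python
# If a tagged visitor stayed near an exhibit longer than a threshold,
# the system assumes the exhibit was tried and the robot comments accordingly.
TRIED_THRESHOLD_S = 30.0   # assumed value; the paper says only "predefined time"

def exhibit_comment(name: str, exhibit: str, dwell_seconds: float) -> str:
    """Choose the robot's utterance based on the visitor's dwell time."""
    if dwell_seconds >= TRIED_THRESHOLD_S:
        return (f"{name}-san, thank you for trying '{exhibit}'. "
                "What did you think of it?")
    return (f"{name}-san, you didn't try '{exhibit}'. "
            "It's really fun, so why don't you give it a try?")

print(exhibit_comment("Yamada", "magnetic power", 45.0))
```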

(a) Scenes of interaction between visitors and Robovie: (1) Robovie shakes hands; (2) a child touching Robovie; (3) Robovie hugging children.
(b) Robovie guiding visitors to the telescope, (1)-(4).
(c) Scenes when a visitor took an interest in the telescope.
(d) Scenes of stationary robots interacting with visitors: (1) two robots talking; (2) two robots talking to visitors; (3) a visitor talking to the robot.
Figure 4. Scenes of interaction between visitors and our robots.

VI. EXPERIMENT

A. A two-month exhibition

We performed experiments to investigate the impressions the robots made on visitors to the fourth floor of the Osaka Science Museum during a two-month period. The questionnaire used in the exhibition consisted of the following statements, each rated on a scale of 1 to 5:
(1) "Interesting": I am interested in the robots.
(2) "Friendly": I felt friendly toward the robots when I faced them.
(3) "Effective": I found the guidance provided by the robots effective.
(4) "Anxiety about interaction": I felt anxious when the robots talked to me.
(5) "Anxiety about future robots": I feel anxious about the possible widespread application of
robots to perform tasks such as those shown at the exhibition in the near future.

B. Results of the two-month experiment

By the end of the two-month period, the number of visitors had reached 91,107; 11,927 of them wore RFID tags, and 2,891 questionnaires were returned. The results of the two-month experiment indicate that most visitors were satisfied with their interactions with our robots. In this section, we describe three kinds of results: the scores for each questionnaire item, freely described opinions, and observed interactions between visitors and our robots.

1) Results for each item in the questionnaire

Figure 5 shows the results and averages from the questionnaires, providing three findings. First, most visitors were interested in and had good impressions of the robots: the score for "Interesting" was above four, and that for "Friendly" reached four. Second, regarding the effectiveness of guidance, the score for "Effective" was nearly three, indicating that the robots' guidance did not leave a particularly strong impression on visitors. Third, visitors did not feel anxious about interacting with these robots or about future robots: the scores for "Anxiety about interaction" and "Anxiety about future robots" were both below the middle value. Altogether, the questionnaire results indicate that most visitors were interested in and had good impressions of our robots, and did not feel anxious about them.


Figure 5. Results of returned questionnaires.

2) Freely described opinions

The free-form responses revealed that visitors held favorable impressions of the robots' presence. Moreover, visitors described their favorite robot behaviors, such as hugging and name-calling, which are basic elements of human society. Most opinions were along the lines of:
- We had a really good time.
- I had fun because the robots called me by name.
- We felt close to the robots.
Analysis of the freely described opinions revealed that visitors' opinions of the robots differed according to age [9]; for example, younger respondents did not necessarily like the robots more than older respondents.

3) Observed interactions between visitors and our robots

We observed some interesting scenes between visitors and our robots during the exhibition. We introduce these scenes as evidence that our robots interacted well with visitors.

Locomotive robot

• There were often many adults and children crowded around the robot. In crowded situations, a few visitors, mainly children, took turns interacting with the robot.
• As in Robovie's free-play interaction in a laboratory [1], children shook hands, played paper-rock-scissors, hugged the robot, and so forth. Sometimes they imitated the robot's body movements, such as its exercising.
• When the robot started to move to a different place (in front of an exhibit), some children followed it to the destination.
• After the robot explained the telescope exhibit, one child went to use the telescope (Fig. 4-c). When she came back to the robot, another child used the telescope.
• The name-calling behavior attracted many visitors. They tried to show the RFID tags embedded in their nameplates to the robot. Often, when one visitor did this, several other visitors began showing the robots their nameplates too, as if competing to have their names called.
• A visitor reported that when the robot moved toward him, he thought that it was aware of him, which pleased him.
These scenes demonstrate that robots can provide visitors with the opportunity to play with and study science through exhibits they might otherwise have missed. In particular, they remind us of the importance of making a robot move around, a capability that attracts people to interact with it. Moreover, as shown in the scene where children followed the locomotive robot, the robot drew their attention to the exhibit even though the exhibit (a telescope) was relatively unexciting. (The museum features many attractive exhibits that visitors can move and operate to gain an understanding of science, such as a pulley and a lever.)

Robots that talk to each other

• There were two typical visitor behaviors. One was simply to listen to the robots' talk.
For example, after listening to the robots, visitors talked about the exhibit that had been explained and sometimes visited it.
• The other was to wait to have their name called. In this case, visitors paid rather less attention to the robots' talk and instead showed their nameplates to the robots, similar to the behavior observed around the locomotive robot. Often, visitors would leave the front of the robot just after their name was called.
One implication is that displaying a conversation between robots can attract people and convey information to them, even though the interactivity is very low. Similar examples appear in other work [10].

Robot bidding farewell

• There were two typical visitor behaviors. One was simply to watch the robot's behavior.
• The other was, again, to wait to have their name called. In this case, visitors often showed their nameplates to the robot.
Robovie-M costs far less than Robovie-II. Although its functionality is very limited (it is small, has no embedded speech function, so we placed a speaker nearby, and has no sensors, so an RFID reader was also placed nearby), it entertained many visitors. In particular, the effectiveness of the name-calling behavior was demonstrated again, as seen in the children's behavior when returning their RFID tags.

C. Experiments on the behavior of robots

We expected that free-play interaction would affect the effectiveness of the robots' services. We performed experiments to examine the behavior of the robots under three operating conditions during one week, randomly switching conditions between the morning and the afternoon. The
subjects were the visitors who had RFID tags and played with the robots. After their interaction ended, we asked them to fill out a questionnaire rating three items on a scale of 1 to 7, where 7 is the most positive. The items were "Presence of the robots" (What did you think about the presence of robots in the science museum?), "Usefulness as a guide" (How useful were the robots for easily looking around the exhibits?), and "Experience with science & technology" (How much did the robots increase your interest in science and technology?). The subjects were also encouraged to provide other opinions about the robots. The three operating conditions were as follows:

1) Interaction: The robots behaved according to predefined functions. Each robot engaged in basic interaction, as described in Section V-A-1. No guide function was performed.

2) Guidance: The role of the robots was limited to guiding and giving explanations. Each robot behaved only as described in Section V-A-3.

3) Interaction, guidance, and using RFID tags: The robots combined the previous two operating conditions and also used data from the RFID tags. Each robot performed every kind of behavior introduced in Section V-A.

It is difficult to compare "using RFID" and "not using RFID" conditions directly. For example, in the "Guidance" condition, using information from the RFID tag necessitates that the robot behave interactively, such as calling someone by name. It is also difficult to compare the effects of each behavior in crowded situations. Thus, we use the third operating condition for comparing the
importance of the information on the RFID tags among the above conditions.

D. Results of robots' behavior

About 100 questionnaires were returned for each operating condition. Figure 6 shows the results and their averages, which are mostly above 6. There was a significant difference in "Experience with science & technology" depending on whether the robots were in the "Interaction, guidance, and using RFID" operating condition or in another condition (p