Behavior-Based Control of an Interactive Life-Like Character

Paolo Pirjanian, Claus B. Madsen and Erik Granum
Laboratory of Image Analysis, Aalborg University
Fredrik Bajers Vej 7D, DK-9220 Aalborg East, Denmark
E-mail: {paolo, cbm, [email protected]
December 8, 1998

1 Background

In this document we describe the life-like character Bouncy, which has been constructed in conjunction with a prototype developed for the STAGING project1. The main objective of the STAGING project is to develop new multimedia tools to be used in theatrical productions. One ambition of this project is to create virtual actors that partake in a play in cooperation and interaction with real and other virtual actors. A major issue in the project is thus how to construct and then autonomously control such life-like characters. Not only does this ambition require the agents to act autonomously based on perception of their environment, it also requires them to perform their actions in a dramaturgically interesting manner. In short: "It is not just what they do but how they do it."

Bouncy is the character that we have developed for the first of a series of such prototypes. The main objective with this prototype has been to provide a platform for one-to-one improvisational plays, where the focus has been the interaction between the human user and the life-like character Bouncy. In a nutshell, Bouncy is an animated autonomous life-like character with which a user can interact through a multimodal multimedia interface. He is autonomous in the respect that his behavior is not a function of user commands and interaction only; it is also a function of his own intentions and desires.

The focus of this paper is to describe how Bouncy is constructed, with a major emphasis on his autonomy. To provide the context, we describe the overall prototype system and its constituent components in section 2. In section 3 we describe how Bouncy's "brain" has been constructed using a behavior-based approach, and in section 4 a computational model of personality is described for Bouncy. The paper is concluded with a discussion of important issues in section 5.

1 STAGING, "The Staging of Virtual, Inhabited 3D Spaces", is a collaborative project funded by the Danish Research Councils.


2 Bouncy the dog

[Figure 1: The overall system. Components: a graphical display visualizing Bouncy, a sync transmitter, shutter glasses, a microphone and a data glove.]

The system that we are working on is illustrated in figure 1 and consists of three main components:

1. Bouncy, who is an inhabitant of the virtual world,
2. the user, who resides in the real world,
3. and the interface, which defines the media by which the user and Bouncy can interact.

[Figure 2: Bouncy.]

Bouncy (see figure 2) is a 3D animated life-like character with behaviors similar to those of a dog. He moves around by bouncing up and down (hence the name Bouncy) while moving forward and/or turning. He is able to display various facial expressions (happy, sad, etc.) by actuating his mouth, eyes and tail.

The user's interaction with Bouncy occurs through a set of devices. Bouncy is visualized for the user on a graphical display. The user has the option to wear a pair of shutter glasses to get a stereoscopic view of Bouncy and his environment. Bouncy perceives the user through a speech interface (the microphone) and a data glove. The data glove allows Bouncy to recognize a set of predefined hand gestures generated by the user. We have attempted to define an interface that feels intuitive for the user. The following table lists how Bouncy perceives the user's behavior:

User's behavior                      Bouncy's perception of user
Data glove input   Mic input
None               Shouting          Master is calling
Pointing           Shouting          Master is scolding
Open hand          Talking           Master is clapping
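The perception fusion in the table above amounts to a lookup on the combined glove and microphone input. A minimal sketch (all names are illustrative, not taken from the original implementation):

```python
# Mapping from (data glove input, mic input) to Bouncy's percept,
# following the table above. Names are hypothetical.
PERCEPTION_TABLE = {
    ("none", "shouting"): "master_is_calling",
    ("pointing", "shouting"): "master_is_scolding",
    ("open_hand", "talking"): "master_is_clapping",
}

def perceive(glove_input, mic_input):
    """Fuse data-glove and microphone input into one of Bouncy's percepts.

    Returns None when the input combination has no defined meaning.
    """
    return PERCEPTION_TABLE.get((glove_input, mic_input))
```

Unlisted combinations (e.g. an open hand with shouting) simply yield no percept, so Bouncy ignores them.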

In the following we describe Bouncy's behavior followed by a description of his mental attitude and how that was designed.

3 Bouncy's behavior

Bouncy is required to act autonomously based on perceptual stimuli and to provide an interesting interaction with the human user. How Bouncy's actions are connected to his perception is described in this section.

We have chosen a behavior-based approach [1] to controlling Bouncy. In this approach the control of an agent is distributed among a set of sensory-motoric units known as behaviors. Each behavior is concerned with a specific and well-defined task such as eating, mating, playing, etc. Based on internal and external sensory stimuli, each behavior controls the agent in a way that accommodates its task objective(s). This shared control approach might, however, lead to conflicts among behavior objectives; thus it is necessary to coordinate the activities of the behaviors to ensure coherent and rational behavior. In the literature this problem is known as the action selection problem, defined as follows: "How can an agent select `the most appropriate' or `the most relevant' next action to take at a particular moment, when facing a particular situation?" [3]. Action selection thus constitutes a major component in Bouncy.

A major design objective in constructing Bouncy has been to create a character which appears life-like, behaves like a pet (e.g., a dog) and can engage in an interesting interaction with a user. Some level of intelligence is also required from Bouncy; for instance, he should not bump into walls and other objects while traveling in his environment. To accommodate these requirements, three classes of behaviors have been defined for Bouncy:

1. Navigation and safety behaviors, which enable Bouncy to navigate in his environment while ensuring safety, e.g., obstacle avoidance.
2. Liveliness behaviors, which should enable Bouncy to behave as a life-like creature, in this case a dog.
3. Believability behaviors, which should enable Bouncy to convey his emotions and mental state. For instance, when Bouncy is sad he should express that through an appropriate facial expression.
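The common interface shared by all behaviors, as the text describes it, can be sketched as follows. This is a minimal illustration of the idea of a sensory-motoric unit mapping stimuli to action preferences; the class and method names are assumptions, not the paper's code:

```python
from abc import ABC, abstractmethod

class Behavior(ABC):
    """A sensory-motoric unit: maps sensory stimuli to a preference
    (in [0, 1]) over candidate actions."""

    @abstractmethod
    def objective(self, action, stimuli):
        """Return this behavior's preference for `action` given `stimuli`."""

class MoveForward(Behavior):
    """Toy example: prefer fast forward motion, normalized by v_max."""

    def __init__(self, v_max):
        self.v_max = v_max

    def objective(self, action, stimuli):
        v, omega = action        # translation and angular velocity
        return v / self.v_max    # linear preference for higher speed
```

An action selection mechanism (described in section 3.1.4) then blends or prioritizes the objective values of several such behaviors.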

3.1 Navigation and safety behaviors

The navigation behaviors enable Bouncy to move safely around in the virtual world. This consists of approaching a target while avoiding obstacles; thus we provide two behaviors, Avoid Obstacle and Approach Target. The Approach Target behavior itself consists of two elementary navigation behaviors: Maintain Target Heading and Move Forward. The tree in figure 3 depicts the relation between the behaviors. Thus safe navigation to a target position consists of three elementary behaviors. These behaviors should, however, be coordinated in an appropriate manner in order to avoid potential conflicts among behaviors. For instance, if the target is placed behind a wall, then Bouncy should choose to avoid the obstacle first and then approach the target. This calls for an action selection mechanism.

[Figure 3: The relation between the navigation and safety behaviors. Go To Target decomposes into Avoid Obstacles (input: obstacle position) and Approach Target, which in turn decomposes into Maintain Heading (input: target heading) and Move Forward (input: velocity).]

For coordination of these navigation behaviors we use the multiple objective action selection mechanism proposed in [4]. This action selection mechanism allows several competing behaviors to be active at a time. In a nutshell, each behavior proposes a preference over the set of actions in terms of a function denoted an objective function. The action with the highest preference corresponds to the action which best satisfies that behavior. Multiple behaviors are blended into a single more complex behavior that seeks to select the action that satisfies the behaviors as well as possible. In the following we describe how each behavior calculates its preferences for the set of possible actions. Note that the parameters used for controlling Bouncy consist of the translation velocity v and the angular velocity ω. Thus a pair of control parameters (v, ω) will cause Bouncy to move on a circular path with radius r = v/ω if ω ≠ 0, or on a straight line if ω = 0.

3.1.1 Move forward

This is the simplest of the behaviors; its objective is to make sure that Bouncy moves forward in order to achieve his task. The objective function of this behavior can be formulated as a linear function of the translation velocity:

    b_mf(v, ω) = a · v/v_max + b,    (1)


where a and b in our case are chosen as 1 and 0, respectively. The objective function is further normalized so that its values lie in the interval [0, 1]. Figure 4(a) is a plot of this objective function over discrete values of v and ω.

[Figure 4: (a) The Move Forward objective. (b) The Maintain Heading objective. (c) The Avoid Obstacle objective, plotted over v in [0, 20] inch/s and ω in [-50, 50] degree/s. (d) The situation, showing Bouncy and the target, used to generate the plots in (a), (b) and (c).]
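To make eq. (1) concrete, here is a minimal sketch that evaluates the move-forward objective over a discretized action space. The grid bounds and v_max = 20 inch/s are read off figure 4 and are assumptions; a = 1 and b = 0 follow the text:

```python
def b_mf(v, omega, v_max=20.0, a=1.0, b=0.0):
    """Move-forward objective, eq. (1): a linear function of the
    translation velocity, normalized to [0, 1] when a=1, b=0."""
    return a * v / v_max + b

# Discretize the action space roughly as in figure 4:
# v in {0, 5, 10, 15, 20} inch/s, omega in {-50, -25, 0, 25, 50} degree/s.
actions = [(v, w) for v in range(0, 21, 5) for w in range(-50, 51, 25)]
prefs = {act: b_mf(*act) for act in actions}
```

Note that the objective is independent of ω, which is why the surface in figure 4(a) is flat along the ω axis.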

3.1.2 Maintain heading

The maintain heading behavior has the objective of moving Bouncy towards a given target position:

    b_mh(v, ω) = 1 / (1 + (θ_pred(v, ω) - θ_target)²),    (2)

where θ_pred(v, ω) = θ₀ + ωT is the predicted heading for control parameters (v, ω), θ₀ is the current heading and T is a time constant determined by the command generation frequency. A plot of the objective function is depicted in figure 4(b).
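A direct transcription of eq. (2) follows; the value of T is an assumption for illustration (the paper only says it is set by the command generation frequency):

```python
def b_mh(v, omega, theta_0, theta_target, T=0.1):
    """Maintain-heading objective, eq. (2): prefers control pairs whose
    predicted heading theta_0 + omega*T lies close to the target heading.
    Peaks at 1.0 when the predicted heading equals the target heading."""
    theta_pred = theta_0 + omega * T   # heading predicted one command period ahead
    return 1.0 / (1.0 + (theta_pred - theta_target) ** 2)
```

Like eq. (1) is flat in ω, eq. (2) is flat in v: only the angular velocity affects the predicted heading.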


3.1.3 Avoid obstacle

The objective of this behavior is to keep Bouncy at a safe distance from obstacles. Simulated range sensors are used to detect obstacles and return information about the distance to nearby objects. The obstacle avoidance behavior then calculates, for each path determined by the pair of control parameters (v, ω), the shortest distance to the obstacles on that path. Formally, the objective function is given by:

    b_ao(v, ω) = min{ Far(d) | d = dist(v, ω, o), for all o in Obs },    (3)

where Far: R → [0, 1] is an S-function that maps distances into the interval [0, 1], Obs is the set of detected obstacle positions p = (x, y), and

    dist(v, ω, o) = γ · v/ω     if ω ≠ 0 and (x - x₀)² + (y - y₀)² = (v/ω)²,
                    √(x² + y²)  if ω = 0 and x = 0,
                    ∞           otherwise,    (4)

where c = (x₀, y₀) is the center of the circular path and γ is the angle between the line from c to Bouncy's position and the line from c to p. This is illustrated in figure 5.


[Figure 5: This figure illustrates the geometrical parameters (path center c, radius r, obstacle position p and angle γ) and their relations, explaining how the distance to an obstacle is calculated for a given circular path.]

Figure 4(c) is a plot of the objective function for the example depicted in figure 4(d). Note that the objective function expresses the desire to make a sharp left or right turn (in order to avoid the obstacle).
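Eq. (4) can be sketched in code as below. The coordinate conventions are assumptions (Bouncy at the origin heading along +y, with the path center at (v/ω, 0)); the paper's figure 5 fixes these details, but they are not fully recoverable here. A tolerance replaces exact equality for the on-path test:

```python
import math

def dist(v, omega, obstacle, tol=1e-6):
    """Distance traveled along the path (v, omega) before reaching
    `obstacle` = (x, y); math.inf if the obstacle is not on the path.
    Sketch of eq. (4); Bouncy is assumed at the origin heading +y,
    turning about the center c = (v/omega, 0)."""
    x, y = obstacle
    if abs(omega) > tol:
        r = v / omega
        cx, cy = r, 0.0                    # center of the circular path
        on_path = abs((x - cx) ** 2 + (y - cy) ** 2 - r ** 2) < tol
        if on_path:
            # gamma: angle at c between Bouncy's position and the obstacle
            gamma = abs(math.atan2(y - cy, x - cx) - math.atan2(-cy, -cx))
            return gamma * abs(r)          # arc length = gamma * r
    elif abs(x) < tol:                     # straight path: obstacle dead ahead
        return math.hypot(x, y)
    return math.inf
```

The objective b_ao is then the minimum of Far(dist(v, ω, o)) over all detected obstacles, so a path that passes close to any obstacle scores low.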

3.1.4 Multiple objective action selection

In order to coordinate the activities of the elementary navigation and safety behaviors (obstacle avoidance, maintain heading and move forward) we use a multiple objective action selection mechanism (see [4]). Based on the preferences of the behaviors, the mechanism selects the action, i.e., the (v, ω) pair, that satisfies all behaviors as well as possible. There are many approaches to finding such actions. The one used for controlling Bouncy is the lexicographic method.

Lexicographic method for multiple objective action selection

In the lexicographic method it is required to rank-order the importance of each behavior relative to the other behaviors. Assume that the behaviors are ranked in decreasing order of importance, o₁, o₂, ..., oₙ. Then a sequential elimination process is started by solving the following sequence of problems until either a unique solution is found or all the problems are solved:

    P₁: max over x in X of o₁(x),
    P₂: max over x in X₁ of o₂(x),
    ...
    Pᵢ: max over x in Xᵢ₋₁ of oᵢ(x),  where Xᵢ₋₁ = {x | x solves Pᵢ₋₁}, i = 2, ..., n.    (5)

The basic idea is that the first behavior's objective function is used to screen the solutions available to the second behavior's objective function, and so on. In our case we rank-order the behaviors in the following decreasing order: b_ao, b_mh, b_mf. Thus obstacle avoidance has the highest priority, followed by maintain heading, and move forward is assigned the lowest priority. The following set of problems then has to be solved:

    P₁: max over (v, ω) in X of b_ao(v, ω),
    P₂: max over (v, ω) in X′ of b_mh(v, ω),  where X′ = {(v, ω) | (v, ω) solves P₁},
    P₃: max over (v, ω) in X″ of b_mf(v, ω),  where X″ = {(v, ω) | (v, ω) solves P₂}.    (6)
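The sequential elimination of eqs. (5) and (6) can be sketched generically as follows; the function name and the tolerance used to keep near-maximal actions are assumptions:

```python
def lexicographic_select(actions, objectives, tol=1e-9):
    """Lexicographic action selection (eqs. 5/6): repeatedly keep only the
    maximizers of each objective function, taken in decreasing order of
    importance, until a unique action remains (or the objectives run out)."""
    candidates = list(actions)
    for obj in objectives:                 # e.g. [b_ao, b_mh, b_mf]
        best = max(obj(a) for a in candidates)
        candidates = [a for a in candidates if obj(a) >= best - tol]
        if len(candidates) == 1:           # unique solution found: stop early
            break
    return candidates[0]
```

For example, with objectives [b_ao, b_mh, b_mf], only the safest (v, ω) pairs survive the first round; among those, the ones best aligned with the target heading survive the second; the fastest of what remains wins.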

3.2 Liveliness behaviors

To keep things simple, in its first implementation Bouncy was equipped with three main liveliness behaviors:

- Play: causes Bouncy to wander about and play when the user is not interacting with him,
- Sleep: makes sure that Bouncy gets enough rest,
- Interact: makes Bouncy go to and/or follow the user and engage in interaction with the user.

Bouncy's purpose in (virtual) life is to play, interact with and please his owner (the user), and sleep. A relevant behavior is activated according to Bouncy's mental state as well as the situation at hand. For example, if the user calls Bouncy, then the `Interact' behavior should be activated only if Bouncy is interested; if he is not interested he should respond differently when he is called. It is the action selection mechanism that determines which behavior to activate in a given situation.

Once activated, a behavior takes control of Bouncy's motoric and mental capacities. Each behavior is designed to articulate Bouncy so as to manifest the correct attitude and behavior. For example, the `Play' behavior causes Bouncy to "run" and jump in a playful manner. Bouncy's attitude and behavior are also influenced by his mental state: whether he is happy or sad, excited or not, etc.

Bouncy's liveliness behaviors are mutually exclusive, i.e., only one can be active at a time. Thus it is possible to control the activation of his behaviors using a straightforward action selection mechanism known as Discrete Event Systems [2], which is based on the finite state automaton (FSA) formalism. Figure 6 depicts the finite state machine that is used to describe Bouncy's action selection mechanism. There is one state corresponding to each behavior. State transitions are caused by events denoted tired, rested, etc. How these events are generated will be discussed in the next section.

[Figure 6: Finite state machine describing the action/behavior selection mechanism for controlling Bouncy. States: Sleep, Play, Interact; transitions are triggered by the events rested, tired, bored and called.]

For example, in the Play state the `Play' behavior is activated and Bouncy will start running and jumping around. If Bouncy is called, a transition is made to the Interact state and hence the `Interact' behavior is activated. The `Interact' behavior is a more complex behavior which is also described using a finite state formalism, as depicted in figure 7. Note that this FSA introduces 3 additional states and hence 3 additional corresponding behaviors. The interact state in figure 6 could thus be replaced by the finite state machine in figure 7. However, by making this abstraction the overall behavior of Bouncy is much more clearly described in figure 6.
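The liveliness FSA can be sketched as a transition table. The entries below are a partial reconstruction from figure 6 and the surrounding text (e.g. "tired" moves Play to Sleep, "called" moves Play to Interact) and should be read as assumptions about the full table:

```python
# Partial transition table for the liveliness FSA of figure 6.
# Keys are (current state, event); values are the next state.
TRANSITIONS = {
    ("play", "tired"): "sleep",
    ("play", "called"): "interact",
    ("sleep", "rested"): "play",
    ("interact", "bored"): "play",
    ("interact", "tired"): "sleep",
}

def step(state, event):
    """Advance the FSA one event; events with no defined transition
    from the current state leave it unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Because only one liveliness behavior is active at a time, the current FSA state directly names the behavior that owns Bouncy's motoric and mental capacities.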


[Figure 7: Finite state machine describing the `Interact' behavior. States include play, tease, please, sleep and have-the-blues; transitions are triggered by events such as called, bored, comforted, scolded and tired.]

3.3 Believability behaviors

The believability behaviors allow Bouncy to express his mood and emotions in order to increase the credibility of his being alive. The behaviors of this class consist of Smile, Wag Tail and Breathe. The Smile behavior controls Bouncy's mouth `muscles' to smile or look sad. This behavior looks at Bouncy's internal mental state (see next section) and controls the mouth to express the correct attitude. It is the liveliness behaviors that trigger these behaviors. Figure 8 shows how Bouncy reacts to user interaction. When the user calls Bouncy, he becomes alert. When petted, Bouncy becomes happy and smiles, and he becomes sad when the user shouts at him.

[Figure 8: Bouncy's reactions to user interaction. The type of user interaction is indicated by the text below each picture: "Bouncy Bouncy" (calling), "Good dog Bouncy" (petting) and "Bad dog Bouncy" (scolding).]

The Wag Tail behavior works in a similar manner: the way in which Bouncy wags his tail depends on the internal mental variables. The Breathe behavior is a simple behavior that manipulates the graphics used for the visualization of Bouncy to give an impression of Bouncy breathing.


4 Bouncy's mental attitude

Bouncy is designed as a playful creature with a simple yet appealing personality. We chose to design a creature with dog-like characteristics, attributes and mentality. Given Bouncy's limited behavior repertoire, we chose three parameters to constitute a computational model of Bouncy's mentality: sleepiness, excitedness and mood. The values of the attributes range from -1 to 1, where -1 corresponds to low and 1 to high. The values of these attributes are updated as a function of time and sensory stimuli. Furthermore, the attributes are updated differently depending on which activity (behavior) Bouncy is engaged in.

[Table: attribute update rules. For each behavior state (Sleep, Blues, Play, Tease, Please) the table specifies how the events sound, scold, pet and the passage of time increment (↑), decrement (↓) or reset (→0) the attributes excitedness, sleepiness and mood. The individual entries are not recoverable from this copy. Nomenclature: increment ↑, decrement ↓, reset →0.]

Based on the values of these parameters we define a set of perceptions for Bouncy according to the following table:

[Table: mapping from the attribute values (low, not low, high) of excitedness, sleepiness and mood to the perceptions rested, tired, bored, comforted and scolded. The individual entries are not recoverable from this copy.]

The boldfaced entries indicate that the value of the corresponding attribute should be weighted higher than the other attributes. Thus, using this table, the values of the internal variables are translated into perceptions for Bouncy. These perceptions cause transitions from one state to another and hence the activation of a relevant behavior (see figure 6).
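The overall mechanism, three clamped attributes updated by events and thresholded into perceptions, can be sketched as below. The thresholds, update magnitudes and the particular attribute-to-perception rules are illustrative assumptions; the paper's table defines the actual mapping:

```python
class MentalState:
    """Sketch of Bouncy's mental model: three attributes in [-1, 1],
    updated by events and thresholded into perceptions (FSA events)."""

    def __init__(self):
        self.excitedness = 0.0
        self.sleepiness = 0.0
        self.mood = 0.0

    def apply(self, attr, delta):
        """Increment/decrement an attribute, clamping it to [-1, 1]."""
        value = getattr(self, attr) + delta
        setattr(self, attr, max(-1.0, min(1.0, value)))

    def perceptions(self, low=-0.5, high=0.5):
        """Translate attribute values into perceptions (partial sketch;
        thresholds and rules are assumptions, not the paper's table)."""
        events = []
        if self.sleepiness < low:
            events.append("rested")
        elif self.sleepiness > high:
            events.append("tired")
        if self.excitedness < low:
            events.append("bored")
        return events
```

These perceptions are exactly the events (rested, tired, bored, ...) that drive the state transitions of the FSA in figure 6.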

5 Discussions

Using this behavior-based approach we have created Bouncy, who can react to a number of stimuli and hence interact with a user in many interesting ways. Recall that Bouncy was developed for the STAGING project, which aims at using virtual reality systems in staging plays. In this first


prototype, Vespa I, there is no specified storyline. Rather, the user creates a story on-line by improvisation and through interaction with Bouncy. Bouncy has been demonstrated to over 400 individuals with very different backgrounds, in age groups ranging from 7 to over 70 years old. Our observation of people's reactions to Bouncy is that most find him amusing and quite entertaining. Many react with sympathy towards Bouncy when he becomes sad because we scold him during the demonstrations. However, we realize that the interaction that Bouncy provides supports only a very limited set of "dramatic plays". In future prototypes we intend to incorporate script execution capabilities in our life-like agents.

References

[1] Ronald C. Arkin. Behavior-Based Robotics. Intelligent Robotics and Autonomous Agents series. MIT Press, May 1998.

[2] Jana Košecká and Ruzena Bajcsy. Discrete Event Systems for Autonomous Mobile Agents. In Proceedings of Intelligent Robotic Systems '93, Zakopane, pages 21-31, July 1993.

[3] Pattie Maes. How To Do the Right Thing. Technical Report NE 43-836, AI Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139, USA, 1989.

[4] Paolo Pirjanian. Multiple Objective Action Selection & Behavior Fusion using Voting. PhD thesis, Department of Medical Informatics and Image Analysis, Institute of Electronic Systems, Aalborg University, Fredrik Bajers Vej 7, DK-9220 Aalborg, Denmark, August 1998. Available online: http://www.vision.auc.dk/~paolo/publications.
