Robots for social training of autistic children

Empowering the therapists in intensive training programs

Emilia I. Barakova, Faculty of Industrial Design, Eindhoven University of Technology, Den Dolech 2, 5612 AZ Eindhoven, The Netherlands, [email protected]

Abstract—We apply a participatory co-creation process to empower health researchers and practitioners with robot assistants or mediators in behavioral therapies for children with autism. This process combines (a) the user-centered design of a platform that supports therapists in creating and sharing behavioral training scenarios with robots, and (b) the acquisition of domain-specific knowledge from the therapists in order to design robot-child interaction scenarios that accomplish specific learning goals. These two aspects of the process are mutually dependent and therefore require an iterative design of a technological platform that makes gradual steps towards creating optimal affordances for therapists to create robot-mediated scenarios to the best of the technical capabilities of the robot, i.e., through co-creation. For this purpose, an end-user programming environment augmented with a learning-by-demonstration tool and textual commands is being developed. The initial tests showed that this tool can be used by therapists to create their own training scenarios from existing behavioral components. We conclude that this platform complies with the needs of contemporary practices that focus on personalization of the training programs for every child. In addition, the proposed framework makes it possible to include extensions to the platform for future developments.

Keywords—end-user interfaces; robots for training autistic children; robot-assisted therapy; co-creation design process


I. INTRODUCTION

Finding ways for the uptake and deployment of robots and other unconventional technologies in therapies has the potential to make very intensive therapeutic interventions available to larger groups of society. Autism spectrum disorders (ASD) are conditions for which no curative treatment is available, but intensive behavioral interventions with young children, lasting a year or longer, may bring substantial improvements [1]. Searching for how this training can benefit from the use of robots, we identified the therapists as a key element of this process: (1) only the domain knowledge of the therapists can lead to the creation of efficient training programs with robots, and (2) the actual acceptance and participation of the therapists in this process will ensure the uptake of robot-mediated training.

Robots with different embodiments and levels of anthropomorphism have been used to train shared gaze and joint attention abilities [2-3]. Attempts have been made to use robots to improve imitation and turn-taking skills [4-9], to teach facial and body emotions [10], and to initiate social interaction [6,11-12]. Even though some of these studies show promising results, they are fragmented and do not systematically train the children towards a sustainable improvement. In addition, it is not clear whether therapists will actually use these methods in daily practice: there is a range of problems, such as busy schedules, organizational difficulties, and, of course, fear of the technical complexity of controlling a robot.

The issue of creating meaningful training sessions with a robot has been addressed previously in [13]. However, in that study the therapist was a knowledge provider and was not meant to take part in the training practice. In contrast, we engaged the therapists in a co-creation process for the development of scenarios that they would like to use as an augmentation to their practice. The "therapist-in-the-loop" approach was proposed by Colton and colleagues in [15], in which the authors attempt to engage the child and facilitate social interactions between the child and a team of therapists. However, they do not consider designing a programming tool that gives affordances to training practices. We are searching for ways to include robot interventions within established and modern therapies, with the far-reaching goal that, in some of the training scenarios, parts of the work of the therapists can be taken over by a robot companion.

This paper features the technical aspects of this enabling process within the framework of its societal necessity. Section II outlines the framework for a user platform within the user context and the different building parts of the platform. Section III reports the results of the first iteration of the design process, in both technical and usability contexts. Section IV discusses our findings and gives suggestions for future research.


II. END-USER PROGRAMMING PLATFORM BASED ON USER REQUIREMENTS

The intention of this work is to make programming of behavior of arbitrary complexity possible for non-technically educated people such as therapists. For this purpose we combine the end-user visual programming environment TiViPE with a learning-by-imitation tool as an interface between a robot and a therapist (Figure 1). With TiViPE, scenarios for specific learning objectives can easily be put together as if they were graphical LEGO-like building blocks. The functionality of the behaviors in the existing building blocks can be fine-tuned by textual commands typed within the blocks, or blocks with new functionality can be created. This platform controls the NAO robot [14].

Figure 1. Framework for an interface between a robot and a therapist: the end-user programming environment TiViPE offers Lego-like programming by connecting behavioral blocks, a textual language to fine-tune behaviors, and a learning-by-demonstration tool.

A. Learning by Imitation

Imitation learning is a technique for programming basic movement skills into robotic systems. The robotic system 'learns' a skill either by observing human demonstrations of the desired behavior, or by a demonstrator moving the robot's limbs and body parts. In the current project, imitation learning techniques could potentially help therapists to create their own training programs. Simple demonstration of the desired behavior to the robot could drastically reduce the required technical knowledge of the therapist. Furthermore, demonstrating the desired behavior could potentially result in more human-like behavior of the robot than a design made by graphical programming.

Imitation learning differs from simply repeating the demonstrated example behaviors, because in real-life situations the movement often has to be performed with different initial conditions, with avoidance of obstacles, and with consideration of the different embodiment of the robot and the human demonstrator.

In early research on imitation learning, robotic systems were programmed to replay prerecorded trajectories. A human operator designs the trajectory in detail, thereby using his or her own knowledge of the task at hand. This static approach enables the high accuracies and operational velocities required in

industrial settings. However, it requires structured and highly dependable operating environments, i.e., it is not suitable for natural environments and interaction with humans. In unstructured environments such systems fail to perform adequately, since they are unable to adapt to new situations. Furthermore, planning the trajectories requires a robotics expert and can be very time consuming.

A different approach to robot programming is the use of learning techniques. This approach does not prescribe trajectories but iteratively creates and enhances its own trajectories to fulfill certain optimization requirements. The advantage is that the robot can operate in heavily unstructured environments. However, learning the trajectories can become very time consuming as the number of degrees of freedom (i.e., the search space) increases. Furthermore, defining a suitable optimization criterion is a difficult process.

The imitation learning we use combines these two methods. The desired behavior is first demonstrated to the robot, resulting in examples of good trajectories. Subsequently, learning techniques are applied to further enhance and adjust the trajectories. The rationale is to make use of the human knowledge of a task that would normally be encoded in prerecorded trajectories. By showing examples to the robot, the users effectively reduce the search space, and the learning algorithm thereby avoids unnecessary learning trials [23]. Imitation learning potentially enables non-experts to program robotic systems by a simple demonstration of the correct behavior; therefore, the term programming by demonstration (PbD) is often used.

In the current work, imitation learning techniques could help therapists to show movements and movement sequences to the robot. However, programming the overall behavior and interaction of the robot, which requires parallel movement commands accompanied by speech and guided by interactive responses to human behavior, cannot rely entirely on this method. So far we mainly work with the graphical programming environment, and the learning-by-demonstration tool is developed as an additional feature.

B. End-user Graphical Environment TiViPE

The robot software framework includes commands that interface with the robot, a language that ensures non-conflicting operation of these commands under parallel actions and multiple conflicting action possibilities, and a visual programming environment that allows easy construction and visualization of the action stream. From the perspective of the end user, actual programming consists of connecting Lego-like blocks. In addition, a textual robot language makes it possible to create and execute action behaviors within the TiViPE interface. This graphical language ensures that neither in-depth knowledge of robot hardware or software is required, nor is the designer of behaviors confined to a specific robot.

A minimalistic language in which robot commands can be executed in a serial or parallel manner is proposed in [21]. The language resembles adding and multiplying of symbols, where the symbols are equivalent to behavioral primitives and adding or multiplying defines parallel or serial actions, correspondingly.
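To illustrate the idea, the following minimal Python sketch mimics such an algebra with operator overloading. The primitive names and composition classes are hypothetical illustrations for this sketch only; the concrete syntax of the actual TiViPE robot language is defined in [21].

```python
# Minimal sketch of a serial/parallel composition algebra for behavioral
# primitives: '+' composes behaviors in parallel, '*' in series.

class Behavior:
    def __add__(self, other):   # parallel composition
        return Parallel(self, other)
    def __mul__(self, other):   # serial composition
        return Serial(self, other)

class Primitive(Behavior):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

class Parallel(Behavior):
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __repr__(self):
        return f"({self.a} + {self.b})"

class Serial(Behavior):
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __repr__(self):
        return f"({self.a} * {self.b})"

# Wave while speaking, then sit down (hypothetical primitive names):
wave, speak, sit = Primitive("wave"), Primitive("speak"), Primitive("sit")
scenario = (wave + speak) * sit
print(scenario)   # prints: ((wave + speak) * sit)
```

The sketch only mirrors the algebraic structure of the language; in TiViPE such compositions are entered as textual commands inside the graphical blocks.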


Using sensory information is crucial for constructing an interactive system. It needs to solve issues such as how to construct parallel behaviors that have different cycle (or update) speeds. Moreover, there is a discrepancy between the capacity of a robot to perceive (read sensory information) and to act: the capacity to act is around two orders of magnitude smaller than the sensing capacity.

Creating lively behaviors that are robust to external disturbances and to unexpected behavior of the person interacting with the robot is possible through the introduction of a state-space concept. Within this concept a single behavior, which we call a behavioral primitive, can be modeled as a single state in a dynamical system. The state space is a collection of primitives that describes a continuous sequence of actions, and the evolution rule predicts the next state or states. Formally, the state space can be defined as a tuple [N; A], where N is a set of states and A is a set of transitions connecting the states. An example of such a state space is given in Figure 2.



Figure 2. An example of constructing behaviors in state space: a) flowchart diagram of a state-space-based behavior; b) TiViPE implementation of this behavior.

These states can be converted into a flow diagram, as given in the left part of this figure. In this diagram every module has two internal parameters: the first gives the identifier of the state, and the second gives a list of transitions (visualized as arcs). In Figure 2b a TiViPE implementation of this state space is visualized.
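As a rough illustration of the [N; A] concept, the following Python sketch runs a small set of states whose transitions are guarded by sensing predicates. The state names and the sensing predicate are hypothetical examples for this sketch, not components of the actual platform.

```python
# Minimal sketch of the state-space concept: each behavioral primitive is a
# state with an identifier and a list of guarded transitions, mirroring the
# two internal parameters of the modules in Figure 2a.

def child_waved(sensors):
    # Hypothetical sensing predicate over the robot's sensory snapshot.
    return sensors.get("wave_detected", False)

# [N; A]: states N with their actions, transitions A guarded by predicates.
states = {
    "greet":  {"action": "say_hello",
               "transitions": [("wait", lambda s: True)]},
    "wait":   {"action": "idle",
               "transitions": [("reward", child_waved),
                               ("wait", lambda s: True)]},
    "reward": {"action": "play_song",
               "transitions": []},            # terminal state
}

def run(start, sensor_stream):
    state = start
    for sensors in sensor_stream:
        print(f"state={state}, action={states[state]['action']}")
        for nxt, guard in states[state]["transitions"]:
            if guard(sensors):                # evolution rule: the first
                state = nxt                   # satisfied guard wins
                break
        else:
            return state                      # no transition left: stop
    return state

run("greet", [{}, {}, {"wave_detected": True}, {}])
```

The robot idles in "wait" until the sensing predicate fires, then moves to the rewarding behavior, which is the kind of sensory-driven cycle the box-wire networks in TiViPE express graphically.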

III. RESULTS

So far, two groups of results have been achieved. The first group consists of technical developments, and the second of testing the usability of the technology. These results are interdependent and were obtained in an iterative co-creation process. The following steps have to be performed multiple times within this process: (1) a concept for a domain-specific solution is developed, followed by (2) technical implementation of this concept and (3) testing of the concept in a clinical setting. The lessons learned are analyzed to define the tuned domain solution for the following design cycle. The main goal of the developed platform is to facilitate therapists to create custom robot scenarios for children for whom different learning objectives are set. Within the first cycle, the enabling technology was created and its usability tested.

The learning-by-demonstration tool is developed with the intention to facilitate the creation of individual movements and simple instrumental tasks. It is intended to augment the programming environment within which behaviors of higher complexity are created. Examples of such behaviors are using multiple body movements and speech of the robot in parallel, performing sequences of movements and cycles of actions, and taking decisions to act as a result of a sensing and reasoning process. As said in the previous section, learning by demonstration differs from simply repeating the example behaviors, because in real-life situations the movement often has to be performed with different initial conditions (especially for a robot with the technical accuracy of NAO, which is not meant for precision handling) and with avoidance of obstacles. When the demonstrator shows a behavior to the robot, he or she will leave the robot hand in an arbitrary position before the repetition by the robot takes place; the demonstrator can never manually position the robot hand in exactly the same initial position, even if that were desirable. Therefore, the provided examples should be abstracted to represent a generalized basic skill, which we define as a movement primitive. The construction of such a movement primitive is illustrated in Figure 3.

Figure 3. Construction of a movement primitive.

Movement primitives contain a mathematical abstraction of the task-space trajectories x_j and are created as a set of nonlinear differential equations, as suggested in [19]. Equations (1) and (2) describe this process:

f_j(s) = \frac{\sum_i w_i \psi_i(s)}{\sum_i \psi_i(s)} \, s \qquad (1)

\tau^2 \ddot{x}_j = \alpha \big( \beta (g_j - x_j) - \tau \dot{x}_j \big) + f_j(s) \qquad (2)


where g_j represents the (possibly changing) goal state of the trajectory x_j, and ψ_i are activation functions that become active depending on the normalized time s. The weighting factors w_i are chosen such that the solution of the system reproduces the originally demonstrated trajectory. The advantage of using differential equations for path planning is that they represent a flow field of trajectories rather than one single trajectory. The trajectory so created is translated into the joint space of the robot.

Safe human-robot interaction requires adaptation of the traditional robot control paradigm. Using high-gain control to minimize feedback errors introduces unacceptable safety risks. Low-gain control decreases the safety risks at the expense of tracking performance; furthermore, it puts a strain on the accuracy of the inverse dynamic model used to compute the required feedforward forces for a given trajectory. The introduction of active force control can overcome some of the safety problems. However, in case of 'severe' disturbances such as human interaction, trajectory re-planning is highly desirable. This effectively means adding a high-level control loop to the trajectory generator, which re-plans the desired trajectory in case of human interaction.

Figure 4 shows an example in which the robot learns to move a ball on a plane. First, the demonstrator grasps the robot hand and moves it to perform the desired action. Second, the robot repeats this action.
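Returning to Equations (1) and (2), the sketch below (Python with NumPy) inverts Equation (2) on a synthetic demonstration to obtain a target force, fits the weights w_i by least squares, and integrates the system forward. The gains, the Gaussian activation functions, and the minimum-jerk demonstration are illustrative assumptions of this sketch, not parameters reported for the actual platform.

```python
import numpy as np

# Illustrative movement-primitive sketch in the style of Equations (1)-(2).
# All gains and basis parameters are assumptions for this example only.
alpha, beta, alpha_s, tau = 25.0, 6.25, 4.0, 1.0  # beta = alpha/4: critical damping
n_basis = 20
c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))  # basis centres in s
h = 1.0 / np.gradient(c) ** 2                      # basis widths

def psi(s):
    """Gaussian activation functions of the normalized time s."""
    return np.exp(-h * (s - c) ** 2)

def forcing(s, w):
    """Equation (1): normalized weighted sum of activations, scaled by s."""
    p = psi(s)
    return (p @ w) / (p.sum() + 1e-10) * s

# Synthetic demonstration: a minimum-jerk reach from 0 to 1 over 1 second.
dt = 0.01
t = np.arange(0.0, 1.0, dt)
x_d = 10 * t**3 - 15 * t**4 + 6 * t**5
xd_d = np.gradient(x_d, dt)
xdd_d = np.gradient(xd_d, dt)
g = x_d[-1]                                        # goal state
s = np.exp(-alpha_s * t / tau)                     # canonical (normalized) time

# Invert Equation (2) to get the force the demonstration implies, then fit
# the weights w_i by least squares so that f(s) matches it.
f_target = tau**2 * xdd_d - alpha * (beta * (g - x_d) - tau * xd_d)
Phi = np.array([psi(si) / (psi(si).sum() + 1e-10) * si for si in s])
w = np.linalg.lstsq(Phi, f_target, rcond=None)[0]

# Integrate Equation (2) forward from a slightly perturbed start position.
x, xd = x_d[0] + 0.05, 0.0
for si in s:
    xdd = (alpha * (beta * (g - x) - tau * xd) + forcing(si, w)) / tau**2
    xd += xdd * dt
    x += xd * dt
print(f"goal: {g:.3f}, reproduced end point: {x:.3f}")
```

Because the learned primitive is a differential equation rather than a stored trajectory, the integration still converges towards the goal from the perturbed initial position, which is exactly the flow-field property mentioned above.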

Figure 4. The snapshot on the right shows the robot repeating the movement demonstrated manually in the left snapshot.

Learning by imitation is also constrained by the different embodiment of the demonstrator and the robot. In addition to having the same effect, the demonstrated and the repeated trajectories have to be perceptually similar. In this case, the path- and trajectory-planning problem changes to a force-planning problem. This means that the force f_j can be interpreted as the force that should be applied so that the system defined by Equations (1) and (2) has a solution that matches the demonstration trajectory. The time evolution of the force f_j(s) is considered a key characteristic of the movement. Figure 5 shows the demonstrated and the executed trajectories obtained as a result of the force-planning approach; the two trajectories are perceptually similar.

Figure 5. Two examples of execution trajectories, obtained as a result of the force-planning approach. The demonstrated and the executed trajectories are perceptually similar.

To allow the design of more complex behaviors, TiViPE uses a box-wire approach to create flow charts with behavioral sequences, parallel behaviors and processes, and repetitive behaviors that depend on the sensory state. This complies with the common practice in therapeutic institutions of creating training exercises by setting a learning goal and defining the steps and dependencies for its achievement by the children and the therapists. We match the affordances of the robot and the programming environment with the practice of the therapists, as shown in [17]. Figure 6 depicts a part of a robot-child interaction scenario prepared in a co-creation process with the therapist, and its implementation in TiViPE.

Figure 6. A snapshot of a training scenario created with the TiViPE environment. The insets show, respectively: upper right, the network behind an integrated block; lower right, a textual command block.


Each box in Figure 6 represents a behavioral component with a different level of complexity. It can be connected to other components, creating a new network. Several components can be integrated into a single component by just two mouse clicks: select and merge. The properties of a component can also be modified without requiring recompilation of the TiViPE network. Furthermore, networks of components can also be merged into new components, which enables users to build complex, interactive, and intelligent behaviors.

Along with these technical achievements, two tests were performed to explore whether therapists are actually able to create training scenarios with the robot using TiViPE. The results show that at this point of development the graphical environment could be used by therapists for adaptation of existing scenarios. A pilot test with five participants with a healthcare background and an actual test at the Dr. Leo Kannerhuis clinic for autistic children were performed. The participants were given four different tasks, derived from the Cognitive Dimensions framework [22]. The tasks were defined as incrementation, modification, transcription, and exploratory design; respectively: adding a new component from a library, editing a component that is already integrated in the network, adding a new section to the network, and making their own contribution to the network. For the tasks, a scenario was provided in flowchart form on paper, and a network representing this scenario was also pre-made. The participants had to complete, change, and run this scenario to accomplish the four tasks.

In this first evaluation, all participants were new to the programming environment. The first task was described in detail, and every following task with less detail. This gave the participants the opportunity to become familiar with the functionalities of the program gradually but quickly, so that they could make their own contribution in the last task. The programming environment was not demonstrated beforehand. Instead, a short explanation on paper was provided describing the used components and how they are connected, along with the task list and the flowchart. Participants were asked to think aloud while performing the tasks, and were free to ask questions when they could not progress.

After the tasks, the experience of the participants was evaluated with a short questionnaire. The questionnaire featured 28 statements related to their experience with components and networks in TiViPE, and 7 statements about how the robot was perceived, with extra room for comments at the end. Statements were evaluated on a five-point Likert scale, and each was related to one or more of the cognitive dimensions [22]. The pilot study, which was performed outside the clinic, aimed to make sure the questionnaire and tasks were defined properly. After the pilot, the questionnaire and task list were revised to balance the different dimensions in the questionnaire, and the wording was improved for the actual clinical test.


Both test groups performed the four tasks within 25 minutes. The conclusions drawn during the tests are taken into account in the current iteration of the technical re-design; they concern the overall design of the graphical interface, the need for external (remote) control, and the overall experimental setting.

IV. DISCUSSION AND FUTURE WORK

The experience so far and the initial results show that the iterative co-creation process is an appropriate direction to follow if advanced technology is to be implemented in clinical practice or other application domains where non-technical specialists have to be involved. The technical specialists and the therapists from the autism clinic collaborated on a regular basis. Most of the time spent during this first iteration was devoted to technology development and domain knowledge creation. After the first iteration cycle we found that the process of co-creation of domain-specific knowledge is more time consuming than expected. The experimental evaluation of the initial technical platform established that providing the appropriate tools and creating domain-relevant tasks makes it technically possible for high-tech platforms such as robots to be used in clinics. We cannot yet give guidelines on how the implementation of such a platform in clinical institutions is possible with respect to the human factors involved.

Based on the findings from the tests and additional discussions, we concluded that therapists need more control over the robot, so a remote control option will be introduced; a remote controller is also needed to fake intelligent behavior when the creation of such behavior is beyond the achievements of state-of-the-art research. The developments with respect to the technical platform were directed towards the creation of affordances that make multimodal interaction between a human and a robot possible. The robot needs to have or simulate intelligent behavior, such as spontaneous reaction to sensory information, and robustness to disturbances from humans or other environmental agents. These requirements are satisfied by the introduction of a state-space operation concept with continuous account of multimodal sensory information.

Within this project, two user tests with therapists were performed to explore the usability of the programming environment. The results show that the graphical environment could be used by therapists for personalization of existing scenarios. We aim to enable the therapists themselves to create training scenarios. For this purpose we are developing a critical mass of interactive behavioral components and scenarios. In addition, we are developing a Wiki platform where therapists from different clinics can share the created scenarios.

In the long term, imitation learning techniques could potentially help therapists to create their own treatment programs. Simple demonstration of the desired behavior to the robot could drastically reduce the required technical knowledge of the therapist. Furthermore, imitation is closely related to playful behavior, an important exploratory activity in which known movements can be


combined arbitrarily. This might lead to the exploration of resources unknown up to this point. Training of imitation behavior could be especially helpful for autistic children [4]. Within the clinic we work with, however, training of imitation skills is not an essential learning goal, because of the age and IQ level of the children.

ACKNOWLEDGMENT

I gratefully acknowledge the support of the Innovation-Oriented Research Program 'Integral Product Creation and Realization (IOP IPCR)' of the Netherlands Ministry of Economic Affairs, Agriculture and Innovation. Furthermore, many thanks to the Ph.D. students working on this project, Jan Gillesen and Stijn Boere; to professors Panos Markopoulos, Henk Nijmeijer, and Loe Feijs, all from Eindhoven University of Technology; to Bibi Huskens and Astrid Van Dijk from the autism clinic Dr. Leo Kannerhuis, The Netherlands; and to Tino Lourens from the TiViPE company for their multiple contributions to this multidisciplinary research.

REFERENCES

[1] O. I. Lovaas, "Behavioral treatment and normal educational and intellectual functioning in young autistic children," Journal of Consulting and Clinical Psychology, vol. 55, pp. 3-9, 1987.
[2] H. Kozima and C. Nakagawa, "Interactive robots as facilitators of children's social development," in Mobile Robots: Toward New Applications, 2006, pp. 269-286.
[3] B. Robins, P. Dickerson, P. Stribling, and K. Dautenhahn, "Robot-mediated joint attention in children with autism: A case study in robot-human interaction," Interaction Studies, vol. 5, no. 2, pp. 161-198, Jan. 2004.
[4] G. Bird, J. Leighton, C. Press, and C. Heyes, "Intact automatic imitation of human and robot actions in autism spectrum disorders," Proceedings of the Royal Society B: Biological Sciences, vol. 274, no. 1628, pp. 3027-3031, Dec. 2007.
[5] J. C. J. Brok and E. I. Barakova, "Engaging autistic children in imitation and turn-taking games with multiagent system of interactive lighting blocks," IFIP International Federation for Information Processing, pp. 115-126, 2010.
[6] K. Dautenhahn and I. Werry, "Towards interactive robots in autism therapy," Pragmatics & Cognition, vol. 12, no. 1, pp. 1-35, 2004.
[7] A. Duquette, F. Michaud, and H. Mercier, "Exploring the use of a mobile robot as an imitation agent with children with low-functioning autism," Autonomous Robots, vol. 24, no. 2, pp. 147-157, 2008.
[8] G. Pioggia, R. Igliozzi, M. Sica, M. Ferro, and F. Muratori, "Exploring emotional and imitational android-based interactions in autistic spectrum disorders," Journal of CyberTherapy & Rehabilitation, vol. 1, no. 1, pp. 49-61, 2008.
[9] B. Robins, K. Dautenhahn, R. Te Boekhorst, and A. Billard, "Robotic assistants in therapy and education of children with autism: can a small humanoid robot help encourage social interaction skills?," Universal Access in the Information Society, vol. 4, no. 2, pp. 105-120, 2005.
[10] E. I. Barakova and T. Lourens, "Expressing and interpreting emotional movements in social games with robots," Personal and Ubiquitous Computing, vol. 14, pp. 457-467, Jan. 2010.
[11] E. I. Barakova, J. Gillesen, and L. Feijs, "Social training of autistic children with interactive intelligent agents," Journal of Integrative Neuroscience, vol. 8, no. 1, pp. 23-34, 2009.
[12] D. Feil-Seifer and M. Mataric, "Robot-assisted therapy for children with autism spectrum disorders," in Proceedings of the 7th International Conference on Interaction Design and Children (IDC '08), 2008, p. 49.
[13] T. Bernd, G. J. Gelderblom, S. Vanstipelen, and L. De Witte, "Short term effect evaluation of IROMEC involved therapy for children with intellectual disabilities," in Proceedings of the Second International Conference on Social Robotics, 2010, pp. 259-264.
[14] Aldebaran Robotics, http://www.aldebaran-robotics.com
[15] M. B. Colton, D. J. Ricks, M. A. Goodrich, B. Dariush, K. Fujimura, and M. Fujiki, "Toward therapist-in-the-loop assistive robotics for children with autism and specific language impairment," in AISB New Frontiers in Human-Robot Interaction Symposium, 2009, vol. 24, p. 25.
[16] K. Hume, S. Bellini, and C. Pratt, "The usage and perceived outcomes of early intervention and early childhood programs for young children with autism spectrum disorder," Topics in Early Childhood Special Education, vol. 25, no. 4, pp. 195-207, 2005.
[17] J. C. C. Gillesen, E. I. Barakova, B. E. B. M. Huskens, and L. M. G. Feijs, "From training to robot behavior: Towards custom scenarios for robotics in training programs for ASD," in IEEE International Conference on Rehabilitation Robotics, 2011, pp. 387-393.
[18] R. L. Simpson, "Evidence-based practices and students with autism spectrum disorders," Focus on Autism and Other Developmental Disabilities, vol. 20, no. 3, pp. 140-149, Jan. 2005.
[19] A. J. Ijspeert, J. Nakanishi, and S. Schaal, "Movement imitation with nonlinear dynamical systems in humanoid robots," in Proceedings of the 2002 IEEE International Conference on Robotics and Automation, 2002, pp. 1398-1403.
[20] T. Lourens, "TiViPE—Tino's Visual Programming Environment," in The 28th Annual International Computer Software & Applications Conference (IEEE COMPSAC), 2004, pp. 10-15.
[21] T. Lourens and E. I. Barakova, "User-friendly robot environment for creation of social scenarios," in Foundations on Natural and Artificial Computation, J. Ferrández et al., Eds., LNCS 6686, Springer, 2011, pp. 212-221.
[22] T. Green and A. Blackwell, "Cognitive dimensions of information artefacts: a tutorial," BCS HCI Conference, 1998.
[23] A. Billard, S. Calinon, R. Dillmann, and S. Schaal, "Robot programming by demonstration," in Handbook of Robotics, B. Siciliano and O. Khatib, Eds., Springer, 2008, pp. 1371-1394.
