EASY RECONFIGURATION OF MODULAR INDUSTRIAL COLLABORATIVE ROBOTS
BY CASPER SCHOU
DISSERTATION SUBMITTED 2016

Easy Reconfiguration of Modular Industrial Collaborative Robots

PhD Dissertation

Casper Schou

Dissertation submitted 2016

Thesis submitted: August, 2016

PhD Supervisor: Professor Ole Madsen, Aalborg University

PhD Committee:
Associate Professor Poul H. K. Hansen (chairman), Aalborg University, Denmark
Professor Gunnar Bolmsjö, University West, Sweden
Associate Professor Lars-Peter Ellekilde, University of Southern Denmark

PhD Series: Faculty of Engineering and Science, Aalborg University

ISSN: xxxx-xxxx
ISBN: xxx-xx-xxxx-xxx-x

Published by:
Aalborg University Press
Skjernvej 4A, 2nd floor
DK-9220 Aalborg Ø
Phone: +45 99407140
[email protected]
forlag.aau.dk

© Copyright by Casper Schou

Printed in Denmark by Rosendahls, 2016

Abstract

The change of the global industrial scene over the past decades has made it increasingly challenging for high-wage European countries to retain manufacturing activities. To remain competitive, the manufacturing industry in high-wage countries has excelled in the use of traditional automation. However, due to the increasing number of product variants, the increasing need for shorter innovation cycles, and uncertain product prospects, the industry now needs not only automated production, but also production systems that are flexible and agile. In the domain of industrial robots, collaborative robots have gained high interest in both research and industry as a flexible, agile robotic concept. In contrast to traditional industrial robots, collaborative robots are not isolated by fences, but work alongside the production staff, collaborating to perform common tasks. This change of environment imposes a much more dynamic life cycle for the robot, which consequently requires new ways of interacting.

This PhD thesis investigates how the changeover to a new task on a collaborative robot can be performed by the shop floor operators already working alongside the robot. To effectively perform the changeover, the operator must both reconfigure the hardware of the robot and re-program the robot to match the new task. The research in this thesis focuses on both aspects.

To enable shop floor operators to quickly and intuitively program the robot, this thesis proposes the use of robot skills with a manual parameterization method. Skills represent parametric, task-related capabilities of the robot which can be aggregated into robot tasks. This thesis investigates how these skills can be parameterized intuitively and to what extent this requires robotic knowledge. A skill-based robot operating tool along with a number of skills are developed. Studies of usability, the required level of expertise, and the industrial readiness of the technology are conducted.
Reconfiguring the hardware of a collaborative robot entails adding, removing, or modifying some of its components. In this thesis a modular architecture is applied to the robotic system, segmenting the hardware into well-defined modules. To reconfigure the hardware, the shop floor operator must first determine a feasible hardware solution and afterwards adapt the physical hardware accordingly. This thesis investigates how software configurator tools can aid the operator in selecting appropriate hardware modules, and how agent-based control structures can be used to make module exchange efficient and effortless for the operator while maintaining a high utilization of module functionality.


Dansk Resumé

Over de seneste årtier er det løbende blevet vanskeligere for europæiske lande med et højt lønniveau at bevare produktionsaktiviteter. Fremstillingsindustrien i disse lande har traditionelt brugt automatisering for at bevare konkurrenceevnen, men i takt med en hyppigere produktinnovation, et øget antal produktvarianter og mere usikre produktprognoser stiger behovet for mere fleksibelt og omstillingsparat produktionsudstyr. Inden for robotter er der både i forskningen og industrien stor interesse for samarbejdende robotter, som er en ny type robotter med høj fleksibilitet og omstillingsparathed. I modsætning til traditionelle industrirobotter er de ikke isoleret fra mennesker bag høje hegn, men samarbejder derimod med produktionsassistenterne. Skiftet til det menneskelige arbejdsmiljø medfører dog en noget anden livscyklus for robotten, hvilket kræver nye måder at interagere med robotten på.

Denne PhD-afhandling undersøger, hvordan omstilling til en ny opgave for en samarbejdende robot kan udføres af produktionsassistenter. En effektiv omstilling af robotten kræver både konfigurering af robottens hardware og programmering af robotten til den nye opgave. Som følge heraf er forskningen i denne afhandling fokuseret på henholdsvis konfigurering af hardware og robotprogrammering for samarbejdende robotter.

For at gøre produktionsassistenter i stand til hurtigt og intuitivt at programmere samarbejdende robotter er der i denne afhandling udviklet en manuel parameteriseringsmetode for såkaldte skills. Skills repræsenterer parametriske, opgaverelaterede kapabiliteter af robotten, som kan sammensættes til en konkret opgave. Denne afhandling undersøger, hvordan skills kan parameteriseres intuitivt, og i hvilket omfang dette kræver robotkompetencer. Et skill-baseret robotsystem og flere skills er udviklet igennem disse undersøgelser, og studier af både brugervenlighed og industrielle applikationer er udført.
Konfigurering af hardware for samarbejdende robotter indebærer at tilføje, fjerne eller modificere nogle af robottens komponenter. I denne afhandling anvendes en modulær arkitektur for robottens hardware, hvilket medfører, at den inddeles i standardiserede moduler. For at konfigurere hardwaren skal produktionsassistenten først bestemme en acceptabel hardwareløsning og derefter tilpasse den fysiske hardware tilsvarende. Denne afhandling undersøger, hvordan konfigurator-softwareløsninger kan understøtte produktionsassistenten i denne opgave, og hvordan agentbaserede kontrolarkitekturer kan udnyttes til at gøre udskiftning af moduler effektiv og nem for produktionsassistenten, alt imens en høj udnyttelsesgrad af modulernes funktioner opnås.

Contents

Abstract  iii
Dansk Resumé  v
Thesis Details  ix
Preface  xiii

I  Introduction  1

1  Project Motivation  3
   1.1  Manufacturing Paradigms  3
   1.2  Industrial Robots  6
   1.3  Collaborative Robots  8
   1.4  Initiating Research Problem  9

2  Related Research  11
   2.1  Collaborative Robots  11
   2.2  Robot Programming  18
   2.3  Hardware Reconfiguration  24
   2.4  Conclusion  38

3  Research Hypothesis and Objectives  41
   3.1  Hypothesis  41
   3.2  Research Objectives  42
   3.3  Project Delimitation  43
   3.4  Research Methodology  43

II  Summary Report  45

4  Skill-Based Programming  47
   4.1  Robot Skills  47
   4.2  Manual Robot Programming Using Skills  50
   4.3  Usability of Skill-Based Programming  56
   4.4  Realizing Skills  58
   4.5  Industrial Application of Skill-Based Programming  64
   4.6  Conclusion  69

5  Hardware Reconfiguration  73
   5.1  Modular Hardware  74
   5.2  Module Selection  84
   5.3  Hardware Management Framework  103
   5.4  Conclusion  108

6  Conclusion  111
   6.1  Summary of Contributions  111
   6.2  Concluding Remarks  113
   6.3  Future Work  115

Glossary  117

References  120

III  Papers  133

Thesis Details

Thesis Title: Easy Reconfiguration of Modular Industrial Collaborative Robots
PhD Student: Casper Schou
Supervisor: Professor Ole Madsen, Aalborg University

The main body of this thesis consists of the following papers.

1. Mikkel R. Pedersen, Lazaros Nalpantidis, Rasmus S. Andersen, Casper Schou, Simon Bøgh, Volker Krüger, and Ole Madsen, "Robot Skills for Manufacturing: From Concept to Industrial Deployment," Robotics and Computer-Integrated Manufacturing (RCIM), Vol. 37, pp. 282–291, 2016.

2. Casper Schou, Rasmus S. Andersen, Dimitrios Chrysostomou, Simon Bøgh, and Ole Madsen, "Skill Based Instruction of Collaborative Robots in Industrial Settings," Robotics and Computer-Integrated Manufacturing (RCIM), in peer review, submitted June 2016.

3. Casper Schou, Jens S. Damgaard, Simon Bøgh, and Ole Madsen, "Human-Robot Interface for Instructing Industrial Tasks using Kinesthetic Teaching," Proceedings of the 44th International Symposium on Robotics (ISR), pp. 1–6, 2013.

4. Rasmus S. Andersen, Casper Schou, Jens S. Damgaard, and Ole Madsen, "Using a Flexible Skill-Based Approach to Recognize Objects in Industrial Scenarios," Presented at the 47th International Symposium on Robotics (ISR), awaiting publication, 2016.

5. Ole Madsen, Simon Bøgh, Casper Schou, Rasmus S. Andersen, Jens S. Damgaard, Mikkel R. Pedersen, and Volker Krüger, "Integration of Mobile Manipulators in an Industrial Production," Industrial Robot, Vol. 42, pp. 11–18, 2015. Selected by the journal's editorial team as a Highly Commended Paper in the 2016 Emerald Literati Network Awards for Excellence.

6. Simon Bøgh, Casper Schou, Thomas Rühr, Yevgen Kogan, Andreas Dömel, Manuel Brucker, Christof Eberst, Riccardo Tornese, Christoph Sprunk, Gian D. Tipaldi, and Trine V. Hennessy, "Integration and Assessment of Multiple Mobile Manipulators in a Real-World Industrial Production Facility," Proceedings of the 45th International Symposium on Robotics (ISR) and 8th German Conference on Robotics (Robotik), pp. 305–312, 2014.

7. Casper Schou and Ole Madsen, "Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots," Workshop on Collaborative Robots for Industrial Applications at the 19th International Conference on Climbing and Walking Robots and Support Technologies for Mobile Machines (CLAWAR), peer-reviewed and accepted for presentation, 2016.

8. Casper Schou and Ole Madsen, "A Plug and Produce Framework for Industrial Collaborative Robots," International Journal of Advanced Robotic Systems (IJARS), in peer review, submitted May 2016.

In addition to the main papers, the following publications have also been made.

A. Casper Schou, Christian Carøe, Mikkel Hvilshøj, Jens S. Damgaard, Simon Bøgh, and Ole Madsen, "Human Assisted Instruction of Autonomous Industrial Mobile Manipulator and its Qualitative Assessment," Proceedings of the 1st AAU Workshop on Human-Centered Robotics, pp. 22–28, 2013.

B. Rasmus S. Andersen, Casper Schou, Jens S. Damgaard, Ole Madsen, and Thomas B. Moeslund, "Human Assisted Computer Vision on Industrial Mobile Robots," Proceedings of the 1st AAU Workshop on Human-Centered Robotics, pp. 15–21, 2013.

C. Francesco Rovida, Casper Schou, Rasmus S. Andersen, Jens S. Damgaard, Dimitris Chrysostomou, Simon Bøgh, Mikkel R. Pedersen, Bjarne Grossmann, Ole Madsen, and Volker Krüger, "SkiROS: A Four Tiered Architecture for Task-level Programming of Industrial Mobile Manipulators," Presented at the 1st International Workshop on Intelligent Robot Assistants (IRAS), 2014.

D. Casper Schou, Simon Bøgh, and Ole Madsen, "Outline of a Hardware Reconfiguration Framework for Modular Industrial Mobile Manipulators," Proceedings of the 2nd AAU Workshop on Robotics, 2014.

E. Steffen N. Jørgensen, Casper Schou, and Ole Madsen, "Developing Modular Manufacturing Architectures - An Industrial Case Report," Proceedings of the 5th International Conference on Changeable, Agile, Reconfigurable and Virtual Production (CARV), pp. 55–60, 2013.

F. Sigurd Villumsen and Casper Schou, "Optimizing Tracking Performance of XY Repositioning System with ILC," Proceedings of the 3rd IFToMM Symposium on Mechanism Design for Robotics, Vol. 33, pp. 207–217, 2015.

G. Aljaž Kramberger, Casper Schou, Dimitrios Chrysostomou, Andrej Gams, Ole Madsen, and Aleš Ude, "Fast Setup and Adaptation of Industrial Assembly Tasks with Force-based Exception Strategies," Presented at the IFToMM/IEEE/EUROBOTICS 25th International Conference on Robotics in Alpe-Adria-Danube Region (RAAD), awaiting publication, 2016.

H. Andrej Gams, Aljaž Kramberger, Bojan Nemec, Casper Schou, Dimitrios Chrysostomou, Ole Madsen, and Aleš Ude, "Generalization of Skills for Intelligent Robot-Aided Assembly," IEEE-RAS International Conference on Humanoid Robots (Humanoids 2016), submitted July 2016.

This thesis is based on the submitted or published scientific papers which are listed above. Parts of the papers are used directly or indirectly in the extended summary of the thesis. As part of the assessment, co-author statements have been made available to the assessment committee and are also available at the Faculty.

Preface

This thesis has been submitted to the Faculty of Engineering and Science at Aalborg University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. The research presented in this thesis has been carried out from December 2012 to August 2016 at the Department of Mechanical and Manufacturing Engineering at Aalborg University under the supervision of Professor Ole Madsen. The thesis is constructed as a collection of papers, however, with several sections dedicated to unpublished research results.

The purpose of the PhD project, of which this thesis is the culmination, has been to investigate and develop intuitive methods enabling non-robotics experts to interact with collaborative, industrial robots. This has resulted in studies on intuitive robot programming and on reconfiguration of robot hardware.

The PhD project leading to this thesis has been conducted in connection with and primarily funded by Innovation Fund Denmark through the project CARMEN. Additionally, the PhD project has been conducted in connection with and funded by the European Commission under the Seventh Framework Programme projects TAPAS (FP7-260026), ACAT (FP7-600578), and CARLOS (FP7-606363).

Acknowledgement

Over the course of this PhD study, I have had the pleasure of collaborating with many researchers and practitioners from both academia and industry. I would like to extend my sincere thanks to everyone who in one way or the other has contributed to the work leading to this thesis. Especially, I would like to thank the partners in the CARMEN, TAPAS, ACAT, and CARLOS projects for their close collaboration on research integration and live demonstrations.

I would like to thank my colleagues in the Robotics and Automation Group at the Department of Mechanical and Manufacturing Engineering at Aalborg University for great collaboration, fruitful discussions, and a motivating working environment. In particular, I would like to thank my supervisor Professor Ole Madsen for his helpful and encouraging supervision in this PhD study.

Finally, I sincerely thank my love Camilla and my son Sigurd for their great patience and support in accomplishing this PhD study.

Casper Schou
Aalborg University, August 12, 2016


Reader's Guide

This PhD thesis has been formulated as a collection of papers, albeit with several sections dedicated to unpublished work. The thesis contains three parts with six chapters in total.

Part I gives an introduction to the thesis. Chapter 1 presents the motivation for the PhD study and gives a general introduction to the field of collaborative robotics. A review of research related to this thesis is presented in Chapter 2. In culmination of Part I, the research hypothesis and objectives of this PhD study are listed in Chapter 3.

Part II contains a summary report presenting the main contributions and results of this PhD study. The summary report is formulated as summaries of the included papers accompanied by several sections describing additional research. The summaries describe the respective papers to an extent that allows the summary report to be read independently of the papers. The summary report contains three chapters. Chapter 4 presents work on intuitive robot programming for collaborative robots, and Chapter 5 presents work on hardware reconfiguration for collaborative robots. The general conclusion of this thesis and directions for future work are described in Chapter 6.

In the non-public version of this thesis, Part III contains the papers on which the summary report is based. However, as this is the public version of the thesis, the full-length papers are excluded due to copyrights. A glossary of terms and a list of acronyms used in this thesis can be found on page 117.


Part I

Introduction


Chapter 1

Project Motivation

During the past two decades, the global industrial scene has changed significantly. An increased industrialization of emerging countries such as China, Russia, India, Poland, and Romania has induced a transfer of manufacturing jobs from the major industrial nations such as the USA, Japan, Germany, France, Italy, and the UK due to the lower wages in the emerging countries. This "deindustrialization" is a serious problem for the western countries, as the outsourcing of the core production task tends to drag along large parts of the expertise and value chain over time. In response to the deindustrialization and competition with low-wage countries, western manufacturing companies have for the past decade used large-scale automation as a solution to remain competitive [Roland Berger, 2014].

However, as traditional, dedicated, and fully automated manufacturing systems require large batch sizes to be economically feasible, they are typically not well-suited for small and medium enterprises (SMEs). Coupled with the last decades' globalization of markets, the explosion in product variants due to individualization, and decreasing product life cycles with shorter time to market, there is a high need for automated manufacturing equipment suitable for smaller batch sizes [Bi et al., 2008a; ElMaraghy, 2006; Hadar and Bilberg, 2012; Wiendahl et al., 2007]. Such equipment needs to be responsive and agile, with the ability to be efficiently reconfigured and reused.

1.1  Manufacturing Paradigms

In response to the market changes induced by globalization, the increasing number of product variants, and the correlated need for production systems suitable for smaller batch sizes, new manufacturing paradigms have been proposed over the last several decades.

In the 1980s the paradigm of flexible manufacturing systems (FMS) emerged, proposing the use of manufacturing equipment with built-in, general flexibility. The benefit of this general flexibility is the ability to cope with diverse products on the same production line with little to no changeover effort. In practice,


FMS deployments typically consist of computer numerically controlled (CNC) machines, which yields high flexibility but also a high initial investment cost [ElMaraghy et al., 2013]. Furthermore, the built-in flexibility often exceeds the need in the production line, and should the need arise for capabilities exceeding those of the equipment, modification or reconfiguration is often as prolonged as in the case of dedicated manufacturing systems (DMS) [Wiendahl et al., 2007]. Thus, despite their inherent ability to cope with significant product variations, FMS solutions are seldom economically optimal in practice [Koren et al., 1999].

In the late 1990s the paradigm of reconfigurable manufacturing systems (RMS) was proposed by Koren et al. [1999]. Several paradigms based on the same motivation and overall vision as RMS have been proposed, albeit using different terminology; e.g. modular manufacturing systems, recomposable manufacturing systems, and retransformable manufacturing systems [Bi et al., 2008a]. Motivated by the shortcomings of FMS and the desire to create reusable and scalable manufacturing systems, RMS (and its similar paradigms) obtains flexibility through reconfiguration of modular parts. Thus, contrary to the built-in, general flexibility of FMS, RMS exerts a customized flexibility through reconfiguration. Product variations are coped with through removal and/or addition of modules, which makes the changeover quicker and less complicated [Koren and Shpitalni, 2010]. In terms of flexibility and capacity, RMS is considered a "middle ground" between DMS and FMS [ElMaraghy, 2006]. In a review of RMS and FMS, ElMaraghy states: "...there are sufficient common grounds in philosophy and application between the FMS and RMS paradigms to support the notion that they represent a continuum" [ElMaraghy, 2006]. That is, RMS and FMS do not contradict each other, but rather extend each other. Wiendahl et al. [2007] propose changeable manufacturing systems (CMS) as an umbrella term for changeable, agile manufacturing systems in general, including both FMS and RMS. The evolution of manufacturing systems leading to CMS is depicted in Figure 1.1.

In the 1990s the paradigm of holonic manufacturing systems (HMS) gained interest [Brussel et al., 1994, 1998]. The paradigm is inspired by the way social and biological entities organize and interact to form complex systems. In HMS the entities are called holons and serve as intelligent, cooperative, and autonomous agents. A holarchy is a hierarchical system of holons with its own set of rules and directives governing that system. The holons, which can be either pure software resources or physical equipment, can be aggregated recursively to form increasingly complex holarchies. Hence, a holarchy acts in itself as a holon which in turn can participate in new holarchies. The aggregation of holons into holarchies can either be by design or be done by the holons themselves autonomously forming coalitions. The latter is considered a key characteristic of HMS in that the system autonomously "evolves" towards a suitable structure by the holons autonomously forming coalitions, rather than being arranged by a central controller. A holonic manufacturing system is defined


[Figure removed due to copyright]

Fig. 1.1: Volume and variety of the most well-known manufacturing systems paradigms. The red line indicates the evolution of manufacturing systems from craft production towards mass personalization. [ElMaraghy et al., 2013]

as a holarchy that incorporates the entire range of manufacturing activities from product marketing, to order booking, to manufacturing preparation, to the manufacturing activity itself [Brussel et al., 1998; Leitão, 2009].

In the early 21st century the concept of evolvable production systems (EPS) or evolvable assembly systems (EAS) emerged [Onori, 2002]. Similar to HMS, EPS/EAS proposes an evolvable architecture based on autonomous, self-contained agents. However, where HMS tends to take a system perspective, EPS/EAS takes a "process-oriented" perspective derived directly from the production challenges on the shop floor. Thus, as the architecture of EPS/EAS is derived from the manufacturing processes, the granularity becomes finer than in HMS [Onori and Barata, 2009]. The evolvability of EPS/EAS, as given by the name, is inspired by biology. One of the primary concerns of EPS/EAS is the actual reconfiguration process itself as undergone during a changeover. As modules are aggregated, the system capabilities are not regarded as simply the sum of the capabilities from each module. Rather, new capabilities emerge from the combination of simpler capabilities. In true EPS/EAS, the modules are regarded as self-contained, self-aware agents, and the higher-level capabilities emerge autonomously from negotiation between the agents. Consequently, the system is said to "evolve" as agents are added and/or removed, or new coalitions are created; similar to the proposed approach of HMS [Onori and Barata, 2009; Onori et al., 2006].

Inspired by the symbiotic relationships in the natural world, Ferreira et al. [2014] propose the paradigm of symbiotic assembly systems (SAS). Prior manufacturing paradigms, such as FMS, RMS, HMS, and EPS/EAS, generally


concern automated manufacturing equipment with a clear separation between human workers and machines. This separation is today almost inevitable due to strict safety regulations. However, Ferreira et al. argue that many assembly tasks are still too complex for current state-of-the-art assembly technology, and thus propose to exploit a close collaboration between human and machine: a so-called symbiotic relation. Such a symbiotic relation can exploit the key strengths of both human and machine. From the human, the abilities to make complex decisions under uncertainty, quickly adjust to changes, interpret problems, and learn from experience are desired, whereas from the machine, the physical power, repeatability, and capacity are desired. Within the robotics research community, close collaboration between robots and humans has already been pursued [Krüger et al., 2009]. Ferreira et al. emphasize this as a clear indication of the feasibility of the symbiotic approach, but stress the need to define a "paradigm" in order to structure and formalize the concept and thus make it transferable to other domains of manufacturing systems.

This PhD project will not conform to a single paradigm, but rather draw inspiration from several of them in its research on reconfigurable robotics. It will exploit the modular idea put forth in RMS to segment the robot into modules which are easily reconfigured. It will draw inspiration from HMS and create self-contained agents, and it will draw on the ideas of SAS to enable a close human-robot collaboration.
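As a purely illustrative aside (not part of the thesis itself), the recursive holon/holarchy structure described in the HMS paradigm above can be sketched as a composite of agents, where a holarchy is itself a holon and system capabilities are derived from its members. All class and capability names here are hypothetical:

```python
# Illustrative sketch (hypothetical names, not from the thesis): holons as
# composable agents. A Holarchy is itself a Holon, so holarchies can
# recursively participate in larger holarchies, as described for HMS.

class Holon:
    """An autonomous, cooperative unit offering a set of capabilities."""

    def __init__(self, name, capabilities=()):
        self.name = name
        self._capabilities = set(capabilities)

    def capabilities(self):
        return set(self._capabilities)


class Holarchy(Holon):
    """A hierarchical system of holons; acts as a holon in its own right."""

    def __init__(self, name, members):
        super().__init__(name)
        self.members = list(members)

    def capabilities(self):
        # Simplification: aggregate capabilities as the union of member
        # capabilities. A fuller model would also add emergent capabilities
        # arising from combinations of members, as HMS and EPS/EAS describe.
        caps = set()
        for member in self.members:
            caps |= member.capabilities()
        return caps


arm = Holon("arm", {"move", "position"})
gripper = Holon("gripper", {"grasp", "release"})
cell = Holarchy("assembly-cell", [arm, gripper])
factory = Holarchy("factory", [cell])  # holarchies nest recursively

print(sorted(factory.capabilities()))  # ['grasp', 'move', 'position', 'release']
```

The sketch deliberately models coalition formation by design only; the autonomous negotiation between agents that HMS and EPS/EAS emphasize is outside its scope.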

1.2  Industrial Robots

The need for more agile solutions to enable automation of smaller batch sizes is not least relevant in the field of robotics, which according to [SPARC, 2015]: "...will be a key differentiator in driving up the productivity of Europe's manufacturing base". Today, industrial robots are a common sight in many automated manufacturing plants throughout the world. With high versatility and dexterity, industrial robots are excellent for object handling, component assembly, and process tasks. Traditionally, industrial robots have been deployed in high-volume, fully automated manufacturing, predominantly in high-wage countries, as a means of increasing efficiency and lowering production costs. A good example is the automotive industry, which is highly automated and the largest employer of industrial robots. Figure 1.2 shows two applications of traditional industrial robot systems from modern automotive manufacturing.



(a) BMW factory in Leipzig, Germany

(b) Tesla factory in California, USA

Fig. 1.2: Typical applications of traditional industrial robots in automotive manufacturing. The robots remain fixed, performing the same repetitive tasks in a fully automated production. The automotive industry is the largest employer of industrial robots.

1.2.1  Challenges for Industrial Robots

As industrial robots traditionally are deployed in fully automated, high-volume production, the majority of industrial robots are fixed to a dedicated, fenced-in workstation, performing the same task for endless cycles. As such, though the industrial robot is a flexible, re-programmable machine, once commissioned in a production its flexibility is often no longer used. The reason is that the robot becomes an integrated part of a complex and highly dedicated manufacturing system. This has been the general practice in high-volume, highly automated production such as the automotive industry, and consequently the robotics industry has adapted to this life cycle of their machines: a life cycle where the setup and programming of the robot is considered an engineering task, and thus the robot control interfaces are designed accordingly.

However, as the need for smaller batch sizes increases, the traditional robot solutions are inadequate due to insufficient flexibility. New robot solutions are needed that offer greater flexibility and can be assigned production tasks outside the shielded, dedicated manufacturing lines. This has generated a strong research focus on increasing the flexibility, adaptability, and configurability of robots. In Europe, Japan, and the United States, official strategies for robotics research have been published in recent years [Headquarters for Japan's Economic Revitalization, 2015; RoboticsVO, 2013; SPARC, 2015]. In response to the need for more agile manufacturing equipment for automation at European SMEs, the Multi-Annual Roadmap (MAR) published in December 2015 by SPARC directly states the following needs for future robotic systems in the manufacturing domain:



• "The need to design (robotic) systems that are cost effective at lower lot sizes."
• "The need to design (robotic) systems that are intuitive to use and are easily adapted to changes in the task without the need to use highly skilled personnel."
• "The ability (for the robotic system) to work safely in close physical collaboration with human operators."

Furthermore, the MAR points out nine key system abilities identified for future robotic solutions. Two of these abilities are presented below, as they are central to easy reconfiguration of collaborative robots, the topic of this PhD thesis.

Configurability
Configurability is the ability to alter the functionality of the robotic system to suit the needs of a given task. It covers configuration of both software and hardware modules, and the MAR stresses the need to design these abilities into new robotics solutions.

Interaction Ability
Robot systems must be able to interact with both humans and other robots. In terms of interaction with humans, the MAR expresses the need for faster, easier, and more intuitive robot instruction methods.

1.3  Collaborative Robots

From the need for more flexible and intuitive robotic solutions, the vision of robots operating in the dynamic environment of the human worker emerged in terms of collaborative robots. Although the consensus on the term is not clear, it generally denotes robots intended for working alongside the human staff. Thus, they contradict the paradigm of traditional industrial robotics and its need for isolating fences and highly structured operating environments.

As part of working in the dynamic environment of their human counterparts, collaborative robots are subject to frequent changeovers, often with significant task variation. Consequently, the conventional tools and practices for setting up robots are inadequate. Instead of relying on highly trained personnel or engineers, collaborative robots should be operated by the shop floor personnel. To allow this, collaborative robots need faster and more intuitive interfaces, a higher degree of configurability, and improved perception to monitor the environment. These advancements make collaborative robots an agile production resource that is easily transitioned to new tasks.

Several robotic systems marketed as "collaborative" are commercially available today, albeit they are in general new on the market. As a technology, collaborative robots are still on the edge of a widespread breakthrough into industry.


One cause is that the commercial systems do not fully realize the vision of enabling shop floor operators to perform the configuration to a new task, both in terms of robot programming and especially in terms of hardware reconfiguration. Thus, research and development on improving the interfaces, embodiment, and capabilities in general are still needed.

1.4

Initiating Research Problem

With the vision of collaborative robots being set up by shop floor personnel not yet realized in industry, this PhD project focuses on research towards this goal. The initiating research problem for this PhD project is:

Initiating Research Problem: How can reconfiguration of industrial collaborative robots to a new task be performed by shop floor operators?

In the initiating research problem, reconfiguration of industrial collaborative robots covers both hardware reconfiguration and re-programming of collaborative robots. This corresponds well to the configurability and interaction ability identified as two of the key challenges for industrial robotics by SPARC [2015] (see Section 1.2.1). As a result, the research of this PhD project is focused on the interrelation between research on collaborative robots, robot programming, and hardware reconfiguration, as depicted in Figure 1.3.

Fig. 1.3: The focus of this PhD project is the interrelation between research on collaborative robots, robot programming, and hardware reconfiguration.


Chapter 2

Related Research

This chapter presents related research, taking its point of departure in the initiating problem presented in Section 1.4. Following Figure 1.3, related work is presented for the three domains: collaborative robots, robot programming, and hardware reconfiguration. Section 2.1 covers related research on industrial collaborative robots, including related work at Aalborg University (AAU) which provides a point of departure for this PhD project. Section 2.2 covers related research on robot programming, and Section 2.3 covers research related to hardware reconfiguration.

2.1

Collaborative Robots

The term cobot (short for collaborative robot) was first introduced in 1996 by Colgate et al. as a term for passive, mechanical devices used to aid human operators in solving industrial tasks. Today, the term is used more ambiguously and covers a variety of both passive and active robot systems. In addition to a strong research focus on collaborative robots, several robot systems marketed as collaborative are already commercially available. As a result, this section covers both related research on collaborative robots and commercially available industrial collaborative robots. Section 2.1.1 presents an overview of the commercial industrial collaborative robots. Section 2.1.2 presents research on collaborative robots from AAU, and Section 2.1.3 presents related research on collaborative robots in general.

2.1.1

Commercial Collaborative Robots

In recent years, a number of industrial collaborative robots have been released to the market. Figure 2.1 depicts some of the most popular ones. Currently, there is no unique definition of the term collaborative robot, nor a unique classification of them. However, from Figure 2.1 it is evident that different types of collaborative robots exist. The following classification is devised based on the commercially available collaborative robots and the related research presented in Section 2.1.3.



Fig. 2.1: Examples of commercially available collaborative robots: (a) KUKA iiwa (2013); (b) UR3/UR5/UR10 (2009-2015); (c) Fanuc CR-35iA (2015); (d) KUKA KMR iiwa (2015); (e) Bosch APAS (2014); (f) KBee Franka (2016); (g) Kawada Nextage (2009); (h) Rethink Robotics Sawyer (2015) and Baxter (2012); (i) ABB YuMi (2015).


Collaborative Robot Arms

Several manufacturers of industrial robots offer a new generation of robot arms marketed as collaborative robots, e.g. the Universal Robots UR3/UR5/UR10, KUKA iiwa, and Fanuc CR-35iA. These robots essentially offer an extension to the traditional industrial robot. They provide safety features allowing them to operate safely in the proximity of humans. This is typically achieved through force or proximity sensing used to limit the energy in a potential collision. The collaborative robot arms also offer new human-robot interaction (HRI) methods for making task programming faster and more intuitive. A common approach is kinesthetic teaching, where the operator physically interacts with the robot arm. Because collaborative robot arms in principle are traditional industrial robot arms with extended safety and programming features, the HRI systems supplied with these robots are confined to robot arm configuration and programming. Consequently, challenges involved in configuring and programming an entire robot system still persist, e.g. sensor and tooling configuration.

Cobots

Contrary to the collaborative robot arms, cobots represent an entire robot system. Examples are the Rethink Robotics Baxter, ABB YuMi, and Bosch APAS. They not only include the robot arm for manipulation, but also integrate tooling, auxiliary actuators, and various sensors. As a result, cobots can offer more holistic HRI systems, where control of robot arm, tooling, and sensors is integrated into the same framework. Cobots can either incorporate a multitude of sensors and highly adaptable hardware to achieve an inherent flexibility allowing them to perform a wide variety of tasks, or they can achieve task flexibility through configurability, where exchangeable components allow the robot to be configured specifically for the needs of each task.
Autonomous Industrial Mobile Manipulators

An autonomous industrial mobile manipulator (AIMM) differentiates itself from cobots by employing autonomous mobility. With the ability to move and navigate autonomously in the manufacturing environment, AIMMs possess an added flexibility over the other collaborative robot classes. This significantly expands the range of potential tasks that can be carried out. Especially the option to carry out logistic tasks is significant, as these tasks can cover more than 60% of the manual tasks in a typical manufacturing facility [Bøgh et al., 2012a]. AIMMs are related to automated guided vehicles, which have been used commercially for decades. However, despite continuous research interest since the first AIMM, MORO [Schuler, 1987], was introduced by Fraunhofer in the 1980s, the AIMM technology has not yet seen significant commercial exploitation. The KUKA KMR iiwa, marketed in 2015, is perhaps the most commercially ready AIMM today.


Several of the collaborative robot arms have become quite successful on the market (e.g. the Universal Robots UR3/UR5/UR10), but they are typically used as enhanced traditional industrial robots and are still set up by experts. The cobots and AIMMs are in general still very new on the market, and widespread adoption by the industry has yet to be seen. For all classes of collaborative robots, research and development are still focused on evolving their architecture, configurability, and usability.

2.1.2

Collaborative Robots at Aalborg University

At AAU, research on collaborative robots started in 2007 with the introduction of the Little Helper project at the Department of Mechanical and Manufacturing Engineering. The core of this project is the conceptualization and realization of AIMM robots named after the project [Hvilshøj and Bøgh, 2011; Hvilshøj et al., 2009]. Both the project and the name are inspired by the assistant of Walt Disney's Gyro Gearloose. Thus, the vision of the Little Helper project is to create robot assistants to aid the human workers in their daily tasks. To this day, six collaborative robots have been developed and constructed at AAU; hereof four Little Helper robots make up the "Little Helper family". Figure 2.2 shows all six collaborative robots, of which all but Little Helper 1 are still used actively in research. Since its initiation in 2007, the Little Helper project has been advanced through close involvement in several national and European research projects, including ACat [2014]; CARLoS [2014]; CARMEN [2013]; TAPAS [2011]. The Little Helper robots are composed of commercially available off-the-shelf hardware components, including a robot arm, a mobile platform, tooling, and sensors. Recent work by Hvilshøj has proposed a modular architecture for AIMMs serving as a key enabler for enhancing configurability. Research within the Little Helper project has focused on identifying and evaluating suitable industrial domains and tasks for the technology [Bøgh et al., 2012a]. With the current maturity of the technology, logistic, assembly, and machine tending tasks are found feasible for AIMMs. Another key research topic of the Little Helper project is the design and development of new human-robot interfaces for instructing and operating the robots. As part of the Little Helper project, a new programming paradigm based on robot skills has been conceptualized [Bøgh, 2012; Bøgh et al., 2012b].
Skills are generic and reusable control modules representing an operational pattern that can be parameterized for a specific task. Skills comprise task-related abilities of the combined hardware, i.e. the robot as one entity. For instance, the ability to pick up an object is represented by a pick skill. By combining several skills into a sequence and parameterizing them, a robot task is obtained. Preliminary studies at AAU have investigated the realization of skills, their industrial application, and the possibility of manual parameterization of


Fig. 2.2: Collaborative robots at the Department of Mechanical and Manufacturing Engineering at Aalborg University: (a) LH1, 2008; (b) LH2, 2010; (c) LH3, 2012; (d) LH4, 2013; (e) CR, 2014; (f) CR, 2015. Since 2008, four mobile collaborative robots (AIMMs) carrying the name Little Helper have been built. In connection with the CARMEN project [CARMEN, 2013], two collaborative robots have been built: a stationary robot and a manually movable robot. LH = Little Helper robot, CR = CARMEN robot.


skills. Since the concept of skills and the associated skill-based programming paradigm are central research topics in this PhD project, they will be thoroughly described in Chapter 4.
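To make the skill concept concrete, the idea of parameterizable skills aggregated into a task sequence can be sketched in a few lines of code. The sketch below is illustrative only: the `Skill` class, the parameter names, and the pick-and-place example are hypothetical and not taken from the Little Helper implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the skill concept: a generic, reusable
# operation pattern that becomes concrete once parameterized.
@dataclass
class Skill:
    name: str                     # e.g. "pick" or "place"
    params: dict = field(default_factory=dict)

    def execute(self, robot_state: dict) -> dict:
        # A real skill would command arm, tool, and sensors here;
        # this sketch only records the effect on a symbolic state.
        robot_state[self.name] = dict(self.params)
        return robot_state

# A task is an ordered sequence of parameterized skills.
def run_task(skills, robot_state=None):
    state = robot_state if robot_state is not None else {}
    for skill in skills:
        state = skill.execute(state)
    return state

# Example: a simple pick-and-place task built from two skills.
task = [
    Skill("pick", {"object": "rotor_shaft", "grasp_width_mm": 20}),
    Skill("place", {"target": "fixture_A"}),
]
final_state = run_task(task)
```

The key point of the abstraction is that the same skill definitions can be reused across tasks; only the parameters change between applications.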

2.1.3

Collaborative Robot Research

In research, collaborative robots attract high interest, with multiple international research projects striving to increase the industrial readiness of the technology [Bogue, 2016]. In Europe, the SPARC programme, funded by the European Commission and many European companies, has a total budget of more than 2.8 billion euros for robotics research from 2014 to 2020. The research agenda of SPARC [2015] puts a clear focus on human-robot collaboration and agile robotic solutions, as exemplified by the statements presented in Section 1.2.1. The original cobots presented in [Colgate et al., 1996] collaborated with the human operator by providing virtual guides and surfaces to constrain and guide the operator's movements during manipulation of an object; in [Colgate et al., 1996] this was tested on the manufacturing of a car door. The cobots are deemed inherently safe, as the effort creating motion is exerted by the human, not the robot. The cobots presented by Colgate et al. are used in direct collaboration with a human operator to accomplish a common task. Regarding the extent of collaboration, two main approaches are present in research: the truly collaborative robot and the robot assistant. Truly collaborative robots enter direct collaboration with the human worker to solve an industrial task as a team. Robot assistants do not necessarily solve a task in direct collaboration with the human worker, but serve as co-workers instructed and supervised by the human worker. Although both types require new and improved methods for human-robot interaction, they also introduce different challenges. This PhD project will focus on collaborative robots in terms of robot assistants, and thus the term collaborative robot will throughout this thesis refer to robot assistants unless explicitly stated otherwise. Related research on true human-robot collaboration will not be covered.
However, an excellent review and classification of human-robot collaboration, also covering true human-robot collaboration, is presented in [Krüger et al., 2009]. The EU FP7 project ROBO-PARTNER [2013], as presented on a high conceptual level in [Michalos et al., 2014], proposes human-robot collaboration with multiple interaction levels for automotive manufacturing. As illustrated in Figure 2.3, three interaction levels are studied as separate scenarios in the project: 1) shared workspace, 2) mobile assistant, and 3) joint cooperation. In the first scenario, the human worker and the robot share the same workspace but perform separate tasks. In the second scenario, a mobile robot assists the human worker by fetching parts. In the third scenario, the human worker and the robot enter true human-robot collaboration and cooperate on the same task. On an integration level, ROBO-PARTNER proposes the use of an agent-based architecture combined with ontologies for knowledge capturing in order to manage the production resources and enable a plug-and-produce configuration approach (see Section 2.3).

[Figure removed due to copyright]

Fig. 2.3: Three levels of human-robot collaboration studied in the EU FP7 project ROBO-PARTNER [2013]. [Michalos et al., 2014]

Furthermore, Michalos et al. urge the use of open-source software frameworks such as the Robot Operating System (ROS). Charalambous et al. emphasize the necessity and importance of considering the human factors when implementing human-robot collaboration. In [Charalambous et al., 2015] they present a study of the key human factors, enablers, and barriers determining the success of implementing human-robot collaboration in industrial production. Since human-robot collaboration as a technology has not yet reached large-scale industrial acceptance and implementation, the study is based on a literature review of work describing results and experiences from the introduction of similar novel technologies in related manufacturing fields, e.g. cellular manufacturing, lean manufacturing, and advanced manufacturing technology. The following key factors are summarized:

• Communication of change to employees.

• Operator participation in implementation.
• Training and development of workforce.
• Existence of a process expert.

• Organizational flexibility through employee empowerment.
• Senior management commitment and support.
• Impact of union involvement.



As evident from the list of factors, the involvement of the shop floor employees is vital to the acceptance and success of human-robot collaboration from an organizational perspective. Charalambous et al. further assess the identified factors through a qualitative case study at a UK aerospace sub-supplier. The results confirm the central role of the shop floor personnel in introducing new manufacturing technology.

2.2

Robot Programming

Traditionally, programming of robots is regarded as either "online" or "offline". In conventional offline programming, the robot program is developed offline, typically in a virtual environment using CAD models of the robot, the process, and the operating environment. Examples of general-purpose, commercially available offline programming systems are KUKA.Sim, RobotStudio, eM-Workplace, and Delmia Robotics. Given the extensive use of models, offline programming can be made automatic; for example, INROPA basic (spray painting) and RINASWELD (welding). This makes automatic programming suitable for smaller batch sizes, albeit it heavily depends on the availability and accuracy of CAD models. In conventional online programming, the operator uses the teach pendant of the robot to physically move the tool center point (TCP) to target locations and hereby gradually build up the robot program for a given task. This is a time-consuming process which, given the interface of traditional industrial robots, requires specialized training. Therefore, online programming requires sufficiently large batch sizes in order to be economically feasible. As conventional online and offline robot programming methods either require extensive CAD modeling or large batch sizes, neither is considered suitable for collaborative robots. Thus, a key challenge in developing collaborative robot assistants is finding a suitable method for programming the robot. Biggs and MacDonald [2003] present a survey on robot programming systems revealing two main categories: manual programming and automatic programming. In automatic programming approaches, cognitive and highly autonomous systems are used to make the robot less dependent on human instruction details. Instead, the robot relies on planning algorithms, sensor inputs, and a comprehensive world model to autonomously concatenate actions to achieve a task-related goal.
In contrast, in manual programming the human specifies the actions needed to solve the desired task. Thus, in manual programming, the human directly instructs the desired operation of the robot.



2.2.1

Automatic Robot Programming

Historically, the hierarchical paradigm was one of the first methods for implementing intelligence into mainstream robotic systems [Gat, 1998]. The paradigm became widely known for its three states sense, plan, and act, which also serve as a more common name for the paradigm. In sense-plan-act (SPA), knowledge about the environment is acquired during the sensing state. Based on this input, the robot plans its actions during the planning state, which are then carried out during the acting state. Despite being several decades old, the core concept of SPA is still used today in many efforts to create intelligent and autonomous robot systems. SPA also paved the way for newer control architectures, such as reactive planning [Gat, 1998], subsumption [Brooks, 1986], and three-layered control architectures [Bonasso, 1991; Connell, 1992; Gat, 1998]. The concept of three-layered architectures was introduced during the 1990s. In three-layered architectures, the control is separated into three hierarchical layers executing in parallel. The top layer controls the overall task execution, and the lowest layer performs the low-level motion control of motors and devices. The middle layer connects the other two by sequencing low-level control commands from the bottom layer to drive the combined behavior of the robot towards that defined by the top layer. Consequently, the control modules of the middle layer naturally denote actions of the robot as a single entity. By describing these actions of the middle layer as generic, parameterizable modules, the concept of robot skills is formed. The first skill-like concepts were proposed for automatic programming by Fikes and Nilsson [1971] using their now well-known STRIPS planner. Like in [Fikes and Nilsson, 1971], the concept of skills has in research proven well-suited for automatic programming because of its task-related abstraction.
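The three-layered structure described above can be sketched as follows. This is a minimal, purely symbolic illustration: the layer functions and the command expansions are invented for the example and do not represent any specific architecture from the literature.

```python
# Minimal sketch of a three-layered control architecture, assuming a
# purely symbolic robot; all layer contents and names are illustrative.

def reactive_layer(command):
    """Bottom layer: low-level device commands (motors, gripper)."""
    return f"executed:{command}"

def sequencing_layer(skill):
    """Middle layer: expands one skill into low-level commands,
    i.e. an action of the robot as a single entity."""
    expansions = {
        "pick":  ["open_gripper", "move_to_object", "close_gripper"],
        "place": ["move_to_target", "open_gripper"],
    }
    return [reactive_layer(c) for c in expansions[skill]]

def deliberative_layer(task):
    """Top layer: drives overall task execution as a skill sequence."""
    log = []
    for skill in task:
        log.extend(sequencing_layer(skill))
    return log

trace = deliberative_layer(["pick", "place"])
# trace holds the five executed low-level commands
```

The middle-layer entries ("pick", "place") are exactly the kind of generic, parameterizable modules that form robot skills.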
Several researchers have proposed methods for instructing robots using high-level instructions in natural language. In recent years, the work of Tenorth et al. has become well-known after instructing a robot to make pancakes. In [Tenorth et al., 2011] a comprehensive robot control system for cognitive robots is presented. When given a high-level instruction, the system autonomously starts working towards achieving the goal by consulting knowledge and experience in both local and online databases. Instructions clarifying a subprocedure might be retrieved from an online database formalized in natural human language. The system thus employs a semantic parser breaking down the instruction into a predicate structure and eventually a set of action-related commands. Stenmark and Malec [2014] describe an approach using a semantic parser which extracts not only the simple predicate structures but also conditional statements and cardinalities. Recent research on cognitive robot systems and artificial intelligence has a strong focus on knowledge gathering, formalization, and sharing [Lemaignan et al., 2010; Suh et al., 2007; Tenorth and Beetz, 2012]. In terms of automatic


robot programming, background knowledge is used to reason about the perceptions made and the task at hand. As a general human capability, being able to gather, reuse, and share knowledge is not only beneficial in industrial settings, but perhaps even more so in social settings. Thus, the research topic of knowledge modeling for robotics receives attention from multiple robotic domains. Ontologies have become a widely adopted tool in knowledge engineering, and in terms of industrial robotics this has resulted in a recent standard [IEEE Robotics and Automation Society, 2015]. Combined with the idea of the semantic web [Allemang and Hendler, 2011], knowledge sharing between robots reaches a grand scale, as pursued in the EU FP7 project RoboEarth [2009] [Tenorth et al., 2013; Waibel et al., 2011].
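As a minimal illustration of the kind of knowledge representation discussed above, facts can be stored as subject-predicate-object triples, the basic unit of ontology languages such as RDF. The facts and the query helper below are invented for illustration and do not reproduce any actual RoboEarth content.

```python
# Toy knowledge base of subject-predicate-object triples, in the spirit
# of ontology-based knowledge sharing; all facts are invented examples.
triples = {
    ("pancake_mix", "is_a", "ingredient"),
    ("spatula", "is_a", "tool"),
    ("flipping", "requires", "spatula"),
    ("make_pancakes", "has_step", "flipping"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the non-None fields (a pattern match)."""
    return {
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    }

# Reasoning step a robot might perform: which tool does 'flipping' require?
needed = {o for (_, _, o) in query("flipping", "requires")}
# needed == {"spatula"}
```

Real systems combine such pattern queries with inference over class hierarchies, but the triple-plus-query pattern is the common core.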

2.2.2

Manual Robot Programming

Contrary to reducing human intervention by introducing cognitive solutions, research on manual programming instead focuses on bringing traditional online robot programming from engineers to the shop floor operators [Biggs and MacDonald, 2003]. This decreases the need for high-wage engineering hours and allows the operator to channel valuable process knowledge and experience into the robot programming. A common approach is to raise the programming to a higher level of abstraction and to combine it with new and more intuitive HRI methods. A programming paradigm that provides higher-level instruction of robots is task-level programming [Lozano-Perez, 1983]. It raises the programming level from the low-level device commands used in traditional robot programming to task-related actions. To accomplish this, task-level programming systems often have a hierarchical control architecture. Typically, a three-layered architecture [Gat, 1998] is used, where the middle layer (skills) constitutes the task-related actions used in programming. Thus, tasks are constructed by manually concatenating and parameterizing skills.

Skills for Manual Robot Programming

Several representations of robot skills suitable for manual programming have been proposed in research. Archibald and Petriu [1993] exploit the skill concept in a framework for skill-oriented robot programming (SKORP). Task programming is performed through a graphical user interface (GUI) with drag-and-drop interaction for intuitive sequencing of skills. The SKORP framework is aimed at shop floor programming of robots; however, it is considered an engineering tool intended for expediting and easing the task of the traditional robot programmer. Only a limited number of skills are implemented in the framework, and the skill abstraction is relatively wide, i.e. skills span both simple device commands and more complex task-related actions.
Furthermore, Archibald and Petriu only present limited information on the actual HRI of the proposed framework.


In [Guerin et al., 2015] the robot programming framework CoSTAR is presented. In CoSTAR, tasks are programmed by concatenating behaviors (i.e. skills) to form behavior "trees" as opposed to traditional sequences. The tree structure allows parallel behaviors to be programmed and allows behaviors to be selected at runtime based on the outputs of previous behaviors, see Figure 2.4. Thus, the approach provides run-time self-adaptation within a pre-programmed solution space. In [Guerin et al., 2015], CoSTAR is tested in both a laboratory

[Figure removed due to copyright]

Fig. 2.4: Behaviour tree of a simple task. [Guerin et al., 2015]

and an industrial environment using a UR5 robot arm and shows promising results in simple handling tasks. However, no experiments on more complex tasks such as assembly are presented. Nor are experiments validating the user interface presented; presumably, significant robotics experience is needed. Huckaby and Christensen [2014] use the SysML modeling language to describe robot assembly tasks using skill primitives. The approach is used to model the assembly of the Cranfield benchmark [Collins et al., 1985], and the actual assembly of a model airplane is carried out using a KUKA KR5 Sixx robot arm. Although purposefully reducing the complexity of robot programming, the intended target user group and industrial use case are not clear. In [Muszynski et al., 2012] the concept of variable autonomy is tested by implementing three different user interfaces with varying robot autonomy. On the lowest level of autonomy, the robot is tele-operated by the user. On the intermediate level, the user chooses appropriate skills to perform a given task. On the highest level of autonomy, the user provides overall task instructions through speech. The three different HRI levels are tested in a user study with 20 participants. Muszynski et al. conclude that a higher level of autonomy makes the robot programming quicker and more intuitive.
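The behavior-tree structure used in frameworks such as CoSTAR can be illustrated with a minimal sketch. The node types below (sequence, selector, condition, action) are generic behavior-tree concepts; the helper functions and the example task are hypothetical and do not reproduce CoSTAR's actual API.

```python
# Minimal behavior-tree sketch: nodes are functions taking a shared
# state and returning success (True) or failure (False).

def sequence(*children):
    """Succeeds only if every child succeeds, in order."""
    def node(state):
        return all(child(state) for child in children)
    return node

def selector(*children):
    """Tries children in order; succeeds on the first that succeeds."""
    def node(state):
        return any(child(state) for child in children)
    return node

def condition(key):
    """Checks a boolean fact in the state (e.g. a sensor output)."""
    return lambda state: bool(state.get(key))

def action(name):
    """Records an executed behavior; a real node would command the robot."""
    def node(state):
        state.setdefault("log", []).append(name)
        return True
    return node

# Unlike a plain sequence, the selector lets the robot choose a branch
# at runtime based on the outcome of an earlier behavior.
tree = sequence(
    action("detect_part"),
    selector(
        sequence(condition("part_visible"), action("pick_part")),
        action("ask_operator_for_help"),
    ),
)
state = {"part_visible": False}
tree(state)
# state["log"] records which branch was taken
```

Because the selector falls through to the second branch when the condition fails, the same tree handles both the nominal case and the recovery case without re-programming.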


Stenmark [2013] presents a skill definition which includes pre- and postcondition checks. These checks make the interconnection of skills a controlled process for both manual and automatic programming. The skills are used for automatic robot programming from natural language analysis as described in [Stenmark and Malec, 2015]. Although manual parameterization is mentioned as an option, no details or usability study of it are presented. A skill-based control framework specifically intended for manual programming of collaborative robots is presented in [Andersen et al., 2015]. The framework uses a layered architecture and an agent-based approach to make the skills hardware independent. Through a GUI, skills can be manually concatenated and parameterized. Although only few implementation details are presented in [Andersen et al., 2015], clear similarities to the work presented in this PhD project are present. However, the task control architecture in [Andersen et al., 2015] uses a taxonomy with a less clear separation between higher-level task control modules and low-level device functionalities, resulting in a less stringent skill definition. The proposed framework is tested on a complex industrial handling task; however, no studies of the training level or robotics experience required for robot programming are conducted. In addition to the skill-based programming abstraction, intuitive, manual robot programming also requires new approaches for HRI in order to parameterize the skills. As opposed to the traditional drive-through method [Owen, 1985], several feasible HRI methods are found in research; for example:

• Gesture-based instruction
• Kinesthetic teaching
• Speech recognition

• Learning from demonstration

Many of the methods require external sensing of the operator and/or the environment to determine the input, which can be impractical in a dynamic, noisy industrial environment. Using the robot's own built-in sensors to record the operator input, as done in kinesthetic teaching, abolishes the need for external sensing. Furthermore, most of the commercial collaborative robot arms support this type of HRI, see Section 2.1.1. Consequently, it is chosen to focus on HRI in terms of kinesthetic teaching in this PhD project.

Kinesthetic Teaching

In kinesthetic teaching, the human directly interacts with the robot and pilots the TCP through the intended trajectory [Akgun et al., 2012; Kormushev et al., 2011; Wrede et al., 2013]. Kormushev et al. [2011] present a framework where both position and force profiles can be taught to the robot. The benefit of kinesthetic teaching used with redundant robot arms is investigated in [Wrede et al., 2013]. They present a method for kinesthetic teaching of redundant manipulators in spatially constrained workspaces. The user first teaches the allowable kinematic configuration. Subsequently, the user teaches the task-related trajectory while the robot obeys the previously taught kinematic configuration. Wrede et al. test their approach in a user study with 49 participants from industry and find that the approach in general has a positive effect on the usability and performance in programming redundant manipulators. Saveriano et al. [2015] propose a novel framework for learning and refining motion primitives from kinesthetic teaching. Given an operational primitive, through a Task Transition Controller the operator can at any time switch to the kinesthetic teaching mode by simply starting to guide the robot end-effector. The external force from the operator is detected, and the system automatically switches from execution to a motion refinement mode (teaching mode). By providing multiple operator demonstrations (refinements) of each motion primitive, the system incrementally refines the stored motion primitive. Kramberger [2014] compares kinesthetic teaching to a haptic device and a magnetic tracker for instructing force-based robot skills. In an experiment, all three HRI methods are used to perform a complex peg-in-hole operation while the forces exerted on the peg are recorded. Kramberger concludes that the most robust and repeatable input is obtained through kinesthetic teaching; thus, this method is superior in transferring human knowledge to robot skills. Muxfeldt et al. [2014] conducted an extensive user study comparing the performance of kinesthetic teaching to that of learning by demonstration. The study included 78 participants of varying age. The background and technical knowledge of the participants are not elaborated.
In the study, each participant performs four different industrial assembly tasks, and for each task the participant uses kinesthetic teaching, marker-tracked manual assembly, and manual assembly with a marker-tracked handle. The performance measures of the study are contact forces and contact duration. The results show a longer contact duration and higher contact forces when using kinesthetic teaching, which is attributed to the inertia and damping of the robot arm holding the object. Thus, contrary to Kramberger, Muxfeldt et al. conclude that kinesthetic teaching is not a viable method for transferring human assembly strategies to robots. Although the contact forces during the instruction are a valid expression of the instructor's performance, the contact forces of the obtained robot program are often of higher interest. Muxfeldt et al. do not discuss the performance of the obtained program, nor how the methods would be exploited for task programming in industrial settings. The study showed that in kinesthetic teaching the performance improved by more than 40% already in the second attempt, indicating a positive learning effect amongst the participants.
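The mode-switching behavior described for the framework of Saveriano et al., where detected operator contact switches the robot from execution to kinesthetic teaching, can be sketched as follows. The threshold value, the class, and all names are invented for illustration; a real implementation would estimate external forces from joint-torque sensing and filter out process forces.

```python
# Sketch of a force-triggered switch between execution and kinesthetic
# teaching modes; loosely inspired by force-based mode switching, with
# invented threshold and names.

FORCE_THRESHOLD_N = 8.0  # assumed magnitude indicating operator contact

class ModeSwitchingController:
    def __init__(self):
        self.mode = "execution"
        self.demonstration = []   # poses recorded while teaching

    def step(self, external_force_n, current_pose):
        # Operator contact is inferred from the external force estimate.
        if self.mode == "execution" and external_force_n > FORCE_THRESHOLD_N:
            self.mode = "teaching"
        if self.mode == "teaching":
            # While teaching, the guided trajectory is recorded so it
            # can later refine the stored motion primitive.
            self.demonstration.append(current_pose)
        return self.mode

ctrl = ModeSwitchingController()
ctrl.step(0.5, (0.4, 0.0, 0.3))   # no contact: stays in execution
ctrl.step(12.0, (0.4, 0.1, 0.3))  # operator grabs the arm: teaching
ctrl.step(2.0, (0.4, 0.2, 0.3))   # remains in teaching, records pose
```

The essential property is that no external sensing is required: the switch is triggered entirely by the robot's own force estimate, which is what makes kinesthetic teaching practical on the shop floor.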



2.3

Hardware Reconfiguration

Re-programming the robot is an inevitable step in transitioning the robot to a new task. The necessity of hardware reconfiguration, however, depends on the variation between the old and the new task. An analysis of more than 560 manual tasks suited for mobile collaborative robots has shown a significant variation in tasks, even within the same industry [Bøgh et al., 2012a]. Consequently, hardware reconfiguration will eventually become necessary as the collaborative robot takes on more and more tasks. The hardware reconfiguration of a collaborative robot involves two primary tasks: firstly, a configuration suitable for the given task must be selected; secondly, the affected parts of the robot must be exchanged. This section presents research related to both tasks. To allow quick and easy exchange of hardware components, a common approach is to orchestrate the system (here a robot) into modules following a well-defined architecture with standardized interfaces, e.g. as in RMS, HMS, and EPS/EAS. Beyond the physical implications of exchanging hardware modules, the control system of the robot is also affected as active components are removed and added. Thus, standardized interfaces to support hardware module exchange are perhaps even more challenging in terms of the control system of the robot [Bi et al., 2008a; ElMaraghy, 2006; Wiendahl et al., 2007]. Section 2.3.3 presents related research on the control system aspect of modular, reconfigurable manufacturing equipment supporting quick and easy exchange of modules. Selecting a set of modules suitable for a given task can be supported by software configuration tools possessing the appropriate knowledge to aid the user. Related research on the task of selecting modules, configuration tools, and knowledge modeling for such tools is presented in Section 2.3.4.
Before presenting related research on module selection and module exchange, Section 2.3.1 briefly describes the ambiguous use of the term configuration and its multiple definitions.

2.3.1 Definition of Configuration

The concept of configuration is described appropriately by Sabin and Weigel [1998] as: "...a special case of design activity with two key features: The artifact being configured is assembled from instances of a fixed set of well-defined component types, and components interact with each other in predefined ways." In other words, configuration is the task of orchestrating well-defined modules into a target entity following a well-defined architecture. However, the term configuration is used ambiguously in literature with three primary semantic interpretations:


Selecting components (verb). Configuration is used to denote the abstract task of selecting a set of components.

Aggregating components (verb). As in [Sabin and Weigel, 1998], configuration is used to denote the overall task of aggregating modules. This includes both selecting components and physically embodying the selection.

A selection of components (noun). The outcome of the component selection process or the component aggregation process is referred to as a configuration. Thus, configuration denotes the specific set of components of an entity, abstract or physical.

Despite the ambiguity, all three meanings will be used in this PhD thesis due to their common use in literature. In most cases, the semantic meaning will be given from the context. Otherwise, the text will explicitly state the intended semantic meaning.

2.3.2 Reconfigurable Assembly Systems

The domain of RMS advocates manufacturing flexibility through reconfiguration based on modular architectures (see Section 1.1). However, as described by Koren and Shpitalni [2010], reconfigurability spans multiple levels from system-wide level to machine level, see Figure 2.5. According to [Koren and Shpitalni, 2010], a key enabling technology for RMS is reconfigurability on the machine level. The paradigm of RMS has been proposed within several machine/equipment classes, including reconfigurable machine tools (RMT), reconfigurable fixturing systems (RFS), reconfigurable assembly systems (RAS), reconfigurable inspection and calibration systems (RICS), and reconfigurable material-handling systems (RMHS) [Bi et al., 2008b]. Robotic solutions are one of the primary tools used in RAS [Bi et al., 2008b]. Research within RAS has focused on both cell-level reconfiguration and machine-level reconfiguration. On the machine level, the robot arm (or manipulator) itself is often subject to modularization and reconfiguration [Benhabib and Dai, 1991; Bi et al., 2010; Cohen et al., 1992]. In this approach, called modular robotic systems (MRS), the manipulator is segmented into individual joint-modules and link-modules which can be combined to form custom kinematic structures tailored for a particular application. The general approach in MRS is to develop custom modules, for which the main research challenges are the architectural and mechanical design of the modules, deriving the kinematics, robust control systems, and the structural, mechanical properties of the system [Bi et al., 2010; Carbonari et al., 2014; Valente, 2016]. Hsieh [2003] proposes a RAS composed of MRSs built from industrial commercial-off-the-shelf (COTS) pneumatic components and PLCs.
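The core idea of MRS, that re-arranging joint- and link-modules yields a new kinematic structure without re-engineering the control code, can be illustrated with a minimal planar forward-kinematics sketch (the module types and values below are hypothetical, not taken from the cited works):

```python
from dataclasses import dataclass
import math

# Hypothetical module types for a planar modular arm: a joint module
# contributes a rotation, a link module a translation along the
# current direction.

@dataclass
class JointModule:
    angle: float  # current joint angle in radians

@dataclass
class LinkModule:
    length: float  # link length in metres

def forward_kinematics(modules):
    """Compose the end-point pose of a planar chain of modules."""
    x, y, theta = 0.0, 0.0, 0.0
    for m in modules:
        if isinstance(m, JointModule):
            theta += m.angle
        elif isinstance(m, LinkModule):
            x += m.length * math.cos(theta)
            y += m.length * math.sin(theta)
    return x, y, theta

# One configuration tailored for a particular application.
arm = [JointModule(math.pi / 2), LinkModule(0.4),
       JointModule(-math.pi / 2), LinkModule(0.3)]
x, y, _ = forward_kinematics(arm)
print(round(x, 3), round(y, 3))  # prints: 0.3 0.4
```

Reordering or swapping modules in the list yields a different kinematic structure while `forward_kinematics` stays unchanged, which is precisely the reconfiguration property MRS aims for.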



[Figure removed due to copyright]

Fig. 2.5: Levels of reconfiguration in RMS. Reconfigurable machines are a key enabling technology for RMS [Koren and Shpitalni, 2010]. Figure adapted from [Abele et al., 2007].



On the cell level, research on reconfiguration of robotized RAS typically views the robot as a proprietary device adapted into a module. Arai et al. [2000] and Sugi et al. [2003] both present examples of reconfigurable robotic assembly cells where robots (robot arms with tooling) are used as modules which can be re-arranged inside the robot cell. Thus, contrary to the machine level, the robot arm itself is not reconfigured. McKee et al. [2001] propose an architectural concept for modular robots called the MARS model. The concept defines the core data and knowledge attributed to modules and a framework for modeling the connection between modules. Lohse et al. [2006] propose an architectural and knowledge design approach for RAS. On a structural level, modules are asserted with ports which define their structural interfaces towards other modules. Combined with the functional knowledge on each module, higher-level functions are achieved by combining modules while adhering to the structural constraints given by the ports. [Lohse et al., 2006] primarily focus on the conceptualization and design aspects, albeit the proposed design method is intended for COTS components. Although RAS research has addressed both machine- and cell-level reconfiguration, research on exploiting COTS components in the design of modular, reconfigurable robotics is very limited, and no current research focuses on the mechanical embodiment of modules composed of COTS components. However, for industry, the option to use COTS components is desirable, as these components are both readily available in broad variety and certified for use in a production. To use COTS components for a modular and reconfigurable robot, the COTS components must first be adapted into modules, which requires both physical and control interfaces to be adapted.

2.3.3 Plug and Produce

The plug and produce paradigm represents the idea of quick and seamless exchange of hardware modules. It was first proposed by Arai et al. in 2000 as a response to the need for agile manufacturing systems. The term is derived from the plug and play concept of the IT world. In research, there is no clear definition of or consensus on the term plug and produce. However, it commonly denotes the idea of quick (un)plugging of components from a manufacturing system with little to no re-programming and reconfiguration of the remaining system. Research in plug and produce primarily addresses the conceptual aspects and the control aspects of quick module exchange as opposed to the physical challenges [Antzoulatos et al., 2014; Arai et al., 2000; Naumann et al., 2007; Onori et al., 2012; Rocha et al., 2014; Zimmermann et al., 2008]. A modular hardware architecture giving standardized interfaces provides an enabler for exchanging modules and thus reconfiguration. From a control perspective, standardized interfaces are equally important when dealing with active modules. Furthermore, new approaches to the control and communication structure are often needed. One approach is agent-based systems


in which active modules become independent agents which can provide and request functionality to the rest of the system. Agent-based systems, or multi-agent systems, originate from the computational domain; however, agent-based approaches have been proposed in many different aspects of manufacturing enterprises [Monostori et al., 2006]. Within the domain of manufacturing equipment, multi-agent systems have been proposed on several technical granularities [Onori and Barata, 2009]. Some are focused on the line or system level, where the agents thus become individual stations or cells. Others focus on the machine level with individual devices as agents; the latter is closest to the structure of a collaborative robot. A multi-agent system architecture defining both hardware and control interfaces of assembly systems was developed in the EU FP6 project EUPASS [2004]. The architecture covers automation equipment used for precision assembly in electronics manufacturing; this includes robots and robot modules [Ferreira et al., 2010]. Extending the results of EUPASS, the EU FP7 project IDEAS [2010] developed an integrated agent control board used as a proxy to adapt commercial components into agents [Onori et al., 2012]. The proposed framework and agent controller are tested in real-world experiments which demonstrated the feasibility of the agent-based approach for reconfiguration of manufacturing systems [Onori et al., 2012]. Ferreira et al. [2012] present an evolvable assembly system separating hardware and control configuration. Two configurator tools are implemented: a Mechatronic Physical Configurator for configuring the hardware and a Mechatronic Process Configurator for task programming. The hardware configuration is performed by a system integrator and results in a configuration description in the AutomationML format [AutomationML Consortium, 2016]. Afterwards follows the building phase, where the system is physically put together.
A concept of skills is introduced as control modules denoting actions performed by individual agents (first presented in [Ferreira and Lohse, 2012]). Combining the skills of multiple agents results in a composite skill, where the agents are grouped in a coalition and serve as a single agent. The EU FP7 project PRIME [2012] also proposes a multi-agent system architecture with both standardized hardware and control interfaces as a means of developing highly adaptable and reconfigurable plug and produce systems. However, in PRIME explicit focus is given to adapting legacy components into agents [Rocha et al., 2014]. In [Antzoulatos et al., 2014] the multi-agent architecture used in PRIME is conceptualized and described. It orchestrates the system into modules, where each module contains one or more agents. A central plug and produce management module with several internal agents (see Figure 2.6) allows both hot and cold plug and produce; that is, hardware exchange without shutting down the system (hot), and hardware exchange with a system shutdown (cold). The architecture includes the concept of component agents. A component agent is associated with each specific component and serves as a


[Figure removed due to copyright]

Fig. 2.6: Agent-based architecture used in the EU FP7 project PRIME [2012]. Figure from [Antzoulatos et al., 2014].

"proxy" facilitating the communication between the multi-agent system and the component. Despite their high relevance to this work, EUPASS, IDEAS, and PRIME all focus on multi-agent systems on a manufacturing system level. All three projects include robotics in their architecture, but they do not present a detailed approach for collaborative robotics. Zimmermann et al. [2008] present a three-layered control architecture for robotics and automation systems to support plug and produce. The three layers are communication, configuration, and application. The lowest layer, communication, contains the low-level bus communication with the various devices and handles the connection and disconnection of devices. The communication layer implements an abstraction upon the specific communication implementations which makes the layers above independent of the specific devices and their communication structure. Each device has an associated electronic description which is used in the configuration layer to configure the given device. Based on the description, the configuration layer also synthesizes the functionalities and resources available in the system. The resources and functions available are used in the top layer, application, to perform specific manufacturing tasks. The concept of service-oriented architectures (SOA) proposes an abstract, general interaction scheme between distributed actors. In comparison to agent-based systems, SOA does not concern how the distributed actors are realized or orchestrated. It is a concept with the mechanisms to facilitate service-based interaction between actors. Thus, SOA and agent-based architectures


do not oppose each other, but rather complement each other, as demonstrated by Herrera et al. [2008]. The term SOA originates from the IT world; however, it has also been adopted in manufacturing systems research [Colombo et al., 2005; Estrem, 2003; Jammes et al., 2005]. Colombo et al. [2005] propose SOA on the device level using the Devices Profile for Web Services (DPWS) [OASIS, 2009] to provide a standardized communication interface between vendor-specific devices. In simple systems, this could conceivably make plug and produce possible. The proposed concept is illustrated through a dose-maker machine which is constructed by aggregating components on multiple levels. When aggregating modules on one level, an orchestration engine is used to unify the control and thereby represent the aggregated modules as a single module on a higher level. Rooker et al. [2009] present the conceptual outline of a reconfigurable control framework using an agent-based approach for automation systems. The framework separates the low-level, real-time control of each device from the higher-level inter-module communication. As opposed to similar research, Rooker et al. propose the use of augmented reality as the human-machine interface. Despite the use of novel and intuitive interfaces, the proposed framework remains an engineering tool.
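The orchestration idea described for the dose-maker machine, where an engine unifies the control of aggregated modules and re-exposes them as a single module on a higher level, can be sketched as follows. This is a minimal illustration with hypothetical device and service names, not the DPWS-based implementation used by Colombo et al.:

```python
from functools import partial

class Device:
    """A device exposing a set of named services (hypothetical interface)."""
    def __init__(self, name, services):
        self.name = name
        self._services = services  # service name -> callable

    def services(self):
        return set(self._services)

    def invoke(self, service, *args):
        return self._services[service](*args)

class Orchestrator(Device):
    """Unifies the services of several devices and re-exposes them,
    so the aggregate appears as a single device on a higher level."""
    def __init__(self, name, devices):
        merged = {}
        for dev in devices:
            for svc in dev.services():
                merged[svc] = partial(dev.invoke, svc)
        super().__init__(name, merged)

feeder = Device("feeder", {"feed_part": lambda: "part fed"})
doser = Device("doser", {"dose": lambda ml: f"dosed {ml} ml"})

# The aggregate is itself a Device and could be orchestrated further.
machine = Orchestrator("dose_maker", [feeder, doser])
print(sorted(machine.services()))  # ['dose', 'feed_part']
print(machine.invoke("dose", 5))   # dosed 5 ml
```

Because the orchestrator satisfies the same interface as a device, aggregation can be repeated level by level, mirroring the multi-level composition of the dose-maker example.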

2.3.4 Knowledge-Based Configuration

Where research on plug and produce primarily focuses on making module exchange in manufacturing systems intuitive and expedient, research on knowledge-based configuration focuses on the task of finding a suitable set of modules for a given task. The definition of the term configuration in Section 2.3.1 states that configuration is the process of selecting modules from a finite, well-defined set following a well-defined aggregation method. Thus, the task of configuration becomes the problem of finding a set of components satisfying both the user requirements and the aggregation constraints [Sabin and Weigel, 1998]. Determining a feasible solution (configuration) for a manufacturing system requires equipment (system), process, and product knowledge according to [Rampersad, 1994]. In the industry, shop floor operators often possess detailed process and product knowledge as these are directly task-related. Most shop floor operators, however, lack detailed robot equipment knowledge as this is not as tightly coupled to the task. Furthermore, as the number of modules increases, navigating the solution space to find a feasible or perhaps even optimal configuration becomes increasingly difficult. In knowledge-based configuration, the configuration process is represented as a mathematical problem which can be formulated using various mathematical and logical representations and either solved manually or solved automatically using computer software. Software systems for solving configuration problems are known as configuration systems or configurators. The autonomy and


scope of interaction of the configurators are highly dependent on the application and the representation of knowledge, requirements, and constraints. In general, however, configurators serve the purpose of aiding their users in navigating the solution space and selecting a feasible configuration [Kruse and Bramham, 2003]. Today, configurators are strongly represented in the product domain, where they extend into numerous applications and markets. The exploitation of configurators in manufacturing equipment (or robotic systems) configuration is significantly lower. However, a manufacturing system can to a large degree be regarded as a complex product. Thus, the approach used in product configuration can be exploited in reconfiguration of manufacturing systems.

Knowledge Modeling for Configuration

To solve a configuration problem, two types of knowledge are needed: domain knowledge and problem solving knowledge. That is, knowledge about the entity being configured, its architecture, and its components; and knowledge on how to find a feasible solution. The domain knowledge often consists of a taxonomy of principal components, their attributes, and potential connections and aggregations. The knowledge needs to be modeled and formalized in order to be accessible to a software configurator. Sabin and Weigel [1998] classify knowledge representations for configurators as:

• Case-based
• Rule-based
• Model-based

Case-based configurators use knowledge on previous specific instantiations to suggest a suitable configuration. Thus, the knowledge consists of previously chosen configurations, and the configuration of some application entails finding the best matching previous configuration. [Sabin and Weigel, 1998]

Rule-based configurators are based on a set of conditional rules (if-then clauses) representing both the domain and problem solving knowledge. Solving a configuration problem is done by evaluating the rules in a predefined, consecutive order. Although rule-based configurators have been implemented successfully [Marcus et al., 1987; McDermott, 1982], they are considered inadequate for larger configuration problems with changing domain knowledge. The primary reasons are that the domain knowledge is integrated directly into the problem solving (i.e. the rules), and that the rules must be evaluated in a predefined, consecutive order, which makes it complex to ensure consistency when extending or modifying the rules [Sabin and Weigel, 1998].

The model-based representation provides a separation between the domain knowledge and the solving knowledge. Effectively, domain knowledge can be modeled independently of the specific configuration task, allowing for significantly


easier reuse and maintenance [Hotz et al., 2014]. The term model-based covers a variety of knowledge representation types. The three most commonly used are constraint-based, resource-based, and logic-based knowledge representations [Blecker et al., 2004].

In the constraint-based approach, domain knowledge is modeled in terms of domains and variables, and the input requirements are formulated in terms of constraints upon the variables and domains. Hereby, the configuration task can be regarded as a constraint satisfaction problem, which is a well-known problem with well-known solving methods [Hotz et al., 2014].

The resource-based approach models the domain knowledge as a set of components providing or consuming resources. The input requirements for the configuration task are modeled as a set of resource demands. Solving the configuration task thus becomes a balancing of the resources amongst the components and the input requirements [Sabin and Weigel, 1998].

In a logic-based approach, the domain knowledge and requirements are represented using a logic expression language. The two most often used representations are description logic and first-order logic [Hotz et al., 2014]. Knowledge in description logic (DL) is formulated as axioms (statements) encoded in so-called triples of the form "subject predicate object"; for instance, "Motorcycle isA Vehicle" and "Wheel isPartOf Motorcycle". Note that the former defines a hierarchical structure or taxonomy, and the latter defines a relational structure. The orchestration into simple axioms provides an object-oriented knowledge representation readable to both humans and machines. DL can be used to describe both conceptual knowledge and specific knowledge. Conceptual knowledge describes knowledge of a particular domain, but does not include information on specific instances or individuals. The specific knowledge expresses concrete individuals which are instantiations of concepts specified in the conceptual knowledge.
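The triple form described above can be made concrete with a toy example. The sketch below stores axioms as subject-predicate-object triples and answers taxonomy queries by following isA relations transitively; the concepts are illustrative only:

```python
# Toy DL-style knowledge base: axioms as (subject, predicate, object)
# triples. "isA" triples form the taxonomy, "isPartOf" the mereology.
triples = {
    ("Motorcycle", "isA", "Vehicle"),
    ("SportsBike", "isA", "Motorcycle"),
    ("Wheel", "isPartOf", "Motorcycle"),
}

def is_a(subject, concept):
    """True if `subject` is subsumed by `concept`, directly or transitively."""
    if subject == concept:
        return True
    return any(s == subject and p == "isA" and is_a(o, concept)
               for (s, p, o) in triples)

print(is_a("SportsBike", "Vehicle"))  # True: SportsBike -> Motorcycle -> Vehicle
print(is_a("Wheel", "Vehicle"))       # False: isPartOf is not a taxonomy relation
```

A reasoner over a real OWL T-Box performs the same kind of subsumption inference, only over a far richer set of axiom types.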
The conceptual knowledge and the specific knowledge are regarded as two separate containers: a T-Box for the conceptual knowledge and an A-Box for the specific knowledge. Due to the nature of the T-Box, with its conceptual description of a particular domain, it is often referred to as an ontology [Chandrasekaran et al., 1999].

Ontology-Based Knowledge Representation

The term ontology originates from philosophy and denotes systematic explanations of being. That is, ontologies are structured descriptions of certain concepts in the world. The term has been adopted in computer and information science for the past 30 years as a common term for efforts to model the real world in computer-facilitated representations [Hepp et al., 2008]. The fact that ontologies describe the hierarchical structure of concepts implies that ontologies can describe knowledge on any level of abstraction. Ontologies can be structured in an object-oriented manner where one ontology can subsume and further detail the knowledge defined by another ontology. This


is a useful feature in terms of knowledge sharing. For example, one ontology could define concepts generic to a certain domain. This ontology could then be subsumed when creating application-specific ontologies within that domain. Hereby, the generic or core ontology serves as common ground for sharing ontologies within the domain. [Gruber, 1995]

Several formats are available for implementing an ontology. One format that has been widely employed is the Web Ontology Language (OWL) proposed by the World Wide Web Consortium (W3C) [OWL Working Group, 2004]. In 2009 the OWL2 standard was released as an update to the original OWL format [OWL Working Group, 2009]. OWL is originally intended for the semantic web paradigm, which aims for knowledge sharing over the internet [Allemang and Hendler, 2011]. However, the OWL and OWL2 formats have been adopted in several other domains; configuration for one.

Ontology-Based Modeling for Configuration

Colace et al. [2009] present a configurator approach using ontologies based on OWL to represent configuration knowledge. The customer can provide product requirements in natural language which are then mapped to the technical product configuration. The configuration process is performed in three steps. In step one, the user requirements are mapped to product functions using function knowledge stored in a function ontology. The output hereof is a set of product functions covering the user requirements. In step two, the functions are mapped to product components using product knowledge stored in a component ontology. The output of step two is a technical product configuration. The last step is an optimization of the product configuration performed using a Bayesian network.

Yang et al. [2009] also use OWL to represent an ontology used for configuration. The OWL ontology containing the configuration knowledge is amended with a set of rules to achieve a higher degree of expressiveness in modeling the constraints.
The rules are formatted using the Semantic Web Rule Language (SWRL). In the implementation of their configurator, Yang et al. use the JESS (Java Expert System Shell) library. The configuration model and the constraints are translated from the OWL axioms and SWRL rules into facts and rules in JESS. To address the reusability of ontologies for configuration, Yang et al. propose a General Configuration Ontology (GC_ontology). The purpose of this ontology is to define the common concepts of the configuration domain. Thus, the GC_ontology is an application-independent ontology from which domain-specific ontologies can be subsumed. The content of the GC_ontology is a number of concepts (subjects) and a number of relations (predicates); e.g., concepts such as Component, Constraint, Port, and Resource; and relations such as isA, hasPart, and consumeResource. The feasibility of the proposed approach is demonstrated in [Yang et al., 2008] using a construction drilling


machine as an example and in [Yang et al., 2009] using a PC hardware configuration example originally used by [Felfernig et al., 2001]. Similar to the GC_ontology, Soininen et al. proposed a general ontology for the configuration domain in 1998. The proposed ontology synthesizes several previous modeling approaches for technical products, and the resulting ontology includes structure, connection, resource, and function aspects. Although including all four aspects does to some extent produce redundant modeling, it gives an expressive model from several viewpoints. According to Soininen et al., this is important as it helps "communicating" the ontology between various domain experts. The structure, resource, and connection aspects all cohere to the technical modeling of the product. The function aspect provides an alternative modeling space where the product is represented by its functions. The purpose of the alternative modeling space is to provide a representation suitable for customers or non-technical persons. After configuring a product in terms of functions, the technical configuration is performed to satisfy the chosen functional configuration.

Ontology-Based Modeling of Manufacturing Knowledge

As originally proposed by Gruber [1995], increased sharing of ontologies can be achieved by separating the ontologies into various levels of abstraction. Literature often discusses generic or core ontologies as enablers for seamless sharing of ontologies within a domain; e.g. as proposed by Yang et al. [2009] for the product domain. Gomez-Perez et al. [2006] suggest six ontology levels ranging from "universal" (e.g. units, time, space) to domain-specific (e.g. robots). A recent review of research exploiting ontologies for manufacturing purposes is presented by Ramos [2015].
Based on the reviewed work, current research is classified into the following three categories: 1) development of specific applications, 2) development of domain ontologies, and 3) proposals of core ontologies. The first contains research on various applications in which ontologies are used as a technology. The second contains research proposing ontologies with knowledge for a particular domain. The last contains research proposing general, core ontologies. In summary of current research, Ramos describes several shortcomings of the OWL format when it comes to modeling complex products or reasoning over procedural knowledge. However, as the OWL format has become the most commonly used format in both research and practical applications, Ramos concludes that the OWL format is desirable in future research, and that the shortcomings can be addressed by coexistence with other logic languages.

The implications of developing general and domain ontologies for use with equipment from multiple vendors are discussed in [Deshayes et al., 2007]. They propose a four-level ontology architecture with an upper-level core ontology and, below it, a domain ontology. The second lowest level is a standards ontology


comprising knowledge of various domain-relevant standards. As argued by Deshayes et al., standards serve as a common descriptive measure for comparing similar products from various vendors. As a result, the industry uses standards for classifying, specifying, and selecting equipment from diverse vendors. On the lowest level, vendor-specific ontologies are used to capture knowledge on specific vendors' product portfolios, terminologies, and classifications.

A general domain ontology for assembly is proposed in [Lohse et al., 2006]. The assembly domain knowledge includes product, process, and equipment concepts as originally proposed in [Rampersad, 1994]. The primary contribution of Lohse et al. [2006] is the proposal of a structure for an assembly equipment ontology following the function, behavior, and structure formalism, see Figure 2.7. The equipment ontology is specifically tailored for reconfigurable manufacturing equipment and thus supports reconfiguration activities.

[Figure removed due to copyright]

Fig. 2.7: Concepts of the equipment ontology proposed by Lohse et al. [2006].

Recently, the IEEE Standard on Ontologies for Robotics and Automation, created by the IEEE Robotics and Automation Society, was officially approved. This standard defines a generic (core) ontology for the robotics and automation domain which clearly states terminology, taxonomy, attributes, relations, and basic mereology for the domain. The core ontology, called the core ontology for robotics and automation (CORA), is subsumed from the suggested upper merged ontology (SUMO) [Niles and Pease, 2001], which is regarded as a "universal" ontology defining concepts at the top-most abstraction level. The standard also defines a methodology for creating ontologies subsumed from CORA.

A key difference in modeling knowledge for the manufacturing domain as opposed to the product domain is the description of processes and the associated work flow. The ProdProc framework presented in [Campagna and Formisano, 2013] defines a knowledge modeling formalism and terminology incorporating


both physical products and their production processes. Knowledge on physical entities and processes is modeled separately, but relations are induced through constraints between concepts of the two. Although not demonstrated in the paper, the coupling of process and product modeling is argued to help detect and avoid unforeseen configuration impossibilities. The modeling capability of the ProdProc framework seems intuitive and straightforward. The expressiveness of the framework in terms of process flow control is only briefly discussed in [Campagna and Formisano, 2013], but it does not include an explicit representation of parallel process flows.

A modeling framework explicitly suited for technical processes is presented in [Hai et al., 2011]. A modeling language called the work process modeling language (WPML) is proposed. Although called a language by Hai et al., WPML is in essence a generic ontology for process modeling. The software tool called work process modeling system (WOMS-plus) provides a graphical modeling tool for creating application ontologies using the WPML.

In the context of EPS/EAS, Ribeiro et al. [2008] propose an ontology using OWL, SWRL, and JESS as means of implementation. In their approach, the ontology stores information relevant to the EPS/EAS system, albeit with a strong focus on knowledge on equipment and its functionalities. In particular, Ribeiro et al. propose a set of common properties used universally to identify and describe any equipment. The concept of skills is used as a general term to denote equipment functionality on any level. In [Ribeiro et al., 2008], the knowledge related to a gripper with the skill grasp is exemplified. SWRL is used to describe the rules and directives governing the EPS/EAS system and is thus used to determine which agents can form coalitions and collaborate. Zander and Awad [2015] demonstrate how DL and OWL can be used to model knowledge on robotics.
They create an example ontology with explicit focus on modeling robotic components and their capabilities. Capabilities are then propagated along compound components; hence, components gain the capabilities of their respective sub-components. By combining this with a set of role inclusion axioms, complex capabilities can be derived from basic capabilities. This relates well to the concept of skills used in task-level programming, which represents complex capabilities obtained from basic primitives (see Section 2.2).

Knowledge-Based Configuration of Manufacturing Systems

As presented by Ramos [2015], one class of research on ontologies for manufacturing purposes is the development of specific applications. This section presents related work exploiting ontologies and other knowledge representations as a technology for configuration of manufacturing systems. Research originating from Lund University in Sweden has proposed a knowledge integration framework (KIF) for synthesizing and integrating knowledge


from various sources [Persson et al., 2010]. The motivation is to provide a centralized knowledge resource library for higher-level semantic and symbolic planning and reasoning while drawing advantage from the large amount of technical knowledge already available in current domain-specific formats. Currently, KIF is capable of integrating XML-based sources, and especially the AutomationML format has been used extensively [Björkelund et al., 2011a]. Knowledge on process, product, and equipment is gathered and modeled as triples stored in an OWL ontology [Björkelund et al., 2011b; Malec et al., 2007]. Equipment and process knowledge is linked through the use of skills, which represent generic, low-level functions provided by the equipment and used to perform processes [Björkelund et al., 2011b]. Additional knowledge on equipment includes a device library with descriptions of known specific devices and a taxonomy classifying these devices [Malec et al., 2007]. The knowledge stored in KIF is used for automatic task-level programming (planning) and for selecting appropriate hardware devices [Björkelund et al., 2011a]. For the latter, KIF exploits the relation between devices and skills to deduce which devices are necessary given the required set of skills for a given task. However, it remains the task of the user to select specific devices and to ensure an appropriate configuration. Björkelund et al. [2011a] briefly discuss the implications of industrial embracement of KIF. They stress that KIF is a top-level framework exploiting the proprietary control of commercial devices, e.g. the motion controller of a robot arm. However, this requires extension of the proprietary device control to allow communication with KIF, as demonstrated with an extension to ABB RobotStudio in [Björkelund et al., 2011a]. Naumann et al. [2007] present a framework for configuring robot cells using a plug and produce approach.
An interconnector module is introduced to separate the high-level task control from the low-level device control. Similar to KIF, the interconnector module provides an abstraction upon the device commands in terms of skills and thereby makes the task control independent of specific device syntax. Each device carries a functional description in terms of skills and a behavior description represented as a state-chart for each skill. A process can similarly be defined by a set of required skills and a behavior sequence for each skill in terms of state-charts. By matching the skills provided by the devices and the skills required by a given process, a preliminary set of devices can be chosen. Furthermore, based on the state-charts representing the behavior of skills, discrete process planning can be performed.

Alsafi and Vyatkin [2010] present laboratory experiments using an ontology-based reconfiguration agent to autonomously reconfigure a modular, intelligent manufacturing system in response to changing requirements. The knowledge is modeled in OWL format, implemented using JENA, and reasoned on using the Pellet reasoner [Sirin et al., 2007].
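The matching step shared by KIF and the interconnector framework, in which the skills required by a process are compared against the skills provided by candidate devices, can be sketched as a simple covering routine. The following is an illustrative sketch only; the function name, the device catalog, and the skill names are invented here and do not reflect the actual KIF or interconnector implementations:

```python
def select_devices(process_skills, device_catalog):
    """Greedily pick devices until the required skills are covered;
    return None if the requirement cannot be covered."""
    required = set(process_skills)
    chosen = []
    for device, provided in device_catalog.items():
        needed = required & set(provided)
        if needed:               # device contributes at least one missing skill
            chosen.append(device)
            required -= needed
    return chosen if not required else None

# Hypothetical catalog mapping devices to the skills they provide.
catalog = {
    "robot_arm": {"move_linear", "move_joint"},
    "gripper":   {"grasp", "release"},
    "camera":    {"capture_image"},
}

# A handling process needing motion and grasping capabilities:
print(select_devices({"move_linear", "grasp", "release"}, catalog))
# -> ['robot_arm', 'gripper']
```

As in KIF, such a routine only yields a preliminary candidate set; choosing among alternative devices and validating the final configuration remains a task for the user.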


2.4 Conclusion

After two decades of research, only the collaborative robot arms have been successfully adopted by industry. However, these are typically regarded as merely enhanced traditional robot arms, and they have not broken with the conventional industrial use of robots in which highly specialized personnel conduct the set-up, configuration, programming, and maintenance. Thus, the industrial need for collaborative robots has still not been fulfilled. Research on collaborative robots tends to take two directions: truly collaborative robots solving a task in direct collaboration with the operator, and robot assistants (or co-workers) which are instructed and operated by the shop floor personnel but work alongside the human workers. This PhD focuses on the latter, where the primary challenges according to a European research roadmap are interaction and configurability. Research on the interaction challenge focuses on reducing the effort of programming the robot. In terms of configurability, there is a general lack of research on hardware configuration for industrial collaborative robots. Conventional online and offline robot programming methods are deemed unsuitable for collaborative robots because they are too time-consuming and require specialized training. Research on decreasing the effort in robot programming tends to focus on two approaches: automatic programming in terms of cognitive, context-aware robots, and manual programming in terms of improved usability through intuitive interfaces. Although automatic programming approaches naturally demand the least effort from the operator, they are highly dependent on accurate modeling of the environment, which is not always available. To reduce the effort in manual robot programming, a common approach is task-level programming, in which the programming is done in terms of task-related capabilities rather than low-level device commands. Prior research at AAU has already proposed the concept of skills as task-related modules.
However, research on utilizing skills to reduce the effort in manual robot programming is still needed. This PhD project will focus on manual robot programming. Several industrial robot arms do provide macros resembling task-related skills. However, these are often constrained to the functionality of the robot arm itself and thus cannot encapsulate capabilities of the robot system as a whole. In research, several authors have proposed skill-like concepts intended for manual robot programming, but the majority remain laboratory-tested concepts. Thus, to further mature the technology, research examining the feasibility of manual, task-level robot programming in industry is needed. Given the variety of tasks suitable for collaborative robots, hardware reconfiguration is deemed inevitable. However, research on hardware reconfiguration of collaborative robots is very sparse. Current research on RAS primarily focuses on either reconfiguration of the robot arm itself or on system-level configuration in which the robot is considered a proprietary module. Research on the
embodiment of modular robotic systems from COTS components is limited. Plug and produce is an interesting philosophy enabling seamless exchange of active modules, but applications within robotics are limited and do not extend into the robot composition itself. Thus, there is a need for research on plug and produce for collaborative robots. Software configurator tools have been successfully applied for product configuration in many diverse markets. Despite this, the use of configurators for manufacturing systems is limited, with no present research on configurators for collaborative robots. Thus, there is potential in research on how shop floor operators can benefit from configurators in selecting a feasible configuration. In summary, the state of the art in intuitive, manual robot programming is task-level programming using novel instruction methods such as kinesthetic teaching. However, the majority of related research resides at limited implementations in laboratory environments. Research on hardware reconfiguration of robots is in general limited, with no current research on enabling shop floor operators to perform the entire reconfiguration process.


Chapter 3

Research Hypothesis and Objectives

3.1 Hypothesis

In Section 1.4 the initiating research problem for this PhD project was stated. For convenience, it is repeated below:

Initiating Research Problem
How can reconfiguration of industrial collaborative robots to a new task be performed by shop floor operators?

Section 1.4 further emphasized the correlation between collaborative robots, robot programming, and hardware reconfiguration. These three topics formed the structure of the related work presented in Chapter 2. Based on the findings from the review of related research, the research hypothesis for this PhD project is formulated below:

Hypothesis
Reconfiguration of collaborative robots can be carried out by production personnel without robotics, mechanical, or programming expertise by deploying modularity in both hardware and programming and providing sufficiently intuitive configuration tools.

The hypothesis encompasses reconfiguration of collaborative robots as both hardware reconfiguration and robot programming. Despite a clear correlation between hardware reconfiguration and robot programming, the two will be treated as separate objectives in this PhD thesis. Thus, in extension of the hypothesis, the reconfiguration procedure proposed in this PhD thesis is presented in Figure 3.1. As seen from this figure, it is hypothesized that hardware reconfiguration precedes the robot programming. The hardware reconfiguration is conceived in two steps: first, a suitable configuration is chosen using a configurator tool, and secondly the robot is physically adapted accordingly in a plug and produce manner. Robot programming also consists of two steps;



Fig. 3.1: Hypothesized reconfiguration approach for collaborative robots including both hardware reconfiguration and robot re-programming.

first, skills are sequenced through a software tool to suit the given task, and afterwards the skills are parameterized through physical interaction with the robot. Hence, both hardware reconfiguration and robot programming consist of a selection step and an instantiation step.

3.2 Research Objectives

Based on the hypothesis, two main research objectives are derived. Each main objective is afterwards decomposed into several specific research objectives.

Main Objective 1: Skill-Based Robot Programming
Investigate how robot skills can be exploited in a task-level programming framework enabling non-experts to sequence and parameterize skills for industrial tasks.

Main objective 1 has been decomposed into the following research objectives:

1.1 Investigate how robot skills can be sequenced and parameterized manually using kinesthetic teaching.
1.2 Develop a task-level programming tool using the manual sequencing and parameterization of skills from objective 1.1.
1.3 Investigate how engineering expertise can be encapsulated in robot skills and thus be intuitive to use for non-experts.
1.4 Assess the usability of manual, task-level programming and the applicability of the approach in industrial practice.


Main Objective 2: Hardware Reconfiguration
Investigate how hardware reconfiguration of modular collaborative robots can be structured to enable non-experts to perform hardware reconfiguration.

Main objective 2 has been decomposed into the following research objectives:

2.1 Investigate and develop a configurator tool aiding the selection of a feasible hardware solution for a given task.
2.2 Investigate and develop a framework supporting exchange of active hardware modules following a plug and produce philosophy.
2.3 Investigate and develop a control scheme ensuring an efficient utilization of the module functionality.

3.3 Project Delimitation

The following delimitations are set for the PhD project:

Configuration Procedure The configuration procedure presented in Figure 3.1 forms the starting point for this PhD project. Thus, the project is delimited from research on alternative procedures and their potential implications.

Kinesthetic Teaching This PhD project researches intuitive, interactive methods for instructing collaborative robots. Several novel methods for interacting with robots exist; however, this project will focus only on kinesthetic teaching.

Safety Although very relevant to the topic of human-robot collaboration, this PhD project will not assess the safety aspects of the presented research.

3.4 Research Methodology

The overall research methodology used in this PhD project has been critical rationalism. The work has generally been dominated by an iterative approach, in which objectives are formulated, tested, and revised in a cyclic fashion until a satisfactory result is achieved. The starting points of the two main research objectives prior to the initiation of this PhD project were quite different, which in turn affected the research methodology applied to each objective. Research related to Main Objective 1 - Skill-Based Robot Programming had already proposed an initial concept of robot skills and suitable industrial applications.


Initial proof-of-concept industrial experiments with skill concepts had been carried out. Expressed in terms of technology readiness level (TRL) on the European Commission's scale from the Horizon 2020 programme [Horizon2020, 2014], the starting point related to main objective 1 was at TRL 3 at the initiation of this PhD study. In this PhD project, research on main objective 1 has been conducted using an iterative design approach, where knowledge is gradually obtained through short research cycles followed by a validation and a post-rationalization. During the PhD project, the TRL related to main objective 1 was brought to 7. Prior research related to Main Objective 2 - Hardware Reconfiguration had proposed a modular concept of a mobile collaborative robot. However, research on deploying the architecture for (re)configuration was not present, nor was research on control structures to support the modular, reconfigurable architecture. In terms of TRL, the research related to main objective 2 was at TRL 1 at the initiation of this PhD project. Consequently, the research on main objective 2 in this PhD project has been dominated by an exploratory approach. A key effort has been to explore the related research fields as a means of obtaining a clearer definition of the objective. This has been done through literature studies, proof-of-concept implementations, and preliminary experiments. During the PhD project, the TRL has been brought to 3. Although hardware reconfiguration precedes robot programming in the hypothesized reconfiguration approach (see Figure 3.1), robot programming is chosen as the first research objective due to its more well-established research starting point. This order of the research objectives also reflects the order in which research has been conducted in the PhD project.


Part II

Summary Report


Chapter 4

Skill-Based Programming

This chapter describes research on Main Objective 1 - Skill-Based Robot Programming. The proposed approach uses skills as the central task-related control modules from which a robot program is created. The general concept of skills along with several potential use cases is summarized in Section 4.1 based on the journal publication [Paper 1 | Pedersen et al., 2016]. One use of skills is in manual robot programming, which has been pursued in this PhD project. Section 4.2 summarizes the journal publication [Paper 2 | Schou et al., 2016] describing the application of the skill concept in manual robot programming. This includes the amendment of the skill concept with a manual parameterization method and the development of an intuitive skill-based programming tool called Skill Based System (SBS). Taking SBS as its starting point, Section 4.3 draws up conclusions from an assessment of the usability and training level required in the proposed skill-based programming paradigm, published in the conference paper [Paper 3 | Schou et al., 2013]. Section 4.4 describes the realization of skills, exemplified through three skills of varying complexity. The realization of an object recognition skill is based on the conference paper [Paper 4 | Andersen et al., 2016]. Finally, Section 4.5 presents the results from two real-world experiments deploying AIMMs in industrial production settings. The experiments are published in a conference proceeding [Paper 6 | Bøgh et al., 2014] and in a journal [Paper 5 | Madsen et al., 2015], respectively.

4.1 Robot Skills

The use of collaborative robots in flexible and changeable manufacturing settings results in frequent changeovers to new tasks. In this context, it is desired to move the robot programming task from highly trained personnel and engineers to the shop floor operators working alongside the robot. However, to accomplish this, faster and more intuitive robot programming methods are needed. One approach is to raise the level of abstraction in robot programming to more task-related operations, as done by the task-level programming
paradigm, see Section 2.2. Whether using an automatic or a manual robot programming approach, the application of software control modules with a task-related abstraction simplifies and expedites robot programming. In this PhD thesis the task-related control modules are referred to as skills; hence, they represent skills (capabilities) of the robot. As self-contained modules, skills can encapsulate expert knowledge and allow the shop floor operator to exploit this knowledge during robot programming. Paper 1: Robot Skills for Manufacturing: From Concept to Industrial Deployment [Pedersen et al., 2016] defines the concept of robot skills used in this PhD project.

4.1.1 Definition of Skills

From an operator perspective, robot skills are defined as intuitive, object-centered robot abilities [Paper 1 | Pedersen et al., 2016]. Thus, skills represent operations performed on physical objects. Hereby, the skills naturally become task-related operations in the view of the shop floor operator. In the view of the robot, skills are defined as effectuating a change in a set of state variables describing the knowledge the robot has of its surroundings [Paper 1 | Pedersen et al., 2016]. Thus, skills transform the world from one state to another, both in the physical world and in the robot's virtual model of the world. By combining the two definitions, a skill is said to effectuate a change to the state of one or more objects (work pieces) in the world. Consequently, moving the robot arm to a certain location does not alone qualify as a skill, since it does not change the state of any world objects. Similarly, detecting an object in the scene does not effectuate a state change for the physical object and thus does not qualify as a skill either.
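The distinction can be illustrated with a toy world model: an operation qualifies as a skill only if it changes the state of at least one world object. The world model, the object, its state variables, and the operations below are invented purely for this illustration:

```python
# Toy illustration of the skill definition: an operation is a skill only
# if it changes the state of at least one object in the world model.

def changes_object_state(operation, world):
    """Run the operation on a copy of the world and report whether any
    object's state variables changed."""
    before = {name: dict(state) for name, state in world.items()}
    after = operation({name: dict(state) for name, state in world.items()})
    return any(after[name] != before[name] for name in before)

world = {"rotor_cap": {"at": "fixture", "grasped": False}}

def pick(w):        # changes the object's state -> qualifies as a skill
    w["rotor_cap"]["grasped"] = True
    return w

def move_arm(w):    # moves the robot only, no object change -> not a skill
    return w

print(changes_object_state(pick, world))      # True
print(changes_object_state(move_arm, world))  # False
```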

4.1.2 Skill-Based Architecture

The skills constitute the middle layer of a three-layered architecture with tasks as the top layer and device primitives as the bottom layer, see Figure 4.1. As shown in Figure 4.1, skills are composed of device primitives, and tasks are composed of skills. The three-layered architecture allows for a clear separation between low-level device control (device primitives), task-related operations (skills), and the overall goal of the robot (tasks).

Device primitives The collaborative robot consists of several hardware devices bound together, typically including a robot arm, a gripper, a mobile platform, and sensors. Each of the devices provides low-level, device-specific functionality. The device primitives are representations of these device-level functions, omitting the implementation details. Thus, device primitives embrace abstract, generic device functionality, e.g. move robot arm linearly, grasp object, capture image, etc. Evidently, device primitives cover both sensing and manipulation functionalities.


Fig. 4.1: The three layers of abstraction used in the skill-based architecture. The skills form the middle layer constituting task-related operations of the robot as a combined entity. [Paper 1 | Pedersen et al., 2016]

Skills A robot skill is composed of both sensing and manipulation primitives combined to effectuate a certain behavior of the robot as one entity, e.g. pick object, place object, inspect quality, etc. In addition to the device primitives, skills contain various control, reaction, and decision algorithms to allow complex behaviors, as opposed to traditional macros. Skills are generic modules whose operations are configurable through a set of parameters. During robot programming, the parameters are specified, and hereby the parameterized skill becomes specific to a given scenario.

Tasks A robot task consists of one or more skills with their parameters specified to match the given physical task. Consequently, a generic task does not exist. Programming a task is done by concatenating skills and configuring their parameters to form the combined operation necessary to reach the desired goal.

4.1.3 Content of Skills

As mentioned earlier, a skill defines a generic operational pattern which is adaptable through a set of parameters. In addition, the skill also defines the conditional context of its operation, which includes the requirements to the initial world state allowing its execution and the predicted output state of the execution. Hereby, all the necessary information is contained within the skill; hence, skills are self-sustained. The conceptual model of a generic skill in Figure 4.2 illustrates the integral components of a skill operation. These components are needed to obtain self-sustainability.

Execution By definition, a skill always performs the same principle operation, which, however, is adaptable through the parameters. The principle operation is defined as the execution, in which the device primitives are used to realize the operation.



Fig. 4.2: Conceptual model of a generic skill. The model includes the integral parts necessary for a skill operation. The figure is an updated pictorialization of the model presented in [Paper 1 | Pedersen et al., 2016], but with the same content and conceptual meaning.

Parameters The execution of a skill is adapted to a specific scenario through a set of parameters defined prior to the execution. Hereby, the skill becomes a generic, reusable module.

Preconditions Prior to the execution, the initial world state is assessed to determine if the skill can correctly perform the operation.

Continuous evaluation During the execution, the operation of the skill is monitored to react upon potential unexpected situations.

Postconditions After the execution, the output state following the execution is assessed to determine if the skill operation was successful.

Following the three-layered architecture in Figure 4.1, a robot task is composed of one or more skills concatenated to form a sequence of state changes, see Figure 4.3. Thus, a task itself represents a state change similar to a skill, however on a larger scale. The skill definition allows for structured and intuitive concatenation of skills to form robot tasks using the pre- and postconditions as combination rules.
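A minimal sketch of this skill model and of the concatenation of skills into a task is given below; continuous evaluation is omitted for brevity. The skill names, parameters, and state variables are illustrative assumptions, not the actual model from [Paper 1 | Pedersen et al., 2016]:

```python
# Sketch of the generic skill model: parameters adapt a fixed execution,
# a precondition gates it, and a postcondition verifies the state change.

class Skill:
    def __init__(self, name, pre, effect, post):
        self.name, self.pre, self.effect, self.post = name, pre, effect, post

    def execute(self, state, params):
        if not self.pre(state):                     # precondition check
            raise RuntimeError(f"{self.name}: precondition failed")
        state = self.effect(dict(state), params)    # execution
        if not self.post(state):                    # postcondition check
            raise RuntimeError(f"{self.name}: postcondition failed")
        return state

pick = Skill("pick",
             pre=lambda s: not s["holding"],
             effect=lambda s, p: {**s, "holding": p["object"]},
             post=lambda s: s["holding"] is not None)

place = Skill("place",
              pre=lambda s: s["holding"] is not None,
              effect=lambda s, p: {**s, "holding": None,
                                   "placed_at": p["location"]},
              post=lambda s: s["holding"] is None)

# A task is a concatenation of parameterized skills producing an overall
# state change; the pre-/postconditions act as combination rules.
state = {"holding": None, "placed_at": None}
for skill, params in [(pick, {"object": "rotor_cap"}),
                      (place, {"location": "assembly_fixture"})]:
    state = skill.execute(state, params)
print(state)  # {'holding': None, 'placed_at': 'assembly_fixture'}
```

Reversing the sequence would raise an error at the first skill, since the precondition of place is not satisfied by the initial state; this is the sense in which the conditions act as combination rules.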

4.2 Manual Robot Programming Using Skills

Prior to the execution of a skill-composed task, the parameters of each skill must be configured. The skill concept presented in [Paper 1 | Pedersen et al., 2016] is exploitable in both automatic and manual robot programming. In automatic robot programming, the sequencing and parameterization of skills is done by an automatic planning algorithm, as pursued in [Pedersen and Krüger, 2015; Rovida et al., 2014]. The advantage hereof is that the need for operator intervention in the robot programming is minimal. In manual robot programming,


Fig. 4.3: A robot task consists of one or more skills concatenated to produce an overall state change.

the skill selection and parameterization is performed by an operator. Thus, by keeping the "human-in-the-loop", the practical task knowledge and intuition of the operator is preserved. Due to the task-related abstraction of the skills, manual sequencing of skills does not require expert training, as opposed to traditional device-level robot programming. However, to fully allow non-experts to perform manual robot programming, new approaches for parameterization are needed, contrary to the traditional teach-pendant jog-mode. Paper 2: Skill Based Instruction of Collaborative Robots in Industrial Settings [Schou et al., 2016] presents a manual, task-level programming approach based on robot skills, which will be referred to as skill-based programming in this PhD thesis. The intention of skill-based programming is to allow shop floor operators to program industrial tasks without extensive training. Figure 4.4 shows the manual parameterization approach applied to the generic skill model from Figure 4.2. The skill-based programming approach is split into two parts: first, offline specification, and secondly, online teaching. Offline specification denotes skill sequencing and partial parameterization of the skills. Skill parameters set during the offline specification are non-spatial parameters such as velocities, thresholds, and object information. The specification part can be performed through a GUI and should not require an online connection to the robot. The remaining parameterization is conducted during the online teaching part using structured, direct HRI. The parameters, typically including spatial and force parameters, are obtained through a combination of haptic inputs and kinesthetic teaching. A key characteristic of the online teaching is that the state change effectuated is the same as during execution. For example, the outcome of teaching a pick skill is the object being held by the gripper, equivalent to the outcome of executing the skill.
Consequently, the sequence of skills in a robot task can be taught consecutively, and hereby the online teaching of a robot task actually solves the task. Conveniently, task teaching and task execution can be combined when, e.g., extending an existing task or updating the online parameters of a skill.
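The split between offline specification and online teaching can be sketched as a two-stage parameterization, where the non-spatial parameters are fixed first and the spatial parameters are filled in later through demonstration at the robot. All parameter names below are illustrative assumptions, not the actual SBS parameter set:

```python
# Sketch of two-stage manual parameterization of a pick skill:
# offline specification fixes the non-spatial parameters, online
# teaching later supplies the poses captured by guiding the arm by hand.

def make_pick_skill(velocity, gripping_force):
    """Offline specification: fix the non-spatial parameters."""
    params = {"velocity": velocity, "force": gripping_force,
              "approach_pose": None, "grasp_pose": None}

    def teach(approach_pose, grasp_pose):
        """Online teaching: fill in the spatial parameters."""
        params["approach_pose"] = approach_pose
        params["grasp_pose"] = grasp_pose
        return params

    return teach

# In the GUI, away from the robot:
teach_pick = make_pick_skill(velocity=0.2, gripping_force=15.0)

# Later, at the robot, the operator demonstrates the two poses:
print(teach_pick(approach_pose=(0.4, 0.1, 0.3), grasp_pose=(0.4, 0.1, 0.12)))
```

The closure mirrors the fact that a partly parameterized skill is a valid intermediate artifact: it can be stored after offline specification and completed later during online teaching.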


Fig. 4.4: Conceptual model of a generic skill with manual parameterization which consists of an offline specification part and an online teaching part. [Paper 2 | Schou et al., 2016]

4.2.1 Skill Based System

Using skills for task-level programming requires a system to manage the sequencing, parameterization, and execution. During this PhD project, two systems using skills for task-level programming have been designed in cooperation with other AAU researchers: Skill Based System (SBS) [Paper 2 | Schou et al., 2016] and Skill-based Robot Operating System (SkiROS) [Rovida et al., 2014]. SkiROS is designed primarily for automatic robot programming using a semantic world model as input to the sequencing and parameterization of skills [Holz et al., 2015]. Contrary to SkiROS, SBS is designed for manual robot programming based on the skill-based programming approach shown in Figure 4.4. SBS is a complete robot operating tool supporting programming and execution of tasks, robot and workstation configuration, and external task scheduling.

Architecture and Implementation SBS is implemented in C++ and is based on a distributed architecture using ROS as the communication middle-layer facilitating the interaction between nodes. The software architecture of SBS is shown in Figure 4.5. The central control contains the core functionalities of SBS. A finite state machine keeps track of the current system state and the allowable transitions. A task programming engine manages the online teaching phase and thereby the sequential parameterization of each skill in a task. A task execution engine controls the sequential execution of the skills contained in a task. This includes instantiating the skills based on the parameters obtained during task programming.
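A finite state machine of the kind used in the central control can be sketched as a current state plus a table of allowable transitions; events not listed for the current state are rejected. The state and event names below are invented for illustration and do not correspond to SBS's actual state set:

```python
# Sketch of a finite state machine tracking the system state and its
# allowable transitions; firing a disallowed event raises an error.

TRANSITIONS = {
    ("idle",        "program_task"): "programming",
    ("programming", "task_saved"):   "idle",
    ("idle",        "run_task"):     "executing",
    ("executing",   "task_done"):    "idle",
}

class StateMachine:
    def __init__(self, initial="idle"):
        self.state = initial

    def fire(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event '{event}' not allowed in state "
                             f"'{self.state}'")
        self.state = TRANSITIONS[key]
        return self.state

sm = StateMachine()
print(sm.fire("program_task"))  # programming
print(sm.fire("task_saved"))    # idle
```

Keeping the transitions in a single table makes the allowable interactions explicit, which is what lets the rest of the system (GUI, engines) query what is currently permitted.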


Fig. 4.5: The software architecture of SBS depicting the main components. Superjacent components are classes and libraries bound into a single node. Components separated by a line are separate nodes. Solid lines indicate ROS communication in the form of topics, services, or actions. Dashed lines indicate communication with equipment external to the PC running SBS. The device manager is not part of the central SBS node, but is necessary for the use of SBS. The device manager has been developed as part of research on hardware reconfiguration in this PhD project, see Chapter 5. [Paper 2 | Schou et al., 2016]


Each of the three levels in the skill-based architecture depicted in Figure 4.1 has a corresponding manager. The key purpose of the managers is to provide a well-defined, structured interface between the individual levels and thereby a clear separation of the levels. The device manager keeps track of connected devices and provides a standardized interface to the functions provided by the devices. The service of the device manager is necessary for the operation of SBS, albeit this need is not unique to SBS. As a result, the device manager has been implemented as a separate node. The device manager used with SBS has been developed as part of research on hardware reconfiguration in this PhD project, see Chapter 5. The user interface of SBS is implemented as a set of classes decoupled from the core functionality. A native GUI is used as the primary user interface. In addition, an external UI manager provides the means for creating external UIs.

Human-Robot Interaction The GUI of SBS is designed with focus on ease-of-use and simplicity to provide an intuitive tool for shop floor operators. Four of the key menus in SBS are depicted in Figure 4.6. The main menu provides access to task programming and task execution. In addition to these two primary functionalities of the system, manual control of the robot, robot and workstation setup, device configuration, and calibration are available from the menu bar. A new task is created in the task specification menu. As skills are added to the task, the system predicts the symbolic state change of each skill and adapts the available skills accordingly. Thus, infeasible skill combinations, e.g. two consecutive pick skills, are avoided. When adding a skill, the offline parameters must be specified. The online teaching is commenced after the sequencing and offline parameterization of skills. The online teaching process of a pick skill is illustrated in Figure 4.7.
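The adaptation of the available skills can be sketched as follows: the symbolic state after the partial skill sequence is predicted, and only skills whose preconditions hold in that state are offered to the operator. The two-skill catalog and the single state variable below are illustrative assumptions, not SBS's actual skill set:

```python
# Sketch of filtering the skills offered next in a programming GUI:
# predict the symbolic state after the current sequence, then offer
# only skills whose precondition holds (e.g. no second consecutive pick).

SKILLS = {
    "pick":  {"pre": lambda s: not s["holding"], "effect": {"holding": True}},
    "place": {"pre": lambda s: s["holding"],     "effect": {"holding": False}},
}

def available_skills(sequence):
    state = {"holding": False}
    for name in sequence:               # predict state after the sequence
        state.update(SKILLS[name]["effect"])
    return [n for n, s in SKILLS.items() if s["pre"](state)]

print(available_skills([]))         # ['pick']
print(available_skills(["pick"]))   # ['place']
```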
During online teaching, the GUI provides the operator with instructions and feedback, and the operator's input to the system is given through direct interaction with the robot arm. The execution of tasks in SBS is prepared in the execution setup menu, which allows several tasks to be queued for execution. During the execution phase, the GUI displays a control panel where the operator can monitor and control the task execution. SBS has been used actively in four international and national research projects with various research objectives. The application of SBS in each project along with the contributions to the development of SBS is described in [Paper 2 | Schou et al., 2016]. In connection with these projects, SBS has been used in three experiments deploying collaborative robots in real-world production facilities. Two of these experiments are covered in Section 4.5. In total, 12 real industrial tasks have been programmed and executed using SBS.

Fig. 4.6: Four of the key menus in the GUI of SBS. (a) The main menu, from where task programming and task execution are accessible. (b) The execution setup menu, where tasks are scheduled for execution. (c) The task specification menu, where the skill sequencing is performed; new skills are added and parameterized to form the task, and once completed, the online teaching is commenced. (d) The skill parameterization menu, from where the offline parameters of a skill are specified.


(a) Teaching process and GUI instructions.

(b) Physical interaction.

Fig. 4.7: The online teaching of a simple pick skill. [Paper 2 | Schou et al., 2016]


Chapter 4. Skill-Based Programming

4.3 Usability of Skill-Based Programming

In Paper 3: Human-Robot Interface for Instructing Industrial Tasks using Kinesthetic Teaching [Schou et al., 2013], an assessment of the usability and training-level required in using the skill-based programming approach through SBS is presented. A user study with nine participants of varying robotics experience has been conducted. In the study, each participant independently solved two industrial handling tasks. The first task was a simple pick and place task in which an aluminum brick from the Cranfield benchmark [Collins et al., 1985] had to be moved from a fixture to a surface area as shown in Figure 4.8(a). The second task was a more complex handling task in which a rotor cap had to be moved from a transportation fixture to an assembly fixture. This process included turning the object upside-down and performing a peg-in-hole operation, see Figure 4.8(b).

(a) Task 1: Pick and place

(b) Task 2: Peg in hole

Fig. 4.8: Setup of the two tasks programmed by each participant in the user study of SBS presented in [Paper 3 | Schou et al., 2013]. Task 1 was a simple handling task where the square brick was moved from the fixture to the red circle on the surface. In task 2, one of the cylinders (rotor caps) was moved from the fixture on the left and placed upside-down into the assembly fixture on the right.

The nine participants were divided into three groups based on their prior robotics knowledge: group 1 had no significant prior robotics experience or knowledge, group 2 had experience in robot programming, and group 3 contained experts in robot programming with knowledge of the skill-based programming approach. Regardless of their prerequisites, each participant was given a 15-minute introduction to the skill-based programming method, the HRI of SBS, kinesthetic teaching, and the two tasks. After the introduction, the participant independently programmed the two tasks as separate scenarios. During programming of each task, the number of requests for help, the number of errors, and the time spent were recorded. The mean programming times and the number of errors and help requests for each group are shown in Table 4.1.

In connection with the EU FP7 project TAPAS [2011], the user study presented in [Paper 3 | Schou et al., 2013] has been repeated as part of a larger


Task        Novice              Intermediate        Expert
Simple      320 (193/126) 2.3   280 (150/130) 0.3   157 (87/70) 0.0
Advanced    749 (288/461) 4.7   581 (185/396) 1.3   280 (83/196) 0.3

Table 4.1: Results from the user study presented in [Paper 3 | Schou et al., 2013]. The upper numbers for each group are the mean programming times in seconds in the format: total (offline/online). The lower number for each group is the mean number of errors and help requests combined.

HRI study conducted at the KUKA Laboratories GmbH facilities. In the study at KUKA, nine participants with various engineering backgrounds and significant robotics experience programmed the same two tasks as in the previous experiment. In comparison to the three user groups from the first user study, the nine engineers would be categorized at the top of the intermediate group due to their technical education and daily robotics experience, but unfamiliarity with skill-based programming and SBS. The results from the user study at KUKA are presented in Table 4.2.

Task        KUKA Engineer
Simple      191 (95/96)   1.0
Advanced    504 (169/335) 2.6

Table 4.2: Results from the user study at KUKA Laboratories GmbH. The upper numbers are the mean programming times in seconds in the format: total (offline/online). The lower number is the mean number of errors and help requests combined.

As expected, the nine participants from the study at KUKA fell in between the intermediate group and the expert group from the first user study in terms of programming times. However, in terms of errors and help requests, the KUKA engineers did not perform as well as the intermediate group from the first study. This was due to the KUKA engineers generally requesting more confirmation of their actions (i.e., help requests), not errors caused during programming. The reason for this might lie in the premises of the experiment being different at KUKA Laboratories GmbH, where each participant tested a number of programming systems.

A key result of the two user studies was the successful programming of both tasks by all 18 participants after only a 15-minute introduction. This indicates that skill-based programming with manual parameterization is a feasible approach to significantly lowering the training-level required in programming


robots. It also indicates that skill-based programming is still a viable approach for experienced robot programmers. Expectedly, the user groups did score according to their respective robotics experience levels. Surprisingly, however, the performance gaps between the groups were smaller than expected. Of the total number of errors and help requests, approximately half were attributed to difficulties in predicting the necessary steps in the task and assessing the current situation. One cause for this was that the participants had diverse educational and professional backgrounds. Thus, only few of them had the practical task knowledge that would be expected of a shop floor operator. Furthermore, none of the participants were previously familiar with the two tasks at hand. The remaining half of the total number of errors and help requests was attributed to inadequate instructions provided by the GUI during online teaching. This has led to a significant re-design of the instructions, moving from text-based instructions to more graphical and illustrative instructions.

Extending the usability assessment presented in [Paper 3 | Schou et al., 2013] and the study at KUKA Laboratories GmbH, five other studies of SBS with regard to usability and intuitiveness have been conducted. As summed up in [Paper 2 | Schou et al., 2016], more than 70 participants with varying robotics experience have tested SBS or parts of it. Although the other five user studies each focus on specific use cases or aspects of SBS, they all point towards the same conclusion as provided in [Paper 3 | Schou et al., 2013] and above: skill-based programming significantly lowers the training-level required to program industrial tasks on a collaborative robot, and hereby enables shop floor operators to perform the robot programming without extensive training.

4.4 Realizing Skills

The robot skills, as defined in Section 4.1, constitute self-sustained, independent control modules with well-defined input and output. Thus, as long as skills conform to the conceptual definition and implementation structure presented, no restrictions are put on the operation of a skill. Intentionally, this allows a skill to encapsulate behaviors on any desirable level in its operation. Independent of the complexity of the operation exerted by the skill, the skill should still be available for skill-based programming by non-experts. The lack of restrictions also enables the development of both generic and application specific skills. Although it is encouraged to create skills as generic as possible to facilitate reuse, it is in some scenarios necessary to develop application specific skills; e.g., as in the experiments described later in Section 4.5. Table 4.3 presents the skills currently implemented in the skill library of SBS. Currently, the library contains a total of 16 skills of which 14 are generic and two are application specific. Ten of the skills have been realized and implemented in connection with this PhD study. With the set of 16 skills, 12 real


Skill type             Skill variant
Pick                   PickSimple
                       Pick2DVision
                       Pick3DVision
                       PickPattern (de-palletize)
                       PickFeeder
Place                  PlaceSimple
                       PlaceOnSurface
                       PlaceInside
                       PegInHole
                       PlacePattern (palletize)
Vision                 RecognizeObject
                       InspectQuality
General Manipulation   RotateObject
Process                WeldStud
Application Specific   OperateSQPress
                       InspectQualityExternal

Table 4.3: List of skills implemented in the skill library of SBS. Ten of the 16 skills have been realized in connection with this PhD project.

industrial tasks representing assembly, machine tending, and logistics have been programmed and executed. Six of the 12 tasks are shown in Figure 4.9. All 16 skills have been developed based on the skill model presented in Figure 4.4 and thus include manual parameterization. Furthermore, the implementation of all 16 skills follows the same software template, see [Paper 2 | Schou et al., 2016] for details. The details of each skill will not be presented in this PhD thesis as that would be too extensive. Instead, this section describes the parameterization, operation, and parameters of three skills of varying complexity: a simple pick skill, a force-controlled peg-in-hole skill, and an object recognition skill using computer vision.
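The structure shared by the skills (explicit parameters, a teaching routine, and pre-/postcondition checks around the execution, cf. Figures 4.11, 4.13, and 4.14) can be sketched as a small interface. All names below are illustrative assumptions; the actual SBS software template is described in [Paper 2 | Schou et al., 2016].

```python
from abc import ABC, abstractmethod

class Skill(ABC):
    """Sketch of a parametric robot skill: a self-contained control
    module with explicit parameters and pre-/postcondition checks.
    (Hypothetical names; not the actual SBS template.)"""

    def __init__(self):
        self.params = {}  # filled during specification and online teaching

    @abstractmethod
    def teach(self):
        """Teaching function: acquire the online parameters, e.g. by
        kinesthetic teaching, in the same order as the execution."""

    @abstractmethod
    def precondition(self) -> bool: ...

    @abstractmethod
    def execute(self) -> bool: ...

    @abstractmethod
    def postcondition(self) -> bool: ...

    def run(self) -> bool:
        # A skill only executes if its precondition holds, and reports
        # success only if the postcondition holds afterwards.
        return self.precondition() and self.execute() and self.postcondition()

def run_task(skills):
    """A task is a concatenated sequence of parameterized skills."""
    return all(skill.run() for skill in skills)
```

A task then fails as soon as one skill's precondition, execution, or postcondition fails, which mirrors the checks shown in the skill figures.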

4.4.1 The Simple Pick Skill

The operation of the simple pick skill is picking up an object from a predefined position. Thus, it represents the simplest version of a pick-up operation. The operation of the pick skill is illustrated in Figure 4.10. The manual teaching procedure, the parameters, and the operation of the pick skill are shown in Figure 4.11. During parameterization, only the steps marked in dark blue in Figure 4.11 are taught by the user; the light gray steps are performed automatically by the robot. Thus, to parameterize the pick skill, the user only has to teach the object position, the approach point, and the depart point.


(a) Rotor assembly

(b) Shaft assembly

(c) Rotor cap logistics

(d) FE-socket assembly

(e) Stud welding

(f) Quality inspection

Fig. 4.9: Examples of industrial tasks programmed and executed using the library of 16 skills.

(a) Approach

(b) Object position

(c) Object grasped

(d) Depart

Fig. 4.10: Visualization of the operation performed by the simple pick skill.


Fig. 4.11: The manual teaching procedure, the parameters, and the operation of the simple pick skill. The velocity parameter is used for all motions in the execution. The parameterization steps marked in dark blue are taught by the user, and the steps marked in light gray are performed automatically by the robot at low speed.
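The execution phase of the simple pick skill can be sketched as follows, assuming a simple motion and gripper interface; `robot`, `gripper`, and the parameter names are hypothetical stand-ins, not the actual SBS interfaces.

```python
def _add(a, b):
    # Small helper: element-wise addition of 3-D points/vectors.
    return tuple(x + y for x, y in zip(a, b))

def execute_pick(robot, gripper, p):
    """Execution phase of the simple pick skill (cf. Fig. 4.11).
    `p` holds the taught parameters: object position, approach and
    depart vectors, and the motion velocity."""
    approach = _add(p["object_position"], p["approach_vector"])
    depart = _add(p["object_position"], p["depart_vector"])
    robot.move_to(approach, p["velocity"])              # 1. move to approach point
    robot.move_to(p["object_position"], p["velocity"])  # 2. move to object position
    gripper.close()                                     # 3. grasp the object
    robot.move_to(depart, p["velocity"])                # 4. move to depart point
```

The same four-step order is followed during online teaching, so the user experiences the parameterization as a walk-through of the execution.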

4.4.2 The Peg-in-Hole Skill

The peg-in-hole skill is a more advanced skill compared to the simple pick skill as it relies on force sensing to achieve its goal. The operation of the peg-in-hole skill is visualized in Figure 4.12. First, the robot will move to the approach

(a) Approach

(b) Above hole

(c) Aligned with hole

(d) Inserted

Fig. 4.12: Visualization of the operation performed by the peg-in-hole skill.

point. Secondly, the robot will move to the rotated position above the hole. During this movement, force measurements are used to detect contact with the hole. Thirdly, the robot will rotate around the object tip to align the orientation of the object with the hole. During this movement, the object tip remains inside


the hole as a "guide" for the rotation. Fourthly, the object is inserted into the hole. Again, force measurements are used to detect the bottom of the hole. Lastly, albeit not illustrated in Figure 4.12, the robot releases the object and moves to the depart point. If a robot arm with adjustable compliance is used (e.g., a KUKA LWR), the stiffness of the robot is reduced when rotating around the object tip and when inserting the object. The manual teaching, the parameters, and the operation of the peg-in-hole skill are described in Figure 4.13. During online teaching, the user only teaches the position above the hole, the inserted position, the approach point, and the depart point. The object length and the orientation of the hole are automatically determined by the skill.

Fig. 4.13: The manual teaching procedure, the parameters, and the operation of the peg-in-hole skill. The velocity parameter is used for all motions in the execution. The parameterization steps marked in dark blue are taught by the user, and the steps marked in light gray are performed automatically by the robot at low speed.
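The force-based contact detection used both when moving towards the hole and when inserting the object can be sketched as a guarded move: step along a direction until the measured force exceeds a threshold. The interface names and threshold values below are assumptions for illustration only.

```python
def guarded_move(robot, force_sensor, direction, step, f_contact, max_travel):
    """Move in small increments along `direction` until the measured
    force magnitude exceeds `f_contact` (contact detected) or until
    `max_travel` has been covered. Sketch only; `robot` and
    `force_sensor` are hypothetical driver objects."""
    travelled = 0.0
    while travelled < max_travel:
        if force_sensor.magnitude() > f_contact:
            return True  # contact: e.g. hole rim or hole bottom reached
        robot.step_along(direction, step)
        travelled += step
    return False  # no contact within the allowed travel: likely an error
```

In the skill, a successful guarded move towards the hole bottom marks the end of the insertion; failing to detect contact within the expected travel would violate the postcondition check.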

4.4.3 Object Recognition Skill

As no restriction is put on the operation of a skill, it can encapsulate advanced algorithms, special techniques, and complex conditional behaviors in its operation. Thus, the skill developer can build such specialist techniques and knowledge into the operation of a skill which afterwards can be used for skill-based programming by a shop floor operator. Paper 4: Using a Flexible

Skill-Based Approach to Recognize Objects in Industrial Scenarios [Andersen et al., 2016] demonstrates how advanced computer vision and machine learning techniques can be encapsulated in a skill for object recognition. The skill uses the classification method known as bag-of-words (BoW) to classify an object based on a labelled set of known objects. The method requires numerous images of each object to be captured, labelled, and used as reference. During a training phase, SIFT features [Lowe, 1999] are extracted from each of the training images and used to create a ”vocabulary” of common features amongst the objects. These common features are then related to each object class through a machine learning technique. When a new image is classified, SIFT features are extracted for this image as well. By use of machine learning, the features of the new object are matched to the features of each class and a similarity score is derived for each class. As BoW is merely a procedural method, the core performance factors are the machine learning method used and the image sets used. In [Paper 4 | Andersen et al., 2016] a comparison of four acknowledged machine learning algorithms is made prior to skill implementation. Although several candidates score similarly, the support vector machine (SVM) algorithm is chosen along with the STAR feature detector (or CenSurE) [Agrawal et al., 2008]. The implementation of the BoW approach is done using the OpenCV software library. The implementation of the object recognition method into a skill includes both an execution and a teaching routine. Thus, the skill is enabled for the skill-based programming presented in Section 4.2. The procedure of both the parameterization and the operation of the recognition skill are shown in Figure 4.14. During teaching, the user only has to show the object location.

Fig. 4.14: Parameterization procedure, operation procedure, and parameters of the recognition skill described in [Paper 4 | Andersen et al., 2016]. The parameterization steps marked in dark blue are taught by the user, and the steps marked in light gray are performed automatically by the robot at low speed.
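The central quantization step of the BoW method can be illustrated in plain Python. The sketch below substitutes toy descriptors and a nearest-histogram match for the STAR/SIFT features and the SVM classifier used in the actual skill implementation; it only shows how local descriptors are mapped onto a vocabulary histogram and matched to a class.

```python
def nearest(descriptor, vocabulary):
    """Index of the visual word (cluster centre) closest to a descriptor."""
    return min(range(len(vocabulary)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(descriptor, vocabulary[i])))

def bow_histogram(descriptors, vocabulary):
    """Quantize an image's local feature descriptors against the
    vocabulary and return a normalized bag-of-words histogram."""
    hist = [0.0] * len(vocabulary)
    for d in descriptors:
        hist[nearest(d, vocabulary)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def classify(descriptors, vocabulary, class_histograms):
    """Nearest-histogram classification; a toy stand-in for the SVM
    trained on the labelled reference images in the actual skill."""
    h = bow_histogram(descriptors, vocabulary)
    return min(class_histograms,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(h, class_histograms[c])))
```

In the real implementation, the vocabulary is clustered from training-image features and the per-class decision is learned by the SVM via OpenCV; the histogram construction, however, follows the same pattern.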

As described, the BoW method needs a set of labelled images as a reference for training the machine learning algorithm. These images are captured


during online parameterization using a tool-mounted camera. In the specification part, the user enters the number of classes and the assigned label for each class. In the subsequent online teaching, the operator must place each of the objects on a chosen target location one by one. For each object, the skill will automatically move the robot arm to a series of viewpoints located on a sphere around the object. Hereby, images of the object from multiple angles are obtained. Following the image capturing for each object, the system automatically extracts SIFT features and trains the classifier.

During the skill execution, the robot moves the camera to a number of viewpoints positioned on a sphere around the object to be classified. Initially, three images are captured and classified. If all three yield the same class, this concludes the output of the skill. Otherwise, more images are captured and classified in a repeated process until 80% of the total images yield the same class or an upper threshold on the number of images is reached. If 80% or more yield the same class, the classification is considered unambiguous and the skill returns a success.

The usability of the recognition skill has been tested in a user study with 20 participants. The user study was conducted using the skill-based programming system SBS presented in Section 4.2.1. In the test, each participant programmed a simple handling task consisting of the following skill sequence:

1. Recognition skill
2. Pick skill
3. Place on surface skill

The participants were divided into three groups based on their robotics experience level. The results showed that after a 10-minute introduction all participants were able to instruct the skill sequence including the recognition skill. After participating, each user was asked to fill out an ASQ questionnaire [Lewis, 1991] which was used to evaluate user satisfaction. The results showed that all three user groups were in general satisfied with the human-robot interaction during programming of the scenario, and hereby with programming of the recognition skill.
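The repeated-classification stopping rule used during skill execution can be sketched as follows; `capture_and_classify` and the upper image limit are assumptions introduced for illustration.

```python
def classify_with_voting(capture_and_classify, min_images=3,
                         agreement=0.8, max_images=15):
    """Stopping rule of the recognition skill: capture a few images,
    stop early on unanimous agreement, otherwise keep capturing until
    a fraction `agreement` of all images yields the same class or the
    (assumed) upper image limit is reached."""
    votes = [capture_and_classify() for _ in range(min_images)]
    if len(set(votes)) == 1:
        return votes[0]  # all initial images agree: unambiguous
    while len(votes) < max_images:
        votes.append(capture_and_classify())
        winner = max(set(votes), key=votes.count)
        if votes.count(winner) / len(votes) >= agreement:
            return winner  # 80% (or more) of all images agree
    return None  # still ambiguous: the skill reports a failure
```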

4.5 Industrial Application of Skill-Based Programming

Creating an intuitive and quick robot programming approach based on skills has been a central goal of the research on robot programming in this PhD project. However, it is crucial that the skill-based programming approach is suitable for use in industrial settings. To assess the industrial applicability of skills, skill-based programming, and SBS, three real world experiments have


been carried out. This section describes two of the real world experiments which have resulted in two published papers, [Paper 5 | Madsen et al., 2015] and [Paper 6 | Bøgh et al., 2014].

4.5.1 The Grundfos Experiment

Paper 5: Integration of Mobile Manipulators in an Industrial Production [Madsen et al., 2015] describes the results of an experiment where two Little Helper robots (see Section 2.1.2) have been deployed in a manufacturing environment at the Danish pump manufacturer Grundfos A/S. The experiment was carried out in connection with the EU FP7 project TAPAS [2011] and spanned a total of four days including hardware setup, workstation adaptation, navigation setup, robot programming, and running production. The scenario used in the experiment was production of rotors; the rotating part of the electrical motor driving the pump. The rotor production includes assembly, logistics, and machine tending tasks. It hereby covers the main application areas for AIMMs as defined by Bøgh et al. [2012a]. Because the rotor production is normally performed manually and has a low, varying demand, it is well-suited for mobile collaborative robots, which provide quick and intuitive changeovers between tasks. The experiment can thus be considered a typical scenario for AIMMs. The primary task in the scenario is the assembly of the rotor from 11 individual components inside a press-station. Prior to the assembly, one of the components must be fetched from a conveyor belt at an automated machine. Following the assembly, the finished goods are brought to a warehouse. In the experiment, the two Little Helper robots used were Little Helper 2 (referred to as LH2 in [Paper 5 | Madsen et al., 2015]) and Little Helper 3 (referred to as LH1 in [Paper 5 | Madsen et al., 2015]), see Figure 2.2. To manage the scenario, a mission planner kept track of the progress of each robot and scheduled the tasks accordingly. Little Helper 3 was assigned the task of rotor assembly at the press station. Figure 4.15(a) shows the 11 components and the finished rotor, and Figure 4.15(b) shows Little Helper 3 in operation at the press station. During the assembly, all the components were inserted into a fixture inside the press.

Hereafter, the press was activated and the hydraulic piston pressed the components together to produce the finished rotor. Following the press operation, Little Helper 3 removed the rotor from the press and placed it in a small load carrier (SLC). Prior to the assembly task, Little Helper 3 fetched the rotor cap from a conveyor belt, see Figure 4.16(a). Should no rotor caps be available, Little Helper 3 would go to a warehouse and collect rotor caps from there. Little Helper 2 performed the finished goods logistics and was therefore configured for SLC handling. Once the SLC had been filled up with rotors from the assembly, Little Helper 2 picked up the SLC and brought it to a



(a) Components of the assembly

(b) Assembly station

(c) Assembly process

Fig. 4.15: (a) The assembly task consists of 11 components, 1x rotor shaft, 1x pressure ring, 8x magnets, and 1x rotor cap. From these, the rotor is assembled. (b) Little Helper 3 calibrating to the workstation using a haptic calibration method. (c) Little Helper 3 performing the assembly task at the press station.

warehouse, see Figure 4.16(b). From there, it brought an empty SLC back to the assembly station. The two tasks of Little Helper 3 (fetching rotor caps and rotor assembly) were programmed using skill-based programming through SBS. In the two tasks, a total of nine generic skills were used: three versions of a pick skill, four versions of a place skill, a rotate skill, and a quality inspection skill. (This count follows the current skill library; [Paper 5 | Madsen et al., 2015] states that 13 generic skills were used, but since the experiment the PickFromTrolley and PickFromPlatform skills have been merged into one, and the two calibration skills are no longer defined as skills.) The nine generic skills were used to produce a task sequence of 118 steps for the assembly and eight steps for the rotor cap collection. Complementary to the nine generic skills, an application specific skill was developed to operate a safety gate on the press, which also served to activate the press operation. Another application specific skill was developed to validate the assembly process using an external vision system. The skill-based programming of the assembly and rotor cap collection was carried out by two experts in less than one working day (8 hours). As part of the experiment, a four-hour consecutive production test was carried out. During this test, both Little Helper robots operated autonomously under the command of the mission planner. In total, 26 errors resulting in 58 minutes of downtime were recorded. The rotor cap collection had a cycle time of approximately 5 minutes, and the assembly had a cycle time of approximately 10 minutes. In contrast, a human worker can assemble the rotor in 30 seconds. The significant gap between the cycle time of the human and the cycle time of the robot was primarily attributed to the use of a single-armed robot and the extensive quality inspection used to ensure final product quality. To


(a) Rotor cap collection

(b) SLC warehouse

Fig. 4.16: (a) Little Helper 3 picking up rotor caps from the conveyor belt and placing them on a transport-fixture on the robot platform. (b) Little Helper 2 transporting SLCs with finished rotors to the warehouse.

make the task economically feasible, the cycle time would therefore have to be decreased. A key result of the experiment was the successful application of collaborative robots in a real world production scenario designed for human workers. Furthermore, the successful programming and execution of both tasks based on the proposed skill-based approach through SBS have demonstrated its application and readiness in real world industrial praxis.

4.5.2 The Grundfos Experiment Revisited

In conclusion of the TAPAS project [TAPAS, 2011], the rotor production scenario was reused in a new experiment described in Paper 6: Integration and Assessment of Multiple Mobile Manipulators in a Real-World Industrial Production Facility [Bøgh et al., 2014]. The new experiment was an extension of the one from [Paper 5 | Madsen et al., 2015] with participation of all the research partners in TAPAS. The time frame of the experiment was five days, including hardware setup, workstation adaptation, safety precautions, navigation setup, robot programming, and running production. Two mobile collaborative robots (AIMMs) were used in the experiment; the Little Helper 3 from Aalborg University and the omniRob from KUKA GmbH (a commercial version of the omniRob is today marketed as the KMR iiwa), see Figure 4.17. The overall scenario of the experiment remained the same as in the previous experiment. However, an additional warehouse was introduced from where the rotor shafts (see Figure 4.15(a)) were collected. In the previous experiment, they were brought to the assembly station by an operator. The omniRob substituted for the Little Helper 2 robot used in [Paper 5 | Madsen et al., 2015]


(a) Little Helper 3

(b) omniRob

Fig. 4.17: (a) Little Helper 3 performing the rotor assembly at the hydraulic press. (b) KUKA omniRob transporting SLCs with finished rotors to the warehouse. [Paper 6 | Bøgh et al., 2014]

and thus performed the logistic task of transporting SLCs between the assembly station and the warehouse. Similar to the previous experiment, the assembly process was carried out by Little Helper 3. However, the task of fetching the rotor caps was reassigned to the omniRob. Hereby, the omniRob conducted all the logistic tasks including the new task of fetching rotor shafts. This task designation allowed for a better time-wise distribution between the two robots and thereby provided an optimized scenario. Programming of the assembly task on Little Helper 3 was carried out using skill-based programming and SBS similar to the previous experiment [Paper 5 | Madsen et al., 2015]. However, SBS was in this experiment extended with a motion planner from the partner Convergent Information Technologies GmbH. The motion planner was implemented as a service, which constitutes a higher-level device primitive provided to the skill layer. The concept of services is described in [Paper 2 | Schou et al., 2016]. As a result of the integration of the motion planner, the robot programming time was reduced by almost one hour. Furthermore, the application of the motion planner resulted in an increased robustness of the robot operation. As part of the experiment, an eight-hour consecutive production test was carried out. During this test, both mobile manipulators operated autonomously under the command of the mission planner. In total, 22 errors resulting in 44 minutes of downtime were recorded for Little Helper 3. The cycle time for the assembly process was on average 11 minutes; thus, similar to that of the previous experiment. Equivalent to the conclusion from the previous experiment, this cycle time renders the scenario economically unfeasible. However, once again,


the key result was the successful application of mobile collaborative robots in a real world production setting, complemented by the successful exploitation of skill-based programming. To promote the results from TAPAS, the scenario from [Paper 6 | Bøgh et al., 2014] was showcased at the AUTOMATICA fair in 2014. A replica of the production scenario from Grundfos was built and used to perform a live, continuous "production" demonstration at the fair.

4.6 Conclusion

In this chapter, research on robot programming for collaborative industrial robots has been presented. Based on [Paper 1 | Pedersen et al., 2016], the concept of robot skills has been summarized. Skills constitute generic, task-related abilities of the robot which serve as parametric control modules that can be concatenated into a robot task. One approach to sequencing and parameterizing skills is to do it manually through a GUI and direct interaction with the robot via kinesthetic teaching, as described in [Paper 2 | Schou et al., 2016]. A central benefit of skills is that the task programmer using the skills does not need specialist training in order to use skills performing complex operations. Thus, the skill developer can encapsulate expert knowledge and techniques into a skill, which afterwards can be used by ordinary users. Andersen et al. [2016, Paper 4] demonstrate how advanced computer-vision knowledge can be encapsulated in a skill and made available to robotics novices.

To examine the usability and training-level required to successfully program industrial tasks with the developed skill-based programming tool, two user studies with participants of varying robotics experience have been conducted; one of which is summarized based on [Paper 3 | Schou et al., 2013]. The results of the user studies have shown that robotics novices are able to program industrial handling tasks after a 15-minute introduction to the system. In extension of the usability assessments of the skill-based programming, the applicability of SBS and the skill-based approach has been tested in real world industrial settings in three experiments; two of which are published in [Paper 5 | Madsen et al., 2015] and [Paper 6 | Bøgh et al., 2014]. In both experiments, several industrial tasks including a complex assembly task are programmed using SBS and the skill-based programming method. Afterwards, the programmed tasks are used in four-hour and eight-hour consecutive production scenarios.
The results from both experiments clearly demonstrate the applicability of the skill-based programming approach in industrial praxis.



4.6.1 Evaluation of Research Objectives

This chapter has presented research related to Main Objective 1 - Skill-Based Robot Programming. This section evaluates each of the four research objectives in main objective 1 based on the research presented in this chapter.

Research objective 1.1 - Investigate how robot skills can be sequenced and parameterized manually using kinesthetic teaching

A method for manual robot programming based on robot skills has been proposed as part of this PhD project. The method introduces a teaching function as part of each skill which controls the parameterization of that particular skill. The manual parameterization is defined as parallel to the execution; hence, the programming effectuates the same order of state changes as the execution. The proposed method exploits the benefits of kinesthetic teaching to intuitively instruct online parameters and combines it with an offline part during which skills are sequenced and partly parameterized.

Research objective 1.2 - Develop a task-level programming tool using the manual sequencing and parameterization of skills

To effectively exploit the method for manual sequencing and parameterization of skills, a holistic robot operating tool called SBS has been developed. SBS embeds the skill-based programming method and provides the necessary programming and execution engines to operate an industrial collaborative robot.

Research objective 1.3 - Investigate how engineering expertise can be encapsulated in robot skills and thus be intuitive to use for non-experts

A central part of the skill concept is the encapsulation of expert domain knowledge. That is, the skill developer can embed expert knowledge in the skills without the need for expert knowledge to use the skills for skill-based programming. Over the course of this PhD project, 10 skills have been realized, giving a total of 16 skills available. Two examples of encapsulation and parameterization of expert knowledge are given in Section 4.4.
The first example describes the operation, parameters, and parameterization procedure of a force-controlled peg-in-hole skill. The second example describes the development of an object recognition skill and exemplifies how advanced computer vision and machine learning algorithms can be encapsulated in a skill and subsequently used by computer vision novices for skill-based programming.
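The pairing of execution and teaching can be made concrete with a short Python sketch. The code below is illustrative only, not the actual SBS implementation: all names (Skill, PickSkill, DummyRobot) are hypothetical, but the structure mirrors the method described above, where each skill carries a teaching function and teaching effectuates the same order of state changes as execution.

```python
# Illustrative sketch only: each skill pairs an execute() with a teach()
# function, and teaching follows the same order of state changes as
# execution. Names are hypothetical, not taken from SBS.

class Skill:
    def __init__(self, name):
        self.name = name
        self.params = {}            # filled offline and/or online

    def teach(self, robot):
        raise NotImplementedError   # skill-specific teaching function

    def execute(self, robot):
        raise NotImplementedError

class PickSkill(Skill):
    def __init__(self):
        super().__init__("pick")

    def teach(self, robot):
        # Online part: the operator hand-guides (kinesthetic teaching)
        # the arm to the grasp pose, mirroring the move in execute().
        self.params["grasp_pose"] = robot.current_pose()

    def execute(self, robot):
        robot.move_to(self.params["grasp_pose"])
        robot.close_gripper()

class DummyRobot:
    """Stand-in for the real robot interface; poses are plain tuples."""
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)
        self.log = []

    def current_pose(self):
        return self.pose

    def move_to(self, pose):
        self.pose = pose
        self.log.append(("move", pose))

    def close_gripper(self):
        self.log.append(("close",))

# Offline part: sequence the skills into a task.
task = [PickSkill()]
robot = DummyRobot()

# Online part: step through the sequence once, teaching each skill ...
robot.pose = (0.4, 0.1, 0.2)        # pose reached by hand-guiding
for skill in task:
    skill.teach(robot)

# ... after which the parameterized task can execute autonomously.
for skill in task:
    skill.execute(robot)
```

Because teaching and execution share one sequence, the operator never handles two representations of the task; the demonstration itself fills in the online parameters.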


4.6. Conclusion

Research objective 1.4 - Assess the usability of manual, task-level programming and the applicability of the approach in industrial praxis

Two types of assessment of the proposed skill-based programming approach have been conducted: assessment of the usability and required training level, and evaluation of the approach in real-world industrial settings. Assessment of the usability of the skill-based programming has been done through two user studies with 18 participants in total. The 18 participants, with robotics experience ranging from complete novice to expert programmer, each programmed two industrially relevant handling tasks of varying complexity. The user studies showed that even complete novices were able to program industrial robot tasks after an introduction of 15 minutes. Validation of the industrial applicability of the skill-based programming method and SBS has been done through three experiments in real-world industrial manufacturing settings, two of which are described in this thesis. In summary of the experiments, SBS and the skill-based programming method have been successfully used in programming assembly, logistics, and machine tending tasks, and in execution of these tasks during running production scenarios of four and eight hours.


Chapter 5

Hardware Reconfiguration

As described in the research hypothesis in Chapter 3, transitioning a collaborative robot to a new task contains two primary objectives: hardware reconfiguration followed by robot programming. With the skill-based programming presented in Chapter 4 enabling shop floor operators to re-program collaborative robots, the focus in this chapter shifts to Main Objective 2 - Hardware Reconfiguration. Paper 7: Towards Shop Floor Hardware Reconfiguration for Industrial Collaborative Robots [Schou and Madsen, 2016b] presents a survey on the research topics related to hardware reconfiguration of industrial collaborative robots. Based on the vision of enabling shop floor operators to perform the hardware reconfiguration, four objectives are established which are listed below:

• Modular architecture
• Module selection
• Module exchange
• Module utilization

The set of objectives represents a roadmap for the development of a framework for intuitive hardware reconfiguration of collaborative robots. This roadmap has formed the premise for the research on hardware reconfiguration in this PhD project. Central to the hardware reconfiguration approach is a modular hardware architecture segmenting the robot system into modules with well-defined functionalities and interfaces. Section 5.1 describes a modular architecture for AIMMs adopted from the work of Hvilshøj [2012] and a modular architecture for stationary collaborative robot cells developed in connection with the CARMEN project [CARMEN, 2013]. Based on the architectures, Section 5.1 describes selected examples of embodying the architectures and how COTS components are adapted into modules. Configuring a robot to a given task implies selecting a set of modules capable of successfully solving the task. Research on using configurators to aid the user in selecting appropriate modules is presented in Section 5.2. Knowledge modeling to support the configuration along with two proof-of-concept configurators is documented. Realizing a modular, reconfigurable collaborative robot does not only require a modular architecture with physically exchangeable modules. A control framework capable of managing and exploiting the hardware modularity is required to achieve quick and effortless module exchange. Schou and Madsen [2016a, Paper 8], summarized in Section 5.3, propose a hardware management framework supporting quick, online exchange of devices corresponding to "hot" plug and produce, see Section 2.3. Key focus is placed on the interaction with the higher-level task control (robot programming) and on ensuring a flexible utilization of module functionality. Section 5.4 discusses the presented research on hardware reconfiguration and draws up conclusive remarks.
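The "hot" plug-and-produce behavior referred to above can be illustrated with a minimal device registry, sketched below in Python. The framework of Paper 8 is not reproduced here; DeviceManager and the capability names are assumptions made for illustration. The key idea shown is that modules announce their functions when attached at runtime, and the task layer resolves capabilities rather than concrete devices.

```python
# Minimal sketch of online ("hot") module exchange: devices register and
# deregister while the system runs, and capabilities are looked up by
# function name. Hypothetical names; not the actual framework of Paper 8.

class DeviceManager:
    def __init__(self):
        self._devices = {}              # device name -> set of functions

    def attach(self, name, functions):
        """Called when a module is plugged in at runtime."""
        self._devices[name] = set(functions)

    def detach(self, name):
        """Called when a module is removed at runtime."""
        self._devices.pop(name, None)

    def find(self, function):
        """Resolve a required capability to currently attached devices."""
        return [n for n, fs in self._devices.items() if function in fs]

mgr = DeviceManager()
mgr.attach("wsg50", {"grasp", "release"})
mgr.attach("camera", {"capture_image"})
assert mgr.find("grasp") == ["wsg50"]

# Exchange the gripper module without restarting the system:
mgr.detach("wsg50")
mgr.attach("robotiq3f", {"grasp", "release"})
assert mgr.find("grasp") == ["robotiq3f"]
```

Because the task layer asks for "grasp" instead of a specific gripper, a module exchange does not invalidate the task program, which is the flexible utilization of module functionality mentioned above.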

5.1 Modular Hardware

Modular hardware is considered an enabler for quick and intuitive hardware reconfiguration, see Section 2.3. This section describes the design and instantiation of modular hardware architectures for both AIMMs and stationary collaborative robots. Additionally, the section describes the embodiment of modules from both architectures.

5.1.1 Modular Architecture

Two modular architectures are used in this PhD project: the AIMM architecture and the CARMEN architecture.

AIMM Architecture

The AIMM architecture was proposed by Hvilshøj [2012], who developed the architecture using theories from both mass customization and modular product development. Hvilshøj used a combination of a top-down and a bottom-up approach in determining the architecture. The top-down approach was based on a comprehensive analysis of more than 560 industrial assembly, logistics, and machine tending tasks within the pump manufacturing industry [Bøgh et al., 2012a]. The analysis resulted in a set of "user" requirements in terms of needed functionality, capacity, and load capabilities. The bottom-up approach was based on an extensive survey of available COTS components which resulted in a classification and taxonomy of components for AIMMs. By merging the results of the two approaches, the modular segmentation of AIMMs was determined and the modular hardware architecture defined. The modular architecture is not easily represented as different aspects of it are best captured using different visualization and modeling methods. However, a high-level graphical representation of the AIMM architecture is shown in



Fig. 5.1: Illustrative representation of the modular architecture for AIMMs presented by Hvilshøj [2012]. The figure has been slightly simplified for better clarity.

Figure 5.1 to give a general idea of the content. Further details on the AIMM architecture can be found in [Hvilshøj, 2012]. The architecture depicted presents the common module types of the AIMM architecture. However, some of the module types depicted are optional, e.g. the tool changer or tool sensor. In the architecture, the super-structure constitutes a "standard design". That is, the super-structure is the backbone of the AIMM and thus a required, non-exchangeable part. The remaining hardware modules of the AIMM architecture can be classified as manipulator, tooling, mobile platform, and sensors. In addition, a number of mechanical integration components such as base support, robot adapter, and sensor adapter are depicted in Figure 5.1. These components serve to adapt commercial devices to a common, well-defined mechanical interface, e.g. the interface between robot arm and gripper, between robot system and sensor, etc. It should be emphasized that these components do not represent modules by themselves, but serve as part of the respective module whose mechanical interface they adapt. For example, the tool adapter mounted to a robot tool adapting it to a standardized bolt pattern is considered part of the gripper module (refer to Section 5.1.2 for details). This approach is chosen as it provides the simplest module solution space for a configuration system to manage.



Fig. 5.2: Illustrative representation of the CARMEN architecture for stationary collaborative robots.

CARMEN Architecture

The CARMEN project is a Danish national research project focusing on developing more intuitive robotic solutions with a significantly lower changeover time compared to traditional industrial robotic solutions [CARMEN, 2013]. To achieve this, studies within CARMEN focus on modularity in both software and hardware, learning from prior experience, and advanced simulation of critical process steps. Where the vision for AIMMs is to bring the robot to the task, the vision in CARMEN is to bring the task to the robot cell. Consequently, the CARMEN robot cell is typically a stationary robot cell. In the CARMEN project, relevant tasks at four Danish manufacturing companies have been analyzed and used to derive initial functional, structural, and geometric requirements; similar to the top-down approach used by Hvilshøj. From these requirements, an architecture for a stationary, table-top, modular robot cell has been designed in collaboration with partners in CARMEN and used as a central aspect in the CARMEN project. The manipulation-centric modules of the CARMEN architecture are similar to those of the AIMM architecture. A high-level graphical representation of the CARMEN architecture is shown in Figure 5.2. An overview of the primary, scalable modules in the CARMEN architecture is depicted in Figure 5.3. The central concept of the CARMEN architecture is scalable pallets which mount on top of modular tables; see frame, table-top plate, and pallets in Figure 5.2.



Fig. 5.3: The primary, scalable modules of the CARMEN architecture and their combination into a CARMEN cell.


The pallets serve as base modules which can be equipped with application specific equipment. The pallets can be of any incrementally scaled size based on a standard unit size. Hence, pallets can be 1x1, 1x2, 1x3, 2x2, 2x3, etc. Pallets can be one of four types:

Robot pallet: A pallet designated for mounting a robot arm. As commercial robot arms do not adhere to the same base bolt pattern, robot pallets with robot-specific bolt patterns are needed.

Mounting pallet: A pallet with threaded holes distributed in a grid on the surface. Using the threaded holes, equipment can be fixed to the pallet surface. Hereby, the mounting pallet is a mechanical adapter enabling arbitrary equipment to be integrated on the modular CARMEN platform.

Vision pallet: Typically a white pallet with a solid, smooth, and non-glossy surface suitable for vision applications.

Custom pallet: Custom pallets can be designed as long as they adhere to the standard unit and the mechanical mounting interface for pallets. A custom pallet could for instance contain custom equipment developed with the pallet interface built directly into the equipment.

The tables (frame + table-top) to which the pallets are fixed follow the same standard unit and incremental scale. A CARMEN cell can be composed of several tables grouped together to provide a larger combined table surface. One of the supporting frames is designated the main frame and carries the control and computational components. If a single robot arm is used, it will be mounted to the main frame. For stability and capacity reasons, the main frame is constrained to a minimum size of 1x2. A scalable sensor support frame allows sensors such as vision cameras to be mounted over the table surface, see Figure 5.3.

5.1.2 Embodying the Modular Architecture

This section presents some of the modules embodied from the AIMM and CARMEN architectures.

Manipulation Modules

The embodiment of manipulation-centric modules has primarily focused on object handling tasks. Thus, the majority of the embodied modules are robot arms and grippers. Table 5.1 lists the devices that have been adapted into modules with standardized and well-defined mechanical interfaces, and Figure 5.4 shows the physical embodiment of some of these modules.


Brand/Model                   Module Type
Schunk WSG50 (d)              Gripper
Robotiq 3-finger (c)          Gripper
MetalWork P4-16 (e)           Gripper
Custom Electro-magnet         Gripper
Stud-welding-gun replica      Process tool
ATI SI-130-10 (f)             Force/torque sensor
KUKA LWR 4+ (a)               Robot arm
Universal Robots UR5 (b)      Robot arm

Table 5.1: Manipulation-related devices adapted into modules. The character in parentheses refers to the image of the particular module in Figure 5.4.

(a) KUKA, (b) Universal Robots, (c) Robotiq, (d) Schunk, (e) MetalWork, (f) ATI

Fig. 5.4: Physical embodiment of manipulation-centric modules.


Robot arm module

A robot arm has in general two mechanical interfaces: the tool flange and the base. Standardization of the mechanical interface between robot arm and gripper is obtained by using adapter plates. Each module, here gripper and robot arm, is equipped with an adapter plate which installs onto the device-specific interface and provides an adaptation to a common, standardized mechanical interface between the adapter plates. Consequently, an adapter plate must be manufactured for each device if it does not already comply with the chosen standardized interface. The standardized interface between the adapter plates consists of four bolts in a square pattern with 70 mm between them. The robot arm adapter plate contains M8 threaded holes, and the tool adapter contains counter-sunk D8 holes. The interface is designed to provide a mechanically stiff joining of the plates while allowing a variety of devices suitable for small-part assembly to be equipped with the adapter plate. Figure 5.5 illustrates the integration of adapter plates on both the gripper and robot arm module. The standardized interface of the adapter plates also allows modules to be mounted between the gripper and robot arm, e.g. a tool changer or a force/torque sensor.
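The role of the adapter plates can be summarized in a few lines of Python: mounting compatibility between modules reduces to matching the one standardized interface (four bolts in a 70 mm square pattern) instead of every device-specific interface. The classes and names below are illustrative assumptions, not part of any system described in this thesis.

```python
# Sketch of interface matching via adapter plates. BoltInterface and
# Module are hypothetical names; the 70 mm four-bolt square pattern is
# the standardized interface described in the text.

from dataclasses import dataclass

@dataclass(frozen=True)
class BoltInterface:
    pattern: str          # bolt layout, e.g. "square"
    spacing_mm: float     # distance between neighboring bolts
    bolts: int

STANDARD = BoltInterface("square", 70.0, 4)

@dataclass
class Module:
    name: str
    interface: BoltInterface   # interface exposed by the adapter plate

def can_mount(upper: Module, lower: Module) -> bool:
    # Both adapter plates must expose the same standardized interface.
    return upper.interface == lower.interface == STANDARD

arm = Module("robot arm", STANDARD)
gripper = Module("gripper", STANDARD)
ft_sensor = Module("force/torque sensor", STANDARD)

# A force/torque sensor can sit between arm and gripper because all
# three modules expose the same standard interface.
assert can_mount(ft_sensor, arm) and can_mount(gripper, ft_sensor)

# A device without a compliant adapter plate does not mount:
legacy = Module("legacy tool", BoltInterface("square", 50.0, 4))
assert not can_mount(legacy, arm)
```

This is also why the adapter plate is treated as part of its module: once every module exposes the standard interface, a configurator only ever reasons over one interface type.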


Fig. 5.5: A standardized mechanical interface between the tool module and robot arm module is realized using adapter plates.

In this PhD project, standardization of the robot arm base interface is achieved through the use of CARMEN robot pallets. Thus, the two robot arm modules embodied are each installed onto a CARMEN pallet providing them with the interface suitable for the CARMEN platform.


CARMEN Platform

Central to the CARMEN architecture is the scalable pallet concept. For the embodiment of the CARMEN architecture, a standard unit size of 400 mm has been chosen in the CARMEN consortium, and thus the minimum pallet of 1x1 is 400 x 400 mm (length x width). Pallets can be of any incrementally scaled size based on the standard unit; e.g. 1x2 (400 x 800 mm), 2x2 (800 x 800 mm), 1x3 (400 x 1200 mm), etc. With the chosen unit size, a cell of 2x3 has a footprint of 800 x 1200 mm corresponding to the footprint of a standard EUR-pallet. The embodiment of the CARMEN architecture has been done at several of the partners in CARMEN. At AAU, a number of robot cells (main tables) and external tables have been embodied. The following tables have been created for research and teaching purposes at AAU:

• 2x2 CARMEN robot cell
• 1x2 CARMEN robot cell
• 1x2 mobile CARMEN robot cell
• 1x2 table for Adept AnyFeeder
• 1x2 external tables with mounting pallets

Each of the robot cells realized is fully equipped with pallets, robot arm, tooling, and controllers. As part of solving several industrial and laboratory tasks, several mounting pallets have been equipped with application specific components. Figure 5.6 depicts three examples of application specific equipment.


Fig. 5.6: Pallets equipped with application specific equipment. (a) is equipped with both feeding and assembly fixtures for assembly of a FE-socket. (b) is equipped with a demonstrator for a gluing process. (c) is equipped with a fixture for precision stud-welding operations.
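The incremental pallet scale lends itself to a one-line computation. The sketch below (illustrative code, using the 400 mm standard unit chosen by the CARMEN consortium) checks the dimensions quoted above, including the EUR-pallet correspondence of a 2x3 cell.

```python
# Footprint of an n x m CARMEN pallet or table, given the 400 mm
# standard unit chosen in the CARMEN consortium.

UNIT_MM = 400

def footprint(length_units: int, width_units: int):
    """Return (length, width) in mm of an incrementally scaled pallet."""
    return (length_units * UNIT_MM, width_units * UNIT_MM)

assert footprint(1, 1) == (400, 400)     # minimum pallet
assert footprint(1, 2) == (400, 800)
# A 2x3 cell matches the 800 x 1200 mm footprint of a standard EUR-pallet:
assert footprint(2, 3) == (800, 1200)
```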

A key goal of the CARMEN project is the transfer of research results to industry. The project includes several industrial partners, and in extension of these, the CARMEN technology is sold to industry through collaboration with two system integration companies. To date, more than 30 CARMEN robot cells have been constructed, with more than 20 deployed in industry. According to the system integration companies, CARMEN solutions have been sold to a broad variety of industries and companies ranging from small businesses with five employees to large, global corporations.

Within the CARMEN project, CARMEN robot cells have been developed and deployed at industrial partners. Recently, a CARMEN robot cell has been deployed at the industrial partner Danfoss A/S where it will take part in daily production. A key goal in this is both the dissemination of research results to industry and the experiences gained from the development of the cell and the subsequent production implementation. The configuration of the cell has taken its offset in four specific tasks currently performed manually in the Danfoss production. Based on an analysis of each task, a feasible, specific configuration has been derived for each task; for some of the tasks, several configuration proposals have been derived. Figure 5.7 shows one configuration for each task. Based on the configuration proposal for each task, it has been decided within the CARMEN consortium to construct a 3x3 CARMEN cell with an external 1x3 mobile table for Danfoss, and initially realize the pallets needed to automate tasks C and D (Figure 5.7). Although the configuration proposal for each task is based on a 2x2 main table, Danfoss requested a 3x3 main table to ensure room for further development. The mobile table can be configured to carry an AnyFeeder module on a 3x1 pallet, or to carry boxes and trays for products. The main table is equipped with a UR5 robot with a Robotiq FG85 gripper. Pallets with a flexible press station, an application specific automated feeding magazine, an application specific active fixture, and a passive fixture for product trays have been constructed. Figure 5.8 shows the cell at Danfoss configured for task B.



Fig. 5.7: Configurations of the CARMEN cell proposed for four specific tasks from Danfoss A/S. The configurations have been derived manually based on analysis of the four tasks.


Fig. 5.8: CARMEN robot cell at Danfoss A/S. The cell consists of a 3x3 main table with a UR5 robot and application specific pallets, and an external 1x3 mobile table.

5.2 Module Selection

With offset in the modular hardware architectures, configuring the robot to a given task starts by selecting the set of suitable modules, see Figure 3.1. However, selecting suitable modules for a given task is not trivial as it requires knowledge on the process, the product to be manufactured, and the equipment to be reconfigured [Rampersad, 1994]. Although most shop floor operators might possess detailed knowledge on the process and the product, they rarely possess detailed knowledge on the robot equipment domain. This is further compounded by the size and complexity of the robot equipment domain. Especially the fact that some modules have no direct implication in the process but might be just as important as other modules adds to the complexity of determining a configuration. For example, a gripper has a direct implication in the process, but a power supply does not; yet, both are equally important for the operation of the robot. The concept of software configurators (see Section 2.3) provides a cooperative tool aiding the user in the configuration process. Therefore, configurator technology is investigated in this PhD project as a means of aiding the shop floor operators in selecting a feasible set of modules for a given task. This section documents yet unpublished research on the objective of selecting a suitable



Fig. 5.9: The ontologies created as part of the knowledge modeling activity in this PhD project. The three domain ontologies represent the manufacturing knowledge as proposed by Rampersad [1994] and Lohse et al. [2006]. The equipment ontology is represented by the manipulation ontology and the CARMEN ontology.

set of modules using configurators. It describes the design, development, and implementation of two proof-of-concept configurators for industrial collaborative robots and the associated research effort on knowledge modeling. Two cases are selected as reference for this research: configuration of manipulation-centric modules and configuration of a CARMEN cell. The former focuses on configuration of modules related to object manipulation, typically including a robot arm, a tool, and sensors. This is of relevance to both AIMMs and stationary collaborative robots. The latter focuses on configuration of the CARMEN robot cell from the CARMEN modules defined in Section 5.1.
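As a toy illustration of what such a configurator does, the Python sketch below filters a hypothetical module catalog against requirements derived from the task. All module data and numeric values are invented for illustration; the actual configurators, described next, reason over OWL ontologies rather than hard-coded rules.

```python
# Toy module-selection sketch (hypothetical catalog and values). Note how
# support modules such as a power supply are carried along even though
# they have no direct implication in the process.

MODULES = [
    {"name": "WSG50",   "type": "gripper",   "grasp": "parallel", "payload_kg": 0.8},
    {"name": "P4-16",   "type": "gripper",   "grasp": "parallel", "payload_kg": 0.2},
    {"name": "UR5",     "type": "robot_arm", "payload_kg": 5.0},
    {"name": "LWR4+",   "type": "robot_arm", "payload_kg": 7.0},
    {"name": "24V-PSU", "type": "supply"},
]

def select(task_req):
    """Return candidate modules per type satisfying the task requirements."""
    config = {}
    for m in MODULES:
        if m["type"] == "gripper":
            ok = (m["grasp"] == task_req["grasp"]
                  and m["payload_kg"] >= task_req["part_weight_kg"])
        elif m["type"] == "robot_arm":
            ok = m["payload_kg"] >= task_req["part_weight_kg"]
        else:
            ok = True   # support modules are always candidate requirements
        if ok:
            config.setdefault(m["type"], []).append(m["name"])
    return config

cfg = select({"grasp": "parallel", "part_weight_kg": 0.5})
assert cfg == {"gripper": ["WSG50"],
               "robot_arm": ["UR5", "LWR4+"],
               "supply": ["24V-PSU"]}
```

A real configurator additionally has to derive the requirements themselves from process and product knowledge, which is precisely what the ontologies in the following subsection provide.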

5.2.1 Configuration Model

The review of related work presented in Chapter 2 suggests that ontologies are a well-proven method for modeling knowledge for both robotics and configuration of technical systems. As a result, the ontology-based knowledge modeling method is adopted in this PhD project. Following the theory of Rampersad [1994], the knowledge for the configuration task is covered by the three domains process, product, and equipment. As presented in the review of related work in Chapter 2, Lohse [2006] proposed an ontology framework with process, product, and equipment domain ontologies. The overall concepts of the framework have been adopted in this PhD project and used for the knowledge modeling activities. A number of ontologies have been created as part of the knowledge modeling for the configuration task, see Figure 5.9. Instead of a single equipment ontology, an ontology is created for each of the two configuration cases; i.e. the equipment ontology is represented by a CARMEN ontology and a manipulation ontology. Extending the three domain ontologies, two "common" ontologies have been created: a geometry ontology and a material ontology. These ontologies represent knowledge applicable to a wide range of technical objects and systems. To that extent, they


are regarded as "common", and they cover general concepts and instances of their respective fields; e.g. 3D-shapes, 2D-shapes, and units of measure for the geometry ontology, and individual materials and their properties for the material ontology. All the ontologies have been modeled in OWL DL using Protégé 5 (a graphical integrated development environment for ontologies) from Stanford University [Stanford University, 2015]. The Pellet reasoner [Sirin et al., 2007] is used to infer additional knowledge (axioms) based on the asserted knowledge (axioms); e.g. if a gripper uses a pneumatic energy source, it is inferred to be a pneumatic gripper by the reasoner.

Common Ontologies

The material ontology is used to associate a material with both products and equipment. In the configurator implementations presented in Section 5.2.2, the primary usage is to deduce product weight, fragility, and surface color based on the associated material. The geometry ontology is used to associate geometries with both products and equipment. For example, the geometry is used to deduce possible grasps on a product when selecting a suitable gripper.

Product Ontology

The product ontology contains knowledge on products manipulated in the robot task (the term product used throughout this section refers to the concept product and the subsumed concepts component and assembly). The high-level concept taxonomy of the product ontology is inspired by the structure proposed by Lohse [2006]. The core concepts of the product ontology and the relations to the material and geometry ontologies are illustrated in Figure 5.10. The top-most concept of the product ontology is the product. A product can either be a component or an assembly. In [Lohse, 2006] a component is defined as the entity being input for a process; i.e. it could be single entities or sub-assemblies. The motivation for this definition by Lohse is that complex or 3rd party entities, like a bearing, do not need to be decomposed into sub-components. As such, the need to represent them as an assembly is unnecessary. Although the definition by Lohse is well-argued, a component is in this PhD project defined as a single entity, not further decomposable into sub-components. Hence, it cannot be a sub-assembly. The separation of assembly and component allows an explicit relation between component and material and geometric properties, see Figure 5.10. In the case of complex or 3rd party entities, like a bearing, they can be modeled as instances directly of product. Despite the clearer separation of component and assembly compared to that of



Fig. 5.10: The primary concepts and relations of the product ontology. Individuals are included to exemplify the instantiation of the concepts.

Lohse, the qualifying condition for being a component is still relative. For example, is a product made from welding two components together a component or an assembly? And does the application of paint to a component render it an assembly? A strict semantic definition providing an explicit classification of these and similar examples will not be put forth. The classification as either assembly or component will instead be left open as a modeling flexibility in each particular application.

Process Ontology

The process ontology contains knowledge on the task to be accomplished by the robot. Like in the product ontology, the high-level concepts of the process ontology are inspired by Lohse [2006]. Figure 5.11 illustrates the top-most concepts of the process ontology and their relations. Instances are included in the figure to exemplify the extent of the concepts from which they are derived. Any physical act is considered an activity. An activity is specialized as either a process or an action. The primary difference between the two is that a process involves a state change of one or more products (or components/assemblies). Hence, a process is an activity performed on a product. An action, on the other hand, is defined as an activity in which only the state of the actor is changed. In consequence of this definition, the actions naturally become low-level activities. A task is defined as a process performing a well-defined portion of work to achieve a production related goal. An operation is defined as a process changing the state of one or more products. Given that a task consists of one or more operations, the simplest task would be a single operation. As such, these definitions by Lohse [2006] do not clearly define a boundary between task and operation. However, as this thesis considers the


Fig. 5.11: The primary concepts and relations of the process ontology. Individuals are included to exemplify the instantiation of the concepts.

configuration of one task at a time, the focus is on the content of a task; hence, the sequence of operations. Thus, the definition of task and operation by Lohse has been sufficient to support the work of this PhD study. The abstraction levels and mereological relations between task, operation, and action clearly resemble the three-layered architecture used in the skill-based programming presented in Chapter 4. This similarity is exploited in the configurators in Section 5.2.2 to link process and equipment knowledge.

Manipulation Ontology

The manipulation ontology contains the manipulation-centric concepts of a robot cell. That is, the concepts that are part of the robot itself; typically including a robot arm, a robot tool, sensors, structural components, and control/computational components, see Figure 5.1 and Figure 5.2. The full manipulation ontology cannot be visualized properly in this thesis, but is made available for download (the most recent version can be downloaded from http://tinyurl.com/csmanonto). Instead, the hierarchical structure of the primary concepts is shown in Figure 5.12, and the primary relations between these concepts are shown in Figure 5.13. The general concept of a module is specialized by the active module, which is a module containing at least one device.



Fig. 5.12: The upper, central concepts of the manipulation ontology.


Fig. 5.13: The relations of the upper, central concepts of the manipulation ontology.



Fig. 5.14: Example of non-physical knowledge in the manipulation ontology. The figure illustrates some of the functional aspects related to a gripper. Further subsumption of the gripper concept is exemplified using the parallel gripper concept which further restricts the relations of the gripper concept. The classification of gripping principles is based on [Monkman et al., 2007].

Devices are defined as controllable, mechatronic entities. The concepts, relations, and properties of the manipulation ontology furthermore cover non-physical concepts and relations such as functions and operating principles. An example of the non-physical concepts is presented in Figure 5.14. As shown in Figure 5.14, a device must be capable of performing at least one active function. A gripper is required to have at least one grasp and at least one release primitive, but as given by the hierarchical structure, a gripper is not restricted to only these two primitives. The specialization of the gripper concept into the parallel gripper concept is shown to exemplify further restrictions on the relations. The function concept represents abstract capabilities of equipment. The hierarchical structure of the function concept is shown in Figure 5.15 which includes the concepts task, skill, and primitive. These concepts represent the software functions used in the skill-based programming presented in Chapter 4. The function concept and the subsumed concepts have been modeled in the

5.2. Module Selection

Task

AssembleRotor

1..*

PegInHole Skill Active Function

Function

PlaceOnto

1..*

Pick

Grasp MoveLinear Primitive Release Concept

Instance

hasFunction

isA

instanceOf

CaptureImage

Fig. 5.15: The hierarchical structure of function concepts and their relations. The function concept is subsumed by the task, skill, and primitive used in skill-based programming.

manipulation ontology and not in the process ontology because these concepts reflect equipment capabilities and not process requirements. However, they do provide a convenient link to the process requirements as exploited in the configurators presented in Section 5.2.2. CARMEN Platform Ontology The ontology on the CARMEN platform contains the CARMEN specific modules and components and their internal relations. It furthermore links to the manipulation ontology by including the concept robot. That is, the robot concept asserted in the CARMEN ontology is described by importing the manipulation ontology. The full CARMEN ontology cannot be visualized properly in this thesis, but is made available for download4 . Instead, the hierarchical structure of the upper, central concepts is shown in Figure 5.16, and the relational structure of these concepts is shown in Figure 5.17. As shown in Figure 5.17, a robot is a required part of a CARMEN cell. Although a CARMEN cell could in principle exist without a robot, the CARMEN cell is in this PhD project regarded as a robot cell and therefore must include at least one robot. 4 The most recent version of the CARMEN ontology can be downloaded from: http://tinyurl.com/cscaronto


Chapter 5. Hardware Reconfiguration


Fig. 5.16: The upper, central concepts of the CARMEN ontology.
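The subsumption (isA) hierarchy of Figure 5.16 can be pictured as a simple child-to-parent map. The sketch below is a minimal Python illustration; the parent assignments are inferred from the figure and the surrounding text and should be taken as assumptions, not as the asserted ontology.

```python
# Minimal sketch of the isA hierarchy in Figure 5.16 as a child -> parent map.
# Parent assignments are illustrative assumptions inferred from the figure.
IS_A = {
    "CARMENModule": "PhysicalModule",
    "CARMENCell": "RobotCell",
    "CARMENFrame": "CARMENModule",
    "CARMENTableTop": "CARMENModule",
    "CARMENPallet": "CARMENModule",
    "CARMENSensorFrame": "CARMENFrame",
    "FrameExtension": "CARMENFrame",
    "MainFrame": "CARMENFrame",
    "MainTableTop": "CARMENTableTop",
    "MountingPallet": "CARMENPallet",
    "RobotPallet": "CARMENPallet",
    "VisionPallet": "CARMENPallet",
    "CustomPallet": "CARMENPallet",
}

def subsumed_by(concept, ancestor):
    """True if `concept` equals `ancestor` or is a transitive isA-descendant."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)
    return False
```

For example, `subsumed_by("RobotPallet", "CARMENModule")` holds, mirroring the reading that every pallet is a CARMEN module.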


Fig. 5.17: The relations of the upper, central concepts of the CARMEN ontology.


5.2. Module Selection

5.2.2 Configurators

Two proof-of-concept configurators have been implemented during this PhD project: a configurator for manipulation equipment and a configurator for CARMEN platform equipment. Both configurators are implemented in C++. The REDLAND and RASQAL libraries5 [Beckett, 2015] provide the means to read, write, and execute queries on the OWL files containing the ontologies. A custom shared object wrapping the REDLAND and RASQAL libraries has been created with the purpose of providing higher-level methods devised for extracting configuration knowledge. Thus, the configurators include the shared object, which in turn includes the REDLAND and RASQAL libraries.

Manipulation Equipment Configurator

The manipulation equipment configurator has been implemented as a standalone, graphical tool. It includes equipment, process, and product knowledge, and represents a proof-of-concept implementation of a manipulation equipment configurator intended for shop floor operators. Based on user input of process and product information, the configurator determines the required module types (e.g. robot arm or camera) and a set of feasible specific modules denoted the candidates (e.g. UR5 or ASUS Xtion). The final selection within the set of candidates is made by the operator. The user interface of the manipulation equipment configurator is illustrated in Figures 5.18 to 5.21.

The user interface is designed as a step-through "wizard", and the first step is the input of process information, see Figure 5.18. The scope of the configurator is to determine a configuration for a single task and, following the process ontology illustrated in Figure 5.11, a task consists of one or more operations. Thus, the user input for the process is given by sequencing operations. A number of generic operations are defined in the process ontology and available through the configurator, e.g. insert product into fixture and place product on surface.
The task - operation - action structure of the process ontology is remarkably similar to the task - skill - primitive structure in the manipulation ontology. Both skills and operations are defined as effectuating a state change to a given product. However, because the operations represent task-related goals from a human perspective, the semantics of operations may not fully express the activities needed. That is, part of the activity in an operation can be implicit. For example, the operation move product to bin does not explicitly state that the product should be picked up first, but humans naturally infer this. This implicit knowledge is modeled in the manipulation ontology by asserting which skills are used to realize a given operation, see Figure 5.22.

5 REDLAND provides support for the Resource Description Framework (RDF), and RASQAL provides support for executing SPARQL queries on RDF data.
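The realizedBy link can be pictured as a simple lookup: each operation maps to the skills asserted to realize it, making implicit activities such as the pick-up explicit. The mapping below is illustrative; the actual assertions live in the manipulation ontology, and the skill names are taken from the figures.

```python
# Sketch of the realizedBy link between operations and skills (Figure 5.22).
# The mapping entries are illustrative stand-ins for ontology assertions;
# note how "move product to bin" makes the implicit pick-up explicit.
REALIZED_BY = {
    "move product to bin": ["Pick", "PlaceOnto"],
    "insert product into fixture": ["Pick", "PegInHole"],
    "place product on surface": ["Pick", "PlaceOnto"],
}

def required_skills(operations):
    """Map a sequence of operations to the set of skills realizing them."""
    skills = set()
    for op in operations:
        skills.update(REALIZED_BY[op])
    return skills
```

For example, `required_skills(["move product to bin"])` yields both Pick and PlaceOnto, even though the operation names only the goal.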



Fig. 5.18: Screenshot of the process input menu in the manipulation equipment configurator.



Fig. 5.19: Screenshot of the product input menu in the manipulation equipment configurator.



Fig. 5.20: Screenshot of the input summary menu in the manipulation equipment configurator.



Fig. 5.21: Screenshot of the candidate selection menu in the manipulation equipment configurator.


Fig. 5.22: The relation between process and equipment knowledge. The link between operations and skills is modeled through the realizedBy relationship which is asserted in the manipulation ontology.




Fig. 5.23: Overview of the configuration flow in the manipulation equipment configurator.

Using the link between operations and skills, the configurator maps the operations sequenced by the user into a set of required skills, and from these into a set of required primitives. Based on the required primitives, the configurator extracts the individual modules providing these primitives. The result is a list of all candidates providing at least one of the required primitives. By extracting the module type of each individual candidate, the represented module types are determined. An overview of the configuration flow in the manipulation equipment configurator is depicted in Figure 5.23. The list of potential module types, the list of candidates, and the list of required primitives are used for the final candidate selection. First, however, the product information is acquired and analyzed.

The next step after the process information input is the input of product information. In this window (see Figure 5.19) the user can assign a product to each operation. The product knowledge, including individual products, is retrieved from the product ontology and displayed to the user. Should a given product not be present in the product ontology, a product input menu in the configurator allows the user to intuitively create new products, which are then inserted as axioms into the product ontology. Based on the product input provided by the user, the list of candidates already obtained from the process input is further refined. During this refinement, the candidates not suitable for the chosen products are removed from the list. In its current state, the configurator supports a maximum cardinality of one for each module type, i.e. at most one robot arm, one gripper, one force/torque sensor, one camera, etc. Thus, it does not consider the use of e.g. two grippers or the use of tool-changers. The product-based refinement is performed by linking knowledge from the product ontology and the manipulation ontology. Figure 5.24 depicts an example where the capabilities of a gripper are linked to the product knowledge through the shape of one or more grasp regions on the product.

Fig. 5.24: Example of a relation between product and equipment knowledge. For each component one or more grasp regions are asserted in the product ontology. The link between the shape of a grasp region and a suitable grasp type is modeled through the graspedWellBy relationship which is asserted in the manipulation ontology. Note that an assembly inherits the grasp regions of any components part of the assembly.

After the input of process and product information, the configurator generates a summary of the inputs and determines the required skills, the required primitives, the required module types, and the possible candidates. The last step in the configuration process is the selection of specific modules from the list of candidates. At this point, the candidate list only contains valid modules, and as such the selection is a matter of user preference. However, the final selection could be further supported by either an optimization of the proposed candidates based on some criteria or machine learning based on historical choices and success rates; albeit, this remains future work. To aid the user in the selection, the candidates are categorized according to their module type. Furthermore, the list of required primitives is shown during the selection, and as candidates are selected, the covered primitives are marked green, see Figure 5.21. The selection is finished once all required primitives have been covered by a specific module.

The conclusion of the configuration process is the generation of a configuration file and a configuration report. The configuration file contains the obtained configuration and the asserted operations and products. Thus, the configuration file represents both the final, specific configuration and a summary of the user input leading to this configuration. The latter could be used in future applications of machine learning based on the selection process. The configuration report is displayed as the last step in the user interaction and summarizes the configuration input and output.

A preliminary, qualitative feasibility study of the manipulation equipment configurator has been conducted. Three industrial tasks have been used as input for the configurator: the rotor assembly (Figure 4.9(a)), the FE-socket assembly (Figure 4.9(d)), and a part of the Cranfield benchmark assembly [Collins et al., 1985]. All three tasks have been carried out (programmed and executed on a collaborative robot) during this PhD project. Since the configurator determines all suitable modules, the success criterion for the study was that the modules previously used to solve the given task were present in the list of candidates determined by the configurator. In all three test cases, the configurator did suggest the modules used to solve the task, which indicates the feasibility of the proposed configurator. Figure 5.25 shows the process and product inputs for the rotor assembly task and the selected candidates. The task has been solved physically in both the TAPAS and ACat projects using different configurations; both are present in the candidate list. The selected candidates in Figure 5.25 represent the configuration used to solve the rotor assembly in the ACat project. The experience from the feasibility study indicates that the manipulation equipment configurator does produce the expected output within the defined scope.
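The configuration flow, from operations through skills and primitives to candidate modules and module types, can be condensed into a short sketch. The small knowledge tables below stand in for ontology queries; apart from UR5, the module and skill names are illustrative assumptions.

```python
# Condensed sketch of the configuration flow in Figure 5.23: operations map
# to skills, skills to required primitives, and primitives to candidate
# modules. The tables are illustrative stand-ins for ontology queries.
REALIZED_BY = {"move product to bin": ["Pick", "PlaceOnto"]}
SKILL_PRIMITIVES = {"Pick": {"moveLinear", "grasp"},
                    "PlaceOnto": {"moveLinear", "release"}}
MODULES = {
    "UR5":            {"type": "robot arm", "primitives": {"moveLinear"}},
    "ExampleGripper": {"type": "gripper",   "primitives": {"grasp", "release"}},
}

def configure(operations):
    """Return (required primitives, candidate modules, module types)."""
    skills = {s for op in operations for s in REALIZED_BY[op]}
    primitives = set().union(*(SKILL_PRIMITIVES[s] for s in skills))
    # A candidate is any module providing at least one required primitive.
    candidates = sorted(m for m, info in MODULES.items()
                        if info["primitives"] & primitives)
    types = {MODULES[m]["type"] for m in candidates}
    return primitives, candidates, types
```

The product-based refinement step would then prune this candidate list, e.g. removing grippers whose grasp types do not match any grasp-region shape of the chosen products.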
An analysis of all the proposed candidates did, however, reveal some candidates which would not be physically suitable for the given task. These were included in the candidate list due to the scope set for the proof-of-concept implementation of the configurator, which does not yet assess all implications of the various process and product inputs. Further maturation and extension of both the configurator scope and the underlying knowledge would significantly improve the response obtained from the configurator. In summary of the feasibility study, the approach used in the manipulation equipment configurator is deemed feasible, and further maturation and extension of the scope is possible following the same method as demonstrated.

Fig. 5.25: Module selection for the rotor assembly task (Figure 4.9(a)), which was part of the feasibility study of the manipulation equipment configurator. The figure shows (a) the process and product inputs and (b) the selected candidates. The selected candidates have been used to physically conduct the rotor assembly in the ACat project. Please note that only the insertion of two of the eight magnets has been included, as the repetition of this operation does not affect the configuration outcome.

CARMEN Configurator

The proof-of-concept implementation of the CARMEN configurator has been integrated as a sub-menu in SBS, see Section 4.2.1. This has been chosen to directly integrate the configuration output of the CARMEN configurator into the subsequent skill-based programming. The CARMEN configurator is designed as a graphical tool allowing the user to configure the layout of the robot cell in terms of pallets and their position and orientation on the CARMEN table. The configurator provides a visual representation of the resulting CARMEN table layout during the configuration process. The user can choose pallets from a library containing robot pallets, vision pallets, empty mounting pallets, equipped mounting pallets, and custom pallets. The robot cell, the pallets, and the equipment mounted to the pallets each have a local reference frame. The configurator automatically creates a transformation tree from all the individual reference frames and thereby the relations between them. During the configuration process, the configurator ensures that pallets can only be positioned and oriented according to the respective empty slots on the table-top. It furthermore prevents pallets from occupying the same space, and it ensures that the same pallet from the library can only be inserted once. As the SBS programming tool is not suited for coordination between two robot arms, the configurator limits the number of robot pallets to one. The CARMEN configurator is based only on the equipment knowledge represented in the CARMEN ontology. As a result, the configurator can only prevent inconsistent configurations, not provide suggestions or validation of the configuration in relation to the given task.
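The transformation tree can be pictured as a parent-linked set of frames: resolving any frame walks up the tree and composes the local poses along the way. The sketch below uses planar poses (x, y, theta) for brevity; the frame names and numbers are illustrative assumptions, not the thesis implementation.

```python
import math

# Sketch of a transformation tree: each frame stores (parent, local pose),
# with planar poses (x, y, theta). Frame names and values are illustrative.
FRAMES = {
    "cell":         (None,           (0.0, 0.0, 0.0)),
    "robot_pallet": ("cell",         (0.2, 0.2, 0.0)),
    "mount_pallet": ("cell",         (0.6, 0.2, math.pi / 2)),
    "fixture":      ("mount_pallet", (0.1, 0.0, 0.0)),  # equipment on a pallet
}

def compose(a, b):
    """Compose planar pose b expressed in frame a."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def in_cell(frame):
    """Resolve a frame to cell coordinates by walking up the tree."""
    parent, local = FRAMES[frame]
    if parent is None:
        return local
    return compose(in_cell(parent), local)
```

Updating a single pallet pose in such a tree automatically moves every frame mounted on it, which is what lets a stored task survive pallets being switched, moved, or rotated.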


Fig. 5.26: Screenshot of the user interface of the CARMEN configurator.

Figure 5.26 shows a screenshot of the user interface of the CARMEN configurator menu in SBS. After the configuration process, the obtained configuration is stored for use in task programming and execution. During robot programming, skills can be programmed with reference to any of the reference frames present in the configuration. That is, a skill can be related to a particular pallet or to equipment on a pallet. Consequently, a stored task remains executable even though pallets are switched, moved, or rotated, as long as the new physical configuration is updated in the configurator menu. Despite the system's ability to cope with differing configurations, it is recommended to use the same configuration during task execution as the one used during task programming. For that reason, when initiating the execution of a task on a CARMEN cell, SBS will prompt the operator to validate the configuration.

A preliminary, qualitative feasibility study has also been conducted for the CARMEN configurator. Two industrial tasks were configured using the configurator. Both tasks used a UR5 robot pallet and two mounting pallets with custom equipment. The configurations were realized physically on a 2x2 CARMEN cell, and the associated tasks were instructed using SBS, see Figure 5.27. In the feasibility study, task 1 was executed first; afterwards, a changeover to task 2 was performed, and task 2 was executed. The two pallets of task 2 were then switched, and task 2 was once again executed. Following each reconfiguration activity, the resulting configuration was updated in the CARMEN configurator. After each reconfiguration activity, the given task could be executed successfully. This indicates that the CARMEN configurator can successfully model a given configuration, and that the asserted configuration is successfully integrated with the skill-based programming in SBS.

Fig. 5.27: Physical embodiment of the two tasks, (a) task 1 and (b) task 2, used in the feasibility study of the CARMEN configurator.

5.3 Hardware Management Framework

With a suitable target configuration determined, the last objective is to physically reconfigure the robot, see Figure 3.1. A modular hardware architecture and physical modules with well-defined hardware interfaces, as described in Section 5.1, provide the means to physically reconfigure the robot in terms of exchanging, adding, and removing modules. However, when the modules comprise active components (devices), the control architecture needs to be prepared for the reconfiguration as well. The control system must support module exchange and be able to effectively utilize the functionality of each module.

Paper 8: A Plug and Produce Framework for Industrial Collaborative Robots [Schou and Madsen, 2016a] presents a framework for managing the software control aspect of modular, reconfigurable collaborative robots. The framework facilitates the control of and communication with active, software-controlled devices encapsulated into active modules. An agent-based architecture is used in the framework to support online exchange of active modules following the "hot" plug and produce concept. The communication between the distributed agents is done using ROS as the communication middle-layer. The use of ROS also allows an easier integration of third-party software available from the ROS community, which provides a fast-growing library of open-source, robot-centered software including drivers for a broad variety of devices.

A key purpose of the proposed hardware management framework is to create a clear, well-defined separation between the task control level and the device control level, making the task control level independent of specific device syntax and communication structure. To accomplish this, general, abstract functions called primitives are introduced which represent common device functionality.

Fig. 5.28: Architecture of the hardware management framework. The architecture uses an agent-based approach with distributed control nodes for each device. Superjacent elements in the figure are components bound into a single node through linked libraries.

5.3.1 Architecture

Figure 5.28 presents the architecture of the proposed hardware management framework. The framework constitutes the device control layer in Figure 5.28, which consists of a device manager, a device library, a number of device drivers, and a number of corresponding device proxies.

Device driver

Each physical, active device has a corresponding software driver to facilitate the communication with the device. Thus, the device driver is specific to a particular device. On one side, the device driver handles the low-level communication with the physical device, which follows the specific communication interface, protocol, and syntax of the device. On the other side, the device driver provides a ROS-based interface for interaction with the device manager. To allow an easy integration of downloaded, third-party ROS packages, the ROS interface of the device driver is not standardized. Consequently, the communication structure and syntax of the ROS interface will differ depending on the particular developer who created the driver.


Device proxy

To adapt the ROS interface of the device drivers to the syntax of primitives, a device proxy is introduced for each device driver. The device proxy performs the mapping from the specific syntax and structure of the particular device driver into the syntax of primitives. The higher-level task control is hereby kept independent of specific device syntax and implementations, following the idea of plug and produce. Consequently, changes to the module library, e.g. the introduction of a new module, can be made without affecting the task control. The device proxy is implemented as a dynamically linked library included at runtime in the device manager.

Device manager

Central to the framework is the device manager node, which serves two primary purposes. Firstly, it keeps track of the connected device drivers and manages the registration of newly launched device drivers. Secondly, the device manager handles the communication between the device layer and the task control. Incoming primitive requests from the task control are designated to a matching device by the device manager and sent to the respective device proxy.

Task control

The hardware management framework is purposely kept independent of the task control system. Thus, although illustrated as a single node in Figure 5.28, any task control system would apply as long as it adheres to the interaction protocol of the primitives.
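The interplay between proxies and the device manager can be sketched as follows: each proxy adapts driver-specific calls into the common primitive syntax, and the manager designates an incoming primitive request to a matching registered proxy. Class and method names here are illustrative assumptions, not the thesis implementation.

```python
# Sketch of the device-manager routing in Figure 5.28. A proxy maps primitive
# names to driver-specific calls; the manager routes requests to a match.
class DeviceProxy:
    def __init__(self, name, primitives):
        self.name = name
        self.primitives = primitives  # primitive name -> driver-specific call

    def execute(self, primitive, params):
        return self.primitives[primitive](params)

class DeviceManager:
    def __init__(self):
        self.proxies = []

    def register(self, proxy):
        # Called when a module is plugged in ("hot" plug and produce).
        self.proxies.append(proxy)

    def request(self, primitive, params=None):
        # Designate the request to the first device providing the primitive.
        for proxy in self.proxies:
            if primitive in proxy.primitives:
                return proxy.execute(primitive, params or {})
        raise LookupError("no device provides primitive %r" % primitive)
```

Because the task control only issues primitive names, registering or removing a proxy changes the available devices without any change to the task control itself.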

5.3.2 Primitives

On the conceptual level, primitives represent general, abstract functionality and provide a standardization of the interface between the task control and the device layer. Contrary to skills, as described in Section 4.1, there is no explicit implementation or template for primitives. In implementation, a primitive is a well-defined structure of information in which the primitive name is a simple piece of information. When matching a primitive request with the available primitives in the device layer, the device manager considers only this information and not its semantic meaning. Hereby, the implementation of the primitive syntax puts no restriction on the action performed by each primitive, nor on its level of abstraction. Consequently, there is no finite list of primitives, as a new skill might request a currently un-encountered primitive, and a new device might provide another currently un-encountered primitive. In both cases, no changes are necessary to the hardware management framework, and no central database of known primitives needs to be updated. It is worth mentioning that this approach allows primitives to be defined from a conceptual perspective with common, general functionality in mind, while the implementation allows very specific primitives to still be utilized through the primitive interface.


Parameters

The syntax of primitives allows the inclusion of parameters. A parameter can be specified as required from either the device level or the task control level. For example, a device providing a moveJoint primitive naturally requires the joint-angle values; hence, this is a required parameter. Other primitives, like grasp, require no parameters. However, the task control can amend the grasp with e.g. a force parameter, which constrains the primitive to devices offering a force-controlled grasp. Furthermore, parameters specified from the task control can be marked as optional, in which case the parameters are only obeyed if possible; hence, these provide "loose" constraints. In summary, the proposed implementation of primitives with both optional and required constraints provides a common, standardized interface between the task control and the device layer, albeit with sufficient flexibility in implementation to support efficient use of advanced or specialized device functionality. Although there is no finite set of primitives, Table 5.2 lists some of the most common, generic primitives.
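The matching of primitive requests against devices under required ("hard") and optional ("loose") parameter constraints can be sketched as below; the data layout and device names are illustrative assumptions.

```python
# Sketch of primitive matching with required and optional parameters. A device
# matches if it accepts every required parameter of the request; optional
# parameters are obeyed only when supported ("loose" constraints).
DEVICES = {
    "simple_gripper": {"grasp": set()},        # plain grasp, no parameters
    "force_gripper":  {"grasp": {"force"}},    # force-controlled grasp
}

def matching_devices(primitive, required=(), optional=()):
    """Devices offering `primitive` and accepting all required parameters."""
    matches = []
    for device, prims in DEVICES.items():
        if primitive not in prims:
            continue
        if set(required) <= prims[primitive]:          # hard constraints
            obeyed = set(optional) & prims[primitive]  # loose constraints
            matches.append((device, obeyed))
    return matches
```

A bare `grasp` request matches both grippers, whereas requiring a `force` parameter narrows the match to the force-controlled one, mirroring the constraint behavior described above.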

5.3.3 Feasibility Study

The feasibility of the proposed framework has been demonstrated in an experiment. Two robot arm modules and two gripper modules were used to obtain four different configurations. Initially in the experiment, a pick-and-place task was instructed using the first configuration. During the experiment, the task control was neither reconfigured nor shut down. The task execution was simply paused, the hardware reconfigured, and the task execution resumed. In conclusion, the feasibility study has demonstrated the "hot" plug and produce capability of the system.



Module type           Primitive            Parameters
Robot arm             moveLinear           position
                      moveJoint            pose
                      setTCP               transform
                      getCartPosition      -
                      getJointPose         -
                      kinestheticTeaching  -
                      setCartImpedance     impedance
Gripper               grasp                -
                      release              -
                      moveFingers          width
                      getWidth             -
Camera                captureImage         -
Force/torque sensor   measureCartForce     -
                      measureCartWrench    -
Tool changer          attach               -
                      detach               -
Stud welding tool     weld                 -

Table 5.2: List exemplifying primitives for selected module types. The table only includes parameters required by the implementing module. Primitives marked in blue are defined as necessary primitives for the particular module type.



5.4 Conclusion

This chapter has presented research on hardware reconfiguration for industrial collaborative robots. A roadmap of the key challenges and objectives in realizing shop floor hardware reconfiguration of industrial collaborative robots is summarized based on [Paper 7 | Schou and Madsen, 2016b]. The central objectives are the design of a modular architecture, tools for selecting a feasible set of modules, enabling quick exchange of modules, and allowing the task control system to efficiently utilize any given module. A modular architecture for AIMMs is adopted from previous research at AAU. As part of this PhD project, an architecture for stationary, collaborative robots has been developed in connection with the CARMEN project. Over the course of this PhD project, a number of modules have been instantiated and built based on the two architectures.

To aid the operator in selecting a feasible configuration for a given task, two proof-of-concept configurator tools have been presented in this chapter. The knowledge to support these configurators has been modeled as ontologies in OWL2 format. Separate ontologies have been created for the product, process, and equipment domains, and these are brought together in the configurators. One of the configurators is designated for the configuration of manipulation-centric equipment. This configurator is implemented as a standalone tool and interacts with the user through a GUI. The second configurator is specifically for configuring the layout of a CARMEN cell. This configurator has been integrated into SBS, the skill-based programming tool presented in Chapter 4, and provides a direct integration with the robot programming. Both configurators are considered proof-of-concept implementations which require further maturation and extension of their configuration scope. Furthermore, an assessment of the usability and required training level of the configurators is needed.
On the objective of module exchange, a hardware management system has been proposed, implemented, and demonstrated, as presented in [Paper 8 | Schou and Madsen, 2016a] and summarized in Section 5.3. The framework incorporates an agent-based architecture using ROS as the communication middle-layer, and it allows ROS packages from other developers to be adapted into agents and used in the framework. Online exchange of modules following the concept of "hot" plug and produce is facilitated through the framework. As part of the hardware management framework, the concept of primitives has been introduced. Primitives are abstract, device-level functions serving as a standardized function interface between the task control and the device layer. To support the use of specialized device functionality, a flexible method of amending parameters and constraints to primitives has been proposed and is embedded in the hardware management framework.



5.4.1 Evaluation of Research Objectives

This chapter has presented research related to Main Objective 2 - Hardware Reconfiguration. This section evaluates each of the three research objectives in main objective 2 based on the research presented in this chapter.

Research objective 2.1 - Investigate and develop a configurator tool aiding the selection of a feasible hardware solution for a given task

Two configurator tools have been developed in this PhD project: a manipulation equipment configurator and a CARMEN platform configurator. Each configurator provides configuration within its specific application area. The manipulation equipment configurator demonstrates how product and process knowledge can be exploited to determine a list of feasible equipment module candidates. The CARMEN configurator incorporates a visual overview of the resulting cell layout and integrates the configuration directly into the subsequent skill-based programming in SBS. Despite being regarded as proof-of-concept implementations, each of the configurators is extendable as the underlying knowledge is expanded. A preliminary, qualitative feasibility study of each configurator has indicated that configurator technology can be used to aid the selection of hardware modules, albeit further research on extending and maturing the configurators is needed. Equipment, process, and product knowledge to support the configurators has been modeled as a number of ontologies. In terms of equipment, a manipulation ontology containing knowledge on manipulation-centric equipment and a CARMEN ontology containing knowledge on the CARMEN equipment have been created.

Research objective 2.2 - Investigate and develop a framework supporting exchange of active hardware modules following a plug and produce philosophy

A hardware management framework has been proposed which allows quick, online exchange of active hardware modules following the "hot" plug and produce concept.
The framework is built on an agent-based architecture and uses ROS as the communication middle-layer. Central to the framework is a device manager which manages the system configuration and facilitates the plugging and unplugging of modules. The framework includes a pre-compiled library expediting the adaptation of ROS packages into agents usable in the framework.



Research objective 2.3 - Investigate and develop a control scheme ensuring an efficient utilization of the module functionality

The concept of primitives is introduced to provide generic, device-level functions abstracted from the specific implementation on each device. The primitives are used as a standardized interaction protocol between the task control layer and the device control layer. Hereby, the task control is made independent of specific hardware syntax, and thus module exchange can be performed without re-instantiation of the task control. A flexible approach of amending constraints to the primitives ensures that advanced or specialized device functionality can be utilized through the primitive generalization. This further allows the task control to request specific devices rather than leaving the device manager to choose the best suited. The device manager, in cooperation with a set of device proxies, translates between the syntax of primitives and the syntax of a given specific device.

110

Chapter 6

Conclusion

This chapter draws together the main contributions of this PhD thesis by summarizing the research on each of the objectives defined in Chapter 3. Based on the contributions, final concluding remarks are presented along with remarks on future work.

6.1 Summary of Contributions

In this section, each of the research objectives defined in Chapter 3 is reviewed and the related contribution of this PhD project is summarized.

Main Objective 1 - Skill-Based Robot Programming

1.1 Investigate how robot skills can be sequenced and parameterized manually using kinesthetic teaching: This PhD thesis has proposed a method for manual, task-level programming based on robot skills, as described in [Paper 2 | Schou et al., 2016]. The method, called skill-based programming, introduces a teaching procedure as an integral part of each skill which controls the manual parameterization of that particular skill. The manual parameterization is defined as parallel to the execution; hence, the programming effectuates the same order of state changes as the execution. The proposed skill-based programming exploits the benefits of kinesthetic teaching to intuitively instruct online parameters and combines it with an offline specification during which skills are sequenced and partly parameterized in a graphical user interface (GUI).

1.2 Develop a task-level programming tool using the manual sequencing and parameterization of skills: Schou et al. [2016, Paper 2] describe the design and implementation of a holistic robot operating tool called Skill Based System (SBS). SBS embeds the skill-based programming method proposed in objective 1.1 and provides the necessary task programming and task execution engines to allow shop floor operators to intuitively and quickly program and execute tasks on an industrial collaborative robot.

1.3 Investigate how engineering expertise can be encapsulated in robot skills and thus be intuitive to use for non-experts: During this PhD project, 10 skills have been realized and implemented in the skill library available in SBS, giving a total of 16 skills available. Two examples of encapsulation and parameterization of expert knowledge are given in Section 4.4. The first example describes the operation, parameters, and parameterization procedure of a force-controlled peg-in-hole skill. The second example describes the development of an object recognition skill proposed in [Paper 4 | Andersen et al., 2016] and exemplifies how advanced computer vision and machine learning algorithms can be encapsulated in a skill and subsequently used by computer vision novices for skill-based programming.

1.4 Assess the usability of manual, task-level programming and the applicability of the approach in industrial praxis: Schou et al. [2013, Paper 3] describe a user study assessing the usability of the skill-based programming method. Nine participants with robotics experience ranging from complete novice to expert each programmed two industrially relevant handling tasks of varying complexity. A similar study has been carried out as part of the TAPAS project using nine robotic engineers from KUKA Laboratories GmbH. The two user studies showed that even complete novices were able to program industrial robot tasks after an introduction of only 15 minutes. Three experiments in industrial manufacturing settings have been conducted to validate the industrial applicability of the skill-based programming method and SBS as a robot operating system. Two of these experiments are described in [Paper 5 | Madsen et al., 2015] and [Paper 6 | Bøgh et al., 2014], respectively.
In summary of the experiments, SBS and the skill-based programming method have been successfully used to program assembly, logistics, and machine tending tasks and to execute these tasks in running production scenarios.

Main Objective 2 - Hardware Reconfiguration

2.1 Investigate and develop a configurator tool aiding the selection of a feasible hardware solution for a given task: Section 5.2 describes the development of a manipulation equipment configurator and a CARMEN platform configurator, which has been carried out in this PhD project. The two configurators are regarded as proof-of-concept implementations, albeit they are both extendable as the underlying knowledge is expanded. The two configurators demonstrate various user input levels and show how the configuration outcome can be directly integrated into the subsequent skill-based programming. A preliminary, qualitative feasibility study of each configurator has indicated that configurator technology can be used to aid the user in selecting suitable hardware modules. Knowledge on robot equipment, processes, and products has been modeled as ontologies and used as the configuration model supporting the configurators. The knowledge includes general domain knowledge, knowledge on specific entities, and relational knowledge.

2.2 Investigate and develop a framework supporting exchange of active hardware modules following a plug & produce philosophy: A hardware management framework enabling quick, online exchange of active hardware modules following the "hot" plug and produce concept is described in [Paper 8 | Schou and Madsen, 2016a]. The framework is built on an agent-based architecture with a central device manager which manages the system configuration and facilitates the plugging and unplugging of modules. Communication between agents is done using the Robot Operating System (ROS) as middle-layer. The hardware management framework includes a pre-compiled library expediting the adaptation of any ROS package into an agent usable in the framework.

2.3 Investigate and develop a control scheme ensuring an efficient utilization of the module functionality: Schou and Madsen [2016a, Paper 8] propose the concept of primitives as a standardized interaction protocol between the task control and the device manager. Primitives are defined as generic, device-level functions abstract from the specific implementation on each device, which makes the task control independent of specific hardware syntax. Consequently, the exchange of modules can be done without reinstantiation of the task control.
Schou and Madsen [2016a, Paper 8] also present a flexible approach of amending the primitives with constraints which allows advanced or specialized device functionality to be utilized through the primitive syntax. This ensures a high usability of the particular functionality provided by each module.
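The two-phase parameterization at the heart of Main Objective 1, an offline specification in a GUI followed by online kinesthetic teaching, can be sketched as follows. The `Skill` class, the parameter names, and the pose tuples are hypothetical and serve only to illustrate the division between the two phases.

```python
# Sketch of the two-phase skill parameterization: offline "specification"
# (sequencing and GUI parameters) followed by online "teaching" (parameters
# demonstrated kinesthetically on the robot arm).
# Skill and parameter names are illustrative, not the SBS implementation.

class Skill:
    def __init__(self, name, offline_params=(), online_params=()):
        self.name = name
        self.offline_params = list(offline_params)  # set in the GUI
        self.online_params = list(online_params)    # set by kinesthetic teaching
        self.params = {}

    def specify(self, **values):
        """Offline phase: the operator fills in GUI parameters."""
        for p in self.offline_params:
            self.params[p] = values[p]

    def teach(self, **values):
        """Online phase: the operator demonstrates poses on the robot arm."""
        for p in self.online_params:
            self.params[p] = values[p]

    def ready(self):
        """A skill is executable once every parameter has been provided."""
        return all(p in self.params
                   for p in self.offline_params + self.online_params)

# A task is a sequence of such skills, parameterized in execution order.
pick = Skill("pick",
             offline_params=["grasp_width_mm"],
             online_params=["approach_pose", "grasp_pose"])
pick.specify(grasp_width_mm=45)
pick.teach(approach_pose=(0.40, 0.10, 0.30), grasp_pose=(0.40, 0.10, 0.12))
```

Because the teaching procedure is part of the skill itself, the skill can prompt the operator for exactly the online parameters it needs, in the order they occur during execution.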

6.2 Concluding Remarks

The motivation for this PhD project has been to investigate intuitive, manual methods and tools for reconfiguring collaborative robots, thereby enabling shop floor personnel to transition the robot to new tasks. By keeping the human in the loop, the process insight and experience of the shop floor operator can be preserved as the task is transferred to a robot assistant. Based on this motivation, the research of this PhD project has focused on two main objectives: intuitive, manual robot programming, and quick and easy reconfiguration of hardware.

In the research on robot programming, this PhD thesis has proposed a manual, task-level programming method and created a robot operating tool called SBS implementing this method. The method uses skills as control modules representing capabilities of the robot from which tasks are created. As such, this PhD thesis extends previous research on skills from Aalborg University with the manual parameterization method and the SBS robot operating tool. Over the course of this PhD project, eight generic skills and two specific skills have been created, encapsulating various expert knowledge. Two user studies have shown that the proposed programming method enables both robot experts and robot novices to program industrial tasks without the need for extensive training. Furthermore, three experiments deploying autonomous industrial mobile manipulators equipped with SBS in real industrial production settings have demonstrated the industrial applicability of the proposed skill-based programming and operating approach. Thus, the skill-based programming method for collaborative robots is concluded to be at technology readiness level (TRL) 7.

The research on hardware reconfiguration in this PhD project has focused on two distinct aspects: selecting feasible modules for a given task, and exchanging modules quickly and easily. This PhD thesis has suggested using configurator tools to aid the shop floor operator in selecting a feasible set of modules for a given task. Two such configurators have been developed, and a number of ontologies containing supporting knowledge have been created. The state of the configurators is concluded to be at TRL 3. To further raise the TRL of the configurators, their usability for non-experts and their applicability in industrial settings should be investigated.
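The core idea behind the configurators, filtering a knowledge base of equipment modules against requirements derived from product and process knowledge, can be illustrated with a small sketch. All module data, attribute names, and thresholds below are hypothetical and are not taken from the thesis implementations.

```python
# Illustrative sketch of constraint-based equipment configuration: feasible
# modules are those whose attributes satisfy every task requirement.
# Module data and attribute names are invented for illustration.

MODULE_KNOWLEDGE_BASE = [
    {"name": "parallel_gripper_A", "type": "gripper",
     "payload_kg": 2.0, "min_width_mm": 10, "max_width_mm": 60},
    {"name": "parallel_gripper_B", "type": "gripper",
     "payload_kg": 5.0, "min_width_mm": 40, "max_width_mm": 120},
    {"name": "vacuum_gripper_C", "type": "gripper",
     "payload_kg": 1.0, "flat_surface_required": True},
]

def configure(requirements, knowledge_base=MODULE_KNOWLEDGE_BASE):
    """Return the names of all modules satisfying every task requirement."""
    def satisfies(module, req):
        key, predicate = req
        return key in module and predicate(module[key])
    return [m["name"] for m in knowledge_base
            if all(satisfies(m, r) for r in requirements)]

# Hypothetical task: grasp a 50 mm wide part weighing 1.5 kg.
requirements = [
    ("payload_kg",   lambda v: v >= 1.5),
    ("min_width_mm", lambda v: v <= 50),
    ("max_width_mm", lambda v: v >= 50),
]
candidates = configure(requirements)
```

A real configurator would draw the requirements from the product and process ontologies rather than hard-coded predicates, but the filtering principle is the same.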
This PhD project has proposed, implemented, and demonstrated an agent-based hardware management framework enabling a plug and produce approach to the exchange of hardware modules. As part of the hardware management framework, this PhD thesis has proposed the concept of primitives, which are defined as generic device functions abstract from specific device syntax. Hereby, the primitives provide a generalized interaction between the task control and the specific hardware modules. Consequently, a clear separation between the task control and the device layer is obtained, which makes the task control independent of specific hardware communication syntax. A study has indicated the feasibility of the proposed hardware management framework and the proposed primitives in a laboratory environment. Thus, the TRL is concluded to be 4. Further assessment of the feasibility of the framework including additional module types is recommended. Finally, experiments investigating the applicability of the framework in industrial praxis are needed to further increase the TRL.



The hypothesis on which this PhD project is based reads:

Reconfiguration of collaborative robots can be carried out by production personnel without robotics, mechanical, or programming expertise by deploying modularity in both hardware and programming and providing sufficiently intuitive configuration tools.

Based on the findings presented in this thesis, the hypothesis cannot be rejected. Hence, the studies of this PhD project support the notion that, with modularity and sufficiently intuitive tools, collaborative industrial robots can be reconfigured by production personnel.

6.3 Future Work

Based on the summarized contributions of this PhD project, several directions for future research have been identified and are outlined in this section.

Extensions of Skill-Based Programming

In its current state, SBS only allows linear sequences of skills to be created. Despite the successful deployment of SBS in the industrial experiments, it has become clear that non-linear sequences allow for greater flexibility in the robot programming, e.g. by allowing a conditional task flow which is able to automatically cope with error scenarios. Future work should focus on extending SBS to allow non-linear skill sequences to be programmed.

The current skill concept assumes static states between skills, which makes it a viable approach for discrete activities such as assembly, logistics, and machine tending. Another potential extension of the skill-based programming is within continuous processes such as polishing, painting, and welding. Future work should investigate how the skill concept can be applied to continuous processes and how manual parameterization can be carried out in such tasks.

Development of new skills is considered an engineering task, as it requires expert process knowledge. This is intentional, as the encapsulation of expert engineering knowledge is desired. However, it currently also requires software programming expertise, as skills are developed as classes in C++. At the same time, the primitives used, the kinesthetic teaching functionality, and the graphical menus for specification are already reusable routines in SBS. Thus, skill development is to some extent an aggregation of these well-defined functionalities combined with custom algorithms. Future work should investigate whether skill development can be formalized and encapsulated in a graphical tool to reduce the need for software programming expertise and hereby make skill development available to a broader group of technical personnel.
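The non-linear, conditional task flow suggested above could, for instance, be represented as a branching graph in which each skill's outcome selects its successor. This is a speculative sketch of that future-work idea, not an existing SBS feature; the flow structure and outcome labels are invented.

```python
# Speculative sketch of a conditional (non-linear) skill sequence: each node
# maps observed skill outcomes to successor skills, so error scenarios can be
# handled automatically, e.g. by a recovery skill.
# Not an SBS feature; all names are hypothetical.

def run_task(flow, start, execute):
    """Walk a branching skill graph until no successor is defined."""
    node = start
    trace = []
    while node is not None:
        outcome = execute(node)         # run the skill, observe its outcome
        trace.append((node, outcome))
        node = flow[node].get(outcome)  # choose the next skill from the outcome
    return trace

# pick -> place on success; on failure, run a recovery skill and retry pick.
flow = {
    "pick":    {"success": "place", "failure": "recover"},
    "recover": {"success": "pick"},
    "place":   {"success": None},
}
# Simulated execution: the first pick fails, everything after succeeds.
outcomes = iter(["failure", "success", "success", "success"])
trace = run_task(flow, "pick", lambda skill: next(outcomes))
```

In a linear sequence the first failed pick would abort the task; with the branching graph the recovery skill is triggered and the task completes.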



Extend Modularity

The modular architectures used in this PhD study only consider modules on a single level. Future research should focus on extending the architectures to consider modular construction of modules; that is, the modules themselves might also be reconfigurable.

Mature Configurators

The research on hardware reconfiguration presented in this PhD thesis has proposed two proof-of-concept implementations of configurators. Future research should focus on maturing these configurators and validating their usability through user studies and industrial applications. The potential of merging the two configurators into a single tool should be explored. In this PhD project, the knowledge serving as a foundation for the configurators is maintained through a third-party tool, and only limited knowledge is transferred from the configurators back into the ontologies. Future work should focus on continuously gathering knowledge and expanding the ontologies from operator inputs, statistics, and learning.

Formalize Module Exchange Process

The hardware management framework enables quick exchange of hardware modules without reinstantiation of the robot control system. However, it presumes that the physical exchange of hardware modules is conducted correctly, which might not always be the case. Future work should investigate how the operator can be guided through the physical hardware exchange and how the result of the exchange can be verified against the desired configuration.

Physical Adaptation of Commercial Components

Based on two modular hardware architectures, a number of hardware modules have been embodied using commercial off-the-shelf (COTS) devices. The embodiment has adapted the mechanical interfaces of these COTS devices into the standardized interface defined for modules.
Future research should, however, focus on a more in-depth analysis and design of standardized mechanical, electrical, pneumatic, and signal interfaces for modules.

Combination of Hardware Reconfiguration and Robot Programming

In this PhD project, hardware reconfiguration and robot programming are regarded as two separate objectives, see Chapter 3. However, over the course of this PhD project, it has become clear that a correlation between the two objectives exists. Consequently, future work should investigate this correlation and determine whether robot programming and hardware reconfiguration can be combined into a single reconfiguration process.

Glossary

Cobot: Holistic collaborative robot system including robot arm, tool, sensors etc., in contrast to a collaborative robot arm.

Collaborative robot arm: Traditional industrial robot arm offering collaborative capabilities, but without tool, sensors etc.

Configuration: Ambiguous term denoting both the task of configuring an entity and the resulting solution. A thorough description of the term configuration is given in Section 2.3.1.

Configurator: A software tool aiding the user in determining a configuration solution.

Device: An active component/equipment.

Little Helper: A family of autonomous industrial mobile manipulators (AIMM) from Aalborg University.

Module: A software or hardware entity with well-defined functionality and interfaces. It can be either conceptual or specific.

Peg-in-hole: Insertion of a cylindrical object into a cylindrical hole.

Plug and produce: Plugging and unplugging of components in a manufacturing system with little to no effort for the user. Hot plug and produce entails plugging and unplugging components without shutting down or reinitiating the system.

Product: An entity produced by a manufacturing company. Covers both finished goods and sub-products (components and assemblies) used in the manufacturing process.

Reconfiguration: Configuration of an already configured system; hence, changing the current configuration of a system.

Robot assistant: Collaborative robot working alongside the human worker. Robot assistants are instructed by the operator, but do not engage in direct physical collaboration.

Robot programming: Configuring the robot control software to perform a given task. This is done through the user interface of the particular robot and can be graphical and intuitive or require software programming competences, depending on the particular interface.

Rotor: Rotating part of an electrical motor.

Shop floor operator: Human worker performing manual production tasks in the production environment. Shop floor operators are not expected to have any robot training.

Skill-based programming: Robot programming using manual parameterization of skills.

Specification: (In skill-based programming) Sequencing skills and setting offline parameters as part of skill-based programming.

Task-level programming: Robot programming using task-related building blocks.

Teaching: (In skill-based programming) Online parameterization of skills as part of skill-based programming.

True collaborative robot: Collaborative robot which engages in direct physical collaboration with the human operator in order to solve a common task.

Acronyms

AAU: Aalborg University
AIMM: Autonomous industrial mobile manipulator
CMS: Changeable manufacturing system
CORA: Core ontology for robotics and automation
COTS: Commercial off-the-shelf
DL: Description logic
EAS: Evolvable assembly system
EPS: Evolvable production system
FMS: Flexible manufacturing system
GUI: Graphical user interface
HMS: Holonic manufacturing system
HRI: Human-robot interaction / human-robot interface
MAR: Multi-annual roadmap (for robotics) [SPARC, 2015]
MRS: Modular robotic system
OWL: Web Ontology Language
RAS: Reconfigurable assembly system
RMS: Reconfigurable manufacturing system
ROS: Robot Operating System [ROS Wiki, 2016]
SBS: Skill Based System
SkiROS: Skill-based Robot Operating System
SLC: Small load carrier (box)
SME: Small or medium-sized enterprise
SWRL: Semantic web rule language
TCP: Tool center point
TRL: Technology readiness level

References

Abele, E., Wörn, A., Fleischer, J., Wieser, J., Martin, P., and Klöpper, R. (2007). Mechanical module interfaces for reconfigurable machine tools. Production Engineering, 1(4):421–428.

ACat (2014). Learning and execution of action categories - ACat project website. EU project funded under the European Community's Seventh Framework Programme, grant no. 600578. Web page: http://www.acat-project.eu/.

Agrawal, M., Konolige, K., and Blas, M. R. (2008). CenSurE: Center surround extremas for realtime feature detection and matching. In Computer Vision–ECCV 2008, pages 102–115. Springer.

Akgun, B., Cakmak, M., Yoo, J. W., and Thomaz, A. L. (2012). Trajectories and keyframes for kinesthetic teaching: A human-robot interaction perspective. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 391–398. ACM.

Allemang, D. and Hendler, J. (2011). Semantic Web for the Working Ontologist: Effective Modeling in RDFS and OWL. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2nd edition.

Alsafi, Y. and Vyatkin, V. (2010). Ontology-based reconfiguration agent for intelligent mechatronic systems in flexible manufacturing. Robotics and Computer-Integrated Manufacturing, 26(4):381–391.

Andersen, R., Schou, C., Damgaard, J., and Madsen, O. (2016). Using a flexible skill-based approach to recognize objects in industrial scenarios. In Accepted for presentation at: The 47th International Symposium on Robotics (ISR 2016), Munich, Germany.

Andersen, R. H., Dalgaard, L., Beck, A. B., and Hallam, J. (2015). An architecture for efficient reuse in flexible production scenarios. In Automation Science and Engineering (CASE), 2015 IEEE International Conference on, pages 151–157.

Antzoulatos, N., Castro, E., Scrimieri, D., and Ratchev, S. (2014). A multi-agent architecture for plug and produce on an industrial assembly platform. Production Engineering, 8(6):773–781.

Arai, T., Aiyama, Y., Maeda, Y., Sugi, M., and Ota, J. (2000). Agile assembly system by "plug and produce". CIRP Annals - Manufacturing Technology, 49(1):1–4.

Archibald, C. and Petriu, E. (1993). Model for skills-oriented robot programming (SKORP). Pages 392–402.

AutomationML Consortium (2016). Whitepaper AutomationML Part 1 - Architecture and general requirements. Web page: http://www.aston.ac.uk/eas/research/groups/ncrg/resources/netlab/.


Beckett, D. (2015). Redland RDF Libraries. Web page: https://librdf.org. Version 1.0.17.

Benhabib, B. and Dai, M. (1991). Mechanical design of a modular robot for industrial applications. Journal of Manufacturing Systems, 10(4):297–306.

Bi, Z., Lin, Y., and Zhang, W. (2010). The general architecture of adaptive robotic systems for manufacturing applications. Robotics and Computer-Integrated Manufacturing, 26(5):461–470.

Bi, Z. M., Lang, S. Y. T., Shen, W., and Wang, L. (2008a). Reconfigurable manufacturing systems: the state of the art. International Journal of Production Research, 46(4):967–992.

Bi, Z. M., Lang, S. Y. T., Verner, M., and Orban, P. (2008b). Development of reconfigurable machines. The International Journal of Advanced Manufacturing Technology, 39(11):1227–1251.

Biggs, G. and MacDonald, B. (2003). A survey of robot programming systems. In Proceedings of the Australasian Conference on Robotics and Automation, pages 1–3.

Björkelund, A., Edström, L., Haage, M., Malec, J., Nilsson, K., Nugues, P., Robertz, S. G., Störkle, D., Blomdell, A., Johansson, R., Linderoth, M., Nilsson, A., Robertsson, A., Stolt, A., and Bruyninckx, H. (2011a). On the integration of skilled robot motions for productivity in manufacturing. In Assembly and Manufacturing (ISAM), 2011 IEEE International Symposium on, pages 1–9.

Björkelund, A., Malec, J., Nilsson, K., and Nugues, P. (2011b). Knowledge and skill representations for robotized production. IFAC Proceedings Volumes, 44(1):8999–9004. 18th IFAC World Congress.

Blecker, T., Abdelkafi, N., Kreutler, G., and Friedrich, G. (2004). Product configuration systems: State of the art, conceptualization and extensions. In Génie logiciel & Intelligence artificielle. Eighth Maghrebian Conference on Software Engineering and Artificial Intelligence (MCSEAI 2004), pages 25–36.

Bogue, R. (2016). Europe continues to lead the way in the collaborative robot business. Industrial Robot: An International Journal, 43(1):6–11.

Bonasso, R. P. (1991). Integrating reaction plans and layered competences through synchronous control. In Proceedings of the 12th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI'91, pages 1225–1231, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Brooks, R. (1986). A robust layered control system for a mobile robot. IEEE Journal on Robotics and Automation, 2(1):14–23.

Brussel, H. V., Germany, H., Hendrik, P., and Brussel, V. (1994). Holonic manufacturing systems, the vision matching the problem. In First European Conference on Holonic Manufacturing Systems.


Brussel, H. V., Wyns, J., Valckenaers, P., Bongaerts, L., and Peeters, P. (1998). Reference architecture for holonic manufacturing systems: PROSA. Computers in Industry, 37(3):255–274.

Bøgh, S. (2012). Autonomous Industrial Mobile Manipulation (AIMM) - maturation, exploitation and implementation. Identifying Skills for AIMM Robots. AAU - PhD thesis.

Bøgh, S., Hvilshøj, M., Kristiansen, M., and Madsen, O. (2012a). Identifying and evaluating suitable tasks for autonomous industrial mobile manipulators (AIMM). The International Journal of Advanced Manufacturing Technology, 61(5-8):713–726.

Bøgh, S., Nielsen, O., Pedersen, M., Krüger, V., and Madsen, O. (2012b). Does your robot have skills? In Proceedings of the 43rd International Symposium of Robotics (ISR).

Bøgh, S., Schou, C., Rühr, T., Kogan, Y., Dömel, A., Brucker, M., Eberst, C., Tornese, R., Sprunk, C., Tipaldi, G. D., and Vestergaard Hennessy, T. (2014). Integration and assessment of multiple mobile manipulators in a real-world industrial production facility. In Proceedings for the joint conference of ISR 2014, 45th International Symposium on Robotics, and Robotik 2014, 8th German Conference on Robotics, pages 305–312. VDE Verlag GmbH.

Campagna, D. and Formisano, A. (2013). Product and production process modeling and configuration. Fundamenta Informaticae, 124(4):403–425.

Carbonari, L., Callegari, M., Palmieri, G., and Palpacelli, M.-C. (2014). A new class of reconfigurable parallel kinematic machines. Mechanism and Machine Theory, 79:173–183.

CARLoS (2014). Cooperative robot for large spaces manufacturing. EU project funded under the European Community's Seventh Framework Programme, grant no. 606363. Web page: http://carlosproject.eu/.

CARMEN (2013). Center for avanceret robotbaseret automation (Center for Advanced Robot-based Automation). National Danish research project funded by Innovation Fund Denmark. Web page: http://innovationsfonden.dk/en/node/609.

Chandrasekaran, B., Josephson, J. R., and Benjamins, V. R. (1999). What are ontologies, and why do we need them? IEEE Intelligent Systems, 14(1):20–26.

Charalambous, G., Fletcher, S., and Webb, P. (2015). Identifying the key organisational human factors for introducing human-robot collaboration in industry: an exploratory study. The International Journal of Advanced Manufacturing Technology, 81(9):2143–2155.

Cohen, R., Lipton, M., Dai, M., and Benhabib, B. (1992). Conceptual design of a modular robot. Journal of Mechanical Design, 114(1):117–125.


Colace, F., De Santo, M., and Napoletano, P. (2009). Product configurator: an ontological approach. In Intelligent Systems Design and Applications, 2009. ISDA'09. Ninth International Conference on, pages 908–912. IEEE.

Colgate, E. J., Peshkin, M. A., and Wannasuphoprasit, W. (1996). Cobots: Robots for collaboration with human operators.

Collins, K., Palmer, A. J., and Rathmill, K. (1985). Robot technology and applications: Proceedings of the 1st Robotics Europe Conference, Brussels, June 27–28, 1984. Pages 187–199, Berlin, Heidelberg. Springer Berlin Heidelberg.

Colombo, A. W., Jammes, F., Smit, H., Harrison, R., Lastra, J. L. M., and Delamer, I. M. (2005). Service-oriented architectures for collaborative automation. In 31st Annual Conference of IEEE Industrial Electronics Society, 2005. IECON 2005., pages 617–624.

Connell, J. H. (1992). SSS: a hybrid architecture applied to robot navigation. In Robotics and Automation, 1992. Proceedings., 1992 IEEE International Conference on, volume 3, pages 2719–2724.

Deshayes, L., Foufou, S., and Gruninger, M. (2007). An Ontology Architecture for Standards Integration and Conformance in Manufacturing, pages 261–276. Springer Netherlands, Dordrecht.

ElMaraghy, H., Schuh, G., ElMaraghy, W., Piller, F., Schönsleben, P., Tseng, M., and Bernard, A. (2013). Product variety management. CIRP Annals - Manufacturing Technology, 62(2):629–652.

ElMaraghy, H. A. (2006). Flexible and reconfigurable manufacturing systems paradigms. International Journal of Flexible Manufacturing Systems, 17(4):261–276.

Estrem, W. A. (2003). An evaluation framework for deploying web services in the next generation manufacturing enterprise. Robotics and Computer-Integrated Manufacturing, 19(6):509–519. Leadership of the Future in Manufacturing.

EUPASS (2004). Evolvable Ultra-Precision Assembly Systems. EU project funded under the European Community's Sixth Framework Programme.

Felfernig, A., Friedrich, G., and Jannach, D. (2001). Conceptual modeling for configuration of mass-customizable products. Artificial Intelligence in Engineering, 15(2):165–176.

Ferreira, P., Doltsinis, S., and Lohse, N. (2014). Variety management in manufacturing symbiotic assembly systems – a new paradigm. Procedia CIRP, 17:26–31.

Ferreira, P. and Lohse, N. (2012). Configuration model for evolvable assembly systems. In 4th CIRP Conference on Assembly Technologies and Systems.


Ferreira, P., Lohse, N., and Ratchev, S. (2010). Precision Assembly Technologies and Systems: 5th IFIP WG 5.5 International Precision Assembly Seminar, IPAS 2010, Chamonix, France, February 14-17, 2010. Proceedings, chapter Multi-agent Architecture for Reconfiguration of Precision Modular Assembly Systems, pages 247–254. Springer Berlin Heidelberg, Berlin, Heidelberg.

Ferreira, P., Lohse, N., Razgon, M., Larizza, P., and Triggiani, G. (2012). Skill based configuration methodology for evolvable mechatronic systems. In IECON 2012 - 38th Annual Conference on IEEE Industrial Electronics Society, pages 4366–4371.

Fikes, R. E. and Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3–4):189–208.

Gat, E. (1998). On three-layer architectures. Artificial Intelligence and Mobile Robots, pages 195–210.

Gomez-Perez, A., Fernández-López, M., and Corcho, O. (2006). Ontological Engineering: with Examples from the Areas of Knowledge Management, e-Commerce and the Semantic Web. Springer Science & Business Media.

Gruber, T. R. (1995). Toward principles for the design of ontologies used for knowledge sharing. International Journal of Human-Computer Studies, 43(5):907–928.

Guerin, K. R., Lea, C., Paxton, C., and Hager, G. D. (2015). A framework for end-user instruction of a robot assistant for manufacturing. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 6167–6174.

Hadar, R. and Bilberg, A. (2012). Manufacturing Concepts of the Future – Upcoming Technologies Solving Upcoming Challenges, pages 123–128. Springer Berlin Heidelberg.

Hai, R., Theißen, M., and Marquardt, W. (2011). An ontology based approach for operational process modeling. Advanced Engineering Informatics, 25(4):748–759. Special Section: Advances and Challenges in Computing in Civil and Building Engineering.

Headquarters for Japan's Economic Revitalization (2015). New robot strategy - Japan's robot strategy: vision, strategy, action plan. Web page: http://www.meti.go.jp/english/press/2015/pdf/0123_01b.pdf.

Hepp, M., De Leenheer, P., de Moor, A., and Sure, Y. (2008). Ontology Management: Semantic Web, Semantic Web Services, and Business Applications, volume 7. Springer Science & Business Media.

Herrera, V. V., Bepperling, A., Lobov, A., Smit, H., Colombo, A. W., and Lastra, J. L. M. (2008). Integration of multi-agent systems and service-oriented architecture for industrial automation. In 2008 6th IEEE International Conference on Industrial Informatics, pages 768–773.


Holz, D., Topalidou-Kyniazopoulou, A., Rovida, F., Pedersen, M. R., Krüger, V., and Behnke, S. (2015). A skill-based system for object perception and manipulation for automating kitting tasks. In 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), pages 1–9.

Horizon2020 (2014). Technology readiness levels (TRL), Annex G from Horizon 2020 work programme.

Hotz, L., Felfernig, A., Stumptner, M., Ryabokon, A., Bagley, C., and Wolter, K. (2014). Knowledge-based Configuration: From Research to Business Cases, chapter 6: Configuration knowledge representation and reasoning, pages 41–72. Morgan Kaufmann Publishers.

Hsieh, S. (2003). Re-configurable dual-robot assembly system design, development and future directions. Industrial Robot: An International Journal, 30(3):250–257.

Huckaby, J. and Christensen, H. (2014). Modeling robot assembly tasks in manufacturing using SysML. In ISR/Robotik 2014; 41st International Symposium on Robotics; Proceedings of, pages 1–7.

Hvilshøj, M. (2012). Autonomous Industrial Mobile Manipulation (AIMM) - maturation, exploitation and implementation. Developing modular and (re)configurable AIMM families based on architectures. AAU - PhD thesis.

Hvilshøj, M. and Bøgh, S. (2011). "Little Helper" - An Autonomous Industrial Mobile Manipulator Concept. International Journal of Advanced Robotic Systems, 8(2).

Hvilshøj, M., Bøgh, S., Madsen, O., and Kristiansen, M. (2009). The Mobile Robot "Little Helper": Concepts, ideas and working principles. In Paper presented at IEEE Conference on Emerging Technologies & Factory Automation. IEEE.

IDEAS (2010). Instantly Deployable Evolvable Assembly Systems. EU project funded under the European Community's Seventh Framework Programme.

IEEE Robotics and Automation Society (2015). IEEE standard on ontologies for robotics and automation. IEEE Std 1872-2015, pages 1–60.

Jammes, F., Smit, H., Lastra, J. L. M., and Delamer, I. M. (2005). Orchestration of service-oriented manufacturing processes. In 2005 IEEE Conference on Emerging Technologies and Factory Automation, volume 1, pages 8 pp.–624.

Koren, Y., Heisel, U., Jovane, F., Moriwaki, T., Pritschow, G., Ulsoy, G., and Van Brussel, H. (1999). Reconfigurable manufacturing systems. CIRP Annals - Manufacturing Technology, 48(2):527–540.

Koren, Y. and Shpitalni, M. (2010). Design of reconfigurable manufacturing systems. Journal of Manufacturing Systems, 29(4):130–141.

Kormushev, P., Calinon, S., and Caldwell, D. G. (2011). Imitation learning of positional and force skills demonstrated via kinesthetic teaching and haptic input. Advanced Robotics, 25(5):581–603.
Kramberger, A. (2014). A comparison of learning-by-demonstration methods for force-based robot skills. In Robotics in Alpe-Adria-Danube Region (RAAD), 2014 23rd International Conference on, pages 1–6.

Kruse, C. and Bramham, J. (2003). You choose [product configuration software]. Manufacturing Engineer, 82(4):34–37.

Krüger, J., Lien, T., and Verl, A. (2009). Cooperation of human and machines in assembly lines. CIRP Annals - Manufacturing Technology, 58(2):628–646.

Leitão, P. (2009). Agent-based distributed manufacturing control: A state-of-the-art survey. Engineering Applications of Artificial Intelligence, 22(7):979–991.

Lemaignan, S., Ros, R., Mösenlechner, L., Alami, R., and Beetz, M. (2010). ORO, a knowledge management platform for cognitive architectures in robotics. In Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, pages 3548–3553.

Lewis, J. R. (1991). Psychometric evaluation of an after-scenario questionnaire for computer usability studies: the ASQ. ACM SIGCHI Bulletin, 23(1):78–81.

Lohse, N. (2006). Towards an ontology framework for the integrated design of modular assembly systems. University of Nottingham - PhD Thesis.

Lohse, N., Hirani, H., and Ratchev, S. (2006). Equipment ontology for modular reconfigurable assembly systems. International Journal of Flexible Manufacturing Systems, 17(4):301–314.

Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Computer vision, 1999. The proceedings of the seventh IEEE international conference on, volume 2, pages 1150–1157. IEEE.

Lozano-Perez, T. (1983). Robot programming. Proceedings of the IEEE, 71(7):821–841.

Madsen, O., Bøgh, S., Schou, C., Andersen, R., Damgaard, J., Pedersen, M., and Krüger, V. (2015). Integration of mobile manipulators in an industrial production. Industrial Robot, 42(1):11–18.

Malec, J., Nilsson, A., Nilsson, K., and Nowaczyk, S. (2007). Knowledge-based reconfiguration of automation systems. In 2007 IEEE International Conference on Automation Science and Engineering, pages 170–175.

Marcus, S., Stout, J., and McDermott, J. (1987). VT: An expert elevator designer. AI Magazine, 8(4):41–57.

McDermott, J. (1982). R1: A rule-based configurer of computer systems. Artificial Intelligence, 19(1):39–88.
McKee, G. T., Fryer, J. A., and Schenker, P. S. (2001). Object-oriented concepts for modular robotics systems. In Technology of Object-Oriented Languages and Systems, 2001. TOOLS 39. 39th International Conference and Exhibition on, pages 229–238.

Michalos, G., Makris, S., Spiliotopoulos, J., Misios, I., Tsarouchi, P., and Chryssolouris, G. (2014). Robo-partner: Seamless human-robot cooperation for intelligent, flexible and safe operations in the assembly factories of the future. Procedia CIRP - 5th Conference on Assembly Technologies and Systems (CATS) 2014, 23:71–76.

Monkman, G. J., Hesse, S., Steinmann, R., and Schunk, H. (2007). Robot grippers. John Wiley & Sons.

Monostori, L., Váncza, J., and Kumara, S. (2006). Agent-based systems for manufacturing. CIRP Annals - Manufacturing Technology, 55(2):697–720.

Muszynski, S., Stückler, J., and Behnke, S. (2012). Adjustable autonomy for mobile teleoperation of personal service robots. In 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, pages 933–940.

Muxfeldt, A., Kluth, J.-H., and Kubus, D. (2014). Kinesthetic Teaching in Assembly Operations – A User Study, pages 533–544. Springer International Publishing, Cham.

Naumann, M., Wegener, K., and Schraft, R. (2007). Control architecture for robot cells to enable plug'n'produce. In Robotics and Automation, 2007 IEEE International Conference on, pages 287–292.

Niles, I. and Pease, A. (2001). Towards a standard upper ontology. In Proceedings of the International Conference on Formal Ontology in Information Systems - Volume 2001, FOIS '01, pages 2–9, New York, NY, USA. ACM.

OASIS (2009). Devices profile for web services. Version 1.1, Web page: http://docs.oasis-open.org/ws-dd/dpws/1.1/os/wsdd-dpws-1.1-spec-os.pdf.

Onori, M. (2002). Evolvable assembly systems: A new paradigm? In 33rd International Symposium on Robotics (ISR).

Onori, M. and Barata, J. (2009). Evolvable production systems: Mechatronic production equipment with process-based distributed control. In 9th IFAC Symposium on Robot Control, volume 42, pages 80–85.

Onori, M., Barata, J., and Frei, R. (2006). Evolvable Assembly Systems Basic Principles, pages 317–328. Springer US, Boston, MA.

Onori, M., Lohse, N., Barata, J., and Hanisch, C. (2012). The ideas project: plug & produce at shop-floor level. Assembly Automation, 32(2):124–134.

Owen, T. (1985). Assembly with robots. Prentice Hall, Inc., Old Tappan, NJ.
OWL Working Group, W. (2004). OWL Web Ontology Language Reference. W3C Recommendation. Web page: http://www.w3.org/TR/owl-ref/.

OWL Working Group, W. (2009). OWL 2 Web Ontology Language: Document Overview. W3C Recommendation. Web page: http://www.w3.org/TR/owl2-overview/.

Pedersen, M. and Krüger, V. (2015). Automated planning of industrial logistics on a skill-equipped robot. In Workshop on Task Planning for Intelligent Robots in Service and Manufacturing.

Pedersen, M. R., Nalpantidis, L., Andersen, R. S., Schou, C., Bøgh, S., Krüger, V., and Madsen, O. (2016). Robot skills for manufacturing: From concept to industrial deployment. Robotics and Computer-Integrated Manufacturing, 37:282–291.

Persson, J., Gallois, A., Bjoerkelund, A., Hafdell, L., Haage, M., Malec, J., Nilsson, K., and Nugues, P. (2010). A knowledge integration framework for robotics. In Robotics (ISR), 2010 41st International Symposium on and 2010 6th German Conference on Robotics (ROBOTIK), pages 1–8.

PRIME (2012). Plug and PRoduce Intelligent Multi Agent Environment based on Standard Technology. EU project funded under the European Community's Seventh Framework Programme.

Ramos, L. (2015). Semantic web for manufacturing, trends and open issues: Toward a state of the art. Computers & Industrial Engineering, 90:444–460.

Rampersad, H. K. (1994). Integrated and Simultaneous Design for Robotic Assembly (Product Development: Planning, Designing, Engineering). John Wiley & Sons, Inc., New York, NY, USA.

Ribeiro, L., Barata, J., Onori, M., and Amado, A. (2008). OWL ontology to support evolvable assembly systems. In 9th IFAC Workshop on Intelligent Manufacturing Systems, IFAC Proceedings Volumes, 41(3):290–295.

ROBO-PARTNER (2013). Seamless human-robot cooperation for intelligent, flexible and safe operations in the assembly factories of the future. EU project funded under the European Community's Seventh Framework Programme. Web page: http://robo-partner.eu/.

RoboEarth (2009). RoboEarth: robots sharing a knowledge base for world modelling and learning of actions. EU project funded under the European Community's Seventh Framework Programme.

RoboticsVO (2013). A Roadmap for U.S. Robotics: From Internet to Robotics. Web page: https://books.google.dk/books?id=KPhQngEACAAJ.

Rocha, A., di Orio, G., Barata, J., Antzoulatos, N., Castro, E., Scrimieri, D., Ratchev, S., and Ribeiro, L. (2014). An agent based framework to support plug and produce. In Industrial Informatics (INDIN), 2014 12th IEEE International Conference on, pages 504–510.
Roland Berger (2014). Industry 4.0 - The new industrial revolution: How Europe will succeed. Think Act.

Rooker, M. N., Strasser, T., Pichler, A., Stubl, G., Zoitl, A., and Terzic, I. (2009). Adaptive and reconfigurable control framework for the responsive factory. In 2009 7th IEEE International Conference on Industrial Informatics, pages 831–836.

ROS Wiki (2016). Robot operating system (ROS) documentation. Web page: http://www.ros.org.

Rovida, F., Schou, C., Andersen, R., Damgaard, J., Chrysostomou, D., Bøgh, S., Pedersen, M., Grossmann, B., Madsen, O., and Krüger, V. (2014). SkiROS: A four tiered architecture for task-level programming of industrial mobile manipulators. In Presented at the 1st International Workshop on Intelligent Robot Assistants (IRAS) at the 13th International Conference on Intelligent Autonomous Systems (IAS).

Sabin, D. and Weigel, R. (1998). Product configuration frameworks - a survey. IEEE Intelligent Systems and their Applications, 13(4):42–49.

Saveriano, M., An, S.-i., and Lee, D. (2015). Incremental kinesthetic teaching of end-effector and null-space motion primitives. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 3570–3575.

Schou, C., Andersen, R. S., Chrysostomou, D., Bøgh, S., and Madsen, O. (2016). Skill based instruction of collaborative robots in industrial settings. Submitted to Robotics and Computer-Integrated Manufacturing (RCIM). Submitted May 2016.

Schou, C., Damgaard, J., Bøgh, S., and Madsen, O. (2013). Human-robot interface for instructing industrial tasks using kinesthetic teaching. In Proceedings of 44th International Symposium on Robotics, ISR 2013, pages 1–6. IEEE Xplore.

Schou, C. and Madsen, O. (2016a). A plug and produce framework for industrial collaborative robots. Submitted to International Journal of Advanced Robotic Systems (IJARS). Submitted May 2016.

Schou, C. and Madsen, O. (2016b). Towards shop floor hardware reconfiguration for industrial collaborative robots. In Accepted for presentation at: The 19th International Conference on Climbing and Walking Robots and Support Technologies for Mobile Machines (CLAWAR 2016) in Workshop on Collaborative Robots for Industrial Applications, London, United Kingdom.

Schuler, J. (1987). Integration von Förder- und Handhabungseinrichtungen. Springer.

Sirin, E., Parsia, B., Grau, B. C., Kalyanpur, A., and Katz, Y. (2007). Pellet: A practical OWL-DL reasoner. Web Semantics: Science, Services and Agents on the World Wide Web, 5(2):51–53. Special issue on Software Engineering and the Semantic Web.

Soininen, T., Tiihonen, J., Männistö, T., and Sulonen, R. (1998). Towards a general ontology of configuration. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 12(4):357–372.
SPARC (2015). Robotics 2020 multi-annual roadmap for robotics in Europe. Rev. B.

Stanford University (2015). Protégé 5. Web page: http://protege.stanford.edu.

Stenmark, M. (2013). Industrial robot skills. In 12th Scandinavian Conference on Artificial Intelligence (SCAI), pages 295–298.

Stenmark, M. and Malec, J. (2014). Describing constraint-based assembly tasks in unstructured natural language. In Proc. IFAC 2014 World Congress, Cape Town, South Africa.

Stenmark, M. and Malec, J. (2015). Knowledge-based instruction of manipulation tasks for industrial robotics. Robotics and Computer-Integrated Manufacturing, 33:56–67. Special Issue on Knowledge Driven Robotics and Manufacturing.

Sugi, M., Maeda, Y., Aiyama, Y., Harada, T., and Arai, T. (2003). A holonic architecture for easy reconfiguration of robotic assembly systems. IEEE Transactions on Robotics and Automation, 19(3):457–464.

Suh, I. H., Lim, G. H., Hwang, W., Suh, H., Choi, J.-H., and Park, Y.-T. (2007). Ontology-based multi-layered robot knowledge framework (OMRKF) for robot intelligence. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 429–436.

TAPAS (2011). Robotics-enabled logistics and assistive services for the transformable factory of the future. EU project funded under the European Community's Seventh Framework Programme grant no 260026. Web page: http://www.tapas-project.eu/.

Tenorth, M. and Beetz, M. (2012). Knowledge processing for autonomous robot control. In AAAI Spring Symposium: Designing Intelligent Robots.

Tenorth, M., Klank, U., Pangercic, D., and Beetz, M. (2011). Web-enabled robots. Robotics & Automation Magazine, IEEE, 18(2):58–68.

Tenorth, M., Perzylo, A., Lafrenz, R., and Beetz, M. (2013). Representation and exchange of knowledge about actions, objects, and environments in the RoboEarth framework. IEEE Transactions on Automation Science and Engineering, 10(3):643–651.

Valente, A. (2016). Reconfigurable Industrial Robots - An Integrated Approach to Design the Joint and Link Modules and Configure the Robot Manipulator, pages 779–794. Springer International Publishing.

Waibel, M., Beetz, M., Civera, J., D'Andrea, R., Elfring, J., Galvez-Lopez, D., Haussermann, K., Janssen, R., Montiel, J. M. M., Perzylo, A., Schiessle, B., Tenorth, M., Zweigle, O., and van de Molengraft, R. (2011). RoboEarth. IEEE Robotics Automation Magazine, 18(2):69–82.

Wiendahl, H.-P., ElMaraghy, H., Nyhuis, P., Zäh, M., Wiendahl, H.-H., Duffie, N., and Brieke, M. (2007). Changeable manufacturing - classification, design and operation. CIRP Annals - Manufacturing Technology, 56(2):783–809.
Wrede, S., Emmerich, C., Grünberg, R., Nordmann, A., Swadzba, A., and Steil, J. (2013). A user study on kinesthetic teaching of redundant robots in task and configuration space. Journal of Human-Robot Interaction, 2(1):56–81.

Yang, D., Dong, M., and Miao, R. (2008). Development of a product configuration system with an ontology-based approach. Computer-Aided Design, 40(8):863–878.

Yang, D., Miao, R., Wu, H., and Zhou, Y. (2009). Product configuration knowledge modeling using ontology web language. Expert Systems with Applications, 36(3, Part 1):4399–4411.

Zander, S. and Awad, R. (2015). Expressing and reasoning on features of robot-centric workplaces using ontological semantics. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pages 2889–2896.

Zimmermann, U. E., Bischoff, R., Grunwald, G., Plank, G., and Reintsema, D. (2008). Communication, configuration, application - the three layer concept for plug-and-produce. In Proceedings of the Fifth International Conference on Informatics in Control, Automation and Robotics (ICINCO 2008), pages 255–262.
Part III

Papers


The papers have been removed in the public version of this PhD thesis due to copyrights.


ISSN (online): 2246-1248
ISBN (online): 978-87-7112-773-7