International Journal of Advanced Robotic Systems
Regular Paper

Agent-oriented Embedded Control System Design and Development of a Vision-based Automated Guided Vehicle

Wu Xing*, Yu Jun, Lou Peihuang and Tang Dunbing
Nanjing University of Aeronautics & Astronautics, Nanjing, China
* Corresponding author. E-mail: [email protected]

Received 10 Mar 2012; Accepted 19 Apr 2012. DOI: 10.5772/46127

© 2012 Xing et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract: This paper presents a control system design and development approach for a vision-based automated guided vehicle (AGV) based on the multi-agent system (MAS) methodology and embedded system resources. A three-phase agent-oriented design methodology, Prometheus, is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism. The control system is then developed in an embedded implementation containing a digital signal processor (DSP) and an advanced RISC machine (ARM) by using the multitasking processing capacity of multiple microprocessors and the system services of a real-time operating system (RTOS). As a paradigm, an onboard embedded controller is designed and developed for the AGV with a camera detecting guiding landmarks, and the entire procedure has a high efficiency and a clear hierarchy. A vision guidance experiment for our AGV is carried out in a space-limited laboratory environment to verify the perception capacity and the onboard intelligence of the agent-oriented embedded control system.

Keywords: Multi-agent System, Automated Guided Vehicle, Control System, Embedded System, Vision Guidance
1. Introduction
A special role in analysing and solving decentralized system problems is attributed to the concepts of the agent and the multi-agent system (MAS). The MAS methodology has been applied widely to the routing and scheduling of AGVs when multi-vehicle transport control systems are designed, e.g. a local rescheduling procedure for the dynamic routing of autonomous decentralized AGV systems [1], an intelligent agent-based AGV controller for conflict-free shortest path generation, minimum-time motion planning and deadlock avoidance [2], and a decentralized control software architecture based on a situated multi-agent system for transport assignment and collision avoidance [3].
From another perspective, decentralized AGV control systems still need a group of autonomous vehicles featuring an environment perception capacity and onboard intelligence. This is an important concern in the domain of robotics and artificial intelligence, and the MAS methodology is also used for a wide range of robotic control applications [4-6]. In a multi-agent
architecture with cooperative fuzzy control, a goto agent and an avoid agent are designed to treat different aspects of robot control separately, and fuzzy logic is used to unify their actions into a final complex behaviour [5]. Moreover, the MAS technique is more than a high-level design methodology for control system architecture; it also provides a system development approach to a practical control device using hardware and software resources [7-10]. The Prometheus methodology is defined as a detailed process to specify, implement and test/debug agent-oriented software systems [7]. It offers a set of detailed guidelines, including examples and heuristics, that provide a better understanding of what is required in each step of development. This methodology has been applied to system modelling and software development for a surveillance-related mobile robot that can detect and follow humans [8]. Van Breemen defines a type of agent, named a controller-agent, which contains all the information of a particular control problem, and presents an implementation framework for hierarchically structured multi-controller systems that consist of a set of heterogeneous control algorithms [9]. Ning extends this work by developing an embedded control software system based on a multi-agent architecture for a teleoperated mobile robot, in which agents are encapsulated into different tasks of a real-time operating system (RTOS) [10].

When the MAS technique is applied in the development of a practical control system, hardware and software resources are also important to the implementation. With the advent of VLSI system-level integration, embedded systems such as the digital signal processor (DSP), the advanced RISC machine (ARM) and the field programmable gate array (FPGA) have become the centre of gravity of the computer industry. A combination of several heterogeneous microprocessors in a distributed control system has proven to be a high-performance scheme for complex mobile robots [11-12].

In summary, the MAS methodology is widely used for AGV routing and scheduling, robotic control architectures and software system development. However, few attempts have been made to design and develop an embedded control system in the context of an autonomous AGV. The aim of this paper is to improve the environment perception capacity and the onboard intelligence of our AGV by using the MAS methodology and the embedded system resources. This paper presents how the detailed process provided by the Prometheus methodology is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism, and how the multi-agent control system with its coordination mechanism is implemented at the hardware and software levels of embedded systems.
2. Overview of the Prometheus-based Methodology

The Prometheus methodology is a three-phase agent-oriented design process for software systems, and a design tool based on this methodology is available for practical software development. Accordingly, it has been applied in the software development of a surveillance-related robot [8]. The methodology offers a collection of guidelines and means, e.g. goal overview, scenario analysis and role assignment, to help the designer comprehend system functions and determine the elements that form the MAS. It also explicitly considers agent perceptions and actions as modelling elements, and it can be used to explain why and how the different elements are obtained from agent-based applications. The Prometheus methodology comprises three phases, and we adapt its process for development using the embedded system resources at the hardware and software levels, as shown in Fig.1.
Figure 1. The modified Prometheus methodology for embedded realization
The system specification phase determines the basic functionalities of the system, which can be described as a series of multi-level goals. Interface analysis is used to understand the input and output interactions between the system and the environment, which can also be considered as the perception and action of a robotic system. The system operation status is expressed as a scenario that contains a sequence of structured job steps. Each autonomous element in a scenario is defined as a role, which can be played by an agent or by several agents together.

The architecture design phase investigates the agent types that exist in the system and how they interact. The agents are identified by role-agent mapping, which analyses the correlation of goals, perceptions and actions of multiple roles. Agent types and numbers are designed to play the
different roles. Their means of interaction are described in the MAS coordination mechanism.

The detailed development phase is oriented to the embedded system resources at the hardware and software layers, differing from the specific software programming language (JACK) and the Prometheus Design Tool (PDT) [13]. Each agent is implemented using MCUs and their on-chip peripheral devices, as well as RTOS tasks and their algorithms. The coordination mechanism is constructed based on the physical buses and ports, as well as RTOS system services and their behaviour scheduling.

3. System Specification

An AGV transports materials between different sites by tracking a specified path towards the destination. The transportation functionality can be regarded in terms of two stages. The former is vehicle movement, which means the AGV carries materials and moves towards the destination. The latter concerns the transfer of materials, which means
the AGV has arrived at the destination and the materials are passed to another machine (such as a conveyor belt or a warehouse) automatically or manually. To support these two basic functions, the AGV needs to manage all the devices mounted on it, so as to receive and convey instructions, to monitor and feed back the vehicle's condition, etc. In this sense, the AGV function can be described and decomposed as shown in Fig.2.

Since vehicle movement is the most important and complicated function for AGV transportation, this paper mainly focuses on this point, which is broken down into more detailed sub-functions at different layers. Automated guided movement, our research priority, distinguishes AGVs from manned industrial vehicles. Path tracking, turning control and obstacle avoidance are three basic movement behaviours for automated guidance, so they are identified as three typical operation scenarios. This scenario conception is adopted here as a perspective to analyse the input and output interactions between the system and the environment, as shown in Fig.3.
Figure 2. Goal decomposition diagram
Figure 3. Interface analysis diagram
An actor is an external entity that helps the system interact with the environment, such as the sensors, motors and other devices mounted on the AGV. The looking-down camera is the primary guiding sensor that acquires the images of landmarks. The landmark width and path deviation are calculated for path tracking when the AGV travels along a single path, and the intersection position is computed in addition for turning control when the AGV approaches an intersection. The wheel encoder is the sensor that perceives the motion state of the driving wheels. The actual velocity and turning angle can be obtained by measuring the rotation velocities of the two wheels. Two servo motors are used to control the rotation of the driving wheels, including the velocity rate and direction of each wheel, as well as the speed difference between the two wheels. A range sensor is used to estimate the distance between the AGV and a possible obstacle, and a bumper to detect whether a collision occurs.

A scenario is a sequence of structured steps, including a goal, perception, an action, even other scenarios, which form a possible means of execution of the system. Fig.4 describes the process performed by the AGV to follow the guiding lines in the path tracking scenario. It begins with the AGV operation mode selection. In the automated guidance mode, the AGV first distinguishes the guiding lines from other landmarks. Then, the width of the guiding line is detected. If it is the normal width, the AGV runs at the normal speed rate, otherwise it runs at the slow rate. Path deviations between the AGV and the guiding lines are calculated, and they are used by the path tracking algorithm to generate the speed difference between the two driving wheels. The desired velocities of the driving wheels are synthesized from the AGV speed rate and the speed difference, and sent to the wheel motors as rotation control instructions (one common formulation is sketched below).
Figure 4. Path tracking scenario
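The final step of this scenario, synthesizing the desired wheel velocities, can be illustrated with a minimal C sketch. It uses the common differential-drive formulation; the function name, constants and sign convention are our own illustrative assumptions, not taken from the paper.

```c
/* Sketch: synthesize desired wheel velocities from the AGV speed rate
 * (chosen by the detected line width) and the speed difference produced
 * by the path tracking algorithm. All names and values are hypothetical. */

#define V_NORMAL 1.0f    /* normal speed rate, m/s (hypothetical)            */
#define V_SLOW   0.5f    /* slow rate used on double-width lines, m/s (hyp.) */

typedef struct {
    float left;          /* desired left wheel velocity, m/s  */
    float right;         /* desired right wheel velocity, m/s */
} WheelCmd;

WheelCmd synthesize_wheel_velocity(int double_width, float speed_diff)
{
    /* pick the speed rate from the detected line width ... */
    float v = double_width ? V_SLOW : V_NORMAL;
    WheelCmd cmd;

    /* ... then split the speed difference symmetrically between wheels */
    cmd.left  = v + 0.5f * speed_diff;
    cmd.right = v - 0.5f * speed_diff;
    return cmd;
}
```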
The roles involved in all of the scenarios are identified by clustering the goals in the system decomposition diagram and linking the perceptions and actions in the interface analysis diagram. Fig.5 describes the role assignment in our system. The roles related to sensors are defined as perception roles, which perceive the physical conditions of the AGV and the environment and offer data describing their status. The roles controlling motors and other actuators are regarded as actuation roles, which receive instructions and operate the devices. The behaviour roles describe the AGV's movement and other actions, such as differential control, turning control and motion control. The communication roles are used by the user interface, through which instructions can be received from the host computer or the operator and sent to the behaviour roles, and the data of the AGV operation status can be fed back and displayed on the host computer.
Figure 5. Role assignment diagram
Figure 6. MAS coordination mechanism
4. Architecture Design
One task involved in this phase is to design the agent types and numbers of the system. The agents are identified by comparing the similarity of different roles in terms of goals, perceptions and actions. In our case, we group (1) the roles of landmark identification, deviation calculation, position computation and line-width detection into the image processing agent, (2) the roles of range detection and collision detection into the obstacle detection agent, and (3) the roles of instruction receipt and status feedback into the user interface agent. Each remaining role is mapped to an agent of its own in a one-to-one manner.
The information about perception, behaviour, communication and actuation related to the roles depicted in Fig.5 is automatically propagated and linked with the agents after the role-agent mapping. Therefore, the agents are also grouped into these four types.

Once the agent types and numbers are decided, the next task is to build the MAS coordination mechanism by using agent conversations (Interaction Protocols, IPs). This mechanism describes what happens among agents to realize the given goals and scenarios, as shown in Fig.6. Agent conversations serve two basic functions: information exchange and time sequencing. Information exchange is a data flow using IPs from one agent to another to form a closed-loop process for a common goal. The information from the perception agents may be sent to one behaviour agent or to several, e.g. Deviations only to the differential control agent, and Position used to describe the corner for the turning control agent or the location for the motion control agent. Several behaviour agents may affect one kind of information together, and it may then be sent to several other agents, e.g. Speed rate
is determined by the turning control agent when the AGV approaches an intersection and rotates around the corner, or by the motion control agent when the AGV detects guiding lines with different widths, an obstacle in the direction it is heading, or even a possible collision. The interaction between the behaviour agents and the communication agents is bidirectional, e.g. Tracking is sent to the differential control agent and the turning control agent, and they feed back the AGV status to the user interface agent, and then to the host computer.
In order to organize data flows efficiently, information should be sent and received at certain time points in a defined order, also called a time sequence. This can be implemented by using IPs or a timer. The on-chip timer offers several initial reference signals for some perception and communication agents, and the entire MAS can then work in an orderly manner based on these reference signals. For example, the timer provides a timing signal to the image processing agent, which calculates path deviations and sends them to the differential control agent. This agent executes the path tracking process at the time point when it receives the deviations. The Deviations IP not only offers the information input, but also defines the time sequence of the image processing and differential control agents. Such an IP with a timing function is called a timing IP.
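A minimal sketch of this timing IP, assuming the μC/OS-II services adopted in Section 5: the tick hook derives the N Hz reference signal, and the Deviations output is posted to a mailbox, which both carries the data and activates the differential control task. In the real system the image processing agent runs on the DSP; both sides are shown in one RTOS here for brevity, and all names and the frame rate are illustrative.

```c
#include "ucos_ii.h"

#define N_HZ            25                          /* hypothetical frame rate */
#define TICKS_PER_FRAME (OS_TICKS_PER_SEC / N_HZ)   /* ticks per N Hz period   */

extern float compute_path_deviation(void);          /* vision algorithm, [14]  */
extern void  apply_path_tracking(float deviation);  /* tracking algorithm, [15]*/

static OS_EVENT *frame_sem;       /* N Hz timing signal from the timer  */
static OS_EVENT *deviation_mbox;  /* the Deviations timing IP           */

void timing_ip_init(void)
{
    frame_sem      = OSSemCreate(0);      /* no signal pending initially */
    deviation_mbox = OSMboxCreate(NULL);  /* empty mailbox               */
}

void OSTimeTickHook(void)                 /* called on every RTOS tick   */
{
    static INT32U ticks = 0;
    if (++ticks % TICKS_PER_FRAME == 0)
        OSSemPost(frame_sem);             /* release the image task at N Hz */
}

void image_processing_task(void *pdata)
{
    INT8U err;
    static float dev;                     /* static: must outlive the post */
    for (;;) {
        OSSemPend(frame_sem, 0, &err);        /* wait for the N Hz signal  */
        dev = compute_path_deviation();
        OSMboxPost(deviation_mbox, &dev);     /* timing IP: data + wakeup  */
    }
}

void differential_control_task(void *pdata)
{
    INT8U err;
    for (;;) {
        float *dev = OSMboxPend(deviation_mbox, 0, &err);  /* activated here */
        apply_path_tracking(*dev);
    }
}
```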
Fig.7 describes the detailed data flows and the time sequence in the MAS for the path tracking scenario. The Tracking instruction from the user interface agent initializes certain related agents. Triggered by an N Hz timing signal, the image processing agent identifies the guiding lines and calculates path deviations at each fixed interval. Deviations are sent to the differential control agent as a timing IP, shown as a bidirectional bold arrow
pointing directly to the differential control agent in Fig.7. This IP activates the differential control agent at an N Hz
interval, and this agent periodically calculates the speed difference to correct the deviations.
Figure 7. Path tracking protocols
Figure 8. The agent‐oriented embedded control system’s development
Figure 9. The control structure of the AGV path tracking
The motion control agent is triggered by the two timing IPs Linewidth and Speed difference. The AGV speed rate is the control output for the guiding information Linewidth, and also the reference input when the desired wheel velocity is synthesized. The motion control agent sends three IP outputs to the motor control agent. Start/stop and Wheel direction are two timing IPs that can immediately change the motor motion by activating the motor control agent as soon as they are received. Wheel velocity is an IP without the timing function. The motor control agent reserves Wheel velocity until it is triggered by the timing IP Actual velocity from the encoder measurement agent. At that time, the target value (Wheel velocity) and the actual value (Actual velocity) of the wheel velocities are compared and a closed-loop control is executed to eliminate the velocity error. The encoder measurement agent is triggered by a 4N Hz timing signal directly from the timer. That is to say, when the differential control agent performs path tracking once, the motor control agent carries out the velocity servo four times. This relation of control frequencies ensures that the speed difference required by path tracking can be implemented reliably by the AGV driving system.

5. Detailed Development

In this phase, each agent and the coordination mechanism are developed by using the embedded system resources at the hardware and software layers. Agents are converted into agent structures in a microprocessor network of a DSP and an ARM, and into agent tasks in the RTOS μC/OS-II, as shown in Fig.8. This onboard control system features a distributed intelligence structure and a parallel computation capacity. The MAS coordination mechanism is constructed based on the multitasking processing capacity of multiple microprocessors and the system services of the RTOS.

The image processing agent is encapsulated into the only task in the DSP, and all of the software and hardware resources are used by it for image processing, including median filtering, edge detection, feature extraction and line fitting. This agent in the DSP is coordinated with the agents in the ARM by a board-level means of communication, and its task sends its three outputs of deviations, position and line-width as messages. Only one ARM is employed for the other agents, owing to the support of the RTOS μC/OS-II and various on-chip peripherals. These agents interact with each other by using the RTOS's system services. The SMP is a shared memory area of RAM used to store the global data of different tasks in order to realize the MAS blackboard, as sketched below. A mutex semaphore is assigned to each group of shared data to ensure the reliability and integrity of the data: a task can access the shared data only when it holds the related mutex semaphore.
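A minimal sketch of the SMP blackboard, assuming μC/OS-II style services: one shared data group in RAM guarded by its own mutex semaphore, which a task must hold to read or write the group. The structure fields and the priority value are illustrative assumptions.

```c
#include "ucos_ii.h"

#define MOTION_MUTEX_PRIO 4        /* priority-inheritance prio (hypothetical) */

typedef struct {
    float wheel_velocity[2];       /* actual wheel velocities from encoders */
    float speed_rate;              /* current AGV speed rate                */
} MotionData;

static MotionData motion_smp;      /* one shared data group of the SMP     */
static OS_EVENT  *motion_mutex;    /* its dedicated mutex semaphore        */

void smp_init(void)
{
    INT8U err;
    motion_mutex = OSMutexCreate(MOTION_MUTEX_PRIO, &err);
}

/* Example reader: the differential control task takes a consistent
 * snapshot of the group before running the path tracking algorithm.     */
void read_motion_snapshot(MotionData *out)
{
    INT8U err;
    OSMutexPend(motion_mutex, 0, &err);   /* block until the mutex is owned */
    *out = motion_smp;                    /* copy while protected           */
    OSMutexPost(motion_mutex);            /* release for other tasks        */
}
```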
A timer interrupt guarantees the behaviour synchronization of several tasks, as shown in Fig.8. Timing signals with two frequencies are generated, as a semaphore or a mailbox message, to make the user interface task send the status data (e.g. path deviations) to the host computer periodically, to make the image processing task acquire the images of landmarks periodically, to make the obstacle detection task search for possible obstacles periodically, and to make the encoder measurement task count the pulse number periodically.

The encoder measurement task is activated by a timing signal to calculate the actual velocities of the driving wheels, or to count the turning angle of an AGV turnaround. The former is sent as a message to the mailbox of the motor control task and starts a servo control process based on the error of wheel velocity. The latter is sent as a message to the mailbox of the turning control task and determines whether the AGV rotation achieves the objective orientation. The data is also saved to the SMP, where it can be collected by the user interface task to feed back the information.

The image processing task is also triggered by a timing signal to calculate the path deviations and line-width of the guiding lines, or to detect the position of the crossing point in the visual field. More details of the image processing algorithm can be found in [14]. Deviations, position and line-width are sent as messages to make the related tasks maintain the AGV along the path, locate it accurately on the target point, or change its speed rate.

The differential control task is triggered by the message of path deviations. It reads the two actual wheel velocities and the AGV speed rate from the SMP, and then starts the path tracking algorithm. It pursues the optimal speed difference that can realize a deviation elimination process with the best error-correction coordination and rapidity that the actual driving system can support. More details of the algorithm can be found in [15].

The motion control task is activated by the messages of line-width, position, speed difference, object distance or AGV collision respectively, so as to change the AGV speed rate, to synthesize the desired wheel velocity, to locate the AGV on the target point, to stop the AGV in an emergency, or to change the AGV movement direction.

The motor control task is activated by the message Start/stop so as to start or stop the AGV, by the message Wheel direction so as to change the AGV movement direction, or by the message Actual velocity so as to launch the velocity servo control algorithm for the driving wheels, as sketched below. A multi-objective genetic algorithm with Pareto optimality and elitist tactics is used to identify the driving system dynamics and to optimize the PID controller parameters; more details of the algorithm can be found in [16].
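A minimal sketch of the servo step launched by the Actual velocity message: a positional PID whose gains stand in for the GA-optimized parameters of [16]. The constants, names and PWM scaling are our own illustrative assumptions, not the paper's actual implementation.

```c
#define N_HZ          25                        /* hypothetical image rate, Hz */
#define SERVO_PERIOD  (1.0f / (4.0f * N_HZ))    /* 4N Hz servo interval, s     */
#define PWM_MAX       1.0f                      /* full PWM duty cycle         */

typedef struct {
    float kp, ki, kd;       /* gains: the GA-optimized parameters of [16] */
    float integral;         /* accumulated velocity error                 */
    float prev_err;         /* error at the previous 4N Hz sample         */
} PidState;

/* Compare the target (Wheel velocity) and measured (Actual velocity)
 * values and return a clamped PWM duty for the motor driver.            */
float velocity_servo_step(PidState *pid, float target, float actual)
{
    float err = target - actual;
    float duty;

    pid->integral += err * SERVO_PERIOD;
    duty = pid->kp * err
         + pid->ki * pid->integral
         + pid->kd * (err - pid->prev_err) / SERVO_PERIOD;
    pid->prev_err = err;

    if (duty >  PWM_MAX) duty =  PWM_MAX;   /* clamp to driver range  */
    if (duty < -PWM_MAX) duty = -PWM_MAX;   /* sign selects direction */
    return duty;
}
```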
The PWM voltage signal is given to the motor drivers to regulate the wheel velocities.

Through the three phases of system specification, architecture design and detailed development, the AGV control structure is built based on the image processor, the tracking controller and the servo controller, as shown in Fig.9. Under the constraints of vehicle dynamics, the tracking controller generates the optimal speed difference output to eliminate path deviations in a multi-step control sequence. The servo controller regulates the PWM voltages of the motor drivers to correct the errors between the desired and actual velocities of the driving wheels, and its parameters are optimized by the multi-objective genetic algorithm. Encoders provide the actual velocities as the control feedback for servo control and also as the dynamics constraints for path tracking.

6. Vision Guidance Experiment

To test the agent-oriented embedded control system in the context of an autonomous AGV, our AGV (NHAGV) is used to perform a vision guidance experiment in a space-limited laboratory environment. Its main technical parameters are listed in Table 1. The NHAGV can change its speed rate from 0 to 60 m/min in a stepless speed-adjusting manner.
The tracking accuracy denotes the lateral position deviation between the AGV and the guiding line at different speed rates, and the locating accuracy means the deviation when the AGV stops on the target point at low speed.

The entire travelling route from our laboratory to the outside corridor is shown in Fig.10. It is difficult for the NHAGV, with a body size of 1.9 m × 0.9 m, to go through a door with a 1.4 m width and to enter a corridor with a 2 m width. Therefore, right-angle turning points are set in the route in order to reduce the turning radius of the NHAGV. In the vision guidance experiment, the onboard camera of the NHAGV is used not only for real-time AGV vision guidance control, but also for guidance video recording, which captures the change process of the guiding lines in the visual field, as shown in Fig.11. In addition, we hold a digital camera (not mounted on the NHAGV) and manually record the typical positions and actions of the NHAGV along the travelling route, as shown in Fig.12. With the help of Fig.10, Fig.11 and Fig.12, we can observe the NHAGV movements and guiding line changes intuitively, and further analyse how the agent-oriented embedded control system works when the NHAGV autonomously moves along the travelling route.
Body size (m): 1.9 × 0.9
Load capacity (kg): 500
Moving speed (m/min): 0-60
Tracking accuracy (mm): 0-10
Locating accuracy (mm): 0-3
Table 1. The main technical parameters of NHAGV
Figure 10. The space‐limited experiment environment
The NHAGV starts at site 1, as shown in Fig.10. The onboard control system is initialized, and the agents related to path tracking work cooperatively in sequence, as shown in Fig.7. When the image processing agent detects a guiding line with a double width, as shown in Fig.11(a), the NHAGV enters the deceleration area, as shown in Fig.12(a), and the motion control task reduces the AGV speed at a rate of 100 mm/s.

When the image processing agent detects the first turning point in the front area of the visual field, the NHAGV approaches the first corner. The turning control task begins to organise the whole turning process. On the one hand, it continuously checks the position of the turning point in the visual field when it is triggered by the message from the image processing agent. On the other hand, it asks the differential control task to eliminate path deviations and maintain the NHAGV along the guiding line. When the turning control task is informed that the turning point has moved into the central area of the visual field, as shown in Fig.11(b), it sends the message Stop to the motor control task, and immediately stops the NHAGV at site 2. At that moment, the NHAGV accurately locates its kinematics centre on the turning point.

The turning control task then sets the two driving wheels to the same velocity rate but opposite directions, and restarts the NHAGV to make it rotate around the turning point, as shown in Fig.11(c) and Fig.12(c). The encoder measurement task accumulates the turning angle and sends it to the turning control task, which continuously checks whether the angle achieves the target value between the current path and the next path. If the target value is achieved, as shown in Fig.11(d), it sends the message Stop to the motor control task, and immediately stops the NHAGV at site 3, as shown in Fig.10 and Fig.12(d). At that moment, the NHAGV has changed its heading to the new path. After the NHAGV re-orientates itself, the turning control task sets the same velocity rate and direction for the driving
wheels, and then restarts the NHAGV. This completes the right-angle turning control; the underlying angle accumulation is sketched below.
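A minimal sketch of the turning angle accumulation during the zero-radius turn: with the wheels running at equal speeds in opposite directions, differential-drive kinematics gives the heading change as dtheta = (ds_right - ds_left) / track width. The geometry and encoder constants are illustrative assumptions.

```c
#include <math.h>

#define PI             3.14159265f
#define WHEEL_RADIUS   0.10f          /* wheel radius, m (hypothetical)     */
#define TRACK_WIDTH    0.60f          /* wheel separation, m (hypothetical) */
#define PULSES_PER_REV 1000.0f        /* encoder resolution (hypothetical)  */
#define TARGET_ANGLE   (PI / 2.0f)    /* right-angle turn                   */

static float heading;                 /* accumulated turning angle, rad     */

/* Called with the pulse counts of the last 4N Hz measurement interval;
 * returns 1 when the 90-degree target is reached, i.e. when the turning
 * control task should send the Stop message to the motor control task.   */
int accumulate_turn(long left_pulses, long right_pulses)
{
    float ds_l = 2.0f * PI * WHEEL_RADIUS * (float)left_pulses  / PULSES_PER_REV;
    float ds_r = 2.0f * PI * WHEEL_RADIUS * (float)right_pulses / PULSES_PER_REV;

    heading += (ds_r - ds_l) / TRACK_WIDTH;   /* kinematic heading update */
    return fabsf(heading) >= TARGET_ANGLE;
}
```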
Figure 11. The video images of the guiding lines
The vision guidance experiments have been repeated many times, with satisfactory results. The NHAGV, with a body size of 1.9 m × 0.9 m, can turn with a zero radius in order to go through the door with a 1.4 m width and enter the corridor with a 2 m width without any collision in this space-limited environment. In particular, when the NHAGV locates itself on the second turning point, as shown in Fig.12(g), the deviation between the NHAGV and the target point is no more than 3 mm, the distance between the front of the NHAGV and the corridor wall is less than 10 cm, and the distance between the side of the NHAGV and the doorframe is less than 30 cm.
Figure 12. The video images of the AGV movements
Figure 13. The parallel operation in a pipeline structure
These satisfactory experimental results are achieved for several reasons. Besides the contribution of the relevant algorithms themselves (e.g. the image processing algorithm identifies path deviations quickly and accurately, and the path tracking algorithm eliminates path deviations effectively), the agent-oriented design methodology provides a high-efficiency, clear-hierarchy development procedure that helps us to analyse system functions, to design the MAS for the application, and to utilize hardware and software resources effectively.

During the phase of MAS modelling, this methodology explicitly considers agent perceptions and actions as modelling elements, and the different elements are obtained from the agent-based application reasonably. For AGV guidance, multiple elements of environmental information are perceived by the camera, the range sensor and the encoders respectively, and integrated by the agent-oriented control system to produce an advanced guiding capacity, so that the tracking and locating accuracy remains high and no collision occurs in this space-limited environment.

During the phase of system development, this methodology utilizes the multitasking processing capacity of the embedded control system effectively in order to achieve the onboard intelligence. The microprocessor network of the DSP and ARM and the multitasking RTOS construct a parallel computation capacity in a pipeline structure, as shown in Fig.13. The whole vision guidance process, comprising image processing, path tracking, encoder measurement and motor control, can be completed in one image processing cycle. For each set of path deviations, the path tracking agent runs once, and the encoder measurement agent and the motor control agent run four times (the 4N Hz servo rate). The operation time of path tracking is much less than that of image processing, so that encoder measurement and motor control can be executed during the remaining time. When vision guidance is continuously implemented in the pipeline structure, the (k+1)-th image processing occurs simultaneously with the k-th path tracking and with the four encoder measurement and motor control executions of cycle k. As a result, the pipeline structure improves the operation efficiency significantly compared to the conventional cascaded structure. Accordingly, the real-time performance of perception, processing and control can be guaranteed while the intelligent algorithms are executed.
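A speculative sketch of how the pipeline of Fig.13 might map onto μC/OS-II tasks on the ARM (image processing runs concurrently on the DSP): the higher-rate 4N Hz servo stage gets a higher priority (lower number) so that it preempts the N Hz tracking stage, and the aperiodic user interface fills the remaining slack. Priorities, stack sizes and task names are illustrative assumptions.

```c
#include "ucos_ii.h"

#define STK_SIZE 256

extern void motor_control_task(void *pdata);        /* 4N Hz servo stage   */
extern void differential_control_task(void *pdata); /* N Hz tracking stage */
extern void user_interface_task(void *pdata);       /* aperiodic stage     */

static OS_STK servo_stk[STK_SIZE], track_stk[STK_SIZE], ui_stk[STK_SIZE];

void create_control_tasks(void)
{
    /* stack top passed because the ARM port grows the stack downwards */
    OSTaskCreate(motor_control_task,        NULL, &servo_stk[STK_SIZE - 1],  5);
    OSTaskCreate(differential_control_task, NULL, &track_stk[STK_SIZE - 1],  8);
    OSTaskCreate(user_interface_task,       NULL, &ui_stk[STK_SIZE - 1],    12);
}
```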
7. Conclusion

The agent-oriented embedded control system is designed and developed based on the MAS methodology and the embedded system resources for a vision-based AGV whose onboard camera detects landmarks. The MAS methodology is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism. At the hardware layer, the microprocessors of the DSP and ARM are organized to construct a distributed intelligence structure. At the software layer, the system services of the RTOS are utilized to strengthen the parallel computation capacity. On the algorithm side, a set of agents is designed for the perception, behaviour, actuation and communication functions of AGV guidance and control.

For complex mechatronics design, the MAS methodology can provide an effective bottom-up organizing approach for embedded control systems. In turn, the embedded system resources, comprising the parallel computation structure of multiple microprocessors at the hardware layer and the multitasking processing capacity of the RTOS at the software layer, can facilitate the agent implementation and the development of the MAS coordination mechanism. The vision guidance experiment in a space-limited environment shows the advantages of combining the MAS methodology and the embedded system resources, which can improve the perception capacity and the onboard intelligence in the context of an autonomous AGV.

8. Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61105114 and No. 51175262) and the Research Fund of Nanjing University of Aeronautics & Astronautics (No. NJ2010025 and No. S1026-053).
9. References

[1] Tatsushi N, Masakazu A and Masami K (2006) Experimental studies on a local rescheduling procedure for dynamic routing of autonomous decentralized AGV systems. Robotics and Computer-Integrated Manufacturing. 22: 154-165.
[2] Sharad C S, Alok K C and Surendra K (2008) Development of an intelligent agent-based AGV controller for a flexible manufacturing system. International Journal of Advanced Manufacturing Technology. 36: 780-797.
[3] Danny W and Tom H (2008) Architectural design of a situated multiagent system for controlling automatic guided vehicles. International Journal of Agent-Oriented Software Engineering. 2(1): 90-128.
[4] Muñoz-Salinas R, Aguirre E and García-Silvente M (2005) A multi-agent system architecture for mobile robot navigation based on fuzzy and visual behaviour. Robotica. 23(6): 689-699.
[5] Innocenti B, López B and Salvi J (2007) A multi-agent architecture with cooperative fuzzy control for a mobile robot. Robotics and Autonomous Systems. 55: 881-891.
[6] Muñoz-Salinas R, Aguirre E and García-Silvente M (2009) Multi-agent system for people detection and tracking using stereo vision in mobile robots. Robotica. 27(5): 715-727.
[7] Padgham L and Winikoff M (2004) Developing Intelligent Agent Systems: A Practical Guide. John Wiley and Sons.
[8] Gascueña J M and Fernández-Caballero A (2011) Agent-oriented modelling and development of a person-following mobile robot. Expert Systems with Applications. 38(4): 4280-4290.
[9] van Breemen A J N (2001) Agent-based multi-controller systems: a design framework for complex control problems. PhD thesis, University of Twente.
[10] Ning K J and Yang R Q (2006) MAS based embedded control system design method and a robot development paradigm. Mechatronics. 16: 309-321.
[11] Lazaro J L, Garcia J C and Mazo M (2001) Distributed architecture for control and path planning of autonomous vehicles. Microprocessors and Microsystems. 25: 159-166.
[12] Yasuda G (2003) Distributed autonomous control of modular robot systems using parallel programming. Journal of Materials Processing Technology. 141: 357-364.
[13] Thangarajah J, Padgham L and Winikoff M (2005) Prometheus design tool. The 4th Int. Conf. on Autonomous Agents and Multi-agent Systems. 127-128.
[14] Yu J, Lou P H and Qian X M (2008) An intelligent real-time monocular vision-based AGV system for accurate lane detecting. Proc. of Int. Colloquium on Computing, Communication, Control, and Management. 2: 28-33.
[15] Wu X and Lou P H (2009) Optimal path tracking control based on motion prediction. Control and Decision. 24(4): 565-569.
[16] Wu X, Lou P H and Tang D B (2011) Multi-objective genetic algorithm for system identification and controller optimization of automated guided vehicle. Journal of Networks. 6(7): 982-989.