AN OPEN SOFTWARE INFRASTRUCTURE FOR RECONFIGURABLE CONTROL SYSTEMS

L. Wills, S. Kannan, B. Heck, G. Vachtsevanos, C. Restrepo, S. Sander, D. Schrage, J.V.R. Prasad
Georgia Institute of Technology, Atlanta, GA 30332

To appear in the Proceedings of the American Control Conference, Chicago, IL, June 2000.

Abstract

Recent advances in software technology have the potential to revolutionize control system design. This paper describes a new software infrastructure for complex control systems, which exploits new and emerging software technologies. It presents an open control platform (OCP) for complex systems, including those that must be reconfigured or customized in real time for extreme-performance applications. An application of the OCP to the control system design of an autonomous aerial vehicle is described.
1 Introduction

Complex dynamic systems, such as aircraft, power systems, and telecommunications networks, present major challenges to control systems designers. Both the military and civilian sectors of our economy are demanding new and highly sophisticated capabilities from these systems that the current state of the art does not offer. Among them are the following:
• Ability to adapt to a changing environment;
• Ability to reconfigure the control algorithms;
• Plug-and-play extensibility for new technologies;
• Interoperability among different components (e.g., running on different processors);
• Open software architecture that supports tools and algorithms from a variety of sources and domains.
Meeting these challenges will require a fundamental change in the way control systems are composed, integrated, and reused.

Recent advances in software technology have the potential to revolutionize control system design. In particular, new component-based architectures [2,17] encourage flexible "plug-and-play" extensibility and evolution of systems. Distributed object computing allows heterogeneous components to interoperate across diverse platforms and network protocols [3,4,7]. New advances are being made to enable dynamic reconfiguration and evolution of systems while they are still running [10,11]. This paper describes a new software infrastructure for complex control systems that exploits these new and emerging software technologies. More specifically, it presents an open control platform (OCP) for complex systems, including those that must be reconfigured or customized in real time.

The next section describes the state of the art in control system design and discusses features desired in a control system. Section 2 then describes an open control software infrastructure to support these desired features, followed by an overview of a first-generation prototype of this infrastructure that has been developed for an autonomous aerial vehicle control application. The paper concludes with a discussion of ongoing work and open issues.

1.1 State of the Art in Control System Configurations

Control systems for highly complex systems (such as processing plants, manufacturing processes, aerospace vehicles, power plants, etc.) are themselves very complex. Notions of "control" are expanding from the traditional loop-control concept to include such other functionalities as supervision, coordination

and planning, situation awareness, diagnostics, and optimization [1,18,19]. Consider, for example, the control system for an Uninhabited Aerial Vehicle (UAV) such as a vertical takeoff and landing (VTOL) UAV of the helicopter type [16]. A hierarchical/intelligent control architecture for the UAV application is depicted in Figure 1. It integrates high-level situation awareness and mode selection functionalities with mid-level coordination routines, aimed at facilitating mode transitioning and reconfigurable control, as well as low-level control activities. An intelligent operator interface complements the basic algorithmic modules. The three-level hierarchy is intended to respond to the varying time frames and degrees of intelligence imposed by the behaviors the vehicle must exhibit in the course of a mission. For example, some of the high-level functions (such as situation awareness) must be able to react quickly to changing conditions. Others, such as mission customization modules, act on a much slower time frame.

Figure 1: Hierarchical Control Architecture (high-level control: situation awareness, reactive control and mode selection; mid-level control: mode transitioning; low-level control: stability and control augmentation system; supported by the sensor suite, vehicle models, and an operator GUI interface).

The architecture supports a variety of sensor suites, in a plug-and-play manner, that are capable of sensing both external and internal conditions. Having both day and night vision capability is a typical example of sensor hardware/software that may be viewed as mission specific and configured in a plug-and-play mode. Raw sensor data is fed to the sensor management module, where data is processed, validated, and fused to provide usable information to other control modules. The output of the sensor management module feeds into a high-level situation awareness module and a fault detection and isolation module, as well as into lower-level control modules. The high-level modules must be able to react to possible threats, both external (such as a missile) and internal (such as a fault). If a threat or potential failure is recognized, the mode selection module generates in real time the sequences of new modes needed to change the vehicle's operating regime. New waypoints are generated for the vehicle flight controllers, and transitioning dynamics are scheduled via the mode switching or reconfiguration module. At this level, a nonlinear dynamic/fuzzy logic control framework assists in accomplishing the switching tasks in a stable and robust manner.
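To make this reactive, high-level flow concrete, the following is a minimal sketch, in C++, of how fused sensor information might drive mode selection. It is our own illustration rather than code from the OCP; the type and function names (FusedSituation, ModeSelector, Waypoint) are hypothetical.

    #include <optional>
    #include <string>
    #include <vector>

    // Hypothetical fused picture produced by the sensor management module.
    struct FusedSituation {
        bool externalThreat;   // e.g., missile warning
        bool internalFault;    // e.g., actuator failure flagged by FDI
    };

    // Hypothetical waypoint passed down to the flight controllers.
    struct Waypoint { double x, y, z, heading; };

    // High-level mode selection: reacts to the fused situation and, when needed,
    // emits a sequence of modes plus new waypoints for the lower layers.
    class ModeSelector {
    public:
        std::optional<std::vector<std::string>>
        react(const FusedSituation& s, std::vector<Waypoint>& waypointsOut) {
            if (!s.externalThreat && !s.internalFault)
                return std::nullopt;                        // nominal: no mode change
            std::vector<std::string> modes;
            if (s.externalThreat) modes = {"evade", "reacquire_route"};
            if (s.internalFault)  modes = {"reconfigure_control", "return_to_base"};
            waypointsOut.push_back({0.0, 0.0, -50.0, 0.0}); // placeholder waypoint
            return modes;                                   // handed to mode transitioning
        }
    };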


The mode-transitioning level of the hierarchy, in turn, provides set points and command trajectories to the low-level stabilizing controllers. Although the control architecture depicted in Figure 1 is specific to a UAV, it possesses generic features that may be found in many complex engineered systems. Thus, modern control system configurations entail a series of "modules" that perform a variety of functions needed to carry out a "mission," a "production plan," or a specific "operational task." The architecture of new control systems may be viewed as a hierarchical structure, a distributed one, or, as in most cases, a combination of both.

1.2 Limitations of the State of the Art

State-of-the-art implementations of these complex control systems tend to have a common set of problems:
• Tight coupling. The modules in a typical modern control system tend to be tightly coupled in that they share state or control information. This can make the system faster but may make changes to the system harder to achieve. With the advent of ever-faster processors, the efficiency gained by tight coupling may not be as important as the ability to update the control system quickly and easily.
• Complex, inflexible data interchange. Distributed control systems rely on data interchange between components. Modifications to the hardware may require extensive modifications to the entire data transfer protocol (timings, priorities, interconnections, etc.).
• Computational limitations. Many extreme-performance control systems operate under stringent technological constraints (e.g., size, power, bandwidth). Current control systems that meet these constraints may not satisfy the computational requirements of extreme performance.
• Closed/proprietary systems. Traditionally, control firms have provided customized or proprietary hardware and software platforms to their customers, severely limiting component interchangeability, reconfigurability, and distributed and concurrent processing.


2 Open Control Platform Design

A new software infrastructure, called the Open Control Platform (OCP), is being developed by Georgia Tech in collaboration with Boeing to serve as a substrate for integrating innovative control technologies. A prototype version has been successfully demonstrated [5] and is being applied to the autonomous control of uninhabited aerial vehicles. As illustrated in Figure 2, the OCP consists of multiple layers of application programmer interfaces (APIs) that increase in abstraction and become more domain specific at the higher layers.

Figure 2: Open Control Platform (layers, from bottom to top: real-time distributed computing, based on real-time CORBA; the virtual resource network, providing plug-and-play reconfigurability; and re/configuration management, based on control domain-specific patterns; with controller and UAV interfaces above).

In the bottommost layer, the OCP builds on and extends new advances in real-time middleware technology [4,7] that allow distributed, heterogeneous components to communicate asynchronously in real time. The middle layer treats components as networked resources that are flexibly configured to achieve a desired functionality and that can be readily reconfigured dynamically. The third layer supports reconfiguration management by making reconfiguration strategies, change policies, and the rationale for reconfiguration decisions explicit in a software architecture model. The layers of the OCP are described in more detail in this section.

2.1 Real-Time Distributed Object Computing

Today's complex control systems are increasingly made up of distributed, heterogeneous hardware and software components, using a variety of different types of machine platforms and programming languages. The underlying network technology in place to allow communication between these components may also vary, both within and across applications. In the UAV application, for example, control algorithms and sensor-processing modules running on a desktop PC may interact with a high-fidelity math model of the vehicle running on a multiprocessor computer. Also, a high-fidelity simulation capable of providing realistic graphics and visualization (see Figure 3) may be running on specialized hardware to visualize the aircraft dynamics and provide battle information. During flight, these algorithms may communicate with the actual hardware sensor and actuation suites on board the vehicle. Some of these components may need to communicate over a wireless link, while others may be connected through a high-speed, high-bandwidth backplane bus on board the vehicle. Moreover, the distribution topology is likely to evolve as the system develops and as the UAV interacts with other vehicles and other sources of mission guidance, such as a mothership.

Figure 3: Distributed Communication (controllers, the vehicle model with its rotor, servo, and rigid-body dynamics, the GUI and UAV interfaces, and the testbed actuator and sensor suite, distributed across machines such as onyx.gatech.edu, fast-sgi.gatech.edu, controllers.gatech.edu, and avionics.testbed.gatech.edu).
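One way to keep such heterogeneous components interoperable is to define each component's interface once and access it identically whether the implementation is a local simulation or remote flight hardware. The sketch below is a simplified stand-in for what the OCP expresses in CORBA IDL; the names (SensorSuite, AttitudeSample) are hypothetical and chosen only for illustration.

    // Hypothetical data carried between components; in the OCP this role is
    // played by an IDL-defined structure marshalled by the ORB.
    struct AttitudeSample {
        double roll, pitch, yaw;   // radians
        double timestamp;          // seconds
    };

    // Abstract interface playing the role of an IDL-generated stub: consumers code
    // against this, unaware of whether the servant is a simulation on one host or
    // real avionics hardware on another.
    class SensorSuite {
    public:
        virtual ~SensorSuite() = default;
        virtual AttitudeSample latestAttitude() = 0;
    };

    // A consumer only needs the abstract interface; swapping simulation for flight
    // hardware changes which implementation is bound, not this code.
    double currentRollDeg(SensorSuite& sensors) {
        return sensors.latestAttitude().roll * 57.29577951308232;
    }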


An important challenge is providing real-time communication among these distributed, heterogeneous components. The OCP leverages the latest middleware technology for distributed object communication. In particular, it uses the Common Object Request Broker Architecture (CORBA) standard set by the Object Management Group (OMG) [9] to achieve seamless distributed communication between objects running on different computers on a network and across multiple network protocols. Extensions have been made to CORBA technology by Washington University and by Boeing to allow distributed communication to occur in real time [4,7].

2.2 Prioritized Event-Based Communication

One of the key extensions provided by Washington University is a new real-time event service [4], which enables components to interact without being tightly coupled. The event service provides a communication abstraction called an event channel. The event channel mediates information flow between suppliers and consumers so that the system architecture can be reconfigured by a relatively local change in connections to the event channel, rather than by more pervasive changes between directly linked components. This minimizes the architectural impact of switching components.

The event service used by the OCP also supports real-time event prioritization and management [4]. This is useful during an extreme-performance maneuver, for instance, when attitude data may not only be needed at a higher rate and quality but also at a higher priority than other types of events. The event service facilitates real-time scheduling by supporting multiple strategies for priority-based event dispatching and preemption.

2.3 Virtual Resource Network

The core real-time CORBA technology provides mechanisms with which one may reconfigure and reprioritize information flow, but the interface is at a very low level. Hence, we introduce an abstraction called the virtual resource network (VRN) to simplify configuration and reconfiguration of system components during a mission. The overall objective in configuring control software for a mission is to define the interactions between the various components in the system in a systematic fashion. We treat the components of the system as networked resources at multiple levels of granularity. For example, in the UAV application, the resources at the flock mission planning level are individual UAVs. Internal to the UAV, resource allocation occurs when faults occur or resources are being depleted; the resources considered for reconfiguration at this level are redundant sensors and control surfaces. Re/configuration could be at the mission level, the sensor level, or anywhere in between.

Configurations are defined as input-output relationships and information flow between the components. This determines the functionality of the network of resources. The VRN also abstracts away distinctions between mathematical models (e.g., of actuator dynamics) and hardware (e.g., the actuator device drivers). The ability of these resources to be flexibly interchanged without changing other components in the system, e.g., in progressing from simulation to real hardware, comes from the standardization of the interfaces for each component. It is also possible for any other resource on the network to use a resource address to connect directly to and interact with a resource, irrespective of where it exists on the network. All distributed computing specifics have been abstracted out.
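The decoupling provided by the event channel of Section 2.2, together with priority-based dispatching, can be sketched as follows. This is a deliberately minimal illustration with hypothetical names (EventChannel, Event), not the actual real-time event service API used by the OCP.

    #include <functional>
    #include <map>
    #include <queue>
    #include <string>
    #include <utility>
    #include <vector>

    struct Event {
        std::string topic;     // e.g., "attitude", "gps"
        int priority;          // larger = more urgent
        std::vector<double> payload;
    };

    // Orders the pending-event queue so higher-priority events are dispatched first.
    struct ByPriority {
        bool operator()(const Event& a, const Event& b) const { return a.priority < b.priority; }
    };

    class EventChannel {
    public:
        using Consumer = std::function<void(const Event&)>;

        // Consumers subscribe by topic; suppliers never reference consumers directly,
        // so swapping a consumer is a local change in channel connections.
        void subscribe(const std::string& topic, Consumer c) { consumers_[topic].push_back(std::move(c)); }

        void push(Event e) { pending_.push(std::move(e)); }

        // Dispatch all pending events in priority order.
        void dispatch() {
            while (!pending_.empty()) {
                Event e = pending_.top();
                pending_.pop();
                for (auto& c : consumers_[e.topic]) c(e);
            }
        }

    private:
        std::map<std::string, std::vector<Consumer>> consumers_;
        std::priority_queue<Event, std::vector<Event>, ByPriority> pending_;
    };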
2.4 Architecture-Oriented Re/configuration

The software infrastructure supports a loose coupling between control system components to provide flexibility, extensibility, and reuse. The VRN provides a layer of abstraction for easily specifying configurations and reconfigurations with localized changes to the VRN representation. However, a configuration management mechanism is needed to ensure that the configurations specified in the VRN are valid and consistent with functional and nonfunctional requirements (such as performance, security, and reliability). Moreover, it is critical that changes to the configuration (either through human-directed evolution or dynamic, on-line reconfiguration) maintain overall system integrity by being globally coordinated and consistent.

The OCP takes an architecture-oriented approach to reconfiguration management, building on seminal work in runtime software evolution [10]. Reconfigurations of control systems follow standard strategies, or "change application policies" [10], for making changes without violating reliability, safety, and consistency constraints. For example, it is common for more than one controller to be applicable to the same task, but only one may be active at a time. A reconfiguration in which one controller component is replaced with another might need to follow different policies depending on the types of controllers involved; some may require a mixing strategy while others a discrete switch. Change policies may dictate the sequence in which the reconfiguration must take place for a stable transition and the time period within which the reconfiguration must be completed in order for the controllers to continue to provide stabilizing control. Such policies may also require the initiation of reconfigurations in other resources, such as the activation of additional sensors to start generating data, or they may require a new component to work concurrently with the component it is going to replace in order to gather state information before the replacement takes place. The reconfiguration management layer of the OCP supports the explicit definition and application of reconfiguration policies specific to the controls domain.
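As an illustration of one such change application policy, the sketch below (our own hypothetical code, not the OCP's) swaps in a replacement controller by first running it alongside the old one to warm up its state, then performing a discrete switch, and reports failure if the swap cannot be completed within a deadline.

    #include <chrono>

    // Minimal controller interface assumed for this illustration.
    class Controller {
    public:
        virtual ~Controller() = default;
        virtual void observe(double measurement) = 0;   // update internal state
        virtual double command() = 0;                   // produce control output
    };

    // One possible change application policy: warm-start then discrete switch,
    // bounded by a deadline so stabilizing control is never interrupted for long.
    class SwapPolicy {
    public:
        explicit SwapPolicy(std::chrono::milliseconds deadline) : deadline_(deadline) {}

        // Runs the replacement in shadow mode for warmupSteps control cycles,
        // feeding it measurements but still using the old controller's output.
        bool execute(Controller& oldCtrl, Controller& newCtrl,
                     double (*readSensor)(), void (*applyCommand)(double),
                     int warmupSteps) {
            auto start = std::chrono::steady_clock::now();
            for (int i = 0; i < warmupSteps; ++i) {
                double y = readSensor();
                oldCtrl.observe(y);
                newCtrl.observe(y);                 // gather state before replacement
                applyCommand(oldCtrl.command());    // old controller still in charge
                if (std::chrono::steady_clock::now() - start > deadline_)
                    return false;                   // abort: deadline violated
            }
            applyCommand(newCtrl.command());        // discrete switch to the new controller
            return true;
        }

    private:
        std::chrono::milliseconds deadline_;
    };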

3 A Prototype Open Control Platform

An initial prototype of the OCP has been constructed, driven by the application of autonomous control of a Yamaha R-50/RMAX helicopter UAV. A test reconfiguration scenario has been demonstrated on the OCP in which a helicopter main rotor collective controller is replaced with an RPM controller when the collective actuators fail and become stuck.

Altitude control on a helicopter is normally achieved by controlling the magnitude of lift produced by the main rotor. The lift produced by the main rotor may be changed either by varying the collective pitch setting of the blades or by varying the rotational speed (RPM) of the main rotor. In practice, the collective is used as the primary control surface to change the overall lift produced by the main rotor. This is because the RPM degree of freedom is slower to respond and the margin of feasible RPM change on full-size helicopters is small. However, this margin is much larger in small unmanned helicopters and may thus be exploited as a secondary control surface in case the collective actuators fail and get stuck.

The functional setup of this scenario is shown in Figure 4 (feedback variables are not shown for clarity). The control system consists of a helicopter attitude controller (inner loop), a trajectory controller (outer loop), a pilot stick to provide trajectory commands, a mathematical model of the unmanned helicopter, a multimedia GUI for visualizing the flight dynamics, and a fault detection and reconfiguration module. The trajectory controller [12,13] takes commanded trajectories $(X_C, Y_C, Z_C, \psi_C)$ as inputs and generates the attitude commands $(\phi_C, \theta_C, \psi_C)$ necessary to track the desired command.
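To recap why both the collective pitch and the rotor speed can serve as altitude effectors, the standard textbook expression for main rotor thrust (a generic relation, not taken from this paper) shows the two handles explicitly: thrust scales with the thrust coefficient, which grows with collective pitch, and with the square of rotor speed,

    T = C_T(\theta_0)\,\rho A (\Omega R)^2,
    \qquad
    \frac{\partial T}{\partial \theta_0} > 0 \quad\text{(collective channel)},
    \qquad
    \frac{\partial T}{\partial \Omega} = 2\,C_T \rho A R^2 \Omega > 0 \quad\text{(RPM channel)},

where $\theta_0$ is the collective pitch, $\rho$ the air density, $A$ the rotor disc area, $\Omega$ the rotor angular speed, and $R$ the rotor radius. Full-size helicopters operate in a narrow $\Omega$ band, which is why the collective is the primary channel; the wider feasible RPM range on a small UAV is what makes the RPM channel a viable backup.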


Within the trajectory controller, either the collective $\delta_{COLLECTIVE}$ or the RPM controller's output $\Omega_C$ may be used to control altitude [14]. The attitude controller [15] takes as input the commanded attitude and generates the actuator deflections, $\delta$, necessary to achieve and maintain the desired attitude. The actuator signals are then fed into the simulation model (or, in the future, the actual helicopter in flight). The fault detection and identification (FDI) module induces preprogrammed fault events and then generates reconfiguration commands. The functionality of the FDI module and the decision logic to generate the necessary reconfiguration are assumed to be known. In the future, the FDI module and the pilot stick will be replaced with a full-scale mission planner and fault-tolerant control algorithms that automatically detect and diagnose faults and decide how to reconfigure the hardware/software system. The simulation model will also eventually be replaced with the actual vehicle hardware, a Yamaha R-50/RMAX remotely piloted helicopter.
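The cascaded structure of this setup can be sketched as follows (hypothetical names, our own illustration): the trajectory controller produces attitude commands plus one altitude effector command, and which altitude channel (collective or RPM) is driven can be switched by a reconfiguration command such as the one issued by the FDI module.

    // Outputs of the trajectory (outer-loop) controller.
    struct AttitudeCommand { double phiC, thetaC, psiC; };   // commanded roll, pitch, yaw
    enum class AltitudeChannel { Collective, RPM };

    struct TrajectoryOutput {
        AttitudeCommand attitude;
        AltitudeChannel channel;   // which altitude effector is currently driven
        double effector;           // delta_collective or commanded rotor speed Omega_C
    };

    // Inner-loop attitude controller: commanded attitude -> actuator deflections.
    struct ActuatorDeflections { double deltaLon, deltaLat, deltaTail; };

    class AttitudeController {
    public:
        ActuatorDeflections track(const AttitudeCommand& cmd) {
            // Placeholder gains standing in for the model-inversion controller of [15].
            return {0.1 * cmd.thetaC, 0.1 * cmd.phiC, 0.1 * cmd.psiC};
        }
    };

    // Reconfiguration command, e.g., published by the FDI module when the
    // collective actuator is detected to be stuck.
    struct ReconfigureAltitudeChannel { AltitudeChannel newChannel; };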

Figure 4: Flight Control Configuration (pilot stick providing trajectory commands; trajectory controller with collective/RPM output; attitude controller; helicopter flight simulation model, in FORTRAN; multimedia GUI, in C++; and a fault detection/reconfiguration module, temporarily hardcoded).

Figure 5 shows the behavior of the simulated helicopter's altitude rate response over the course of the test reconfiguration scenario. For the purposes of this test, which is a vertical descent scenario, altitude rate ($\dot h$) was commanded rather than altitude itself. Figure 5 shows the commanded altitude rate ($\dot h_C$) and the response ($\dot h$) of the UAV over time. At point A, a vertical descent rate of approximately 3 ft/s is commanded and the UAV responds. Sometime between B and D, the collective actuator becomes stuck (point C). At D, a positive altitude rate is commanded in an attempt to recover the aircraft from its continuing descent; however, the helicopter fails to respond. A reconfiguration event occurs at E, where the RPM controller is substituted for the nominal collective controller while, if necessary, the collective control is switched off. This allows the helicopter to successfully recover altitude and continue its mission. Without this reconfiguration, the helicopter would have continued to descend until it hit the ground at a high descent rate.

The components integrated in the OCP demonstration system are able to communicate with each other even though they are running in different processes and are written in different programming languages (e.g., Fortran and C++). In the

demonstration, a simulated collective actuator fault triggers a reconfiguration event, which replaces the main rotor collective controller with the redundant main rotor RPM controller. This online switch is made gracefully through a simple change in the controller connections to the event channel.

The initial OCP prototype also demonstrates the ability to reuse legacy components and plug in new replacement components as the system evolves. A legacy flight dynamics model was used in the initial demonstration system. Since it was written in Fortran, it was wrapped in C++ and a standard CORBA IDL interface was defined to integrate it with the rest of the system. This allowed us to reuse a trusted, existing dynamics model in the original system. The OCP made it easy to replace this legacy component with a higher-fidelity dynamics model (written in C++) when it became available in the next version of the demonstration system a few months later.
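As a sketch of what such wrapping can look like (the routine name, argument list, and underscore name-mangling convention below are assumptions for illustration, not the actual interface of the legacy model), a Fortran subroutine can be exposed to C++ through an extern "C" declaration and hidden behind a small class that the CORBA servant then delegates to.

    #include <cstddef>
    #include <vector>

    // Assumed Fortran routine, e.g.  subroutine heli_step(state, n, dt)
    // Traditional Unix Fortran compilers export it with a trailing underscore and
    // pass all arguments by reference.
    extern "C" void heli_step_(double* state, const int* n, const double* dt);

    // Thin C++ wrapper; a CORBA servant implementing the IDL-defined interface
    // would delegate to this class, letting the rest of the system stay unaware
    // that the dynamics model is legacy Fortran.
    class LegacyFlightModel {
    public:
        explicit LegacyFlightModel(std::size_t stateSize) : state_(stateSize, 0.0) {}

        void step(double dt) {
            const int n = static_cast<int>(state_.size());
            heli_step_(state_.data(), &n, &dt);
        }

        const std::vector<double>& state() const { return state_; }

    private:
        std::vector<double> state_;
    };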

4 Ongoing Work and Open Issues

Currently, low-level control components and some mid-level control algorithms, such as a novel mode-transitioning algorithm, are integrated into the OCP. Additional mid-level control algorithms, primarily for fault detection, diagnosis, and isolation, and for fault-tolerant control, are being developed and will be integrated into the OCP in the near term. Higher-level mission planning and mode selection algorithms will be incorporated in the future. In addition, neural net and fuzzy logic-based algorithms for automatic limit prediction and avoidance are being developed for safety-critical applications, such as extreme-performance UAVs. Areas in which we are focusing our attention include the following.
• Real-time performance. We are exploring ways to balance tradeoffs that arise in providing system flexibility and modularity while maintaining real-time performance.
• Dynamic scheduling. The current OCP prototype uses real-time static scheduling in its event service. We plan to extend the OCP to take full advantage of recent advances in dynamic scheduling in the event service [6].
• Unified reconfiguration process. We are exploring a unified reconfiguration process which closely couples software architectural reconfiguration with control reconfiguration. This will allow information needed by both types of reconfiguration (such as resource monitoring) to be shared.

This paper describes the use of the latest software technology to enhance the capabilities of emerging control algorithms. A beneficial synergy exists in the concurrent development of the new control algorithms and of the software infrastructure to support them. While the controls technology is enabled by the open software infrastructure, it is also driving the underlying component-based software technology forward by demanding new capabilities to support on-line customization and extreme performance.

Acknowledgements

This work was performed under DARPA contract No. F33615-98-C-1341; we gratefully acknowledge DARPA's Software Enabled Controls program for its continued support. The authors would like to acknowledge the contributions of Brian Mendel and Bryan Doerr of the Boeing Phantom Works, and Scott Clements, Mark Hale, Freeman Rufus, Gideon Saroufiem, and Ilkay Yavrucuk of Georgia Tech.


Figure 5: Results of an online reconfiguration performed using the OCP (commanded altitude rate $\dot h_C$ and altitude rate response $\dot h$, in ft/s, plotted against time; points A through E mark the commanded descent, the actuator sticking, the unanswered recovery command, and the reconfiguration event).

References

1. P. P. Bonissone, V. Badami, K. H. Chiang, P. S. Khedkar, and K. W. Marcelle. Industrial Applications of Fuzzy Logic at General Electric. Proceedings of the IEEE, 83(3):450-465, 1995.
2. Component-Based Software. IEEE Software, special issue, vol. 15, no. 5, Sept.-Oct. 1998.
3. G. Glass. Overview of Voyager: ObjectSpace's Product Family for State-of-the-Art Distributed Computing. www.objectspace.com/products/vgrWhitePapers.htm, 1999.
4. T. Harrison, C. O'Ryan, D. Levine, and D. Schmidt. The Design and Performance of a Real-Time CORBA Event Service. IEEE Journal on Selected Areas in Communications, 1999.
5. S. Kannan, C. Restrepo, I. Yavrucuk, L. Wills, D. Schrage, and J.V.R. Prasad. Control Algorithm and Flight Simulation Integration Using the Open Control Platform for Unmanned Aerial Vehicles. Proc. of the Digital Avionics Conference, pp. 6.A.3-1 to 6.A.3-10, St. Louis, MO, Oct. 1999.
6. D. Levine, C. Gill, and D. Schmidt. Dynamic Scheduling Strategies for Avionics Mission Computing. In Proc. 17th AIAA/IEEE/SAE Digital Avionics Systems Conference (DASC), vol. 1, pp. C15/1-8, 1998.
7. D. Levine, S. Mungee, and D. Schmidt. The Design and Performance of Real-Time Object Request Brokers. Computer Communications, vol. 21, 1998.
8. T. F. Murphy, S. Yurkovich, and S.-C. Chen. Intelligent Control for Paper Machine Moisture Control. In Proc. of the IEEE Conference on Control Applications, Dearborn, MI, pp. 826-833, Sept. 1996.
9. Object Management Group. CORBA 2.2 Common Object Services Specification, Dec. 1998. http://www.omg.org.
10. P. Oreizy, N. Medvidovic, and R. N. Taylor. Architecture-Based Runtime Software Evolution. Proceedings of the International Conference on Software Engineering (ICSE'98), pp. 177-186, 1998.
11. P. Oreizy, M. Gorlick, R. N. Taylor, D. Heimbigner, G. Johnson, N. Medvidovic, A. Quilici, D. Rosenblum, and A. Wolf. An Architecture-Based Approach to Self-Adaptive Software. IEEE Intelligent Systems, vol. 14, no. 3, pp. 54-62, May/June 1999.
12. J.V.R. Prasad and A. M. Lipp. Synthesis of a Helicopter Nonlinear Flight Controller Using Approximate Model Inversion. Mathematical and Computer Modelling, vol. 18, no. 3-4, pp. 89-100, Aug. 1993.
13. J.V.R. Prasad, A. Calise, Y. Pei, and J. Corban. Adaptive Nonlinear Controller Synthesis and Flight Test Evaluation on an Unmanned Helicopter. Proc. of the IEEE Conference on Control Applications, Hawaii, Aug. 1999.
14. J.V.R. Prasad and I. Yavrucuk. Reconfigurable Flight Control Using RPM Control for Heli-UAVs. Proc. of the 25th European Rotorcraft Forum, Rome, Italy, Sept. 1999.
15. R. Rysdyk and A. Calise. Adaptive Model Inversion Flight Control for Tiltrotor Aircraft. Journal of Guidance, Control, and Dynamics, vol. 22, no. 3, pp. 402-407, May-June 1999.
16. D. Schrage and G. Vachtsevanos. Software-Enabled Control for Intelligent UAVs. Proc. of the 1999 International Conference on Control Applications, Hawaii, Aug. 22-27, 1999.
17. C. Szyperski. Component Software. Addison-Wesley, 1998.
18. G. Vachtsevanos. Hierarchical Control. In Handbook of Fuzzy Computation (E. Ruspini, P. Bonissone, and W. Pedrycz, eds.), Institute of Physics Publishing, pp. F2.2:42-53, 1998.
19. G. Vachtsevanos, W. Kim, S. Al-Hasan, F. Rufus, M. Simon, D. Schrage, and J.V.R. Prasad. Mission Planning and Flight Control: Meeting the Challenge with Intelligent Techniques. Journal of Advanced Computational Intelligence, 1(1):62-70, Oct. 1997.
20. J. Waldo. The Jini Architecture for Network-Centric Computing. Communications of the ACM, 42(7):76-82, July 1999.
