Specifying Trusted Distributed System Components

Jim Alves-Foss
Laboratory for Applied Logic
Department of Computer Science
University of Idaho
Moscow, Idaho 83844-1010, USA
Email: [email protected]

Abstract

In this paper we present a set of generic classes of system components for use in the formal specification and verification of reliable computer systems. Software developers can verify correctness and reliability properties for a particular generic class only once, and reuse the proofs each time they create an instantiation of a component of the generic class. As an example of the use of these generic classes we demonstrate how we can verify that two generic classes satisfy McCullough's restrictiveness security property. From these generic classes of components we create instantiations of the components of a simple distributed system, and reuse the security proofs to verify the total system security. Security is an important issue for government agencies, banking and other industries, and thus there is tremendous interest in the reliable performance of the security mechanisms of computer systems. A failure in such mechanisms can have far-reaching consequences, not only economic and personal but also in terms of national security.

1 Introduction

The development of complex computer software has been made more manageable by the use of modular programming techniques and programming constructs such as abstract data types and generic modules. These techniques can be applied to both the specification and implementation phases of the software development cycle. This paper presents an approach that uses generic modules in the specification and verification of trusted distributed system components, where a trusted component is one which is proven to satisfy a specified reliability property, which in the case of this paper is a security property.

It is important to understand that we are only concerned with critical system components (those that execute reliability-critical code) and not all system components. Given current tools and methodologies, formal approaches, such as the one outlined in this paper, need to be carefully weighed against an appropriate cost/benefit metric. Certain classes of system, such as text processing, will benefit far less from the application of a formal approach than would safety-critical systems. The approach we outline is designed for critical components, such as the exemplary security-critical components we provide in this paper.

The approach presented here consists of establishing a set of generic classes of system components for use in the formal specification and verification of reliable computer systems. In this paper we use the term generic class to describe a polymorphic, parameterized system component specification, where each class specifies a collection of components that have similar characteristics. This specification acts as a template, where each parameter defines certain characteristics (possibly very complex characteristics) of the system component. Associated with each template is a set of proof obligations and requirements on the parameters that will be used in the verification of the specification. The developer can instantiate these parameters for specific tasks, given that the inherent obligations are satisfied. Therefore, software developers need verify correctness and reliability properties for a particular generic class only once, and can then reuse the proofs each time they create an instantiation of a component of the generic class.

The properties that specify the reliability and correctness of software systems are dependent on the context of the system. For example, security is an important issue for government agencies, banking and other industries. In these settings there is tremendous interest in the reliable performance of the security mechanisms of computer systems. A failure in such mechanisms can have far-reaching consequences, not only economic and personal but also in terms of national security. Thus the benefit of applying a formal approach will more than offset the cost of the application.

In this paper we will use McCullough's restrictiveness security policy [14, 15] as an exemplary reliability property, and demonstrate how to verify that generic classes of components satisfy this property. From these generic classes of components we create instantiations of the components of a simple distributed system, and reuse the security proofs to verify the total system security, thus ensuring that the system specification reliably implements our security policy. All the proofs discussed in this paper have been mechanically checked using the HOL theorem proving system [11, 7].
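For readers who think in terms of programming-language genericity, the following OCaml sketch is a loose analogy to the generic classes described above; it is not the paper's HOL formulation. The module names (EVENT_HANDLER, MakeComponent), the event and state types, and the obligation_holds flag are all hypothetical, and the runtime assertion merely stands in for the proof obligations that, in the paper, are discharged once in HOL and then inherited by every instantiation.

```ocaml
(* Hypothetical parameter signature: the behaviour a concrete component must
   supply, together with a stand-in for the class's proof obligation. *)
module type EVENT_HANDLER = sig
  type event
  type state
  val init : state
  (* Process one input event, yielding a new state and any output events. *)
  val step : state -> event -> state * event list
  (* In the paper this is a theorem about [step]; here it is only a flag. *)
  val obligation_holds : bool
end

(* The "generic class": written and checked once, reused for every
   instantiation that satisfies the signature above. *)
module MakeComponent (H : EVENT_HANDLER) = struct
  (* Crude stand-in for discharging the proof obligation at instantiation time. *)
  let () = assert H.obligation_holds

  (* Fold the handler over a stream of input events, collecting all outputs. *)
  let run (events : H.event list) : H.state * H.event list =
    List.fold_left
      (fun (st, outs) ev ->
        let st', new_outs = H.step st ev in
        (st', outs @ new_outs))
      (H.init, []) events
end
```

Applying the functor to a concrete parameter corresponds to instantiating a component of the class, much as Section 5 instantiates a database component as a multi-level file server.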

We begin by giving a brief introduction to the HOL system and notation in Section 2. In Section 3 we discuss the formal system model and security policy we use in this paper. In Section 4 we present a classification of generic modules and discuss how we formalize their definition. In Section 5 we specialize a subclass of components to demonstrate how we can create a simple database component specification, and we present an example of an instantiation of the database component to define a simple multi-level file server for the Rushby-Randell Secure Distributed System [19].

2 The HOL system

To formally model the properties of a distributed system and to ensure the accuracy of our proofs, we felt that it was necessary to develop the proofs and properties using a mechanical verification system. This prevents proofs from containing logical mistakes, and assures that the foundations on which the work is based are sound. Due to the nature of the proofs, which include quantification over functions, we felt that a system which supports higher-order logic and a typed lambda calculus would facilitate our efforts. The HOL system was selected for this project due to its support for higher-order logic, generic specifications and polymorphic type constructs. Furthermore, its availability, ruggedness, local support, and a growing world-wide user base made it a very attractive selection.

HOL is a general theorem proving system developed at the University of Cambridge [11, 7] that is based on Church's theory of simple types, or higher-order logic [8]. Similar to predicate logic in allowing quantification over variables, higher-order logic also allows quantification over predicates and functions, thus permitting more general systems to be described. HOL is not a fully automated theorem prover but is more than simply a proof checker, falling somewhere between these two extremes. HOL has several features that contribute to its use as a verification environment: built-in theories, rules of inference for higher-order logic, proof tactics, a proof management system, and a metalanguage for extending the prover.
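As a small illustration (this theorem is standard in HOL and is not taken from the paper), the induction principle for the natural numbers is a single higher-order statement, because it quantifies over an arbitrary predicate P; in first-order logic it can only be expressed as an axiom schema with one instance per formula:

    ⊢ ∀P. P 0 ∧ (∀n. P n ⇒ P (SUC n)) ⇒ (∀n. P n)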

Notations and conventions. Throughout this paper, we present several definitions and theorems developed in the HOL system. To make this work more understandable to the reader unfamiliar with HOL syntax, we have run the HOL output through a preprocessor. This preprocessor translates HOL special symbols into their logic symbol counterparts. These include:

1. quantifiers: ∀, ∃, λ
2. logical operators: ¬, ∧, ∨, ⇒, ⇔, =
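As a small illustration of this conversion (the term is invented for exposition and does not appear in the paper), a HOL term entered in ASCII form such as

    !s. trusted s /\ (?e. output e) ==> secure s

is displayed by the preprocessor as

    ∀s. trusted s ∧ (∃e. output e) ⇒ secure s

where trusted, output and secure are hypothetical predicate names used only for this example.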