Interactive Computation: Stepping Stone in the Pathway From Classical to Developmental Computation 1

Antônio Carlos da Rocha Costa a,b,2   Graçaliz Pereira Dimuro a,3

a Escola de Informática, Universidade Católica de Pelotas, Pelotas, Brazil

b PPGC, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil

Abstract

This paper reviews and extends previous work on the domain-theoretic notion of Machine Development. It summarizes the concept of Developmental Computation and shows how Interactive Computation can be understood as a stepping stone in the pathway from Classical to Developmental Computation. A critical appraisal is given of Classical Computation, showing in which ways its shortcomings tend to restrict the possible evolution of real computers, and how Interactive and Developmental Computation overcome such shortcomings. A formal framework for Developmental Computation is sketched, and the current frontier of the work on Developmental Computation is briefly presented.

Key words: Interactive computation, developmental computation, domain theory, classical theory of computation

1 Introduction

In [5], the first author introduced a domain-theoretic approach to the conceptual analysis of Interactive and Developmental Computation. That thesis consisted of an epistemological analysis of the principles of Artificial Intelligence and the Theory of Computation, aiming, among other goals: (i) to make clear, by means of a historical review of Computer Science, that the notions of Interaction and Development were present in the area since the very beginning (and even before, in areas such as Cybernetics, and beyond); but that, for various reasons (mainly the too restrictive notion of

1 Work partially supported by CNPq and FAPERGS.
2 Email: [email protected]
3 Email: [email protected]
This is a preliminary version. The final version will be published in Electronic Notes in Theoretical Computer Science, URL: www.elsevier.nl/locate/entcs

computational effectiveness that was adopted in Classical Computation), they were always kept latent, and never fully explored; (ii) to show that interactive and developmental machines can go beyond the models of Classical Computation (CC), in the sense of introducing a shift in the scope of the notion of computation, bringing it from strictly algorithmic computational processes to non-algorithmic computational processes; (iii) to introduce, in a tentative way, some elementary developmental mechanisms capable of supporting processes of machine development.

Considering that [5] was elaborated in the late 1980's and early 1990's, before the seminal papers by Peter Wegner [21,22,23] on Interactive Computation (IC), and also before his immediately subsequent papers with Dina Goldin [12,25,26] – so, being unable to benefit from such papers –, it is remarkable how closely [5] matches the general goals of their work. On the other hand, [5] being based on different epistemological principles, namely those of Jean Piaget's Genetic Epistemology [14,15] (for the epistemological foundation of P. Wegner's work, see [24]), it is not surprising that some differences in purposes and results have appeared between that thesis and the work by Wegner & Goldin.

The present paper aims to highlight the domain-theoretic basis that the work adopted. Section 2 summarizes Domain Theory. Section 3 shows that Interaction was already embedded in the well-known von Neumann computer architecture. Section 4 introduces Developmental Computation. Section 5 sketches a domain-theoretic framework for Developmental Computation. Section 6 concerns related work. Section 7 presents the Conclusion and a brief overview of the current frontiers of Developmental Computation.

2 A Domain-based Appraisal of CC

Domain Theory [1,13] officially introduced in Computer Science the idea of partial object, that is, the result of a partial (unfinished) computation. Using partial objects, Domain Theory was able to give infinite computations the status of first-class citizens. Each infinite computation can be assigned a non-trivial meaning, thus allowing infinite computations to be distinguished from each other, so that they need not be simply dismissed as divergent.

A domain is an ordered structure whose elements are called objects. Objects of a domain are considered to be results of a computation. Computations are seen as processes that construct objects of a domain. Objects of domains are ordered according to the way each object participates in the construction of other objects. That is, if x and y are objects of a domain D and x is a part of y, one denotes this by x ⊑ y. The relation ⊑ is called the approximation relation, and x is said to be an approximation of y. Objects are said to be partial objects, since – in general – it is possible to aggregate new (partial) objects to a given object, to make it become a more complete object. Objects from which it is not possible to construct other
objects, because nothing can be added to them, are said to be total (complete) objects. Total objects are the maximal elements of the ordering ⊑.

Since a computation is a construction of (one or more) objects, the state of a computation at a given moment is given by the (partial) objects that have been constructed by the computation up to that moment. If, at time t, a computation has constructed a sequence of partial objects x0 ⊑ x1 ⊑ x2 ⊑ . . . ⊑ xt, then xt is the state of the computation at time t (assuming that x0 was the first partial object constructed by the computation, at time 0). The objects constructed by a finished computation are said to be its products, or results. If a computation has ended at time t, and up to that time it has constructed a sequence of partial objects x0 ⊑ x1 ⊑ x2 ⊑ . . . ⊑ xt, then xt is the final result of the computation. If an infinite computation constructs a chain of objects x0 ⊑ x1 ⊑ x2 ⊑ . . ., the result of that computation is the limit of such a chain, given by the least upper bound of that chain in the domain: ⊔{x0, x1, x2, . . .}. Results of infinite computations are, thus, ideal objects, limits of the computations that construct them.

The conceptual importance of domains is that they equip the Theory of Computation with the notion of product of an infinite computation. As mentioned before, this notion is by itself a departure from the framework of CC. Effectiveness of computations is taken, in CC, at the lowest possible level of conceptual richness: effectiveness is taken as deliverability of a finite result in finite time (see, e.g., [18]). It is interesting to contrast this requirement with Turing's original ideas: [19] was concerned with the computation of real numbers, and a successful computation was one that lasted forever, computing correctly all the digits of the infinite representation of its result.

Domains and operations on domains are required to satisfy a set of constraints that keep them within the acceptable limits of what are, intuitively, "computable operations". The main such constraints are the following: (1) The computation of a result object from a given object should not reduce the structure of the produced object if additional parts are added to the initial object. That is: better inputs do not excuse worse outputs. (2) The computation of a finite result should not depend on operations acting on infinite objects. That is, finite outputs can only depend on finite parts of input objects. Such requirements are called the monotonicity and continuity requirements, respectively. Formally, operations f : D1 → D2 on domains should be (1) monotonic (x ⊑ y ⇒ f(x) ⊑ f(y)) and (2) continuous (f(⊔X) = ⊔f(X), where X is a (possibly infinite) chain of objects in D1).

Finite chains of partial objects in D can be understood as constructions of their maximal objects: x0, x1, . . . , xn can be seen as a construction of xn. Infinite chains in D can be seen as constructions of their limits: X = x0, x1, . . . , xi, . . . is the construction of its least upper bound ⊔X in D. Thus all domains are required to be complete partial orders: (3) order completeness (X ⊆ D is a chain ⇒ ⊔X ∈ D). A fourth requirement can allow for construction-independent processes, that is, processes that do not depend on ⊑.
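As a concrete illustration of these notions (ours, not the paper's), the partial (lazy) natural numbers mentioned in Section 6 can be sketched in Haskell, where lazy evaluation makes partial objects, chains of approximations and their limits directly expressible; the names Nat, approx, infinity and double below are illustrative assumptions only.

```haskell
-- A hypothetical sketch (not from the paper): partial "lazy" natural numbers
-- as a simple domain, in the spirit of the lazy naturals of [8,9].
data Nat = Zero | Succ Nat

bottom :: Nat                  -- the wholly undefined (least) partial object
bottom = bottom

-- approx n x is the n-th finite approximation of x; the chain
-- approx 0 x ⊑ approx 1 x ⊑ ... has x as its least upper bound.
approx :: Int -> Nat -> Nat
approx 0 _        = bottom
approx _ Zero     = Zero
approx n (Succ m) = Succ (approx (n - 1) m)

infinity :: Nat                -- an ideal (total) object: the limit of the chain of Succ-approximations
infinity = Succ infinity

-- double is monotonic and continuous: to produce any finite part of its
-- output it inspects only a finite part of its input.
double :: Nat -> Nat
double Zero     = Zero
double (Succ n) = Succ (Succ (double n))
```

Under this reading, approx n infinity for increasing n enumerates a chain of partial objects whose limit, infinity, is never delivered at any finite time, which is exactly the kind of result that the classical notion of effectiveness has no way to assign a meaning to.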

The sole contemplation of some computational notions inherent to the Theory of Domains is enough to expose serious shortcomings in the classical notion of computation, shortcomings that the contemplation of the notion of interaction makes even more overt. The first such shortcoming is:

CC-1: In CC, non-terminating computations are meaningless.

The lack of a notion of limit, based on the notion of continuity that the approximation order supports, prevents CC from being able to handle partial objects, even when the operational models that constitute its hallmarks (Turing machines, λ-calculus reductions, etc.) openly exhibit, to any attentive eye, the internal handling of such partial objects. The second shortcoming of CC exposed by Domain Theory is:

CC-2: CC performs input-output mappings, not constructions.

The limitation to the computation of input-output mappings is the most serious shortcoming of CC exposed by the notion of interaction, as is well known from the research on reactive systems (e.g., [17]), and repeatedly stressed by Wegner & Goldin. This criticism is reinforced by Domain Theory. There is nothing in the notion of construction that requires the sequence of aggregation steps to be restricted so that just one single interaction step happens, with one initial complete object being given and one final object being (possibly) received back, and with everything that happens in between concealed inside the constructing machine, inaccessible from outside.

One may say that construction steps are input-output mappings [11], thus Turing-computable if the construction is effective in the classical sense, and that a construction is nothing more than a succession of such Turing machine computation steps, thus picturing domain constructions as interactive processes in the sense of Wegner & Goldin. But the problem is that nothing in Domain Theory requires construction steps to be Turing-computable. That is, nothing in the structure of domains limits constructions to classical computations: Domain Theory points to a conceptual framework where CC appears as the lower bound of a wide range of possible notions of computability, compatible with domains as universes of object constructions. In other terms, Domain Theory is compatible with a notion of non-algorithmic computation. That is, the third shortcoming of CC is:

CC-3: In CC, effective is synonymous with algorithmic.

The notion of computation as construction in domains opens the possibility that non-Turing-controlled sequences of non-Turing-computable construction steps be considered effective in a concrete way, based on physical symbol systems. This possibility lies at the core of the notion of Developmental Computation.

3 A View of IC

The fourth shortcoming of CC is:

CC-4: A classical computing machine operates as a closed system while it is computing, and thus its input objects cannot be altered during the computation, as such alterations require
the active participation of the machine environment.

CC models can only be extended with interactive input-output operations at the expense of dismissing the essential commitment that Church, Turing, Kleene, Post and others had to the solution of Hilbert's Entscheidungsproblem (Decision Problem). Hilbert's problem raised the question of whether every mathematical problem could be solved by a mechanical, non-creative procedure, performed by a mathematician thinking alone, in complete isolation from everyone else, doing calculations only with the help of paper and pencil, as vividly pictured by A. Turing [19]. Turing and all the others were strongly committed to keeping their models of computation within the bounds suggested by the procedural model proposed by Hilbert.

The essential feature allowing for interaction is the integration of the environment as a true participant of the computation process, playing an active role in the process. If the environment is an active participant in the computation process, the computation is no longer mechanical, in Hilbert's sense. That is, it is no longer effective in the restricted sense that CC assigns to that term. But it may still be effective, and mechanical, in a wider, physical-symbol-system-based sense. A current problem, then, is to characterize this wider notion of effectiveness, which surpasses the narrow sense of effectiveness of CC, and is able to encompass at least the effectiveness of, e.g., von Neumann computers. The work of Wegner & Goldin is, of course, the fundamental stepping stone in this direction.

So, the case is not that some (possibly remote) idea of a (possibly hard to conceive) domain structure, modelling the computations of a (possibly futuristic) very special kind of computer, may someday reveal a (possibly weird) example of object construction that cannot be performed by classical computations. The case is that even everyday computers – the so-called von Neumann computers [3] – demonstrate, through interaction, that CC is a very restricted notion of computation. The input-output behavior of von Neumann computers, allowing for interactive computations, shifts the domain of computation of such computers to areas that are very far from that contemplated by Church's Thesis. The simple fact is that Turing machines are perfect operational models of von Neumann computers only when von Neumann computers are operating in a non-interactive way, that is, when they are computing mathematical functions. Only when operating in such special, restricted modes of operation are von Neumann computers subsumed by Church's Thesis.

Besides introducing the environment as an active participant in the computation, and giving it the power to influence the construction process of output objects by affecting the structure of the input objects, the architecture that von Neumann designed for the programmable computer [3] supports other important features not present in CC machine models. von Neumann computers stretch Turing's notion of stored program to an extent that could not be anticipated from it. Turing's notion of stored
program (in a universal Turing machine) rests on the possibility of interpreting objects stored in memory (tape) either as data or as program instructions, depending on the context in which the computer's control unit accesses such objects. By incorporating the notion of stored program (and the associated feature of the duality of data and program) in his model of computers, and by combining such a feature with the possibility of dynamically entering input objects during a computation, von Neumann introduced a possibility that profoundly departs from the very essence of the computational possibilities allowed by Turing machines, namely, the possibility of a program being dynamically modified by the environment, during its computation.

The possibility of the interactive modification of running programs makes von Neumann computers situated machines, that is, interactive machines whose behaviors can only be fully understood in connection with the behaviors of the environments where they are situated. This central result of [5] shows that computer technology has always been based on the notions of IC, and that Interaction is not a late novelty introduced by the development of computer technology (as suggested by Wegner [22]).
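As a minimal illustration of this contrast (ours, not the paper's), a classical machine can be pictured as a one-shot input-output mapping, while an interactive machine consumes a stream of inputs supplied by the environment while it is already producing outputs; the Haskell names below are purely illustrative.

```haskell
-- A hypothetical sketch, not from the paper: a "classical" computation versus
-- an "interactive" one, where the environment keeps supplying inputs.
type Input  = Integer
type Output = Integer

classical :: Input -> Output        -- CC: one complete input, one complete output
classical x = x * x

interactive :: [Input] -> [Output]  -- IC: a (possibly infinite) input stream
interactive = scanl1 (+)            -- each output depends only on the inputs seen so far
```

Because Haskell lists are lazy, `interactive` can be run against an environment that decides its next input only after seeing the outputs produced so far, which is the kind of situated, environment-driven behavior the paper attributes to von Neumann computers.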

4 Developmental Computation

The consideration of von Neumann computers as situated computers immediately exposes a fifth shortcoming of CC:

CC-5: Classical computing machines have fixed structures.

DC concerns systems where the structure of computers can be modified dynamically. In particular, it concerns computational systems where the modification of the structure of machines can be understood as development. The question that immediately poses itself is, then: how can the fixity of the computer structure be overcome? One possible answer is to mimic the solution found by biological organisms: to arrange that elements of the system structure – material elements – be exchanged with the environment while the system is operating. Programming models inspired by molecular biology (e.g., [4]) will certainly serve as sources of answers to such a question.

In a situation of joint computation involving computer and environment, and with the possibility of material exchanges between them, neither the computer nor the environment is pre-determined in its structure, with the consequence that even the control rules of the computer's control unit need not remain fixed during the computation. We call developmental computation any computation where the structure of the computer is able to develop as the computation goes on, and we state the following requirement for DC:

DC-1: Developmental computers may vary their structure while computing, by exchanging material objects with their environments.

With the help of domains, DC can be seen as involving a special kind of construction, namely, the construction of the computing machine itself. This allows the discrimination of two aspects of computations, when seen as
constructions: on the one hand, a computation constructs the objects handled by the computing machine; on the other hand, a computation can construct the machine itself (if it is a developmental machine). In this context, the notion of purpose of computation has to be re-thought. For if the construction of objects by machines can be seen as an attempt to satisfy needs or requests from the environment (the users of the machine), what could be the purpose of the construction of the machine itself?

The latter question seems to admit two kinds of answers. First, one can see that the construction of the machine may serve some purposes of the environment (users), since more developed machines may be expected to perform better services. The second, somewhat unexpected, answer is that the construction of the machine may serve some purpose of the machine itself. The latter answer is surely an epistemological divisor, separating two different notions of machines: autonomous machines, that is, machines endowed with goals of their own; and heteronomous machines, that is, machines that have no goals of their own, their working being dedicated essentially to the fulfilment of goals of the environment. The question of whether a computing machine is possible that is autonomous and yet is not a living being existing on its own is, of course, still unsolved (see [6] for an attempt to define autonomy in the context of multi-agent systems).

Analyzing the general biological and psychological models presented by Piaget [14,15,16], including his models of the development of biological and cognitive structures, we think that two new fundamental processes should be incorporated into computing machines to leverage their developmental processes, namely, a process of internally regulated object construction, called equilibration, and a process of adaptation of the machine to the environment:

DC-2: Equilibration is the process of self-regulated construction of object constructors in computing machines.

DC-3: Adaptation is the process of self-regulated adjustment of internal and external operations of the computing machine to the possibilities and constraints determined by the environment.

We note, first, that self-regulated constructions are not a new idea in the Theory of Computation. John von Neumann himself explored them, in order to define computing machines with reliability features that approximate those of the human brain [20].

Following Piaget, we construe equilibration as a process operating through a set of development stages of the computing machine. At each development stage, the machine is able to construct particular kinds of internal and external objects, in certain ways, determined by the set of operations it has available for such purpose at that stage. Development stages are ordered according to the degree of their development, determined by some measure of the richness of the set of operations for object constructions available at that stage. When development is seen as a construction in a domain, the ordering of the stages of development is given by the approximation relation of the domain.

The equilibration process has two dimensions, namely, a diachronic dimension and a synchronic dimension [16]. The diachronic dimension is the one that regulates the development process as such. That is, it regulates the way the machine changes from one development stage to the next development stage. Major equilibration is the name given to the diachronic process of equilibration. The synchronic dimension is the one responsible for regulating the construction of the internal and external objects at each stage. Minor equilibration is the name given to that process.

Adaptation is correlative to equilibration, in the sense that the equilibration process provides the computing machine with better adaptation resources, while dysadaptation acts as an indicator of the need for new steps in the equilibration process. As the machine develops through its set of development stages, under the supervision of the adaptation process, it gets more and more adapted to the environment, as richer construction processes of internal and external objects become possible at each new stage, due to the richer set of object constructors that become available at that new stage. Adaptation is defined in terms of two subsidiary notions:

(i) Assimilation: the process by which the machine is able to apply to internal and external objects the set of its currently available operations, in order to achieve its current goals.

(ii) Accommodation: the process by which the machine is able to adjust its current set of operations, in order to make them better applicable to internal and external objects, in order to achieve its current goals.

Adaptation is thus defined as:

DC-4: Adaptation is the situation where every required assimilation is possible, because every required operation on a given environment can be performed, and every required accommodation is possible, because every required adjustment in the internal and external operations can also be performed.

Major equilibration furthers the stages of adaptation, because more internal and external objects can be handled with more sophisticated operations. Thus, major equilibration is the central factor of development [16]. On the other hand, the progress of adaptation requires ever more sophisticated stages of development, which can only be achieved through major equilibration.

5 Sketch of a Formal Framework for the Theory of DC

The distinction between development and evolution is based on the idea that development concerns individuals while evolution concerns sets of individuals, and can be formalized by requiring that development happens in a domain, so that the sequence of stages of a development (construction) guarantees the increasing richness of the operational structures of those stages, while evolution may be defined without that requirement. The main purpose of the following preliminary formal framework is just to indicate the basis on which we think it is possible to formally prove that DC
is a more encompassing notion than IC.

Let M be a developmental computing machine and T a discrete-time temporal structure. Then, define:

• Development stages:
(1) D_M is the set of possible stages of development of M
(2) D_M^t, the set of possible stages of development of M at time t ∈ T
(3) D_M^τ = ∪_{t∈τ} D_M^t, the set of possible stages of development of M for τ ⊆ T
(4) D_M^0 ⊆ D_M, the set of possible initial stages of development of M

• Machine operations:
(5) op(d), the operational structure of stage d, that is, the set of (internal and external) machine operations available at development stage d

• Approximation relation:
(6) ⊑ ⊆ D_M × D_M, the approximation relation between development stages of M, so that d ⊑ d′ iff d ∈ D_M^t, d′ ∈ D_M^{t′}, with t ≤ t′, and op(d) ⊆ op(d′) (development increases the richness of the operational structure).

• Development steps:
(7) Δ_t^{t+1} ⊆ D_M^t → D_M^{t+1}, the set of possible development steps at time t, defined so that every δ_t^{t+1} ∈ Δ_t^{t+1} guarantees the inclusion relation between the operational structures of development steps, that is, op(d) ⊆ op(δ_t^{t+1}(d)), for every d ∈ D_M^t

• Development relation:
(8) Δ_t^{t+n} = Δ_{t+n−1}^{t+n} ∘ Δ_{t+n−2}^{t+n−1} ∘ . . . ∘ Δ_{t+1}^{t+2} ∘ Δ_t^{t+1}, the development relation (possibly, a function) operating from t to t + n, so that for all d, d′ ∈ D_M it happens that d ⊑ d′ iff d ∈ D_M^t and d′ ∈ D_M^{t′}, for t, t′ ∈ T with t < t′, and (d, d′) ∈ δ_t^{t′} for δ_t^{t′} ∈ Δ_t^{t′}; so that if (d, d′) ∈ δ_t^{t′} then op(d) ⊆ op(d′).

Let E be the evolutive environment of a developmental computing machine M. Define:

• Evolution stages:
(1) E_M, the set of possible evolution stages of the environment E of M
(2) E_M^t, the set of possible stages of evolution of the environment at t ∈ T
(3) E_M^τ = ∪_{t∈τ} E_M^t, the set of possible stages of evolution of the environment during the period τ ⊆ T
(4) E_M^0 ⊆ E_M, the set of possible initial stages of evolution of the environment

• Environment operations:
(5) op(e), the set of operations that the environment is able to apply to the machine at evolution stage e ∈ E_M^t

• Approximation relation:
(6) ⊑ ⊆ E_M × E_M, the approximation relation between stages of evolution of the environment, so that e ⊑ e′ iff e ∈ E_M^t, e′ ∈ E_M^{t′}, with t ≤ t′ (with no requirement of enrichment of the environment's operational structure)

• Evolution steps:
(7) Υ_t^{t+1} ⊆ E_M^t → E_M^{t+1}, the set of possible environment evolution steps at time t
• Evolution relation:
(8) Υ_t^{t+n} = Υ_{t+n−1}^{t+n} ∘ Υ_{t+n−2}^{t+n−1} ∘ . . . ∘ Υ_{t+1}^{t+2} ∘ Υ_t^{t+1}, the environment evolution relation operating from t to t + n.

Developmental machines and their evolutive environments must interact, if the machine is to operate in the environment. The idea of interaction that underlies the following formalization is that any interaction step is a coordination of two operations, one performed by the machine, the other performed by the environment. For such a coordination to occur, the two operations are required to be coherent (or compatible) in some sense.

Let M be a developmental computing machine, E its evolutive environment, d ∈ D_M^t, and e ∈ E_M^t. Then define:

• Interaction coherence:
(1) ≈ ⊆ op(d) × op(e), the coherence relation between operations of the development stage d ∈ D_M^t and operations of the evolution stage e ∈ E_M^t, so that the meaning of o_d ≈ o_e is that o_d ∈ d and o_e ∈ e are coherent (or compatible) with each other

• Adaptation of development stages:
(2) ⋈ ⊆ D_M^t × E_M^t, the adaptation relation between the set of development stages D_M^t and the set of evolution stages E_M^t, so that d ⋈ e iff ∀o_d ∈ d. ∃o_e ∈ e. o_d ≈ o_e and ∀o_e ∈ e. ∃o_d ∈ d. o_d ≈ o_e

• Adaptation of machines:
(3) ⋈_τ ⊆ Mach × Env, the adaptation relation between developmental computing machines and environments during the period τ ⊆ T, so that M ⋈_τ E iff ∀t ∈ τ. ∀d ∈ D_M^t. ∀e ∈ E_M^t. d ⋈ e.
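To make the intent of these definitions concrete, here is a deliberately simplified Haskell rendering (ours, not the paper's): a stage is identified with its operational structure op(d), the approximation relation becomes set inclusion, and adaptation d ⋈ e checks that every operation on either side has a coherent partner. All names and types here are illustrative assumptions.

```haskell
import qualified Data.Set as Set
import           Data.Set (Set)

-- Hypothetical sketch of the framework above, with stages reduced to their
-- operation sets; Op, Stage and Coherence are illustrative stand-ins.
type Op        = String
type Stage     = Set Op                   -- op(d): operations available at a stage
type Coherence = Op -> Op -> Bool         -- the relation ≈ between machine and environment operations

-- d ⊑ d': development never loses operations (op(d) ⊆ op(d')).
approxStage :: Stage -> Stage -> Bool
approxStage = Set.isSubsetOf

-- d ⋈ e: every machine operation has a coherent environment operation, and vice versa.
adapted :: Coherence -> Stage -> Stage -> Bool
adapted coh d e =
  all (\od -> any (coh od) (Set.toList e)) (Set.toList d) &&
  all (\oe -> any (`coh` oe) (Set.toList d)) (Set.toList e)

-- A development step is admitted only if it enriches (or preserves) the operational structure.
develop :: (Stage -> Stage) -> Stage -> Maybe Stage
develop step d = let d' = step d in if approxStage d d' then Just d' else Nothing
```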

6 Related work

The first results on DC were established even before the Ph.D. work in [5] was officially begun. They supported the M.Sc. dissertation by Martín Escardó [9], where the computability of recursive functions on partial (lazy) natural numbers was analyzed. Partiality (laziness) of natural numbers arises from allowing an interactive input of such numbers through successive approximations [2], so that recursive functions on them realize a model of IC. A short report about that work appeared in [8].

The second author made use in her thesis [7] of the idea of construction-independent processes to define a structure where real numbers and intervals of real numbers are constructively obtained. The structure, called bi-structured coherence spaces, is based on Girard's coherence spaces [10]. It is said to be a bi-structure because, besides the order structure of the approximation relation, which regulates the construction of real numbers and intervals, it also supports the algebraic structure of the operations on real numbers and intervals, established on the basis of the usual ordering of numbers. The operations of such an algebraic structure are defined so that they are all construction-independent.

An effort to work out the notion of computational systems with autonomous goals is under way [6].

7 Conclusion: The current frontier of DC

Of course, the main issue that still has to be clarified is the notion of material exchange between machines and environments. DC introduces many other issues, the most central of them being:

• Axiology: the idea that computing machines have goals of their own implies the idea that computing machines have values of their own. The understanding of what such values may be, and how they should give rise to rules to which computing machines would adhere on their own, is a major problem that should be solved prior to the establishment of the Theory of Developmental Computation.

• Teleonomy: the idea that the development of a computing machine should proceed according to principles that are internal to the computing machine is connected to the axiological problem just mentioned, but concerns specifically developmental goals and rules. The problem of teleonomy is, thus, the central problem of DC.

Acknowledgements: To an anonymous referee, for very useful comments.

References

[1] S. Abramsky and A. Jung. Domain theory. In S. Abramsky, D. Gabbay, and T. Maibaum, editors, Handbook of Logic in Computer Science, volume 3, pages 1–168. Clarendon Press, 1994.

[2] R. Bird. Introduction to Functional Programming using Haskell. Prentice-Hall, 1998. 2nd edition.

[3] A. W. Burks, H. H. Goldstine, and J. v. Neumann. Preliminary discussion of the logical design of an electronic computing instrument. Part I, vol. 1, 1946. In A. H. Taub, editor, John von Neumann – Collected Works, pages 34–79. MacMillan, New York, 1963.

[4] L. Cardelli. Bioware languages. In A. Herbert and K. S. Jones, editors, Computer Systems: Theory, Technology, and Applications – A Tribute to Roger Needham, pages 59–65, Berlin, 2004. Springer.

[5] A. C. R. Costa. Machine Intelligence: Sketch of a Constructivist Approach. PhD thesis, Programa de Pós-Graduação em Computação, UFRGS, Porto Alegre, RS, Brazil, October 1993. In Portuguese.

[6] A. C. R. Costa and G. P. Dimuro. Agent drives and the functional foundation of agent autonomy. ESIN-UCPel, 2004. To be submitted.

[7] G. P. Dimuro. Bi-structured Coherence Spaces and the Construction of Real Numbers and Intervals of Real Numbers. PhD thesis, Programa de Pós-Graduação em Computação, UFRGS, Porto Alegre, RS, Brazil, March 1998. In Portuguese.

[8] M. H. Escardó. On lazy natural numbers with applications to computability theory and functional programming. ACM SIGACT News, February 1993.


[9] M. H. Escardó. Partial natural numbers. Master's thesis, CPGCC/UFRGS, Porto Alegre, 1993. In Portuguese.

[10] J. Y. Girard. Linear logic. Theoretical Computer Science, 59:1–102, 1987.

[11] D. Goldin, S. Smolka, P. Attie, and E. Sonderegger. Turing machines, transition systems, and interaction. Information and Computation, 194(2):101–128, Nov. 2004.

[12] D. Goldin and P. Wegner. Persistence as a form of interaction, Jul. 1998. Tech. Rep. CS-98-07.

[13] A. Jung. Domains and denotational semantics: History, accomplishments, and open problems. Bulletin of the EATCS, 1996.

[14] J. Piaget. Introduction à l'Épistémologie Génétique. PUF, Paris, 1950.

[15] J. Piaget. Biology and Knowledge: an essay on the relations between organic regulations and cognitive processes. The University of Chicago Press, Chicago, 1971.

[16] J. Piaget. The Development of Thought: equilibration of cognitive structures. Viking Press, New York, 1977.

[17] A. Pnueli. Linear and branching structures in the semantics and logics of reactive systems. In W. Brauer, editor, ICALP'85 – 12th International Colloquium on Automata, Languages, and Programming, pages 15–32. Springer-Verlag, 1985. LNCS, vol. 194.

[18] H. Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, New York, 1967.

[19] A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., 42:230–265, 1936.

[20] J. von Neumann. The Computer and the Brain. Yale Univ. Press, New Haven, 1958. (2nd ed., 1967).

[21] P. Wegner. Machine models and simulations. In G. Agha, P. Wegner, and A. Yonezawa, editors, Research Directions in Concurrent Object-Oriented Programming. MIT Press, 1993.

[22] P. Wegner. Why interaction is more powerful than algorithms. Comm. of the ACM, May 1997.

[23] P. Wegner. Interactive foundations of computing. Theoretical Computer Science, Feb. 1998.

[24] P. Wegner. Towards empirical computer science. The Monist, Spring 1999.

[25] P. Wegner and D. Goldin. Interaction, computability, and Church's thesis, May 1999. Draft.

[26] P. Wegner and D. Goldin. Mathematical models of interactive computing, Jan. 1999. Draft.

A Mathematical Model of Dialog

Mark W. Johnson 1
Department of Mathematics, Pennsylvania State University, Altoona PA 16601-3760, USA

Peter McBurney 2
Department of Computer Science, University of Liverpool, Liverpool L69 3BX, UK

Simon Parsons 3
Department of Computer and Information Science, Brooklyn College, 2900 Bedford Avenue, Brooklyn NY 11210, USA

Abstract

Computer Science is currently undergoing a paradigm shift, from viewing computer systems as isolated programs to viewing them as dynamic multi-agent societies. Evidence of this shift is the significant effort devoted recently to the design and implementation of languages and protocols for communications and interaction between software agents. Despite this effort, no formal mathematical theory of agent interaction languages and protocols yet exists. We argue that such a theory needs to account for the semantics of agent interaction, and propose the first mathematical theory which does this. Our framework incorporates category-theoretic entities for the utterances made in an agent dialog and for the commitments incurred by those utterances, together with maps between these.

Key words: agent communications, auctions, category theory, dialogue games, FIPA ACL, interaction protocols, multi-agent systems

1 Email: [email protected]
2 Email: [email protected]
3 Email: [email protected]
This is a preliminary version. The final version will be published in Electronic Notes in Theoretical Computer Science, URL: www.elsevier.nl/locate/entcs

1 Introduction

The rise of the Internet, ambient computing, ad-hoc networks and virtual communities has led to a paradigm shift in how we view computer systems and computation [36]. Instead of computer systems being viewed simply as programs which execute some pre-determined method, a better analogy is to view systems as societies of interacting and autonomous entities, or "agents", who combine together as and when necessary to achieve possibly-conflicting individual objectives. This agent-oriented perspective has become influential within computer science over the last decade, and has made connections with prior work in biology (e.g., ecology, evolutionary theory), physics (statistical mechanics), economics (game theory) and sociology (organization theory) [34].

Designing a multi-agent computational system typically means specifying the capabilities and roles of the agents comprising the system, and their means of interaction. Accordingly, considerable research and development effort has been devoted to the design of languages and protocols for autonomous software agents to communicate with one another. The most widely-known language is FIPA's Agent Communications Language [7], which is perhaps the only real standard in this area. 4 FIPA ACL defines 22 locutions, or speech acts, which may be uttered by agents in an interaction in any order, in the same way as humans may freely utter sentences from a human language. Because such freedom leads to a state-space explosion in any realistic application, recent attention has been given to the design of interaction protocols which limit (to a greater or lesser extent) the freedom of agents to make utterances in any order.

The most widespread approach to the design of agent interaction protocols has drawn on dialog games from the philosophy of argumentation, which date at least to Aristotle [4] and which were revived in modern times by Charles Hamblin [9]. They have a structural resemblance to the games of economic game theory [25] and to the two-party games of model theory [15,10]. Agent interaction protocols have been articulated for many different types of dialogue, for example, for dialogs involving Information-Seeking, e.g. [2]; mutual Inquiry [19]; Persuasion [28]; Negotiation over the division of some scarce resource [3]; and Deliberations over what action to take in some circumstance [18]. See [21] for a review of recent work on agent dialogue-game protocols.

In all this work, it is assumed that the agents who enter multi-agent interactions do so for a purpose, although not necessarily a benign or unselfish one. In other words, their behavior is intentional, and so the expected and actual outcome(s) of an interaction are important in understanding it. Thus, any mathematical theory of protocols for such interactions needs to account for the semantics of the interaction, and perhaps also for the semantics of the utterances which comprise the interaction. In seeking such a theory, an obvious starting point would be Claude Shannon's theory of communication [30].

4 This is despite the many problems of FIPA ACL [24,26].

But Shannon, perhaps reflecting his career in a telecommunications company, explicitly ignores the semantics of messages: "Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem." [30, p. 31]. By contrast, for developers and users of agent systems, dealing with the semantics of messages and protocols is an essential part of the "engineering problem" [33]. There are in fact several different functions that a semantics for an agent communications language or dialog protocol may be required to serve:

• To provide a shared understanding to agents participating in a communicative interaction of the meaning of individual utterances, of sequences of utterances, and of dialogs.

• To provide a shared understanding to designers of agent protocols and to the designers (who may be different) of agents using those protocols of the meaning of individual utterances, of sequences of utterances, and of dialogs.

• To provide a means by which the properties of languages and protocols may be studied formally and with rigor, either alone or in comparison with other languages or protocols.

• To provide a means by which languages and protocols may be readily implemented.

Drawing on the programming language semantics literature, van Eijk [6] identified three generic types of semantics for agent communications languages. An axiomatic semantics defines each locution of a communications language or protocol in terms of the pre-conditions which must exist before the locution can be uttered, and possibly also the post-conditions which apply following its utterance. For example, the semantic language, SL, for the locutions of the FIPA Agent Communications Language, is an axiomatic semantics of the speech acts of the language, defined in terms of the beliefs, desires and intentions of participating agents [7]. Similarly, the semantics defined for many dialog game protocols for agent interaction, e.g., [2], are also axiomatic semantics. A second type of semantics, an operational semantics, considers the dialog locutions as instructions which operate successively on the states of some abstract machine. Here, the semantics defines the locutions in terms of the transitions they effect on the states of this machine. Operational semantics have recently been defined for some agent dialog protocols, e.g., [17]. Third, in denotational semantics, each element of the language syntax is assigned a relationship to an abstract mathematical entity, its denotation. Perhaps the first example of a denotational semantics for a dialog protocol was the possible-worlds semantics for question-response interactions defined by
Hamblin in 1956 [8]. Although possible-worlds and other denotational semantics have a long subsequent history in mathematical linguistics, only recently have denotational semantics been defined for agent dialog protocols. For instance, [22] presents a category-theoretic semantics for a broad class of deliberation dialog protocols, and uses this semantics to prove properties of dialogs conducted under these protocols.

These efforts at defining language and protocol semantics have focused primarily on individual languages or protocols, or on limited classes of protocols. We know of no effort underway to define a semantics for all agent interaction protocols. In other words, there is as yet no mathematical theory of agent interaction protocols, in the same way that the lattices of possible worlds semantics provide an algebraic theory for modal logical languages [27]. Such a theory would, we hope, provide a formal means to compare one protocol with another, to identify when two protocols are the same (in some sense or other), and to choose between protocols. Such a theory is the aim of our work.

This paper presents the first mathematical theory of agent interaction protocols, comprising a categorical semantics for a very broad class of protocols. We consider protocols which can be defined explicitly, and abstract away from the type of protocol and the nature of the interaction outcomes. Section 2 of the paper describes our view of Agent Interaction Protocols, Section 3 presents our semantics, while Section 4 presents some mathematical implications of the framework. In Section 5, we present examples of how interaction protocols may be represented in the framework, to illustrate its expressive power. We end the paper with a discussion of future work in Section 6.

It is worth noting that our problem domain and objectives differ from efforts currently underway to develop a semantics for interaction of computational processes in general, such as the work on game semantics [1]. As mentioned above, our domain involves interactions between purposeful agents, each entering a multi-agent dialog with the intention of achieving some goal. Accordingly, the outcomes (both partial and final) of agent interactions are important to any semantic theory, and to the design, engineering and control of the interactions. It is not clear to us that the game-semantics-of-interaction community has considered these issues as prominently as required by the agents community. On the other hand, the outcomes in agent dialogs are considerably more general than the real-valued monetary pay-offs typically assumed in economic game theory [25]. An abstraction of such payoffs is needed for any semantic theory of agent interactions. 5

5 The only work known to us in economic theory which abstracts from real-valued spaces is [31], but this uses category theory to prove a result about real spaces.

2 Agent Interaction Protocols

The syntactical form of an agent interaction protocol may be defined explicitly by specifying a number of elements [20], as follows:

Commencement Rules: Rules which define the circumstances under which the dialog commences. Typically, the Commencement Rules of a protocol refer to states prior to or outside the dialogue, and so are not modelled within it. We will not consider these further in this paper.

A collection of Locutions: Rules which indicate what utterances are permitted. Typically, legal locutions permit participants to assert propositions, permit others to question or contest prior assertions, and permit those asserting propositions that have been subsequently questioned or contested to justify their assertions. Justifications may involve the presentation of a proof of the proposition or an argument for it. 6

Combination Rules for the Locutions: Rules which define the dialogical contexts under which particular locutions are permitted or not, or obligatory or not. For instance, it may not be permitted for a participant to assert a proposition p and subsequently the proposition ¬p in the same dialogue, without in the interim having retracted the former assertion. Another example involves argumentative dialogue, where one agent may request another to justify a statement the latter has made; most such protocols require the claimant to respond to such a request immediately after the justification request is made.

A collection of Commitments: Some utterances in a dialog may commit the speaker to some claim or action. A bid in an auction, for example, commits the bidder to purchase the good in question at the price mentioned in the bid, if the bid is accepted. Typically, the assertion of a claim p in a debate is defined as indicating to the other participants some level of commitment to, or support for, the claim. Since [9], formal dialog systems typically establish and maintain public sets of commitments, called commitment stores, for each participant; these stores may be non-monotonic, in the sense that participants may also be permitted to retract committed claims, although possibly only under defined circumstances.

Combination Rules for the Commitments: Rules which define the ways in which Commitments may be combined or not. For example, it is usually not permitted for an agent, in the one dialogue, to commit to undertake some action and to subsequently commit not to undertake the same action, without first having withdrawn or cancelled the first commitment. Note that an agent who makes a commitment may not be able to withdraw or modify it without permission from other agents, depending on the rules of the dialog, as in [23].

6 Classifications of locutions have been given, for example, by [5,29].

Locution-Commitment Assignment Rules: An assignment of a commitment or commitments to each locution, in a manner compatible with the relevant combination rules.

Termination Rules: Rules that define the circumstances under which the dialog ends.

Thus, a dialog under a protocol defined by a structure such as this consists of an ordered sequence of locutions which is not forbidden by the combination rules for locutions. The commitment associated to a dialog then refers to the combination of the ordered sequence of commitments associated to these locutions. For example, an agent may make an offer at one point in a dialog and later retract this offer, if the protocol permits this. Even if a retraction utterance is permitted, the commitment associated to the initial offer may or may not then be cancelled by the commitment associated to the retraction, depending on the commitment combination rules of the protocol. Retraction of a prior offer may incur a penalty, for example, so that the commitments created by the prior offer still stand.

With respect to commitments, it is worth noting here that more than one notion of commitment is present in the literature on dialog games. For example, philosophers of argumentation often treat commitments in a purely dialogical sense, so that they may have no reference to anything beyond the dialogue, e.g., [9]. In contrast, others treat commitments as obligations to (execute, incur or maintain) a course of action [32]. These actions may be utterances in a dialogue, as when a speaker is forced to defend a proposition he has asserted against attack from others; so propositional commitment can be seen as a special case of action commitment. Because our primary motivation is the design of interaction protocols between autonomous software agents, we believe it is reasonable to define commitments in terms of future actions (or propositions) external to the dialogue. In a commercial negotiation dialogue, for instance, the utterance of an offer may express a willingness by the speaker to undertake a subsequent transaction on the terms contained in the offer. For this reason, we view commitments as referring to some objects (physical or virtual) in the world beyond the dialogue.

Of course, this structure does not capture all agent interaction protocols, for example, those which cannot be defined formally or finitely. However, it is sufficiently general to represent protocols of each type commonly seen in human or agent dialogues, such as those defined in the typology of [32]. In the next section, we present a categorical semantics for all protocols definable with this structure.
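For readers who think in code, the protocol structure just listed can be summarized as a record of its components. This is our own illustrative sketch, not part of the paper; every field name and type here is an assumption made purely for exposition.

```haskell
-- Hypothetical Haskell rendering of the protocol elements described above.
-- 'loc' stands for locutions, 'com' for commitments, 'st' for dialog states.
data Protocol loc com st = Protocol
  { locutions      :: [loc]                 -- the collection of permitted utterances
  , legalNext      :: st -> loc -> Bool     -- combination rules for the locutions
  , commitmentOf   :: loc -> [com]          -- locution-commitment assignment rules
  , combineCommits :: [com] -> Maybe [com]  -- combination rules for the commitments
                                            -- (Nothing marks a forbidden combination)
  , terminated     :: st -> Bool            -- termination rules
  }
```

Commencement rules are omitted, mirroring the paper's decision not to model them; a dialog is then any sequence of locutions accepted step by step by legalNext, and its associated commitment is whatever combineCommits yields for the corresponding commitment sequence.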

3 The Categorical Framework

We begin our presentation with some explanatory words on category theory [16]. A (small) category is a minimalist mathematical construct which consists
of two sets and a system of combination rules. The first set, called the objects of the category, is largely a placeholder. The second set, called the morphisms of the category, consists of a collection of arrows from one object to another. Thus, one might think of a morphism as an arrow with a tail (or source) and a head (or target). Given a pair of arrows, one can try to combine them to form a longer arrow if the head of one lies at the same object as the tail of the other. In other words, we can separate the collection of all morphisms in our category C into sets of arrows with the same head and tail. Then C(A, B) will denote the set of all arrows A → B, and the composition law is an assignment C(B, C) × C(A, B) → C(A, C) of an arrow gf : A → C to every pair consisting of f : A → B and g : B → C. The reason for writing gf rather than fg comes from the theory of mathematical functions, but the reader should keep in mind that "time flows from right to left" in this notation. In other words, gf represents first following the arrow f and then following the arrow g.

The first element of our model is to consider the locutions of a protocol as the arrows (or, more properly, morphisms) in a category where the composition law is determined by the combination rules for locutions in that protocol. We will tend to use the symbol D for this category. 7

There is one formal complication which arises from our desire to forbid certain combinations of locutions. In order to deal with this problem, we add a new element ∗ to every set C(A, B), which we think of as an illegal arrow. We also want to say that any composition with the illegal arrow on either side is another illegal arrow. The technical terminology for this process is to consider only categories "enriched over pointed sets". Notice that we can now say f followed by g is illegal within the categorical context by the equation gf = ∗. This complication should be viewed as purely formal and will be suppressed whenever this will cause no additional confusion. 8

One should keep in mind that the composition rule in a category must be associative. This is simply the statement that (hg)f = h(gf), so that all compositions can be formed in whatever order is convenient. In particular, this means that making hgf an illegal combination then implies both that h may not legally follow gf and that hg may not legally follow f. One other key feature of a category is that there is an identity morphism 1_B associated to each object B of the category. This has the property that 1_B f = f for any f : A → B and g 1_B = g for any g : B → C.

Now a dialog represents a sequence of composable arrows (f_n, f_{n−1}, . . . , f_1), where composable simply means the target of f_i and the source of f_{i+1} are the same object. The dialog
7 For those who take the view that a protocol consists solely of locutions and their combination rules, the category D alone then provides a model for a protocol.
8 For more on this construction, see [11].

is illegal precisely when the composition is illegal, f_n f_{n−1} . . . f_1 = ∗.

Up to this point, we have not dealt with the commitments in any way. As with the locutions, one might build a category of commitments with the commitments as arrows and their combination rules determining the composition law of the category. We will use O to indicate this category. However, in many cases it seems one would like to consider all commitments as "composable". Mathematically, this corresponds to assuming that there is only one object in O. In this case, O may be more efficiently described as a monoid, which simply means a set together with a multiplication that may (or may not) have inverses but does contain a unit. The set in question is the set of all morphisms of O, the multiplication is given by the composition law and the unit comes from the identity map of the unique object. For example, the whole numbers under addition form a monoid, with 0 as the unit. As another example, consider the integers under multiplication with 1 as the unit.

The question of the existence of inverses corresponds to the question of which commitments may be retracted without restriction. If all commitments have a retraction, the monoid becomes a group, a mathematical construct which may be more familiar than either monoids or categories. The group associated to the whole numbers under addition is the integers under addition. The group associated to the integers under multiplication is the rational numbers (fractions of integers) under multiplication. In fact, one can always find a smallest group that contains a particular monoid, which would allow us to focus on groups rather than monoids if we prefer. Notice the group would be Abelian (multiplication order is irrelevant) precisely when commitments are all time-independent of one another.

We still have not dealt with the Locution-Commitment Assignment Rules in the protocol structure. This involves some assignment of an arrow in O to each arrow in D in a manner compatible with the composition laws in the two categories. The term for an assignment between categories is a functor F : D → O, which associates an object F(D) of O to each object D of D. Further, associated to each arrow g : B → C in D, one has an arrow F(g) : F(B) → F(C) in O. Finally, for composable morphisms g and f, one has F(gf) = F(g)F(f), so one can compose and then map to O, or map each arrow to O and then compose, with the same results. Once again, we have technical restrictions due to the illegal morphisms, so we would like to force F(1_B) = 1_{F(B)} and F(∗) = ∗ as well. This says F must be an enriched functor between the two categories D and O which are enriched over pointed sets. Note that enriched category theory is a mature mathematical theory [13], and so we have access to a variety of well-known constructions.

Thus, our model for a protocol consists of a triple: D, O and F : D → O, where D and O are categories (enriched over pointed sets) with F an enriched functor.
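The triple (D, O, F) can be mimicked directly in code. The following is our own hypothetical sketch, not part of the paper: hom-sets enriched over pointed sets are modelled with Maybe, so that Nothing plays the role of the illegal arrow ∗ and composition with it stays illegal. All type and function names are illustrative assumptions.

```haskell
-- Hypothetical sketch of a small category "enriched over pointed sets":
-- composition may fail, and failure (Nothing) stands for the illegal arrow *.
data Cat obj arr = Cat
  { identity :: obj -> arr
  , compose  :: arr -> arr -> Maybe arr   -- compose g f means "first f, then g"
  }

-- A functor F : D -> O, given by its action on objects and on arrows;
-- it is expected (though not enforced here) to preserve identities, composition and *.
data Fun dObj dArr oObj oArr = Fun
  { onObjects :: dObj -> oObj
  , onArrows  :: dArr -> oArr
  }

-- A dialog is a sequence of composable locutions, listed in the order uttered;
-- its composite is Nothing exactly when some combination of locutions is forbidden.
composeDialog :: Cat obj arr -> [arr] -> Maybe arr
composeDialog _ []       = Nothing    -- no composite assigned to the empty dialog in this sketch
composeDialog c (f : fs) = foldl step (Just f) fs
  where step acc g = acc >>= compose c g   -- the later locution g is composed after what came before
```

Mapping a dialog's composite along onArrows then yields the commitment associated to that dialog, in the same spirit as the enriched functor F in the model above.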

4 Implications of the Framework

In the theory of categories, there is an obvious category which contains all of the functors F : D → O once we fix O. This would be called the category of pointed categories over O and could be denoted Cat∗/O. The morphisms in this category from F : D → O to G : C → O are the (enriched) functors H : D → C which make the following triangle commute:

        H
   D ------> C
    \       /
   F \     / G
      v   v
        O                    (1)

Recall that a diagram is said to commute if each path through the diagram yields the same result at any point when results can be compared, so this triangle commuting says GH = F . This implies there is a natural notion of morphism between the triples associated to protocols with the same commitment category, and they form the category Cat∗ /O. Among the most basic objects in category theory are categories denoted [n], which contain only a string of n composable morphisms aside from the required identity and illegal maps. For example: [1]

0

/1

[2]

0

/1

/2

[3]

0

/1

/2

(2)

/3

Then [0] simply consists of a single object and its identity and illegal morphisms. The main use for a category [n] is that a functor G : [n] → D is simply a string of composable morphisms. However, even if D is a category of locutions, the morphism G(k) → G(k + 1), which is an arrow between two objects in D, may correspond to either the identity on the object G(k) or a long string of combinable locutions in D. Suppose O has only one object (as in the auction examples considered below) and H : [n] → O is a functor. Then H corresponds to the choice of an ordered sequence of commitments and this makes H an object in Cat∗ /O. Thus, we can consider a morphism in Cat∗ /O, which consists of a commutative triangle G /D (3) [n] A AA AA H AAA

O.

~~ ~~ ~ ~F ~~

9

21

ETAPS 2005

Johnson, McBurney and Parsons

FInCo 2005

This corresponds to a dialog in our protocol whose associated commitment sequence is the ordered sequence of commitments associated to H. However, it may be that we have compressed the dialog by looking at a longer dialog and reducing it down to a shortened apparent length by composing certain portions and ignoring intermediate commitments over the excised period. In any event, the functor G chooses a sequence of composable morphisms in D, which corresponds to a dialog in our protocol (although we may be fastforwarding certain portions of the dialog in some sense). The assumption that the triangle commutes implies the commitments associated to the relevant portions of the dialog must be those associated to H. If we only want to pay attention to the final commitments rather than to an ordered sequence of intermediate commitments, we should simply consider the case n = 1. The reader should be aware that we do NOT assume all dialogs begin at the same point in this work. A natural object to associate to a model of a protocol is the set of all such commutative triangles (graded by n). This is a very natural construction in category theory and corresponds to the simplicial set associated to a category, often called the nerve of the category. In terms of protocols, this corresponds to looking at dialogs sorted by their ordered sequence of outcomes, possibly by ignoring intermediate commitments. If we restrict to what are usually called the one-simplices, or setting only n = 1, this corresponds to dialogs where we consider only the final commitments. There are a large number of notions of equivalence of simplicial sets and we are currently applying these to the study of protocol equivalence.

5

Some Examples

In this Section, we present some illustrative examples of agent interaction protocols represented in our categorical framework. The basic idea is that the locutions create a directed graph by tiling and then the combination rules impose relations via the composition law. 5.1

FIPA ACL

As mentioned above, FIPA ACL, the Agent Communications Language of the Foundation for Intelligent Physical Agents (FIPA), defines 22 locutions which may be uttered by agents in a dialog in any order [7]. These include locutions to inform another agent of the truth of some proposition, or to request that some action be undertaken. FIPA ACL does not define any locution combination rules, so that an agent may utter any of the 22 locutions at any point in a dialogue. This means the category D should be (basepoints added to) the free category on the 22 possible locutions, essentially just a repeated tiling where the tile consists of 22 morphisms with the same source and all targets are different. (See below for a tiling example with only three 10

22

ETAPS 2005

Johnson, McBurney and Parsons

FInCo 2005

morphisms.) Free objects are one of the most carefully studied concepts in category theory, sometimes thought of as the most universal constructions. This connection with a free category explains the feeling that many other protocols could be modeled by imposing relations on the FIPA ACL. For the commitment category in this example, notice that no commitments are defined or associated to locutions in the FIPA ACL, hence there are no combination rules for commitments. In our framework, this may be represented by saying the commitment category should simply be [0] described above. That is, we should instead think of a single outcome which iterates to itself as the outcome associated to each (legal) locution. To understand the functor F : D → [0], it will suffice to notice that no combination of locutions is forbidden, so the basepoints are an afterthought in this case. In order to represent this, we should think of F as adding basepoints to an ordinary functor from a free category to the category with one object and only the identity morphism. There is only one such functor into such a trivial category, namely the functor which sends all objects to the unique object and all morphisms to the identity morphism. Thus, our functor F will send only the basepoint morphisms to the basepoint morphism of [0] and every other morphism will be sent to the identity morphism in [0]. 5.2

An English Auction

Perhaps the most widely-used formal interaction protocols are auctions. These are processes by which one or more buyers negotiates the price of some good with one or more sellers [14]. In the most common form of auction, the so-called English auction, multiple potential buyers of a single good bid increasingly higher prices to purchase the good from a single seller. The winning bidder is that potential buyer who makes the highest bid, and the amount paid by the buyer is the amount indicated in that highest bid. Each bid may be viewed as an utterance creating a commitment to purchase the item if agreed by the seller. We can represent this process by viewing our category as a tiling, where a single tile is defined by the number of atomic locutions and the set of parameters allowable for each. For example, suppose a basic auction protocol (for two bidders) consists of three possible utterances: “Agent a increments the current bid by amount n”; “Agent b increments the current bid by amount m”; and “The clock ticks with no bid”. When a then increments the current bid, this is an atomic locution, while n is a parameter which would generally be a natural number. However, zero is always the parameter for the clock. (The clock is only included so that the end of the auction is detectable by three consecutive clock ticks.) The basic “tile” would then consist of four objects (one more than the number of atomic locutions), which we will label S, A, B, and C. Then D(S, A) would be the 11

23

ETAPS 2005

FInCo 2005

Johnson, McBurney and Parsons

(pointed) natural numbers (the possible parameters) corresponding to the first locution where a increments the current bid. Similarly, D(S, B) would be the (pointed) natural numbers corresponding to the locution where b increments the current bid. Finally, D(S, C) would be two points, one corresponding to the clock tick and the other to the illegal locution. There would be no other morphisms aside from the required identities and illegal morphisms. A diagram of this tile would be as follows: (4)

CO tick

}S b,N∗ }}} }} ~}}

a,N∗

/A

B. Now the point of the tiling idea is that we would think of each of 1, 2 or 3 as a new location for 0. One iteration of this process might yield the following diagram (where new objects are de-emphasized): •O

(5)

tick

b,N∗



CO

a,N∗

/•

•O

tick



~~ ~~ ~ ~ b,N∗ ~~ ~ •O ~~ ~~ ~ tick ~ ~~ /• B ~ a,N∗ b,N∗ ~~ ~~ ~~

S

tick

/A ~ ~ ~ ~~b,N∗ ~~

a,N∗



a,N∗

/•

• Iterating this procedure yields something like a lattice in Rn , which can be described as a free category (enriched over pointed sets). However, we now need to introduce the relations inherent in our combination rules for locutions. In the case of our auction example, “a increments the bid by n” followed by “b increments the bid by m” should be viewed as equivalent to “b increments the bid by m+n”, for example. Notice we also avoid much of the state-space explosion problem of the FIPA ACL in this case, since any three consecutive ticks ends the dialog, allowing us to impose a height restriction in this diagram. Our outcome category for the English auction would consist of the pointed whole numbers times a small pointed monoid consisting of 1, a, b and the basepoint, which keeps track of the last real bidder (any tick of the clock would be given the identity in the bidder slot). The functor would simply take “a increments the bid by n” to the pair (a, n) in this notation, so our 12

24

ETAPS 2005

Johnson, McBurney and Parsons

FInCo 2005

relation above becomes (b, m)(a, n) = (b, m + n).

6

Conclusions

In this paper, we have presented the first mathematical theory of agent interaction protocols which takes explicit account of the semantics of protocols. We do this by representing formally the utterances and commitments in agent dialogs, and the relationships between them. Our model is a categorical one, and it abstracts away from the type of interaction and the nature of the commitments being discussed. It therefore applies to a very broad class of agent interaction protocols, and is also not limited to real-valued monetary transactions. In contrast, prior work on the semantics of agent dialogs has focused on the semantics of individual utterances, as in the semantics of the FIPA ACL [7], or on the semantics of dialogs under only one protocol, as in [17], or a limited class of protocols, as in [22]. Similarly, prior work on parametrizing the space of auction mechanisms, such as [35], does not extend to dialogue game protocols. This feature of our work helps answer an important question: Why use category theory?. Only category theory is sufficiently abstract that we could hope to represent all types of agent interaction protocols. That we were able to present a model of the FIPA ACL, an interaction language defined without explicit commitments, and a model of an auction protocol, in which utterances are usually assumed to incur commitments, shows the potential of this formalism. In addition, a categorical semantics is likely to prove necessary to answering the question: When are two protocols the same? In earlier work [12], we identified several distinct notions of protocol equivalence, and we are currently representing these different notions in our framework. A mathematical theory of protocols should be able to characterize different types of protocols and identify those which are similar or equivalent. A categorical semantics may also allow us to build new protocols with specific properties. Our future work is devoted to exploring the implications of this framework and applying it to protocol comparisons. Acknowledgments We are grateful for comments received from the anonymous FInCo 2005 referees.

References [1] S. Abramsky. Semantics of interaction: an introduction to game semantics. In A. M. Pitts and P. Dybjer, editors, Semantics and Logics of Computation, pages 1–31. Cambridge University Press, Cambridge, UK, 1997. [2] L. Amgoud, N. Maudet, and S. Parsons.

Modelling dialogues using

13

25

ETAPS 2005

Johnson, McBurney and Parsons

FInCo 2005

argumentation. In E. Durfee, editor, Proceedings of the International Conference on Multi-Agent Systems (ICMAS 2000), pages 31–38, Boston, MA, USA, 2000. IEEE Press. [3] L. Amgoud, S. Parsons, and N. Maudet. Arguments, dialogue, and negotiation. In W. Horn, editor, Proceedings of the European Conference on Artificial Intelligence (ECAI 2000), pages 338–342, Berlin, Germany, 2000. IOS Press. [4] Aristotle. Topics. Clarendon Press, Oxford, UK, 1928. (W. D. Ross, Editor). [5] J. L. Austin. How To Do Things with Words. Oxford University Press, Oxford, UK, 1962. [6] R. M. van Eijk. Programming Languages for Agent Communications. PhD thesis, Department of Computer Science, Utrecht University, Utrecht, The Netherlands, 2000. [7] FIPA. Communicative Act Library Specification. Standard SC00037J, Foundation for Intelligent Physical Agents, 3 December 2002. [8] C. L. Hamblin. Language and the Theory of Information. Ph.D. thesis, Logic and Scientific Method Programme, University of London, London, UK, 1957. Submitted October 1956. [9] C. L. Hamblin. Fallacies. Methuen, London, UK, 1970. [10] W. Hodges. A Shorter Model Theory. Cambridge University Press, Cambridge, UK, 1997. [11] M. W. Johnson. On pointed enrichments and illegal compositions. Technical Report ULCS-03-010, Department of Computer Science, University of Liverpool, Liverpool, UK, 2003. [12] M. W. Johnson, P. McBurney, and S. Parsons. When are two protocols the same? In M-P. Huget, editor, Communication in Multi-Agent Systems: Agent Communication Languages and Conversation Policies, Lecture Notes in Artificial Intelligence 2650, pages 253–268. Springer, Berlin, 2003. [13] G. M. Kelly. Basic Concepts of Enriched Category Theory. London Mathematical Society Lecture Notes 64. Cambridge University Press, Cambridge, UK, 1982. [14] V. Krishna. Auction Theory. Academic Press, San Diego, CA, USA, 2002. [15] P. Lorenzen. Ein dialogisches konstruktivit¨atskriterium. In Infinitistic Methods: Proc. Symp. Foundations of Mathematics, Warsaw, 2-9 September 1959, pages 193–200, Warszawa, Poland, 1961. PWN. [16] S. Mac Lane. Categories for the Working Mathematician. Graduate Texts in Mathematics 5. Springer, New York, second edition, 1998. [17] P. McBurney, R. M. van Eijk, S. Parsons, and L. Amgoud. A dialogue-game protocol for agent purchase negotiations. Journal of Autonomous Agents and Multi-Agent Systems, 7(3):235–273, 2003.

14

26

ETAPS 2005

Johnson, McBurney and Parsons

FInCo 2005

[18] P. McBurney, D. Hitchcock, and S. Parsons. The eightfold way of deliberation dialogue. Intelligent Systems, 2005. In press. [19] P. McBurney and S. Parsons. Representing epistemic uncertainty by means of dialectical argumentation. Annals of Mathematics and Artificial Intelligence, 32(1–4):125–169, 2001. [20] P. McBurney and S. Parsons. Games that agents play: A formal framework for dialogues between autonomous agents. Journal of Logic, Language and Information, 11(3):315–334, 2002. [21] P. McBurney and S. Parsons. Dialogue game protocols. In M-P. Huget, editor, Communication in Multi-Agent Systems: Agent Communication Languages and Conversation Policies, Lecture Notes in Artificial Intelligence 2650, pages 269– 283. Springer, Berlin, 2003. [22] P. McBurney and S. Parsons. A denotational semantics for deliberation dialogues. In N. R. Jennings, C. Sierra, E. Sonenberg, and M. Tambe, editors, Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2004), pages 86–93, New York City, 2004. ACM Press. [23] P. McBurney and S. Parsons. The Posit Spaces Protocol for multi-agent negotiation. In F. Dignum, editor, Advances in Agent Communication, Lecture Notes in Artificial Intelligence 2922, pages 364–382. Springer, Berlin, 2004. [24] P. McBurney, S. Parsons, and M. Wooldridge. Desiderata for agent argumentation protocols. In C. Castelfranchi and W. L. Johnson, editors, Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2002), pages 402–409, New York City, 2002. ACM Press. [25] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, Cambridge, MA, USA, 1994. [26] J. Pitt and A. Mamdani. Some remarks on the semantics of FIPA’s Agent Communications Language. Journal of Autonomous Agents and Multi-Agent Systems, 2:333–356, 1999. [27] S. Popkorn. First Steps in Modal Logic. Cambridge, UK, 1994.

Cambridge University Press,

[28] H. Prakken. On dialogue systems with speech acts, arguments, and counterarguments. In M. Ojeda-Aciego et al., editors, Proceedings of the Seventh European Confernce on Applications of Logic in Artificial Intelligence (JELIA 2000), Lecture Notes in Artificial Intelligence 1919, pages 224–238, Berlin, Germany, 2000. Springer. [29] J. Searle. Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, Cambridge, UK, 1969.

15

27

ETAPS 2005

Johnson, McBurney and Parsons

FInCo 2005

[30] C. E. Shannon. The mathematical theory of communication. In C. E. Shannon and W. Weaver, editors, The Mathematical Theory of Communication, pages 29–125. University of Illinois Press, Chicago, IL, USA, 1963. Originally published in the Bell System Technical Journal, October and November 1948. [31] H. Sonnenschein. An axiomatic characterization of the price mechanism. Econometrica, 42(3):425–434, 1974. [32] D. N. Walton and E. C. W. Krabbe. Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. State University of New York Press, Albany, NY, USA, 1995. [33] M. J. Wooldridge. Semantic issues in the verification of agent communication languages. Journal of Autonomous Agents and Multi-Agent Systems, 3(1):9–31, 2000. [34] M. J. Wooldridge. Introduction to Multiagent Systems. Wiley, New York, 2002. [35] P. R. Wurman, M. P. Wellman, and W. E. Walsh. A parametrization of the auction design space. Games and Economic Behavior, 35(1–2):304–338, 2001. [36] F. Zambonelli and H. v. D. Parunak. Signs of a revolution in computer science and software engineering. In P. Petta et al., editors, Engineering Societies in the Agents World (ESAW 2002), Lecture Notes in Artificial Intelligence 2577, pages 13–28, Berlin, 2003. Springer.

16

28

FInCo 2005 Preliminary Version

ETAPS 2005

FInCo 2005

A reflective higher-order calculus L.G. Meredith

1

CTO, Djinnisys Corporation 505 N72nd St, Seattle, WA 98103

Matthias Radestock

2

CTO, LShift, Ltd. 6 Rufus St, London N1 6PE

Abstract The π-calculus is not a closed theory, but rather a theory dependent upon some theory of names. Taking an operational view, one may think of the π-calculus as a procedure that when handed a theory of names provides a theory of processes that communicate over those names. This openness of the theory has been exploited in π-calculus implementations, where ancillary mechanisms provide a means of interpreting of names, e.g. as tcp/ip ports. But, foundationally, one might ask if there is a closed theory of processes, i.e. one in which the theory of names arises from and is wholly determined by the theory of processes. Here we present such a theory in the form of an asynchronous message-passing calculus built on a notion of quoting. Names are quoted processes, and as such represent the code of a process, a reification of the syntactic structure of the process as an object for process manipulation. Name-passing in this setting becomes a way of passing the code of a process as a message. In the presence of a dequote operation, turning the code of a process into a running instance, this machinery yields higher-order characteristics without the introduction of process variables. As is standard with higher-order calculi, replication and/or recursion is no longer required as a primitive operation. Somewhat more interestingly, the introduction of a process constructor to dynamically convert a process into its code is essential to obtain computational completeness, and simultaneously supplants the function of the ν operator. In fact, one may give a compositional encoding of the ν operator into a calculus featuring dynamic quote as well as dequote. Key words: concurrency, message-passing, process calculus, reflection

1 2

[email protected] [email protected] This is a preliminary version. The final version will be published in Electronic Notes in Theoretical Computer Science URL: www.elsevier.nl/locate/entcs

29

Meredith and Radestock

ETAPS 2005

1

FInCo 2005

Introduction

The π-calculus ([10]) is not a closed theory, but rather a theory dependent upon some theory of names. Taking an operational view, one may think of the π-calculus as a procedure that when handed a theory of names provides a theory of processes that communicate over those names. This openness of the theory has been exploited in π-calculus implementations, like the execution engine in Microsoft’s Biztalk [8], where an ancillary binding language providing a means of specifying a ‘theory’ of names; e.g., names may be tcp/ip ports or urls or object references, etc. But, foundationally, one might ask if there is a closed theory of processes, i.e. one in which the theory of names arises from and is wholly determined by the theory of processes. Behind this question lurk a whole host of other exciting and potentially enlightening questions regarding the role of names with structure in calculi of interaction and the relationship between the structure of names and the structure of processes. Speaking provocatively, nowhere in the tools available to the computer scientist is there a countably infinite set of atomic entities. All such sets, e.g. the natural numbers, the set of strings of finite length on some alphabet, etc., are generated from a finite presentation, and as such the elements of these sets inherit structure from the generating procedure. As a theoretician focusing on some aspects of the theory of processes built from such a set, one may temporarily forget that structure, but it is there nonetheless, and comes to the fore the moment one tries to build executable models of these calculi. To illustrate the point, when names have structure, name equality becomes a computation. But, if our theory of interaction is to provide a basis for a theory of computation, then certainly this computation must be accounted for as well. Moreover, the fact that any realization of these name-based, mobile calculi of interaction must come to grips with names that have structure begs the question: would the theoretical account of interaction be more effective, both as a theory in its own right and as a guide for implementation, if it included an account of the relationships between the structure of names and the structure of processes? 1.1

Overview and contributions

Here we present a theory of an asynchronous message-passing calculus built on a notion of quoting. Names are quoted processes, and as such represent the code of a process, a reification of the syntactic structure of the process (up to some equivalence). Namepassing, then becomes a way of passing the code of a process as a message. In the presence of a dequote operation, turning the code of a process into a running instance, this machinery yields higher-order characteristics without the introduction of process variables. As is standard with higher-order calculi, replication and/or recursion is no longer required as a primitive operation. Somewhat more interestingly, the introduction of a process constructor to dynamically convert a process into its code is essential to obtain computational completeness, and simultaneously supplants the function of the ν operator. 2

30

Meredith and Radestock

ETAPS 2005 FInCo 2005 In fact, we give a compositional encoding of the ν operator into the calculus, making essential use of dynamic quote as well as dequote. Following the tradition started by Smith and des Rivieres, [3] we dub this ability to turn running code into data and back again, reflection; hence, the name r eflective, higher-order calculus, or rho-calculus, for short, or ρ-calculus for even shorter. Certainly, the paper presents a concrete calculus that may be used to model a variety of computations and highlights a number of interesting phenemona in those computations. We take the view, however, that the main contribution is that the calculus provides an instrument to bring to life a set of questions regarding the role of names in calculi of interaction. These questions include the calculation of name equality as a computation to be considered within the framework of interaction and the roles of name equality in substitution versus synchronization. These questions don’t really come to life, though, without the instrument in hand. So, we turn immediately to the presentation of the calculus.

2

The calculus

2.0.1 Notation We let P, Q, R range over processes and x, y, z range over names.

ρ-calculus

P, Q ::= 0

null process

| x(y) . P

input

| xh|P |i

lift

| qxp

drop

| P |Q

parallel

x, y ::= pP q

quote

2.0.2 Quote Working in a bottom-up fashion, we begin with names. The technical detail corresponding to the π-calculus’ parametricity in a theory of names shows up in standard presentations in the grammar describing terms of the language: there is no production for names; names are taken to be terminals in the grammar. Our first point of departure from a more standard presentation of an asynchronous mobile process calculus is here. The grammar for the terms of the language will include a production for names in the grammar. A name is a quoted process, pP q. 2.0.3 Parallel This constructor is the usual parallel composition, denoting concurrent execution of the composed processes. 3

31

Meredith and Radestock

ETAPS 2005 FInCo 2005 2.0.4 Lift and drop Despite the fact that names are built from (the codes of) processes, we still maintain a careful disinction in kind between process and name; thus, name construction is not process construction. So, if one wants to be able to generate a name from a given process, there must be a process constructor for a term that creates a name from a process. This is the motivation for the production xh|P |i, dubbed here the lift operator. The intuitive meaning of this term is that the process P will be packaged up as its code, pP q, and ultimately made available as an output at the port x. A more formal motivation for the introduction of this operator will become clear in the sequel. But, it will suffice to say now that pP q is impervious to substitution. In the ρ-calculus, substitution does not affect the process body between quote marks. On the other hand, xh|P |i is susceptible to substitution and as such constitutes a dynamic form of quoting because the process body ultimately quoted will be different depending on the context in which the xh|P |i expression occurs. Of course, when a name is a quoted process, it is very handy to have a way of evaluating such an entity. Thus, the qxp operator, pronounced drop x, (eventually) extracts the process from a name. We say ‘eventually’ because this extraction only happens when a quoted process is substituted into this expression. A consequence of this behavior is that qxp is inert except under and input prefix. One way of saying this is that if you want to get something done, sometimes you need to drop a name, but it should be the name of an agent you know. Remark 2.1 The lift operator turns out to play a role analogous to (ν x)P . As mentioned in the introduction, it is essential to the computational completeness of the calculus, playing a key role in the implementation of replication. It also provides an essential ingredient in the compositional encoding of the ν operator. Remark 2.2 It is well-known that replication is not required in a higher-order process algebra [13]. While our algebra is not higher-order in the traditional sense (there are not formal process variables of a different type from names) it has all the features of a higherorder process algebra. Thus, it turns out that there is no need for a term for recursion. To illustrate this we present below an encoding of !P in this calculus. Intuitively, this will amount to receiving a quoted form of a process, evaluating it, while making the quoted form available again. The reader familiar with the λ-calculus will note the formal similarity between the crucial term in the encoding and the paradoxical combinator [1]. 2.0.5 Input and output The input constructor is standard for an asynchronous name-passing calculus. Input blocks its continuation from execution until it receives a communication. Lift is a form of output which – because the calculus is asynchronous – is allowed no continuation. It also affords a convenient syntactic sugar, which we define here. x[y] , xh|qyp|i 4

32

Meredith and Radestock

ETAPS 2005 FInCo 2005 2.0.6 The null process As we will see below, the null process has a more distinguished role in this calculus. It provides the sole atom out of which all other processes (and the names they use) arise much in the same way that the number 0 is the sole number out of which the natural numbers are constructed; or the empty set is the sole set out of which all sets are built in ZF -set theory [7]; or the empty game is the sole game out of which all games are built in Conway’s theory of games and numbers [2]. This analogy to these other theories draws attention, in our opinion, to the foundational issues raised in the introduction regarding the design of calculi of interaction. 2.1

The name game

Before presenting some of the more standard features of a mobile process calculus, the calculation of free names, structural equivalence, etc., we wish to consider some examples of processes and names. In particular, if processes are built out of names, and names are built out of processes, is it ever possible to get off the ground? Fortunately, there is one process the construction of which involves no names, the null process, 0. Since we have at least one process, we can construct at least one name, namely p0q 3 . Armed with one name we can now construct at least two new processes that are evidently syntactically different from the 0, these are p0q[p0q] and p0q(p0q) . 0. As we might expect, the intuitive operational interpretation of these processes is also distinct from the null process. Intuitively, we expect that the first outputs the name p0q on the channel p0q, much like the ordinary π-calculus process x[x] outputs the name x on the channel x, and the second inputs on the channel p0q, much like the ordinary π-calculus process x(x) . 0 inputs on the channel x. Of course, now that we have two more processes, we have two more names, pp0q[p0q]q and pp0q(p0q) . 0q. Having three names at our disposal we can construct a whole new supply of processes that generate a fresh supply of names, and we’re off and running. It should be pointed out, though, that as soon as we had the null process we also had 0 | 0 and 0 | 0 | 0 and consequently, we had the names p0 | 0q, and p0 | 0 | 0q, and .... But, since we ultimately wish to treat these compositions as merely other ways of writing the null process and not distinct from it, should we admit the codes of these processes as distinct from p0q? This question leads to several intriguing and apparently fundamental questions. Firstly, if names have structure, whether this derives from the structure of processes or something else, what is a reasonable notion of equality on names? How much computation, and of what kind, should go into ascertaining equality on names? Additionally, what roles should name equality play in a calculus of processes? In constructing this calculus we became conscious that substitution and synchronization identify two potentially very different roles for name equality to play in name-passing calculi. That these are very different roles is suggested by the fact that they may be carried out by very different mechanisms in a workable and effective theory. We offer one choice, but this is just one design choice 3

pun gratefully accepted ;-)

5

33

Meredith and Radestock

ETAPS 2005 FInCo 2005 among infinitely many. Most likely, the primary value of this proposal is to raise the question. Likewise, we offer a proposal regarding the calculation of name equality that is just one of many and whose real purpose is to make the question vivid. We wish to turn to the core mechanics of the calculus with these questions in mind. 2.2

Free and bound names

The syntax has been chosen so that a binding occurrence of a name is sandwiched between round braces, (·). Thus, the calculation of the free names of a process, P , denoted FN (P ) is given recursively by FN (0) = ∅ FN (x(y) . P ) = {x} ∪ (FN (P ) \ {y}) FN (xh|P |i) = {x} ∪ FN (P ) FN (P | Q) = FN (P ) ∪ FN (Q) FN (qxp) = {x} An occurrence of x in a process P is bound if it is not free. The set of names occurring in a process (bound or free) is denoted by N (P ). 2.3

Structural congruence

The structural congruence of processes, noted ≡, is the least congruence, containing αequivalence, ≡α , that satisfies the following laws: P | 0≡ P ≡0 | P P |Q ≡ Q|P (P | Q) | R ≡ P | (Q | R) 2.4

Name equivalence

We now come to one of the first real subtleties of this calculus. Both the calculation of the free names of a process and the determination of structural congruence between processes critically depend on being able to establish whether two names are equal. In the case of the calculation of the free names of an input-guarded process, for example, to remove the bound name we must determine whether it is in the set of free names of the continuation. Likewise, structural congruence includes α-equivalence. But, establishing α-equivalence between the processes x(z).wh|y[z]|i and x(v).wh|y[v]|i, for instance, requires calculating a substitution, e.g. x(v) . wh|y[v]|i{z/v}. But this calculation requires, in turn, being able to determine whether two names, in this case the name in the object position of the output, and the name being substituted for, are equal. As will be seen, the equality on names involves structural equivalence on processes, which in turn involves alpha equivalence, which involves name equivalence. This is a subtle mutual recursion, but one that turns out to be well-founded. Before presenting the technical details, the reader may note that the grammar above enforces a strict alternation between quotes and process constructors. Each question about a process that involves a 6

34

Meredith and Radestock

ETAPS 2005 FInCo 2005 question about names may in turn involve a question about processes, but the names in the processes the next level down, as it were, are under fewer quotes. To put it another way, each ‘recursive call’ to name equivalence will involve one less level of quoting, ultimately bottoming out in the quoted zero process. Let us assume that we have an account of (syntactic) substitution and α-equivalence upon which we can rely to formulate a notion of name equivalence, and then bootstrap our notions of substitution and α-equivalence from that. We take name equivalence, written ≡N , to be the smallest equivalence relation generated by the following rules. (Quote-drop) pqxpq ≡N x P ≡Q (Struct-equiv) pP q ≡N pQq 2.5

Syntactic substitution

Now we build the substitution used by α-equivalence. We use P roc for the set of processes, pP rocq for the set of names, and {~y /~x} to denote partial maps, s : pP rocq → pP rocq. A map, s lifts, uniquely, to a map on process terms, sb : P roc → P roc by the following equations. \ q} = 0 (0){pQq/pP \ q} = (R){pQq/pP \ q} | (S){pQq/pP \ q} (R | S){pQq/pP \ q} = (x){pQq/pP q}(z) . ((R{z/y}) \ {pQq/pP \ q}) (x(y) . R){pQq/pP \ q} = (x){pQq/pP q}h|R{pQq/pP \ q}|i (xh|R|i){pQq/pP   qpQqp x ≡ pP q N \ q} = (qxp){pQq/pP  qxp otherwise where   pQq x ≡ pP q N (x){pQq/pP q} =  x otherwise and z is chosen distinct from pP q, pQq, the free names in Q, and all the names in R. Our α-equivalence will be built in the standard way from this substitution. But, given these mutual recursions, the question is whether the calculation of ≡N (respectively, ≡, ≡α ) terminates. To answer this question it suffices to formalize our intuitions regarding level of quotes, or quote depth, #(x), of a name x as follows. #(pP q) = 1 + #(P ) 7

35

Meredith and Radestock

ETAPS 2005   max{#(x) : x ∈ N (P )} N (P ) 6= ∅ #(P ) =  0 otherwise

FInCo 2005

The grammar ensures that #(pP q) is bounded. Then the termination of ≡N (respectively, ≡, ≡α ) is an easy induction on quote depth. 2.6

Dynamic quote: an example

\ to Anticipating something of what’s to come, consider applying the substitution, {u/z}, the following pair of processes, wh|y[z]|i and w[py[z]q]. \ = wh|y[u]|i wh|y[z]|i{u/z} \ = w[py[z]q] w[py[z]q]{u/z} Because the body of the process between quotes is impervious to substitution, we get radically different answers. In fact, by examining the first process in an input context, e.g. x(z) . wh|y[z]|i, we see that the process under the lift operator may be shaped by prefixed inputs binding a name inside it. In this sense, the lift operator will be seen as a way to dynamically construct processes before reifying them as names. 2.7

Semantic substitution

The substitution used in α-equivalence is really only a device to formally recognize that binding occurrences do not depend on the specific names. It is not the engine of computation. The proposal here is that while synchronization is the driver of that engine, the real engine of computation is a semantic notion of substitution that recognizes that a dropped name is a request to run a process. Which process? Why the one whose code has been bound to the name being dropped. Formally, this amounts to a notion of substitution that differs from syntactic substitution in its application to a dropped name.   Q x ≡ pP q N \ q} = (qxp){pQq/pP  qxp otherwise In the remainder of the paper we will refer to semantic and syntactic substitutions simply as substitutions and rely on context to distinguish which is meant. Similarly, we \ will abuse notation and write {y/x} for {y/x}. Finally equipped with these standard features we can present the dynamics of the calculus. 2.8

Operational Semantics

The reduction rules for ρ-calculus are 8

36

Meredith and Radestock

ETAPS 2005

FInCo 2005 x0 ≡N x1

(Comm)

x0 h|Q|i | x1 (y) . P → P {pQq/y} In addition, we have the following context rules: P → P0 (Par) P | Q → P0 | Q P ≡ P0

P 0 → Q0

Q0 ≡ Q (Equiv)

P →Q The context rules are entirely standard and we do not say much about them, here. The communication rule does what was promised, namely make it possible for agents to synchronize and communicate processes packaged as names. For example, using the comm rule and name equivalence we can now justify our syntactic sugar for output. x[z] | x(y) . P = xh|qzp|i | x(y) . P → P {pqzpq/y} ≡ P {z/y} But, it also provides a scheme that identifies the role of name equality in synchronization. There are other relationships between names with structure that could also mediate synchronization. Consider, for example, a calculus identical to the one presented above, but with an alternative rule governing communication. ∀R.[Pchannel | Qchannel →∗ R] ⇒ R →∗ 0 pQchannel qh|Q|i | pPchannel q(y) . P → P {pQq/y} (Comm-annihilation) Intuitively, it says that the codes of a pair of processes, Pchannel , Qchannel , stand in channel/co-channel relation just when the composition of the processes always eventually reduces to 0, that is, when the processes annihilate one another. This rule is well-founded, for observe that because 0 ≡ 0 | 0, 0 | 0 →∗ 0. Thus, p0q serves as its own co-channel. Analogous to our generation of names from 0, with one such channel/co-channel pair, we can find many such pairs. What we wish to point out about this rule is that we can see precisely an account of the calculation of the channel/co-channel relationship as deriving from the theory of interaction. We do not know if the computation of name equality has a similar presentation, driving home the potential difference of those two roles in calculi of interaction. 9

37

Meredith and Radestock

ETAPS 2005 FInCo 2005 We mention, as a brief aside, that there is no reason why 0 is special in the scheme above. We posit a family of calculi, indexed by a set of processes {Sα }, and differing only in their communication rule each of which conforms to the scheme below. ∀R.[Pchannel | Qchannel →∗ R] ⇒ R →∗ R0 ≡ Sα pQchannel qh|Q|i | pPchannel q(y) . P → P {pQq/y} (Comm-annihilation-S) We explore this family of calculi in a forthcoming paper. For the rest of this paper, however, we restrict our attention to the calculus with the less exotic communication rule, using → for reduction according to that system and ⇒ for →∗ .

3

Replication

As mentioned before, it is known that replication (and hence recursion) can be implemented in a higher-order process algebra [13]. As our first example of calculation with the machinery thus far presented we give the construction explicitly in the ρ-calculus. D(x) , x(y) . (x[y]|qyp) !P (x) , xh|D(x) | P |i | D(x) !P (x) = xh|(x(y) . (x[y]|qyp)) | P |i | x(y) . (x[y]|qyp) → (x[y]|qyp){p(x(y) . (qyp|x[y])) | P q/y} = x[p(x(y) . (x[y]|qyp)) | P q] | (x(y) . (x[y]|qyp)) | P → ... ∗ → P | P | ... Of course, this encoding, as an implementation, runs away, unfolding !P eagerly. A lazier and more implementable replication operator, restricted to input-guarded processes, may be obtained as follows. !u(v) . P , xh|u(v) . (D(x) | P )|i | D(x) It is worth noting that the lift operator is essential to get computational completeness. A similar calculus equipped with only a static quote enjoys a computational expressiveness at least equivalent to context-free grammars, but short of context-sensitive. This fact is established and exploited in a forthcoming paper on a type system for the ρ-calculus.

4

Bisimulation

Having taken the notion of restriction out of the language, we carefully place it back into the notion of observation, and hence into the notion of program equality, i.e. bisimulation. That is, we parameterize the notion of barbed bisimulation by a set of names over which 10

38

Meredith and Radestock

ETAPS 2005 FInCo 2005 we are allowed to set the barbs. The motivation for this choice is really comparison with other calculi. The set of names of the ρ-calculus is global. It is impossible, in the grammar of processes, to guard terms from being placed into contexts that can potentially observe communication. So, we provide a place for reasoning about such limitations on the scope of observation in the theory of bisimulation. Definition 4.1 An observation relation, ↓N , over a set of names, N , is the smallest relation satisfying the rules below. y ∈ N , x ≡N y

(Out-barb)

x[v] ↓N x P ↓N x or Q ↓N x

(Par-barb)

P | Q ↓N x We write P ⇓N x if there is Q such that P ⇒ Q and Q ↓N x. Notice that x(y) . P has no barb. Indeed, in ρ-calculus as well as other asynchronous calculi, an observer has no direct means to detect if a message sent has been received or not. Definition 4.2 An N -barbed bisimulation over a set of names, N , is a symmetric binary relation SN between agents such that P S N Q implies: (i) If P → P 0 then Q ⇒ Q0 and P 0 S

0 NQ .

(ii) If P ↓N x, then Q ⇓N x. 

P is N -barbed bisimilar to Q, written P ≈N Q, if P S tion SN .

5

NQ

for some N -barbed bisimula-

Interpreting π-calculus

Here we provide an encoding of the pure asynchronous π-calculus into the ρ-calculus. Since all names are global in the ρ-calculus, we encounter a small complication in the treatment of free names at the outset. There are several ways to handle this. One is to insist that the translation be handed a closed program (one in which all names are bound either by input or by restriction). This alternative feels inelegant. Another is to provide an environment, r : Nπ → pP rocq, for mapping the free names in a π-calculus process into names in the ρ-calculus. Maintaining the updates to the environment, however, obscures the simplicity of the translation. We adopt a third alternative. To hammer home the point that the π-calculus is parameterized in a theory of names, we build a π-calculus in which the names are the names of ρ-calculus. This is no different than building a π-calculus using the natural numbers, or the set of URLs as the set of 11

39

Meredith and Radestock

ETAPS 2005 FInCo 2005 names. Just as there is no connection between the structure of these kinds of names and the structure of processes in the π-calculus, there is no connection between the processes quoted in the names used by the theory and the processes generated by the theory, and we exploit this fact. 5.1

π-calculus

More formally,

π-calculus

P, Q ::= 0 | x[y] | x(y) . P | (ν x)P | P |Q | !P x, y ::= x, y ∈ pP rocq

Note well: names are quoted ρ-calculus processes. 5.2

Structural congruence

Definition 5.1 The structural congruence, ≡, between processes is the least congruence closed with respect to alpha-renaming, satisfying the abelian monoid laws for parallel (associativity, commutativity and 0 as identity), and the following axioms: (i) the scope laws: (ν x)0 ≡ 0, (ν x)(ν x)P ≡ (ν x)P, (ν x)(ν y)P ≡ (ν y)(ν x)P, P | (ν x)Q ≡ (ν x)P | Q, if x 6∈ FN (P ) (ii) the recursion law: !P ≡ P | !P (iii) the name equivalence law: P ≡ P {x/y}, if x ≡N y 5.3

Operational semantics

The operational semantics is standard. 12

40

Meredith and Radestock

ETAPS 2005

FInCo 2005 (Comm) x[z] | x(y) . P → P {z/y}

In addition, we have the following context rules: P → P0 (Par) P | Q → P0 | Q P → P0 (New) (ν x)P → (ν x)P 0 P ≡ P0

P 0 → Q0

Q0 ≡ Q (Equiv)

P →Q Again, we write ⇒ for →∗ , and rely on context to distinguish when → means reduction in the π-calculus and when it means reduction in the ρ-calculus. The set of π-calculus processes will be denoted by P rocπ . 5.4

The translation

The translation will be given by a function, [[−]](−, −) : P rocπ ×pP rocq×pP rocq → P roc. The guiding intuition is that we construct alongside the process a distributed memory allocator, the process’ access to which is mediated through the second argument to the function. The first argument determines the shape of the memory for the given allocator. Given a process, P , we pick n and p such that n 6= p and distinct from the free names of P . For example, n = pΠm∈F N (P ) m[p0q]q and p = pΠm∈F N (P ) m(p0q) . 0q. Then [[P ]] = [[P ]]2nd (n, p) where [[0]]2nd (n, p) = 0 [[x[y]]]2nd (n, p) = x[y] [[x(y) . P ]]2nd (n, p) = x(y) . [[P ]]2nd (n, p) [[P | Q]]2nd (n, p) = [[P ]]2nd (nl , pl ) | [[Q]]2nd (nr , pr ) [[!P ]]2nd (n, p) = xh|[[P ]]3rd (nr , pr )|i | D(x) | nr [nl ] | pr [pl ] [[(ν x)P ]]2nd (n, p) = p(x) . [[P ]]2nd (nl , pl ) | p[n] and xl , px[x]q xr , px(x) . 0q [[P ]]3rd (n00 , p00 ) , n00 (n) . p00 (p) . ([[P ]]2nd (n, p) | (D(x) | n00 [nl ] | p00 [pl ])) 13

41

Meredith and Radestock

ETAPS 2005 FInCo 2005 Remark 5.2 It is also noteworthy that the translation is dependent on how the parallel compositions in a process are associated. Different associations will result in different bindings for ν-ed names. This will not result in different behavior, however, as the bindings will be consistent throughout the translation of the process. 



Theorem 5.3 (Correctness) P ≈π Q ⇐⇒ [[P ]] ≈r(FN(P )) [[Q]]. Proof sketch: An easy structural induction. One key point in the proof is that there are contexts in the ρ-calculus that will distinguish the translations. But, these are contexts that can see the fresh names, n, and the communication channel, p, for the ‘memory allocator’. These contexts do not correspond to any observation that can be made in the π-calculus and so we exclude them in the ρ-calculus side of our translation by our choice of N for the bisimulation. This is one of the technical motivations behind our introduction of a less standard bisimulation. Example 5.4 In a similar vein consider, for an appropriately chosen p and n we have [[(ν v)(ν v)u[v]]] = p(v) . ((pp[p]q(v) . u[v]) | (pp[p]q[pn[n]q])) | p[n] and [[(ν v)u[v]]] = p(v) . u[v] | p[n] Both programs will ultimately result in an output of a single fresh name on the channel u. But, the former program will consume more resources. Two names will be allocated; two memory requests will be fulfilled. The ρ-calculus can see this, while the π-calculus cannot. In particular, the π-calculus requires that (ν x)(ν x)P ≡ (ν x)P . Implementations of the π-calculus, however, having the property that (ν x)P involves the allocation of memory for the structure representing the channel x come to grips with the implications this requirement has regarding memory management. If memory is allocated upon encountering the ν-scope, there are situations where the left-hand side of the equation above will fail while the right-hand will succeed. Remaining faithful to the equation above requires that such implementations are lazy in their interpretation of (ν x)P , only allocating the memory for the fresh channel at the first moment when that channel is used. Having a detailed account of the structure of names elucidates this issue at the theoretical level and may make way to offer guidance to implementations. 5.5

Higher-order π-calculus

As noted above, the lift and drop operators of the ρ-calculus effectively give it features of a higher-order calculus [14], [15]. The translation of the higher-order π-calculus is quite similar to the translation for π-calculus. Of course, the higher-order π-calculus has application and one may wonder how this is accomplished. This is where the susceptibility of lift to substitutions comes in handy. For example, to translate the parallel composition of a process that sends an abstraction, (v)P , to a process that receives it and applies it to the values, v we calculate 14

42

Meredith and Radestock

ETAPS 2005 FInCo 2005 [[x[(v)P ] | (x(Y ) . Y hvi)]](z) = (z(v) . xh|[[P ]](z 0 )|i) | (x(y).qyp|z[[[v]](z 00 )]) where the translation is parameterized in a channel, z, for sending values, and z 0 and z are constructed from z in some manner analogous to what is done with n and p above. More generally, one may seek to understand the trade-offs between a presentation of higher-order capability in the higher-order π-calculus and the ρ-calculus. A detailed study is a subject worthy of an entire paper, but at a high level of description one may note that the same argument levied with the ordinary π-calculus applies here: the higher-order π-calculus does not offer a theory of names, but rather depends upon one being provided. An investigator interested in the higher-order π-calculus as an executable language must still address computation on names, such as calculating name equality in substitution or synchronization, outside of the framework of the theory. Additionally, the higher-order πcalculus has a larger inventory of moving parts: process variables, for sending and receiving processes, as well as names. On both counts the ρ-calculus is more minimalist, needing neither a theory of names, nor the machinery of process variables. On the other hand, minimalism does not always align with ease of use. Experience shows that when writing specifications in the ρ-calculus of any reasonable size one quickly adopts conventions that make the calculus resemble a more traditional higher-order calculus. 00

6

Conclusions and future work

We studied an asynchronous message-passing calculus built out of a notion of quote. We showed that the calculus provides a workable, effective theory of computation capable of encoding the π-calculus with a compositional account of the ν-operator, as well as the higher-order π-calculus. These encodings bring to light interesting computational phenomena that implementations of the π-calculus have had to face. Additionally, the development of the calculus highlights several intriguiging aspects of the relationships between the structure of processes and the structure of names. We note that this work is situated in the larger context of a growing investigation into naming and computation. Milner’s studies of action calculi led not only to reflexive action calculi [11], but to Power’s and Hermida’s work on name-free accounts of action calculi [6] as well as Pavlovic’s [12]. Somewhat farther afield, but still related, is Gabbay’s theory of freshness [5]. Very close to the mark, Carbone and Maffeis observe a tower of expressiveness resulting from adding very simple structure to names [9]. In some sense, this may be viewed as approaching the phenomena of structured names ‘from below’. By making names be processes, this work may be seen as approaching the same phenomena ‘from above’. But, both investigations are really the beginnings of a much longer and deeper investigation of the relationship between process structure and name structure. Beyond foundational questions concerning the theory of interaction, such an investigation may be highly warranted in light of the recent connection between concurrency theory and biology. In particular, despite the interesting results achieved by researchers in this field, there is a fundamental difference between the kind of synchronization observed in the π-calculus and the kind of synchronization observed between molecules at 15

43

Meredith and Radestock

ETAPS 2005 FInCo 2005 the bio-molecular level. The difference is that interactions in the latter case occur at sites with extension and behavior of their own [4]. An account of these kinds of phenomena may be revealed in a detailed study of the relationship between the structure of names and the structure of processes. Acknowledgments. The authors wish to thank Robin Milner for his thoughtful and stimulating remarks regarding earlier work in this direction, and Cosimo Laneve for urging us to consider a version of the calculus without heating rules.

References [1] Hendrik Pieter Barendregt. The Lambda Calculus – Its Syntax and Semantics, volume 103 of Studies in Logic and the Foundations of Mathematics. North-Holland, 1984. [2] John Horton Conway. On Numbers and Games. Academic Press, 1976. [3] J. des Rivieres and B. C. Smith. The implementation of procedurally reflective languages. In ACM Symposium on Lisp and Functional Programming, pages 331–347, 1984. [4] Walter Fontana. private conversation. 2004. [5] M. J. Gabbay. The π-calculus in FM. In Fairouz Kamareddine, editor, Thirty-five years of Automath. Kluwer, 2003. [6] Claudio Hermida and John Power. Fibrational control structures. In CONCUR, pages 117–129, 1995. [7] Jean-Louis Krivine. The curry-howard correspondence in set theory. In Martin Abadi, editor, Proceedings of the Fifteenth Annual IEEE Symp. on Logic in Computer Science, LICS 2000. IEEE Computer Society Press, June 2000. [8] Microsoft Corporation. Microsoft biztalk server. microsoft.com/biztalk/default.asp. [9] M.Carbone and S.Maffeis. On the expressive power of polyadic synchronisation in picalculus. Nordic Journal of Computing, 10(2):70–98, 2003. [10] Robin Milner. The polyadic π-calculus: A tutorial. Logic and Algebra of Specification, Springer-Verlag, 1993. [11] Robin Milner. Strong normalisation in higher-order action calculi. In TACS, pages 1–19, 1997. [12] Dusko Pavlovic. Categorical logic of names and abstraction in action calculus. Math. Structures in Comp. Sci., 7:619–637, 1997. [13] David Sangiorgi and David Walker. Cambridge University Press, 2001.

The π-Calculus: A Theory of Mobile Processes.

16

44

Meredith and Radestock

ETAPS 2005

FInCo 2005

[14] Davide Sangiorgi. Bisimulation in higher-order process calculi. Computation, 131:141–178, 1996.

Information and

[15] B. Thomsen. A Theory of Higher Order Communication Systems. Computation, 116(1):38–57, 1995.

Information and

17

45

ETAPS 2005

FInCo 2005 Preliminary Version

FInCo 2005

Time-awareness and Proactivity in Models of Interactive Computation Leo Motus 1 Tallinn Technical University 19086 Tallinn, Estonia

Merik Meriste Tartu University Institute of Technology 51014 Tartu, Estonia

Walter Dosch Institute of Software Technology University of L¨ ubeck L¨ ubeck, Germany

Abstract Autonomous and proactive behaviour of components characterize today computer applications. This has introduced systems architecture where the interactions of autonomous components (e.g. agents) are decisive in determining the overall behaviour of a system. The conventional agent-based architecture is to be enhanced with a sophisticated time model that supports time-aware behaviour and interactions of agents. This paper suggests a feature-space for taxonomy of models for interactive computation to foster the development and analysis of behavioural properties in time-aware agents, and multi-agents. This feature space has been developed in the context of an on-going KRATT project (a development environment for agents and multi-agents). The focus of this paper is on discussing the necessity and feasibility of introducing the new taxonomy. Key words: proactive and autonomous computing, time-awareness, time-sensitive interactions, multiple time systems.

1 Email: [email protected]

1 Time-awareness and proactivity in computing

The rapidly increasing use of components with autonomous and proactive behaviour characterises today's computer applications. This has introduced a new generic architecture for systems – multi-agent systems – where the (time-aware and proactive) interactions of autonomous components (agents) are decisive in determining the overall behaviour of the system. The overall behaviour of such systems (i.e. prescribed plus emergent behaviour) usually cannot be defined as a composition of the components' behaviours. Such systems operate in sophisticated environments that cannot be considered a single component. Instead, an environment is also considered a collection of closely interacting, potentially proactive and autonomous components that operates, to a large extent, independently of the system. A typical agent-based system that operates in a time-sensitive environment has a major additional property compared with a conventional real-time system – the complete list of interacting agents and the structure of their interactions cannot be finally fixed at the design stage. This property has invoked at least two new research topics. First, the agent-based architecture itself evolves in time, and different aspects of the evolving architecture need monitoring (and possibly supervision) in order to guarantee the required service. Second, research on real-time systems composed from autonomous agents – with time and location constraints imposed on an agent's individual behaviour, on the interaction of agents, and on the overall system's behaviour – needs a qualitatively new model of computation. Computationally, such a loosely coupled, time- and location-constrained collection of interacting agents can be considered a set of simultaneously processing concurrent streams that can violate the non-interference principle when exchanging information. This takes us from so-called algorithmic concurrency to forced (or true) concurrency, as considered by Motus in [20] and by Wegner in [37], e.g. the Q-model and the multi-stream interaction machine, respectively. Time- and location-aware agent-based systems are rapidly gaining influence in the contemporary world – cars, communication systems, transport systems, banking, and medical devices are just a few examples. All those devices and systems are built from autonomous components, and are essentially software-intensive (i.e. their functionality is determined by software). Software-intensive systems differ from other engineering systems in that they are clearly more capable of explicit proactive behaviour and rely on dynamic control structures more often than non-software-intensive systems in the artificial world. The notion of proactive behaviour was first applied to artefacts of the artificial world by the computer control, distributed artificial intelligence, and artificial life communities. The majority of software-intensive systems operate across the border of the natural and artificial worlds – e.g. computer control systems for technical devices and technological processes, autonomous mobile robots,


interactive problem-solving systems – and quite often they contain AI-based components. Considering the major trend in software design – from object-oriented design to (potentially autonomous and proactive) component-based design – it becomes only natural to apply proactive components explicitly for designing software-intensive systems. Proactive components can often be treated as autonomous agents. By proactivity the authors mean a component's ability to anticipate the evolution of its environment and to choose the goal-directed activities that lead to better satisfaction of the component's goal and, in the case of a well-designed system, to better satisfaction of the system's goal. Such an approach has been named kenetic engineering. The name was coined by J. Ferber in the context of distributed artificial intelligence research, and denotes the process of developing artificial systems by applying interacting autonomous components [6]. The similarity with genetic engineering, as defined for natural biological systems, is intentional and emphasises a certain cohesion between the building principles of proactive artificial systems and those of biological systems. Multi-agent systems rely essentially on behavioural features that cannot be specified in conventional algorithmic computing, but are inevitably present in real-time, autonomous, and/or proactive computing systems. Examples of such features are persistency of computation, direct interaction with the system's environment, time-awareness of behaviour, a dynamically evolving structure of interactions, and a remarkable share of emergent behaviour. These properties cannot be completely specified in advance – their form of appearance depends on the particular context and history of events in the system itself, as well as on the context and history of events in the system's environment. Similar features have always characterised real-time and embedded systems. Attempts to handle and analyse the above-mentioned features within the paradigm of algorithmic computing have led to theoretical difficulties [2,14,18,19,37,39]. The evolution of computer science is gradually reaching the understanding formulated by proponents of interactive computing as follows: "Interactive systems such as modelled by UML represent a new paradigm in computation that inherently cannot be modelled using traditional, or algorithmic, tools. At the heart of the new computing paradigm is the notion that a system's job is not to transform a single static input into an output, but rather to provide an ongoing service" [8]. It is interesting to note that from its early days the practice of object-oriented programming has followed (intuitively) the paradigm of interactive computing, except for object autonomy – any object has always had only partial control over its own methods and data structures. An autonomic object (or rather a set of such objects) with full control over its methods forms a pragmatic basis for implementing agents. In reality, an implemented agent needs a dynamic support structure to interact with the other agents and to ensure satisfaction of the time- and other constraints on its autonomous behaviour [7].


Thus, software engineering practice and tools exist for implementing agent communities in which each agent individually may have only a limited perception of time, and inter-agent interactions consist of the exchange of ordered messages. Research needs to focus on potential ways of designing and assembling autonomous agents so as to explicitly emphasise in-agent and inter-agent interactions and to consider their time- and other constraints. Two complementary research goals can be pointed out:

• how to build a system that forces the required behaviours, enables and assesses emergent behaviour of agents, and eliminates the unwanted behaviours ("conventional" interaction-centred computing),

• how to build a system that – in addition to what was said in the previous item – satisfies the time- and other constraints imposed on individual agents, on groups of agents, and on interactions of agents and their groups (time- and location-aware interaction-centred computing).

The latter goal cannot always be separated from the first one. This means that for building and analysing the properties of time-aware and/or location-aware multi-agent systems one needs a theory of time-aware interactive computation, and a corresponding model of computation. It is natural to assume that the theory and model of computation for time-aware interactive computing are related to those of "conventional" interactive computing. Time has always been present in computing in the form of a topological ordering of operations. With the appearance of multiprogramming and multiprocessors, scheduling issues for the execution of certain algorithms became important, and the related theories introduced metric time into computing. The application domain of metric time was extended by introducing temporal logics for describing and analysing properties of programs. Many modelling domains apply one single metric time (e.g. computational economy). Several concepts of metric time (strictly increasing, fully reversible, relative time with a moving origin) are simultaneously applied for the timing analysis of inter-component interactions in real-time systems. Time-correctness analysis of computing in systems that apply forced concurrency and/or allow autonomous, proactive components often assumes the introduction of several independent time-counting systems that need to be maintained simultaneously and occasionally synchronised (see for instance [20, 23]). Whenever the authors mention a time-aware interaction-centred model of computation further in the text, it is assumed that the corresponding time model comprises several metric times with the simultaneous existence of multiple concepts of each time. The preliminary experience and research on real-time embedded systems has pointed to the criticality of the time model used. In order to be able to analyse time-constrained interactions, the theory should allow for multiple time-counting systems (potentially each autonomous agent may have its own time-counting system) with three concepts of time – strictly increasing, fully reversible and relative time [23].
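To make the multiple-time-model requirement concrete, the following minimal Python sketch shows one way of keeping several independent time-counting systems – a strictly increasing time, a fully reversible time, and a relative time with a moving origin – per agent, with an explicit synchronisation point. It is not part of the Q-model tooling or the KRATT environment; all class and function names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class StrictlyIncreasingTime:
    """Metric time that may only move forward (e.g. elapsed mission time)."""
    now: float = 0.0
    def advance(self, dt: float) -> float:
        if dt < 0:
            raise ValueError("strictly increasing time cannot go backwards")
        self.now += dt
        return self.now

@dataclass
class ReversibleTime:
    """Fully reversible metric time, used e.g. when replaying or predicting."""
    now: float = 0.0
    def shift(self, dt: float) -> float:  # dt may be negative
        self.now += dt
        return self.now

@dataclass
class RelativeTime:
    """Relative time with a moving origin (e.g. 'since the last interaction')."""
    origin: float = 0.0
    def reset_origin(self, absolute_now: float) -> None:
        self.origin = absolute_now
    def since_origin(self, absolute_now: float) -> float:
        return absolute_now - self.origin

@dataclass
class AgentClock:
    """Each autonomous agent keeps its own time-counting system."""
    name: str
    drift: float = 0.0  # offset with respect to some reference clock
    absolute: StrictlyIncreasingTime = field(default_factory=StrictlyIncreasingTime)
    relative: RelativeTime = field(default_factory=RelativeTime)
    def local_now(self) -> float:
        return self.absolute.now + self.drift

def synchronise(agents: List[AgentClock]) -> float:
    """Occasional synchronisation: agree on the latest local time and reset origins."""
    common = max(a.local_now() for a in agents)
    for a in agents:
        a.drift = common - a.absolute.now
        a.relative.reset_origin(common)
    return common

if __name__ == "__main__":
    a, b = AgentClock("sensor"), AgentClock("controller", drift=0.3)
    a.absolute.advance(1.0); b.absolute.advance(0.7)
    t = synchronise([a, b])
    print(f"synchronised at {t:.2f}")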


In the conventional approach to agents, attention has usually been focused on agents' intelligence-related issues, such as reasoning, beliefs, intentions, desires, negotiations, and others. In other words, research on multi-agent systems has been mostly agent-centred; even the organizational aspects of a system have been described and implemented by means of the "mental" states of agents. So far, computational and systems engineering issues have received comparatively little attention in distributed artificial intelligence studies. In the domain of real-time systems (and embedded systems) the research focus has been on control, monitoring, and communication issues with a strong emphasis on systems engineering aspects; computational aspects have received slightly less attention. The existing trend in practical applications has led to a (partial) merging of the two domains – multi-agent systems and real-time embedded systems. This paper attempts to initiate a study of the computational problems resulting from such a merger. As a result, a multi-agent architecture is to be enhanced with a sophisticated time model, and a real-time system is considered a loosely coupled collection of interacting autonomous agents with time-critical constraints on the agents' behaviour and their interactions. The participating agents and their interaction patterns may change dynamically during integration, testing, and also during normal operation. This feature has always been desirable in conventional real-time systems, but has deliberately been avoided to increase behavioural determinism. In other words, a fixed structure has been applied in order to predict, with reasonable confidence, the behaviour of the future system already during its design. Component-based design and the steadily increasing proactiveness of components have increased the role of emergent behaviour in real-time systems to a level that requires reconsidering behavioural analysis and finding new ways of achieving behavioural determinism. The rest of this paper discusses the evolution of new computing concepts and the respective models of computation. The paper continues with a suggestion of a new feature space for a taxonomy of models of computation, and an illustration of a particular research problem.

2 New concepts of computing and models of computation

A model of computation provides a concise (and, in principle, approximately matching) description of what happens in a computing system [2, 38]. A sufficiently precise and widely accepted description of computing has been provided by the concept of the Turing machine, which covers today's practically important computing cases only partially. This situation was foreseen by A. Turing himself [32] and later addressed by H. A. Simon. In the early 1970s R. Milner [18] pointed out the importance of interactions between algorithms (in addition to that of the algorithms themselves), and some time later an interaction-centred model of computation was suggested by P. Wegner [34, 35].


The response of researchers to the changing understanding of the role of computation has been the development of concepts and models of interactive computation that capture more features than models based on Turing machines [19, 39]. These concepts and models assume a non-trivial generalisation of computability in the Church-Turing sense; for instance, [10] defines computability logic, where computational problems are described as games played by a machine against the environment. By now the evolution of computing systems has reached a level where, in addition to the wide usage of the interaction-centred paradigm of computation, the earlier natural assumption of complete knowledge about the causal relations does not always hold (e.g. multi-agent systems, embedded systems). Consider, for instance, the case where one of the interacting partners operates in an artificial world (with completely known causal relations) and the other partner operates in the natural world (with, as a rule, incompletely known causal relations). The same example – interaction across the border of different universes – serves to illustrate the potential violation of the fundamental assumption of a stationary axiomatic basis of the applied theory (or algorithm), due to the different time-scales of the Laws of Nature in different universes. One of the computationally cheapest ways to approximate incompletely known causal relations is the introduction of time constraints – instead, for instance, of applying probabilistic methods or fuzzy logic. Also, time constraints can easily be used to ensure a stationary axiomatic basis for the applied theory (i.e. to require that the computation terminates before the axiomatic basis changes) [23]. In an increasing number of computing systems the correctness of the result also depends on the satisfaction of location-constraint requirements. At the same time, with the increasing autonomy of components, a single metric time per system is not sufficient to assess the time-correctness of computing results – e.g. the case of distributed real-time systems with proactive autonomous components and a dynamic configuration. Autonomic components foster the use of proactive behaviour of components. Hence, interactive computing, autonomic components, and proactive behaviour, plus time- and location-awareness of computing (as required by many applications), are often tied together and play an important role in the software that forms the core of the time-aware multi-agent paradigm. In spite of the rapid increase of time-critical and/or time- and location-aware computer applications, the role of time is still considered by the majority of researchers in a simplified manner – as a single variable, common for all mathematical functions used in the system. This practice is valid in mathematics and is still a widely trusted belief in computer science. This belief is based on the assumption that a neutral observer (e.g. a designer of the system) can have complete knowledge about the intrinsic properties, and can observe all the details, of the designed system. This belief is in concordance with Newtonian (one single observer) and Einsteinian (several observers) theory,


whereas the interactions in a massively parallel system based on the interactive computing paradigm are better explained by quantum theory [36]. In a system composed of proactive autonomous components, each of the components may have its own independent time-counting system. Hence, one additional time dimension for the whole system cannot solve the time-awareness problem. This is even more so because, in many cases, the components cannot be synchronised with the time of the system designer. For instance, the components interact with each other directly via communication links created dynamically (e.g. because of emergent behaviour that has not been prohibited by the designer), or they react to events in the environment, or they react to exceptions in the system (not foreseen by the designer). Formal timing analysis of interactions assumes the use of a more sophisticated time model (as discussed in [20, 23, 29]) than a conventional single metric time dimension per system. Time in agents has usually been considered in concordance with the traditions of computer science, i.e. as an additional dimension of a state space – meaning that a single time variable is introduced for the whole system. Examples of traditional time models are discussed and surveyed in [16, 42]. Autonomic computing [12] and proactive computing [31] have been compared in [33]. Intuitively, it is believed that a suitable underlying model of computation should be that of interactive computation. Unfortunately, an appropriate and widely accepted formalism for such a model is not yet available. However, there are many concepts and experimental models. The huge variety of approaches that can be related to the interactive computing paradigm is illustrated by a loosely grouped list of publications. The authors apologise for potential mistakes in grouping, and for probably leaving out some important publications. A very subjective sample list of publications related to the evolution of the interactive computing paradigm, and to extending it with time- and location-awareness (grouped by the most representative methods used), follows:

• State machines, state transition view: an input effects an update of the state and of the output – c-machine [32]; self-reproducing automata [26]; abstract state machine [9]; input/output automata [15]; attributed automata [17]; interaction machine [35];

• Process algebras, represented, for instance, by CCS [18] and the π-calculus [19]; cost calculus – a process algebra of bounded rational agents for interactive problem solving [39];

• Stream-based approaches: input/output behaviour history transformer [4]; compositional refinement of interactive systems [3], [5];

• Logical framework based models, represented by a weak second-order predicate calculus with time [14]; temporal logic [16]; a logic of rational agents [40]; computability logic [10];

• Miscellaneous approaches, represented by [28]; the Q-model [20]; the agent-group-role model [7].


Many of the approaches listed above can intuitively be considered special cases of the super-Turing model of computation [39]. The principles for building a taxonomy, suggested further in this paper, enable the authors to focus on their pragmatic goal – to clarify the initial conditions for enhancing interaction-centred models of computation with sophisticated time-awareness, as discussed in the previous sections of this paper. The taxonomy itself needs a separate effort in analysing the existing methods in the context of the suggested feature space – actions, interactions, and time-awareness. Besides, the formal study of the relationships between different approaches to interactive computation and their results is still to be done. An example of a search for relationships between time-aware and mainstream interactive computing is discussed below, superficially and in a preliminary way – just to emphasise the potential use of a taxonomy of models of computation.

(i) Mainstream computer science has considered interactive computation as a set of interacting computing agents, with a focus on interactions. In CCS [18] Milner abstracted away quantitative time; his calculus determines the system's behaviour by the order of executed interactions. Wegner's research group formulated the basic principles of interactive computing [34, 35, 37, 38] in the 1990s, and suggested that a multi-stream interaction machine represents the most sophisticated interactive computation, again neglecting explicit quantitative time.

(ii) Approximately at the same time as Milner's CCS, a report [28] by Quirk and Gilbert was published on real-time systems based on the interactive computing concept extended by a truly sophisticated time model. The basic result of this publication was further elaborated under the name of the Q-model (see, for instance, [20]), mapped into a weak second-order predicate logic with time [14], and linked with an object-oriented software development environment [21].

(iii) The stream-based approach has been studied, for instance, in [3], [5], and linked to the state-machine approach in [30]. Streams have been applied as a tool in a history transformer in [4].

Based on the above information one might be interested in studying the relations between the Q-model, streams, and the multi-stream interaction machine. The study could start by stating the known facts and then gradually go deeper. For instance, the Q-model defines a real-time system as a collection of "processes" interacting via "channels". A common process p is an I/O mapping that is repeated many (up to a countable number of) times, i.e.

p : T(p) × dom p → val p,

where T(p) is a time-set that determines the time instants when the mapping is executed. Compared to conventional stream processing this specification gives additional flexibility – each process can have a different time-set, and the mapping need not be executed at regular intervals (which is practically impossible anyway). The interaction of simultaneously executing processes forms a separate stream, where each element is a message sent from the producer process p_i to the consumer process p_j via the channel

σ_ij : val p_i × T(p_i) × T(p_j) → proj_{val p_i} dom p_j,

where the length of the message (i.e. the depth of the consumer memory) is determined by the channel function K(σ_ij, t) ⊂ T(p_i), with t ∈ T(p_j). The stream formed by a channel has to synchronise the potentially different times of the interacting processes and satisfy the time constraints of the consumer process. A set of interacting common processes forms a multi-stream interaction machine. Alternatively, a selector process of the Q-model can represent a multi-stream interaction machine. If all the involved time-sets coincide (e.g. with the set of integers), then the resulting stream processing is fairly straightforward. The problem becomes more complicated if the time-sets are different. The Q-model has been used for the timing analysis of object-oriented software design [21] and suggested as a model processor candidate for analysing the time-correctness of interactions in the UML profile for scheduling, performance and time [29]. The end of the example.
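As a rough illustration of the Q-model objects used in the example above – a sketch under simplifying assumptions of our own, not the Q-model implementation – the following Python fragment represents a process as an I/O mapping executed only at the instants of its own time-set, and a channel function that selects which past producer outputs are still valid for the consumer. All names and the sliding validity window are illustrative.

from typing import Callable, Dict, List, Tuple

class QProcess:
    """A Q-model-style process: an I/O mapping repeated at the instants of its time-set T(p)."""
    def __init__(self, name: str, timeset: List[float], mapping: Callable[[float, float], float]):
        self.name = name
        self.timeset = sorted(timeset)          # T(p): when the mapping is executed
        self.mapping = mapping                  # p : T(p) x dom p -> val p
        self.history: Dict[float, float] = {}   # produced values, indexed by execution instant

    def execute(self, t: float, x: float) -> float:
        if t not in self.timeset:
            raise ValueError(f"{self.name} is not scheduled at t={t}")
        self.history[t] = self.mapping(t, x)
        return self.history[t]

def channel(producer: QProcess, consumer_t: float, window: float) -> List[Tuple[float, float]]:
    """K(sigma_ij, t): the producer instants whose messages are still valid for a
    consumer executing at consumer_t (here modelled as a simple sliding window)."""
    return [(tp, v) for tp, v in producer.history.items()
            if consumer_t - window <= tp <= consumer_t]

if __name__ == "__main__":
    # Producer and consumer deliberately use different, non-coinciding time-sets.
    p_i = QProcess("p_i", timeset=[0.0, 0.8, 1.7, 2.9], mapping=lambda t, x: x + t)
    for t in p_i.timeset:
        p_i.execute(t, x=1.0)
    # The consumer p_j runs at t=3.0 and may only read messages produced in the last 1.5 units.
    print(channel(p_i, consumer_t=3.0, window=1.5))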

3 Feature space for taxonomy of computations

For systematic progress in developing the time-aware interaction-centred model it would be desirable to systematise the variety of models of computation according to their characteristic features. Examples of feature spaces used earlier by other researchers have been surveyed and discussed in [2, 27, 37]. From the point of view of time-aware proactive systems, a three-dimensional approximation of the feature space should include the following dimensions – action, interaction, and time-awareness. Projections of the feature space onto two-dimensional planes – i.e. the planes of interactive autonomous actions, time-aware autonomous actions, and time-aware interactions – describe rather precisely the existing research directions and are therefore a good starting point for building a taxonomy of existing approaches and for discussing potential directions for better satisfaction of the actual requirements. For each feature a metric will be introduced by markers that define classes of models based on qualitative properties that are of interest for distinguishing models of computation. We will consider the following classes as a starting point.

The Action axis is partitioned by the following markers:

A1 – actions completely prescribed by algorithms: components that perform actions comprise fixed algorithms, are causally related, and the environment may not influence the algorithms and their relations during the system's operation.

Fig. 1. The feature space and its projections (axes: Action, Interaction, Time-awareness).

A2 – actions influenced by the environment: the behaviour of some components of the system is influenced or controlled by the environment.

A3 – proactive actions: some components of the system are proactive and autonomous, i.e. they can choose an action from a set of actions that best serves the component in a given context (systems with dynamically changing behaviour).

A4 – adaptive actions: in addition to proactivity, some components have the capability to learn and adapt their behaviour and goals according to changing conditions (systems with a high share of emergent behaviour and hard-to-predict dynamic behaviour).

The Interaction axis is partitioned by the following markers:

I1 – prescribed communication: communication between the components is predefined by the algorithms applied; conventional parallel processing is possible – the case of algorithmic computation.

I2 – dynamic (context-dependent) communication: interactions between components determine the behaviour of the system (e.g. different algorithms may produce equivalent behaviour of a system); forced parallel processing is possible – the case of interactive computation.

I3 – time-constrained dynamic communication: interactive computation with time constraints, including those imposed on the occurrence instants of interactions and on the validity of the information exchanged during those interactions.

Fig. 2. The feature space with markers (A1–A4, I1–I3, T1–T3 on the Action, Interaction and Time-awareness axes).

The Time-awareness axis is partitioned by the following markers:

T1 – a single topological time: time is established by a qualitative ordering of events and actions for the whole system.

T2 – topological time and one metric time: the system has one strictly increasing metric time, in addition to a topological time.

T3 – topological time and multiple metric times: in addition to a time model (e.g. as defined in T2) for the whole system, each component of the system may have its own time model comprising, for instance, one topological and several metric times (and each metric time may simultaneously be present in the form of different time concepts, such as fully reversible, strictly increasing, and/or relative time with a moving origin).

The markers are assigned to features in Figure 2. Conventional models of computation (meaning here thoroughly studied and widely accepted models) are situated in the vicinity of the origin of coordinates, i.e. in the subspace of completely prescribed actions A1 with prescribed communications I1 in a single topological time T1. Please note that temporal logics do not belong to this subspace, since many of them operate with metric time. A definitive taxonomy of models of computation in such a feature space is, to the best of our knowledge, not yet available; sample surveys are [27, 42]. Intuitive preliminary results suggest that a taxonomy based on the features defined in this paper is a valuable tool for planning research.
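A small Python sketch of the three-axis feature space follows. The marker names come from this section, whereas the example placements (a classical Turing machine near the origin, a multi-stream interaction machine at I2, the Q-model at I3/T3) only mirror Figures 2 and 3 informally and should be read as illustrative assumptions, not a definitive taxonomy.

from dataclasses import dataclass
from enum import IntEnum

class Action(IntEnum):
    A1 = 1   # actions completely prescribed by algorithms
    A2 = 2   # actions influenced by the environment
    A3 = 3   # proactive actions
    A4 = 4   # adaptive (learning) actions

class Interaction(IntEnum):
    I1 = 1   # prescribed communication
    I2 = 2   # dynamic (context-dependent) communication
    I3 = 3   # time-constrained dynamic communication

class TimeAwareness(IntEnum):
    T1 = 1   # a single topological time
    T2 = 2   # topological time plus one metric time
    T3 = 3   # topological time plus multiple metric times

@dataclass(frozen=True)
class ModelOfComputation:
    name: str
    action: Action
    interaction: Interaction
    time: TimeAwareness
    def is_conventional(self) -> bool:
        """Conventional models sit in the vicinity of the origin (A1, I1, T1)."""
        return (self.action, self.interaction, self.time) == (Action.A1, Interaction.I1, TimeAwareness.T1)

if __name__ == "__main__":
    models = [
        ModelOfComputation("Turing machine", Action.A1, Interaction.I1, TimeAwareness.T1),
        ModelOfComputation("multi-stream interaction machine", Action.A3, Interaction.I2, TimeAwareness.T1),
        ModelOfComputation("Q-model", Action.A2, Interaction.I3, TimeAwareness.T3),
    ]
    for m in models:
        print(f"{m.name:35s} conventional={m.is_conventional()}")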


Fig. 3. Enhancing multi-stream interaction machines with time-awareness, depicted in the suggested feature space (the Q-model near I3, interaction machines near I2).

The suggested feature space stems from the expected properties and requirements of rapidly spreading new classes of computer applications – such as ubiquitous computing that includes autonomic and proactive components, computing systems with a dynamic ad hoc architecture, multi-agent-based systems, etc. The feature space enables one to distinguish between the objectives, capabilities, and scope of the existing models of computation, as well as those of the respective tools and resulting products. The corresponding taxonomy would support a comparative study and capability analysis of available and suggested models. The authors have been influenced by the first preliminary classification of the models in this feature space when developing the time-aware models of interactive computing that are being applied in the KRATT environment [24, 25] for developing time-aware multi-agent systems. In conjunction with the development of KRATT, pilot applications are being developed to test the design principles, the underlying theories and assumptions, and the practically developed parts of the test-bed. The concluding Figure 3 sketches a systematic development process for the time-aware models of interactive computation that also cover proactive and adaptive/learning systems. The process starts from the merger of multi-stream interaction machines and the Q-model, based on stream processing methods. The rectangular formations drawn with solid lines represent the existing models, whereas the dotted formations represent the missing parts that are to be added.

4 Conclusions

Time- and location-aware agent-based systems are rapidly gaining influence in the contemporary world – cars, communication systems, transport systems, banking, and medical devices are just a few examples. All those devices and systems are built from autonomous components, and are essentially software-intensive (i.e. their functionality is determined by software).


Software-intensive systems differ from other engineering systems in that they are clearly more capable of explicit proactive behaviour and rely on dynamic control structures more often than non-software-intensive systems in the artificial world. This paper has observed that those new applications require properties that cannot be studied by conventional mainstream methods, and it has suggested a new feature space that supports a comparative study of a variety of methods. The new feature space has been applied in the paper to clarify the initial conditions for enhancing interaction-centred models of computation with sophisticated time-awareness, so as to analyse proactive, time-aware systems. The taxonomy itself needs a separate effort in analysing the existing methods in the context of the suggested feature space – actions, interactions, and time-awareness. The formal study of the relationships between different approaches to interactive computation and their results is still to be done. A large part of the paper explained the specific role of time in such systems and the need for a truly sophisticated time model in the considered models of computation. In a system composed of proactive autonomous components, each of the components may have its own independent time-counting system. Hence, one additional time dimension for the whole system cannot solve the time-awareness problem. Formal timing analysis of interactions assumes the use of more than one metric time plus the simultaneous use of three time concepts (strictly increasing, fully reversible, and relative with a moving origin); such a time model has not been widely used in computer science so far. This paper is based on interim results of an ongoing larger project carried out in the Estonian Centre of Excellence for Dependable Computing (CDC) – a long-term joint venture of Tallinn University of Technology and Tartu University Institute of Technology, recently joined by the University of Lübeck.

5 Acknowledgement

This research has been partially financed by the Estonian Science Foundation (ETF) grant no. 4860, and by grants no. 014 2509s03 and no. 018 2565s03 from the Estonian Ministry of Education.

References

[1] Bigus, J. P., D. A. Schlosnagle, J. R. Pilgrim, W. N. Mills and Y. Diao, ABLE: A Toolkit for Building Multi-agent Autonomic Systems, IBM Systems Journal, 41 (2002), 350–371.

[2] Blass, A. and Y. Gurevich, Algorithms: A quest for absolute definitions, Bulletin of the European Association for Theoretical Computer Science, 81 (2003), 195–225.


[3] Broy, M., Compositional Refinement of Interactive Systems, Journal of the ACM, 44 (1997), 850–891.

[4] Caspi, P. and N. Halbwachs, A Functional Model for Describing and Reasoning about Time Behaviour of Computing Systems, Acta Informatica, 22 (1986), 595–627.

[5] Dosch, W. and A. Stümpel, Introducing Control States into Communication Based Specifications of Interactive Components, in: H. R. Arabnia, H. Reza (eds.), Proceedings of the International Conference on Software Engineering Research and Practice (SERP'04), Volume II, Las Vegas, Nevada, June 21–24, 2004, CSREA Press, Athens, GA, 2004, 875–881.

[6] Ferber, J., "Multi-agent Systems. An Introduction to Distributed Artificial Intelligence", Addison-Wesley, Harlow (UK), 1999.

[7] Ferber, J., O. Gutknecht and F. Michel, From Agents to Organizations: An Organizational View of Multi-agent Systems, in: P. Giorgini, J. P. Müller, J. Odell (eds.), AOSE 2003, LNCS 2935 (2004), 214–230.

[8] Goldin, D., D. Keil and P. Wegner, An Interactive Viewpoint on the Role of UML, Ch. 15 in: Unified Modeling Language: Systems Analysis, Design, and Development Issues, K. Siau and T. Halpin (eds.), Idea Group Publishing, Hershey, PA, 2001, 250–264.

[9] Gurevich, Y., Evolving algebras 1993: Lipari guide, in: E. Börger (ed.), Specification and Validation Methods, 1995, 231–243.

[10] Japaridze, G., Introduction to computability logic, Annals of Pure and Applied Logic, 123 (2003), 1–99.

[11] Jennings, N. R., An Agent-based Approach for Building Complex Software Systems, Communications of the ACM, 44, No. 4 (2001), 35–41.

[12] Kephart, J. O. and D. M. Chess, The Vision of Autonomic Computing, Computer, 36, No. 1 (2003), 41–50.

[13] Lamport, L., The Temporal Logic of Actions, ACM Transactions on Programming Languages and Systems, 16 (1994), 872–923.

[14] Lorents, P., L. Motus and J. Tekko, A Language and a Calculus for Distributed Computer Control Systems Description and Analysis, Proc. on Software for Computer Control, Pergamon/Elsevier (1986), 159–166.

[15] Lynch, N. A. and M. R. Tuttle, An introduction to input/output automata, CWI Quarterly, 2(3) (1989), 219–246.

[16] Manna, Z. and A. Pnueli, "The Temporal Logic of Reactive and Concurrent Systems: Specifications", Springer-Verlag, 1991.

[17] Meriste, M. and J. Penjam, Attributed Models of Computing, Proc. of the Estonian Academy of Sciences, Engineering, 1 (1995), 139–157.


[18] Milner, R., "A Calculus of Communicating Systems", LNCS 92 (1980), 171 p.

[19] Milner, R., "Communicating and Mobile Systems: The π-calculus", Cambridge University Press, 1999.

[20] Motus, L. and M. G. Rodd, "Timing Analysis of Real-time Software", Elsevier, 1994.

[21] Motus, L. and T. Naks, Formal timing analysis of OMT designs using LIMITS, Computer Systems Science and Engineering, 13, No. 3 (1998), 161–170.

[22] Motus, L., M. Meriste, T. Kelder and J. Helekivi, An Architecture for a Multi-agent System Test-bed, Proceedings of the 15th IFAC World Congress, vol. L, Elsevier Science Publ. (2002), 6 pp.

[23] Motus, L., Modeling metric time, in: B. Selic, L. Lavagno, G. Martin (eds.), UML for Real: Design of Embedded Real-time Systems, Kluwer Academic Publ., Norwell (2003), 205–220.

[24] Motus, L., M. Meriste, T. Kelder, J. Helekivi and V. Kimlaychuk, A test-bed for time-sensitive agents – some involved problems, 9th IEEE Intern. Conf. on Emerging Technologies and Factory Automation, Portugal, 2 (2003), 645–651.

[25] Motus, L., M. Meriste, T. Kelder and J. Helekivi, Agent-based Templates for Implementing Proactive Real-time Systems, Proc. International Conference on Computing, Communications and Control Technologies, Austin, Texas, 199–204.

[26] von Neumann, J., "Theory of Self-Reproducing Automata", Univ. of Illinois Press, 1966.

[27] Van Parunak, H., S. Brueckner, M. Fleischer and J. Odell, A Preliminary Taxonomy of Multi-Agent Interactions, 2nd Int. Conf. on Autonomous Agents and Multi-agent Systems (2003), 1090–1091.

[28] Quirk, W. J. and R. Gilbert, "The formal specification of the requirements of complex real-time systems", AERE, Harwell, rep. No. 8602, 1977.

[29] Selic, B. and L. Motus, Modeling of Real-time Software with UML, IEEE Control Systems Magazine, 23, No. 3 (2003), 31–42.

[30] Stümpel, A., "Stream Based Design of Distributed Systems through Refinement", Logos Verlag, Berlin, 2003.

[31] Tennenhouse, D. L., Proactive Computing, Communications of the ACM, 43, No. 5 (2000), 43–50.

[32] Turing, A., On Computable Numbers, with an Application to the Entscheidungsproblem, Proc. London Math. Society, 42:2 (1936), 230–265; A correction, ibid., 43 (1937), 544–546.

[33] Want, R., T. Pering and D. Tennenhouse, Comparing Autonomic and Proactive Computing, IBM Systems Journal, 42 (2003), 129–135.


[34] Wegner, P., Interaction as a Basis for Empirical Computer Science, ACM Computing Surveys, 27, No. 5 (1995), 80–91.

[35] Wegner, P., Why Interaction is More Powerful than Algorithms, Communications of the ACM, 40, No. 5 (1997), 80–91.

[36] Wegner, P., Towards Empirical Computer Science, Monist, 82, No. 1 (1998), 58–108.

[37] Wegner, P., Interactive Foundations of Computing, Theoretical Computer Science, 192 (1998), 315–351.

[38] Wegner, P. and D. Goldin, "Coinductive Models of Finite Computing Agents", Electronic Notes in Theoretical Computer Science, 19 (1999).

[39] Wegner, P. and E. Eberbach, New Models of Computation, Computer, 47, No. 1 (2004), 4–9.

[40] Wooldridge, M., "Reasoning about Rational Agents", MIT Press, 2000.

[41] Wooldridge, M., On the Sources of Complexity in Agent Design, Applied Artificial Intelligence, 14 (2000), 623–644.

[42] Yu, S., "The Time Dimension of Computational Models", Tech. Report No. 549, Univ. of Western Ontario, Dept. of Comp. Sci., 2000.


Interactive Computation and Platform-Based Design: an Equivalence Relation

Francesco Gianfelici 1
Dipartimento di Elettronica, Intelligenza Artificiale e Telecomunicazioni, Università Politecnica delle Marche, I-60131 Ancona, Italy

Abstract

The need to identify the principles of an effective and reliable engineering of interactive systems is methodologically well suited to establishing an equivalence relation between two computational paradigms: Interactive Computation and Platform-Based Design. This approach allows us to underline the observable behavior, the component modelling, the multi-layer structures, the functional abstraction and the progressive refinement property as representative key points in this sphere. Secondly, this equivalence relation implies that the applicative success obtained by the Platform-Based Design paradigm is the best guarantee of a usable definition of the Interactive Computation paradigm in applications. Finally, the theoretical formulations developed in this field through the decisive contributions of many researchers can easily be extended to the Platform-Based Design paradigm, potentially enlarging their applicative expressiveness.

1 Introduction

The growing interest in the modelling of complex computational structures, where the regulation of nondeterministic behaviour and the complexity of interactions play a major part in the management of the dynamical evolution of entities and objects, sheds light on the limitations of traditional paradigms (principally based on I/O approaches) and demonstrates how researchers' attention is currently focused on the definition and understanding of an alternative theory. Modern P2P networks, embedded systems [9], [13], and agent- and service-oriented applications [3], [4] represent only the most common and pragmatic examples in this direction. The development of computational formulations that enable systems to be modelled by exploiting their peculiar properties has led to the birth of a great number of paradigms, which have shown their effectiveness in specific domains.

1 Email: [email protected]

Their nature, apparently extremely heterogeneous, has caused disjoint developments and stand-alone characterizations. But in many cases the common properties and the principles that regulate their formulation generate deep relations between them, through which it is possible: (i) to enrich our knowledge in modelling particular phenomena; (ii) to extract features or properties of the paradigms. This last aspect stems from the idea that the properties through which we develop, step by step, the construction of this equivalence relation necessarily also characterize the paradigms themselves; the absence of analogous approaches in the literature renders the proposed methodology novel. Thanks to the considerations developed above, we propose in this paper an equivalence relation between two paradigms: Interactive Computation and Platform-Based Design. An accurate presentation of this last paradigm lies outside the scope of this work: exhaustive descriptions (with many practical examples) can be found in the articles referenced in [15]; however, a case study based on an Embedded System for Electronic Measurement of Gas Concentration will be considered in this paper. The algebraic characterization (equivalence relation) underlines how observable behavior, component modelling, multi-layer structures, and the progressive refinement property constitute the basic features for the engineering of interactive systems. The development of an equivalence relation allows some direct implications to be exploited: (i) the great success enjoyed by the Platform-Based Design paradigm is the most concrete example of a practical use of the Interactive Computation paradigm in application domains; (ii) the theoretical formulation, developed through the contributions of many researchers [10], [12], [11], can easily be extended to the Platform-Based Design paradigm, which is characterized by a more practical formulation; (iii) the principles of an effective and reliable engineering of interactive systems can be identified. While the approach used is particularly effective for developing an accurate characterization of the Interactive Computation paradigm, the absence of a rigorous theoretical basis for the Platform-Based Design paradigm still requires its formal definition, which will necessarily be represented with a formal language, because of the intrinsic features of this paradigm that require a functional characterization expressed in terms of hierarchical regular structures. This aspect, united with the fact that the equivalence relation is not defined between processes or formal languages but between paradigms, partially weakens the algebraic formulation, requiring a less formal definition compared with other works in the process algebra field. Methodologically, such an equivalence relation is achieved by introducing a Basic Interaction Automata, starting from the definition of the interaction between component and environment and exploiting the identification property of the ∆() operator. The absence of specific limitations on the concurrent behaviour of this operator allows the ambivalent use of the distributive law, on the right or on the left,


providing for its application indistinctly to Ω or to the relative arguments. Then we proceed by defining a Complete Interaction Automata as a natural extension to the n-component case. The need to characterize the Complete Interaction Automata by means of an analogous formulation (with reduced complexity) has generated the definition of the Restricted Complete Interaction Automata (RCIA), obtained through a partial function Ψ(C_set) that semantically represents the observable behaviour of the components. The RCIA definition has led to the introduction of Multi-Layer Interaction Automata (MLIA), achieved as a composition of RCIAs identified by means of a suitable monotonicity condition on Ψ(C_set) for every RCIA. This relation has shown that the Progressive Refinement and Functional Abstraction of the MLIA, achieved by means of two successions of partial functions, are directly connected with the succession of Ψ(C_set). Finally, the equivalence relation between the MLIA and the formulation of the Platform-Based Design paradigm is established. This paper is organized as follows. Section 2 introduces the Basic Interaction Automata. Section 3 presents the Complete Interaction Automata as a natural extension of the Basic Interaction Automata. Section 4 gives a brief introduction to the definition of the Restricted Complete Interaction Automata (RCIA), obtained by a restriction provided by a specific partial function Ψ(C_set). Section 5 extends the RCIA to Multi-Layer Interaction Automata (MLIA) by means of a specific condition on Ψ(C_set). Section 6 defines the Progressive Refinement and Functional Abstraction of MLIA. Section 7 proposes a theoretical formulation of the Platform-Based Design paradigm. Section 8 concludes the work by showing the equivalence relation.

2 Basic Interaction Automata

According to the notation expressed by J. van Leeuwen in [14], we define the component C and the environment E, which interact by means of streams (signals) at every time t, with the following formulas:

(1) E(t) = e_1, e_2, . . . , e_n
(2) C(t) = c_1, c_2, . . . , c_m

with n ≠ m, and where the terms e_1, e_2, . . . , e_n and c_1, c_2, . . . , c_m belong to an alphabet Θ. Supposing that (1) and (2) are defined at the same time, without loss of generality, and given an element set I = {I_ec(0), I_ec(1), . . . , I_ec(n)}, we can introduce the following relation:

(3) I_ec(t) = E(t) ◦ C(t)

where the operator ◦ assumes the meaning of a common action (at the same time). Based on (3), we express the interactions between the two components as:

(4) Ω = {[E(0), C(0)], [E(1), C(1)], . . . , [E(n), C(n)]}


and, analogously, by means of (3), as:

(5) Ω = [I_ec(0), I_ec(1), . . . , I_ec(n)]

Then (3) and (4) represent every possible interaction between the component (C) and its environment (E), without any specific modelling of E and C.

Definition 2.1 Given a set of interactive actions (IA) as IA = {a, b, . . .} and establishing a partial function ∆(I) = IA, which associates every element of I with an element of IA, we rewrite (5) as:

(6) Ω = [∆(I_ec(0)), ∆(I_ec(1)), . . . , ∆(I_ec(n))]

and then:

(7) Ω = [a, b, . . . , d]

where Ω represents a word, and the partial function ∆(I) is an identification operator, which can be directly applied to Ω by the composition property:

(8) ∆(Ω) = [∆(I_ec(0)), ∆(I_ec(1)), . . . , ∆(I_ec(n))]

Definition 2.2 ∆(Ω) represents an operator that has the identification property. No limitations in terms of the kind of identification (stochastic [2], [1], approximated, . . . ) are established.

Definition 2.3 The ∆(Ω) operator is regulated by the distributive law, defined indistinctly on the right or on the left, without any limitations in terms of Milner's concurrency, because this operator is a special kind of instantiation with a certain degree of uncertainty.

Indicating with Σ = IA = {a, b, . . .} an alphabet, we use Σ_ε to express Σ_ε = Σ ∪ {ε}, where ε is not in Σ. Having defined a Basic Interaction Automata as a tuple ⟨V, V^0, V^F⟩ (possibly equipped with time) on an alphabet, we say that the language L generated by ∆(Ω), or by the composition of various ∆(I), is regular if there exists a Basic Interaction Automata A such that L(A) = L.
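The following Python sketch is one possible reading of the construction above (an assumption of ours, not the author's formalisation): environment and component streams are zipped into the interaction trace Ω, a partial function plays the role of ∆ by labelling each interaction with an interactive action, and the resulting word is checked against an ordinary finite automaton standing in for a Basic Interaction Automata.

from typing import Dict, List, Optional, Tuple

def interaction_trace(E: List[str], C: List[str]) -> List[Tuple[str, str]]:
    """Omega as in (4): the sequence of joint (environment, component) actions."""
    return list(zip(E, C))

def delta(trace: List[Tuple[str, str]], labelling: Dict[Tuple[str, str], str]) -> List[str]:
    """The identification operator Delta applied element-wise to Omega, as in (8).
    It is a partial function: unlabelled interactions are simply skipped."""
    return [labelling[i] for i in trace if i in labelling]

def accepts(word: List[str], start: str, finals: set, trans: Dict[Tuple[str, str], str]) -> bool:
    """A Basic-Interaction-Automata-like acceptor <V, V0, VF> over the action alphabet."""
    state: Optional[str] = start
    for a in word:
        state = trans.get((state, a))
        if state is None:
            return False
    return state in finals

if __name__ == "__main__":
    E = ["e1", "e2", "e1"]
    C = ["c1", "c1", "c2"]
    labelling = {("e1", "c1"): "a", ("e2", "c1"): "b", ("e1", "c2"): "a"}
    word = delta(interaction_trace(E, C), labelling)          # e.g. ['a', 'b', 'a']
    automaton = {("q0", "a"): "q1", ("q1", "b"): "q1", ("q1", "a"): "qf"}
    print(word, accepts(word, start="q0", finals={"qf"}, trans=automaton))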

3 Complete Interaction Automata

Given n components C_1, C_2, . . . , C_n, each with its environment E_1, E_2, . . . , E_n, it is always possible to express the set of pairs (C_1, E_1), (C_2, E_2), . . . , (C_n, E_n) with n Basic Interaction Automata. For the purpose of characterizing the complete behaviour it is important to develop two considerations:

- the simple composition of the n Basic Interaction Automata does not allow a complete automaton to be defined, because of the absence of a characterization between each component and all the others;

- a relation that associates to E_1, E_2, . . . , E_n an E_Global, in order to model C_1, C_2, . . . , C_n with E_Global, must be established.


The complete environment, presented with the E_Global notation, is achieved as:

(9) E_Global(t)_j = E_1 ∩ E_2 ∩ . . . ∩ E_n

Generalizing (1) and (2) we have:

(10) E_Global(t)_j^E = e_1, e_2, . . . , e_n
(11) C(t)_j^E = c_1, c_2, . . . , c_m

which expresses a generic interaction between E_Global and C_j, to which the modelling of C_i and C_j must be added:

(12) C(t)_ji = c_j1i, c_j2i, . . . , c_jni
(13) C(t)_ij = c_i1j, c_i2j, . . . , c_imj

where C(t)_ji indicates the stream between the components C_i and C_j, which models the communication channel from C_i to C_j. Then we extend (3) by means of the two following relations:

(14) I_ec_j(t) = E_Global(t) ◦ C(t)_j
(15) I_cc_ji(t) = C(t)_j ◦ C(t)_i

In this way, we have the interactions:

(16) I_ec(t) = {I_ec_1(t), I_ec_2(t), . . . , I_ec_n(t)}
(17) I_cc(t) = {I_cc_12(t), I_cc_13(t), . . . , I_cc_1n(t), I_cc_23(t), I_cc_24(t), . . . , I_cc_2n(t), . . . , I_cc_(n−1)n(t)}

At this point, defining Ω with the properties described in Section 2:

(18) Ω = [I_ec(0), I_cc(0), I_ec(1), I_cc(1), . . . , I_ec(n), I_cc(n)]

we obtain the complete set of interactions between all components C_1, C_2, . . . , C_n and the environment E_Global. Given a set of interactive actions IA_C = {a, b, . . .} and applying the operator ∆(Ω) from (8), it is possible to obtain a word. Indicating with Σ_C = IA_C = {a, b, . . .} an alphabet, we use Σ_Cε to express Σ_Cε = Σ_C ∪ {ε}, where ε is not in Σ_C. Defining a Complete Interaction Automata as a tuple ⟨V, V^0, V^F⟩ (possibly equipped with time) on an alphabet, we say that the language L generated by ∆(Ω), or by the composition of various ∆(I), is regular if there exists a Complete Interaction Automata A such that L(A) = L.
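As a hedged illustration of how the complete interaction set is assembled – a sketch under our own simplifying assumptions, not the paper's construction – the fragment below collects, at each instant, the environment-component interactions I_ec and the pairwise component-component interactions I_cc, and interleaves them into a single trace Ω in the spirit of (18).

from itertools import combinations
from typing import Dict, List, Tuple

def complete_trace(E_global: List[str], components: Dict[str, List[str]]) -> List[Tuple]:
    """Build Omega as in (18): at every instant t, first the environment/component
    interactions I_ec(t), then all pairwise component/component interactions I_cc(t)."""
    names = sorted(components)
    horizon = len(E_global)
    omega: List[Tuple] = []
    for t in range(horizon):
        i_ec = tuple(("ec", j, E_global[t], components[j][t]) for j in names)
        i_cc = tuple(("cc", i, j, components[i][t], components[j][t])
                     for i, j in combinations(names, 2))
        omega.append((i_ec, i_cc))
    return omega

if __name__ == "__main__":
    E_global = ["e0", "e1"]                       # the intersection environment of (9)
    components = {"C1": ["x", "y"], "C2": ["u", "v"], "C3": ["p", "q"]}
    for step in complete_trace(E_global, components):
        print(step)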

4 Restricted Complete Interaction Automata (RCIA)

Given n components C_set = {C_1, C_2, . . . , C_n} and E_1, E_2, . . . , E_n, and having defined a partial function Ψ(C_set) which discriminates the components as a function of one or more properties, we obtain a finite set of elements C_1^new, C_2^new, . . . , C_l^new with l ≤ n. The number l is the cardinality of the operator Ψ, indicated as π(Ψ). Having defined E_1^new, E_2^new, . . . , E_l^new as the relative environments, it is always possible to obtain, according to (9), an E_Global^new. Then, starting from C_1^new, C_2^new, . . . , C_l^new and E_Global^new, with considerations analogous to those proposed in Section 3, a characterization is always possible.

5 Multi-Layer Interaction Automata (MLIA)

Given n components C_set = {C_1, C_2, . . . , C_n} and E_1, E_2, . . . , E_n, and having defined m partial functions Ψ_k(C_set), it is possible to establish an order relation on the various Ψ_k(C_set) as a function of the cardinality of Ψ_k(C_set):

- Ψ_k(C_set) ≥ 0, because the cardinality is positive semi-definite, π(Ψ_k(C_set)) ≥ 0;
- Ψ_k(C_set) ≥ Ψ_t(C_set) iff π(Ψ_k(C_set)) ≥ π(Ψ_t(C_set)).

Then, having defined m RCIAs, each equipped with its Ψ_k(C_set), it is possible to extend the proposed ordering to RCIAs. Indicating with RCIA_i[Ψ_i(C_set)] the i-th RCIA, generated starting from Ψ_i(C_set), we have:

- RCIA_i[Ψ_i(C_set)] = n iff π(Ψ_i(C_set)) = n, where n is the cardinality of C_set;
- RCIA_k[Ψ_k(C_set)] ≥ RCIA_t[Ψ_t(C_set)] iff Ψ_k(C_set) ≥ Ψ_t(C_set).

We obtain a Multi-Layer Interaction Automata (MLIA) as a composition of RCIA_i[Ψ_i(C_set)] such that every level corresponds to an RCIA, with the following ordering:

- first level: RCIA_i[Ψ_i(C_set)]
- ...
- n-th level: RCIA_k[Ψ_k(C_set)]
- (n+1)-th level: RCIA_q[Ψ_q(C_set)]

where RCIA_i[Ψ_i(C_set)] < . . . < RCIA_k[Ψ_k(C_set)] < RCIA_q[Ψ_q(C_set)]. The considerations developed above imply that it is always possible to establish an isomorphism between the set of components C (with their relative environments) and the set of RCIAs which compose every MLIA. The isomorphism just cited does not imply any partial ordering on the number of states, which can still be achieved with the addition of an explicit condition on Ψ(C_set).
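A minimal sketch of the ordering just described follows (illustrative names only; simple restriction predicates stand in for the partial functions Ψ): each Ψ filters the component set, its cardinality π(Ψ) induces the ordering, and the sorted RCIAs are stacked into the layers of an MLIA.

from dataclasses import dataclass
from typing import Callable, Dict, List

Component = Dict[str, object]

@dataclass
class RCIA:
    """A restricted complete interaction automaton: the components selected by Psi."""
    psi_name: str
    components: List[Component]
    @property
    def cardinality(self) -> int:  # pi(Psi)
        return len(self.components)

def restrict(c_set: List[Component], predicate: Callable[[Component], bool], name: str) -> RCIA:
    """Psi(C_set): keep only the components satisfying one or more properties."""
    return RCIA(name, [c for c in c_set if predicate(c)])

def build_mlia(rcias: List[RCIA]) -> List[RCIA]:
    """An MLIA as an ordered composition of RCIAs, lowest cardinality first."""
    return sorted(rcias, key=lambda r: r.cardinality)

if __name__ == "__main__":
    c_set = [{"id": i, "proactive": i % 2 == 0, "timed": i < 4} for i in range(6)]
    layers = build_mlia([
        restrict(c_set, lambda c: True, "Psi_all"),                                  # cardinality n
        restrict(c_set, lambda c: c["proactive"], "Psi_proactive"),
        restrict(c_set, lambda c: c["proactive"] and c["timed"], "Psi_timed_proactive"),
    ])
    for level, rcia in enumerate(layers, start=1):
        print(f"level {level}: {rcia.psi_name} (pi = {rcia.cardinality})")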

6 The Progressive Refinement and Functional Abstraction of MLIA

Having defined an MLIA by means of k RCIAs, indicated with the following notation:

- first level: RCIA_1[Ψ_1(C_set)]
- ...
- (k−1)-th level: RCIA_{k−1}[Ψ_{k−1}(C_set)]
- k-th level: RCIA_k[Ψ_k(C_set)]

we propose in this section some considerations which will subsequently be exploited.


Definition 6.1 We define as a progressive refinement every succession of partial functions Φ_j(C_set), with j = {1, 2, . . . , k}, such that the cardinality of this succession is monotonically increasing.

Definition 6.2 We define as a functional abstraction every succession of partial functions Θ_v(C_set), with v = {1, 2, . . . , k}, such that the cardinality of this succession is monotonically decreasing.

In accordance with Definitions 6.1 and 6.2 we write:

Lemma 6.3 The progressive refinement corresponds to the functional abstraction of C_set iff Θ_v(C_set) = Φ_j(C_set) and v = k − j + 1.

So, according to the definition of MLIA and Lemma 6.3, we have:

- Ψ_v(C_set) = Θ_v(C_set), with j = {1, 2, . . . , k}
- Ψ_v(C_set) = Φ_{v=k−j+1}(C_set), with j = {1, 2, . . . , k}

At this point, we state the following theorem:

Theorem 6.4 Every MLIA is characterized as a functional abstraction of the components and as a progressive refinement iff it is possible to define two successions of partial functions such that:

- Θ_v(C_set) = Φ_j(C_set), with j = {1, 2, . . . , k}, iff v = k − j + 1
- Ψ_v(C_set) = Θ_v(C_set)
- Ψ_v(C_set) = Φ_{v=k−j+1}(C_set)

Proof. The proof of this theorem is obtained simply by the composition of (6.1), (6.2) and (6.3) and the relative considerations described previously in this section. □
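The two definitions and the theorem lend themselves to a direct check on cardinalities; the Python sketch below (our own illustration, with hypothetical helper names) verifies that a succession of restrictions is a progressive refinement and that its reversal is the corresponding functional abstraction, in the sense of Lemma 6.3.

from typing import Sequence

def is_progressive_refinement(cardinalities: Sequence[int]) -> bool:
    """Definition 6.1: the cardinality of the succession is monotonically increasing."""
    return all(a <= b for a, b in zip(cardinalities, cardinalities[1:]))

def is_functional_abstraction(cardinalities: Sequence[int]) -> bool:
    """Definition 6.2: the cardinality of the succession is monotonically decreasing."""
    return all(a >= b for a, b in zip(cardinalities, cardinalities[1:]))

def corresponds(phi: Sequence[int], theta: Sequence[int]) -> bool:
    """Lemma 6.3: Theta_v = Phi_j with v = k - j + 1, i.e. one succession is the
    reversal of the other."""
    return list(theta) == list(reversed(phi))

if __name__ == "__main__":
    phi = [1, 2, 4, 6]                 # cardinalities pi(Psi_j), one per MLIA level
    theta = list(reversed(phi))
    print(is_progressive_refinement(phi),     # True
          is_functional_abstraction(theta),   # True
          corresponds(phi, theta))            # True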

7 The Platform-based Design Paradigm

The Platform-based Design paradigm was principally developed thanks to the contribution of Prof. A. Sangiovanni-Vincentelli [7] (co-founder of Cadence Inc. [8]), who shed light on its effective use in application domains. A complete and accurate review of this paradigm is provided in [15], where an exhaustive description (with many practical examples) can be found in the proposed articles. The pragmatic way in which the Platform-based Design paradigm has been developed, and its great success in application contexts (not confined to Information Technology but also found in domains such as Microelectronics, Digital Electronics, Electronic Measurement Instrumentation, . . . ), has left it without a rigorous theoretical formulation. In this section we provide a characterization of this paradigm obtained starting from a representation expressed in terms of a Formal Language: this choice is principally motivated by the need


to express the design action (the basic element of this paradigm) together with other features: (i) the platforms and the APIs, (ii) the progressive refinement property. The Formal Language representation permits an adequate formulation of this aspect (the design action), which is principally attributable to its properties of abstraction and compactness. Subsequently, this unconventional characterization will be translated into a more adequate form, which constitutes the fundamental prerequisite for establishing our equivalence relation.

7.1 Representation: A Formal Language Formulation

The main aims of this section are: (i) to provide a formal framework enable to describe the Platform Based Design (ii) to give a more formal description of the G. Sangiovanni-Vincetelli’s intuition. P roj ::= nil | Inf raStmain | Inf raStl | Pd | Api(P roj).P roj | P roj k P roj | Inf raStmain ::= nil | Api(Inf raStmain ).Inf raStmain | | Inf raSti .Inf raStj | Inf raSti + Inf raStj | | Inf raSti k Inf raStj | Inf raSt1 ::= nil | Pi .Inf raSt1 | Pi + Inf raSt1 | | Api(Inf raSti ).Inf raSt1 | Pi k Inf raSt1 | Inf raSt2 ::= nil | Pi .Inf raSt2 | Pi + Inf raSt2 | | Api(Inf raSti ).Inf raSt2 | Pi k Inf raSt2 | ... Inf raStk ::= nil | Pi .Inf raStk | Pi + Inf raStk | | Api(Inf raSti ).Inf raStk | Pi k Inf raStk | P1 ::= nil | a.P1 | P1 + P1 | Api(comp1y ).P1 | P1∗ | P1 k P1 | P2 ::= nil | a.P2 | P2 + P2 | Api(comp2y ).P2 | P2∗ | P2 k P2 | ... Pm ::= nil | a.Pm | Pm + Pm | Api(compmy ).Pm | Pm∗ | Pm k Pm | We use the following notation. A is a set of basic actions, Aτ = A∪{τ }, where τ is used to represent internal activity. The P roj defines the global project (in the design phase), Inf raStmain represents the higher level of Platform abstraction, the set of {Inf raSt1 , Inf raSt2 , . . . , Inf raStn } is the formalization of hierarchical dimension that characterizes the Platforms, and P1 , P2 , . . . , Pm constitute the multilevel formulation of processes. The Api() is a predicate, enable to describe the Application Program Interface (API). 8


Fig. 1. Embedded System for Electronic Measurement of Gas Concentration (ESEMGC)

7.2

A Case Study: Embedded System for Electronic Measurement of Gas Concentration (ESEMGC)

In this section the description of an Embedded System for Electronic Measurement of Gas Concentration (ESEMGC) is presented. The Formal Language proposed above is used to describe the system, showing the effectiveness of this kind of representation. Due to the limited space available, we restrict the formalization to the system's main functionalities. This system, shown in Fig. 1, acquires the gas concentration measurement (provided by a Figaro sensor), together with readings from a thermoresistor and a humidity sensor, in order to compute a self-calibration of the Figaro sensor (the R0 parameter, which expresses the reference condition, normally calculated at 20°C, 1500 ppm and 65% relative humidity). The calibration is based on an Artificial Neural Network and implemented on a microprocessor. Its formal representation is:

Proj ::= InfraSt_main
InfraSt_main ::= InfraSt_1-ANN.(InfraSt_1-gas + InfraSt_1-damp + InfraSt_1-temp)
InfraSt_1-ANN ::= InfraSt_2-KLT.InfraSt_2-MLE
InfraSt_2-KLT ::= Api(InfraSt_3-CORR).InfraSt_3-EIG.InfraSt_3-Projection
InfraSt_3-MLE ::= Api(InfraSt_3-MLE-base).P_R0-estim
InfraSt_1-gas ::= InfraSt_2-gasHw ∥ InfraSt_2-gasSw
InfraSt_2-gasHw ::= [Api(InfraSt_3-gasAc).Api(InfraSt_3-gasCh)] + Api(InfraSt_3-gasTest)
...

7.3

Conventional Representation

Identifying the cardinality of the framework:

(19)  π({InfraSt_1, InfraSt_2, ..., InfraSt_k}) = k

we can express:

(20)  {InfraSt_1, InfraSt_2, ..., InfraSt_k} |= ∪_{i=1}^{k} Platform_i


We can establish that ∪_{i=1}^{k} Platform_i is equipped with the progressive refinement property [6]. The formulation so obtained can represent any object as a composition of elementary components at various degrees of abstraction. Every level corresponds to a combination of these components, expressed in terms of platforms. This multi-level structure represents a progressive refinement of details in which every component can be described: in this way the presented approach maps the object representation onto the observable behaviour of the components, their composition (a platform), and the succession of their compositions (the platforms).

8

Platform-based Design and Interactive Computation Paradigm

In this section the equivalence relation between the two paradigms (Platform-based Design and Interactive Computation) is presented.

Definition 8.1 An API (Application Program Interface) is an interface that masks the properties, methods and events of a component. In this way, APIs are elements provided by Ψ, where Ψ semantically represents the observability of the components.

Definition 8.2 A platform is any structure composed of objects that, by interacting, establish through their dynamical behaviours the functionality of the structure. An RCIA is a platform whose objects are the interacting components (C), possibly including the environment (E_Global).

Proposition 8.3 From the relation
- Platform ≡ RCIA
we obtain, by induction:
- RCIA_1[Ψ_1(C_set)] ≡ Platform_1
- ...
- RCIA_{k−1}[Ψ_{k−1}(C_set)] ≡ Platform_{k−1}
- RCIA_k[Ψ_k(C_set)] ≡ Platform_k

Lemma 8.4 Every MLIA is a set of platforms, ordered by means of a succession of partial functions Ψ. Then, starting from

(21)  ∪_{i=1}^{k} RCIA_i[Ψ_i(C_set)] |= MLIA

we obtain

(22)  ∪_{i=1}^{k} Platform_i |= MLIA


Based on Proposition 8.3 and on the partial ordering formulated from the succession of Ψ, it is possible to establish:

Lemma 8.5 Defining the Platform_i as a model for every MLIA, their composition is equipped with the properties of progressive refinement and functional abstraction, as defined in Theorem 6.4. Then

(23)  ∪_{i=1}^{k} Platform_i |= {Θ, Φ}

iff the Platform_i are defined a priori; otherwise

(24)  {Θ, Φ} |= ∪_{i=1}^{k} Platform_i

As a direct consequence, the Platform-based Design Paradigm is a special kind of MLIA, equipped with components at various levels of abstraction (APIs), characterized by relations expressed in the codomain of Ψ, and endowed with the properties of progressive refinement and functional abstraction (Θ, Φ). Since every MLIA is built starting from the Interactive Computation Paradigm, this result establishes an equivalence relation between that paradigm and the well-known Platform-based Design Paradigm, under the conditions expressed by the MLIA construction.

9

Future Work

The results presented in this paper show that it is possible to build complex computational structures starting from the Interactive Computation Paradigm, and to establish equivalence relations with other paradigms. Our attention now turns to modelling these relations by means of articulated structures that permit the identification of the properties of each paradigm.

10

Conclusion

In this paper an equivalence relation between two paradigms (Interactive Computation and Platform-based Design) has been presented. The direct implications of this approach allow us to identify, by construction, principles for the effective and reliable engineering of interactive systems, and to enlarge the expressiveness of the Interactive Computation paradigm.

References

[1] Gianfelici F., Biagetti G., Crippa P. and Turchetti C.: A Novel KLT Algorithm Optimized for Small Signal Sets, Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2005), Philadelphia, USA.

[2] Gianfelici F. and Turchetti C.: A Stochastic Process Recognizer, Italian Patent (in internationalization phase), 2004, Dep. Num. AN2004A000050.


[3] Viroli M. and Ricci A.: Instructions-based semantics of agent mediated interaction, in Nicholas R. Jennings, Carles Sierra, Liz Sonenberg, and Milind Tambe (eds.), 3rd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004), volume 1, pages 102–110, New York, USA, ACM.

[4] Ricci A., Viroli M. and Omicini A.: Agent Coordination Context: From Theory to Practice, in Robert Trappl (ed.), Cybernetics and Systems 2004, volume 2, pages 618–623, Austrian Society for Cybernetic Studies, 17th European Meeting on Cybernetics and Systems Research (EMCSR 2004), Vienna, Austria, 2004.

[5] Keutzer K., Newton A.R., Rabaey J.M., Sangiovanni-Vincentelli A.: System-level design: orthogonalization of concerns and platform-based design, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 19, 2000, 1523–1543.

[6] Burch J., Passerone R., Sangiovanni-Vincentelli A.L.: Modeling Techniques in Design-by-Refinement Methodologies, Proceedings of Integrated Design and Process Technology, 2002.

[7] Ferrari A., Sangiovanni-Vincentelli A.: System Design: Traditional Concepts and New Paradigms, Proceedings of the International Conference on Computer Design (ICCD '99), 1–12, 1999.

[8] Cadence Inc.: www.cadence.com

[9] Vahid F., Givargis T.: Embedded System Design: A Unified Hardware/Software Introduction, John Wiley and Sons, 2002.

[10] Goldin D., Smolka S., Attie P. and Sonderegger E.: Turing Machines, Transition Systems, and Interaction, Information and Computation, Volume 194, Issue 2, Nov. 2004, pp. 101–128.

[11] Wegner P.: Paraconsistency of Interactive Computation, PCL 2002 (Workshop on Paraconsistent Computational Logic), Denmark, July 2002.

[12] van Leeuwen J. and Wiedermann J.: On the power of interactive computing, in J. van Leeuwen (ed.), IFIP TCS 2000 Conference: Theoretical Computer Science – Exploring New Frontiers of Theoretical Computer Science, pp. 619–623, Springer-Verlag, Berlin.

[13] Wiedermann J. and van Leeuwen J.: A computational model of interaction in embedded systems, Technical Report UU-CS-2001-02, Utrecht University, Information and Computing Sciences, Utrecht, The Netherlands.

[14] van Leeuwen J. and Wiedermann J.: On algorithms and interaction, in M. Nielsen and B. Rovan (eds.), Mathematical Foundations of Computer Science 2000 – 25th Int. Symposium, pp. 99–112, Springer-Verlag, Berlin.

[15] www-cad.eecs.berkeley.edu/HomePages/alberto/pubslast/pubslast.html


Interactions in Transport Networks

Nigel Walker and Marc Wennink

BT Research, B54, 141, Adastral Park, Ipswich, IP5 3RE, UK

Abstract

We present a model that captures basic interactions occurring in transport networks, including routing and flow control. Many network processes can be seen as solving an optimisation problem, or seeking a balance between competing interests. The problem structure is illustrated by means of a ‘component graph’, which dictates the communication and interaction patterns between different parts of the system. We show how the same formalism also captures interactions in electrical circuits.

Key words: Optimisation, routing, Lagrangian duality, interaction, graphical models.

1

Introduction

We are interested in developing techniques to model and specify processes and interactions occurring in communications networks, primarily transport networks, but ultimately other components of shared infrastructure, much of which also incorporates elements of network functionality. In this setting interactions can take place across many different ‘axes’ (between different users, between users and operators, between nodes or between layers of the network, between flows and costs), and over quite different timescales. Each process or interaction takes place within a larger spatial and temporal environment. Many network processes can be seen as solving some kind of optimisation problem (e.g. minimum cost routing) or, more generally, as seeking a balance between competing interests (e.g. sharing available capacity). From a computational point of view, the problem is to find the values of parameters (variables) in the system that achieve such an optimum, or balance point. This must be done dynamically, and often as a distributed calculation, in response


to changes in the environment. For example, a routing protocol must continually respond to changes in availability of links, adjusting flows accordingly. Although we do not report on it directly in this paper, we believe there is much value to be gained in developing high level (programming) languages to analyse and organise network functionality and structure. Advantages we anticipate from such an approach include better statement of management issues, exposure of options for refinement into different protocols, comparison of different design options within a common language framework, and precision about the amount of network state, numbers of variables, and naming structure. Here we concentrate on a model of interaction that should underpin a language. We draw on the mathematics of optimisation, which puts our work in parallel with other recent work using optimisation theory in the design and analysis of networks and protocols [10,12]. We also show how the formalism can be used to describe fundamental interactions in electrical networks, which suggests opportunities for transfer of concepts between the two domains.

2

Saddle points and duality

The general mathematical setting for our model is conceptually more general than optimisation. We study the problem of finding a saddle point of a convex-concave function of typically many real-valued variables. We usually call this (real-valued) function, L, the Lagrangian, on the basis that it can often (but not always) be derived from a Lagrange relaxation of an optimisation problem. The arguments of L are separated into primal decision variables, x = (x1, ..., xn), and dual decision variables, y = (y1, ..., ym). A Lagrangian, L, defined over a domain X × Y is convex-concave if and only if

• for any x1 ∈ X, x2 ∈ X, y ∈ Y and 0 ≤ α ≤ 1 we have αx2 + (1 − α)x1 ∈ X, and L(αx2 + (1 − α)x1, y) ≤ αL(x2, y) + (1 − α)L(x1, y);

• for any y1 ∈ Y, y2 ∈ Y, x ∈ X and 0 ≤ β ≤ 1 we have βy2 + (1 − β)y1 ∈ Y, and L(x, βy2 + (1 − β)y1) ≥ βL(x, y2) + (1 − β)L(x, y1).

A point (x*, y*) is a saddle point of L(x, y) if

(1)  L(x*, y) ≤ L(x*, y*) ≤ L(x, y*)   ∀x ∈ X, ∀y ∈ Y.

A saddle point can be interpreted as an equilibrium configuration for a game in which players are associated with variables. The primal variables try to minimise the value of the Lagrangian, and the dual variables try to maximise it. Mutually conflicting interests reach an accommodation in a saddle point. The restriction to convex-concave functions seems severe, but many network problems turn out to have a Lagrangian of this form. In particular, many network flow problems can be stated as a linear program [2]. Let A be an m × n-matrix, b and y be m-vectors, and c and x be n-vectors. A linear program of the form

(2)  min cx  s.t.  Ax = b, x ≥ 0


has a corresponding Lagrangian function

(3)  L(x, y) = cx − yAx + yb,   x ≥ 0.

If (x*, y*) is a saddle point of L(x, y) in (3) then x* is an optimal solution to the corresponding linear program (2). More generally, a convex-concave Lagrangian can be associated with any convex optimisation problem, not just those formulated as linear programs. Saddle points for a Lagrangian derived in this way are characterised by the well-known Karush-Kuhn-Tucker conditions [5]. To simplify the analysis we will only consider Lagrangian functions that are differentiable in the neighbourhood of their saddle point. To this end, we introduce barrier functions into (3), which we can think of as enforcing the non-negativity constraints x ≥ 0. A logarithmic barrier function is widely used in a class of solution techniques, called interior point methods, for solving optimisation problems [5]. With this modification the Lagrangian function (3) becomes

(4)  L(x, y) = cx − yAx + yb − Σ_{j=1}^{n} ε_j ln(x_j),   x > 0,

where ε_j > 0, j = 1, ..., n. Now it is sufficient to require that

(5)  ∂L/∂x_j = 0, j = 1, ..., n,    ∂L/∂y_i = 0, i = 1, ..., m

at (x*, y*) for this to be a saddle point. As the values of ε_j are reduced, the saddle point of (4) more closely approximates that of (3). If we had started with a linear program of the form

(6)  min cx  s.t.  Ax ≥ b, x ≥ 0

instead of (2), then the Lagrangian (3) would also be restricted to y ≥ 0 and barrier functions +δ_i ln(y_i), i = 1, ..., m, would be added to (4).
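To make equations (2)-(5) concrete, here is a small numerical sketch. The problem data, step sizes and iteration count are invented for illustration; it builds the barrier Lagrangian (4) for a two-variable linear program and locates an approximate saddle point by descending in x and ascending in y, with the small y-regularisation mentioned later in Section 4 added to keep the iteration well behaved:

```python
# Sketch of the barrier Lagrangian (4) and its stationarity conditions (5)
# for a toy linear program: min c.x s.t. Ax = b, x >= 0.
# All problem data here are invented purely for illustration.
import numpy as np

A = np.array([[1.0, 1.0]])      # single constraint x1 + x2 = 1
b = np.array([1.0])
c = np.array([1.0, 2.0])
eps = 0.05                      # barrier weights eps_j (all equal, deliberately loose)
delta = 1e-3                    # tiny -delta*|y|^2 term, in the spirit of the yMy fix

def grad_x(x, y):               # dL/dx = c - A^T y - eps/x
    return c - A.T @ y - eps / x

def grad_y(x, y):               # dL/dy = b - A x - 2*delta*y (regularised)
    return b - A @ x - 2.0 * delta * y

x = np.array([0.5, 0.5])
y = np.zeros(1)
step = 0.005
for _ in range(50000):
    x = np.clip(x - step * grad_x(x, y), 1e-6, None)   # minimise over x
    y = y + step * grad_y(x, y)                         # maximise over y

print("x ~", x)       # most of the unit flow on the cheaper variable, roughly (0.95, 0.05)
print("Ax - b ~", A @ x - b)    # primal feasibility, up to the barrier and regularisation
print("y ~", y)       # approximate dual price, close to the cheaper cost of 1
```

With this fairly large ε the saddle point is only a loose approximation of the LP optimum (1, 0); shrinking ε tightens the approximation, as the text notes, at the price of stiffer dynamics.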

3

Decomposition and distribution

To explain how a saddle point problem can be distributed we introduce a running example. We seek the shortest paths to a given destination in a communication network, which can be formulated in the standard way as a minimum cost flow linear program, with Lagrangian of the form (3). The flow over link j is determined by a primal decision variable xj, while the dual variable yi becomes the distance (or cost) from node i to the destination node. A cost of cj per unit flow is imposed at link j, and flow bi is injected into node i. We can choose bi = 1 if i is one of the m−1 ingress nodes, and bi = 1−m to sink all the flow at destination node i. The matrix A is the incidence matrix of the network; Aij = 1 if node i is a source of link j, Aij = −1 if node i is a target of link j, and Aij = 0 otherwise. The network is assumed connected. The solution is degenerate in the dual variables, which is normally handled by requiring the destination node to set yi = 0.


Fig. 1. Component graph for shortest path routing, showing underlying network structure, and decomposition for Bellman-Ford algorithm.

The Lagrangian function can be written as a sum of separate components, the structure of which can be presented graphically. We show this for a small 5-node (=m), 7-link (=n) network in Fig. 1. Primal variables are enclosed in circles and dual variables in squares. A component of the Lagrangian function is represented by a blob, which is connected to the variables on which it depends. Each link in the communication network corresponds to a column of the incidence matrix. For example, in Fig. 1, xa participates in two components −y1 xa and +y2 xa deriving from the incidence matrix, together with cost component ca xa, and an explicit log-barrier component, −εa ln(xa), discussed above, which we represent by an open circle. This graphical presentation is related to factor graphs in belief propagation networks, Tanner graphs in decoding theory, and constraint graphs in constraint programming [3,4]. We use the name ‘component graph’ to emphasise that it derives from a straightforward translation of the Lagrangian function. A nice feature is that, for this problem formulation, it naturally reflects the underlying network topology. In section 5 we will see that the same holds for component graphs derived from electrical circuits. We can distribute the problem of finding a saddle point by partitioning the component graph. Figure 1 shows such a partitioning for the shortest path problem. Each node ‘owns’ the variables in its shaded region: one primal variable for each out-link and one dual variable for the distance label. For each node, we can obtain a locally perceived Lagrangian by collecting all the components that involve any of the variables owned by that node. For node 4, for example, we find

(7)  L4(xe, xf, y4) = (ce + ȳ2)xe + (cf + ȳ3)xf − y4 xe − y4 xf + y4(b4 + x̄b + x̄g) − εe ln(xe) − εf ln(xf)

where we use the notation ȳi and x̄j to indicate that a variable is owned by another node. Thus, from the point of view of node 4, y2 is perceived as environmental. A locally perceived Lagrangian changes whenever neighbouring nodes change the values of their decision variables. If a node has found a saddle point of its local Lagrangian, the derivatives (5) with respect to the


variables it owns are zero. Because every decision variable in the global Lagrangian is owned by exactly one node, it follows that the network as a whole is in a saddle point if and only if all nodes are simultaneously in a saddle point of their local Lagrangian. Each local saddle point problem is formally similar to the global saddle point problem, and the decomposition into sub-problems can, in general, be continued recursively. The above discussion suggests a straightforward distributed approach to finding the network saddle point: whenever a node receives new information about the values of its neighbours’ variables, it solves its local problem and in turn communicates the newly selected values of its own variables to the relevant neighbouring nodes. In the case of the shortest path problem this procedure converges to a solution, and recreates the Bellman-Ford algorithm: values of the dual variables are sent upstream in the communication network as ‘distance labels’, while values of the primal variables are given by the flows sent downstream. The Bellman-Ford algorithm is at the heart of distance vector protocols, such as RIP, widely used in the Internet [9]. The procedure described above, in which it is implied that the variables owned by an individual node (or ‘sub-process’) are adjusted instantaneously to the locally perceived saddle point, does not necessarily converge to a global saddle point for an arbitrary Lagrangian. An obvious expedient to fix this is instead to adjust the values of the decision variables incrementally. This line of reasoning leads to a dynamic system equation, described in the next section, in which the decision variables converge to a global saddle point for any strictly convex-concave Lagrangian.
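For the shortest-path Lagrangian, the distributed procedure just described is the familiar distance-vector iteration. The sketch below uses an invented topology and link costs purely for illustration; each pass lets every node re-solve its local problem by taking the minimum over its out-links, which is exactly the Bellman-Ford update:

```python
# Bellman-Ford / distance-vector reading of the local saddle-point updates.
# The topology and link costs below are illustrative assumptions only.
INF = float("inf")

# Directed links (u, v, cost): flow on link (u, v) is a primal variable,
# the distance label of node u is the dual variable y_u.
links = [("1", "2", 1.0), ("1", "3", 4.0), ("2", "3", 1.0),
         ("2", "4", 5.0), ("3", "4", 1.0)]
destination = "4"
nodes = {u for u, _, _ in links} | {v for _, v, _ in links}

y = {n: INF for n in nodes}      # dual variables: distance labels
y[destination] = 0.0             # the degenerate dual is pinned at the destination

changed = True
while changed:                   # each pass: every node re-solves its local problem
    changed = False
    for u, v, cost in links:
        if cost + y[v] < y[u]:   # a better label arrived from a downstream neighbour
            y[u] = cost + y[v]
            changed = True

# The primal solution (the flow) follows the links that attain the minimum.
next_hop = {u: min(((c + y[v], v) for uu, v, c in links if uu == u),
                   default=(INF, None))[1]
            for u in nodes if u != destination}
print(y)          # e.g. node "1" ends up with label 3.0 via 1 -> 2 -> 3 -> 4
print(next_hop)
```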

4

Evaluation as dynamic system

Here we assume that the values of the variables change continuously, and at a rate such that the propagation delay incurred when exchanging messages can be ignored. We discuss these assumptions later. It is also assumed that the Lagrangian is at least once differentiable. The following dynamic equation can then be motivated as an intuitive method of finding the saddle point, i.e. the minimum of the Lagrangian with respect to x and the maximum with respect to y,

(8)  dx/dt = −λ . ∇x L,    dy/dt = +µ . ∇y L,

where x ∈ R^n and y ∈ R^m are vectors containing the values of the decision variables, λ ∈ R^{n×n} and µ ∈ R^{m×m} are positive definite matrices, usually assumed diagonal, and ∇x L and ∇y L are the vectors of partial derivatives of L with respect to the primal and dual variables respectively. Assume that the Lagrangian is strictly convex-concave. Then it has a unique saddle point (x*, y*). We would like to know that a solution (x(t), y(t)) of (8) converges to (x*, y*) as t → ∞. We can do this by constructing a Lyapunov function. Assume the saddle point (x*, y*) of the Lagrangian is at


(Figure 2, rendered in text form: each variable node i integrates dz_i/dt = −λ Σ_{l ∈ W(i)} ∂L_l/∂z_i = −λ Σ_{l ∈ W(i)} L′^(li)(z); the message Z^(ik) sent from variable node i to component L_k carries z_i, and the return message is L′^(ki) = ∂L_k/∂z_i evaluated at z_j = Z^(jk), j ∈ V(k).)

Fig. 2. Message passing rules to implement dynamic system evaluation strategy of equation (8).

the origin (0, 0) and define

(9)  φ = ½ ( xᵗ λ⁻¹ x + yᵗ µ⁻¹ y )

where x, y and hence φ are time varying quantities. Note that λ and µ are positive definite matrices, so can be inverted. Then

(10)  dφ/dt = −xᵗ ∇x L + yᵗ ∇y L

If L(x, y) is once differentiable then, on account of its strict convex-concavity,

(11)  L(x2, y) − L(x1, y) > (x2 − x1)ᵗ [∇x L](x1, y)
      L(x, y2) − L(x, y1) < (y2 − y1)ᵗ [∇y L](x, y1)

for all x, y and x1 ≠ x2, y1 ≠ y2 [5]. Setting x1 = x, y1 = y, x2 = x* = 0 and y2 = y* = 0 in (11) and substituting in (10) gives

(12)  dφ/dt < L(x*, y) − L(x, y*) ≤ 0

where the second inequality follows from the definition of a saddle point (1). The function φ(x, y) is decreasing everywhere, yet is non-negative, and therefore constitutes a suitable Lyapunov function. If we are given a Lagrangian that is convex-concave, but not strictly so, then it may be possible to modify it so that the above result is applicable. For example, the Lagrangian of (4) can be made strictly convex-concave by subtracting a term yMy, where M is a small positive definite matrix. To find the saddle point in a distributed system we could try to synthesise the dynamic system equations (8). This can be done through a simple protocol in which messages are (visualised as) passed in both directions ‘over the component graph’, as illustrated in Fig. 2. The protocol does not distinguish between primal variables or dual variables, so the symbol z is used for either. Here V(k) is the set of suffixes of the variables participating in component Lk, and W(i) is the set of suffixes of the components that depend on zi. A message Z^(ik) sent from variable node i to Lagrangian component Lk carries the (most recent) value of the decision variable zi, and a message L′^(ki) sent from component k to variable i is the partial derivative ∂Lk/∂zi evaluated at


the most recently received zj = Z^(jk), j ∈ V(k). The variable node i maintains a value for zi which is adjusted in accordance with the local dynamic equation shown in Fig. 2. The matrices λ and µ are assumed diagonal. The component node k must calculate the gradient of component Lk with respect to all the zi, i ∈ V(k). The question arises as to when the above protocol accurately synthesises, or approximates, the continuous dynamic equations (8). Two groups of parameters need to be considered:

• The values of λ and µ, which determine the rates at which the decision variables are adjusted. These have to be chosen sufficiently small so that propagation delay of messages can be neglected.

• The frequency of sending messages. This must be sufficiently large compared with the rate of adjustment of the decision variables.

The question of determining upper limits for λ and µ, or lower limits on the frequency of sending messages, is complicated in general, and is the province of control theory, dynamic systems analysis, and sampling theory. Significantly, it is possible to ‘derive’ the Bellman-Ford algorithm from this generic protocol. Consider the messages exchanged between nodes 1 and 2 in the shortest path routing problem of Fig. 1. Let Lk = y2 xa. Node 2 sends Z^(2k) = y2, i.e. its distance label, to its upstream neighbour and gets in return L′^(k2) = ∂Lk/∂y2 = xa, the value of the flow variable. Similarly for the messages exchanged between other nodes. Now reduce the rate at which messages are sent between nodes, while maintaining a high rate of message passing, and rate of adjusting variables, within a node. Each node then appears to adjust the values of its variables as a transition when observed at the timescale commensurate with messages passed between nodes. This gives the Bellman-Ford algorithm. Evidently, exchange of messages according to rules such as those in Fig. 2 can yield useful algorithms, even if the assumptions of the previous paragraph are broken. Although message passing is an expensive implementation of the computation occurring within a node, we have nevertheless simulated this particular scenario as an intermediate case between a transition semantics and a dynamic system semantics of the computation over the component graph. The general protocol described above, and illustrated in Fig. 2, has strong similarities with so-called ‘message passing algorithms’ from decoding and information theory [3], and constraint propagation algorithms from constraint programming [4]. We arrived at the rules in Fig. 2 through a metamorphosis of the ‘min-plus’ algorithm described in [3], though there the messages exchanged between nodes are functions, rather than values. In constraint programming the messages read or write to a store constraints on, or domains of, the variables.
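As a rough executable picture of the Fig. 2 protocol, the following sketch keeps the Lagrangian as a list of components, each touching a few variables, and advances every variable by the summed partial derivatives of the components it participates in, a crude Euler discretisation of equation (8). The toy Lagrangian, gains and iteration count are our own choices for illustration:

```python
# Sketch of the Fig. 2 message-passing rules for a tiny Lagrangian
#   L(x, y) = 0.5*x**2 + y*(b - x)
# split into two components. Everything below is an illustrative toy.
b = 2.0

# Each component knows which variables it touches and how to report its
# partial derivative with respect to one of them (the L'(ki) messages).
components = [
    {"vars": ["x"], "grad": lambda z, v: z["x"]},                     # 0.5*x^2
    {"vars": ["x", "y"],
     "grad": lambda z, v: (-z["y"]) if v == "x" else (b - z["x"])},   # y*(b - x)
]

z = {"x": 0.0, "y": 0.0}          # decision variables (the Z(ik) messages carry these)
sign = {"x": -1.0, "y": +1.0}     # primal variables descend, dual variables ascend
rate = {"x": 0.05, "y": 0.05}     # the diagonal entries of lambda and mu

for _ in range(2000):
    # every variable node sums the derivative messages from its components
    msgs = {v: sum(c["grad"](z, v) for c in components if v in c["vars"]) for v in z}
    for v in z:
        z[v] += sign[v] * rate[v] * msgs[v]     # Euler step of equation (8)

print(z)   # approaches the saddle point x = y = b = 2
```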


Fig. 3. Simple LCR circuit and its component graph

5

Analogy with electrical circuits

It is worth emphasising a formal correspondence between electrical networks and communication networks, as this provides considerable scope for the transfer of concepts such as impedance, passivity, small signal analysis, frequency domain techniques, etc. as well as the notion of interaction, from one setting to the other. It is routine for electrical engineers to characterise the behaviour of circuits differently at different frequencies, or over different timescales, whereas this style of thinking appears to be less thoroughly exploited in reasoning about communications networks and protocols. Figure 3 shows a simple LCR circuit using conventional electrical symbols, and its translation into a component graph and associated Lagrangian function. The flow (current) and potential (voltage) variables may now be either positive or negative, so no barrier functions are required. The main structure of the component graph is still determined by the incidence matrix. A resistor gives rise to a quadratic ‘cost’ component on the link flow variable. A link inductance L associates λ = 1/L with a flow variable, and a capacitor C gives µ = 1/C for a potential variable. The corresponding values for the remaining variables are assumed to be high. In other words, they are parasitic capacitances or inductances. The interpretation of λ and µ as reciprocal inductance and capacitance gives an energy interpretation to the Lyapunov function (9). The implication of the circuit diagram is that the parasitic modes of oscillation can be ignored—they are assumed to be ‘out of band’, and to decay quickly—so that all the interesting dynamics are determined by the values of the explicitly indicated inductance and capacitance. Although we do not present the details, the procedure by which these parasitic modes are eliminated, thereby recovering the circuit equations that would conventionally be associated with the circuit diagram, is standard in dynamic systems theory [11]. This example also illuminates the assumption concerning propagation delay outlined in the previous section. It is a standard ‘lumped circuit’ treatment of an electrical circuit. The diagram of Fig. 3 implies that the delay can be


ignored at the frequencies that are of interest. Outside of this regime, a propagation delay must be made explicit by including a waveguide in the circuit, and performing a transmission line analysis.

Fig. 4. Component graph for congestion routing
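To connect the circuit reading of Section 5 to equation (8) numerically, here is a deliberately minimal sketch: a single resistor-inductor loop driven by a constant source, chosen by us for illustration. The link current is a primal variable with a quadratic resistor ‘cost’, and λ = 1/L turns the saddle-point dynamics into the familiar circuit equation di/dt = (V − Ri)/L:

```python
# Minimal circuit reading of equation (8): one series R-L loop driven by V.
# The Lagrangian for the single link current x is L(x) = 0.5*R*x**2 - V*x,
# and the primal dynamics dx/dt = -(1/L) * dL/dx reproduce the RL equation.
# Component values are illustrative assumptions.
R = 10.0      # ohms
L_ind = 0.5   # henries  (lambda = 1/L_ind)
V = 5.0       # volts

def dLdx(x):
    return R * x - V            # derivative of the quadratic "cost" component

x = 0.0                         # current, starting from rest
dt = 1e-4
for _ in range(50000):          # simulate 5 seconds
    x += -(1.0 / L_ind) * dLdx(x) * dt

print(x, "A  (steady state V/R =", V / R, "A)")
```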

6

Congestion routing

We now extend our running network example, ‘zooming out’ to include more of the environment. We no longer focus on flow to a single destination, but consider three different flows, α = i, . . . , iii, each with a different destination. The injected flow levels are no longer constant, but determined by variable demands dα. We use utility functions uα(dα) to capture the users’ appetite for sending flow. Also, the cost of each link has a variable component, associated with the level of congestion on that link. This variable, pj, depends on the capacity kj of the link and the total flow it carries. (We could have zoomed out even further and included provisioning within the model, promoting the capacities themselves to become dynamic variables.) The Lagrangian for this scenario is shown as a component graph in Fig. 4. It can be related to a variant of the multi-commodity flow optimisation problem, but here we want to use it to expose various types of interactions within transport networks. As drawn, Fig. 4 emphasises the interaction between demands (top), chosen routes (middle), and congestion levels (bottom). When demands increase, the congestion levels on the shortest paths will increase, forcing the routing process to find alternative paths. In most transport networks, routing and


congestion control are not combined as directly as this because delay can lead to instabilities, so-called ‘route-flapping’. The usual practice is to decouple flow control from routing, as is the case in TCP/IP for example [9]. However, this decoupling can be illusory. Observed over a long enough timescale these interactions do occur. An alternative lay-out of the component graph can be created to emphasise the interaction between the different nodes in the network. Each variable is associated with one of the nodes. For example, we can associate the demand variable dα with the source node of flow α and the congestion variable pj with the source node of link j. As in the case of the Bellman-Ford decomposition in Fig. 1, we can then identify the local problems that have to be solved by each node and the patterns of communication that have to be established between the nodes. A third decomposition is obtained by grouping the variables involved in the demand and routing processes by flow type. This view emphasises the interaction between the three flows, competing for the limited available capacity. We can then interpret the bottom of Fig. 4 as a market process mediating between the flows, and the pj as congestion prices. High congestion prices force flows to re-route, or demands to reduce. This economic perspective has permeated recent work on congestion control mechanisms in communication networks [10,1,7], and market management mechanisms for distributed systems generally [13,6]. On the other hand, we could develop an electrical reading in which notions such as resistance (increase in congestion price with demand), dissipation, passivity, inductance and capacitance provide the intuitive framework.
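The demand/price interaction at the top and bottom of Fig. 4 can be caricatured in a few lines. In the sketch below the utility weights, capacity and adjustment gains are our own toy choices, in the spirit of the rate-control models of [10]: each demand climbs its utility gradient net of the congestion price, while the price rises whenever total flow exceeds capacity.

```python
# Toy sketch of the demand / congestion-price interaction of Section 6,
# in the spirit of utility-based rate control [10]. All numbers are
# illustrative assumptions: three demands share one congested link.
w = [1.0, 2.0, 3.0]        # utility weights: u_alpha(d) = w_alpha * log(d)
capacity = 10.0            # capacity k_j of the shared link
d = [1.0, 1.0, 1.0]        # demands (primal variables)
p = 0.0                    # congestion price on the link (dual variable)
kappa, gamma = 0.05, 0.05  # adjustment gains

for _ in range(20000):
    # each user moves its demand towards u'_alpha(d) = p, i.e. w/d = p
    d = [max(1e-6, di + kappa * (wi / di - p)) for di, wi in zip(d, w)]
    # the link raises its price when over-subscribed, lowers it otherwise
    p = max(0.0, p + gamma * (sum(d) - capacity))

print([round(di, 2) for di in d], round(p, 3))
# expect demands roughly proportional to the weights, filling the capacity:
# d ~ (1.67, 3.33, 5.0) and price p ~ sum(w) / capacity = 0.6
```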

7

Discussion and further work

We are experimenting with programming ideas, using simple extensions of the Scheme programming language to express component graphs, their decomposition and different types of message exchange. We have encoded several simulations of basic network interactions, including routing, congestion control, layering, flow control and facility placement, and have experimented with different formulations and decompositions of these problems. Our efforts are at a preliminary stage, but it is clear that the expressiveness offered by a high level language would be a very valuable tool for reasoning about network structure and functionality. The dynamic system equations (8), which ignore propagation delay and individual message passing events, provide a naive ‘evaluation strategy’. It is obviously important to include the opposite extreme in which variables can change their values over timescales much shorter than the propagation delay, in which case we can think in terms of transitions occurring at the nodes, as would be the case in a practical implementation of the Bellman-Ford algorithm. The mathematical programming (optimisation) setting suggests a ‘denota-


tional semantics’ as the saddle point of the Lagrangian, which is the (stable) equilibrium configuration of the system, or part of the system under consideration. In a completely interactive setting this equilibrium may never be achieved if the environment keeps changing, and our proof of convergence is only valid while the Lagrangian remains constant. Link failures, or users joining and leaving, would break this assumption for the multi-commodity flow example of section 6. The focus on convex-concave problems seems restrictive, but it captures a wide variety of problems relevant to networking, and greatly simplifies algorithms. From the point of view of constraint satisfaction, or logic programming, it avoids the need for any backtracking or search, which is hard to do in a distributed setting. Convex-concavity should therefore be a design goal, assuming such freedom is available. Networks can exhibit phenomena where routing or congestion control can stabilise in either a high-throughput or an unwanted low-throughput state. The uniqueness of the saddle point safeguards against ‘network-wide’ transitions between such states. Configuration problems occurring in BGP routing suggest a failure to ensure convexity, and can lead to unwanted Nash equilibrium states and instabilities in the inter-domain routing infrastructure [8]. There is a question as to whether the types of processes we describe here are of interest as an instance of interactive computation. The emphasis seems different to, say, process algebra. We answer in part by pointing out that network problems, such as the examples presented above, provide a non-trivial class of ongoing interactions. Moreover, concerning expressing such problems in any language, the notions of abstraction, composition, decomposition and reuse of patterns are all just as relevant in this setting as they are in mainstream functional or object oriented programming. Also, our experimental evaluators require an extension of the structure used in, say, functional programming, as variables must be dynamic, evaluation must take place over cyclic (component) graphs, and different timescales must be taken into consideration. The model as presented captures a wide variety of distributed processes and, as such, might be thought of as a generic signalling or ‘control plane’ model for communication networks. It emphasises duality. It is backwards compatible with electrical circuit theory, which offers some intriguing avenues for development, as well as for transfer of concepts, such as richer frequency domain analysis. It is also compatible with much economic theory. The fact that the same model underpins different interpretations applicable at different levels is encouraging, and suggests it might have a role to play in understanding aspects of global computation. Finally, communication networks expose an important part of the ‘parameter space’ of interactive computing.


References

[1] M3i, market managed multiservice internet, http://www.m3i.org/.

[2] Ahuja, R. K., T. L. Magnanti and J. B. Orlin, “Network Flows,” Prentice-Hall, 1993.

[3] Aji, S. M. and R. J. McEliece, The generalized distributive law, IEEE Transactions on Information Theory 46 (2000), pp. 325–343.

[4] Apt, K. R., “Principles of Constraint Programming,” Cambridge University Press, 2003.

[5] Boyd, S. and L. Vandenberghe, “Convex Optimization,” Cambridge University Press, 2004.

[6] Dash, R. K., N. R. Jennings and D. C. Parkes, Computational-mechanism design: A call to arms, IEEE Intelligent Systems (2003), pp. 40–47.

[7] Gibbens, R. J. and F. P. Kelly, Resource pricing and the evolution of congestion control, Automatica 35 (1999), pp. 1969–1985.

[8] Griffin, T. G., F. B. Shepherd and G. Wilfong, The stable paths problem and interdomain routing, IEEE Transactions on Networking 10 (2002), pp. 232–243.

[9] Huitema, C., “Routing in the Internet,” Prentice Hall, 2000.

[10] Kelly, F., A. Maulloo and D. Tan, Rate control in communication networks: shadow prices, proportional fairness and stability, Journal of the Operational Research Society 49 (1998), pp. 237–252.

[11] Khalil, H. K., “Nonlinear Systems,” Pearson Education, 2000, 3rd edition.

[12] Low, S., F. Paganini and J. Doyle, Internet congestion control: An analytical perspective, IEEE Control Systems Magazine (2002).

[13] Wellman, M. P., A market-oriented programming environment and its application to distributed multicommodity flow problems, Journal of Artificial Intelligence Research 1 (1993), pp. 1–23.


Towards a Logical Analysis of Interactive Systems

Ian A. Mason
School of Mathematics, Statistics, and Computer Science, University of New England, Armidale, NSW 2351, Australia

Carolyn L. Talcott
Computer Science Laboratory, SRI International, Menlo Park, CA 94025, USA

Abstract

Formalization in a logical theory can contribute to the foundational understanding of interactive systems in two ways. One is to provide language and principles for specification of and reasoning about such systems. The other is to better understand the distinction between sequential (Turing equivalent) computation and interactive computation using techniques and results from recursion theory and proof theory. In this paper we briefly review the notion of interaction semantics for actor systems, and report on work in progress to formalize this interaction model. In particular we have shown that the set theoretic models of the formal interaction theory have greater recursion theoretic complexity than analogous models of theories of sequential computation, using a well-known result from recursion theory.

Key words: Actors, interaction semantics, Feferman theories, recursion theory.

1

Introduction

An important challenge for foundations of interactive computation is to provide a basis for specification and reasoning about interactive systems, eventually leading to principled methods for design, implementation, and deployment of such systems. A foundation should identify primitives for specification that in combination with

The authors wish to thank Michael Beeson for helpful discussions, and the anonymous reviewers for helpful criticisms. The work was partially supported by NSF grant CCR-023446.


appropriate logical constructs lead to specification languages and logics for reasoning about the behavior of specified systems, and means of checking (statically or dynamically) that a given system meets its specification. Another challenge is to better understand the distinction between interactive computation and computability in the sense of Turing machines or lambda calculus. Intuitively it seems clear that interactive computation is not equivalent to Turing computation. The question is how to make this intuition more precise. Traditionally, notions of computability are strongly tied to complexity of fragments of first-order and other logics. We propose that one way to begin to understand the distinction between sequential (Turing equivalent) computation and interactive computation is to understand the power of the logics needed to formally represent models of interactive computation. To explore this idea in more depth, we examine a formalization, currently being developed, of the interaction semantics of actor systems. To be clear, we are not proposing a new model of interactive computation, but rather analyzing the expressive power of an existing model in comparison to sequential computation. To set context, in Section 2 we briefly review some of our work on actor semantics that led to this notion of interaction semantics, and work on formal theories for sequential computation. The theories of sequential computation formalize and reason about input/output relations and their properties. These theories have natural, term generated, recursively enumerable models that capture the intended semantics. For example, Feferman’s theories such as IOCΛ and IOCλ [10] all have natural recursion theoretic models where the space of total functions Nat → Nat is interpreted as the total recursive functions, and the corresponding partial function space as the partial recursive functions. Theories of interactive computation must formalize the interactions a system may have with its environment, where nothing is known about the behavior of entities in the environment. In Section 3 we describe a Feferman-style formal theory of interaction semantics. In Section 3.3 we show that recursively enumerable models are not adequate to capture the intended interaction semantics of actor systems, using a well-known result from recursion theory.

2

Actor semantics

The actor model [13,12,2,3] is a model of distributed computation based on the notion of independent computational agents, called actors, that interact solely via message passing. An actor can create other actors; send and receive messages; and modify its own local state. An actor can only affect the local state of other actors by sending them messages, and it can only send messages to its acquaintances—either actors whose names it was given upon creation, or names it received in a message or names of actors it created. Actor semantics admits only fair computations, which in the simplest case means reliable message delivery.


Fig. 1. A Ticker event diagram. The vertical line, t, is the local time-line for the ticker actor t. It illustrates: the arrival order (incoming arrows) at t—e0 < e1 < e2 < e3 ; and the activation order (outgoing arrows)—e1 < eo1 , e3 < eo3 . Like Email, messages need not arrive in the order sent: both eo1 < eo3 and eo3 < eo1 are possible.

2.1

Traditional actor semantics

The central concepts of traditional actor semantics [5] are the partial order of events (an event being a message receipt), acquaintance laws (who can come to know whom), and fairness of computations. The essential properties are captured in the notion of event diagram [11,6] characterizing the possible computations of an actor system. In the following, we will use the much overworked Ticker actor to illustrate concepts. A Ticker actor, t, has its own integral notion of time n, and responds to two types of messages: a tick request, and a time request. A Ticker processes requests as follows:

• upon receiving a tick message (t / tick) a Ticker increments n, and sends itself a new tick message;

• upon receiving a time request from an actor c (t / time @ c) a Ticker sends c a message reply(n), where n is its current notion of time.

Figure 1 shows the event diagram for a possible Ticker computation, where the Ticker t interacts with some external actor c. What might we say and/or prove about a Ticker?

• Every time request t / time @ c gets a reply c / reply(n) for some number n.

• If there is always another time request, then for any n there is a reply c / reply(n′) with n < n′ among the messages sent.

Not much else can be said without imposing some additional causality constraints on the events.
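A rough executable picture of the Ticker's two message handlers may help fix ideas. The Python rendering below is our own (tuples for messages, closures for behaviours) and is not the actor formalism used later in the paper:

```python
# Illustrative Python rendering of the Ticker's behaviour: a message handler
# that returns the actor's next behaviour together with the messages it sends.
# The representation (tuples for messages, closures for behaviours) is our own.

def ticker(n):
    """Behaviour of a Ticker whose local notion of time is n."""
    def receive(message):
        if message == ("tick",):
            # increment local time and send itself a new tick message
            return ticker(n + 1), [("self", ("tick",))]
        if message[0] == "time":
            _, customer = message
            # reply to the requesting actor c with the current time
            return ticker(n), [(customer, ("reply", n))]
        return ticker(n), []          # ignore anything else
    return receive

# One possible run, mirroring the event diagram of Fig. 1:
behaviour = ticker(0)
behaviour, out1 = behaviour(("tick",))            # e0: t / tick
behaviour, out2 = behaviour(("time", "c"))        # e1: t / time @ c
print(out2)                                       # [('c', ('reply', 1))]
```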

2.2

Actor Theories, Components and Interaction Semantics

89

ETAPS 2005

Mason & Talcott

FInCo 2005

Two observations about interaction paths are in order. Firstly, while event diagrams give a true concurrency model, they talk about internal events. Interaction paths correspond to what can be observed from the outside, from an arbitrary point of view. Thus all orderings of independent events will be possible, but in general it will not be possible to infer causality beyond what is implied by local observations. Secondly, although only fair computations are used to define the set of interaction paths for a given actor configuration, there are no explicit fairness constraints on what interaction sequences can form an interaction path. The only constraints are that actor acquaintance laws are obeyed. For example, a message cannot be input to an actor that is not a receptionist, and an external actor is only known if it was known initially or introduced in an incoming message.

2.3

Specifying interaction paths

Actor theories are one way to specify the possible interactions of a component. Some advantages of such specifications are that they are executable and composible. On the other hand an actor theory specifies how a system works, not what it should do. For example, one might want to express that certain requests (incoming messages) are alway answered with a reply meeting given constraints, or the results of processing particular messages in a specific order (thus giving a stronger guarantee for a requestor that always waits for a reply before sending the next request). We have explored two alternative methods for specifying the interaction semantics of a component; specification diagrams (SD) [22,23,29], and mathematical specifications. SD is a language with both textual and graphical representation. SDs can express interaction patterns of sequencing, choice and concurrency, (similar to regular expressions over interactions). SD can also express requirements on the environment — messages that must/must not be sent, and internal states that should or should not be reached. A restricted subset of SDs correspond to executable specifications in the spirit of actor theories. In general a SD may be partial (specifying behavior only for inputs of interest) and may not be realizable, for example requiring behavior to depend on the future, not just the past! Partial specifications are appealing as they allow one to concentrate on interactions with the intended environment. However, compositionality is not guaranteed, since a component may meet the specified constraints but exhibit behavior that leads other partially specified components to go wrong when subjected to situations not provided for. A detailed comparison of SDs and interaction semantics with other formalisms for concurrent, interactive computation can be found in [23]. Mathematical specifications specify a set of interaction paths by mathematical formulas with variables ranging over paths, messages, and other relevant entities. They are informal but rigorous and written in a stylized way. Notation and principles have been developed for using event diagram concepts to constrain interaction paths to those compatible with a set of event diagrams. That is, we can specify a system by saying it is indistinguishable from one whose event diagram semantics 5

90

ETAPS 2005

Mason & Talcott

FInCo 2005

obeys the event diagram constraints. This is analogous to specifying a system by requiring it to be ‘equivalent’ to some simply defined system. In the next section we describe a formal theory of interaction paths in which mathematical specifications can be represented as logical formulae. This is illustrated by TickerMS , a mathematical specification of a Ticker.

3

Variable Type Theories for Actors

When developing a formal theory the first question to ask is: What do we want to represent? Here we focus on formalization of semantic notions and their properties, a meta-logic, as a stepping stone to a logic of interaction. Thus we need to represent: actor system descriptions (behaviors, interfaces, configurations); operational semantics (transitions and fair computations); interaction semantics; and the satisfaction relation between a system description and a property of interaction paths iC |= Φ ⇔ [[iC ]] ⊆ [[Φ]]. Here iC is an actor configuration, with an explicit interface, and Φ is a sentence in our formal theory. The semantics [[iC ]], or meaning, of the configuration iC is a set of interaction paths, each interaction path distilling the observable interactions from the actions that take place in a particular computation path. Analogously the semantics [[Φ]], or meaning, of the formula Φ is the set of those interaction paths that satisfy it. We continue our approach of using logical theories developed by Feferman to formalize constructive mathematics. These are 2-sorted classical theories called variable type theories in which both functions and data are objects of discourse in a first order setting, as are collections of such things (called classifications). Classifications provide a balance between expressive power and complexity, allowing one to represent inductively and co-inductively defined sets, computable function spaces and other sets of interest, all in a first-order setting. In the spirit of Landin [16], the languages we have studied consist of lambda expressions augmented with operations for computational primitives of interest: control abstractions, memory allocation and access, actor creation and messaging, and so on. The sequential languages all have a transition system semantics with strong uniformity properties [26] and are called Landinesque languages. Earlier work on formal theories for sequential languages includes IOCC, VTLoE, and FLL. IOCC [24] is an adaptation of Feferman’s IOCλ [10] that formalizes continuations and control primitives such as Scheme’s call-cc. VTLoE [15] was developed to reason about functional programs with effects, such as ML, Scheme, or Lisp. VTLoE was generalized to a logic, FLL, (Feferman-Landin Logic) [18] for reasoning about Landinesque languages. Adapting constructions of [8,9], it was shown in [24] that IOCC has term-based, recursively enumerable models. 6

91

ETAPS 2005 3.1

Mason & Talcott

FInCo 2005

Formalizing Interaction Semantics

The formalization uses Feferman’s IOCλ as a starting point. This formal system provides quantification over individuals and classifications. Individuals include lambda terms, and numbers. Classifications (briefly classes) are collections of individuals defined by comprehension K = {x ψ(x)}. Constants, operations, axioms and rules for actor specific entities are then added, including: the Actor Communication Basis (ACB) that provides a basis for both the semantic and behavioral descriptions of actor configurations; interfaces; interaction paths, the elements of interaction semantics; configurations, specified by describing their constituent actor’s behaviors; and computation paths, corresponding to single executions of a configuration. Notationwise, Pω (X) is the set of finite subsets of X, while Mω (X) is the set of finite multisets from X, and ∅ is the empty set. Following logical tradition, the axioms are presented informally, with the understanding that it is not problematic to fill in details needed to be completely formal.

Actor Communication Basis. The actor communication basis, ACB, is a direct translation of the rewriting logic formalization of actor theory [28]. It defines the basic language needed to talk about actor interactions. The basic sorts are actor names, a ∈ A, message contents, M ∈ Msg, and message packets, a / M ∈ MP ≅ A × Msg. The components of a message packet are called the target (an actor name) and the message contents, which can be extracted by the two operations target : MP → A and message : MP → Msg. We let mp range over MP.
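As a quick illustration (ours, not the paper's formal language), the ACB sorts can be mirrored by simple OCaml types; choosing strings for names and these particular Msg constructors is an assumption made only so the later Ticker sketches can reuse them.

    type name = string                                  (* actor names, a ∈ A *)
    type msg = Tick | TimeAsk of name | Reply of int    (* message contents, M ∈ Msg (illustrative) *)
    type packet = { target : name; message : msg }      (* message packets, a / M ∈ MP ≅ A × Msg *)

    let target (mp : packet) : name = mp.target         (* target  : MP → A   *)
    let message (mp : packet) : msg = mp.message        (* message : MP → Msg *)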

Interfaces. Interfaces are used to encapsulate configurations and interaction sequences. They consist of two finite sets of actor names, (ρ, χ) ∈ Iface: the receptionists, ρ, and the externals, χ. In other words, Iface ≅ Pω(A) × Pω(A). Recall that the receptionists are those internal actors (internal to a configuration) known externally, while the externals are those external actors (external to a configuration) known internally.
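A matching OCaml sketch of interfaces, again an illustration of ours with finite sets modelled as plain lists for brevity:

    type name = string                                                   (* as in the previous sketch *)
    type iface = { receptionists : name list; externals : name list }   (* Iface ≅ Pω(A) × Pω(A) *)

    (* the interface (t, ∅) of the Ticker configuration used below *)
    let ticker_iface : iface = { receptionists = [ "t" ]; externals = [] }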

Interaction paths. Interaction paths are formalized by introducing a class constant ISeq of interaction sequences, which is not defined by comprehension. Thus interaction sequences are not necessarily λ-definable; in fact, we will see below that they cannot all be λ-definable. Mathematically, an interaction path, ip ∈ IP, is a sequence of interactions annotated by an initial interface. An interaction, io ∈ IO, is either the input or the output of a message packet: IO ≅ in(MP) ∪ out(MP). Sequences of interactions, ϑ ∈ ISeq, are just functions from natural numbers into interactions, enriched
with a silent tau transition: ISeq ≅ (Nat → IO ∪ {τ}). Interaction paths are constructed from interfaces and interaction sequences via ( ) : Iface × ISeq → IP. We write IP(ρ, χ) for the set of interaction paths of the form (ρ, χ)ϑ.

We now have enough of the formalization to illustrate mathematical specifications and discuss the structure of models of the theory. To spare the reader more formalities, we omit discussion of the formalization of the actor behaviors and operational semantics; it follows closely the presentation given in [28]. The only problematic part is that, as for interaction sequences, we must introduce a class constant for computation paths, which cannot be defined by comprehension. To provide a flavor of the omitted formalization, we give terms describing the Ticker behavior, and computation and interaction paths corresponding to the Ticker event diagram of Figure 1. A behavior is a lambda term of the form λ(a, M, ν)e, which takes a message packet, decomposed into target a and contents M, and a function ν to be used to generate fresh names for any actors created. The body e must evaluate to a new behavior lambda together with a configuration consisting of any created actors and messages to be sent.

Ticker(n) is a ticker actor behavior with local time n, where

    Ticker = λn. λ(a, M, ν)
        if (M = tick) then (Ticker(n + 1), a / tick)
        else if (M = time @ c) then (Ticker(n), c / reply(n))
        else (Ticker(n), ∅)

In the case of the Ticker, no new actors are created, so the argument ν is not used in the body.
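As a concrete, purely illustrative reading of this term, the following OCaml sketch encodes Ticker as a function from a delivered packet to a new behavior plus the packets to send. The string names, the Msg constructors, and the unused fresh-name argument are assumptions of the earlier sketches, not part of the paper's formal language.

    type name = string
    type msg = Tick | TimeAsk of name | Reply of int
    type packet = { target : name; message : msg }

    (* A behavior consumes a packet (and a fresh-name supply, unused here) and
       yields the replacement behavior together with the packets it sends. *)
    type behavior = Beh of (packet -> (unit -> name) -> behavior * packet list)

    let rec ticker (n : int) : behavior =
      Beh (fun { target = a; message = m } _fresh ->
        match m with
        | Tick      -> (ticker (n + 1), [ { target = a; message = Tick } ])   (* keep ticking *)
        | TimeAsk c -> (ticker n, [ { target = c; message = Reply n } ])      (* report local time *)
        | Reply _   -> (ticker n, []))                                        (* ignore other messages *)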



The following is a computation of an interfaced configuration consisting of a ticker actor and a tick message. (a : B) denotes an actor with name a and behavior B. Input/output transitions add/remove messages from the internal configuration. A delivery transition d(mp) applies the behavior of the target actor to the message packet, and a name generating function, to obtain the new behavior of that actor and any additions to the configuration.

    (t, ∅)(t : Ticker(0), t / tick)
      =[in(t / time @ c)]⇒   (t, c)(t : Ticker(0), t / tick, t / time @ c)
      =[d(t / tick)]⇒        (t, c)(t : Ticker(1), t / tick, t / time @ c)
      =[d(t / time @ c)]⇒    (t, c)(t : Ticker(1), t / tick, c / reply(1))
      =[d(t / tick)]⇒        (t, c)(t : Ticker(2), t / tick, c / reply(1))
      =[in(t / time @ c)]⇒   (t, c)(t : Ticker(2), t / tick, t / time @ c, c / reply(1))
      =[out(c / reply(1))]⇒  (t, c)(t : Ticker(2), t / tick, t / time @ c)
      ...
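A toy driver (our assumption for illustration, not the paper's transition system) can replay this computation for the single ticker actor t. It repeats the definitions from the sketch above, minus the fresh-name argument, so that it runs on its own; the configuration is t's current behavior plus the undelivered packets.

    type name = string
    type msg = Tick | TimeAsk of name | Reply of int
    type packet = { target : name; message : msg }
    type behavior = Beh of (packet -> behavior * packet list)

    let rec ticker n =
      Beh (fun { target = a; message = m } ->
        match m with
        | Tick      -> (ticker (n + 1), [ { target = a; message = Tick } ])
        | TimeAsk c -> (ticker n, [ { target = c; message = Reply n } ])
        | Reply _   -> (ticker n, []))

    type config = { beh : behavior; pending : packet list }

    (* remove the first occurrence of a packet from the pending multiset *)
    let rec remove_one mp = function
      | [] -> []
      | p :: ps -> if p = mp then ps else p :: remove_one mp ps

    let input mp c = { c with pending = c.pending @ [ mp ] }          (* in(mp)  *)
    let output mp c = { c with pending = remove_one mp c.pending }    (* out(mp) *)
    let deliver mp c =                                                 (* d(mp)   *)
      let Beh step = c.beh in
      let beh', out = step mp in
      { beh = beh'; pending = remove_one mp c.pending @ out }

    let _ =
      let tick   = { target = "t"; message = Tick } in
      let ask    = { target = "t"; message = TimeAsk "c" } in
      let reply1 = { target = "c"; message = Reply 1 } in
      let c0 = { beh = ticker 0; pending = [ tick ] } in
      let c  = c0 |> input ask |> deliver tick |> deliver ask
                  |> deliver tick |> input ask |> output reply1 in
      assert (c.pending = [ tick; ask ])   (* matches the final configuration shown above *)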



The corresponding interaction path is

    (t, ∅)((0, in(t / time @ c)), (4, out(c / reply(1))), . . .)

where an interaction sequence ϑ is represented as the set of pairs (n, io) such that ϑ(n) = io, omitting silent interactions (io = τ).
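In the illustrative OCaml encoding of the earlier sketches, this interaction path is just a finite list of time-stamped interactions attached to the interface (t, ∅); the record layout and field names are assumptions of ours.

    type name = string
    type msg = Tick | TimeAsk of name | Reply of int
    type packet = { target : name; message : msg }
    type io = In of packet | Out of packet                               (* IO ≅ in(MP) ∪ out(MP) *)
    type iface = { receptionists : name list; externals : name list }
    type ipath = { iface : iface; events : (int * io) list }             (* silent (τ) steps omitted *)

    let ticker_path : ipath =
      { iface  = { receptionists = [ "t" ]; externals = [] };
        events = [ (0, In  { target = "t"; message = TimeAsk "c" });
                   (4, Out { target = "c"; message = Reply 1 }) ] }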

3.2 Specification

To illustrate how properties might be formalized in this theory, we develop notation for expressing properties using event diagram notions, and show how this can be used to specify Ticker interactions.

Event Diagram Notation. We represent events as time-stamped interactions (n, io). The input events (InE(ip)) and output events (OutE(ip)) of an interaction path ip = (ρ, χ)ϑ are then defined by

    InE(ip)  = {(n, in(mp))  | n ∈ Nat ∧ mp ∈ MP ∧ ϑ(n) = in(mp)}
    OutE(ip) = {(n, out(mp)) | n ∈ Nat ∧ mp ∈ MP ∧ ϑ(n) = out(mp)}

An event diagram is given by two ordering relations: the arrival order, which determines, for each actor, the order in which messages are received; and the activation order, the causal relation between message sending (as a result of a receive) and receipt by the target. For a set D of input events (D for delivered) and an actor name a, an arrival order −→ao ∈ Arro(D, a) is a total order on the events in D with target a. The arrival order is a postulated order in which the messages are delivered to a during the computation, and may be different from the order in which they are input to the system. An activation order for D,