APPLICATIVE ASSERTIONS

Bernhard Möller
Institut für Informatik, Technische Universität München
Postfach 20 24 20, D-8000 München 2, Fed. Rep. Germany

1  Introduction

We present a way of introducing and manipulating assertions in an applicative language, mainly for use in transformational program development. It turns out that this can be achieved wholly within the language itself. The advantage of this is twofold: First, one has the full power of the language for formulating the assertions, which decreases problems with expressiveness. Second, the algebraic laws of the language carry over to the assertions and allow a uniform way of manipulating programs together with assertions.

We represent assertions within applicative programs by Boolean expressions. Of course, the sublanguage of Boolean expressions of the applicative language considered has to be quite rich, e.g., it has to include quantifiers. An example of such a language is CIP-L (cf. [Bauer et al. 89] for a survey; details are given in [Bauer et al. 85]). We are then free to use recursively defined predicates in assertions; this provides great flexibility.

The basic idea is that an expression E with assertion A is represented by another expression F: if E satisfies A, then F should be equivalent to E alone, whereas, if E fails to satisfy A, it should be equivalent to error. This can be realized by taking for F the conditional expression

    if A then E else error fi

which we abbreviate to A ▷ E. In this, both A and E may be non-determinate. However, usually we will use only determinate assertions A.

The main emphasis in this paper is placed on a formal calculus of such assertions and of their propagation. Such calculi have been proposed for logical frameworks (cf. [Monk 88]) and at the metalanguage level for applicative languages (cf. [Pepper 85, 87]). The idea here is to stay at the object language level as much as possible. Although we present a number of examples, we have to refer to the literature for really interesting uses of assertions during transformational program development (see e.g. [Partsch 86]); however, there the developments are usually not fully carried out within such a calculus.
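The construct A ▷ E can be modelled directly in an eager language. The following Python sketch is only an illustration, not part of CIP-L: the names `assert_then` and `Error` are our own, the expression E is passed as a thunk, and `error` is modelled by an exception.

```python
class Error(Exception):
    """Models the expression `error` (the undefined value)."""

def assert_then(b, e):
    """A |> E  ==  if A then E else error fi; `e` is a thunk for E."""
    if b:
        return e()
    raise Error("assertion violated")

# A guarded division: the assertion protects the evaluation of E.
print(assert_then(2 != 0, lambda: 10 // 2))   # → 5
```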

2  The Algebra of Assertions

2.1  Preliminaries

We base the following developments on the semantics of the language CIP-L. In this section we briefly sketch the central notions of this semantics; the full formal details can be found in [Bauer et al. 85].

For an identifier x and expression E we write OCCURS[[x in E]] to express that x occurs free in E. By E[[F for x]] we denote the expression that results from E by replacing all free occurrences (if any) of x by F (with appropriate renaming of bound variables, if necessary). We use the notation E1 ≡ E2 to express that E1 and E2 have the same sets of possible values (non-determinacy!). Such equivalences are also denoted in the form of transformation rules, viz. as

    E1
     ↓  C
    E2

where C is a (possibly empty) list of applicability conditions, i.e., of conditions sufficient for the validity of the equivalence.

The special value ⊥ models nontermination as well as partialities of basic operations; the expression error has ⊥ as its only possible value. The language uses the concept of so-called erratic nondeterminism; hence e.g. {⊥, 0}, {0}, and {⊥} are considered as different sets of possible values, i.e., ⊥ is neither absorbed by defined values nor the other way around.

Two important predicates about expressions E are

    DEFINED[[E]]      ⇔def  ⊥ is not a possible value of E ;
    DETERMINATE[[E]]  ⇔def  E has only one possible value .

Moreover, for expression F and identifier x we set

    STRICT[[F wrt x]]  ⇔def  F[[error for x]] ≡ error .

CIP-L has a call-by-value, call-time-choice semantics (cf. [Hennessy, Ashcroft 76]). Hence the usual β-conversion is not unconditionally valid for CIP-L. We only have

    ((m x) n : F)(E)
     ↓  (STRICT[[F wrt x]] ∨ DEFINED[[E]]) ∧ (SINGULAR[[F wrt x]] ∨ DETERMINATE[[E]])
    F[[E for x]]

Here, SINGULAR[[F wrt x]] means that F contains at most one free occurrence of x.

The declaration of auxiliary identifiers is defined by the following rule: Let F be an expression of type n possibly containing free occurrences of the identifier x of type m, and let E be an expression of type m such that ¬OCCURS[[x in E]]. Then

    ⌈ m x ≡ E ; F ⌋  ≡def  ((m x) n : F)(E) .

As a consequence, we have

(2.1)  G(E) ≡ ⌈ m x ≡ E ; G(x) ⌋  provided ¬OCCURS[[x in E, G]].

An important construct is the choice some m x : B with a Boolean expression B possibly involving the identifier x. Its possible values are all those values u for which B[[u for x]] has true as a possible value (for convenience, values, too, are considered as expressions in the semantic description of CIP-L). If no such value exists or if B[[u for x]] ≡ error for some u, then ⊥ is the only possible value.

Assume now that = is a (strict) equality test for objects of type m, i.e., that = has type funct(m, m) bool. Then a characteristic property of the choice is

(2.2)  (some m x : x = E) ≡ E  provided DETERMINATE[[E]] ∧ DEFINED[[E]] .

Like any applicative language, CIP-L has a conditional expression. Two important properties of it are

(2.3)  if B then E else F fi ≡ if B then E[[true forsome B]] else F[[false forsome B]] fi  provided DETERMINATE[[B]] .

(2.4)  E ≡ if B then E else E fi  provided DEFINED[[B]] .

Here, for expressions X, Y, Z we denote by X[[Y forsome Z]] any expression that results from X by replacing some occurrences of Z by Y (with appropriate renaming of bound variables, if necessary).

In the sequel, in addition to the usual strict Boolean connectives ∧, ∨, and ⇒, we also use the sequential (or conditional) Boolean connectives ∧·, ∨·, and ⇒·, defined by

(2.5)  B ∧· C  ≡def  if B then C else false fi
       B ∨· C  ≡def  if B then true else C fi
       B ⇒· C  ≡def  if B then C else true fi .
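The sequential connectives behave like short-circuit operators that additionally propagate an error in their first argument. A small Python model of the definitions in (2.5), purely illustrative: `ERROR` and the function names are our own, and the second operand is passed as a thunk to capture its non-strictness.

```python
# error is modelled by a distinguished sentinel value.
ERROR = object()

def seq_and(b, c):   # B ∧· C  ==  if B then C else false fi
    if b is ERROR:
        return ERROR
    return c() if b else False

def seq_or(b, c):    # B ∨· C  ==  if B then true else C fi
    if b is ERROR:
        return ERROR
    return True if b else c()

def seq_impl(b, c):  # B ⇒· C  ==  if B then C else true fi
    if b is ERROR:
        return ERROR
    return c() if b else True

# Since C is a thunk, false ∧· error yields false, not an error:
print(seq_and(False, lambda: ERROR) is False)   # → True
```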

Their characteristic properties are

(2.6)  true ∧· C ≡ C          true ∨· C ≡ true        true ⇒· C ≡ C
       false ∧· C ≡ false     false ∨· C ≡ C          false ⇒· C ≡ true
       error ∧· C ≡ error     error ∨· C ≡ error      error ⇒· C ≡ error .

2.2  Properties of Assertions

The following properties of assertions should be self-evident (indeed, in CIP-L they can be verified w.r.t. the mathematical semantics of the language):

(2.7)  true ▷ E ≡ E

(2.8)  false ▷ E ≡ error

(2.9)  error ▷ E ≡ error .

The next properties allow the use of assertions in subexpressions of an expression. Let B, F, H be determinate expressions such that B is of type bool and F, H have equal type m. Assume moreover that = is an equality test for objects of type m. Then we have

(2.10)  B ▷ E ≡ B ▷ E[[true forsome B]]

(2.11)  (F = H) ▷ E ≡ (F = H) ▷ E[[H forsome F]]

provided no new bindings for the free variables of B resp. H and F surround the occurrences of B resp. F that are replaced.

Further useful properties of assertions are the following:

(2.12)  (B ∧ C) ▷ E ≡ B ▷ ((B ∧ C) ▷ E)  provided DETERMINATE[[B]] .

(2.13)  (B ▷ C) ▷ E ≡ B ▷ (C ▷ E) ≡ (B ∧ C) ▷ E ≡ (B ∧· C) ▷ E

(2.14)  B ▷ E ≡ B ▷ (B ▷ E)  provided DETERMINATE[[B]] .

(2.15)  B ▷ E ≡ B ▷ (C ▷ E)  provided B ⇒· C ≡ true .

Moreover, if Q is a predicate and x is an identifier,

(2.16)  Q(x) ≡ true ⇔ (Q(x) ▷ x) ≡ x .

Hence

(2.17)  Q(E) ≡ true ⇒ ⌈ m x ≡ E ; F(x) ⌋ ≡ ⌈ m x ≡ E ; Q(x) ▷ F(x) ⌋  provided ¬OCCURS[[x in F]] ∧ STRICT[[F]] .

More generally,

(2.18)  R ⇒· Q(E) ≡ true ⇒ R ▷ ⌈ m x ≡ E ; F(x) ⌋ ≡ R ▷ ⌈ m x ≡ E ; Q(x) ▷ F(x) ⌋  provided ¬OCCURS[[x in F]] ∧ STRICT[[F]] .

The next few properties serve for propagating assertions into subexpressions of an expression.

(2.19)  B ▷ f(E) ≡ f(B ▷ E)  provided STRICT[[f]] ∨ DEFINED[[B]] .

(2.20)  B ▷ (E1, ..., En) ≡ (B ▷ E1, ..., B ▷ En)  provided DETERMINATE[[B]] .

(2.21)  B ▷ f(E1, ..., En) ≡ f(B ▷ E1, ..., B ▷ En)  provided (STRICT[[f]] ∨ DEFINED[[B]]) ∧ DETERMINATE[[B]] .

(2.22)  if B then E else F fi ≡ if B then B ▷ E else F fi ≡ if B then E else ¬B ▷ F fi  provided DETERMINATE[[B]] .

(2.23)  P ▷ if B then E else F fi
        ≡ P ▷ if B then (P ∧ B) ▷ E else F fi
        ≡ P ▷ if B then E else (P ∧ ¬B) ▷ F fi
        ≡ if B then P ▷ E else P ▷ F fi
        ≡ if P ∧ B then P ▷ E else P ▷ F fi
        ≡ if P ∧ B then (P ∧ B) ▷ E else (P ∧ ¬B) ▷ F fi
        provided DETERMINATE[[P, B]] .

(2.24)  For ? ∈ {∀, ∃, some} and Boolean expression Q,
        P ▷ (? m x : Q) ≡ ? m x : P ▷ Q  provided ¬OCCURS[[x in P]] ∧ DETERMINATE[[P]] .

Two auxiliary properties for Boolean expressions are the following:

(2.25)  (F = H) ∧ B ≡ (F = H) ∧ B[[H forsome F]]  provided no new bindings for the free variables of H and F surround the occurrences of F that are replaced.

(2.26)  B ≡ B ∧ C  provided B ⇒ C ≡ true .

3  Parameter restrictions

As a first example of assertions we describe parameter restrictions for functions (cf. [Bauer, Wössner 82, Bauer et al. 85]): Let R be a Boolean expression possibly involving the identifier x of type m. Then the declaration

    funct f ≡ (m x : R) n : E

of function f with parameter x restricted by R and body E of result type n is by definition equivalent to

    funct f ≡ (m x) n : R ▷ E .

This means that f is undefined for all arguments x that violate the restriction R, or, in other words, that R is a precondition for definedness of f. If f is recursive, R also has to hold for the parameters of the recursive calls to ensure definedness; hence in this case R corresponds to what is known as an invariant at the procedural language level.
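The equivalence between a restricted parameter and a guarded body can be mimicked by a wrapper. A Python sketch, with the decorator name `restricted` and the use of an exception for error being our own illustrative choices:

```python
# A declaration  funct f ≡ (m x : R) n : E  behaves like
# funct f ≡ (m x) n : R ▷ E : the restriction guards the body.
def restricted(r):
    def wrap(f):
        def g(*args):
            if not r(*args):        # restriction R violated: "error"
                raise ValueError("parameter restriction violated")
            return f(*args)
        return g
    return wrap

@restricted(lambda n: n > 0)
def pred(n):                        # predecessor, restricted to n > 0
    return n - 1

print(pred(5))   # → 4
```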

4  A Simple Example

In this section we want to demonstrate the use of the algebra of assertions in transformational program development. The problem to be treated consists in finding square roots of natural numbers. More precisely, the specification reads

    funct sqrt ≡ (nat m) nat : some nat k : k² ≤ m ∧ (k + 1)² > m .

The first step in the development uses the technique of embedding or generalization: The original problem is embedded into a more general one, in this case via an additional parameter that provides a lower bound for the search space. The specification for the embedding function esqrt reads

    funct esqrt ≡ (nat m, nat u : u² ≤ m) nat : some nat k : u ≤ k ∧ k² ≤ m ∧ (k + 1)² > m .

Thus the specification of esqrt consists of the one of sqrt together with the requirement that the result lie above the parameter u. Hence the old problem indeed is a special case of the new one; 0 is a trivial lower bound and thus

(4.1)  sqrt(m) ≡ esqrt(m, 0)  for all m.

The next goal is now to use the additional degree of freedom gained by the parameter u to develop a recursive version of esqrt no longer containing any occurrences of the specification construct some. To achieve this, we use algebraic manipulations to derive a recursion equation for esqrt. In this calculation, a prominent role is played by the algebraic laws for assertions given in the previous section. We calculate:

  esqrt(m, u)
≡ (by definition of esqrt)
  (u² ≤ m) ▷ some nat k : u ≤ k ∧ k² ≤ m ∧ (k + 1)² > m
≡ (arithmetic)
  (u² ≤ m) ▷ some nat k : (u = k ∨ u < k) ∧ k² ≤ m ∧ (k + 1)² > m
≡ (distributivity of ∧ over ∨)
  (u² ≤ m) ▷ some nat k : (u = k ∧ k² ≤ m ∧ (k + 1)² > m) ∨ (u < k ∧ k² ≤ m ∧ (k + 1)² > m)
≡ (arithmetic)
  (u² ≤ m) ▷ some nat k : (u = k ∧ k² ≤ m ∧ (k + 1)² > m) ∨ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m)
≡ (by (2.24))
  some nat k : (u² ≤ m) ▷ ((u = k ∧ k² ≤ m ∧ (k + 1)² > m) ∨ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m))
≡ (by (2.19) and (2.20))
  some nat k : ((u² ≤ m) ▷ (u = k ∧ k² ≤ m ∧ (k + 1)² > m) ∨ (u² ≤ m) ▷ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m))
≡ (by (2.25))
  some nat k : ((u² ≤ m) ▷ (u = k ∧ u² ≤ m ∧ (k + 1)² > m) ∨ (u² ≤ m) ▷ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m))
≡ (by (2.10))
  some nat k : ((u² ≤ m) ▷ (u = k ∧ true ∧ (k + 1)² > m) ∨ (u² ≤ m) ▷ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m))
≡ (neutrality of true w.r.t. ∧)
  some nat k : ((u² ≤ m) ▷ (u = k ∧ (k + 1)² > m) ∨ (u² ≤ m) ▷ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m))
≡ (by (2.19), (2.20), (2.24))
  (u² ≤ m) ▷ some nat k : (u = k ∧ (k + 1)² > m) ∨ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m)
≡ (by (2.25) and (2.4))
  (u² ≤ m) ▷ if (u + 1)² > m
             then some nat k : (u = k ∧ (u + 1)² > m) ∨ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m)
             else some nat k : (u = k ∧ (u + 1)² > m) ∨ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m) fi
≡ (by (2.26) and arithmetic)
  (u² ≤ m) ▷ if (u + 1)² > m
             then some nat k : (u = k ∧ (u + 1)² > m) ∨ (u + 1 ≤ k ∧ (u + 1)² ≤ k² ∧ k² ≤ m ∧ (k + 1)² > m)
             else some nat k : (u = k ∧ (u + 1)² > m) ∨ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m) fi
≡ (transitivity of ≤ and (2.26))
  (u² ≤ m) ▷ if (u + 1)² > m
             then some nat k : (u = k ∧ (u + 1)² > m) ∨ (u + 1 ≤ k ∧ (u + 1)² ≤ k² ∧ k² ≤ m ∧ (u + 1)² ≤ m ∧ (k + 1)² > m)
             else some nat k : (u = k ∧ (u + 1)² > m) ∨ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m) fi
≡ (arithmetic and (2.3))
  (u² ≤ m) ▷ if (u + 1)² > m
             then some nat k : (u = k ∧ true) ∨ (u + 1 ≤ k ∧ (u + 1)² ≤ k² ∧ k² ≤ m ∧ false ∧ (k + 1)² > m)
             else some nat k : (u = k ∧ false) ∨ (u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m) fi
≡ (Boolean algebra)
  (u² ≤ m) ▷ if (u + 1)² > m
             then some nat k : u = k
             else some nat k : u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m fi
≡ (by (2.2))
  (u² ≤ m) ▷ if (u + 1)² > m
             then u
             else some nat k : u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m fi
≡ (by (2.22) and arithmetic)
  (u² ≤ m) ▷ if (u + 1)² > m
             then u
             else ((u + 1)² ≤ m) ▷ some nat k : u + 1 ≤ k ∧ k² ≤ m ∧ (k + 1)² > m fi
≡ (by definition of esqrt)
  (u² ≤ m) ▷ if (u + 1)² > m then u else esqrt(m, u + 1) fi .
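The recursion equation just derived can be transcribed one-to-one. A Python sketch, with the parameter restriction u² ≤ m checked as an assertion:

```python
def esqrt(m, u):
    # parameter restriction: u² ≤ m
    assert u * u <= m, "parameter restriction u**2 <= m violated"
    return u if (u + 1) ** 2 > m else esqrt(m, u + 1)

def sqrt(m):                   # (4.1): sqrt(m) ≡ esqrt(m, 0)
    return esqrt(m, 0)

print([sqrt(m) for m in range(10)])   # → [0, 1, 1, 1, 2, 2, 2, 2, 2, 3]
```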

Since termination of the obtained recursion is immediate (take f(m, u) ≡def m − u² as the "variant", i.e., as the termination function, cf. [Dijkstra 76], [Bauer, Wössner 82]), we may pass to the recursive definition

    funct esqrt ≡ (nat m, nat u : u² ≤ m) nat : if (u + 1)² > m then u else esqrt(m, u + 1) fi .

We have obtained a first operative solution of our problem. However, the repeated squaring in each of the incarnations makes the program rather inefficient. An improved version is obtained by using an additional parameter that carries the square (u + 1)² and is incremented properly from incarnation to incarnation. This technique, also called finite differencing [Paige, Koenig 82] or formal differentiation [Sharir 82], is a generalization of the well-known "strength reduction" in compiler construction. To apply it, we use another assertion that expresses the relation between the "old" and the "new" parameters. We define

    funct esqrts ≡ (nat m, nat u, nat v : u² ≤ m ∧ v = (u + 1)²) nat : esqrt(m, u) .

Based on the relation

(4.2)  esqrt(m, u) ≡ esqrts(m, u, (u + 1)²)

we calculate

  esqrts(m, u, v)
≡ (by definition of esqrts)
  (u² ≤ m ∧ v = (u + 1)²) ▷ esqrt(m, u)
≡ (by definition of esqrt)
  (u² ≤ m ∧ v = (u + 1)²) ▷ if (u + 1)² > m then u else esqrt(m, u + 1) fi
≡ (by (2.13), (2.11), (4.2))
  (u² ≤ m ∧ v = (u + 1)²) ▷ if v > m then u else esqrts(m, u + 1, ((u + 1) + 1)²) fi
≡ (arithmetic)
  (u² ≤ m ∧ v = (u + 1)²) ▷ if v > m then u else esqrts(m, u + 1, (u + 1)² + 2 ∗ (u + 1) + 1) fi
≡ (by (2.13), (2.11) and arithmetic)
  (u² ≤ m ∧ v = (u + 1)²) ▷ if v > m then u else esqrts(m, u + 1, v + 2 ∗ u + 3) fi .

The termination argument is still valid; hence we obtain

    funct esqrts ≡ (nat m, nat u, nat v : u² ≤ m ∧ v = (u + 1)²) nat :
      if v > m then u else esqrts(m, u + 1, v + 2 ∗ u + 3) fi .

Another transformation of this kind leaves us with

    funct esqrto ≡ (nat m, nat u, nat v, nat w : u² ≤ m ∧ v = (u + 1)² ∧ w = 2 ∗ u + 3) nat :
      if v > m then u else esqrto(m, u + 1, v + w, w + 2) fi

which satisfies the relation

(4.3)  esqrts(m, u, v) ≡ esqrto(m, u, v, 2 ∗ u + 3)

and hence solves the square root problem according to

(4.4)  sqrt(m) ≡ esqrt(m, 0) ≡ esqrts(m, 0, 1) ≡ esqrto(m, 0, 1, 3) .
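The chain (4.4) can be checked mechanically. A Python sketch of the three versions; the comments state the assertions (invariants) carried by the extra parameters:

```python
def esqrt(m, u):               # u² ≤ m assumed
    return u if (u + 1) ** 2 > m else esqrt(m, u + 1)

def esqrts(m, u, v):           # invariant: v == (u + 1)**2
    return u if v > m else esqrts(m, u + 1, v + 2 * u + 3)

def esqrto(m, u, v, w):        # invariants: v == (u + 1)**2, w == 2*u + 3
    return u if v > m else esqrto(m, u + 1, v + w, w + 2)

# (4.4): all versions agree on the initial calls.
for m in range(200):
    assert esqrt(m, 0) == esqrts(m, 0, 1) == esqrto(m, 0, 1, 3)
print("versions agree")
```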

To transform esqrto into iterative form we use the following general transformation rule (ITER) for passing from tail recursion to a loop:

    g(A) where funct g ≡ (m x : P(x)) n : if B(x) then E(x) else g(L(x)) fi
     ↓  P(A) ≡ true
        DETERMINATE[[P]]
        ∀ m x : [(P(x) ∧· ¬B(x)) ⇒· P(L(x)) ≡ true]
        NEW[[X]]
    h(A) where funct h ≡ (m X : P(X)) n :
      ⌈ var m x := X ; while ¬B(x) do x := L(x) od ; E(x) ⌋

Here, NEW[[X]] requires X to be a fresh identifier. Applying this to esqrto and unfolding the resulting function in sqrt we finally obtain

    funct sqrt ≡ (nat m) nat :
      ⌈ (var nat u, var nat v, var nat w) := (0, 1, 3) ;
        while v ≤ m do (u, v, w) := (u + 1, v + w, w + 2) od ;
        u ⌋
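The instance of (ITER) for esqrto is directly executable; a Python sketch of the final iterative program:

```python
def sqrt(m):
    # the tail recursion of esqrto as a loop over the tuple (u, v, w);
    # loop invariants: u*u <= m, v == (u + 1)**2, w == 2*u + 3
    u, v, w = 0, 1, 3
    while v <= m:
        u, v, w = u + 1, v + w, w + 2
    return u

print(sqrt(10))   # → 3
```

This is the classical "square root by addition" loop: the finite-differencing assertions have eliminated all multiplications from the body.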

5  Strengthening and Weakening Assertions

Frequently, one has to strengthen an assertion to achieve a certain simplification. However, after the simplification, one wants to get rid of the assertion again. A tool for this in the case of linearly recursive functions is the following rule:

Theorem: The following transformation rule is valid:

    g(A) where funct g ≡ (m x : P(x)) n : if B(x) then E(x) else K(g(L(x))) fi
     ↓  Q(A) ≡ true
        DETERMINATE[[Q]]
        DETERMINATE[[P]]
        ∀ m x : [((P(x) ∧ Q(x)) ∧· ¬B(x)) ⇒· Q(L(x)) ≡ true]
        STRICT[[K]]
        ¬OCCURS[[g, h in L(x), K]]
    h(A) where funct h ≡ (m x : P(x) ∧ Q(x)) n : if B(x) then E(x) else K(h(L(x))) fi

Thus, g and h have the same recursion structure, but h has the stronger "invariant" P(x) ∧ Q(x).

Proof: We use computational induction (see e.g. [Manna 74]) to show R[g, h], where the predicate R is given by

    R[k, l] ⇔def ∀ m x : [Q(x) ▷ k(x) ≡ Q(x) ▷ l(x)] .

The induction base R[Ω, Ω] is trivial. For the induction step assume R[k, l]. Then we have to show R[τg[k], τh[l]], where τg, τh are the functionals belonging to the recursive definitions of g and h. We calculate

  Q(x) ▷ τg[k](x)
≡ (by definition)
  Q(x) ▷ (P(x) ▷ if B(x) then E(x) else K(k(L(x))) fi)
≡ (by (2.13))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else K(k(L(x))) fi
≡ (by (2.23))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else (P(x) ∧ Q(x) ∧ ¬B(x)) ▷ K(k(L(x))) fi
≡ (by (2.13))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else ((P(x) ∧ Q(x)) ∧· ¬B(x)) ▷ K(k(L(x))) fi
≡ (by (2.19))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else K(((P(x) ∧ Q(x)) ∧· ¬B(x)) ▷ k(L(x))) fi
≡ (by (2.1))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else K(((P(x) ∧ Q(x)) ∧· ¬B(x)) ▷ ⌈ m y ≡ L(x) ; k(y) ⌋) fi
≡ (by the fourth applicability condition and (2.18))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else K(((P(x) ∧ Q(x)) ∧· ¬B(x)) ▷ ⌈ m y ≡ L(x) ; Q(y) ▷ k(y) ⌋) fi
≡ (by the induction hypothesis)
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else K(((P(x) ∧ Q(x)) ∧· ¬B(x)) ▷ ⌈ m y ≡ L(x) ; Q(y) ▷ l(y) ⌋) fi
≡ (by the fourth applicability condition and (2.18))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else K(((P(x) ∧ Q(x)) ∧· ¬B(x)) ▷ ⌈ m y ≡ L(x) ; l(y) ⌋) fi
≡ (by (2.1))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else K(((P(x) ∧ Q(x)) ∧· ¬B(x)) ▷ l(L(x))) fi
≡ (by (2.19))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else ((P(x) ∧ Q(x)) ∧· ¬B(x)) ▷ K(l(L(x))) fi
≡ (by (2.13) and (2.23))
  (P(x) ∧ Q(x)) ▷ if B(x) then E(x) else K(l(L(x))) fi
≡ (by (2.12))
  Q(x) ▷ ((P(x) ∧ Q(x)) ▷ if B(x) then E(x) else K(l(L(x))) fi)
≡ (by definition)
  Q(x) ▷ τh[l](x) .                                                       □

In the case where P(x) ≡ true this rule gives us the possibility of discarding the invariant Q(x) from h by passing to g. Note that similar rules can be developed for more general kinds of recursion.

6  Applicative Updating

As an application of the previous theorem we want to develop an algorithm for rearranging the elements of an array. In doing so we will now take larger steps, since the detailed working of the assertion calculus has already been demonstrated.

We consider arrays to be partial mappings from a fixed finite set index to a set elem of array elements. A fairly general applicative specification of our problem reads as follows:

    funct move ≡ (array a) array : some array b : ∀ index i : b[i] = f(a, i) .

Here, f may be any function of type funct(array, index) elem, where elem is the type of the array elements. Define

    set index all ≡ {index i : true} .

Then a tail-recursive solution of this problem (the derivation of which we omit) is

    funct move ≡ (array a) array : rmove(a, c, all)
    where
    funct rmove ≡ (array a, d, set index s) array :
      if s = ∅ then d
      else index i ≡ some index j : j ∈ s ; rmove(a, d[i ← f(a, i)], s\{i}) fi

Here, d[i ← x] is the array that results from d by changing the i-th element to x. Note that the parameter c in the call rmove(a, c, all) is a completely arbitrary value of type array. This will be the key to obtaining an applicative preparation of overwriting with selective updating.
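A Python model of this tail-recursive solution, purely illustrative: arrays are modelled as dicts from indices to elements, the nondeterministic choice `some index j : j ∈ s` by `set.pop()`, and the arbitrary parameter c by an array with undefined (None) entries.

```python
def rmove(a, d, s, f):
    if not s:
        return d
    i = s.pop()                         # some index j : j ∈ s
    return rmove(a, {**d, i: f(a, i)},  # d[i ← f(a, i)]
                 s, f)                  # s \ {i} (pop already removed i)

def move(a, f):
    c = dict.fromkeys(a)                # completely arbitrary initial array
    return rmove(a, c, set(a), f)

# Example: f(a, i) = a[i] + 1 increments every element.
print(move({0: 10, 1: 20}, lambda a, i: a[i] + 1))
```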

Now we specialize the problem by assuming that the index set is linearly ordered by an order ≤. Then the choice of i above may be specialized to taking the minimum of s w.r.t. ≤:

    funct rmove ≡ (array a, d, set index s) array :
      if s = ∅ then d
      else index i ≡ min(s) ; rmove(a, d[i ← f(a, i)], s\{i}) fi

Now the indices are processed in ascending order. If the function f is oriented w.r.t. ≤, i.e., if for all i the value f(a, i) depends at most on values a[j] with j ≥ i, we can, at the procedural level, implement an assignment a := move(a) by selectively updating the elements a[i] in ascending order. We want to prepare this step at the applicative level, using again assertions.

To state formally when a function is oriented, we introduce some notation: Let, for index i,

    i↑ ≡def {index j : i ≤ j} ,

i.e., the interval above i. Moreover, for an array a and a set s ⊆ index we define the selection function

    a|s : s → elem ,  i ↦ a[i] .

Then we can define

    ORIENTED[[f wrt ≤]] ⇔def ∀ array b, c, index i : [b|i↑ = c|i↑ ≡ true ⇒· f(b, i) ≡ f(c, i)] ;

here, = is the equality test on finite mappings. Note that the function that sorts an array cannot be oriented w.r.t. any ordering at all.

We now apply the rule from the previous section to move. Here, P(x) ≡ true, and for Q(x) we choose a|s = d|s. To satisfy the preconditions of our rule we first specialize the arbitrary parameter c to a, because then Q holds for the initial call. The second condition, viz.

    ((a|s = d|s ∧· s ≠ ∅) ⇒· a|t = d[i ← f(a, i)]|t) ≡ true

for i ≡ min(s), t ≡ s\{i}, holds, since b|i↑ = c|i↑ implies b|j↑ = c|j↑ for all j ≥ i by transitivity of ≤. Hence we obtain

    funct move ≡ (array a) array : srmove(a, a, all)
    where
    funct srmove ≡ (array a, d, set index s : a|s = d|s) array :
      if s = ∅ then d
      else index i ≡ min(s) ; srmove(a, d[i ← f(a, i)], s\{i}) fi

Using ORIENTED[[f wrt ≤]] and the assertion, this may now be transformed into

    funct move ≡ (array a) array : srmove(a, a, all)
    where
    funct srmove ≡ (array a, d, set index s : s ≠ ∅ ⇒· a|s = d|s) array :
      if s = ∅ then d
      else index i ≡ min(s) ; srmove(a, d[i ← f(d, i)], s\{i}) fi .

Now the body of srmove has become independent of a, except for the assertion. By using the rule of weakening, however, we can eliminate the assertion and thus get rid of the parameter a altogether. This is the decisive step in obtaining an applicative variant of overwriting. The result reads

    funct move ≡ (array a) array : esrmove(a, all)
    where
    funct esrmove ≡ (array d, set index s) array :
      if s = ∅ then d
      else index i ≡ min(s) ; esrmove(d[i ← f(d, i)], s\{i}) fi

We now again apply the rule (ITER) from Section 4 to transform esrmove into

    funct esrmove ≡ (array D, set index S) array :
      ⌈ (var array d, var set index s) := (D, S) ;
        while s ≠ ∅ do index i ≡ min(s) ; (d, s) := (d[i ← f(d, i)], s\{i}) od ;
        d ⌋

Finally, we can implement the assignment a := move(a) first by a := esrmove(a, all) and then reuse the variable a to obtain

    ⌈ var set index s := all ;
      while s ≠ ∅ do index i ≡ min(s) ; (a, s) := (a[i ← f(a, i)], s\{i}) od ⌋

We omit the formal treatment of this last step, since it is independent of the use of assertions.
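The final in-place loop can be sketched in Python; the example function below, a left shift, is our own and is oriented w.r.t. ≤, since f(a, i) only reads a[j] for j ≥ i.

```python
def move_inplace(a, f):
    # selectively update a[i] in ascending index order; sound only if
    # f is oriented w.r.t. ≤ (f(a, i) depends at most on a[j] with j ≥ i)
    s = set(range(len(a)))
    while s:
        i = min(s)
        a[i] = f(a, i)
        s.discard(i)
    return a

# Left shift: f(a, i) = a[i + 1] if it exists, else a[i].
a = [1, 2, 3, 4]
print(move_inplace(a, lambda a, i: a[i + 1] if i + 1 < len(a) else a[i]))
# → [2, 3, 4, 4]
```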

7  Calculating Parameters From Assertions

The development in the previous section depended on the property ORIENTED[[f wrt ≤]]. If this property does not hold for a given f, one can try to find another order ≼ on index such that ORIENTED[[f wrt ≼]] is satisfied. Let us demonstrate this with an example: Assume that index ≡ nat i : i < N for some constant N, and that

(7.1)  f(a, i) = c0 if i = 0, and f(a, i) = a[i − 1] otherwise.

The requirement on ≼ then unfolds into

    ∀ array b, c, index i : [(∀ index j : i ≼ j ⇒ b[j] = c[j]) ≡ true ⇒· (i = 0 ∨· b[i − 1] = c[i − 1]) ≡ true] .

Because of the premise, the conclusion is satisfied if

    (i ≠ 0 ⇒· i ≼ i − 1) ≡ true .

But then ≼ is fixed, as is easily seen by induction, viz.

    k ≼ l ≡ l ≤ k .

Hence the array indices should be processed in decreasing order.
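Concretely, (7.1) describes a right shift, and the analysis says the in-place update must run over the indices in decreasing order; ascending order would overwrite a[i − 1] before it is read. A Python sketch of the resulting loop:

```python
def shift_right_inplace(a, c0):
    # f(a, i) = c0 if i == 0 else a[i - 1]; f is oriented w.r.t. the
    # reversed order k ≼ l ≡ l ≤ k, so process indices in decreasing order.
    for i in range(len(a) - 1, -1, -1):
        a[i] = c0 if i == 0 else a[i - 1]
    return a

print(shift_right_inplace([1, 2, 3], 0))   # → [0, 1, 2]
```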

8  Discussion

We hope to have demonstrated that techniques known from the verification or verifying construction (see e.g. [Dijkstra 76]) of imperative programs can fruitfully be carried over to an applicative language style. In particular, this opens the possibility of using assertions in conjunction with transformation techniques applied to schematic programs; hence developments can be done once and for all for whole classes of problems. To make the approach practical, one must find more compact rules (ideally, semi-automatically applicable ones) for manipulating programs with assertions.

A second point concerns the issue of functional (i.e., combinator-oriented) versus applicative (λ-calculus-oriented) languages. Of course, all the rules in Section 2 can be lifted to the functional level. However, the ones involving substitution get a bit more complicated. Consider, for instance, the expression p ▷ p ∘ f with a Boolean function p and an arbitrary function f, both with the same argument type. If we write this in λ-notation, viz. as λx. p(x) ▷ p(f(x)), we see that we cannot replace the second occurrence of p by the everywhere-true function, since p(x) does not necessarily imply p(f(x)). To avoid defining a corresponding special mechanism (substitution only for occurrences not to the left of compositions) we have used an applicative rather than a functional language style.

Acknowledgment

This work has profited a great deal from discussions with F.L. Bauer, U. Berger, R. Berghammer, M. Broy, H. Ehler, M. Lichtmannegger, H. Partsch, and H. Schwichtenberg. Valuable comments were also provided by the referee.

9  References

[Bauer, Wössner 82] F.L. Bauer, H. Wössner: Algorithmic language and program development. New York: Springer 1982

[Bauer et al. 85] F.L. Bauer et al.: The Munich project CIP. Volume I: The wide spectrum language CIP-L. Lecture Notes in Computer Science 183. New York: Springer 1985

[Bauer et al. 89] F.L. Bauer et al.: Formal program construction by transformations—Computer-aided, Intuition-guided Programming. IEEE Trans. Software Eng. 15, 165–180 (1989)

[Dijkstra 76] E.W. Dijkstra: A discipline of programming. Englewood Cliffs, N.J.: Prentice-Hall 1976

[Manna 74] Z. Manna: Mathematical theory of computation. New York: McGraw-Hill 1974

[Monk 88] L.G. Monk: Inference rules using local contexts. J. Automated Reasoning 4, 445–462 (1988)

[Paige, Koenig 82] R. Paige, S. Koenig: Finite differencing of computable expressions. ACM TOPLAS 4, 402–454 (1982)

[Partsch 86] H. Partsch: Transformational program development in a particular problem domain. Sci. Comput. Programming 7, 99–241 (1986)

[Pepper 85] P. Pepper: Modal logics for applicative programs. Habilitation Thesis, Technische Universität München, 1985

[Pepper 87] P. Pepper: Application of modal logics to the reasoning about applicative programs. In L.G.L.T. Meertens (ed.): Program specification and transformation. Amsterdam: North-Holland 1987, 429–449

[Sharir 82] M. Sharir: Some observations concerning formal differentiation of set theoretic expressions. ACM TOPLAS 4, 196–226 (1982)
