Symbolic Model Checking for Probabilistic Processes

Christel Baier
Fakultät für Mathematik & Informatik, Universität Mannheim, 68131 Mannheim, Germany

Marta Kwiatkowska and Mark Ryan
School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK

January 15, 1997

Abstract

We introduce a symbolic model checking procedure for Probabilistic Computation Tree Logic (PCTL) and its generalization PCTL* over labelled Markov chains as models. Model checking for probabilistic logics typically involves solving linear equation systems as a means to ascertain the probability of a given (non-probabilistic) formula holding in a state. Our algorithm is based on the idea of representing the matrices used in the linear equation systems in terms of an appropriate generalization of Binary Decision Diagrams (BDDs) called Multi-Terminal Binary Decision Diagrams (MTBDDs), introduced in Clarke et al [8], and combining that with BDD-based model checking. To enable precise probability calculation for path formulas we use the standard construction of the Rabin automaton (via the Vardi & Wolper construction of the Büchi automaton followed by Safra's determinization) for an LTL formula and, as suggested by de Alfaro [2], construct the product of the automaton thus obtained with the Markov chain model. Expressing these standard constructions as BDDs avoids explicit state space construction, thus yielding an efficient model checking procedure.

1 Introduction

Probabilistic techniques, and in particular probabilistic logics, have proved successful in the specification and verification of systems that exhibit uncertainty, such as fault-tolerant systems, randomized distributed systems and communication protocols. Models for such systems are variants of probabilistic automata¹ (such as the labelled Markov chains used in e.g. [18, 20, 31, 32, 10]), in which the usual (boolean) transition relation is replaced with its probabilistic version, given in the form of a Markov probability transition matrix. The probabilistic logics are typically obtained by "lifting" a non-probabilistic logic to the probabilistic case by constructing, for each formula φ and each real number p in the interval [0,1], the formula [φ]≥p in which p acts as a threshold for truth², in the sense that for [φ]≥p to be satisfied in the state s the probability that φ holds in s must be at least p. With such logics one can express quantitative properties such as "the probability of the message being delivered within t time steps is at least 0.75" (see e.g. the timing or average-case analysis of real-time or randomized distributed systems [18, 17, 4, 5, 2]) or (the more prevalent) qualitative properties, for which φ is required to be satisfied by almost all executions (which amounts to showing that φ is satisfied with probability 1, see e.g. [1, 10, 17, 18, 15, 16, 25, 26, 31]). Much has been published concerning verification methods for probabilistic logics, among them probabilistic extensions of dynamic logic [21, 13] and of temporal and modal logics, e.g. [2, 5, 10, 18, 15, 22, 23, 26, 28, 31], and automatic procedures for checking satisfaction for such logics have been proposed. The latter are based on the reduction of the calculation of (an estimate of) the probability of formulas being satisfied to a linear programming problem: for example, in [18], the calculation of the probability of until formulas is based on solving the linear equation system given by an n × n matrix, where n is the size of the state space. Thus, although optimal methods are known (the lower bound is double exponential in the size of the formula [11]), these algorithms are not of much practical use when verifying realistic systems.



¹ In this paper we shall only consider deterministic automata.
² See [21, 29, 19] for a different approach.

As a result, the efficiency of probabilistic analysis lags behind efficient model checking techniques for conventional logics, such as symbolic model checking [7, 9, 12, 24], for which tools capable of tackling industrial scale applications are available (cf. smv). This is undesirable, as probabilistic approaches allow one to establish that certain properties hold (in some meaningful probabilistic sense) where conventional model checkers fail, either because the property simply is not true in the state (but holds in that state with some acceptable probability), or because exhaustive search of only a portion of the system is feasible. The main difficulty with probabilistic model checking is the need to integrate a linear algebra package with a conventional model checker. Despite the power of existing linear algebra packages, this can lead to inefficient and time-consuming computation through the implicit requirement for the construction of the state space. This paper proposes an alternative, which is based on expressing the probability calculations in terms of Multi-Terminal Binary Decision Diagrams (MTBDDs) [8]³. MTBDDs are a generalization of (ordered) BDDs in the sense that they allow finitely many (arbitrary) real numbers in the terminal nodes instead of just 0 and 1, and so can provide a compact representation for matrices. As a matter of fact, in [8] MTBDDs have been shown to perform no worse than sparse matrices. Thus, converting to MTBDDs would ensure smooth integration with a symbolic model checker such as smv, and has the potential to outperform sparse matrices due to the compactness of the representation, in the same way as BDDs have outperformed other methods. As with BDDs, precise (time) complexity estimates of model checking for MTBDDs are difficult to obtain, but the success of BDDs in practice [7, 24] serves as sufficient encouragement to develop the foundations of MTBDD-based probabilistic model checkers.

³ The suggestion to use MTBDDs was made by Ed Clarke.

In this paper we consider a probabilistic extension of CTL called Probabilistic Computation Tree Logic (PCTL) and its generalization PCTL*, and give a symbolic model checking procedure which avoids the explicit construction of the state space. We use (finite-state) labelled Markov chains as models (we do not consider nondeterminism). The model checking procedure is based on the encoding of CTL in terms of the relational mu-calculus [7], which we generalise to cater for PCTL by applying MTBDDs to the probability calculations: the boolean formulas are expressed as BDDs, and a suitable combination of BDDs and MTBDDs is used for the probabilistic formulas. The case of PCTL, which only allows conjunctions of state formulas, is relatively straightforward, but PCTL does not allow one to express liveness properties (such as the PCTL* formula [□(message_sent → ◇message_delivered)]≥0.75) which are expressible in PCTL*. In the latter case conjunctions of path formulas are needed, which involves calculating the probability of the path formula f1 ∧ f2 in the state s given the respective probabilities of f1 and f2 in s. In general, i.e. in cases involving both probabilistic as well as non-deterministic choice, the calculation of precise probability estimates is difficult, as knowledge of the (in)dependence of the events is needed⁴. The remaining part of the paper is devoted to the symbolic model checking of PCTL* formulas. The construction uses the idea of de Alfaro [2] and is based on the reduction of an LTL formula to a deterministic Rabin automaton (for which the Vardi & Wolper construction [32] followed by Safra's determinization procedure [27] is used), and on constructing the product of the formula automaton thus obtained and the labelled Markov chain model. This yields a (labelled Markov chain) model in which the probability of the path formulas can be ascertained precisely. Since the above construction of the Rabin automaton can be expressed in terms of BDDs, we obtain a symbolic model checking algorithm for PCTL* by incorporating the MTBDD-based method of probability calculation. The algorithm can easily be programmed by means of the MTBDD or ADD packages of [8, 3].

⁴ One could use the minimum probability, or the maximum of 0 and the sum of the probabilities less 1, but those result in lower bounds only; see [29, 19] for more details.

We assume some familiarity with BDDs, automata on infinite sequences, probability and measure theory [7, 30, 14]; for readers’ convenience we include additional details in the Appendix.

2 Labelled Markov chains

We use (discrete-time) Markov chains as models (we do not consider nondeterminism). Let AP denote a finite set of atomic propositions. A labelled Markov chain is a tuple M = (S, P, L) where S is a finite set of states, P : S × S → [0,1] is a transition matrix, i.e. Σ_{t ∈ S} P(s,t) = 1 for all s ∈ S, and L : S → 2^AP is a labelling function which assigns to each state s ∈ S a set of atomic propositions. In what follows, to avoid collapse of states under labelling, we suppose that L(s) ≠ L(s′) for all states s, s′ with s ≠ s′.⁵

⁵ This is not a real restriction, since we may introduce "new" propositions as necessary if the labelling function does not distinguish distinct states.

Execution sequences arise by resolving the probabilistic choices. Formally, an execution sequence in M is a nonempty (finite or infinite) sequence σ = s0 s1 s2 … where the si are states and P(si−1, si) > 0 for i = 1, 2, …. (The case σ = s0 is allowed.) The first state of σ is denoted by first(σ). If σ is finite then the last state of σ is denoted by last(σ). σ(k) denotes the k-th state of σ (i.e. σ(k) = sk). An execution sequence σ is also called a path, and a fulpath iff it is infinite. Path denotes the set of fulpaths, and Path(s) the set of fulpaths σ with first(σ) = s. A state t is called reachable from s if there exists a finite path σ with first(σ) = s and last(σ) = t. For s ∈ S, consider the smallest σ-algebra on Path(s) which contains the basic cylinders {σ ∈ Path(s) : ω is a prefix of σ}, where ω ranges over all finite execution sequences starting in s. The probability measure Prob on this σ-algebra is the unique measure with Prob{σ ∈ Path(s) : ω is a prefix of σ} = P(ω), where P(s0 s1 … sk) = P(s0, s1) · P(s1, s2) · … · P(sk−1, sk).

Example 2.1 We consider a simple communication protocol similar to that in [18]. The system consists of three entities: a sender, a medium and a receiver. The sender sends a message to the medium, which in turn tries to deliver the message to the receiver. With probability 1/100 the message gets lost, in which case the medium tries again to deliver it. With probability 1/100 the message is destroyed (but delivered); with probability 98/100 the correct message is delivered. When the (correct or faulty) message is delivered, the receiver acknowledges receipt of the message. For simplicity, we assume that the acknowledgement cannot be destroyed or lost. We describe the system in a simplified way where we omit all irrelevant states (e.g. the state where the receiver acknowledges the receipt of the correct message).

[Diagram: the labelled Markov chain of the protocol, with transitions sinit → sdel (probability 1), sdel → sinit (0.98), sdel → slost (0.01), sdel → serror (0.01), slost → sdel (1) and serror → sinit (1).]

We use the following four states:

  sinit   the state in which the sender passes the message to the medium
  sdel    the state in which the medium tries to deliver the message
  slost   the state reached when the message is lost
  serror  the state reached when the message is destroyed

The transition sdel → sinit stands for the acknowledgement of the receipt of the correct message, serror → sinit for the acknowledgement of the receipt of the destroyed message. We use two atomic propositions a1, a2 and the labelling function L(sinit) = ∅, L(sdel) = {a1, a2}, L(slost) = {a2}, L(serror) = {a1}.
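The following small Python sketch (ours, for illustration only, not part of the paper) writes this chain out as plain data and computes the probability of a basic cylinder; the state names, labels and probabilities are those given above.

```python
# Illustrative only: the labelled Markov chain of Example 2.1 as plain Python
# data (transition matrix P and labelling L), together with the probability
# P(s0 s1 ... sk) = P(s0,s1) * ... * P(s_{k-1},s_k) of a basic cylinder.
P = {
    "s_init":  {"s_del": 1.0},
    "s_del":   {"s_init": 0.98, "s_lost": 0.01, "s_error": 0.01},
    "s_lost":  {"s_del": 1.0},
    "s_error": {"s_init": 1.0},
}

L = {
    "s_init":  set(),
    "s_del":   {"a1", "a2"},
    "s_lost":  {"a2"},
    "s_error": {"a1"},
}

def path_probability(path):
    """Probability mass of the basic cylinder spanned by a finite path."""
    prob = 1.0
    for s, t in zip(path, path[1:]):
        prob *= P[s].get(t, 0.0)
    return prob

# 1 * 0.01 * 1 * 0.98 = 0.0098
print(path_probability(["s_init", "s_del", "s_lost", "s_del", "s_init"]))
```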

3 Probabilistic branching time temporal logic

In this section we present the syntax and semantics of the logic PCTL (Probabilistic Computation Tree Logic) introduced by Hansson & Jonsson [18]⁶. PCTL is a probabilistic extension of CTL which allows one to express quantitative properties of probabilistic processes such as "the system terminates with probability at least 0.75". PCTL contains atomic propositions and the operators next-step X and until U. The operators X and U are used in connection with an interval of probabilities. The syntax of PCTL is as follows:

  φ ::= tt | a | φ1 ∧ φ2 | ¬φ | [Xφ]⊒p | [φ1 U φ2]⊒p

where a is an atomic proposition, p ∈ [0,1], and ⊒ is either ≥ or >. Formulas of the form Xφ or φ1 U φ2, where φ, φ1, φ2 are PCTL formulas, are called path formulas. PCTL formulas are interpreted over the states of a labelled Markov chain, whereas path formulas are interpreted over paths.

⁶ For simplicity we omit the bounded until operator of [18].

  s ⊨ tt                  for all s ∈ S
  s ⊨ a                   iff a ∈ L(s)
  s ⊨ φ1 ∧ φ2             iff s ⊨ φi, i = 1, 2
  s ⊨ ¬φ                  iff s ⊭ φ
  s ⊨ [Xφ]⊒p              iff Prob{σ ∈ Path(s) : σ ⊨ Xφ} ⊒ p
  s ⊨ [φ1 U φ2]⊒p         iff Prob{σ ∈ Path(s) : σ ⊨ φ1 U φ2} ⊒ p
  σ ⊨ Xφ                  iff σ(1) ⊨ φ
  σ ⊨ φ1 U φ2             iff there exists k ≥ 0 with σ(i) ⊨ φ1 for i = 0, 1, …, k−1, and σ(k) ⊨ φ2

Figure 1: The satisfaction relation

Informally, Xφ asserts that φ holds in the next state. φ1 U φ2 is true iff from now on φ1 is fulfilled until φ2 holds, and φ2 does hold sometime in the future. The subscript ⊒p states that the probability of the paths starting in the current state and fulfilling the path formula is ⊒ p.

The usual derived constants and operators are: ff = ¬tt, φ1 ∨ φ2 = ¬(¬φ1 ∧ ¬φ2), φ1 → φ2 = ¬φ1 ∨ φ2. Operators for modelling "eventually" or "always" can be derived by: [◇φ]⊒p = [tt U φ]⊒p and [□φ]⊒p = ¬[◇¬φ]⊒′(1−p), where ⊒′ denotes the dual bound (≥′ = > and >′ = ≥). For instance, [◇φ]≥p states that the probability of a path starting in the current state reaching a state in which φ holds is ≥ p. Formulas of the form [□φ]≥p express safety properties, asserting that the probability of the paths on which φ holds continuously is ≥ p.

Let M = (S, P, L) be a labelled Markov chain. The satisfaction relation ⊨ ⊆ S × PCTL is shown in Figure 1 (where we write s ⊨ φ instead of (s, φ) ∈ ⊨). The fact that for a path formula f the set {σ ∈ Path(s) : σ ⊨ f} is measurable is an easy verification. If s ⊨ φ then we say s fulfills φ (or also: s satisfies φ, or φ holds in s). The truth value of formulas involving the (linear time) quantifiers ◇ and □ can be derived:

  s ⊨ [◇φ]⊒p  iff  Prob{σ ∈ Path(s) : σ(k) ⊨ φ for some k ≥ 0} ⊒ p,
  s ⊨ [□φ]⊒p  iff  Prob{σ ∈ Path(s) : σ(k) ⊨ φ for all k ≥ 0} ⊒ p.
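The path clauses of Figure 1 can be made concrete by the following illustrative Python sketch (ours, not part of the paper), which checks Xφ and φ1 U φ2 on a finite prefix of a path, given the satisfaction sets of φ1 and φ2.

```python
# A minimal sketch of the path clauses of Figure 1, evaluated on a finite
# prefix of a path; "until" can only be confirmed if a phi2-state occurs
# within the prefix.
def holds_next(prefix, sat):
    """sigma |= X phi  (needs at least two states in the prefix)."""
    return len(prefix) > 1 and prefix[1] in sat

def holds_until(prefix, sat1, sat2):
    """sigma |= phi1 U phi2, confirmed within the given prefix."""
    for state in prefix:
        if state in sat2:
            return True
        if state not in sat1:
            return False
    return False  # inconclusive on a finite prefix; treated as "not confirmed"

# Example, using the chain of Example 2.1:
prefix = ["s_del", "s_lost", "s_del", "s_init"]
print(holds_until(prefix, sat1={"s_del", "s_lost"}, sat2={"s_init"}))  # True
```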

Given a probabilistic process 𝒫, described by a labelled Markov chain M = (S, P, L) with an initial state s, we say 𝒫 satisfies a PCTL formula φ iff s ⊨ φ. For instance, if a is an atomic proposition which stands for termination and 𝒫 satisfies [◇a]≥p, then 𝒫 terminates with probability ≥ p. If ci, i = 1, 2, are atomic propositions stating that subprocesses 𝒫i of 𝒫 are in their critical sections and 𝒫 satisfies [□(¬c1 ∨ ¬c2)]≥p, then 𝒫 fulfills mutual exclusion with probability ≥ p; more formally, the probability of an execution in which 𝒫1 and 𝒫2 are never simultaneously in their critical sections is at least p.

4 Multi-terminal binary decision diagrams

(Ordered) Binary Decision Diagrams (BDDs), introduced by Bryant [6] and applied to model checking in [7, 9, 24], are a cogent representation of boolean functions f : {0,1}ⁿ → {0,1}, based on the canonical representation of the binary tree of (the Shannon expansion of) the function as a directed graph, obtained by folding internal nodes representing identical (sub)functions (subject to an ordering of the variables to guarantee uniqueness of the representation) and using 0 and 1 as leaves. In [7] it is shown how one can generalize BDDs to cogently and efficiently represent matrices: firstly, by allowing finitely many (possibly real) numbers as the leaves, and secondly by using binary expansions instead of integer indices of the matrix. The multi-terminal binary decision diagrams (MTBDDs) thus obtained can serve as a representation of functions f : {0,1}ⁿ → D where D is a set of values. To represent an m × m real matrix one uses a function f : {0,1}²ⁿ → IR, where n = ⌈log m⌉ is the length of the binary expansion of each integer index and IR denotes the reals.

Formally, MTBDDs can be defined as follows. Let x1, …, xn be pairwise distinct variables. A multi-terminal binary decision diagram (MTBDD) over (x1, …, xn) is a rooted, directed graph with vertex set V containing two types of vertices, nonterminal and terminal. Each nonterminal vertex v is labelled by a variable var(v) ∈ {x1, …, xn} and two sons left(v), right(v) ∈ V. Each terminal vertex v is labelled by a real number value(v)⁷. For each nonterminal node v, we require var(v) < var(left(v)) if left(v) is nonterminal, and similarly var(v) < var(right(v)) if right(v) is nonterminal. Here we use the ordering xi < xj iff i < j. Each MTBDD Q over {x1, …, xn} represents a function FQ : {0,1}ⁿ → IR, and, vice versa, each function F : {0,1}ⁿ → IR can be described by an MTBDD over (x1, …, xn). Subject to the (fixed) ordering of the variables xi < xj iff i < j, one can obtain a unique reduced MTBDD. A suitable adaptation of the operator REDUCE() [6] yields an operator which accepts an MTBDD as its input and returns the corresponding reduced MTBDD.

In the sequel, by the MTBDD for a function F : {0,1}ⁿ → IR we mean the unique reduced MTBDD Q with FQ = F. If all terminal vertices are labelled by 0 or 1, i.e. if the associated function FQ is a boolean function, the MTBDD specializes to a BDD over (x1, …, xn).

⁷ For our purposes, value(v) is the probability of a certain event, i.e. value(v) is a real number between 0 and 1.
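As an aside, the sharing that REDUCE() exploits can be illustrated by the following small Python sketch (ours, using an ad-hoc node store; it is not the representation of [8]): structurally identical subgraphs are hash-consed, and redundant tests with identical sons are skipped.

```python
# Illustrative MTBDD node store with hash-consing: building the same
# (var, left, right) triple twice returns the same node id, and a node whose
# two sons coincide is never created (the "redundant test" reduction rule).
class MTBDDStore:
    def __init__(self):
        self._unique = {}   # key -> node id
        self._nodes = {}    # node id -> key
        self._next = 0

    def _intern(self, key):
        if key not in self._unique:
            self._unique[key] = self._next
            self._nodes[self._next] = key
            self._next += 1
        return self._unique[key]

    def leaf(self, value):
        return self._intern(("leaf", value))

    def node(self, var, left, right):
        if left == right:            # skip the redundant test on var
            return left
        return self._intern((var, left, right))

store = MTBDDStore()
zero, one = store.leaf(0.0), store.leaf(1.0)
# The two inner nodes below are shared: both calls return the same id.
q = store.node(1, store.node(2, zero, one), store.node(2, zero, one))
```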

4.1 Representing labelled Markov chains by MTBDDs

To represent the transition matrix of a labelled Markov chain by an MTBDD we abstract from the names of states and instead, similarly to [7, 9], use binary tuples of atomic propositions that are true in the state. Let M = (S, P, L) be a labelled Markov chain. We fix an enumeration a1, …, an of the atomic propositions and identify each state s with the boolean n-tuple (b1, …, bn) where bi = 1 iff ai ∈ L(s). Formally, we define an injection e : S → {0,1}ⁿ by e(s) = (b1, …, bn) where bi = 1 iff ai ∈ L(s)⁸. We replace M by M′ = ({0,1}ⁿ, P′, L′) where L′(b1, …, bn) = {ai : bi = 1} and where the transition matrix P′ is given by: P′(e(s), e(t)) = P(s, t) and, for all b, b′ ∈ {0,1}ⁿ such that b ∉ e(S) or b′ ∉ e(S), P′(b, b) = 1 and P′(b, b′) = 0 if b ≠ b′⁹. In what follows, we identify P′ with the function F : {0,1}²ⁿ → [0,1], F(x1, y1, …, xn, yn) = P′((x1, …, xn), (y1, …, yn)), and represent M by the MTBDD for P′ over (x1, y1, …, xn, yn). By abuse of notation we refer to P′ as P. The associated MTBDD is denoted by P.

⁸ The injectivity of e follows from the assumption that L(s) ≠ L(s′) if s ≠ s′.
⁹ It might be the case that M′ contains more states than M (in the case where e(S) ≠ {0,1}ⁿ). The additional states can be viewed as terminal states which can never be reached from the initial state.

8 > > < > > :

00 01 10 11 1 : if x1 y1 x2 y2 0101; 0111; 1000 00 0 0 0 1 1 : if x1 y1 x2 y2 1011; 1110 01 0 0 0 1 F (x1 ; y1 ; x2 ; y2 ) = 100 98 : if x y x y = 1010 0 0 0 10 1 1 1 2 2 100 0 : otherwise. 11 0.98 0.01 0.01 0 The MTBDD P for F is shown in Figure 2. (The thick lines stand for the “right” edges, the thin lines for the “left” edges.)

2f 2f

g

g
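The following illustrative Python fragment (ours, not from the paper) spells out this encoding for Example 4.1: states become bit pairs, and the transition matrix becomes a function of the interleaved source/target bits.

```python
# Illustrative: the encoding of Section 4.1 for Example 4.1.  States map to
# bit tuples, and F(x1, y1, x2, y2) is the matrix entry for source (x1, x2)
# and target (y1, y2).
encode = {"s_init": (0, 0), "s_del": (1, 1), "s_lost": (0, 1), "s_error": (1, 0)}

P = {("s_init", "s_del"): 1.0,
     ("s_del", "s_init"): 0.98, ("s_del", "s_lost"): 0.01, ("s_del", "s_error"): 0.01,
     ("s_lost", "s_del"): 1.0, ("s_error", "s_init"): 1.0}

matrix = {(encode[s], encode[t]): p for (s, t), p in P.items()}

def F(x1, y1, x2, y2):
    """Value of the transition matrix at source (x1, x2) and target (y1, y2)."""
    return matrix.get(((x1, x2), (y1, y2)), 0.0)

# P(s_del, s_lost) = 1/100, i.e. bit string x1 y1 x2 y2 = 1011.
print(F(1, 0, 1, 1))
```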

4.2 Operators on MTBDDs

The MTBDD operators adapted from Bryant [6] by Clarke et al [8] form the basis for our model checking algorithm, and are briefly described below.

Operator BDD(Q, I): To evaluate the truth value of a probabilistic formula [f]⊒p we use an operator BDD(Q, I) that takes an MTBDD Q (representing the probability of f holding) and an interval I as its input, and returns a BDD B over the same variables that represents the boolean function F(x) = 1 iff FQ(x) ∈ I. We obtain B = BDD(Q, I) from Q by changing the values of the terminal vertices (into 1 or 0 depending on whether or not value(v) ∈ I) and applying Bryant's reduction procedure REDUCE(). We write BDD(Q, > p) rather than BDD(Q, ]p, 1]) and BDD(Q, ≥ p) rather than BDD(Q, [p, 1]). If Q is an MTBDD over x = (x1, …, xn), and z = (z1, …, zm) is a tuple of pairwise distinct variables with z_{i_h} = xh for h = 1, …, n and some sequence 1 ≤ i1 < … < in ≤ m, then Q[z] denotes the MTBDD for the function F(z) = FQ(x)¹⁰.

¹⁰ Note that the underlying graph of Q[z] is the graph of Q.

[Figure 2: The MTBDD P for the labelled Markov chain of Example 2.1 (terminal values 0, 0.01, 0.98 and 1); diagram not reproduced.]



Operator APPLY(): The operator APPLY [8] allows elementwise application of a binary operator op to two MTBDDs. If op is a binary operator on real numbers (e.g. multiplication · or minus −) and Q1, Q2 are MTBDDs over x, then APPLY(Q1, Q2, op) yields an MTBDD over x which represents the function

  f(x) = fQ1(x) op fQ2(x).

Operator COMPOSE(): This operator, suitably adapted from Bryant's algorithm for computing the BDD representation of the composition of two boolean functions, allows the composition of a real function F : {0,1}ⁿ⁺¹ → IR and a boolean function G : {0,1}ⁿ → {0,1}, giving H(x) = F(x, G(x)). By successively applying the composition operator, we obtain an operator COMPOSE_k(Q, B1, …, Bk) that takes an MTBDD Q over n + k variables (x1, …, x_{n+k}) and BDDs Bi over n variables (x1, …, xn) as its input and returns the MTBDD for H(x) = FQ(x, FB1(x), …, FBk(x)).

Matrix operators: In [8] algorithms for several operations on matrices are proposed, such as the multiplication of a matrix with a vector or the multiplication of two matrices. If A is a matrix with 2ⁿ rows and columns then we identify A with the function f : {0,1}²ⁿ → IR where f(x1, y1, …, xn, yn) is the value of A in the (i + 1)-th row and (j + 1)-th column and

  i = Σ_{ν=1}^{n} xν · 2^{n−ν},   j = Σ_{ν=1}^{n} yν · 2^{n−ν}.

Thus, A can be identified with an MTBDD A over (x1, y1, …, xn, yn). Similarly, each vector q with 2ⁿ arguments can be identified with (the MTBDD over (x1, …, xn) for) the function F : {0,1}ⁿ → IR where F(x1, …, xn) is the (i + 1)-th argument of q and where i is as above. Vice versa, each MTBDD over n variables can be viewed as a vector with 2ⁿ arguments, and each MTBDD over 2n variables as a matrix with 2ⁿ rows and columns. The operators needed (all based on [8]) are as follows:

(1) If Q is an MTBDD over n variables and A an MTBDD over 2n variables, then MV_MULTI(A, Q) denotes the MTBDD over n variables that represents the vector A · q (where q is the vector that corresponds to Q and A the matrix that corresponds to A), and similarly for VM_MULTI(Q, A), which represents q · A.

(2) If A1, A2 are MTBDDs over 2n variables, then MM_MULTI(A1, A2) denotes the MTBDD over 2n variables that represents the product of the two matrices represented by A1 and A2.

(3) For A an MTBDD over 2n variables such that the corresponding matrix A is regular, we decompose A into a lower triangular matrix L, an upper triangular matrix U and a permutation matrix D, i.e. A = L · U · D. Let LU(A) be the operator that returns the tuple (L, U, D) where L, U and D are MTBDDs that represent L, U and D respectively. Let INV(A) be the operator that takes the MTBDD of a regular lower or upper triangular matrix A as its input and returns the MTBDD for the inverse matrix A⁻¹.
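For illustration, the recursive idea behind APPLY can be sketched as follows (ours; MTBDDs are represented here as nested tuples rather than as the shared graphs of [8], and memoisation plays the role of the computed-table).

```python
# Illustrative recursive APPLY on MTBDDs given as nested tuples: a terminal is
# ('leaf', value), a nonterminal is (var, left, right) with variables tested
# in increasing order.
from functools import lru_cache

def apply_op(q1, q2, op):
    @lru_cache(maxsize=None)
    def go(a, b):
        if a[0] == "leaf" and b[0] == "leaf":
            return ("leaf", op(a[1], b[1]))
        # split on the smallest variable tested by either argument
        var = min(x[0] for x in (a, b) if x[0] != "leaf")
        a0, a1 = (a[1], a[2]) if a[0] == var else (a, a)
        b0, b1 = (b[1], b[2]) if b[0] == var else (b, b)
        left, right = go(a0, b0), go(a1, b1)
        return left if left == right else (var, left, right)
    return go(q1, q2)

# Example: pointwise product of two one-variable MTBDDs over x1.
q1 = (1, ("leaf", 0.5), ("leaf", 1.0))
q2 = (1, ("leaf", 0.0), ("leaf", 0.2))
print(apply_op(q1, q2, lambda u, v: u * v))   # (1, ('leaf', 0.0), ('leaf', 0.2))
```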

Operator SOLVE(): The matrix operators can be combined to yield the operator SOLVE(A, Q), which takes as its input an MTBDD A over 2n variables whose corresponding matrix A is regular and an MTBDD Q over n variables which represents a vector q, and returns an MTBDD Q′ over n variables which represents the unique solution of the linear equation system A · x = q¹¹.

¹¹ SOLVE(A, Q) computes (L, U, D) := LU(A), L′ := INV(L), Y := MV_MULTI(L′, Q), U′ := INV(U), Z := MV_MULTI(U′, Y) and returns Q′ := VM_MULTI(Z, D).

4.3 Description of BDDs by relational terms of the µ-calculus

In [7] a symbolic model checking algorithm for the relational mu-calculus can be found, which takes an n-ary relational term R and a domain D together with interpretations for the individual and relational variables as its input (where the relations are represented by BDDs), and returns a BDD over n variables that represents (the characteristic function of) the meaning of R. We refer the reader to [7] for more details. We fix an interpretation as follows. We consider a fixed domain (of binary expansions) of the form {0,1}ⁿ with BDDs replacing the free relational variables (i.e. we fix the interpretation for the free relational variables). Then each relational term represents a BDD, which can be computed with the methods of [7]. In the sequel, we identify each relational term with the corresponding BDD, where we assume that the BDD for an n-ary relational term is over the variables (x1, …, xn). For instance, if B1, B2 are BDDs over n variables and v = (v1, …, vn), u = (u1, …, un), then B = λv λu [B1(v) ∧ B2(u)] stands for the relational term λv1 … vn λu1 … un [Z1(v1, …, vn) ∧ Z2(u1, …, un)], where Z1 and Z2 are n-ary relational variables that are interpreted by B1 and B2 respectively. Thus, we identify B with the BDD over (x1, …, x2n) which represents the function

  F(x1, …, x2n) = FB1(x1, …, xn) ∧ FB2(xn+1, …, x2n).

In the sequel, we also use other names for the variables of a BDD given by a relational term. In that case, we specify explicitly the names of the variables and their ordering. As in Section 4.1, we use variables x1, …, xn, y1, …, yn with the ordering x1 < y1 < … < xn < yn for the MTBDD P of a labelled Markov chain (S, P, L).

Notation 4.2 We summarize the notation for the BDD operators used in the model checking algorithm. With the exception of REACH(), which is needed for the calculation of the probability of until formulas, all are straightforward.

- We write xi for the formula SELECT_i^n(x1, …, xn), where SELECT_i^n denotes the unique BDD over n variables which represents the function f(x1, …, xn) = xi.
- If x = (x1, …, xn) and y = (y1, …, yn), then x = y denotes the formula ∧_{i=1}^{n} (xi ↔ yi), where x ↔ y = (x ∧ y) ∨ (¬x ∧ ¬y).
- We write ¬B instead of λx[¬B(x)], and B1 ∧ B2 for the BDD λx[B1(x) ∧ B2(x)]¹².
- TRUE denotes the BDD that consists of a single terminal vertex v with value(v) = 1. We define FALSE = ¬TRUE.

¹² Here, we suppose that x = (x1, …, xn) and that B, B1, B2 are BDDs with n variables.

For BDDs B1, B2 with n variables and a BDD T with 2n variables, we define

  REACH(B1, B2, T) = µZ [R]   where   R = λx [ B2(x) ∨ (B1(x) ∧ ∃y[Z(y) ∧ T(x, y)]) ].

Here, x = (x1, …, xn), y = (y1, …, yn) and T(x, y) = T(x1, y1, …, xn, yn). For a labelled Markov chain M = (S, P, L) which is represented by an MTBDD P as described in Section 4.1, and where B1, B2 are BDDs that represent the characteristic functions of subsets S1, S2 of S, REACH(B1, B2, BDD(P, > 0)) represents the set of states s ∈ S from which there exists an execution sequence s = s0, s1, …, sk with k ≥ 0, s0, …, s_{k−1} ∈ S1 and sk ∈ S2; it is used in the operator PROB() defined in Section 5.
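A set-based analogue of REACH (illustrative only; the actual operator works on BDDs) computes the same least fixed point explicitly:

```python
# Least fixed point of Z = S2  union  { s in S1 : some T-successor of s is in Z }.
def reach(s1, s2, succ):
    """succ maps a state to the set of states reachable in one T-step."""
    z = set(s2)
    changed = True
    while changed:
        changed = False
        for s in s1:
            if s not in z and succ.get(s, set()) & z:
                z.add(s)
                changed = True
    return z

succ = {"s_init": {"s_del"}, "s_del": {"s_init", "s_lost", "s_error"},
        "s_lost": {"s_del"}, "s_error": {"s_init"}}
# == {'s_init', 's_del', 's_lost'} (the set V of Example 5.2)
print(reach({"s_del", "s_lost"}, {"s_init"}, succ))
```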


5 Model checking for PCTL

The model checking algorithm for PCTL is based on established BDD techniques (i.e. converting boolean formulas to their BDD representation, see e.g. [7]), which it combines with a new method, namely expressing the probability calculation for the probabilistic formulas in terms of MTBDDs. In the case of [Xφ]⊒p the probability is calculated by multiplying the transition matrix by the boolean vector which is 1 exactly in the states satisfying φ, whereas for [φ1 U φ2]⊒p we derive an operator called PROB(), expressed in terms of MTBDDs and based on [18]. (Recall that the operator BDD(Q, ⊒ p) is explained in Section 4.)

Let M = (S, P, L) be a labelled Markov chain which is represented by an MTBDD P over 2n variables as described in Section 4.1. For each PCTL formula φ, we define a BDD B[φ] over x = (x1, …, xn) that represents Sat(φ) = {s ∈ S : s ⊨ φ}¹³. We compute the BDD representation B[φ] of a PCTL formula φ by structural induction on φ:

  B[tt] = TRUE
  B[ai] = λx[SELECT_i^n(x)]
  B[¬φ] = ¬B[φ]
  B[φ1 ∧ φ2] = B[φ1] ∧ B[φ2]
  B[[Xφ]⊒p] = BDD( MV_MULTI(P, B[φ]), ⊒ p )
  B[[φ1 U φ2]⊒p] = BDD( PROB(B[φ1], B[φ2], P), ⊒ p )

The operator PROB(B[φ1], B[φ2], P) assigns to each state s ∈ S the probability of the set of fulpaths from s satisfying φ1 U φ2; formally, it represents the function S → [0,1], s ↦ ps, where ps = Prob{σ ∈ Path(s) : σ ⊨ φ1 U φ2}. With slight modifications, we use the method proposed in [18] for computing ps, s ∈ S, based on solving the regular linear equation system (I − A) · x = b, where I denotes the |V′| × |V′| identity matrix, V is the set of all states s ∈ S such that ps > 0, V′ = V \ Sat(φ2), A = (P(s,t))_{s,t ∈ V′} and b = (bs)_{s ∈ V′} with bs = Σ_{t ∈ Sat(φ2)} P(s,t). Then

  ps = xs   if s ∈ V′,
  ps = 1    if s ∈ Sat(φ2),
  ps = 0    otherwise,

where x = (xs)_{s ∈ V′} is the unique solution of (I − A) · x = b.
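The following Python sketch (ours, illustrative; the paper carries this computation out symbolically with MTBDDs) renders the same computation with dense matrices and numpy.

```python
# Dense-matrix rendering of the until probabilities: V is the set of states
# with p_s > 0, V' = V \ Sat(phi2), and x solves (I - A) x = b.
import numpy as np

def reach_states(states, P, sat1, sat2):
    """States that can reach a phi2-state via phi1-states (i.e. p_s > 0)."""
    z, changed = set(sat2), True
    while changed:
        changed = False
        for s in sat1:
            if s not in z and any(P.get((s, t), 0.0) > 0 and t in z for t in states):
                z.add(s)
                changed = True
    return z

def until_probabilities(states, P, sat1, sat2):
    V = reach_states(states, P, sat1, sat2)
    Vp = [s for s in V if s not in sat2]                       # V'
    A = np.array([[P.get((s, t), 0.0) for t in Vp] for s in Vp])
    b = np.array([sum(P.get((s, t), 0.0) for t in sat2) for s in Vp])
    x = np.linalg.solve(np.eye(len(Vp)) - A, b) if Vp else b
    sol = dict(zip(Vp, x))
    return {s: 1.0 if s in sat2 else sol.get(s, 0.0) for s in states}
```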



We now demonstrate how PROB() can be expressed in terms of MTBDDs. Let Bi = B[φi], i = 1, 2. The set V is given by the BDD B = REACH(B1, B2, BDD(P, > 0)), and V′ by B′ = λx[B(x) ∧ ¬B2(x)]. In order to avoid the BDD for the "new" transition matrix A with ⌈log |V′|⌉ variables, we instead reformulate the equation system in terms of the matrix P′ = (p′_{s,t})_{s,t ∈ S} which is given by: p′_{s,t} = P(s,t) if s, t ∈ V′, and p′_{s,t} = 0 in all other cases. The MTBDD P′ for P′ can be obtained from the MTBDD P representing the Markov transition matrix. The following lemma shows that I − P′ is regular (we omit the proof).

Lemma 5.1 Let V′, P′, I be as above. Then I − P′ is regular. The unique solution x = (xs)_{s ∈ S} of the linear equation system (I − P′) · x = q, where q = (qs) with qs = Σ_{t ∈ Sat(φ2)} P(s,t), satisfies: xs = ps if s ∈ V′.

The algorithm for the operator PROB() is shown in Figure 3. It first calculates the MTBDDs P′, Q for the matrix P′ and the vector q¹⁴, and uses SOLVE() (cf. Section 4.2) to obtain the MTBDD Q′ satisfying FQ′(e(s)) = ps for all s ∈ V′. The result, the MTBDD Q″ for the vector p = (ps)_{s ∈ S}, is obtained from the MTBDD for the function F(x) = max{ FB2(x), FQ′(x) · FB′(x) }, which uses Q′ for all s ∈ V′ and ensures that 1 is returned as the probability of the states already satisfying φ2.

¹³ More precisely, B[φ] describes the characteristic function of Sat′(φ) = {x ∈ {0,1}ⁿ : x ⊨ φ}, where we take the satisfaction relation ⊨ w.r.t. M′. Thus, Sat(φ) is the set of states s such that e(s) belongs to Sat′(φ).
¹⁴ The vector q in Lemma 5.1 is obtained by multiplying the transition matrix with the boolean vector that is given by B2. Note that qs = Σ_{t ∈ S} FP(e(s), e(t)) · FB2(e(t)).


Algorithm:  PROB(B1, B2, P)
Input:      A labelled Markov chain represented by an MTBDD P over 2n variables, and BDDs B1, B2 over n variables
Output:     An MTBDD X over n variables which represents the function that assigns to each state the probability of reaching a B2-state via an execution sequence through B1-states
Method:
  B  := REACH(B1, B2, BDD(P, > 0));
  B′ := λx [ B(x) ∧ ¬B2(x) ];
  B² := λx1 y1 … xn yn [ B′(x1, …, xn) ∧ B′(y1, …, yn) ];
  P′ := APPLY(P, B², ·);
  I  := λx1 y1 … xn yn [ x = y ];
  A  := APPLY(I, P′, −);
  Q  := MV_MULTI(P, B2);
  Q′ := SOLVE(A, Q);
  Q″ := APPLY(B2, APPLY(Q′, B′, ·), max);
  Return(REDUCE(Q″)).

Figure 3: Algorithm PROB(B1, B2, P)

[Figure 4: The MTBDD PROB(B1, B2, P) (terminal values 0, 98/99 and 1) and the BDD B[φ] = BDD(PROB(B1, B2, P), ≥ 0.9); diagrams not reproduced.]

Example 5.2 We consider the system in Example 2.1 and the formula

  φ = [ try_to_deliver U correctly_delivered ]≥0.9

where try_to_deliver = a2 and correctly_delivered = ¬a1 ∧ ¬a2. Our algorithm first computes the BDDs B1 for Sat(try_to_deliver) = {sdel, slost} and B2 for Sat(correctly_delivered) = {sinit}, and then applies Algorithm PROB(B1, B2, P). V = {sinit, sdel, slost} is represented by the BDD B, and V′ = {sdel, slost} by the BDD B′. Thus, B², P′ and A stand for the matrices

  B² = [ 0 0 0 0 ]      P′ = [ 0 0    0 0 ]      A = [ 1  0     0  0 ]
       [ 0 1 0 1 ]           [ 0 0    0 1 ]          [ 0  1     0 −1 ]
       [ 0 0 0 0 ]           [ 0 0    0 0 ]          [ 0  0     1  0 ]
       [ 0 1 0 1 ]           [ 0 0.01 0 0 ]          [ 0 −0.01  0  1 ]

B2, viewed as a vector, is the characteristic vector of {sinit}. Thus, Q is the MTBDD for the vector P · q2 = (0, 0, 1, 0.98). We solve the linear equation system

  [ 1  0     0  0 ]         [ 0    ]
  [ 0  1     0 −1 ]  · x  = [ 0    ]
  [ 0  0     1  0 ]         [ 1    ]
  [ 0 −0.01  0  1 ]         [ 0.98 ]

which yields the solution x = (0, 98/99, 1, 98/99) (represented by the MTBDD Q′). Moreover, the MTBDD APPLY(Q′, B′, ·) can be identified with the vector (0, 98/99, 0, 98/99). PROB(B1, B2, P) and the BDD B[φ] are shown in Figure 4. Thus, B[φ] represents the characteristic function of Sat(φ) = {sinit, sdel, slost}.
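As a quick numerical cross-check (ours, not in the paper), the linear system above can be solved directly with numpy, with states ordered sinit, slost, serror, sdel.

```python
# Numerical check of the linear system of Example 5.2.
import numpy as np

A = np.array([[1.0,  0.0,  0.0,  0.0],
              [0.0,  1.0,  0.0, -1.0],
              [0.0,  0.0,  1.0,  0.0],
              [0.0, -0.01, 0.0,  1.0]])
q = np.array([0.0, 0.0, 1.0, 0.98])

x = np.linalg.solve(A, q)
print(x)   # [0.  0.98989899  1.  0.98989899], i.e. (0, 98/99, 1, 98/99)
```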

6 Model checking for PCTL*

Similarly to the way in which CTL* extends CTL, the logic PCTL can be extended to a much richer logic PCTL* which allows more complex path formulas, i.e. arbitrary combinations of path formulas by the boolean connectives and the operators X and U. The state formulas of PCTL* are given by:

  φ ::= tt | a | φ1 ∧ φ2 | ¬φ | [f]⊒p

where f is a path formula of PCTL*:

  f ::= φ | f1 ∧ f2 | ¬f | Xf | f1 U f2

As usual, ◇f = tt U f and □f = ¬◇¬f. Given a labelled Markov chain M = (S, P, L), the satisfaction relation ⊨ is defined in the obvious way. For instance,

  s ⊨ [X(φ1 U φ2)]⊒p  iff  Prob{σ ∈ Path(s) : σ ⊨ X(φ1 U φ2)} ⊒ p

where σ ⊨ X(φ1 U φ2) iff there is some j ≥ 1 with σ(j) ⊨ φ2 and σ(i) ⊨ φ1 for all 1 ≤ i < j.

As in the case of model checking of PCTL in Section 5, the model checker for PCTL* takes a state formula φ as its input and returns a BDD B[φ] that represents the set Sat(φ) = {s ∈ S : s ⊨ φ}. It should be clear that the treatment of all formulas except those of the form [f]⊒p is as for PCTL. Let f be a path formula and let φ1, …, φk be the maximal state subformulas of f (i.e. the proper state subformulas φi of f where at least one occurrence of φi in f is not contained in some other state subformula φj). We may assume that the BDDs B[φi] are already computed. We replace each occurrence of φi in f that is not contained in some other φj by a "new" atomic proposition a′i, i = 1, …, k. The resulting formula f′ is a linear time logic (LTL) formula over the atomic propositions a′1, …, a′k. For instance, if

  f = X(a1 ∧ a3) ∨ [X(a2 U (a1 ∧ a3))]>1/2

then we use two new atomic propositions a′1 (for a1 ∧ a3) and a′2 (for [X(a2 U (a1 ∧ a3))]>1/2), and replace f by f′ = X(a′1 ∨ a′2).

We extend M by the new atomic propositions a′1, …, a′k and define a new labelled Markov chain M̂ = (Ŝ, P̂, L̂), where L̂ is a labelling function that assigns to each state a subset of AP ∪ {a′1, …, a′k}. Formally,

  Ŝ = S × {0,1}^k,    L̂(<s, x′>) = L(s) ∪ l(x′)  where  l(x′) = {a′i : x′i = 1},

and, writing x′ = (x′1, …, x′k),

  P̂(<s, x′>, <t, y′>) = P(s, t)   if e′(s) = x′ and e′(t) = y′,
  P̂(<s, x′>, <t, y′>) = 1         if <s, x′> = <t, y′> and e′(s) ≠ x′,
  P̂(<s, x′>, <t, y′>) = 0         otherwise.

Here, e′ : S → {0,1}^k is given by e′(s) = (b1, …, bk) where bi = 1 iff s ⊨ φi¹⁵. e′ induces an embedding b : S → Ŝ, b(s) = <s, e′(s)>. It is easy to see that Prob_f(s) = Prob_{f′}(b(s)) for all s ∈ S. We use variables xi, yi, x′j, y′j, i = 1, …, n, j = 1, …, k, and the ordering x1 < y1 < … < xn < yn < x′1 < y′1 < … < x′k < y′k. The MTBDD representation P̂ for M̂ over z = (x1, y1, …, xn, yn, x′1, y′1, …, x′k, y′k) is obtained as follows. We define BDDs C0 over (x, x′) and A0, A1 over z by:

  C0 := λx λx′ [ ∧_{1≤i≤k} ( x′i ↔ B[φi](x) ) ],    A1 := λz [ (x = y) ∧ (x′ = y′) ∧ ¬C0(x, x′) ]

¹⁵ e′(s) can be viewed as the encoding of s w.r.t. the atomic propositions a′1, …, a′k.

and A0 := λz [ C0(x, x′) ∧ C0(y, y′) ]. We define P0 := P[z], i.e. P0 is the MTBDD for the function

  F : S × {0,1}^k × S × {0,1}^k → [0,1],   F(s, x′, t, y′) = P(s, t).

We set P1 := APPLY(P0, A0, ·) and P̂ := APPLY(P1, A1, +). Since the MTBDD Q̂ over (x, x′) for the function Ŝ → [0,1], ŝ ↦ Prob_{f′}(ŝ), can be computed (see below), we set Q := COMPOSE_k(Q̂, B[φ1], …, B[φk]). Then

  FQ(e(s)) = FQ̂(e(s), e′(s)) = FQ̂(b(s)) = Prob_{f′}(<s, e′(s)>) = Prob_f(s).

Hence, B[[f]⊒p] = BDD(Q, ⊒ p) is the BDD representation of Sat([f]⊒p), as desired.
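For illustration, the construction of M̂ can be sketched as follows in Python (ours; it assumes a dict-based chain as in the earlier sketches, with e′ given as a lookup table).

```python
# Illustrative construction of the extended chain M-hat: states are pairs
# (s, x') with x' in {0,1}^k; only pairs consistent with e'(s) carry the
# original probabilities, all other pairs become absorbing.
from itertools import product

def extend_chain(states, P, e_prime, k):
    hat_states = [(s, bits) for s in states for bits in product((0, 1), repeat=k)]
    hat_P = {}
    for (s, xb) in hat_states:
        if xb == e_prime[s]:
            for (t, yb) in hat_states:
                if yb == e_prime[t]:
                    hat_P[((s, xb), (t, yb))] = P.get((s, t), 0.0)
        else:
            hat_P[((s, xb), (s, xb))] = 1.0   # absorbing, never reached from b(s)
    return hat_states, hat_P

def embed(s, e_prime):
    """The embedding b(s) = <s, e'(s)>."""
    return (s, e_prime[s])
```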

It remains to describe the intermediate step of computing the operator

  Prob_{f′} : Ŝ → [0,1],   Prob_{f′}(ŝ) = Prob{σ ∈ Path(ŝ) : σ ⊨ f′}

in terms of MTBDDs. This operator yields the (exact) probability of the set of paths satisfying f′, assuming f′ is an LTL formula with atomic propositions a′1, …, a′k as above and P̂ is the MTBDD representation of the new labelled Markov chain M̂ = (Ŝ, P̂, L̂). (Note that simple methods for defining conjunctions of path formulas, such as taking the minimum probability of the conjuncts in the given state, or the maximum of 0 and the sum of the probabilities less 1, result in lower bounds only [29, 19].) Instead we propose to use a combination of the following (mostly standard) constructions, which can all be expressed in terms of BDDs in a straightforward if tedious way (see Appendix). We refer the reader to [32, 27, 2] for details of the constructions. First, we apply the method of Vardi & Wolper [32] to obtain a non-deterministic Büchi automaton A_B over the alphabet 2^{AP′}, where AP′ = AP ∪ {a′1, …, a′k}, which accepts exactly those infinite words over 2^{AP′} that fulfill f′. This can be done in time exponential in the length of the formula. Next, we use Safra's determinization procedure [27] on the Büchi automaton, which yields an equivalent deterministic Rabin automaton A_R in time O(2^{N log N}), where N is the size of the Büchi automaton. Thus, the construction of the Rabin automaton is double-exponential in the size of the formula, which matches the known lower bound for probabilistic model checking [11]. Finally, we use the ideas of de Alfaro [2] and construct a new labelled Markov chain M′ = (S′, P′, L′) that can be viewed as the product of M̂ and A_R, together with an embedding b′ : Ŝ → S′. Moreover, from the acceptance condition of A_R, we derive an LTL formula ϕ of the form ◇g, with g a propositional formula, such that

  Prob_{f′}(ŝ) = Prob{σ ∈ Path(b′(ŝ)) : σ ⊨ ϕ}

for each state ŝ ∈ Ŝ. Thus, the probabilities Prob_{f′}(ŝ), ŝ ∈ Ŝ, can be computed by means of the MTBDD-based operator PROB() applied to the MTBDD representation P′ of M′.



7 Concluding remarks and further directions

We have proposed a symbolic model checking procedure for the logics PCTL and PCTL* which can be easily implemented by means of the MTBDD or ADD packages [8, 3], thus forming the basis of an efficient tool for verifying probabilistic systems. It would be interesting to investigate how this method performs in concrete experiments and whether it can be combined with other efficient model checking techniques such as partial order reduction. Our algorithm can be extended to cater for the "bounded until" operator of [18], which is useful in the timing analysis of systems. An extension to the case of infinite state systems, perhaps by an appropriate combination with induction, as well as a generalization to allow non-determinism, would be desirable.

References

1. R. Alur, C. Courcoubetis, D. Dill: Verifying Automata Specifications of Probabilistic Real-Time Systems, Proc. REX, LNCS 600, pp 27-44, Springer, 1991.
2. L. de Alfaro: Formal Verification of Performance and Reliability of Real-Time Systems, Techn. Report, Stanford, 1996.
3. I. Bahar, E. Frohm, C. Gaona, G. Hachtel, E. Macii, A. Padro, F. Somenzi: Algebraic Decision Diagrams and their Applications, Proc. ICCAD, pp 188-191, 1993.
4. C. Baier, M. Kwiatkowska: Model Checking for a Probabilistic Branching Time Logic with Fairness, submitted for publication. Available as Technical Report CSR-96-12, University of Birmingham, 1996.
5. A. Bianco, L. de Alfaro: Model Checking of Probabilistic and Nondeterministic Systems, Proc. Foundations of Software Technology and Theoretical Computer Science, LNCS 1026, pp 499-513, 1995.
6. R. Bryant: Graph-Based Algorithms for Boolean Function Manipulation, IEEE Transactions on Computers, Vol. C-35, No. 8, pp 677-691, 1986.
7. J. Burch, E. Clarke, K. McMillan, D. Dill, L. Hwang: Symbolic Model Checking: 10^20 States and Beyond, Information and Computation, Vol. 98 (2), pp 142-170, 1992.
8. E. Clarke, M. Fujita, X. Zhao: Multi-Terminal Binary Decision Diagrams and Hybrid Decision Diagrams, in: Representations of Discrete Functions, T. Sasao and M. Fujita, eds, Kluwer Academic Publishers, pp 93-108, 1996.
9. E. Clarke, O. Grumberg, D. Long: Verification Tools for Finite-State Concurrent Programs, Proc. REX, LNCS 803, pp 124-175, 1993.
10. C. Courcoubetis, M. Yannakakis: Verifying Temporal Properties of Finite-State Probabilistic Programs, Proc. FOCS'88, pp 338-345, 1988.
11. C. Courcoubetis, M. Yannakakis: The Complexity of Probabilistic Verification, J. ACM, 42 (4), pp 857-907, 1995.
12. R. Enders, T. Filkorn, D. Taubner: Generating BDDs for Symbolic Model Checking in CCS, Distributed Computing, Vol. 6, 1993.
13. Y. Feldmann: A Decidable Propositional Dynamic Logic, Proc. 15th ACM Symp. on Theory of Computing, pp 298-309, 1983.
14. P. Halmos: Measure Theory, Springer-Verlag, 1950.
15. S. Hart, M. Sharir: Probabilistic Temporal Logic for Finite and Bounded Models, Proc. 16th ACM Symposium on Theory of Computing, pp 1-13, 1984.
16. S. Hart, M. Sharir, A. Pnueli: Termination of Probabilistic Concurrent Programs, ACM Transactions on Programming Languages and Systems, Vol. 5, pp 356-380, 1983.
17. H. Hansson: Time and Probability in Formal Design of Distributed Systems, Elsevier, 1994.
18. H. Hansson, B. Jonsson: A Logic for Reasoning about Time and Probability, Formal Aspects of Computing, Vol. 6, pp 512-535, 1994.
19. M. Huth, M.Z. Kwiatkowska: Quantitative Analysis and Model Checking, submitted for publication. Available as Technical Report CIS-96-15, Kansas State University, 1996.
20. B. Jonsson, K.G. Larsen: Specification and Refinement of Probabilistic Processes, Proc. LICS'91, pp 266-277, 1991.
21. D. Kozen: A Probabilistic PDL, Journal of Computer and System Sciences, Vol. 30(2), pp 162-178, 1985.
22. K. Larsen, A. Skou: Bisimulation through Probabilistic Testing, Information and Computation, Vol. 94, pp 1-28, 1991.
23. K. Larsen, A. Skou: Compositional Verification of Probabilistic Processes, Proc. CONCUR, LNCS 630, pp 456-471, Springer, 1992.
24. K. McMillan: Symbolic Model Checking: An Approach to the State Explosion Problem, Kluwer Academic Publishers, 1993.
25. A. Pnueli, L. Zuck: Verification of Multiprocess Probabilistic Protocols, Distributed Computing, Vol. 1, No. 1, pp 53-72, 1986.
26. A. Pnueli, L. Zuck: Probabilistic Verification, Information and Computation, Vol. 103, pp 1-29, 1993.
27. S. Safra: On the Complexity of ω-Automata, Proc. FOCS'88, pp 319-327, 1988.
28. R. Segala, N. Lynch: Probabilistic Simulations for Probabilistic Processes, Proc. CONCUR, LNCS 836, pp 492-493, 1994.
29. K. Seidel, C. Morgan, A. McIver and J.W. Sanders: Probabilistic Predicate Transformers, Technical Report PRG-TR4-95, Oxford University Computing Laboratory, 1995.
30. W. Thomas: Automata on Infinite Objects, Handbook of Theoretical Computer Science Vol. B, pp 135-191, North-Holland, 1990.
31. M. Vardi: Automatic Verification of Probabilistic Concurrent Finite-State Programs, Proc. FOCS'85, pp 327-338, 1985.
32. M. Vardi, P. Wolper: An Automata-Theoretic Approach to Automatic Program Verification, Proc. LICS'86, pp 332-344, 1986.


Appendix

8 Model Checking for LTL

We fix a set AP of atomic propositions. The syntax of LTL (linear time logic) formulas is given by the grammar

  f ::= tt | a | ¬f | f1 ∧ f2 | Xf | f1 U f2

where a ∈ AP. For a labelled Markov chain M = (S, P, L), the satisfaction relation ⊨ ⊆ Path × LTL is given in the usual way. For f ∈ LTL, we define

  Prob_f : S → [0,1],   Prob_f(s) = Prob{σ ∈ Path(s) : σ ⊨ f}.



Before we explain the details, we sketch our method for computing Prob_f(). By means of the method of Vardi & Wolper [32], we construct a non-deterministic Büchi automaton A_B over the alphabet 2^AP that accepts exactly those infinite words over 2^AP which fulfill f. Using the techniques of Safra [27], we build a deterministic Rabin automaton A_R that accepts exactly the same words. Finally, we use the ideas of de Alfaro [2] and construct a new labelled Markov chain M′ that can be viewed as the "product" of M and A_R. Moreover, from the acceptance condition of A_R, we derive an LTL formula ϕ of the form ◇g (with g a propositional formula) such that Prob_f(s) = Prob{σ ∈ Path(s) : σ ⊨ ϕ} for each state s ∈ S. Thus, Prob_f() can be computed with the methods of Section 5 (more precisely, using the operator PROB())¹⁶.





A

QQ

We briefly recall the definitions of Büchi and Rabin automata. An ω-automaton is a tuple A = (Q, Q0, Alp, δ, Acc) consisting of a finite set Q of states, a set Q0 ⊆ Q of initial states, a non-empty and finite alphabet Alp, a transition relation δ : Q × Alp → 2^Q and an acceptance condition Acc. A is called deterministic iff |Q0| = 1 and |δ(q, α)| = 1 for all states q and all α ∈ Alp. A run in A over a word α1 α2 … ∈ Alp^ω is an infinite sequence ρ = q0, q1, … such that q0 ∈ Q0 and qi ∈ δ(q_{i−1}, αi) for all i ≥ 1. A word is accepted by A iff there exists an accepting run in A over it. The infinity set inf(ρ) is the set of states q ∈ Q such that qi = q for infinitely many i. The acceptance condition Acc of a Büchi automaton is a set F ⊆ Q; a run ρ = q0, q1, … is accepted iff inf(ρ) ∩ F ≠ ∅. For a Rabin automaton, the acceptance condition Acc is a set {(Li, Ui) : i = 1, …, r} of pairs (Li, Ui) with Li, Ui ⊆ Q; a run ρ = q0, q1, … is accepted in a Rabin automaton iff there is some (L, U) ∈ Acc with inf(ρ) ∩ L ≠ ∅ and inf(ρ) ∩ U = ∅.

2



F Q

\F 6 ; L U LU 2

jQ j

2

A

2Q

fL U L U Q \L6 ; \U ; For the construction of the automaton AB and AR and the Markov chain M 0 we use individual variables

- xi , yi where x = (x1 ; : : : ; xn ) and y = (y1 ; : : : ; yn ) range over the states of M , - i , i where  = (1 ; : : : ; l ) and  = (1 ; : : : ; l ) range over the states of B , 0 - i , i0 where  = (0 ; : : : ; r ) and  = (00 ; : : : ; r0 ) range over the states of R , - ai where denotes the tuple ( a1 ; : : : ; an ). In the constructions of the BDDs/MTBDDs, we assume an ordering of these variables such that x1 : : : < xn < yn < 0 < 00 < : : : < n < n0 , an < 0 and 1 < : : : < l < a1 < : : : < an < 1 l .

g

A A

Notation 8.1 If b = (b1 ; : : : ; bn )

< y1 < < :::