Infinite Synchronizing Words for Probabilistic Automata


Laurent Doyen¹, Thierry Massart², and Mahsa Shirmohammadi²

¹ LSV, ENS Cachan & CNRS, France ([email protected])
² Université Libre de Bruxelles, Brussels, Belgium* ([email protected], [email protected])

Abstract. Probabilistic automata are finite-state automata where the transitions are chosen according to fixed probability distributions. We consider a semantics where on an input word the automaton produces a sequence of probability distributions over states. An infinite word is accepted if the produced sequence is synchronizing, i.e., the sequence of the highest probabilities in the distributions tends to 1. We show that this semantics generalizes the classical notion of synchronizing words for deterministic automata. We consider the emptiness problem, which asks whether some word is accepted by a given probabilistic automaton, and the universality problem, which asks whether all words are accepted. We provide reductions to establish the PSPACE-completeness of the two problems.

1 Introduction

Probabilistic automata (PA) are finite-state automata where the transitions are chosen according to fixed probability distributions. In the traditional semantics, a run of a probabilistic automaton over an input word is a path (i.e., a sequence of states and transitions), and the classical acceptance conditions over runs (such as in finite automata, Büchi automata, etc.) are used to define the probability to accept a word as the measure of its accepting runs [10, 2]. Over finite and infinite words, several undecidability results are known about probabilistic automata in the traditional semantics [9, 1]. Recently, an alternative semantics for probabilistic automata has been proposed, with applications in sensor networks, queuing theory, and dynamical systems [8, 7, 5]. In this new semantics, a run over an input word is the sequence of probability distributions produced by the automaton. For an example, consider the probabilistic automaton with alphabet Σ = {a, b} in Fig. 1 and the sequence of probability distributions produced by the input word a(aba)^ω. Previous works have considered qualitative conditions on this semantics. The space of probability distributions (which is a subset of [0, 1]^n) is partitioned

* This work has been partly supported by the MoVES project (P6/39), which is part of the IAP-Phase VI Interuniversity Attraction Poles Programme funded by the Belgian State, Belgian Science Policy.

into regions defined by linear predicates, and classical acceptance conditions are used to define accepting sequences of regions. It is known that reachability of a region is undecidable for linear predicates, and that it becomes decidable for a class of qualitative predicates which essentially constrain only the support of the probability distributions [7].

In this paper, we consider a quantitative semantics which has decidable properties, defined as follows [5]. A sequence X̄ = X0 X1 . . . of probability distributions over a set of states Q is synchronizing if, in the long run, the probability mass tends to accumulate in a single state. More precisely, we consider two definitions: the sequence X̄ is strongly synchronizing if lim inf_{i→∞} ‖Xi‖ = 1, where ‖Xi‖ = max_{q∈Q} Xi(q) is the highest probability in Xi; it is weakly synchronizing if lim sup_{i→∞} ‖Xi‖ = 1. Intuitively, strongly synchronizing means that the probabilistic automaton behaves in the long run like a deterministic system: eventually, at every step i (or at infinitely many steps for weakly synchronizing) there is a state q̂i which accumulates almost all the probability, and therefore the sequence q̂i q̂i+1 . . . is almost deterministic. Note that the state q̂i need not be the same at every step i. For instance, in the sequence in Fig. 1, the maximal probability in a state tends to 1, but it alternates between the three states q2, q3, and q4. We define the synchronizing language L(A) of a probabilistic automaton A as the set of words³ which induce a synchronizing sequence of probability distributions. In this paper, we consider the decision problems of emptiness and universality for synchronizing languages, i.e., deciding whether L(A) = ∅ and whether L(A) = D(Σ)^ω, respectively. Synchronizing words have applications in planning, control of discrete event systems, biocomputing, and robotics [3, 14].
For deterministic finite automata (DFA), a (finite) word w is synchronizing if reading w from any state of the automaton always leads to the same state. Note that DFA are a special case of probabilistic automata. A previous generalization of synchronizing words to probabilistic automata was proposed by Kfoury, but the associated decision problem is undecidable [6]. By contrast, the results of this paper show that the definition of strongly and weakly synchronizing words is a decidable generalization of synchronizing words for DFA. More precisely, we show that there exists a (finite) synchronizing word for a DFA A if and only if there exists an (infinite) synchronizing word for A viewed as a probabilistic automaton with uniform initial distribution over all states. We show that the emptiness and universality problems for synchronizing languages are PSPACE-complete, for both the strongly and the weakly synchronizing semantics. For emptiness, the PSPACE upper bound follows from a reduction to the emptiness problem of an exponential-size Büchi automaton. The construction relies on an extension of the classical subset construction. The PSPACE lower bound is obtained by a reduction from the universality problem for nondeterministic finite automata.

³ Words can be randomized, i.e., their letters can be probability distributions over the alphabet Σ. We denote by D(Σ) the set of all probability distributions over Σ.


Fig. 1. A probabilistic automaton over Σ = {a, b} with states q0, . . . , q4, together with the sequence of probability distributions produced by the input word a(aba)^ω: the highest probability in a state grows as 1/2, 3/4, 7/8, . . . , 1 − 1/2^n, . . . The word a(aba)^ω is strongly synchronizing.

For universality, the upper bound follows from a reduction to the emptiness problem of an exponential-size coBüchi automaton, and the lower bound is obtained by a reduction from the emptiness problem of traditional probabilistic coBüchi automata in the positive semantics [4, 13]. The PSPACE-completeness bounds improve the results of [5], where it is shown that the emptiness and universality problems for synchronizing languages are decidable⁴ using a characterization which yields doubly exponential algorithms.

2 Automata and Synchronizing Words

A probability distribution over a finite set S is a function d : S → [0, 1] such that ∑_{s∈S} d(s) = 1. The support of d is the set Supp(d) = {s ∈ S | d(s) > 0}. We denote by D(S) the set of all probability distributions over S. Given a finite alphabet Σ, we denote by Σ* the set of all finite words over Σ, and by Σ^ω the set of all infinite words over Σ. The length of a word w is denoted by |w| (where |w| = ∞ for infinite words). An infinite randomized word over Σ is a sequence w = d0 d1 . . . of probability distributions over Σ. We denote by D(Σ)^ω the set of all infinite randomized words over Σ. A word w ∈ Σ^ω can be viewed as a randomized word d0 d1 . . . in which the support of each probability distribution di is a singleton. We sometimes call w ∈ Σ^ω a pure word to emphasize this.

⁴ Probabilistic automata are equivalent to Markov decision processes with blind strategies.


Finite Automata. A nondeterministic finite automaton (NFA) A = ⟨L, ℓ0, Σ, δ, F⟩ consists of a finite set L of states, an initial state ℓ0 ∈ L, a finite alphabet Σ, a transition relation δ : L × Σ → 2^L, and an acceptance condition F which can be either finite, Büchi, or coBüchi (and then F ⊆ L), or generalized Büchi (and then F ⊆ 2^L). Finite acceptance conditions define languages of finite words; the other acceptance conditions define languages of infinite words. Automata with Büchi, coBüchi, and generalized Büchi conditions are called ω-automata. A run over a (finite or infinite) word w = σ0 σ1 . . . is a sequence ρ = r0 r1 . . . such that r0 = ℓ0 and ri+1 ∈ δ(ri, σi) for all 0 ≤ i < |w|. A finite run r0 . . . rk is accepting if rk ∈ F. An infinite run r0 r1 . . . is accepting for a Büchi condition if rj ∈ F for infinitely many j; for a coBüchi condition if rj ∉ F for only finitely many j; and for a generalized Büchi condition if for all s ∈ F we have rj ∈ s for infinitely many j. The language of a (finite- or ω-) automaton is the set Lf(A) (resp., Lω(A)) of finite (resp., infinite) words over which there exists an accepting run. The emptiness problem for (finite- or ω-) automata is to decide, given an automaton A, whether Lf(A) = ∅ (resp., Lω(A) = ∅), and the universality problem is to decide whether Lf(A) = Σ* (resp., Lω(A) = Σ^ω). For both finite and Büchi automata, the emptiness problem is NLOGSPACE-complete, and the universality problem is PSPACE-complete [12, 11].

A deterministic finite automaton (DFA) is a special case of NFA where the transition relation is such that δ(ℓ, σ) is a singleton for all ℓ ∈ L and σ ∈ Σ. It can thus be viewed as a function δ : L × Σ → L, and extended to a function δ : L × Σ* → L defined inductively as follows: δ(ℓ, ε) = ℓ, with ε the empty word, and δ(ℓ, σ · w) = δ(δ(ℓ, σ), w) for all w ∈ Σ*. A synchronizing word for a DFA is a word w ∈ Σ* such that δ(ℓ, w) = δ(ℓ′, w) for all ℓ, ℓ′ ∈ L, i.e.
such that from all states, a unique state is reached after reading w. Synchronizing words have applications in several areas from planning to robotics and systems biology, and they gave rise to the famous Černý conjecture [3, 14].

Probabilistic Automata. A probabilistic automaton (PA) A = ⟨Q, µ0, Σ, δ⟩ consists of a finite set Q of states, an initial probability distribution µ0 ∈ D(Q), a finite alphabet Σ, and a probabilistic transition function δ : Q × Σ → D(Q). In a state q ∈ Q, the probability to go to a state q′ ∈ Q after reading a letter σ ∈ Σ is δ(q, σ)(q′). Define Post(q, σ) = Supp(δ(q, σ)), and for a set s ⊆ Q and Σ′ ⊆ Σ, let Post(s, Σ′) = ⋃_{q∈s} ⋃_{σ∈Σ′} Post(q, σ). The outcome of an infinite randomized word w = d0 d1 . . . is the infinite sequence X0 X1 . . . of probability distributions Xi ∈ D(Q) such that X0 = µ0 is the initial distribution, and for all n > 0 and q ∈ Q,

    Xn(q) = ∑_{σ∈Σ} ∑_{q′∈Q} Xn−1(q′) · dn−1(σ) · δ(q′, σ)(q).

The norm of a probability distribution X over Q is ‖X‖ = max_{q∈Q} X(q). We say that w is a strongly synchronizing word if

    lim inf_{n→∞} ‖Xn‖ = 1,    (1)

and that it is a weakly synchronizing word if

    lim sup_{n→∞} ‖Xn‖ = 1.    (2)
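The outcome recurrence and the norm can be computed directly. The following is a minimal sketch in Python; the two-state automaton and the input letter are hypothetical examples chosen for illustration, not taken from the text.

```python
# A minimal sketch of the outcome recurrence
#   Xn(q) = sum over sigma, q' of Xn-1(q') * dn-1(sigma) * delta(q', sigma)(q)
# and of the norm ||X|| = max_q X(q). The automaton below is hypothetical.

def step(X, d, delta):
    """One step of the outcome recurrence."""
    Xn = {q: 0.0 for q in X}
    for qp, mass in X.items():
        for sigma, prob in d.items():
            for q, t in delta[(qp, sigma)].items():
                Xn[q] += mass * prob * t
    return Xn

def norm(X):
    return max(X.values())

# Hypothetical PA over {a, b}: on 'a', both states move to state 1.
delta = {
    (0, "a"): {1: 1.0}, (0, "b"): {0: 0.5, 1: 0.5},
    (1, "a"): {1: 1.0}, (1, "b"): {1: 1.0},
}
X = {0: 0.5, 1: 0.5}            # initial distribution mu_0
X = step(X, {"a": 1.0}, delta)  # a pure letter is a Dirac distribution
assert norm(X) == 1.0           # the whole mass sits in state 1
```

Iterating `step` along a randomized word yields the full outcome sequence X0 X1 . . ., on which conditions (1) and (2) can be tested empirically.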

Intuitively, a word is synchronizing if in the outcome the probability mass tends to concentrate in a single state, either at every step from some point on (for strongly synchronizing), or at infinitely many steps (for weakly synchronizing). Note that, equivalently, the randomized word w is strongly synchronizing if the limit lim_{n→∞} ‖Xn‖ exists and equals 1. We denote by LS(A) (resp., LW(A)) the set of strongly (resp., weakly) synchronizing words of A. In this paper, we are interested in the emptiness problem for strongly (resp., weakly) synchronizing languages, which is to decide, given a probabilistic automaton A, whether LS(A) = ∅ (resp., LW(A) = ∅), and in the universality problem, which is to decide whether LS(A) = D(Σ)^ω (resp., LW(A) = D(Σ)^ω).

Synchronizing sequences of probability distributions were first introduced for Markov decision processes (MDP) [5]. A probabilistic automaton can be viewed as an MDP where a word corresponds to a blind strategy (in the terminology of [5]), which chooses letters (or actions) independently of the sequence of states visited by the automaton, depending only on the number of rounds played so far. It is known that the problem of deciding the existence of a blind synchronizing strategy for MDPs is decidable⁵ [5, Theorem 5]. In Section 3 we provide a solution in PSPACE to this problem, as well as a matching PSPACE lower bound.

Remark 1. From the results of [5], it follows that if there exists a (strongly or weakly) synchronizing word, then there exists a pure one.

A deterministic finite automaton is also a special case of probabilistic automaton where the probabilistic transition function is such that Post(q, σ) is a singleton for all q ∈ Q and σ ∈ Σ (and disregarding the initial distribution µ0). We show that the definition of strongly (and weakly) synchronizing words generalizes to probabilistic automata the notion of synchronizing words for DFA, in the following sense.

Theorem 1.
Given a deterministic finite automaton A, the following statements are equivalent:
1. There exists a (finite) synchronizing word for A.
2. There exists an (infinite) strongly (or weakly) synchronizing word for A (viewed as a probabilistic automaton) with uniform initial distribution.

Proof. First, if w ∈ Σ* is a synchronizing word for the DFA A, there is a state q which is reached from all states of A by reading w. This implies that X|w|(q) = 1 in the PA A (no matter the initial distribution), and since the transition function of A is deterministic, any infinite word with prefix w is both strongly (and thus also weakly) synchronizing for A.

⁵ The results in [5] suggest a doubly exponential algorithm for solving this problem.


Second, assume that w is a strongly (or weakly) synchronizing word for the PA A with initial distribution µ0 such that µ0(q) = 1/m for all q ∈ Q, where m = |Q| is the number of states of A. By Remark 1, we may assume that w = σ0 σ1 · · · ∈ Σ^ω is pure. Let X0 X1 . . . be the outcome of w in A. Since the transitions in A are deterministic, all probabilities Xi(q) for i ≥ 0 and q ∈ Q are multiples of 1/m, i.e., Xi(q) = c/m for some 0 ≤ c ≤ m. Therefore, the fact that lim inf_{n→∞} ‖Xn‖ = 1 (or lim sup_{n→∞} ‖Xn‖ = 1) implies that Xi(q) = 1 for some i ≥ 0 and q ∈ Q. Then the finite word σ0 σ1 . . . σi−1 is synchronizing for A. ⊓⊔

Note that the problem of deciding whether there exists a synchronizing word for a given DFA can be solved in polynomial time, while the emptiness problem for synchronizing languages (for probabilistic automata) is PSPACE-complete (see Theorem 2).

End-Components. A set C ⊆ Q is closed if for every state q ∈ C, there exists σ ∈ Σ such that Post(q, σ) ⊆ C. For each q ∈ C, let D_C(q) = {σ ∈ Σ | Post(q, σ) ⊆ C}. The graph induced by C is A ↾ C = (C, E) where E is the set of edges (q, q′) ∈ C × C such that δ(q, σ)(q′) > 0 for some σ ∈ D_C(q). An end-component is a closed set C such that the graph A ↾ C is strongly connected.
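The polynomial-time check for DFA mentioned above is classically done by a backward search on pairs of states: a DFA has a synchronizing word iff every pair of states can be mapped to a single state by some word. The following is a sketch of that pairwise test; the two example DFAs are hypothetical.

```python
from collections import deque
from itertools import combinations

def has_synchronizing_word(states, alphabet, delta):
    """Backward BFS on unordered state pairs: a pair is mergeable if some
    word maps both of its states to one state; the DFA is synchronizing
    iff every pair is mergeable."""
    inv = {(l, a): set() for l in states for a in alphabet}
    for l in states:
        for a in alphabet:
            inv[(delta[(l, a)], a)].add(l)   # inverse transitions
    mergeable = set()
    queue = deque((l, l) for l in states)    # diagonal pairs merge trivially
    while queue:
        pair = queue.popleft()
        if pair in mergeable:
            continue
        mergeable.add(pair)
        q1, q2 = pair
        for a in alphabet:                   # predecessors of the pair
            for l1 in inv[(q1, a)]:
                for l2 in inv[(q2, a)]:
                    pred = (min(l1, l2), max(l1, l2))
                    if pred not in mergeable:
                        queue.append(pred)
    return all(p in mergeable for p in combinations(sorted(states), 2))

# Hypothetical DFA: 'a' sends every state to 0, so it is synchronizing.
states, alphabet = [0, 1, 2], ["a", "b"]
delta = {(l, "a"): 0 for l in states}
delta.update({(l, "b"): (l + 1) % 3 for l in states})
assert has_synchronizing_word(states, alphabet, delta)

# Hypothetical DFA with two sinks: the pair {0, 1} can never be merged.
delta2 = {(0, "a"): 0, (1, "a"): 1, (0, "b"): 0, (1, "b"): 1}
assert not has_synchronizing_word([0, 1], ["a", "b"], delta2)
```

The search visits O(|Q|²) pairs, each a bounded number of times, so it runs in time polynomial in the number of states, matching the polynomial bound quoted above.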

3 The Emptiness Problem is PSPACE-complete

In this section, we present constructions to reduce the emptiness problem for synchronizing languages of probabilistic automata to the emptiness problem for ω-automata, with a Büchi condition for the strongly synchronizing language, and with a generalized Büchi condition for the weakly synchronizing language. The constructions are exponential and therefore provide a PSPACE upper bound for the problems. We also prove a matching lower bound.

Lemma 1. The emptiness problem for the strongly synchronizing language of probabilistic automata is decidable in PSPACE.

Proof. Given a PA A = ⟨Q, µ0, Σ, δ⟩, we construct a Büchi automaton B = ⟨L, ℓ0, Σ, δB, FB⟩ such that LS(A) = ∅ iff L(B) = ∅. The automaton B is exponential in the size of A, and thus the PSPACE bound follows from the NLOGSPACE-completeness of the emptiness problem for Büchi automata.

The construction of B relies on the following intuition. A strongly synchronizing word induces a sequence of probability distributions Xi in which the probability mass tends to accumulate in a single state q̂i at step i. It can be shown that for all sufficiently large i, there exists a deterministic transition from q̂i to q̂i+1, i.e., there exists σi ∈ Σ such that Post(q̂i, σi) = {q̂i+1}. The Büchi automaton B guesses the witness sequence q̂i q̂i+1 . . . and checks that the probability mass is 'injected' into this sequence. The automaton B keeps track of the support si = Supp(Xi) of the outcome sequence on the input word, and at some point guesses that the witness sequence q̂i q̂i+1 . . . starts. Then, using an obligation set oi ⊆ si, it checks that every state in si eventually 'injects' some

probability mass into the witness sequence. When the obligation set becomes empty, it is recharged with the current support si.

The construction of B = ⟨L, ℓ0, Σ, δB, FB⟩ is as follows:
– L = 2^Q ∪ (2^Q × 2^Q × Q) is the set of states. A state s ⊆ Q is the support of the current probability distribution. A state (s, o, q̂) ∈ 2^Q × 2^Q × Q consists of the support s, the obligation set o ⊆ s, and a state q̂ ∈ s of the witness sequence.
– ℓ0 = Supp(µ0) is the initial state.
– Σ is the alphabet of A.
– δB : L × Σ → 2^L is defined as follows. For all s ∈ 2^Q and σ ∈ Σ, let s′ = Post(s, σ), and define δB(s, σ) = {s′} ∪ {(s′, s′, q̂) | q̂ ∈ s′}. A transition from the state s to a state (s′, s′, q̂) guesses q̂ as the initial state of the witness sequence. For all (s, o, q̂) ∈ 2^Q × 2^Q × Q and σ ∈ Σ, let s′ = Post(s, σ). If Post(q̂, σ) is not a singleton, then δB((s, o, q̂), σ) = ∅; otherwise let {q̂′} = Post(q̂, σ), and:
  • If o ≠ ∅, then δB((s, o, q̂), σ) = {(s′, o′ \ {q̂′}, q̂′) | ∀q ∈ o : o′ ∩ Post(q, σ) ≠ ∅}. These transitions deterministically choose the next state q̂′ of the witness sequence and, in addition, nondeterministically take care of paths from the obligation set o to the witness sequence. For this sake, the constraint o′ ∩ Post(q, σ) ≠ ∅ is required for all q ∈ o.
  • If o = ∅, then δB((s, o, q̂), σ) = {(s′, s′, q̂′)}. This transition recharges the obligation set with the current support s′ when it gets empty.
– FB = {(s, o, q̂) ∈ 2^Q × 2^Q × Q | o = ∅} is the set of accepting states.

We show that L(B) ≠ ∅ iff LS(A) ≠ ∅. First, assume that L(B) ≠ ∅. Then there exists an ultimately periodic run ρ = r0 r1 . . . rn−1 (rn . . . rm−1)^ω in B over some word w such that rn ∈ FB. This means that there is a cycle from rn to itself, and this cycle is reachable from the initial state. Let rm = rn and let κ = m − n be the length of this cycle. Let X0 X1 . . . be the outcome of the word w in A.
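The transition function δB can be computed on-the-fly from Post. The sketch below follows the construction, with one simplification: for the obligation set it always takes the maximal nondeterministic choice o′ = Post(o, σ), which trivially satisfies the constraint o′ ∩ Post(q, σ) ≠ ∅, instead of enumerating all valid o′. The tiny automaton in the assertions is hypothetical.

```python
# A sketch of delta_B. Subset-phase states are frozensets; witness-phase
# states are triples (s, o, qh). The obligation-set choice is simplified
# to the maximal one (an assumption, not a faithful enumeration).

def post(delta, s, sigma):
    """Post(s, sigma): union of the supports of successor distributions."""
    out = set()
    for q in s:
        out |= set(delta[(q, sigma)].keys())
    return frozenset(out)

def delta_B(delta, state, sigma):
    if isinstance(state, frozenset):          # subset phase
        sp = post(delta, state, sigma)
        # stay in the subset phase, or guess a witness state
        return {sp} | {(sp, sp, qh) for qh in sp}
    s, o, qh = state                          # witness phase
    succ = post(delta, {qh}, sigma)
    if len(succ) != 1:                        # witness step must be deterministic
        return set()
    (qhp,) = succ
    sp = post(delta, s, sigma)
    if o:                                     # maximal obligation choice
        return {(sp, post(delta, o, sigma) - {qhp}, qhp)}
    return {(sp, sp, qhp)}                    # recharge obligations

def accepting(state):
    return not isinstance(state, frozenset) and not state[1]

# Hypothetical PA: both states move deterministically to state 1 on 'a'.
delta = {(0, "a"): {1: 1.0}, (1, "a"): {1: 1.0}}
succs = delta_B(delta, frozenset({0, 1}), "a")
assert (frozenset({1}), frozenset({1}), 1) in succs
nxt = delta_B(delta, (frozenset({1}), frozenset({1}), 1), "a")
assert any(accepting(t) for t in nxt)         # obligation set emptied
```

Exploring this transition relation lazily, as a nondeterministic Büchi automaton, avoids materializing the exponential state space up front.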
We prove that w is strongly synchronizing for A, meaning that lim inf_{i→∞} ‖Xi‖ = 1. From the construction of B, we know that each ri (for all n ≤ i ≤ m) is a state of the form (si, oi, q̂i) where si = Supp(Xi) is the support of the outcome sequence at step i (and also at steps i + (j · κ) for all j ∈ N). Formally, si = Supp(Xi), which means Xi(q) > 0 for all states q ∈ si. By construction, since q̂n ∈ sn, the state q̂n is reached at step n with some strictly positive probability, say p. The sequence q̂n q̂n+1 . . . q̂m−1 q̂m forms a simple cycle in A, because Post(q̂i, σi) is a singleton containing q̂i+1 for all n ≤ i < m. Thus Xi+1(q̂i+1) ≥ Xi(q̂i) ≥ p, and we claim that lim inf_{i→∞} Xi(q̂i) = 1.

Since rn ∈ FB, this state is of the form (sn, ∅, q̂n) where the obligation set is empty. Consequently, rn+1 = (sn+1, sn+1, q̂n+1), where the obligation set on+1 = sn+1 is recharged. Let o′i = oi ∪ {q̂i} for all n ≤ i ≤ m. By a backward induction, we show that there are paths passing through each state of oi and ending in q̂m. The base of the induction trivially holds because o′m = {q̂m} and Post(q, σm−1) ∩ o′m ≠ ∅ for all q ∈ om−1. The induction step holds for all n ≤ i ≤ m, again due to the fact that Post(q, σi−1) ∩ o′i ≠ ∅ for all q ∈ oi−1. Therefore, the state

q̂m is reached from all states q ∈ sn+1 within m − n steps, with some positive probability which is at least ν^κ, where κ = m − n and ν is the smallest positive probability of a transition in A; formally, ν = min{δ(q, σ)(q′) | q, q′ ∈ Q, σ ∈ Σ, δ(q, σ)(q′) > 0}. Recall that Xn(q̂n) = p and ∑_{q∈Q, q≠q̂n} Xn(q) ≤ 1 − p. The above arguments show that

    ∑_{q∈Q, q≠q̂m} Xm(q) ≤ (1 − p) · (1 − ν^κ),

where 1 − p is an upper bound on ∑_{q∈Q, q≠q̂n} Xn(q), and 1 − ν^κ is an upper bound on the fraction of the probability mass which does not move from the other states into q̂m within κ = m − n steps. Similarly,

    ∑_{q∈Q, q≠q̂m+κ} Xm+κ(q) ≤ (1 − p) · (1 − ν^κ) · (1 − ν^κ),

where (1 − p) · (1 − ν^κ) is an upper bound on ∑_{q∈Q, q≠q̂m} Xm(q), and 1 − ν^κ is an upper bound on the fraction of the probability mass which does not move from the other states into q̂m+κ within κ steps. So, after j repetitions, we have

    ∑_{q∈Q, q≠q̂m+j·κ} Xm+j·κ(q) ≤ (1 − p) · (1 − ν^κ)^j.

Since 0 ≤ 1 − ν^κ < 1, the right-hand side tends to 0 as j → ∞, and the sandwich lemma gives

    lim_{j→∞} ∑_{q∈Q, q≠q̂m+j·κ} Xm+j·κ(q) = 0.

The above arguments immediately give lim_{i→∞} ‖Xi‖ = 1.

Second, assume that w ∈ LS(A). By Remark 1, we may assume that w = σ0 σ1 . . . is pure. By definition, since w is strongly synchronizing,

    ∀ε > 0 · ∃n0 ∈ N · ∀n ≥ n0 · ∃q̂ such that Xn(q̂) > 1 − ε,

where X0 X1 . . . is the outcome of w in A. Assuming ε < 1/2, the state q̂ is unique at each step n ≥ n0; we therefore denote this state by q̂n. Moreover, it is easy to see that the state q̂n is independent of ε. We claim that Post(q̂n, σn) = {q̂n+1} is a singleton, for all n ≥ n0. Recall that ν is the smallest positive transition probability in A. Let ε < ν/(1 + ν). Towards a contradiction, assume that Post(q̂n, σn) ≠ {q̂n+1}. Then there exists another state q ≠ q̂n+1 such that {q, q̂n+1} ⊆ Post(q̂n, σn), and we have

    Xn+1(q) ≥ δ(q̂n, σn)(q) · Xn(q̂n) ≥ ν · (1 − ε).

Hence, the probability Xn+1(q̂n+1) to be in q̂n+1 at step n + 1 is at most 1 − ν · (1 − ε). Consequently, 1 − ε ≤ ‖Xn+1‖ ≤ 1 − ν · (1 − ε), which gives ε ≥ ν/(1 + ν), a contradiction. Therefore Post(q̂n, σn) = {q̂n+1} for all n ≥ n0.

To complete the proof, we construct an accepting run r of B over w. For the first n0 transitions, the run r visits si, where si = Supp(Xi), for all


i < n0. At step n0, it visits (sn0, sn0, q̂n0), where q̂n0 is the unique state in which the mass of probability ‖Xn0‖ is gathered at step n0. For a given state q and word v ∈ Σ*, define Post(q, v) as the natural extension of Post(q, σ) to words: Post(q, v) = Post(Post(q, v′), a) where v = v′ · a. We claim that for all n ≥ n0 and all states q ∈ sn, there exists n(q) ∈ N such that q̂n+n(q) ∈ Post(q, σn σn+1 . . . σn+n(q)−1). Towards a contradiction, assume that for some n ≥ n0, there exists q ∈ sn which does not satisfy this claim, meaning that ∀k ∈ N : q̂n+k ∉ Post(q, σn σn+1 . . . σn+k−1). Since q ∈ sn, the probability Xn(q) > 0 of being there is positive. As a consequence of this assumption, we conclude that ‖Xn+k‖, which equals Xn+k(q̂n+k), is at most 1 − Xn(q) for all k ∈ N. By taking ε < Xn(q), a contradiction arises with the fact that w is strongly synchronizing. Therefore, for all n ≥ n0 and all states q ∈ sn, there exists n(q) ∈ N such that q̂n+n(q) ∈ Post(q, σn σn+1 . . . σn+n(q)−1).

Let Pn(q) = q1 q2 . . . qn(q), with q1 = q and qn(q) = q̂n+n(q), be the path going from q ∈ sn to q̂n+n(q), and let Pn(q, k) be the state qk, i.e., the k-th state visited along the path Pn(q). Let Maxn = max_{q∈sn} n(q) be the length of the longest such path starting in a state of sn. After the state (sn0, on0, q̂n0), the run r consecutively visits the Maxn0 states (sn0+k, on0+k, q̂n0+k), where sn0+k = Supp(Xn0+k) and on0+k = (⋃_{q∈sn0 : n(q)≥k} {Pn0(q, k)}) \ {q̂n0+k} for all 0 < k ≤ Maxn0. As a result, the obligation set on0+Maxn0 = ∅ and rn0+Maxn0 ∈ FB. By the next transition, the automaton B deterministically recharges the obligation set with sn0+Maxn0+1. Repeating this construction, the run r visits the accepting states infinitely often, and r is accepting. ⊓⊔

Lemma 2. The emptiness problem for the weakly synchronizing language of probabilistic automata is decidable in PSPACE.
Proof. The proof is by a reduction to the emptiness problem of an exponential-size ω-automaton with a generalized Büchi condition. The correctness and complexity arguments for this construction are analogous to the proof of Lemma 1. Given a PA A = ⟨Q, µ0, Σ, δ⟩, we construct a generalized Büchi automaton G = ⟨L, ℓ0, Σ, δG, FG⟩ such that LW(A) = ∅ iff L(G) = ∅. The construction follows the same lines as in Lemma 1. The main difference is that a weakly synchronizing word ensures that there exists a witness sequence ĉi ĉi+1 . . . of sets ĉi ⊆ Supp(Xi) such that infinitely many of them are singletons (in the case of strongly synchronizing words, all of them would be singletons). The construction is as follows:
– L = 2^Q ∪ (2^Q × 2^Q × 2^Q).
– ℓ0 = Supp(µ0) is the initial state.
– Σ is the alphabet of A.
– δG : L × Σ → 2^L is the transition function defined as follows. For all s ∈ 2^Q and σ ∈ Σ, let s′ = Post(s, σ), and define δG(s, σ) = {s′} ∪ {(s′, s′, {q}) | q ∈ s′}. For all (s, o, ĉ) ∈ 2^Q × 2^Q × 2^Q and σ ∈ Σ, let s′ = Post(s, σ), ĉ′ = Post(ĉ, σ), and:

Fig. 2. Sketch of the reduction for PSPACE-hardness of the emptiness problem: the PA A embeds the NFA N, with the fresh absorbing states qsync and qend reached on the new letter #.

  • If o ≠ ∅, then δG((s, o, ĉ), σ) = {(s′, o′ \ ĉ′, ĉ′) | ∀q ∈ o : o′ ∩ Post(q, σ) ≠ ∅}.
  • If o = ∅, then δG((s, o, ĉ), σ) = {(s′, s′, ĉ′)}.
– FG = {F1, F2}, where F1 = {(s, o, ĉ) ∈ 2^Q × 2^Q × 2^Q | o = ∅} and F2 = {(s, o, ĉ) ∈ 2^Q × 2^Q × 2^Q | |ĉ| = 1}.

The generalized Büchi condition ensures that every visited state eventually injects some probability into the witness sequence of sets ĉ, and that infinitely many sets in the witness sequence are singletons. The details are left to the reader. ⊓⊔

Lemma 3. The emptiness problem for the strongly synchronizing language and for the weakly synchronizing language of probabilistic automata is PSPACE-hard.

Proof. We present a proof for strongly synchronizing words using a reduction from the universality problem for nondeterministic finite automata. The proof and the reduction for weakly synchronizing words are analogous. Given an NFA N, we construct a PA A such that L(N) = Σ* iff LS(A) = ∅. The reduction is illustrated in Fig. 2. The nondeterministic transitions of N become probabilistic in A, with uniform probability. The initial probability distribution assigns probability 1/2 to the absorbing state qsync. Therefore, a synchronizing word needs to inject all the remaining probability into qsync. This can be done with the special symbol # from the non-accepting states of the NFA. From the accepting states, the # symbol leads to a sink state qend from which there is no way to synchronize the automaton. Let N = ⟨L, ℓ0, Σ, δN, FN⟩ be an NFA; we construct the PA A = ⟨Q, µ0, Σ′, δ⟩ as follows:

– Q = L ∪ {qsync, qend}.
– µ0(ℓ0) = µ0(qsync) = 1/2, and µ0(q) = 0 for all q ∈ Q \ {ℓ0, qsync}.
– Σ′ = Σ ∪ {#}.
– δ : Q × Σ′ → D(Q) is the probabilistic transition function defined as follows. For all σ ∈ Σ′, δ(qsync, σ)(qsync) = 1 and δ(qend, σ)(qend) = 1. For all q ∈ FN, δ(q, #)(qend) = 1, and for all q ∈ L \ FN, δ(q, #)(qsync) = 1. Finally, for all q, q′ ∈ L and σ ∈ Σ, δ(q, σ)(q′) = 1/|δN(q, σ)| if q′ ∈ δN(q, σ), and δ(q, σ)(q′) = 0 otherwise.
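The reduction can be sketched programmatically. The helper below builds A from N as described, assuming the NFA's transition relation is total (so that |δN(q, σ)| > 0); the example NFA is hypothetical.

```python
# A sketch of the reduction NFA N -> PA A with the fresh absorbing
# states q_sync and q_end and the extra letter '#'. The example NFA
# below (accepting words ending in 'a') is hypothetical.

def nfa_to_pa(states, alphabet, delta_N, final, l0):
    Q = list(states) + ["q_sync", "q_end"]
    mu0 = {q: 0.0 for q in Q}
    mu0[l0] = mu0["q_sync"] = 0.5          # half the mass starts in q_sync
    sigma_p = list(alphabet) + ["#"]
    delta = {}
    for sig in sigma_p:                    # both fresh states are absorbing
        delta[("q_sync", sig)] = {"q_sync": 1.0}
        delta[("q_end", sig)] = {"q_end": 1.0}
    for q in states:
        # '#': accepting states leak to q_end, the others synchronize
        delta[(q, "#")] = {"q_end": 1.0} if q in final else {"q_sync": 1.0}
        for sig in alphabet:               # uniform split over NFA successors
            succ = delta_N[(q, sig)]       # assumed non-empty (total relation)
            delta[(q, sig)] = {qp: 1.0 / len(succ) for qp in succ}
    return Q, mu0, sigma_p, delta

delta_N = {(0, "a"): {1}, (0, "b"): {0}, (1, "a"): {1}, (1, "b"): {0}}
Q, mu0, S, delta = nfa_to_pa([0, 1], ["a", "b"], delta_N, {1}, 0)
assert delta[(1, "#")] == {"q_end": 1.0}   # accepting state leaks away
assert delta[(0, "#")] == {"q_sync": 1.0}  # rejecting state synchronizes
```

For this example NFA, a word w is rejected exactly when all runs end in state 0, in which case w · (#)^ω is strongly synchronizing for the constructed PA, as in the proof.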

We show that L(N) ≠ Σ* iff LS(A) ≠ ∅. First, assume that L(N) ≠ Σ*. Let w ∈ Σ* be such that w ∉ L(N). Then all runs of N over w end in a non-accepting state, and in A the state qsync is reached with probability 1 on the word w · #. Therefore, w · (#)^ω is a strongly synchronizing word for A and LS(A) ≠ ∅.

Second, assume that LS(A) ≠ ∅. Let w′ ∈ LS(A) be a strongly synchronizing word for A, and let X0 X1 . . . be the outcome of w′ in A. Since µ0(qsync) = 1/2 and qsync is a sink state, we have Xk(qsync) ≥ 1/2 for all k ≥ 0, and since w′ is strongly synchronizing, it follows that lim_{k→∞} Xk(qsync) = 1. Then w′ has to contain #, as this is the only letter on a transition from a state in L to qsync. Let w ∈ Σ* be the prefix of w′ before the first occurrence of #. We claim that w is not accepted by N. By contradiction, if there is an accepting run of N over w, then positive probability is injected into qend by the finite word w · # and stays there forever, in contradiction with the fact that lim_{k→∞} Xk(qsync) = 1. Therefore w ∉ L(N) and L(N) ≠ Σ*. ⊓⊔

The following result follows from Lemma 1, Lemma 2, and Lemma 3.

Theorem 2. The emptiness problem for the strongly synchronizing language and for the weakly synchronizing language of probabilistic automata is PSPACE-complete.

4 The Universality Problem is PSPACE-complete

In this section, we present necessary and sufficient conditions for probabilistic automata to have a universal strongly (resp., weakly) synchronizing language, and we show that these conditions can be checked in PSPACE. Unlike for the emptiness problem, it is not sufficient to consider only pure words for universality of strongly (resp., weakly) synchronizing languages. For instance, all infinite pure words for the probabilistic automaton in Fig. 3 are strongly (and weakly) synchronizing, but the uniformly randomized word over {a, b} is not strongly (nor weakly) synchronizing. Formally, we say that an infinite randomized word is the uniformly randomized word over Σ, denoted by wu, if di(σ) = 1/|Σ| for all σ ∈ Σ and i ∈ N.

Lemma 4. There is a probabilistic automaton for which all pure words are strongly synchronizing, but not all randomized words are.

Fig. 3. Randomization is necessary.    Fig. 4. Randomization is not sufficient. (Both figures show small probabilistic automata over {a, b} with states q0, q1, q2.)
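The phenomenon of Fig. 3 can be illustrated by simulation. The automaton below is a reconstruction in the spirit of Fig. 3, so its exact transitions are assumptions: from q0, letter a moves deterministically to q1 and letter b to q2, and both q1 and q2 are absorbing. Every pure word synchronizes after one step, while the uniformly randomized word splits the mass forever.

```python
# A sketch illustrating Lemma 4 on an assumed Fig.-3-style automaton:
# pure words synchronize, the uniformly randomized word does not.

def step(X, d, delta):
    """One step of the outcome recurrence, as defined in Section 2."""
    Xn = {q: 0.0 for q in X}
    for qp, mass in X.items():
        for sigma, prob in d.items():
            for q, t in delta[(qp, sigma)].items():
                Xn[q] += mass * prob * t
    return Xn

delta = {
    ("q0", "a"): {"q1": 1.0}, ("q0", "b"): {"q2": 1.0},
    ("q1", "a"): {"q1": 1.0}, ("q1", "b"): {"q1": 1.0},
    ("q2", "a"): {"q2": 1.0}, ("q2", "b"): {"q2": 1.0},
}
X0 = {"q0": 1.0, "q1": 0.0, "q2": 0.0}

pure = step(X0, {"a": 1.0}, delta)           # pure letter 'a'
assert max(pure.values()) == 1.0             # synchronized in q1

uniform = step(X0, {"a": 0.5, "b": 0.5}, delta)
for _ in range(10):                          # the split never heals
    uniform = step(uniform, {"a": 0.5, "b": 0.5}, delta)
assert max(uniform.values()) == 0.5          # norm stuck at 1/2
```

Here {q1} and {q2} are two distinct terminal end-components, which is exactly the obstruction to universality discussed next.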

The reason is that there are two sets ({q1} and {q2}) out of which the probability cannot flow. For a given PA A = ⟨Q, µ0, Σ, δ⟩, a maximal end-component U ⊆ Q is terminal if Post(U, Σ) ⊆ U. It is easy to see that a terminal end-component keeps the probability inside. To have a universal strongly/weakly synchronizing language, the PA A needs to have a unique terminal end-component: otherwise, the uniformly randomized word wu would reach all terminal end-components and would not be strongly synchronizing. Though having a unique terminal end-component is necessary, it is not sufficient. For example, the infinite word (ab)^ω ∉ LS(A) for the PA A in Fig. 5, which contains only one terminal end-component. The probabilistic automaton needs to ensure that, for all randomized words, all of the probability mass tends to accumulate in the unique terminal end-component. We express this property of a terminal end-component as being absorbing. We say that a terminal end-component U is absorbing if lim_{n→∞} ∑_{q∈U} Xn(q) = 1 for the outcome X0 X1 . . . of every infinite randomized word w ∈ D(Σ)^ω. Fig. 6 shows an automaton where the unique end-component is absorbing and the strongly synchronizing language is universal.

Lemma 5. For a given PA A, deciding whether a given terminal end-component U is absorbing is decidable in PSPACE.

Proof. Given a terminal end-component U of the PA A, we construct a coBüchi automaton C such that U is absorbing iff L(C) = ∅. The coBüchi automaton C is exponential in the size of A, and as a consequence of the NLOGSPACE-completeness of the emptiness problem for coBüchi automata, the PSPACE bound follows. The coBüchi automaton C guesses a run of A whose outcome, from some point on, assigns a positive probability mass to a subset of states disjoint from U, and this probability mass remains outside U. The alphabet of C is 2^Σ, where Σ is the alphabet of A.
A word over this alphabet is a sequence of subsets of letters, which can be viewed as the sequence of supports of a randomized word. The states of C are of the form (s, b) where s ⊆ Q is a subset of states of A and b ∈ {0, 1} is a Boolean flag; so there are two kinds of states: 0-flagged and 1-flagged states. The flag records that C has made a guess of a subset of states in Q \ U, after which it is not allowed to guess again. The automaton C begins with the set of states in which A initially starts with strictly positive probability, flagged with 0 (i.e., the initial state of C is (Supp(µ_0), 0)). From a 0-flagged state (s, 0), on input Σ' ⊆ Σ, it can move to the 0-flagged state (s', 0) where s' = Post(s, Σ') is the set of successors of s, or it can guess and move to a 1-flagged state (s', 1)

where s' is a non-empty subset of Post(s, Σ') \ U. Note that s' ∩ U = ∅. After this guess, C deterministically follows the subset construction and checks that the current subset always has empty intersection with U. The coBüchi acceptance condition requires to visit, infinitely often, only states (s, b) with b = 1. The construction of C = ⟨L, ℓ_0, 2^Σ, δ_C, F_C⟩ is as follows:

– L = 2^Q × {0, 1} is the set of states;
– ℓ_0 = (Supp(µ_0), 0) is the initial state;
– 2^Σ \ {∅} is the alphabet;
– δ_C : L × 2^Σ → 2^L is the transition function defined as follows: for all s ⊆ Q and Σ' ⊆ Σ, let s' = Post(s, Σ') and define δ_C((s, 0), Σ') = {(s', 0)} ∪ {(s'', 1) | s'' ≠ ∅ ∧ s'' ⊆ s' \ U}, and define δ_C((s, 1), Σ') = {(s', 1)} if s' ∩ U = ∅, and δ_C((s, 1), Σ') = ∅ otherwise;
– F_C = 2^Q × {1} is the coBüchi acceptance condition.

We show that L(C) = ∅ iff the terminal end-component U is absorbing. First, assume that U is absorbing. Towards contradiction, suppose that L(C) ≠ ∅. Then there exists an ultimately periodic run ρ = r_0 r_1 ... r_{n−1} (r_n ... r_{m−1})^ω in C over some word v such that r_i ∈ F_C for all n ≤ i < m. This means that there is a cycle from r_n to itself in which all states are accepting, and moreover this cycle is accessible from the initial state. Let r_m = r_n and let κ = m − n be the size of this cycle. From v = Σ_0 Σ_1 ..., we construct a randomized word w = d_0 d_1 ... where d_i is the uniform probability distribution over Σ_i; in other words, d_i(σ) = 0 for σ ∉ Σ_i and d_i(σ) = 1/|Σ_i| for σ ∈ Σ_i. Let X_0 X_1 ... be the outcome of the word w in A. We show that w is a witness proving that the terminal end-component U is not absorbing, i.e., that lim_{i→∞} Σ_{q∈U} X_i(q) ≠ 1. From the construction of C, we know that r_i = (s_i, 1) (for all n ≤ i ≤ m) where s_{i+1} = Post(s_i, Σ_i) and s_n ⊆ Supp(X_n). Therefore, each s_i is a subset of the support Supp(X_i) of the outcome sequence at step i (and also at steps i + j·κ for all j ∈ N). Since ∅ ≠ s_n ⊆ Supp(X_n), we have Σ_{q∈s_n} X_n(q) > 0; let us say it equals p. Since s_{i+1} = Post(s_i, Σ_i), the sum Σ_{q∈s_i} X_i(q) of the probability assigned to the states of s_i is always at least p (for all i ≥ n). On the other hand, for all i ≥ n, we have s_i ⊆ Q \ U, which gives Σ_{q∈Q\U} X_i(q) ≥ p > 0. Therefore lim_{i→∞} Σ_{q∈U} X_i(q) ≤ 1 − p < 1, which shows that U is not absorbing.

Second, assume that L(C) = ∅. We show that lim_{n→∞} Σ_{q∈U} X_n(q) = 1 for all words w ∈ D(Σ)^ω. For a given state q and finite word w = d_0 d_1 ... d_{|w|−1} ∈ D(Σ)*, define Post(q, w) = ∪_{σ∈Supp(d_{|w|−1})} Post(Post(q, v), σ) where v = d_0 d_1 ... d_{|w|−2}. We claim that for all words w ∈ D(Σ)^ω and i ∈ N, for all states q ∈ Supp(X_i) of the outcome X_0 X_1 X_2 ... produced by w, there exists k ≤ 2^|Q| such that Post(q, d_i d_{i+1} ... d_{i+k−1}) ∩ U ≠ ∅.
Towards contradiction, assume that there exist w and m ∈ N such that there exists a state q_m ∈ Supp(X_m) for which Post(q_m, d_m d_{m+1} ... d_{m+k−1}) ∩ U = ∅ for all k ≤ 2^|Q|. From the randomized word w, we construct an ultimately periodic run r of the automaton C which is accepting. For the first m − 1 transitions, the run r visits the 0-flagged states (s_i, 0) where s_i = Supp(X_i) for all i < m. At step m, the guess is made and r visits the 1-flagged state ({q_m}, 1). Within the next transitions, this

run deterministically reaches 1-flagged states (c_{m+1}, 1)(c_{m+2}, 1) ... (c_{m+k}, 1) ... where c_{m+k} = Post(q_m, d_m d_{m+1} ... d_{m+k−1}), until some state (c, 1) is visited again. By assumption, Post(q_m, d_m d_{m+1} ... d_{m+k−1}) ∩ U = ∅ for all k ≤ 2^|Q|, and since the automaton C has 2^|Q| 1-flagged states, by the pigeonhole principle at least one state, say (c, 1), is visited twice. This means there is a cycle from (c, 1) to itself which includes only accepting states. The run r afterwards visits the states of this cycle forever. Thus it is an accepting run, a contradiction.

We have proved that for all words w and i ∈ N, for all states q ∈ Supp(X_i) of the outcome X_0 X_1 X_2 ... produced by w, there exists k ≤ 2^|Q| such that Post(q, d_i d_{i+1} ... d_{i+k−1}) ∩ U ≠ ∅. Therefore, for all words w and i ∈ N, one of the states of the end-component U is reached from every state q ∈ Supp(X_i) within 2^|Q| steps, with some positive probability which is at least ν^{2^|Q|}, where ν is the smallest probability of taking a transition in A. Thus,

  Σ_{q∉U} X_{2^|Q|}(q) ≤ 1 · (1 − ν^{2^|Q|})

for all words w, because 1 is an upper bound on Σ_{q∉U} X_0(q), and (1 − ν^{2^|Q|}) is an upper bound on the probability mass which does not move from the other states to states of U within 2^|Q| steps. Similarly,

  Σ_{q∉U} X_{2·2^|Q|}(q) ≤ (1 − ν^{2^|Q|}) · (1 − ν^{2^|Q|})

where the first factor (1 − ν^{2^|Q|}) is an upper bound on Σ_{q∉U} X_{2^|Q|}(q), and the second is an upper bound on the probability mass which does not move from the other states to states of U within the next 2^|Q| steps. So, after j repetitions, we have

  Σ_{q∉U} X_{j·2^|Q|}(q) ≤ (1 − ν^{2^|Q|})^j.

Since 0 ≤ 1 − ν^{2^|Q|} < 1, we have lim_{j→∞} Σ_{q∉U} X_{j·2^|Q|}(q) = 0. The above arguments immediately give lim_{n→∞} Σ_{q∈U} X_n(q) = 1 for all words w, meaning that U is absorbing. □
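The geometric decay in the bound above is easy to observe numerically. The following toy example is ours, not from the paper: a one-letter PA with two states where U = {1} is absorbing and every state reaches U with probability ν = 1/2 in one step, so the mass outside U shrinks by the factor (1 − ν) at each step:

```python
# Toy numerical check: probability mass outside an absorbing end-component U
# decays geometrically once every state reaches U within a bounded number of
# steps with probability at least nu. Here nu = 1/2 and U = {1}.
M = [[0.5, 0.5],    # state 0: stay with 1/2, move into U with 1/2
     [0.0, 1.0]]    # state 1 (the end-component U) is absorbing

def step(X, M):
    """One step of the outcome sequence: X_{n+1}(p) = sum_q X_n(q) * M[q][p]."""
    return [sum(X[q] * M[q][p] for q in range(len(X))) for p in range(len(X))]

X = [1.0, 0.0]                     # initial distribution mu_0
outside = []
for _ in range(10):
    X = step(X, M)
    outside.append(1.0 - X[1])     # probability mass outside U

# the bound (1 - nu)^n is attained exactly in this example
assert all(abs(outside[n] - 0.5 ** (n + 1)) < 1e-12 for n in range(10))
```

After ten steps less than 0.1% of the mass remains outside U, matching the limit lim_{n→∞} Σ_{q∈U} X_n(q) = 1 in the proof.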

Another necessary condition for a probabilistic automaton to have a universal strongly (resp., weakly) synchronizing language is that the uniformly randomized word is synchronizing as well. For instance, the automaton presented in Fig. 4 has an absorbing end-component, but since the uniformly randomized word is not strongly synchronizing, the strongly synchronizing language is not universal.

[Two automata figures omitted in this extraction.]

Fig. 5. Non-absorbing end-component.

Fig. 6. Absorbing end-component.

Lemma 6. The universality problem for the strongly synchronizing language and for the weakly synchronizing language of probabilistic automata is decidable in PSPACE.

Proof. The strongly (resp., weakly) synchronizing language of a given probabilistic automaton A = ⟨Q, µ_0, Σ, δ⟩ is universal iff (I) there is a (then necessarily unique) absorbing end-component U, and (II) the uniformly randomized word w_u is strongly (resp., weakly) synchronizing for A.

First, assume that L_S(A) = D(Σ)^ω. We show that both conditions (I) and (II) hold.

Condition (I): By definition, A has at most one absorbing end-component. By contradiction, suppose that A has no absorbing end-component. Then for a reachable terminal end-component U, there exists a randomized word w = d_0 d_1 d_2 ... such that lim_{n→∞} Σ_{q∈U} (X_n(q))_w ≠ 1. Note that we write (X_n(q))_w to emphasize that the outcome is produced by following the word w. Since U is terminal, Σ_{q∈U} (X_n(q))_w ≤ Σ_{q∈U} (X_{n+1}(q))_w for all n ∈ N. Therefore the limit lim_{n→∞} Σ_{q∈U} (X_n(q))_w exists and equals some M with 0 < M < 1. On the other hand, by following the uniformly randomized word w_u, the probability to be in states of U after |Q| steps is at least ν^|Q|. This probability never leaves U, thus lim_{n→∞} Σ_{q∈U} (X_n(q))_{w_u} > ν^|Q|. From these two words and a coefficient α ∈ (0, 1), we construct a randomized word v which is a convex combination of w and w_u, as follows. Define v = d'_0 d'_1 d'_2 ... where d'_i(σ) = α · d_i(σ) + (1 − α) · 1/|Σ| for all σ ∈ Σ. This implies that lim_{n→∞} Σ_{q∉U} (X_n(q))_v ≥ α · (1 − M), and also lim_{n→∞} Σ_{q∈U} (X_n(q))_v > (1 − α) · ν^|Q|. Therefore lim_{n→∞} (||X_n||)_v ≤ 1 − min((1 − α) · ν^|Q|, α · (1 − M)), in contradiction with the fact that v is strongly synchronizing.

Condition (II): This condition trivially holds, since L_S(A) = D(Σ)^ω.
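The convex combination of a randomized word with the uniform word used in the argument for Condition (I) can be sketched directly (representation of a letter distribution as a dictionary is ours):

```python
# Sketch of the convex combination from the proof of Lemma 6: from a letter
# d_i of a randomized word w, build d_i'(sigma) = alpha * d_i(sigma)
# + (1 - alpha) / |Sigma|, the corresponding letter of the blended word v.
def blend(d, alphabet, alpha):
    """Mix a distribution d (dict letter -> probability) with the uniform one."""
    u = 1.0 / len(alphabet)
    return {s: alpha * d.get(s, 0.0) + (1 - alpha) * u for s in alphabet}

d = {'a': 1.0}                       # a Dirac letter of w
v0 = blend(d, ['a', 'b'], alpha=0.5)
assert abs(sum(v0.values()) - 1.0) < 1e-12   # still a probability distribution
assert v0['b'] > 0                           # every letter gets positive mass
```

The key property exploited in the proof is visible here: every letter of v has full support, so v inherits the reachability behaviour of w_u while still giving weight α to the behaviour of w.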
Second, assume that both conditions (I) and (II) are fulfilled: (I) there is a unique absorbing end-component U, and (II) the uniformly randomized word w_u is strongly (resp., weakly) synchronizing for A. We show that L_S(A) = D(Σ)^ω. Let U = {q_0, q_1, ..., q_{|U|−1}}, and for convenience let q_{|U|} = q_0. We claim that

Post(q_i, Σ) = {q_{i+1}} is a singleton for all 0 ≤ i < |U|, which means that U consists of a simple cycle. Let ε = ν/(1+ν). Since w_u is strongly synchronizing, there exists n_0 ∈ N such that for all n ≥ n_0 there is a state q̂ with X_n(q̂) > 1 − ε. Let n_1 ∈ N be such that for all n > n_1, the total probability to be in states of Q \ U is smaller than ε: Σ_{q∉U} X_n(q) < ε. Then, for all k > max(n_0, n_1), there exists a unique state q such that X_k(q) ≥ 1 − ε. Assume towards contradiction that q has two successors q_1, q_2 ∈ U with q_1 ≠ q_2, that is, {q_1, q_2} ⊆ Post(q, σ). Then X_{k+1}(q_1) ≥ X_k(q) · δ(q, σ)(q_1) > (1 − ε) · ν = ε. By a symmetric argument, we have X_{k+1}(q_2) > ε, showing that ||X_{k+1}|| < 1 − ε, a contradiction. This shows that U consists of a simple cycle.

Towards contradiction with L_S(A) = D(Σ)^ω, assume that there exists an infinite randomized word w such that w ∉ L_S(A). Since U is absorbing and consists of a simple cycle, there exists N ∈ N such that |Supp((X_n)_w)| > 1 for all n > N; otherwise w would be strongly synchronizing. By assumption, the uniformly randomized word w_u is strongly synchronizing, therefore for all ε > 0 there exists n_0 ∈ N such that for all n ≥ n_0 there is a state q̂ with (X_n(q̂))_{w_u} > 1 − ε. Thus, there exist n > max(n_0, N) and two states q, q' ∈ U such that (X_n(q))_w > 0 and (X_n(q'))_w > 0. All states reachable by w are also reachable by w_u. Hence Supp((X_n)_w) ⊆ Supp((X_n)_{w_u}), a contradiction with condition (II), because in this case w_u is not strongly synchronizing.

Condition (I) can be checked in PSPACE by Lemma 5, and condition (II) reduces to checking that a Markov chain is synchronizing, which can be done in polynomial time by steady-state analysis. The PSPACE bound follows. □

Lemma 7. The universality problem for the strongly synchronizing language and for the weakly synchronizing language of probabilistic automata is PSPACE-hard.

Proof. We present a reduction from a PSPACE-complete problem called the initial state problem.
Given a nondeterministic finite automaton N = ⟨Q, q_0, Σ, δ, F⟩ and a state q ∈ Q, we denote by N_q the automaton N in which the initial state is q, i.e., N_q = ⟨Q, q, Σ, δ, F⟩. The initial state problem is to decide, given N, whether there exists a state q ∈ Q and a word w ∈ Σ^ω such that all runs r of N_q over w avoid F, i.e., r_i ∉ F for all i ≥ 0. From the results of [4, 13], it follows that the initial state problem is PSPACE-complete. We present a polynomial-time reduction from the initial state problem to the universality problem, establishing the PSPACE-hardness of the universality problem.

Given an NFA N = ⟨L, ℓ_0, Σ, δ_N, F_N⟩ with F_N ≠ ∅, we construct a PA A = ⟨Q, µ_0, Σ, δ⟩ as follows:

– Q = L ∪ {q_end};
– µ_0(ℓ_0) = 1, and µ_0(q) = 0 for all q ∈ Q \ {ℓ_0};
– δ : Q × Σ → D(Q) is the probabilistic transition function defined as follows: for all q ∈ Q and σ ∈ Σ, if q ∉ F_N, then δ(q, σ) is the uniform distribution over δ_N(q, σ); if q ∈ F_N, then δ(q, σ)(q') = 1/(2|δ_N(q, σ)|) for all q' ∈ δ_N(q, σ) and δ(q, σ)(q_end) = 1/2.
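The probabilistic transition function of the reduction can be sketched as follows (the dictionary encoding of the NFA and the name `q_end` as a Python string are our illustrative choices):

```python
# Sketch of the NFA-to-PA reduction in Lemma 7: successors are chosen
# uniformly; from an accepting state, half of the probability mass is
# diverted to the fresh sink state q_end.
def pa_transition(nfa_delta, accepting, q, sigma, q_end="q_end"):
    """delta(q, sigma) of the constructed PA, as a dict state -> probability."""
    succ = sorted(nfa_delta[q][sigma])
    if q not in accepting:
        # uniform distribution over the NFA successors
        return {p: 1.0 / len(succ) for p in succ}
    # accepting state: halve the successor probabilities, send 1/2 to q_end
    out = {p: 1.0 / (2 * len(succ)) for p in succ}
    out[q_end] = 0.5
    return out

# tiny NFA: s --a--> {s, t}, t --a--> {t}, with t accepting
nfa = {'s': {'a': {'s', 't'}}, 't': {'a': {'t'}}}
assert pa_transition(nfa, {'t'}, 's', 'a') == {'s': 0.5, 't': 0.5}
d = pa_transition(nfa, {'t'}, 't', 'a')
assert abs(d['t'] - 0.5) < 1e-12 and abs(d['q_end'] - 0.5) < 1e-12
```

This mirrors the intent of the construction: any run that keeps visiting F_N keeps leaking mass into {q_end}, so only words whose runs eventually avoid F_N can prevent synchronization in q_end.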

We show that the answer to the initial state problem for N is Yes if and only if A is not universal. We assume w.l.o.g. that all states in N are reachable. First, if the answer to the initial state problem for N is Yes, then let q̂ be an initial state and w ∈ Σ^ω be a word satisfying the problem. We construct a word that is neither strongly nor weakly synchronizing for A. Consider the |Q|-fold repetition of the uniform distribution d_u over Σ. Then, with positive probability the state q_end is reached, and also with positive probability the state q̂ is reached, say after k steps. Let w' ∈ Σ^ω be such that w = v · w' and |v| = |Q| − k. Note that from state q̂ the finite word v is played with positive probability by the repetition of the uniform distribution d_u. Therefore, on the word (d_u)^|Q| · w', with some positive probability the state q_end is never reached; thus the word is not synchronizing, and A is not universal. Second, if A is not universal, then the terminal end-component {q_end} is not absorbing, and by the construction in Lemma 5, there exists a state q̂ and a pure word w ∈ Σ^ω such that all runs from q̂ on w avoid q_end, and therefore also avoid F_N. Hence, the answer to the initial state problem for N is Yes. □

The following result follows from Lemma 6 and Lemma 7.

Theorem 3. The universality problem for the strongly synchronizing language and for the weakly synchronizing language of probabilistic automata is PSPACE-complete.

5 Discussion

The complexity results of this paper show that both the emptiness and the universality problems for synchronizing languages are PSPACE-complete. The results in this paper also apply to a more general definition of synchronizing sequences of probability distributions, where groups of equivalent states are clustered together. A labeling function assigns a color to each group of equivalent states. The definition of synchronizing sequences then corresponds to the requirement that the automaton essentially behaves deterministically according to the sequence of colors produced in the long run. A labeled probabilistic automaton is a PA A = ⟨Q, µ_0, Σ, δ⟩ with a labeling function L : Q → Γ where Γ is a finite set of colors. The L-norm of a probability distribution X ∈ D(Q) is ||X||_L = max_{γ∈Γ} Σ_{q:L(q)=γ} X(q), and a sequence X_0 X_1 ... is strongly synchronizing (resp., weakly synchronizing) if lim inf_{n→∞} ||X_n||_L = 1 (resp., lim sup_{n→∞} ||X_n||_L = 1). The constructions of ω-automata in Lemma 1 and Lemma 2 can be adapted to show that the emptiness problem remains in PSPACE for labeled probabilistic automata. Roughly, the ω-automaton guesses a witness sequence γ̂_i γ̂_{i+1} ... of colors rather than a witness sequence of states. The solution of the universality problem is adapted analogously.
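The L-norm from the generalized definition is straightforward to compute; the following sketch uses an illustrative dictionary encoding of distributions and labelings (names are ours):

```python
# Sketch of the L-norm from the discussion: ||X||_L is the maximum, over
# colors gamma, of the total probability on states labeled gamma.
def l_norm(X, label):
    """X: dict state -> probability; label: dict state -> color."""
    mass = {}
    for q, p in X.items():
        mass[label[q]] = mass.get(label[q], 0.0) + p
    return max(mass.values())

X = {'q0': 0.3, 'q1': 0.5, 'q2': 0.2}
label = {'q0': 'red', 'q1': 'red', 'q2': 'blue'}
assert abs(l_norm(X, label) - 0.8) < 1e-12   # red mass 0.8 beats blue mass 0.2
```

With the identity labeling (one color per state) this degenerates to the plain norm ||X|| = max_q X(q) used in the rest of the paper.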

References

1. Baier, C., Bertrand, N., Größer, M.: On decision problems for probabilistic Büchi automata. In: Proc. of FoSSaCS: Foundations of Software Science and Computational Structures. pp. 287–301. LNCS 4962, Springer (2008)
2. Baier, C., Größer, M.: Recognizing omega-regular languages with probabilistic automata. In: Proc. of LICS: Logic in Comp. Science. pp. 137–146. IEEE (2005)
3. Benenson, Y., Adar, R., Paz-Elizur, T., Livneh, Z., Shapiro, E.: DNA molecule provides a computing machine with both data and fuel. Proc. National Acad. Sci. USA 100, 2191–2196 (2003)
4. Chadha, R., Sistla, A.P., Viswanathan, M.: On the expressiveness and complexity of randomization in finite state monitors. In: Proc. of LICS: Logic in Computer Science. pp. 18–29. IEEE Comp. Soc. (2008)
5. Doyen, L., Massart, T., Shirmohammadi, M.: Synchronizing objectives for Markov decision processes. In: Proc. of iWIGP. pp. 61–75 (2011)
6. Kfoury, D.: Synchronizing sequences for probabilistic automata. Studies in Applied Mathematics 29, 101–103 (1970)
7. Korthikanti, V.A., Viswanathan, M., Kwon, Y., Agha, G.: Reasoning about MDPs as transformers of probability distributions. In: Proc. of QEST: Quantitative Evaluation of Systems. pp. 199–208. IEEE Computer Society (2009)
8. Kwon, Y., Agha, G.: Linear inequality LTL (iLTL): A model checker for discrete time Markov chains. In: ICFEM. pp. 194–208 (2004)
9. Paz, A.: Introduction to Probabilistic Automata. Academic Press, New York (1971)
10. Rabin, M.O.: Probabilistic automata. Information and Control 6(3), 230–245 (1963)
11. Sistla, A.P., Vardi, M.Y., Wolper, P.: The complementation problem for Büchi automata with applications to temporal logic. Theor. Comput. Sci. 49, 217–237 (1987)
12. Stockmeyer, L., Meyer, A.: Word problems requiring exponential time. In: Proceedings of the 5th Annual Symposium on Theory of Computing. pp. 1–9. ACM Press (1973)
13. Tracol, M., Baier, C., Größer, M.: Recurrence and transience for probabilistic automata. In: Proc. of FSTTCS: Foundations of Software Technology and Theoretical Computer Science. LIPIcs, vol. 4, pp. 395–406. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2009)
14. Volkov, M.V.: Synchronizing automata and the Černý conjecture. In: Proc. of LATA: Language and Automata Theory and Applications. pp. 11–27. LNCS 5196, Springer (2008)

