Graph-controlled grammars as language acceptors


To appear in J. Automata, Languages and Combinatorics, volume 2, 1997

Henning Fernau
Wilhelm-Schickard-Institut für Informatik, Universität Tübingen
Sand 13, D-72076 Tübingen, Germany
email: [email protected]

WWW: http://www-fs.informatik.uni-tuebingen.de/~fernau/

April 30, 1997

Abstract

In this paper, we study the concept of accepting grammars within various forms of regulated grammars, such as programmed grammars, matrix (set) grammars, grammars with regular (set) control, and periodically time-variant grammars, all viewed as variants of grammars controlled by bicoloured digraphs. We focus on their descriptive capacity. In this way, we continue our studies of accepting grammars [1, 2, 3, 11, 13, 14, 15]. Periodically time-variant grammars yield the first example of a nontrivial equivalence of generating and accepting mode in the absence of appearance checking.

Keywords: Formal languages, regulated rewriting, accepting grammars. C.R. Categories: F.4.2, F.4.3

Supported by Deutsche Forschungsgemeinschaft grant DFG La 618/3-1.


1 Introduction

Regulated rewriting is one of the main and classic topics of formal language theory [5, 26]. Its starting point was to enrich context-free rewriting mechanisms by different kinds of regulations, hence generally enhancing the generative power of such devices compared to the context-free languages. In this way, it is possible to describe more natural phenomena using context-independent derivation rules, see [5]. First, we present various concepts of regulated rewriting as special cases of graph-controlled rewriting. Then, we concentrate on these mechanisms as language acceptors, contrasting them with the language generating interpretation.

What do we mean by the term "accepting grammar"? Of course, the core rules are accepting rules as defined in [26, page 9f.]. For the context-free case, this means that, instead of rules of the form A → w, where A is some nonterminal symbol, we have rules of the form w → A. There are in principle (at least) two different ways to arrive at grammars as language accepting devices:

• We can try to create an accepting grammar type which mimics the generating process of another generative grammar step by step. For example, for programmed grammars, this would lead to the following concept. For every labelled rule, there are two label sets σ and φ, where σ contains labels of such rules which have been successfully applied in the last derivation step, and φ contains labels of such rules which have been unsuccessfully applied in the last derivation step. Such an approach has recently been undertaken by Mihalache [19] for the case of CD grammar systems. By construction, the language families generated and accepted in this way trivially coincide.

• We can try to carry over the original idea of the grammar type under consideration into an accepting interpretation. For example, for programmed grammars, we would still have two label sets σ and φ, where σ contains labels indicating where to go after a successful application of the present rule, and φ contains labels indicating where to go after an unsuccessful application of the present rule.

Still another, third approach has been proposed in [12] for so-called analysing array grammars.

As in our preceding works [1, 2, 3, 11, 13, 14, 15] on this topic, we restrict ourselves to the second interpretation, so that we supplement those works here.

Conventions: ⊆ denotes inclusion, ⊂ denotes strict inclusion. ∅ denotes the empty set. ℕ is the set of natural numbers excluding zero. The empty word is denoted by λ. Let Sub(w) (Pref(w)) denote the set of subwords (prefixes) of w. If G is a grammar with nonterminal alphabet VN and terminal alphabet VT, we denote the total alphabet VG := VN ∪ VT. Sometimes, we use bracket notations like Lgen(M, CF[−λ], ut) = Lgen(P, CF[−λ], ut) in order to say that the equation holds both in the case of excluding λ-rules (as indicated by the −λ enclosed in brackets) and in the case of admitting λ-rules (neglecting the bracket contents). We consider two languages L1, L2 to be equal iff L1 \ {λ} = L2 \ {λ}, and we simply write L1 = L2 in this case. We term two devices describing languages equivalent if the two described languages are equal. We have a similar interpretation of the inclusion relation of languages. We presuppose some knowledge of formal language theory on the side of the reader. Especially, the Chomsky hierarchy L(CF) ⊂ L(CS) ⊂ L(REC) ⊂ L(RE) should be known.

2 Definitions

We first introduce the concept of graph-controlled rewriting. This allows us to present programmed, matrix and time-variant grammars as well as grammars with regular control as special cases, leading to a unified and lucid framework of definitions and arguments. Restricting ourselves to the most interesting cases, we focus on context-free core rules in the following.

A grammar controlled by a bicoloured digraph [5, 20, 21, 23, 27, 28] or G grammar is an 8-tuple G = (VN, VT, P, S, Γ, I, F, h) where

• VN, VT, P, S define, as in a phrase structure grammar, the set of nonterminals, terminals, context-free core rules, and the start symbol, respectively;
• Γ is a bicoloured digraph, i.e., Γ = (U, E), where U is a finite set of nodes and E ⊆ U × {g, r} × U is a finite set of directed edges (arcs) coloured by g or r ("green" or "red");
• I ⊆ U are the initial nodes;
• F ⊆ U are the final nodes;
• h : U → 2^P \ {∅} relates nodes with rule sets.

There are two different definitions of the appearance checking mode in the literature: we say that (x, u) ⇒ (y, v) ((x, u) ⇒_c (y, v), respectively) holds in G with (x, u), (y, v) ∈ (VN ∪ VT)* × U, if either

x = x1 α x2, y = x1 β x2, α → β ∈ h(u), and (u, g, v) ∈ E,

or every (one, respectively) rule of h(u) is not applicable to x, y = x, and (u, r, v) ∈ E. The reflexive transitive closure of ⇒ (⇒_c, respectively) is denoted by ⇒* (⇒_c*, respectively). The corresponding languages generated by G are defined by

Lgen[c](G) = { x ∈ VT* | ∃u ∈ I ∃v ∈ F : (S, u) ⇒*[c] (x, v) }.

The corresponding languages accepted by G are defined by

Lacc[c](G) = { x ∈ VT* | ∃u ∈ I ∃v ∈ F : (x, u) ⇒*[c] (S, v) }.

(1) G is said to be with unconditional transfer [23] (there, the notion of "full checking" is used instead) iff (u, g, v) ∈ E ⟺ (u, r, v) ∈ E. (2) If E ∩ (U × {r} × U) = ∅, then G is without appearance checking. We denote the corresponding language families (now deviating from [5]) by Lgen(G[c], CF, ac), Lgen(G[c], CF, ut), and Lgen(G[c], CF) in the generating case, or Lacc(G[c], CF, ac) etc. in the accepting case. Here and in the following, we write CF−λ instead of CF if we do not allow rules of the form A → λ in the generating case or of the form λ → A in the accepting case.

We consider six special cases of G grammars in the following. Note that we do not introduce the "classical" definitions as contained in [5] but mostly equivalent ones in order to have a unified presentation.

• A programmed grammar or P grammar is a G grammar having the following features:
  – Every node contains exactly one rule. Therefore, both modes of appearance checking coincide in this case.
  – There are no designated initial or final nodes. For the generating case, this means that it is possible to start a derivation in each node containing a rule whose left-hand side equals the start symbol S, and it is possible to stop anywhere when a terminal string has been derived.

As language families, we obtain Lgen(P, CF, ac), Lgen(P, CF, ut), and Lgen(P, CF), together with their accepting counterparts.

• A grammar with regular set control or rSC grammar is a G grammar with the following feature:
  – If there is a red arc from node u to node v, then there is also a green arc from node u to node v.
We get the classes Lgen(rSC[c], CF[−λ], ac), Lgen(rSC[c], CF[−λ], ut), and Lgen(rSC[c], CF[−λ]), plus their accepting variants.

• A grammar with regular control or rC grammar is an rSC grammar such that
  – every node contains exactly one rule. (Therefore, both modes of appearance checking coincide in this case.)
We obtain the families Lgen(rC, CF[−λ], ac), Lgen(rC, CF[−λ], ut), and Lgen(rC, CF[−λ]), plus their accepting variants.

• A matrix set grammar or MS grammar is an rSC grammar obeying the following restriction:
  – Only the initial nodes (not necessarily containing rules with left-hand side S) are allowed to have more than one in-going green arc, while only the final nodes are allowed to have more than one out-going green arc. Between every final node and every initial node, there is a green arc.
We get the classes Lgen(MS[c], CF[−λ], ac), Lgen(MS[c], CF[−λ], ut), and Lgen(MS[c], CF[−λ]), plus their accepting variants.

• A matrix grammar or M grammar is an MS grammar such that
  – every node contains exactly one rule. (So, it is an rSC grammar which is both an rC and an MS grammar.)
We have the classes Lgen(M, CF[−λ], ac), Lgen(M, CF[−λ], ut), and Lgen(M, CF[−λ]), plus their accepting variants.

• A (periodically) time-variant grammar or TV grammar has the following features:
  – If there is a red arc from node u to node v, then there is also a green arc from node u to node v. Every node has exactly one in-going green arc and one out-going green arc. In other words, the graph of green arcs has a simple ring structure.
  – There is one designated initial node, and every node is a final node.
We obtain the classes Lgen(TV[c], CF[−λ], ac), Lgen(TV[c], CF[−λ], ut), and Lgen(TV[c], CF[−λ]), plus their accepting variants.

For reasons unknown to us, only the c-mode has been considered in the literature [5, 24, 25, 26] for periodically time-variant grammars. More precisely, the standard definition (adapted to both ⇒ and ⇒_c) runs as follows: a periodically time-variant grammar is a system G = (VN, VT, P, S, φ, F), where P is a finite set of labelled context-free productions r : α → β, F ⊆ Lab(P), and φ : ℕ → 2^{Lab(P)} \ {∅} is a mapping such that there is an integer k so that φ(n) = φ(n + k) for all n ∈ ℕ. On VG* × ℕ, we define the binary relations ⇒ and ⇒_c: (x, n) ⇒ (y, m) [(x, n) ⇒_c (y, m), respectively] iff (0) m = n + 1 and either (1) there is a production α → β ∈ P labelled by some r ∈ φ(n) such that x = x1 α x2 and y = x1 β x2, or (2) the set φ(n) ∩ F is not empty, y = x, and every [one, respectively] rule α → β ∈ P labelled by some r ∈ φ(n) [r ∈ φ(n) ∩ F, respectively] is not applicable to x, i.e., α ∉ Sub(x). A word w ∈ VT* belongs to the language Lgen[c](G) generated by G if there is a derivation sequence

S = w1 ⇒[c] w2 ⇒[c] ⋯ ⇒[c] wn = w

such that (wi, i) ⇒[c] (wi+1, i + 1) for 1 ≤ i < n. There are the following minor differences to our definition:

• we disallow pre-periods of φ; this is not a real restriction, as shown in [24, Theorem 10];
• we allow no appearance checking which is "individual" to every occurrence of a production, but only appearance checking pertaining to the rule sets φ(n) as a whole; obviously, this does not matter at all in case of disallowing appearance checking or in case of unconditional transfer. But also in case of allowing arbitrary appearance checking, our definition does not restrict the descriptive power of time-variant grammars, as seen by a short look at the simulation of matrix grammars via time-variant grammars contained in the proof of Theorem V.4.3 in [26], where appearance checking is only possible for productions within one-element rule sets; i.e., the proof actually applies both to our ⇒ and to our ⇒_c definition above.
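To make the time-variant rewriting regime concrete, the following sketch enumerates a bounded portion of Lgen for a TV grammar without appearance checking. The string-based encoding (rules as pairs of strings, φ as a list of rule sets) is our own illustration and not part of the definitions above; in step n, only rules from the active set φ(n) may be applied.

```python
# Sketch (illustrative encoding, not from the paper): a periodically
# time-variant grammar is given by its period, a list `phi` of rule
# sets; a rule (alpha, beta) rewrites one occurrence of alpha into beta.

def tv_generate(phi, start, terminals, max_steps, max_len):
    """Collect terminal words derivable in at most max_steps steps,
    where step n (1-based) may only use rules from phi[(n-1) % m]."""
    forms = {start}
    results = set()
    for n in range(max_steps):
        rules = phi[n % len(phi)]  # the rule set active in step n+1
        nxt = set()
        for x in forms:
            for alpha, beta in rules:
                i = x.find(alpha)
                while i != -1:  # nondeterministic choice of position
                    y = x[:i] + beta + x[i + len(alpha):]
                    if len(y) <= max_len:
                        nxt.add(y)
                    i = x.find(alpha, i + 1)
        forms = nxt
        results |= {w for w in forms if all(c in terminals for c in w)}
    return results
```

For instance, with period 2 and φ(1) = {S → aS}, φ(2) = {S → bS, S → b}, the collected terminal words are ab, abab, ababab, …, reflecting how the ring structure of the green arcs forces the two rule sets to alternate.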

3 Results

Since, by our definition, every P, M, TV and rC grammar can be viewed as a context-free grammar controlled by a bicoloured directed graph, and, moreover, every M grammar is an rC grammar, and every TV grammar working in c-mode is easily seen to be an rC grammar (considering the regular language Pref((φ(1) ⋯ φ(m))*), where the function φ has period m), we obtain the following lemma immediately. (Similar comments apply to TV and rSC grammars, both with the ⇒ and the ⇒_c derivation definition.)

Lemma 3.1 Let X ∈ {CF, CF−λ}, Y ∈ {ac, ut, λ}. Then, we find

• Lgen(P, X, Y) ⊆ Lgen(G[c], X, Y);
• Lgen(M, X, Y) ⊆ Lgen(rC, X, Y) ⊆ Lgen(G[c], X, Y);
• Lgen(M, X, Y) ⊆ Lgen(MS[c], X, Y) ⊆ Lgen(rSC[c], X, Y) ⊆ Lgen(G[c], X, Y);
• Lgen(TV_c, X, Y) ⊆ Lgen(rC, X, Y) ⊆ Lgen(G[c], X, Y);
• Lgen(TV[c], X, Y) ⊆ Lgen(rSC[c], X, Y) ⊆ Lgen(G[c], X, Y).

The same relations are true in the accepting case. □

From the monograph [5] (especially, see page 147) and our papers [9, 10], the following relations are known:

Theorem 3.2 Let X ∈ {G_c, G, P, M, MS, MS_c, rC, rSC, rSC_c, TV_c, TV}, Y ∈ {CF, CF−λ}. Then Lgen(X, Y) = Lgen(G, Y) and Lgen(X, Y, ac) = Lgen(G, Y, ac).

From [5, 7, 8, 16, 17] we know the strictness of the following (trivial) inclusions:

Theorem 3.3
• Lgen(G, CF) ⊂ Lgen(G, CF, ac) = L(RE).
• Lgen(G, CF−λ) ⊂ Lgen(G, CF−λ, ac) ⊂ L(CS).

From [9, 10], we know the following facts, distinguishing between the derivation modes ⇒_c and ⇒ in case of unconditional transfer.

Theorem 3.4 Let X ∈ {MS, rSC, G, TV}, Y ∈ {CF, CF−λ}. Then, we find:

Lgen(X, Y, ut) = Lgen(P, Y, ac),
Lgen(X_c, Y, ut) = Lgen(M, Y, ut) = Lgen(rC, Y, ut) = Lgen(P, Y, ut).

Regarding the discussion of the possible strictness of the trivial inclusion

Lgen(P, Y, ut) ⊆ Lgen(P, Y, ac),

we refer to [6, 9, 10]. Now, we turn to accepting grammars.

Theorem 3.5 Let X ∈ {P, rC, M}. Then,
• Lacc(X, CF, ac) = L(RE);
• Lacc(X, CF−λ, ac) = L(CS).

The previous results are contained in [1], but there we considered neither G nor MS nor rSC nor TV grammars, and we considered unconditional transfer only in the case of programmed grammars. The next lemma is a first step towards similar results in the remaining cases considered in this paper.

Lemma 3.6 Let X ∈ {G, G_c, TV, TV_c, MS, MS_c, rSC, rSC_c}. Then, we find

• Lacc(X, CF, ac) ⊆ L(RE);
• Lacc(X, CF−λ, ac) ⊆ L(CS).

Analogous inclusions for the cases of unconditional transfer and excluding appearance checking are immediate.

Proof. Since TV grammars are special cases of G grammars (either regarding the relation ⇒ or ⇒_c), we restrict ourselves to the second inclusion. Disallowing λ-productions, it is quite easy to see how a linear bounded automaton can check the applicability of rules and do the necessary replacements. We omit the details here. □

In order to show the reverse inclusions, we introduce the concept of random string context grammars, see [1]. (In [4, 18], this concept has been introduced under the name "generalized forbidding grammars".) A forbidding random string context grammar is a system G = (VN, VT, P, S), where VN, VT, S have their usual meaning, and P is a finite set of random string context rules, i.e., pairs of the form (α → β, R), where α → β is a rewriting rule and R is a finite subset of VG+. We write x ⇒ y iff x = x1 α x2, y = x1 β x2 for some x1, x2 ∈ VG*, (α → β, R) ∈ P, and R ∩ (Sub(x1) ∪ Sub(x2)) = ∅. Lgen(fRSC, CF[−λ]) (Lacc(fRSC, CF[−λ])) denotes the family of languages generated (accepted) by forbidding random string context grammars with [λ-free] context-free productions. By [1, 4, 18], we know the following:

Theorem 3.7
• Lgen(fRSC, CF) = Lacc(fRSC, CF) = L(RE);
• Lgen(fRSC, CF−λ) = Lacc(fRSC, CF−λ) = L(CS).

First, we discuss G and TV grammars.

Lemma 3.8 Let X ∈ {G, G_c, TV, TV_c}. Then,

• L(RE) ⊆ Lacc(X, CF, ut);
• L(CS) ⊆ Lacc(X, CF−λ, ut).

Proof. Since TV grammars are special cases of G grammars, we restrict ourselves to the TV case. Because of Theorem 3.7, we start with fRSC grammars: let G = (VN, VT, P, S) be an accepting fRSC grammar, where P = {p_1, …, p_m}. For a production p_j : (w_j → A_j, R_j) ∈ P, let R_j = {v_{j1}, …, v_{jn_j}}. We show how to simulate each production by a sequence of production sets of a TV grammar, distinguishing between both derivation modes.

(i) Case ⇒_c-derivation: One derivation step of G can be simulated by Σ_{j=1}^m (n_j + 2) steps (this number equals the period length) of the simulating TV grammar G′ = (VN ∪ VN′ ∪ {F}, VT, P′, S, φ) working in c-mode, where VN′ is the alphabet of primed versions of the nonterminals of VN, and F is an additional failure symbol. Formally, let J_j = Σ_{i=1}^{j−1} (n_i + 2). We want to simulate production p_j; therefore, we define the production sets φ(J_j + 1), …, φ(J_j + n_j + 2):

φ(J_j + 1) = {w_j → A_j′, A_j′ → F}
φ(J_j + 2) = {v_{j1} → F, A_j′ → F}
  ⋮
φ(J_j + 1 + n_j) = {v_{jn_j} → F, A_j′ → F}
φ(J_j + 2 + n_j) = {A_j′ → A_j}

Either we guess in step J_j + 1 that we want to apply production p_j or we guess that we do not want to apply p_j. The production p_j can only be applied subject to the forbidding random string context condition; this is checked via the production sets φ(J_j + 2), …, φ(J_j + 1 + n_j). If and only if we guessed not to apply p_j, the production A_j′ → F can be applied in appearance checking mode.

(ii) Case ⇒-derivation: One derivation step of G can be simulated by 4m steps (this number equals the period length) of the simulating TV grammar G′ = (VN ∪ VG′ ∪ V̄G ∪ {F, L, L′}, VT, P′, S, φ), where VG′ (V̄G) is the alphabet of primed (barred) versions of the nonterminals and terminals of G, L is a place-holder for the empty word (only needed in case G contains λ-productions), and F is an additional failure symbol. We define the production sets φ(1), …, φ(4m) in the following. Let 1 ≤ j ≤ m.

φ(4j − 3) = { L̄ → L̄, λ → L̄ | w_j = λ } ∪ { a → ā, ā → ā | a ∈ VT } ∪ { B → B̄ | B ∈ VN };
φ(4j − 2) = { v → Ā_j | v ∈ h⁻¹(σ(w_j)) } ∪ { B̄ → B′ | B ∈ VG ∪ {L} } ∪ { ā → a′ | a ∈ VT };
φ(4j − 1) = { v_{ji} → F | 1 ≤ i ≤ n_j } ∪ { B′ → B | B ∈ VN ∪ {L} } ∪ { a′ → a | a ∈ VT };
φ(4j) = { Ā_j → A_j }.

Here, σ : (VG ∪ {L}) → 2^{(VG ∪ {L} ∪ V̄T)*} is a finite substitution, defined by σ(a) = {a, ā} if a ∈ VT and σ(A) = {A} if A ∈ VN ∪ {L}, and h : ((VG ∪ V̄T)+ ∪ {L̄}) → (VG ∪ V̄T)* is a mapping given by h(L̄) = λ and h(w) = w if w ∈ (VG ∪ V̄T)+. After applying φ(4j − 3), the sentential form contains at least one barred letter x̄ with x ∈ VN ∪ VT ∪ {L}. This ensures that the following possible application of p_j prepared in φ(4j − 2) can be circumvented, leading to a primed version of x. This primed version further allows the circumvention of the test productions (for the random string context) in φ(4j − 1) by using the appropriate de-priming rules. □

Time-variant grammars are really quite interesting from the point of view of accepting grammars. In most regulation mechanisms, the absence of appearance checking leads to the situation that the descriptive capacity of the mechanism viewed as language generator or as language acceptor is the same. In order to put this observation into a broader framework, we formulated a very general theorem for so-called context-condition grammars for which we proved this fact [3]. For example, we might deduce the following. (Observe that we need not distinguish between ⇒ and ⇒_c as long as no appearance checking features are involved.)

Theorem 3.9 Let X ∈ {G, P, rC, rSC, M, MS}, Y ∈ {CF, CF−λ}. Then, we have:

Lgen(P, Y) = Lgen(X, Y) = Lacc(X, Y). □

This theorem does not apply in the TV case. Nevertheless, the generating and accepting power of time-variant grammars coincides. We show this in the following.

Lemma 3.10 For every generating (TV, CF) grammar G = (VN, VT, P, S, φ), there exists an equivalent grammar G′ = (VN′, VT, P′, S′, φ′) whose start symbol S′ occurs only once in a rule set, namely as a left-hand side of some rule in φ′(1). G′ has λ-rules only if G has λ-rules.

Proof. Set VN′ = VN ∪ {S′}. Assume that φ has period m. Let φ′(1) = {S′ → S} ∪ φ(m), and φ′(n) = φ(n − 1) for 2 ≤ n ≤ m. Of course, in the first derivation step, we only use the rule S′ → S from φ′(1), while in the (km + 1)-st derivation step (where k ≥ 1), we use a rule from φ(m) ⊆ φ′(km + 1) = φ′(1). □
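In an illustrative encoding where φ is simply a list of rule sets given as string pairs (an assumption of this sketch, not notation fixed by the paper), the construction of the previous proof amounts to a one-position phase shift plus one fresh rule:

```python
# Sketch of the normal form of Lemma 3.10: phi'(1) = {S' -> S} ∪ phi(m),
# phi'(n) = phi(n-1) for 2 <= n <= m.  The encoding and the name "S'"
# are illustrative assumptions only.

def add_fresh_start(phi, old_start="S", new_start="S'"):
    m = len(phi)  # the period of phi
    shifted = [list(phi[(n - 1) % m]) for n in range(m)]  # phi'(n) = phi(n-1)
    shifted[0] = [(new_start, old_start)] + shifted[0]    # phi'(1) gains S' -> S
    return shifted
```

After the shift, S′ → S is the only rule mentioning S′, and it sits in φ′(1), as the lemma requires.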

Lemma 3.11 Lgen(TV, CF[−λ]) ⊆ Lacc(TV, CF[−λ]).

Proof. Let G = (VN, VT, P, S, φ) be a generating TV grammar satisfying the normal form of the previous lemma. Assume that φ has period m. Let 1 ≤ i_1 < i_2 < ⋯ < i_k ≤ m be the set of indices n where a successful derivation according to G may stop, i.e.,

(∃A ∈ VN, w ∈ VT* : A → w ∈ φ(n)) ⟺ (∃1 ≤ j ≤ k : n = i_j).

Define VN′ = ((VN \ {S}) × {1, …, k}) ∪ {S} and

φ_j(s) = { g_j(w) → g_j(A) | A → w ∈ φ(((i_j − s) mod m) + 1) }

for 1 ≤ j ≤ k, 1 ≤ s ≤ m, where the morphisms g_j are given by g_j(A) = (A, j) if A ∈ VN \ {S}, and g_j(x) = x for x ∈ VT ∪ {S}. Finally, let φ′(s) = ⋃_{j=1}^k φ_j(s), and P′ = ⋃_{s=1}^m φ′(s). Now, G′ = (VN′, VT, P′, S, φ′) accepts Lgen(G). The idea is that in the beginning G′ guesses where G would have terminated (modulo m). Then, we have the usual backward simulation of a generating grammar. There is only one possible exit point of G′, since G is in the normal form of the previous lemma. If S ⇒_G w_1 ⇒_G ⋯ ⇒_G w_{n−1} ⇒_G w_n ∈ VT* is a derivation according to G, the simulation is done by

VT* ∋ w_n = g_j(w_n) ⇒_{G′} g_j(w_{n−1}) ⇒_{G′} ⋯ ⇒_{G′} g_j(w_1) ⇒_{G′} g_j(S) = S,

where i_j ≡ n (mod m). □
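The backward-simulation idea underlying this proof — reverse every generating rule A → w into an accepting rule w → A and reduce the input back to S — can be sketched as follows. The toy encoding deliberately ignores the time-variant phase bookkeeping (the components (A, j) and the guessed stop index) that the actual construction needs:

```python
# Sketch of the backward simulation: the accepting rules are the
# reversed generating rules; we search (breadth-first, bounded) for a
# reduction of `word` to the start symbol.  Encoding is illustrative.

def reduces_to_start(word, gen_rules, start, max_steps=50):
    reversed_rules = [(w, a) for (a, w) in gen_rules]  # A -> w becomes w -> A
    frontier = {word}
    for _ in range(max_steps):
        if start in frontier:
            return True
        nxt = set()
        for x in frontier:
            for lhs, rhs in reversed_rules:
                i = x.find(lhs)
                while i != -1:  # try every occurrence of the left-hand side
                    nxt.add(x[:i] + rhs + x[i + len(lhs):])
                    i = x.find(lhs, i + 1)
        if not nxt:
            break
        frontier = nxt
    return start in frontier
```

With the rules S → aSb, S → ab, for example, aabb reduces to S while abab does not.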

Lemma 3.12 Lacc(TV, CF[−λ]) ⊆ Lgen(TV, CF[−λ]).

Proof. We only give the construction in the case of admitting λ-rules; otherwise, instead of the end-position-marker #_j, we have to incorporate an appropriate j-colouring of symbols. Let G = (VN, VT, P, S, φ) be an accepting TV grammar. Assume that φ has period m. Let 1 ≤ i_1 < i_2 < ⋯ < i_k ≤ m be the set of indices n where a successful derivation according to G may stop, i.e.,

(∃w ∈ (VN ∪ VT)* : w → S ∈ φ(n)) ⟺ (∃1 ≤ j ≤ k : n = i_j).

Define VN′ = (VN × {1, …, k}) ∪ {S′, F, #_1, …, #_k} and

φ_j(s) = { g_j(A) → g_j(w) | w → A ∈ φ(((i_j − s + 1) mod m) + 1) }

for 1 ≤ j ≤ k, 1 ≤ s ≤ m, where the morphisms g_j are given by g_j(A) = (A, j) if A ∈ VN, and g_j(x) = x for x ∈ VT. Define

T(s) = { #_j → F | s − 2 ≢ i_j (mod m) } ∪ { #_j → λ | s − 2 ≡ i_j (mod m) }.

Finally, let φ′(2s − 1) = ⋃_{j=1}^k φ_j(s) ∪ T(s) for s ≢ 1 (mod m), φ′(1) = ⋃_{j=1}^k φ_j(1) ∪ T(1) ∪ { S′ → (S, j)#_j | 1 ≤ j ≤ k }, and φ′(2s) = { #_j → #_j | 1 ≤ j ≤ k }. Hence, φ′ has period 2m. Let P′ = ⋃_{s=1}^{2m} φ′(s). Now, G′ = (VN′, VT, P′, S′, φ′) generates Lacc(G). If

VT* ∋ w ⇒_G^{(1)} w_1 ⇒_G^{(2)} ⋯ ⇒_G^{(n−1)} w_{n−1} ⇒_G^{(n)} w_n = S

is a derivation according to G, the simulation is done by

S′ ⇒_{G′}^{(1)} (S, j)#_j = g_j(w_n)#_j ⇒_{G′}^{(2)} g_j(w_n)#_j ⇒_{G′}^{(3)} g_j(w_{n−1})#_j ⇒_{G′}^{(4)} ⋯ ⇒_{G′}^{(2n−1)} g_j(w_1)#_j ⇒_{G′}^{(2n)} g_j(w_1)#_j ⇒_{G′}^{(2n+1)} g_j(w)#_j = w#_j ⇒_{G′}^{(2n+2)} w#_j ⇒_{G′}^{(2n+3)} w,

where i_j ≡ n (mod m). Here, ⇒^{(j)} means that in this derivation step we used a production from φ(j) (or φ′(j)). The subtle problem that #_j → λ is not applied as the last step of the simulating grammar but prematurely is circumvented by the occurrence tests #_j → #_j at even step numbers. □

We still have to solve the cases of unconditional transfer in matrix (set) grammars and grammars with regular (set) control. Since M(S)[c] grammars are a special case of r(S)C[c] grammars, we only need to treat M grammars in the following.

Lemma 3.13 Lacc(fRSC, CF[−λ]) ⊆ Lacc(M, CF[−λ], ut).

Proof. Let G = (VN, VT, P, S) be an accepting fRSC grammar. Let P = {p_1, …, p_t}. For a production p_j : (w_j → A_j, R_j) ∈ P, let R_j = {v_{j1}, …, v_{jn_j}}. One derivation step of G can be simulated by a matrix

m_j = (w_j → A_j′, v_{j1} → F, …, v_{jn_j} → F, A_j′ → A_j).

(Here, we take the usual notation of matrix grammars.) Let M = {m_1, …, m_t}. Then, the accepting matrix grammar G′ = (VN ∪ VN′ ∪ {F}, VT, M, S) simulates G. Here, VN′ = { A′ | A ∈ VN }. □

The following theorem summarizes our results on regulated accepting grammars.

Theorem 3.14 Let X ∈ {P, M, rC, G, G_c, TV, TV_c, MS, MS_c, rSC, rSC_c}, Z ∈ {ut, ac}. Then

• L(RE) = Lacc(X, CF, Z);
• L(CS) = Lacc(X, CF−λ, Z);
• Lgen(P, CF[−λ]) = Lgen(X, CF[−λ]) = Lacc(X, CF[−λ]). □
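The behaviour of a matrix under unconditional transfer in the ⇒ mode — each rule of the matrix must be applied once if it is applicable and is silently skipped otherwise — can be sketched as follows (the string encoding and all names are illustrative assumptions of this sketch). Running a matrix shaped like the m_j of Lemma 3.13 then shows how any occurrence of a forbidden word necessarily introduces the failure symbol F:

```python
# Sketch: apply one matrix under unconditional transfer (=> mode).
# A rule that is applicable must be applied at some position; a rule
# that is not applicable is skipped via the parallel red arc.

def apply_matrix_ut(x, matrix):
    forms = {x}
    for lhs, rhs in matrix:
        nxt = set()
        for y in forms:
            i = y.find(lhs)
            if i == -1:
                nxt.add(y)  # not applicable: pass unchanged (red arc)
            while i != -1:  # applicable: rewrite some occurrence
                nxt.add(y[:i] + rhs + y[i + len(lhs):])
                i = y.find(lhs, i + 1)
        forms = nxt
    return forms
```

For the matrix (ab → A′, c → F, A′ → A) with forbidden word c, a clean input is rewritten as intended, apply_matrix_ut("dab", m) = {"dA"}, whereas an input containing c is driven into the failure symbol, apply_matrix_ut("cab", m) = {"FA"}.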

4 Discussion

As in our papers [1, 3, 11], we find three different situations when comparing the generating and accepting capacity of various mechanisms with unconditional transfer.

• In the case of grammars not containing λ-rules, the generating capacity equals at most Lgen(P, CF−λ, ac), which is strictly contained in L(CS) due to the famous result of Rosenkrantz [22, Theorem 4]. According to our previous theorem, L(CS) characterizes the accepting capacity of regulated grammars with unconditional transfer.

• If we admit λ-rules and consider the ⇒_c derivation mode, we know that accepting grammars are at least as powerful as their generating counterparts, but it is unknown whether this inclusion is strict or not.

• If we admit λ-rules and consider the ⇒ derivation mode, the generating and accepting capacity of regulated grammars with unconditional transfer is the same: in any case, it is possible to characterize L(RE).

Furthermore, in the case of TV grammars without appearance checking, a rather tricky simulation seems to be necessary in order to show that accepting and generating TV grammars are equally powerful. Observe that unconditional transfer may lead to incomparable language families, comparing accepting versus generating mode, in the case of pure grammars and in the case of array grammars [2, 13].

Acknowledgments: We are grateful for discussions with our colleagues, especially H. Bordihn, R. Freund, and M. Holzer, on this topic. Moreover, we appreciate the very careful reading of the original manuscript by the two anonymous referees.

References

[1] H. Bordihn and H. Fernau. Accepting grammars with regulation. IJCM, 53:1–18, 1994.
[2] H. Bordihn and H. Fernau. Accepting programmed grammars without nonterminals. In: M. Kutrib and Th. Worsch (eds.), 5. Theorietag "Automaten und Formale Sprachen". Universität Gießen, Arbeitsgruppe Informatik: Bericht 9503, pages 4–16, 1995.
[3] H. Bordihn and H. Fernau. Accepting grammars and systems via context condition grammars. JALC, 1(2), 1996. To appear.
[4] E. Csuhaj-Varjú and A. Meduna. Grammars with context conditions (some results and open problems). EATCS Bull., 53:199–212, 1994.
[5] J. Dassow and Gh. Păun. Regulated Rewriting in Formal Language Theory. Berlin: Springer, 1989.
[6] H. Fernau. Membership for 1-limited ET0L languages is not decidable. EIK, 30(4):191–211, 1994.
[7] H. Fernau. A predicate for separating language classes. EATCS Bull., 56:96–97, June 1995.
[8] H. Fernau. On grammar and language families. Fund. Inform., 25(1):17–34, 1996.
[9] H. Fernau. On unconditional transfer. In: W. Penczek and A. Szałas (eds.), MFCS'96. LNCS 1113, 348–359. Springer, 1996.
[10] H. Fernau. Unconditional transfer in regulated rewriting. Technical Report WSI-96-21, Universität Tübingen (Germany), Wilhelm-Schickard-Institut für Informatik, 1996. One part of this report has been accepted for publication in Acta Inform., volume 34, 1997, while the other part is basically the present article.
[11] H. Fernau and H. Bordihn. Remarks on accepting parallel systems. IJCM, 56:51–67, 1995.
[12] H. Fernau and R. Freund. Bounded parallelism in array grammars used for character recognition. In: P. Perner, P. Wang, and A. Rosenfeld (eds.), SSPR'96. LNCS 1121, 40–49. Springer, 1996.
[13] H. Fernau and R. Freund. Accepting array grammars with control mechanisms. In: Gh. Păun, A. Salomaa (eds.), New Trends in Formal Languages. LNCS 1218, 95–118. Springer, 1997.
[14] H. Fernau and M. Holzer. Accepting multi-agent systems II. Acta Cybern., 12:361–379, 1996.
[15] H. Fernau, M. Holzer, and H. Bordihn. Accepting multi-agent systems: the case of cooperating distributed grammar systems. Computers and AI, 15(2-3):123–139, 1996.
[16] D. Hauschildt and M. Jantzen. Petri net algorithms in the theory of matrix grammars. Acta Inform., 31:719–728, 1994.
[17] F. Hinz and J. Dassow. An undecidability result for regular languages and its application to regulated rewriting. EATCS Bull., 38:168–173, 1989.
[18] A. Meduna. Generalized forbidding grammars. IJCM, 36:31–39, 1990.
[19] V. Mihalache. Accepting cooperating distributed grammar systems with terminal derivation. EATCS Bull., 61:80–84, 1997.
[20] A. Pascu and Gh. Păun. On the planarity of bicolored digraph grammar systems. Discrete Math., 25:195–197, 1979.
[21] Gh. Păun. Normal forms for bicolored-digraph-grammar systems. IJCM, 7:109–118, 1979.
[22] D. J. Rosenkrantz. Programmed grammars and classes of formal languages. JACM, 16(1):107–131, 1969.
[23] G. Rozenberg and A. Salomaa. Context-free grammars with graph-controlled tables. Technical Report DAIMI PB-43, Institute of Mathematics at the University of Aarhus, Jan. 1975.
[24] A. K. Salomaa. On grammars with restricted use of productions. Ann. Acad. Scient. Fennicae, Serie A-454:1–32, 1969.
[25] A. Salomaa. Periodically time-variant context-free grammars. I&C, 17:294–311, 1970.
[26] A. Salomaa. Formal Languages. Academic Press, 1973.
[27] D. Wood. Bicolored digraph grammar systems. RAIRO Informatique théorique et Applications / Theoretical Informatics and Applications, 1:45–50, 1973.
[28] D. Wood. A note on bicolored digraph grammar systems. IJCM, 3:301–308, 1973.