Game-theoretic Model of Computation

arXiv:1702.05073v2 [cs.LO] 22 Feb 2017

Norihiro Yamada
[email protected]
University of Oxford

February 23, 2017

Abstract

We introduce in the present paper an intrinsic notion of "(effective) computability" in game semantics. It is motivated by the fact that strategies in game semantics have been defined to be recursive if they are "computable in an extrinsic sense", i.e., if they are representable by partial recursive functions, so that it has been difficult to regard game semantics as an autonomous foundation of computation. As a consequence, we have formulated a general notion of "algorithms" under the name of effective strategies, giving rise to a mathematical model of computation in the same sense as Turing machines but going beyond computation on natural numbers, e.g., to higher-order computation, solely in terms of games and strategies. It subsumes computation of the programming language PCF, and so it is in particular Turing complete. Notably, effective strategies have a natural notion of types (i.e., games) unlike Turing machines, while they are non-inductively defined as opposed to partial recursive functions, and semantic in contrast with λ-calculi and combinatory logic. Thus, in a sense, we have captured a mathematical (or semantic) notion of computation (and computability) that is more general than the "classical ones" at a fundamental level. Exploiting the flexibility of game semantics, our game-theoretic model of computation is intended to give a mathematical foundation of various (constructive) logics and programming languages.

Contents

1 Introduction
2 Preliminaries: dynamic games and strategies
  2.1 On the tags for disjoint union of sets
  2.2 Dynamic games
  2.3 Dynamic strategies
3 Effective strategies
  3.1 Effective strategies
  3.2 Examples of atomic strategies
  3.3 Turing completeness
4 Conclusion and future work

1 Introduction

Game semantics [A+97, AM99, Hyl97] refers to a particular kind of semantics of logics and programming languages in which types and terms are interpreted as games and strategies, respectively. Historically, game semantics gave the first syntax-independent characterization of the programming language PCF [AJM00, HO00, Nic94]; since then, a variety of games and strategies have been proposed to model various programming features [Abr14, AM99]. An advantage of game semantics is this flexibility: it models a wide range of languages by simply varying constraints on strategies, which enables one to compare and relate different languages while ignoring superfluous syntactic details. Another characteristic is its conceptual naturality: it interprets syntax as dynamic interactions between Player and Opponent of a game, providing an intensional explanation of syntax in a natural and intuitive (yet mathematically precise) manner. However, although game semantics has provided a unified framework to model various logics and programming languages, it has never been formulated as a mathematical model of computation in its own right in the same sense as Turing machines [Tur36, Koz12], the λ-calculus [Chu36, Chu40, B+84], combinatory logic [Sch24, Cur30], etc. More specifically, "(effective) computability" in game semantics has always been extrinsic [Abr14]: a strategy has been defined to be recursive if it is representable by a partial recursive function [AJM00, HO00]. This is mainly because a primary focus of the field has been full abstraction [Win93, Gun92], i.e., to characterize an observational equivalence in syntax in a syntax-independent manner; thus, the field has not been concerned much with (step-by-step) processes of computation. Nevertheless, this is unsatisfactory from a foundational point of view, as it does not give much new insight into the notion of "effective computation".
Also, it raises an intriguing mathematical question in its own right: is there any intrinsic (in the sense that it does not have recourse to the standard definition of computability) notion of "effective computability" in game semantics that is Turing complete (i.e., it contains every Turing-computable, or equivalently partial recursive, function [Cut80, RR67])? Motivated by the above consideration, in this paper we present the notion of effective strategies in game semantics, defined solely in terms of games and strategies. Roughly, a strategy is finitary if its partial function representation, which assigns the next Player's move to a bounded-size partial history of previous moves and is called its table, is finite, and effective if its table is "describable" by a finitary strategy. These notions give a reasonable notion of "computability": finitary strategies are clearly "computable", and so their "descriptions" can be "effectively read off". Note that they are defined intrinsically in the sense stated above. The main idea is to allow strategies to look at only a bounded number of previous moves, and to describe them by a means that is clearly "effectively executable" but more expressive than finite tables, namely finitary strategies. This simple notion subsumes computation of the language PCF, and thus it is Turing complete, providing a positive answer to the question posed above. As a result, we have formulated a general notion of "algorithms", which in turn gives rise to a mathematical model of computation in the same sense as Turing machines but beyond computation on natural numbers, which we call classical computation.
In hindsight, our game-theoretic model of computation may be seen as "interactive Turing machines": its computation proceeds as an interaction between Player and Opponent, whereas Turing machines interact with Opponent only once, as they just receive an input and produce an output (if they halt) once and for all; moreover, the current position in a game serves as the current "state of mind" for effective strategies. It is this generalization of Turing machines that gives the game-theoretic model of computation additional flexibility and computational power, while inheriting their semantic and non-inductive nature. To the best of our knowledge, effective strategies are the first intrinsic characterization of computability in game semantics that is Turing complete. Notably, they are non-inductively


defined as opposed to partial recursive functions, semantic in contrast with λ-calculi and combinatory logic, and equipped with the notion of types, i.e., games, unlike Turing machines. Thus, in a sense, we have captured a mathematical notion of "computation" that is more general than the classical one, e.g., higher-order computation [LN15], at a fundamental level. Therefore, exploiting the flexibility of game semantics, our model of computation has the potential to give a mathematical foundation of a wide range of logics and programming languages.

The rest of the paper proceeds as follows. We define our games and strategies in Section 2, and then define effective strategies and show that they may interpret every term of the language PCF in Section 3. Finally, we draw a conclusion and propose some future work in Section 4.

◮ Notation. We use the following notations throughout the paper:

◮ We use bold letters s, t, u, v, etc. for sequences, in particular ǫ for the empty sequence, and letters a, b, c, d, m, n, x, y, z, etc. for elements of sequences. We often abbreviate a finite sequence s = (x1, x2, ..., xn) as x1 x2 ... xn and write s_i as another notation for x_i.

◮ A concatenation of sequences is represented by a juxtaposition of them, but we write as, tb, ucv for (a)s, t(b), u(c)v, etc. We sometimes write s.t for st for readability.

◮ We write even(s) (resp. odd(s)) if s is of even-length (resp. odd-length). For a set S of sequences, we define S^even = {s ∈ S | even(s)} and S^odd = {t ∈ S | odd(t)}.

◮ We write s ≼ t if s is a prefix of t. For a set S of sequences, pref(S) = {s | ∃t ∈ S. s ≼ t}.

◮ For a partially ordered set P and a subset S ⊆ P, sup(S) denotes the supremum of S.

◮ X^* = {x1 x2 ... xn | n ∈ N, ∀i ∈ {1, 2, ..., n}. xi ∈ X} for each set X.

◮ For a function f : A → B and a subset S ⊆ A, we define f ↾ S : S → B to be the restriction of f to S. Also, f^* : A^* → B^* is defined by f^*(a1 a2 ... an) = f(a1) f(a2) ... f(an).

◮ Given sets X1, X2, ..., Xn, for each i ∈ {1, 2, ..., n} we write πi : X1 × X2 × ··· × Xn → Xi for the i-th projection function (x1, x2, ..., xn) ↦ xi.

◮ We write x ↓ if an element x is defined and x ↑ otherwise.
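Several of these sequence notations recur throughout the development, so it may help to see them transcribed concretely; the following is a minimal sketch in Python, modeling sequences as tuples (all function names are ours, not the paper's):

```python
def pref(S):
    """pref(S): all prefixes of sequences in S (including the empty one)."""
    return {s[:i] for s in S for i in range(len(s) + 1)}

def even_part(S):
    """S^even: the even-length members of S (S^odd is analogous)."""
    return {s for s in S if len(s) % 2 == 0}

def star(f):
    """f^*: the pointwise extension of f to sequences."""
    return lambda s: tuple(f(x) for x in s)
```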

2 Preliminaries: dynamic games and strategies

This section presents our games and strategies. They are essentially the "dynamic refinement" of McCusker's variant [AM99, McC98] proposed by the present author and Abramsky in [YA16]. Its main purpose is to refine the composition of strategies into "non-normalizing composition plus hiding" in order to capture dynamics and intensionality in computation. We have chosen this variant since the non-normalizing composition preserves "atomic computational steps" in strategies, and thus effective strategies are closed under it (but not under the usual composition). However, we need a minor modification: a particular implementation of tags for disjoint union of sets of moves (for constructions on games) has to be adopted, as manipulations of the tags must be "effectively executable" by strategies, and strategies should behave "consistently" up to permutations of tags in the exponential ! as in [AJM00, McC98].


2.1 On the tags for disjoint union of sets

In game semantics, we often take disjoint union of sets (of moves) when we form compound games such as the tensor ⊗, where "tags" for such disjoint union are usually treated informally for brevity [AM99, McC98]. However, since we are concerned with "effective computability", including how to "effectively" handle "tags", we have to formulate them rigorously. For this reason, we introduce:

◮ Definition 2.1.1 (Effective tags). An effective tag is a finite sequence over the alphabet Σ = {♯, |}, where ♯, | are arbitrarily fixed symbols. We write i for the sequence || ... | of i occurrences of |, for each i ∈ N.

◮ Definition 2.1.2 (Decoding and encoding). The decoding function de : Σ^* → N^* is defined by de(γ) = (i1, i2, ..., ik) ∈ N^* for all γ ∈ Σ^*, where γ = i1 ♯ i2 ♯ ... i_{k−1} ♯ ik (each ij written in the unary notation above), and the encoding function en : N^* → Σ^* by en(j1, j2, ..., jl) = j1 ♯ j2 ♯ ... j_{l−1} ♯ jl for all (j1, j2, ..., jl) ∈ N^*.

Clearly, the functions de : Σ^* ⇆ N^* : en are mutually inverse (n.b. they both map ǫ to itself). In fact, effective tags γ are to represent finite sequences de(γ) of natural numbers. However, effective tags are not sufficient for our purpose: for "nested exponentials !", we need to "effectively" associate a natural number to each finite sequence of natural numbers in an "effectively" invertible way. Of course this is possible, as there is a computable bijection ⟨ ⟩ : N^* → N whose inverse is also computable, by an elementary fact from computability theory [Cut80, RR67]; but we cannot rely on it, as we are aiming at developing an autonomous foundation of "effective computability". On the other hand, this bijection is necessary only for manipulating effective tags, and so we would like to avoid an involved mechanism for it. Our solution for this problem is to simply introduce some symbols to denote the bijection:

◮ Definition 2.1.3 (Extended effective tags). An extended effective tag is an expression e ∈ (Σ ∪ {⟨, ⟩})^* generated by the grammar e ::= γ | e1 ♯ e2 | ⟨e⟩, where γ ranges over effective tags.
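The decoding and encoding of Definition 2.1.2 amount to parsing and printing unary numerals; the following is a sketch, with '#' and '|' as ASCII stand-ins for the symbols ♯ and |:

```python
SHARP, BAR = '#', '|'  # ASCII stand-ins for the alphabet symbols ♯ and |

def en(js):
    """Encoding en : N* -> Sigma*, e.g. en((2, 3)) = '||#|||'."""
    return SHARP.join(BAR * j for j in js)

def de(gamma):
    """Decoding de : Sigma* -> N*; by convention the empty tag decodes to
    the empty sequence, matching "they both map ǫ to itself"."""
    return tuple(len(block) for block in gamma.split(SHARP)) if gamma else ()
```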

◮ Definition 2.1.4 (Extended decoding). The extended decoding function ede : T → N^* is defined by ede(γ) = de(γ), ede(e1 ♯ e2) = ede(e1) ede(e2), ede(⟨e⟩) = ⟨ede(e)⟩, where T is the set of extended effective tags, and ⟨ ⟩ : N^* → N is any computable bijection, fixed throughout the present paper, such that ⟨i1, i2, ..., ik⟩ ≠ ⟨j1, j2, ..., jl⟩ whenever k ≠ l (see, e.g., [Cut80]).

Of course, we lose the bijectivity between Σ^* and N^* for extended effective tags, but in return we may "symbolically execute" the bijection ⟨ ⟩ : N^* → N by just inserting the symbols ⟨, ⟩. From now on, the word tags refers to extended effective tags, and we write e, f, g, h, etc. for tags.
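Definition 2.1.4 only assumes some computable bijection ⟨ ⟩ : N^* → N separating sequences of different lengths; it does not fix one. For concreteness, one candidate (our choice for illustration, not the paper's) is the left fold of the Cantor pairing function, which is a bijection N^* → N and hence separates lengths automatically:

```python
import math

def pair(x, y):
    """Cantor pairing, a bijection N x N -> N."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of the Cantor pairing."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

def seq_to_nat(seq):
    """A bijection N* -> N: fold the pairing over the sequence."""
    c = 0
    for a in seq:
        c = pair(c, a) + 1  # the +1 keeps 0 reserved for the empty sequence
    return c

def nat_to_seq(c):
    """Inverse of seq_to_nat."""
    out = []
    while c > 0:
        c, a = unpair(c - 1)
        out.append(a)
    return tuple(reversed(out))
```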

◮ Definition 2.1.5 (Tagged elements). A tagged element is any pair [m]_e = (m, e) with e ∈ T.

◮ Notation. We often abbreviate a tagged element [m]_e as m if the tag e is not important.

2.2 Dynamic games

Our games are essentially dynamic games introduced in [YA16] equipped with an equivalence relation on positions that “ignores” permutations of tags in exponential ! as in [AJM00, McC98]. The main idea of dynamic games is to introduce a distinction between internal and external moves; internal moves constitute “internal communication” between strategies, and they are to be a posteriori hidden by the hiding operation. Conceptually, internal moves are “invisible” to


Opponent, as they represent how Player internally calculates the next external move. In this manner, dynamic games provide a "universe of computation" in which intensionality and dynamics in computation are represented by internal moves and the hiding operation, respectively. We first quickly review their basic definitions; see [YA16] for the details. As the games defined in [AM99, McC98], dynamic games are based on two preliminary concepts: arenas and legal positions. An arena defines the basic components of a game, which in turn induces a set of legal positions that specifies the basic rules of the game.

◮ Definition 2.2.1 (Arenas [YA16]). A (dynamic) arena is a triple G = (M_G, λ_G, ⊢_G), where:

◮ M_G is a set of tagged elements called moves such that {π1(m) | m ∈ M_G} is finite

◮ λ_G : M_G → {O, P} × {Q, A} × N is a function called the labeling function, where O, P, Q, A are arbitrarily fixed symbols, that satisfies sup({λ_G^N(m) | m ∈ M_G}) ∈ N

◮ ⊢_G ⊆ ({⋆} ∪ M_G) × M_G is a relation called the enabling relation, where ⋆ is an arbitrarily fixed symbol such that ⋆ ∉ M_G, that satisfies:

⊲ (E1) If ⋆ ⊢_G m, then λ_G(m) = (O, Q, 0), and n = ⋆ whenever n ⊢_G m

⊲ (E2) If m ⊢_G n and λ_G^QA(n) = A, then λ_G^QA(m) = Q and λ_G^N(m) = λ_G^N(n)

⊲ (E3) If m ⊢_G n and m ≠ ⋆, then λ_G^OP(m) ≠ λ_G^OP(n)

⊲ (E4) If m ⊢_G n, m ≠ ⋆ and λ_G^N(m) ≠ λ_G^N(n), then λ_G^OP(m) = O (and λ_G^OP(n) = P)

in which λ_G^OP = λ_G ; π1 : M_G → {O, P}, λ_G^QA = λ_G ; π2 : M_G → {Q, A}, and λ_G^N = λ_G ; π3 : M_G → N. A move m ∈ M_G is initial if ⋆ ⊢_G m, an O-move (resp. a P-move) if λ_G^OP(m) = O (resp. if λ_G^OP(m) = P), a question (resp. an answer) if λ_G^QA(m) = Q (resp. if λ_G^QA(m) = A), and internal (resp. external) if λ_G^N(m) > 0 (resp. if λ_G^N(m) = 0). A sequence s ∈ M_G^* is called d-complete (d ∈ N ∪ {ω}) if it ends with an external or d′-internal move with d′ > d, where ω is the least transfinite ordinal. We write M_G^Init for the set of all initial moves in G.

That is, our variant of arena is an arena in [AM99] equipped with the degree of internality λ_G^N on moves¹ and satisfying some additional axioms:

◮ The set {π1(m) | m ∈ M_G} is required to be finite, so that each move is distinguishable.

◮ The condition on the labeling function requires an upper bound on the degrees of internality.

◮ E1 adds λ_G^N(m) = 0 if m ∈ M_G is initial, as Opponent cannot "see" internal moves.

◮ E2 additionally requires the degrees of internality of a "QA-pair" to be the same.

◮ E4 determines that only Player can make a move for a previous move if they have different degrees of internality, because internal moves are "invisible" to Opponent.

From now on, the word arenas refers to the variant defined above. Given an arena, we are interested in certain finite sequences of its moves equipped with a justifying relation:

◮ Definition 2.2.2 (Justified sequences [HO00, AM99, McC98]). A justified sequence (j-sequence) in an arena G is a finite sequence s ∈ M_G^*, in which each non-initial move n is associated with (or points at) a unique move m, called the justifier of n in s, that occurs previously in s and satisfies m ⊢_G n. We say that n is justified by m, or there is a pointer from n to m.

¹ We need all natural numbers for λ_G^N, not only the internal/external (I/E) distinction, to define a step-by-step execution of the hiding operation (see [YA16] for the details).


◮ Notation. We write J_s(n) for the justifier of a non-initial move n in a j-sequence s, where J_s is the "function of pointers in s", and J_G for the set of all j-sequences in an arena G.

The idea is that each non-initial move in a j-sequence must be made for a specific previous move, called its justifier. Note that the first element m of each non-empty j-sequence ms ∈ J_G is an initial move in G; we call m the opening move of ms and write O(ms) for it. We may consider justifiers from the "external viewpoint":

◮ Definition 2.2.3 (External justifiers [YA16]). Let G be an arena, and s ∈ J_G, d ∈ N ∪ {ω}. Each non-initial move n in s has a unique sequence of justifiers n m1 m2 ... mk m (k ⩾ 0), i.e., J_s(n) = m1, J_s(m1) = m2, ..., J_s(m_{k−1}) = mk, J_s(mk) = m, such that m1, m2, ..., mk are d′-internal with 0 < d′ ⩽ d but m is not. We call m the d-external justifier of n in s.

◮ Notation. We usually write J_s^{⊖d}(n) for the d-external justifier of n in a j-sequence s.

◮ Definition 2.2.4 (External justified subsequences [YA16]). Let s be a j-sequence in an arena G and d ∈ N ∪ {ω}. The d-external justified (j-) subsequence H_G^d(s) of s is obtained from s by deleting the d′-internal moves, 0 < d′ ⩽ d, equipped with the pointers J_s^{⊖d}.

◮ Definition 2.2.5 (Hiding operation on arenas [YA16]). Let d ∈ N ∪ {ω}, and G an arena. The arena H^d(G) is defined by M_{H^d(G)} = {m ∈ M_G | λ_G^N(m) = 0 ∨ λ_G^N(m) > d}, λ_{H^d(G)} = λ_G^{⊖d} ↾ M_{H^d(G)}, where λ_G^{⊖d} = ⟨λ_G^OP, λ_G^QA, n ↦ λ_G^N(n) ⊖ d⟩, with x ⊖ d = x − d if x > d and 0 otherwise for all x ∈ N, and

m ⊢_{H^d(G)} n ⇔ ∃k ∈ N, m1, m2, ..., m_{2k−1}, m_{2k} ∈ M_G \ M_{H^d(G)}. m ⊢_G m1 ∧ m1 ⊢_G m2 ∧ ··· ∧ m_{2k−1} ⊢_G m_{2k} ∧ m_{2k} ⊢_G n

(note that m ⊢_G n if k = 0). I.e., H^d(G) is obtained from G by deleting all d′-internal moves, 0 < d′ ⩽ d, decreasing by d the degree of internality of the remaining moves, and "concatenating" the enabling relation to form the "d-external" one. We clearly have:

◮ Lemma 2.2.6 (Closure of arenas and j-sequences under hiding [YA16]). If G is an arena, then so is H^d(G) for all d ∈ N ∪ {ω}, such that H^0(G) = G and H_G^d(s) ∈ J_{H^d(G)} for all s ∈ J_G.

Next, let us recall the notion of the "relevant part" of previous moves, called views:

◮ Definition 2.2.7 (Views [HO00, AM99, McC98]). Given a j-sequence s in an arena G, we define the Player view (P-view) ⌈s⌉_G and the Opponent view (O-view) ⌊s⌋_G by induction on the length of s as follows:

◮ ⌈ǫ⌉_G = ǫ

◮ ⌈sm⌉_G = ⌈s⌉_G.m if m is a P-move

◮ ⌈sm⌉_G = m if m is initial

◮ ⌈smtn⌉_G = ⌈s⌉_G.mn if n is an O-move with J_{smtn}(n) = m

◮ ⌊ǫ⌋_G = ǫ

◮ ⌊sm⌋_G = ⌊s⌋_G.m if m is an O-move

◮ ⌊smtn⌋_G = ⌊s⌋_G.mn if n is a P-move with J_{smtn}(n) = m

where the justifiers of the remaining moves in ⌈s⌉_G (resp. ⌊s⌋_G) are unchanged if they occur in ⌈s⌉_G (resp. ⌊s⌋_G) and undefined otherwise.

◮ Notation. We omit the subscript G in ⌈s⌉_G, ⌊s⌋_G when the underlying game G is obvious.

The idea behind this definition is as follows. Given a "position" or prefix tm of a j-sequence s in an arena G such that m is a P-move (resp. an O-move), the P-view ⌈t⌉ (resp. the O-view ⌊t⌋) is intended to be the currently "relevant" part of t for Player (resp. Opponent). That is, Player (resp. Opponent) is concerned only with the last O-move (resp. P-move), its justifier, and that justifier's "concern", i.e., its P-view (resp. O-view), which then recursively proceeds. We are now ready to define:

◮ Definition 2.2.8 (Legal positions [YA16]). A (dynamic) legal position in an arena G is a sequence s ∈ M_G^* (equipped with justifiers) that satisfies:

◮ Justification. s is a j-sequence in G.

◮ Alternation. If s = s1 m n s2, then λ_G^OP(m) ≠ λ_G^OP(n).

◮ Generalized visibility. If s = t m u with m non-initial, and d ∈ N ∪ {ω} satisfies λ_G^N(m) = 0 ∨ λ_G^N(m) > d, then J_s^{⊖d}(m) occurs in ⌈H_G^d(t)⌉_{H^d(G)} if m is a P-move, and it occurs in ⌊H_G^d(t)⌋_{H^d(G)} if m is an O-move.

◮ IE-switch. If s = s1 m n s2 with λ_G^N(m) ≠ λ_G^N(n), then m is an O-move.
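The P-view of Definition 2.2.7, on which generalized visibility relies, can be computed in a single backward scan over a play; the following is a sketch, representing a justified sequence abstractly as a list of (polarity, justifier-index) pairs (our encoding; move names and tags are elided):

```python
def p_view(s):
    """Return the indices of s that survive in the P-view of s.
    s is a list of (polarity, justifier) pairs, where polarity is 'O' or 'P'
    and justifier is an index into s, or None for an initial move."""
    view = []
    i = len(s) - 1
    while i >= 0:
        polarity, justifier = s[i]
        view.append(i)
        if polarity == 'P':
            i -= 1                  # a P-move is kept and scanning continues
        elif justifier is None:
            break                   # an initial O-move restarts the view
        else:
            view.append(justifier)  # an O-move jumps to its justifier
            i = justifier - 1
    return list(reversed(view))
```

For instance, when the last O-move points back at the opening move, everything in between is skipped, exactly as the clause for ⌈smtn⌉ prescribes.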

◮ Notation. We write L_G for the set of all legal positions in an arena G.

I.e., our (dynamic) legal positions are legal positions in [AM99] satisfying additional axioms:

◮ Generalized visibility is a natural generalization of visibility [HO00, AM99, McC98]; it requires that visibility holds after any iteration of the "hiding operation on arenas" [YA16].

◮ IE-switch states that only Player can change the degree of internality during a play, because internal moves are "invisible" to Opponent.

From now on, the word legal positions refers to the variant defined above by default. Next, note that in a legal position in an arena there may be several initial moves; the legal position consists of chains of justifiers initiated by such initial moves, and chains with the same initial move form a thread. Formally:

◮ Definition 2.2.9 (Threads [AM99, McC98]). Let G be an arena, and s ∈ L_G. Assume that m is an occurrence of a move in s. The chain of justifiers from m is a sequence m0 m1 ... mk ∈ M_G^* such that k ⩾ 0, mk = m, J_s(mk) = m_{k−1}, J_s(m_{k−1}) = m_{k−2}, ..., J_s(m1) = m0, and m0 is initial. In this case, we say that m is hereditarily justified by m0. The subsequence of s consisting of the chains of justifiers in which m0 occurs is called the thread of m0 in s. An occurrence of an initial move is often called an initial occurrence.

◮ Notation. We write s ↾ I, where s ∈ L_G and I is a set of initial occurrences in s, for the subsequence of s consisting of the threads of initial occurrences in I, and define s ↾ m = s ↾ {m}.

We are now ready to define our variant of games:

◮ Definition 2.2.10 (Games). A (dynamic) game is a tuple G = (M_G, λ_G, ⊢_G, P_G, ≃_G), where:

◮ The triple (M_G, λ_G, ⊢_G) forms an arena


◮ P_G is a subset of L_G, whose elements are called (valid) positions in G, that satisfies:

⊲ (V1) P_G is non-empty and prefix-closed (i.e., sm ∈ P_G ⇒ s ∈ P_G)

⊲ (V2) If s ∈ P_G and I is a set of initial occurrences in s, then s ↾ I ∈ P_G

⊲ (V3) For any sm, s′m′ ∈ P_G^odd and i ∈ N such that i < λ_G^N(m) = λ_G^N(m′), if H_G^i(s) = H_G^i(s′), then m = m′ and J_{sm}^{⊖i}(m) = J_{s′m′}^{⊖i}(m′)

◮ ≃_G is an equivalence relation on P_G called the identification of positions that satisfies:

⊲ (I1) s ≃_G t ⇒ π1^*(s) = π1^*(t)

⊲ (I2) sm ≃_G tn ⇒ s ≃_G t ∧ λ_G(m) = λ_G(n) ∧ (m, n ∈ M_G^Init ∨ ∃i ∈ N⁺. J_{sm}(m) = s_i ∧ J_{tn}(n) = t_i)

⊲ (I3) ∀d ∈ N ∪ {ω}. s ≃_G^d t ∧ sm ∈ P_G ⇒ ∃tn ∈ P_G. sm ≃_G^d tn, where u ≃_G^d v ⇔ ∃u′, v′ ∈ P_G. u′ ≃_G v′ ∧ H_G^d(u′) = H_G^d(u) ∧ H_G^d(v′) = H_G^d(v) for all u, v ∈ P_G.

I.e., our variant of games is dynamic games [YA16] equipped with an identification of positions that is to "ignore" permutations of tags in the exponential ! as in [AJM00, McC98].

◮ Definition 2.2.11 (Finitely well-opened games). A game G is finitely well-opened if [m]_e ∈ M_G^Init implies e = ǫ, and s.[m] ∈ P_G with [m] initial implies s = ǫ.

I.e., a game is finitely well-opened if it is well-opened [AM99, McC98] and its initial moves have the empty tag ǫ only. From now on, games refer to finitely well-opened (dynamic) games.

◮ Example 2.2.12. The terminal game I is defined by I = (∅, ∅, ∅, {ǫ}, {(ǫ, ǫ)}).

◮ Example 2.2.13. The boolean game 2 is defined by:

◮ M_2 = {q, ⊤, ⊥}, where each move has the empty tag ǫ

◮ λ_2 : q ↦ (O, Q, 0), ⊤ ↦ (P, A, 0), ⊥ ↦ (P, A, 0)

◮ ⊢_2 = {(⋆, q), (q, ⊤), (q, ⊥)}

◮ P_2 = pref({q.⊤, q.⊥}), where each non-initial move is justified by q

◮ s ≃_2 t ⇔ s = t.

The positions q.⊤, q.⊥ are intended to represent true and false, respectively.

◮ Example 2.2.14. The natural number game N is defined by:

◮ M_N = {q̂, q, •, ♭}, where each move has the tag ǫ, and we often abbreviate q̂ as q

◮ λ_N : q̂ ↦ (O, Q, 0), q ↦ (O, Q, 0), • ↦ (P, A, 0), ♭ ↦ (P, A, 0)

◮ ⊢_N = {(⋆, q̂), (q̂, ♭), (q̂, •), (q, ♭), (q, •), (•, q)}

◮ P_N = pref({q̂.(•.q)^n.♭ | n ∈ N}), where each non-initial move is justified by the last move

◮ s ≃_N t ⇔ s = t.

The position q̂.(•.q)^n.♭ is to represent n ∈ N. Let us define n = pref({q̂.(•.q)^n.♭})^even.
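The plays of N are easy to generate mechanically; the following sketch produces the maximal position representing a given n and reads n back off it (ASCII names 'qhat', 'dot', 'flat' stand in for the moves q̂, •, ♭):

```python
def position_of(n):
    """The maximal position q̂.(•.q)^n.♭ of the game N representing n."""
    pos = ['qhat']              # Opponent's initial question q̂
    for _ in range(n):
        pos += ['dot', 'q']     # Player answers •, Opponent asks q again
    pos.append('flat')          # Player ends the play with ♭
    return tuple(pos)

def value_of(pos):
    """Read n back off a maximal position of N."""
    return pos.count('dot')
```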

◮ Example 2.2.15. The tag game G(T) is defined by:

◮ M_{G(T)} = {q̂_T, q_T, ♯, |, X, ⟨, ⟩}, where each move has the tag ǫ, and q̂_T is often written q_T

◮ λ_{G(T)} : q̂_T ↦ (O, Q, 0), q_T ↦ (O, Q, 0), e ↦ (P, A, 0), where e ∈ {♯, |, X, ⟨, ⟩}

◮ ⊢_{G(T)} = {(⋆, q̂_T)} ∪ {(x, y) | x ∈ {q̂_T, q_T}, y ∈ {♯, |, X, ⟨, ⟩}} ∪ {(e, q_T) | e ∈ {♯, |, ⟨, ⟩}}

◮ P_{G(T)} = pref({q̂_T e1 q_T e2 ... q_T ek q_T X | k ∈ N, e1 e2 ... ek ∈ T}), where each non-initial move is justified by the last move

◮ s ≃_{G(T)} t ⇔ s = t.

The position q̂_T e1 q_T e2 ... q_T ek q_T X is intended to represent the tag e1 e2 ... ek ∈ T.

◮ Definition 2.2.16 (Subgames). A subgame of a game G is a game H that satisfies M_H ⊆ M_G, λ_H = λ_G ↾ M_H, ⊢_H ⊆ ⊢_G ∩ (({⋆} ∪ M_H) × M_H), P_H ⊆ P_G, and ≃_H ⊆ ≃_G. In this case, we write H ⊴ G.

◮ Definition 2.2.17 (Hiding operation on games [YA16]). The d-hiding operation H^d on games for each d ∈ N ∪ {ω} is defined as follows. Given a game G, H^d(G) is the game such that (M_{H^d(G)}, λ_{H^d(G)}, ⊢_{H^d(G)}) is the arena H^d(G), P_{H^d(G)} = {H_G^d(s) | s ∈ P_G}, and H_G^d(s) ≃_{H^d(G)} H_G^d(t) ⇔ s ≃_G^d t. A game G is static if H^ω(G) = G.
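The hiding operation on sequences (Definition 2.2.4) and the degree adjustment x ⊖ d (Definition 2.2.5) are simple to transcribe; a sketch over (move, degree-of-internality) pairs, with pointers omitted for brevity (our encoding):

```python
def ominus(x, d):
    """Truncated subtraction x ⊖ d: x - d if x > d, and 0 otherwise."""
    return x - d if x > d else 0

def hide(s, d):
    """H^d(s): delete every d'-internal move with 0 < d' <= d, and lower the
    degree of internality of the remaining moves by d."""
    return [(m, ominus(deg, d)) for (m, deg) in s if deg == 0 or deg > d]
```

Note that hiding is a genuinely step-by-step process: hiding by 1 twice agrees with hiding by 2, mirroring the iterated hiding that generalized visibility quantifies over.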

◮ Theorem 2.2.18 (Closure of games under hiding). For any game G, H^d(G) forms a well-defined game for all d ∈ N ∪ {ω}. Moreover, if H ⊴ G, then H^d(H) ⊴ H^d(G) for all d ∈ N ∪ {ω}.

Proof. First, ≃_{H^d(G)} is well-defined, as H_G^d(s) ≃_{H^d(G)} H_G^d(t) does not depend on the representatives s, t ∈ P_G. By the corresponding result in [YA16], it suffices to verify the preservation of the axioms I1, I2, I3 under H^d. Then I1, I2 for H^d(G) immediately follow from I2 on G. For I3, if H_G^d(s) ≃_{H^d(G)}^{d′} H_G^d(t) and H_G^d(s).m ∈ P_{H^d(G)}, where we assume d ≠ ω ∧ d′ ≠ ω since otherwise I3 on H^d(G) is reduced to that on G, then ∃s′m ∈ P_G. H_G^d(s′m) = H_G^d(s).m and H_G^{d+d′}(s′) = H_G^{d+d′}(s) ≃_{H^{d+d′}(G)} H_G^{d+d′}(t) (see [YA16] for the proof); thus, by I3 on G, we may conclude ∃tn ∈ P_G. s′m ≃_G^{d+d′} tn, whence ∃H_G^d(t).n ∈ P_{H^d(G)}. H_G^d(s).m = H_G^d(s′m) ≃_{H^d(G)}^{d′} H_G^d(tn) = H_G^d(t).n. Finally, for H ⊴ G ⇒ H^d(H) ⊴ H^d(G), again by the result in [YA16] it suffices to show ≃_H ⊆ ≃_G ⇒ ≃_{H^d(H)} ⊆ ≃_{H^d(G)}, but this is immediate from the definition. □

At the end of the present section, we define constructions on games based on the standard ones [McC98, AM99, YA16] with tags made explicit, equipping them with constructions on the identification of positions. For this, like the variable convention [Han94], we assume that there are countably infinite copies of the symbols |, ♯, ⟨, ⟩, and write |_α, ♯_β, etc., where α, β ∈ {0, 1}^*, for these copies. However, for readability, we usually omit these subscripts unless necessary. Also, we define e_α = (e1)_α (e2)_α ... (ek)_α for all e = e1 e2 ... ek ∈ T, α ∈ {0, 1}^*.

◮ Definition 2.2.19 (Tensor [AM99, McC98]). The tensor (product) A ⊗ B of games A, B is defined by:

◮ M_{A⊗B} = {[(m, 0)]_{e_0} | [m]_e ∈ M_A} ∪ {[(m′, 1)]_{e′_1} | [m′]_{e′} ∈ M_B}

◮ λ_{A⊗B}([(m, i)]_{e_i}) = λ_A([m]_e) if i = 0, and λ_B([m]_e) if i = 1

◮ ⋆ ⊢_{A⊗B} [(m, i)]_{e_i} ⇔ (i = 0 ∧ ⋆ ⊢_A [m]_e) ∨ (i = 1 ∧ ⋆ ⊢_B [m]_e)

◮ [(m, i)]_{e_i} ⊢_{A⊗B} [(m′, j)]_{e′_j} ⇔ (i = 0 = j ∧ [m]_e ⊢_A [m′]_{e′}) ∨ (i = 1 = j ∧ [m]_e ⊢_B [m′]_{e′})

◮ P_{A⊗B} = {s ∈ L_{A⊗B} | s ↾ 0 ∈ P_A, s ↾ 1 ∈ P_B}, where s ↾ i is the subsequence of s, with the justifiers in s, that consists of the moves [(m, i)]_{e_i} changed into [m]_e

◮ s ≃_{A⊗B} t ⇔ (π2 ∘ π1)^*(s) = (π2 ∘ π1)^*(t) ∧ s ↾ 0 ≃_A t ↾ 0 ∧ s ↾ 1 ≃_B t ↾ 1.

◮ Definition 2.2.20 (Linear implication [AM99, McC98]). The linear implication A ⊸ B from a static game A to another game B is defined by:

◮ M_{A⊸B} = {[(m, 0)]_{e_0} | [m]_e ∈ M_A} ∪ {[(m′, 1)]_{e′_1} | [m′]_{e′} ∈ M_B}

◮ λ_{A⊸B}([(m, i)]_{e_i}) = λ̄_A([m]_e) if i = 0, and λ_B([m]_e) if i = 1, where λ̄_A = ⟨λ̄_A^OP, λ_A^QA, λ_A^N⟩ and λ̄_A^OP(x) = P if λ_A^OP(x) = O, and O otherwise

◮ ⋆ ⊢_{A⊸B} [(m, i)]_{e_i} ⇔ i = 1 ∧ ⋆ ⊢_B [m]_e

◮ [(m, i)]_{e_i} ⊢_{A⊸B} [(m′, j)]_{e′_j} ⇔ (i = 0 = j ∧ [m]_e ⊢_A [m′]_{e′}) ∨ (i = 1 = j ∧ [m]_e ⊢_B [m′]_{e′}) ∨ (i = 1 ∧ j = 0 ∧ ⋆ ⊢_B [m]_e ∧ ⋆ ⊢_A [m′]_{e′})

◮ P_{A⊸B} = {s ∈ L_{A⊸B} | s ↾ 0 ∈ P_A, s ↾ 1 ∈ P_B}

◮ s ≃_{A⊸B} t ⇔ (π2 ∘ π1)^*(s) = (π2 ∘ π1)^*(t) ∧ s ↾ 0 ≃_A t ↾ 0 ∧ s ↾ 1 ≃_B t ↾ 1.

◮ Definition 2.2.21 (Product [AM99, McC98]). The product A & B of games A, B is defined by:

◮ M_{A&B} = {[(m, 0)]_{e_0} | [m]_e ∈ M_A} ∪ {[(m′, 1)]_{e′_1} | [m′]_{e′} ∈ M_B}

◮ λ_{A&B}([(m, i)]_{e_i}) = λ_A([m]_e) if i = 0, and λ_B([m]_e) if i = 1

◮ ⋆ ⊢_{A&B} [(m, i)]_{e_i} ⇔ (i = 0 ∧ ⋆ ⊢_A [m]_e) ∨ (i = 1 ∧ ⋆ ⊢_B [m]_e)

◮ [(m, i)]_{e_i} ⊢_{A&B} [(m′, j)]_{e′_j} ⇔ (i = 0 = j ∧ [m]_e ⊢_A [m′]_{e′}) ∨ (i = 1 = j ∧ [m]_e ⊢_B [m′]_{e′})

◮ P_{A&B} = {s ∈ L_{A&B} | s ↾ 0 ∈ P_A, s ↾ 1 = ǫ} ∪ {t ∈ L_{A&B} | t ↾ 0 = ǫ, t ↾ 1 ∈ P_B}

◮ s ≃_{A&B} t ⇔ (π2 ∘ π1)^*(s) = (π2 ∘ π1)^*(t) ∧ s ↾ 0 ≃_A t ↾ 0 ∧ s ↾ 1 ≃_B t ↾ 1.

◮ Definition 2.2.22 (Generalized product [YA16]). The generalized product L & R of games L, R such that H^ω(L) ⊴ C ⊸ A and H^ω(R) ⊴ C ⊸ B for some static games A, B, C is defined by:

◮ M_{L&R} = {[(m, 0)]_{e_0} | [m]_e ∈ M_{(C,0)} ∩ (M_L ∪ M_R)} ∪ {[(m, 0)]_{e_0} | [m]_e ∈ M_L \ M_{(C,0)}} ∪ {[(m′, 1)]_{e′_1} | [m′]_{e′} ∈ M_R \ M_{(C,0)}}, where M_{(C,0)} = {[(c, 0)]_{e_0} | [c]_e ∈ M_C}

◮ λ_{L&R}([(m, i)]_{e_i}) = λ_C([c]_{e′}) if [(m, i)]_{e_i} = [((c, 0), 0)]_{e′_{00}} and [(c, 0)]_{e′_0} ∈ M_{(C,0)}; λ_L([m]_e) if i = 0 and [m]_e ∉ M_{(C,0)}; and λ_R([m]_e) if i = 1

◮ ⋆ ⊢_{L&R} [(m, i)]_{e_i} ⇔ (i = 0 ∧ ⋆ ⊢_L [m]_e) ∨ (i = 1 ∧ ⋆ ⊢_R [m]_e)

◮ [(m, i)]_{e_i} ⊢_{L&R} [(m′, j)]_{e′_j} ⇔ (i = 0 = j ∧ [m]_e ⊢_L [m′]_{e′}) ∨ (i = 1 = j ∧ [m]_e ⊢_R [m′]_{e′}) ∨ ([m]_e, [m′]_{e′} ∈ M_{(C,0)} ∧ [m]_e ⊢_R [m′]_{e′}) ∨ (i = 1 ∧ j = 0 ∧ [m]_e ⊢_R [m′]_{e′})

◮ P_{L&R} = {s ∈ L_{L&R} | s ↾ L ∈ P_L, s ↾ R = ǫ} ∪ {t ∈ L_{L&R} | t ↾ L = ǫ, t ↾ R ∈ P_R}, where s ↾ L (resp. s ↾ R) is the subsequence of s, with the justifiers in s, that consists of the moves [(m, i)]_{e_i} hereditarily justified by an opening move [((a, 1), 0)] with [a] ∈ M_A^Init (resp. [((b, 1), 1)] with [b] ∈ M_B^Init) changed into [m]_e

◮ s ≃_{L&R} t ⇔ (π2 ∘ π1)^*(s) = (π2 ∘ π1)^*(t) ∧ s ↾ L ≃_L t ↾ L ∧ s ↾ R ≃_R t ↾ R.

◮ Definition 2.2.23 (Exponential [McC98]). The exponential !A of a game A is defined by:

◮ M_{!A} = {[m]_{⟨f⟩♯e} | [m]_e ∈ M_A, f ∈ T} and λ_{!A}([m]_{⟨f⟩♯e}) = λ_A([m]_e)

◮ ⋆ ⊢_{!A} [m]_{⟨f⟩♯e} ⇔ ⋆ ⊢_A [m]_e, and [m]_{⟨f⟩♯e} ⊢_{!A} [m′]_{⟨f′⟩♯e′} ⇔ f = f′ ∧ [m]_e ⊢_A [m′]_{e′}

◮ P_{!A} = {s ∈ L_{!A} | ∀i ∈ N. s ↾ i ∈ P_A}, where s ↾ i is the subsequence of s, with the justifiers in s, consisting of the moves [m]_{⟨f⟩♯e} such that ede(⟨f⟩) = i but changed into [m]_e

◮ s ≃_{!A} t ⇔ ∃φ ∈ P(N). (π1 ∘ ede ∘ π2)^*(s) = (φ ∘ π1 ∘ ede ∘ π2)^*(t) ∧ ∀i ∈ N. s ↾ φ(i) ≃_A t ↾ i, where P(N) denotes the set of all permutations of natural numbers.

I.e., our exponential !A is a slight modification of the one in [McC98], generalizing threads [m]_{i♯e} to [m]_{⟨f⟩♯e} ([m]_e ∈ M_A). Since we are focusing on well-opened games A, there is at most one thread with a tag ⟨f⟩♯e such that ede(⟨f⟩) = i for each i ∈ N in a position of !A. As a consequence, our exponential !A is the same as the one in [McC98] except that there is a choice in the implementation ⟨f⟩ of the tags i ∈ N (but that implementation ⟨f⟩ is unique within !A).

◮ Definition 2.2.24 (Concatenation [YA16]). Let J, K be games such that H^ω(J) ⊴ A ⊸ B and H^ω(K) ⊴ B ⊸ C for some static games A, B, C. Their concatenation J ‡ K is defined by:

◮ MJ‡K =df {[(m, 0)]e0 | [m]e ∈ MJ} ∪ {[(m′, 1)]e′1 | [m′]e′ ∈ MK}

◮ λJ‡K([(m, i)]ei) =df λJ+µ([m]e) if i = 0 and [m]e comes from B; λJ([m]e) if i = 0 and [m]e does not come from B; λK+µ([m]e) if i = 1 and [m]e comes from B; λK([m]e) if i = 1 and [m]e does not come from B, where µ ∈ N+ is defined by µ =df sup({λJN([m]e) | [m]e ∈ MJ} ∪ {λKN([m′]e′) | [m′]e′ ∈ MK}) + 1, and λG+µ =df ⟨λGOP, λGQA, n ↦ λGN(n) + µ⟩ for any game G

◮ ⋆ ⊢J‡K [(m, i)]ei ⇔df i = 1 ∧ ⋆ ⊢K [m]e

◮ [(m, i)]ei ⊢J‡K [(m′, j)]e′j ⇔df (i = 0 = j ∧ [m]e ⊢J [m′]e′) ∨ (i = 1 = j ∧ [m]e ⊢K [m′]e′) ∨ (i = 1 ∧ ⋆ ⊢B [π1(m)]e ∧ j = 0 ∧ ⋆ ⊢B [π1(m′)]e′)

◮ PJ‡K =df {s ∈ JJ‡K | s ↾ 0 ∈ PJ, s ↾ 1 ∈ PK, s ↾ B1, B2 ∈ prB}, where B1, B2 are the two copies of B, s ↾ B1, B2 is the subsequence of s consisting of the moves in B1, B2 — i.e., the external moves [((m, i), j)]eij such that (i = 1 ∧ j = 0) ∨ (i = 0 ∧ j = 1) — with the justifiers in s but changed into [(m, j)]ej, and prB =df {t ∈ PB⊸B | ∀u ⪯ t. even(u) ⇒ u ↾ 0 = u ↾ 1}

◮ s ≃J‡K t ⇔df (π2 ◦ π1)∗(s) = (π2 ◦ π1)∗(t) ∧ s ↾ 0 ≃J t ↾ 0 ∧ s ↾ 1 ≃K t ↾ 1.

These constructions clearly preserve the axioms I1, I2, I3 (linear implication ⊸ preserves I2 as games are well-opened), and so, combined with the results in [YA16], we have:

◮ Lemma 2.2.25 (Well-defined constructions on games). The constructions ⊗, ⊸, &, !, ‡ on games are well-defined, except that ⊗ and ! do not preserve (finite) well-openness.
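The thread decomposition s ↾ i underlying the exponential !A can be illustrated concretely. The sketch below is a toy model only: a position of !A is encoded as a list of (thread_index, move) pairs, abstracting the tag implementation ⟨f⟩ with ede(⟨f⟩) = i to the bare index i; the function and predicate names are hypothetical.

```python
def restrict_to_thread(position, i):
    """Return the subsequence of moves of `position` lying in thread i.

    Dropping the thread index mirrors how each move [m]_{<f>#e} with
    ede(<f>) = i is changed into [m]_e in the definition of P_!A.
    """
    return [move for (tag, move) in position if tag == i]

def is_exponential_position(position, is_position_of_A):
    """Toy membership test for P_!A: every thread restriction must be
    a position of the underlying game A."""
    threads = {tag for (tag, _) in position}
    return all(is_position_of_A(restrict_to_thread(position, i))
               for i in threads)
```

For instance, with positions of a number game encoded as lists opened by "q", the interleaving [(0, "q"), (0, "7"), (1, "q"), (1, "5")] decomposes into two legal threads.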

2.3 Dynamic strategies

Next, let us recall the notion of dynamic strategies [YA16]. There is, however, nothing special in the definition: a strategy σ : G is dynamic if so is G. More precisely:

◮ Definition 2.3.1 (Dynamic strategies [AM99, McC98]). A (dynamic) strategy σ on a (dynamic) game G, written σ : G, is a subset σ ⊆ PGeven that satisfies:

◮ (S1) It is non-empty and even-prefix-closed: smn ∈ σ ⇒ s ∈ σ

◮ (S2) It is deterministic: smn, smn′ ∈ σ ⇒ n = n′ ∧ Jsmn(n) = Jsmn′(n′).

◮ Definition 2.3.2 (Consistent strategies). A strategy σ : G is consistent if σ ≃G σ, where for all φ, ψ : G, φ ≃G ψ ⇔df ∀s ∈ φ, t ∈ ψ, sm, tn ∈ PG. sm ≃G tn ⇒ (smm′ ∈ φ ⇒ ∃tnn′ ∈ ψ. smm′ ≃G tnn′) ∧ (tnn′′ ∈ ψ ⇒ ∃smm′′ ∈ φ. tnn′′ ≃G smm′′). This condition is the same as the one in [AJM00, McC98], though the word "consistency" is not used there. It ensures that strategies behave "consistently" up to permutations of the tags in the exponential !; in fact, identification of positions is defined solely for consistency of strategies. For instance, a consistent strategy σ : !2 satisfies [q]⟨f⟩♯[b]⟨f⟩♯, [q]⟨f′⟩♯[b′]⟨f′⟩♯ ∈ σ ⇒ b = b′. As in the case of games, we define the hiding operation on strategies:

◮ Definition 2.3.3 (Hiding operation on strategies [YA16]). For any game G, s ∈ PG and d ∈ N ∪ {ω}, let s♮HGd =df HGd(s) if s is d-complete, and t otherwise, where HGd(s) = tm. We define the d-hiding operation Hd on strategies by Hd : (σ : G) ↦ {s♮HGd | s ∈ σ}. A strategy σ : G is static if Hω(σ) = σ.

◮ Theorem 2.3.4 (Hiding theorem [YA16]). If σ : G, then Hd(σ) : Hd(G) for all d ∈ N ∪ {ω}.

Next, let us review the standard constructions on strategies [AM99, McC98], for which we need to adopt our particular implementation of tags.

◮ Definition 2.3.5 (Copy-cat strategies [AJ94, AJM00, HO00, McC98]). The copy-cat strategy cpA : A ⊸ A on a game A is defined by cpA =df {s ∈ PA⊸Aeven | ∀t ⪯ s. even(t) ⇒ t ↾ 0 = t ↾ 1}.
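The axioms S1 and S2 are simple enough to check mechanically on a finite strategy. The following sketch uses a hypothetical toy encoding — a strategy is a set of even-length tuples of moves, with justification pointers omitted — and is only meant to illustrate the shape of the two conditions.

```python
def is_strategy(sigma):
    """Check S1 (non-empty, even-prefix-closed) and S2 (deterministic)
    on a finite strategy encoded as a set of even-length move tuples."""
    if not sigma or any(len(s) % 2 for s in sigma):
        return False
    # S1: smn in sigma implies s in sigma
    if any(s[:-2] not in sigma for s in sigma if s):
        return False
    # S2: the P-move answering a given odd position is unique
    responses = {}
    for s in sigma:
        if s:
            odd_prefix, answer = s[:-1], s[-1]
            if responses.setdefault(odd_prefix, answer) != answer:
                return False
    return True
```

On this encoding, a fragment of a copy-cat strategy passes, while a set offering two distinct answers to the same odd position fails S2.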

◮ Definition 2.3.6 (Tensor [AJ94, McC98]). Given σ : A ⊸ C, τ : B ⊸ D, their tensor (product) σ ⊗ τ : A ⊗ B ⊸ C ⊗ D is defined by σ ⊗ τ =df {s ∈ LA⊗B⊸C⊗D | s ↾ [0] ∈ σ, s ↾ [1] ∈ τ}, where s ↾ [0] (resp. s ↾ [1]) is the subsequence of s, with the justifiers in s, consisting of the moves [((m, 0), i)]e0i (resp. [((m′, 1), j)]e1j) changed into [(m, i)]ei (resp. [(m′, j)]ej).

◮ Definition 2.3.7 (Pairing [AJM00, McC98]). Given σ : C ⊸ A, τ : C ⊸ B, their pairing ⟨σ, τ⟩ : C ⊸ A&B is defined by ⟨σ, τ⟩ =df {s ∈ LC⊸A&B | s ↾ ([0] ⊸ [01]) ∈ σ, s ↾ ([0] ⊸ [11]) = ε} ∪ {s ∈ LC⊸A&B | s ↾ ([0] ⊸ [11]) ∈ τ, s ↾ ([0] ⊸ [01]) = ε}, where s ↾ ([0] ⊸ [01]) (resp. s ↾ ([0] ⊸ [11])) is the subsequence of s, with the justifiers in s, consisting of the moves [(c, 0)]e0, [((a, 0), 1)]e′01 (resp. [((b, 1), 1)]e′11) hereditarily justified by [((a′, 0), 1)] for some [a′] ∈ MAInit (resp. [((b′, 1), 1)] for some [b′] ∈ MBInit), with the latter changed into [(a, 1)]e′1 (resp. [(b, 1)]e′1).
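The restrictions s ↾ [0], s ↾ [1] used in the tensor can be computed mechanically. The sketch below is a toy model under stated assumptions: a move is a pair (component, move) with component 0 or 1 for the left/right factor, and strategies are plain lists of positions; all names are hypothetical.

```python
def restrict(position, component):
    """Subsequence of `position` in the given factor (0 = left, 1 = right),
    with the outer pairing tag stripped -- a toy analogue of s |` [i]."""
    return [move for (comp, move) in position if comp == component]

def in_tensor(position, sigma, tau):
    """Toy membership test for sigma (x) tau: a position belongs to the
    tensor iff both component restrictions belong to the components."""
    return restrict(position, 0) in sigma and restrict(position, 1) in tau
```

This reflects the non-interference of the two factors: each restriction is checked independently against its component strategy.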

◮ Definition 2.3.8 (Generalized pairing [YA16]). Given σ : L, τ : R such that Hω(L) ⊴ C ⊸ A, Hω(R) ⊴ C ⊸ B for some static games A, B, C, their (generalized) pairing ⟨σ, τ⟩ : L&R is defined by ⟨σ, τ⟩ =df {s ∈ LL&R | s ↾ L ∈ σ, s ↾ R = ε} ∪ {s ∈ LL&R | s ↾ L = ε, s ↾ R ∈ τ}.

◮ Definition 2.3.9 (Promotion [AJM00, McC98]). Given σ : !A ⊸ B, its promotion σ† : !A ⊸ !B is defined by σ† =df {s ∈ P!A⊸!B | ∀e ∈ T. s ↾ e ∈ σ}, where s ↾ e is the subsequence of s, with the justifiers in s, consisting of the moves [(b, 1)](⟨e⟩♯e′)1, [(a, 0)](⟨⟨e⟩♯⟨f⟩⟩♯f′)0, where [b]e′ ∈ MB, [a]f′ ∈ MA, changed into [(b, 1)]e′1, [(a, 0)](⟨f⟩♯f′)0, respectively.

◮ Definition 2.3.10 (Dereliction [AJM00, McC98]). Let A be a well-opened game. The dereliction derA : A ⇒ A on A is defined by derA =df {s ∈ PA⇒Aeven | ∀t ⪯ s. even(t) ⇒ t ↾ [0]⟨⟩♯ = t ↾ [1]}, where t ↾ [0]⟨⟩♯ (resp. t ↾ [1]) is the subsequence of t, with the same justifiers in t, consisting of the moves [(a, 0)](⟨⟩♯e)0 (resp. [(a′, 1)]e′1) changed into [a]e (resp. [a′]e′).

◮ Definition 2.3.11 (Concatenation and composition [YA16]). Let σ : J, τ : K, and assume that Hω(J) ⊴ A ⊸ B, Hω(K) ⊴ B ⊸ C for some static games A, B, C. Their concatenation σ ‡ τ : J ‡ K is defined by σ ‡ τ =df {s ∈ JJ‡K | s ↾ 0 ∈ σ, s ↾ 1 ∈ τ, s ↾ B1, B2 ∈ prB}, and their composition σ; τ : Hω(J ‡ K) is defined by σ; τ =df Hω(σ ‡ τ). If J = A ⊸ B, K = B ⊸ C, then our composition σ; τ : Hω(A ⊸ B ‡ B ⊸ C) ⊴ A ⊸ C (this relation holds only up to tags) coincides with the standard one in the literature [HO00, AM99, McC98]; see [YA16] for the details.
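The idea of composition as "interaction on the shared middle game followed by hiding" can be sketched on a drastically simplified encoding. Below, a strategy on A ⊸ B is modelled as a query transformer: it turns an answering function for its domain into an answering function for its codomain, so the intermediate B-dialogue is internal (hidden) to the composite. All names and the encoding are hypothetical illustrations, not the paper's formal construction.

```python
def compose(sigma, tau):
    """Toy analogue of sigma; tau = H(sigma ++ tau): sigma plays on A, B
    and tau on B, C; wiring tau's domain queries into sigma's codomain
    hides the intermediate B-dialogue from the composite on A, C."""
    def composed(ask_a):
        answer_b = sigma(ask_a)   # sigma answers B-questions using A
        return tau(answer_b)      # tau answers C-questions using B
    return composed

# A successor strategy on a toy natural-number game: to answer a
# question, it asks its domain the same question and adds one.
succ_strategy = lambda ask: (lambda q: ask(q) + 1)

double_succ = compose(succ_strategy, succ_strategy)
```

Running `double_succ` against an environment answering 5 yields 7: the two successor dialogues interact invisibly in the middle component.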

3 Effective strategies

We have presented our variant of games and strategies. In this main section, we introduce an intrinsic notion of “effective computability” of strategies that subsumes computation of the programming language PCF [Plo77, Mit96], and so it is Turing complete in particular. ◮ Notation. We often write A ⇒ B or A → B for the linear implication !A ⊸ B for any games A, B. The operations ⊸, ⇒, → are all right associative.

3.1 Effective strategies

As history-free strategies are expressive enough to model the language PCF [AJM00], it suffices for our strategies to refer to at most the three last moves in the P-view and the ("semi-") opening moves of the position, which is clearly "effective". Thus, it remains to formulate the notion of "effective computability" of the next move from such a bounded number of previous moves. Since the set {m | [m]e ∈ MG} is finite for any game G, finitary (innocent [HO00, AM99, McC98]) strategies — those whose view functions [HO00, McC98] are finite — seem sufficient at first glance. However, to model the fixed-point combinators in PCF, strategies need to be able to initiate new threads unboundedly many times [HO00, AJM00]; also, they have to model promotion ( )†, for which infinitely many manipulations of tags are necessary. Thus, finitary strategies are not strong enough. How, then, can we define a stronger notion of "effective computability" of the next move from previous moves solely in terms of games and strategies? Our solution is as follows. A strategy σ : G is effective if it is "describable" by a finitary strategy on the instruction game:

◮ Definition 3.1.1 (Instruction games). Given a game G, its instruction game G(MG) is the product G(π1(MG)) & G(T), where the component game G(π1(MG)) is defined by:

◮ MG(π1(MG)) =df {qG, } ∪ π1(MG), where qG and  are arbitrarily chosen with qG ∉ π1(MG),  ∉ π1(MG), and each move has the empty tag ε

◮ λG(π1(MG)) : qG ↦ (O, Q, 0), (m ∈ π1(MG)) ↦ (P, A, 0),  ↦ (P, A, 0)

◮ ⊢G(π1(MG)) =df {(⋆, qG), (qG, )} ∪ {(qG, m) | m ∈ π1(MG)}

◮ PG(π1(MG)) =df pref({qG.m | m ∈ π1(MG)} ∪ {qG.}), where qG justifies m and 

◮ s ≃G(π1(MG)) t ⇔df s = t.

The positions qG.m and qG. represent m ∈ π1(MG) and "no element", respectively.

◮ Notation. Given a sequence s = xk xk−1 … x1 ∈ MG∗ of moves in an arena G and a number l ∈ N, we define s ⇂ l =df s if l > k, and xl xl−1 … x1 otherwise. A function f : π1(MG) → {⊤, ⊥} induces another f⋆ : MG∗ → π1(MG)∗ by f⋆([mk]ek[mk−1]ek−1 … [m1]e1) =df mil mil−1 … mi1, l ≤ k, where mil mil−1 … mi1 is the subsequence of mk mk−1 … m1 consisting of the mij such that f(mij) = ⊤.

◮ Notation. Let G be a game, and [m]e ∈ MG, e = e1 e2 … ek ∈ T. We write [m]e for the strategy ⟨m, e⟩ : G(MG), where m : G(π1(MG)), e : G(T) are defined by m =df pref({qG.m})even and e =df pref({qT e1 qT e2 … qT ek qT X})even, respectively. Similarly, we define  =df pref({qG.})even : G(π1(MG)), and [] =df ⟨, ε⟩ : G(MG). For any s = [ml]el[ml−1]el−1 … [m1]e1 ∈ MG∗ and n ≥ l, we define sn =df ⟨[], …, [], [ml]el, [ml−1]el−1, …, [m1]e1⟩ : G(MG)n =df G(MG) & … & G(MG) (n components in total, of which the first n − l are []), where the pairing and product are left associative. Given a strategy σ : G(MG), we define M(σ) ∈ MG to be the unique move such that M(σ) = σ if it exists, and undefined otherwise.

We shall be particularly concerned with the games G(MG)3 ⇒ G(MG) shortly. The symbols ⟨, ⟩ in any position s ∈ PG(MG)3⇒G(MG) form unique pairs, similarly to the "QA-pairs" of the bracketing condition [HO00, AM99]. Specifically, each ⟩ is paired with the most recent "still unpaired" ⟨ in the same component game G(T); one is called the mate of the other. Moreover, we define:

df.

abbreviate it as JsK) of a position s ∈ PG(MG )3 ⇒G(MG ) is defined by: JǫKG = ǫ, Js.h.t.iKG = df.

JsKG .h.i, where h is the mate of i, and JsmKG = JsKG .m, where m 6= i. We are now ready to make the notion of “describable by a finitary strategy” precise. ◮ Definition 3.1.3 (Algorithms). An algorithm A on a game G, written A :: G, is a collection A = odd ) ⇀ MG(MG )3 ⇒G(MG ) , where (Am )m∈SA of finite partial functions Am : ∂m (PG(M 3 G ) ⇒G(MG ) df.

SA ⊆ π1 (MG )∗ \ {ǫ} is a finite set of states, ∂m (tx) = (O(⌈tx⌉), ⌈tx⌉ ⇂ |Am |, JtxK ⇂ kAm k) for all odd tx ∈ PG(M , |Am |, kAm k ∈ N are the scopes of Am , and Am also specifies the justifier 3 G ) ⇒G(MG ) of each output in the input2 , equipped with the query (function) QA : π1 (MG ) → {⊤, ⊥} such Init that [m] ∈ MG ⇒ QA (m) = ⊤ and QA (m) = ⊤ ⇒ ∃e ∈ T .[m]e ∈ MG is initial or internal. ⋆ Init ◮ Remark. Note that QA (s) 6= ǫ for any s ∈ JG \ {ǫ} since [m] ∈ MG ⇒ QA (m) = ⊤. 2 But

we usually treat this structure implicit as in [AM99, McC98].

14

◮ Convention. Since an algorithm A :: G takes the opening move O(⌈tx⌉) into account for each tx ∈ PG(MG)3⇒G(MG)odd only occasionally, we usually take ∂m(tx) =df (⌈tx⌉ ⇂ |Am|, ⟦tx⟧ ⇂ ‖Am‖) for all tx ∈ PG(MG)3⇒G(MG)odd and each m ∈ SA, keeping in mind that Am sees opening moves.

◮ Definition 3.1.4 (Instruction strategies). Given a game G and an algorithm A :: G, we define the instruction strategy A⋆m : G(MG)3 ⇒ G(MG) by:

A⋆m =df {ε} ∪ {txy ∈ PG(MG)3⇒G(MG) | t ∈ A⋆m, tx ∈ PG(MG)3⇒G(MG), y = Am(∂m(tx))}.

◮ Convention. Only in limited situations does an algorithm A take into account a bounded amount ‖Am‖ ∈ N of information from m-views. Thus, in most cases, each Am is a partial function Am : {⌈tx⌉ ⇂ |Am| | tx ∈ PG(MG)3⇒G(MG)odd} ⇀ MG(MG)3⇒G(MG); we in fact treat it as such unless necessary. Accordingly, A⋆m is usually a strategy G(MG)3 ⇒ G(MG) whose view-function representation [HO00, McC98] Am is finite, where the scope |Am| keeps inputs finite.³

◮ Notation. ≃ denotes Kleene equality [TVD14], i.e., x ≃ y ⇔df (x↓ ∧ y↓ ∧ x = y) ∨ (x↑ ∧ y↑).

◮ Definition 3.1.5 (Realizability). The strategy st(A) realized by A :: G is defined by st(A) =df {ε} ∪ {sa.A⋆(⌈sa⌉ ⇂ 3) | s ∈ st(A), sa ∈ PG, sa.A⋆(⌈sa⌉ ⇂ 3) ∈ PG}, where A⋆(⌈sa⌉ ⇂ 3) ≃ M(A⋆QA⋆(⌈sa⌉) ◦ (⌈sa⌉ ⇂ 3)3†), and sa.A⋆(⌈sa⌉ ⇂ 3) ∈ PG presupposes QA⋆(⌈sa⌉) ∈ SA ∧ A⋆(⌈sa⌉ ⇂ 3)↓.

◮ Remark. Strictly speaking, each A⋆m : G(MG)3 ⇒ G(MG) has to specify the justifiers (in PG) of its outputs. This is easily achieved by changing it into A⋆m : G(MG)3 ⇒ G(MG)&2, as the choice is ternary (the last or the third last move in the P-view, or the opening move). However, since the justifiers in this paper are always the obvious ones, we adopt the abbreviated form of A⋆m as above.

Clearly, A :: G ⇒ st(A) : G holds. We are now ready to define the central notion of the paper, namely "effective computability" of strategies, in an intrinsic manner:

◮ Definition 3.1.6 (Effective strategies). A strategy σ : G is effective if there exists an algorithm A :: G that realizes σ, i.e., st(A) = σ.

Given an algorithm A :: G that realizes an effective strategy σ : G, we may "effectively execute" A to compute σ roughly as follows:

1. Given sa ∈ PGodd, we calculate m =df QA⋆(⌈sa⌉) and ⌈sa⌉ ⇂ 3. If m ∉ SA, then we stop.

2. Otherwise, we compose (⌈sa⌉ ⇂ 3)3† with A⋆m, and "execute" A⋆m ◦ (⌈sa⌉ ⇂ 3)3†.

3. Finally, we "read off" the next move M(A⋆m ◦ (⌈sa⌉ ⇂ 3)3†) (and its justifier).

This procedure is similar to the execution of Turing machines [Tur36], and it is intuitively "effective"; this is our conceptual justification of our notion of "effective computability".³

◮ Notation. To describe a finite partial function f, we list every input/output pair (x, y) ∈ f as f : x0 ↦ y0 | x1 ↦ y1 | … Given a game G, we abuse notation and write mi1i2…ik for each (… ((m, i1), i2), …, ik) ∈ π1(MG). We often indicate the form of the tags of moves [mi1i2…ik]e in a game G by [Gi1i2…ik]e, where we call mi1i2…ik the inner element, and i1i2…ik and e the (inner and outer) tags of [mi1i2…ik]e, respectively. However, we usually write tags in G(MG)3 ⇒ G(MG) informally for brevity (which is "harmless" as instruction strategies are finitary), e.g., G(MG)0 ⇒ G(MG)1, G(MG)0 & G(MG)1 & G(MG)2 ⇒ G(MG)3, [qG]0, [qT]1.

³ The point here is that instruction strategies are clearly "computable", yet they achieve an unbounded collection of manipulations of tags.
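The three-step execution procedure above can be mimicked by a toy interpreter. In the sketch below, everything is a hypothetical simplification: states are strings obtained by filtering the position through the query, each finite table Am maps a window of at most the three last moves to the next move, and P-views, m-views and justifiers are elided.

```python
def execute_step(algo, position):
    """One round of the execution procedure: compute the state via the
    query, look up the state's finite table on a window of at most 3
    previous moves, and return the extended position (or None: 'we stop').

    `algo` is a dict with keys:
      'query':  move -> bool          (Q_A, tracking initial/internal moves)
      'tables': state -> {window tuple -> next move}   (the finite A_m's)
    """
    state = "".join(m for m in position if algo["query"](m))
    table = algo["tables"].get(state)
    if table is None:                       # state not in S_A: stop
        return None
    window = tuple(position[-3:])           # scope |A_m| <= 3
    nxt = table.get(window)
    return None if nxt is None else position + [nxt]

# A 'zero'-like algorithm: answer the opening question q1 with zero1.
zero_algo = {
    "query": lambda m: m == "q1",
    "tables": {"q1": {("q1",): "zero1"}},
}
```

As in steps 1–3, each round either halts or extends an odd-length position by exactly one P-move; iterating `execute_step` plays out the realized strategy.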

◮ Convention. Since we shall focus on consistent strategies of the form A ⇒ B in this paper, it is reasonable to require each algorithm A not to refer to any outer tags when it computes the next internal element. Also, since a strategy of our interest modifies (i.e., not just “copy-cats”) the outer tag of the last move only if it initiates a new thread in the domain game, we assume that the only outer tag A investigates when it computes the next outer tag is just the one of the last move in the P-view, and it reads it off at most once. We call algorithms that satisfy these two conditions standard; from now on, the word algorithms refers to standard ones by default. Algorithms in this paper are all standard, and standard algorithms are closed under all constructions on algorithms we shall introduce later. This convention will save work in the proof to show the closure property of effective strategies under promotion (see Theorem 3.1.9). df.

◮ Example 3.1.7. The zero strategy zero =df pref({[q1][♭1]})even : [I0]⟨e⟩♯ ⇒ [N1] is effective, since we may give an algorithm A(zero) by QA(zero)(m) =df ⊤ if m = q1 and ⊥ otherwise, SA(zero) =df {q1}, |A(zero)q1| =df 1, and A(zero)q1 : [qI⇒N]1 ↦ [♭1]1 | [qT]1 ↦ [X]1. Then the instruction strategy A(zero)⋆q1 is as depicted in the following diagram:

(Diagram: in the instruction game on MI⇒N, A(zero)⋆q1 answers the opening question [qI⇒N]1 (resp. [qT]1) with [♭1]1 (resp. [X]1).)

Clearly, st(A(zero)) = zero. Next, let us consider the successor strategy

succ =df pref({[q1][q0]⟨⟩♯ ([•0]⟨⟩♯[•1][q1][q0]⟨⟩♯)n [♭0]⟨⟩♯[•1][q1][♭1] | n ∈ N})even : [N0]⟨e0⟩♯ ⇒ [N1].

We give an algorithm A(succ) for succ by defining QA(succ)(m) =df ⊤ if m = q1 and ⊥ otherwise, SA(succ) =df {q1}, |A(succ)q1| =df 5, and

A(succ)q1 : [qN⇒N]3 ↦ [qN⇒N]2 | [qN⇒N]3[qN⇒N]2[q1]2 ↦ [qN⇒N]0 | [qN⇒N]3[qN⇒N]2[q1]2[qN⇒N]0[x]0 ↦ [q0]3 | [qN⇒N]3[qN⇒N]2[q1]2[qN⇒N]0[♭0]0 ↦ [♭1]3 | [qT]3 ↦ [qN⇒N]2 | [qT]3[qN⇒N]2[q1]2 ↦ [qN⇒N]0 | [qT]3[qN⇒N]2[q1]2[qN⇒N]0[x]0 ↦ [⟨]3 | [q1]2[qN⇒N]0[x]0[⟨]3[qT]3 ↦ [⟩]3 | [x]0[⟨]3[qT]3[⟩]3[qT]3 ↦ [♯]3 | [qT]3[⟩]3[qT]3[♯]3[qT]3 ↦ [X]3 | [qT]3[qN⇒N]2[q1]2[qN⇒N]0[♭0]0 ↦ [X]3 | [qN⇒N]3[qN⇒N]2[y]2 ↦ [•1]3 | [qT]3[qN⇒N]2[y]2 ↦ [X]3,

where x ∈ {, •0} and y ∈ {•0, ♭0}. Consequently, A(succ)⋆q1 is as depicted in the following diagrams:

(Diagrams: six behaviours of A(succ)⋆q1 — (i) on the opening [qN⇒N]3 it consults [qN⇒N]2, [q1]2 and [qN⇒N]0, and on the answer []0 or [•0]0 it outputs [q0]3; (ii) on the answer [♭0]0 it outputs [♭1]3; (iii) on the opening [qT]3 with answer []0 or [•0]0 it outputs the outer tag digit-by-digit, [⟨]3, [⟩]3, [♯]3, [X]3; (iv) on [qT]3 with answer [♭0]0 it outputs [X]3; (v) on [qN⇒N]3 with [•0]2 or [♭0]2 it outputs [•1]3; (vi) on [qT]3 with [•0]2 or [♭0]2 it outputs [X]3.)

We clearly have st(A(succ)) = succ, which establishes the effectivity of succ.

◮ Example 3.1.8. Consider the fixed-point strategy fixA : ([A00]⟨e′⟩♯⟨e⟩♯f ⇒ [A10]⟨e′⟩♯f) ⇒ [A1]f for each game A, which interprets the fixed-point combinator fixA in PCF [AJM00, HO00, McC98]. Roughly, fixA computes as follows (for a detailed description, see [Hyl97, HO00]):

1. After the first O-move [a1], fixA copies it and makes the second move [a10]⟨⟩♯; note that the first move must have the empty outer tag ε, as A is finitely well-opened.

2. If Opponent initiates a new thread [a′00]⟨e′⟩♯⟨e⟩♯f in the inner implication, then fixA copies it and launches a new thread in the outer implication by [a′10]⟨⟨e′⟩♯⟨e⟩⟩♯f.

3. If Opponent makes a move [a′′00]⟨e′⟩♯⟨e⟩♯f (resp. [a′′10]⟨⟩♯f, [a′′10]⟨⟨e′⟩♯⟨e⟩⟩♯f, [a′′1]f) in an existing thread, then fixA copies it and makes the next move [a′′10]⟨⟨e′⟩♯⟨e⟩⟩♯f (resp. [a′′1]f, [a′′00]⟨e′⟩♯⟨e⟩♯f, [a′′10]⟨⟩♯f) in the "dual thread" (to which the third last move belongs).

Clearly, fixA is not finitary, because of the calculation of the outer tags. It is, however, effective for any game A, which may be surprising to many readers. Here, let us just informally describe an algorithm A(fixA) that realizes fixA (see Section 3.2 for the detailed treatment):

◮ QA(fixA)(m) = ⊤ iff [m] is initial, and SA(fixA) = {m ∈ π1(M(A⇒A)⇒A) | ⋆ ⊢(A⇒A)⇒A [m]}. Since A(fixA)m does not depend on m, fix an arbitrary m such that [m] is initial.

◮ If the rightmost component of the input strategy for A(fixA)⋆m is of the form ([a1]f)†, then A(fixA)⋆m calculates the next move [a10]⟨⟩♯f — once and for all for the internal element, and "digit-by-digit" for the outer tag.

◮ If the rightmost component is of the form ([a10]⟨⟩♯f)†, then A(fixA)⋆m recognizes it by investigating the third rightmost component, and calculates the next move [a1]f once and for all for the internal element and "digit-by-digit" for the outer tag.

◮ If the rightmost component is of the form ([a10]⟨⟨e′⟩♯⟨e⟩⟩♯f)†, then A(fixA)⋆m calculates the next move [a00]⟨e′⟩♯⟨e⟩♯f in a similar manner to the above case, but with the help of m-views for the outer tag; see Section 3.2 for the details.

◮ If the rightmost component is of the form ([a00]⟨e′⟩♯⟨e⟩♯f)†, then A(fixA)⋆m calculates the next move [a10]⟨⟨e′⟩♯⟨e⟩⟩♯f in a similar manner to the first case, with the help of m-views for the outer tag (n.b.
the justifier may not be the last or the third last move in the P-view, but in that case it is the opening move [m]); see Section 3.2 for the details.

◮ Theorem 3.1.9 (Constructions on effective strategies). Consistent and effective strategies are closed under tensor ⊗, pairing ⟨ , ⟩, promotion ( )† and concatenation ‡.

Proof. The preservation of consistency is straightforward, as in [McC98]. We first show that tensor ⊗ preserves effectivity of strategies. Let σ : [A0]e ⊸ [C1]e′, τ : [B0]f ⊸ [D1]f′ be effective strategies with algorithms A(σ), A(τ) realizing σ, τ, respectively. We have to construct an algorithm A(σ ⊗ τ) such that st(A(σ ⊗ τ)) = σ ⊗ τ : [A00]e ⊗ [B10]f ⊸ [C01]e′ ⊗ [D11]f′. Define the set SA(σ⊗τ) of states and the query QA(σ⊗τ) : π1(MA⊗B⊸C⊗D) → {⊤, ⊥} by:

SA(σ⊗τ) =df {m(k)0ik m(k−1)0ik−1 … m(1)0i1 | m(k)ik m(k−1)ik−1 … m(1)i1 ∈ SA(σ)} ∪ {n(l)1jl n(l−1)1jl−1 … n(1)1j1 | n(l)jl n(l−1)jl−1 … n(1)j1 ∈ SA(τ)}

QA(σ⊗τ) : a00 ↦ QA(σ)(a0) | b10 ↦ QA(τ)(b0) | c01 ↦ QA(σ)(c1) | d11 ↦ QA(τ)(d1).

Note that QA(σ⊗τ) clearly satisfies the required condition: it outputs ⊤ if the input is initial, and if it outputs ⊤, then the input is initial or internal. Now, construct the finite partial functions A(σ ⊗ τ)m(k)0ik…m(1)0i1 and A(σ ⊗ τ)n(l)1jl…n(1)1j1 from A(σ)m(k)ik…m(1)i1 and A(τ)n(l)jl…n(1)j1 simply by changing symbols of the form mi into m0i, m1i (including those for tags), respectively, in their (finite) tables. Since the P-views of σ and τ never interact with each other in σ ⊗ τ (which is shown by induction on the length of positions), it is straightforward to see that st(A(σ ⊗ τ)) = σ ⊗ τ holds. Intuitively, A(σ ⊗ τ) sees the new digit (0 or 1) of the current state s ∈ SA(σ⊗τ) and decides whether to apply A(σ) or A(τ) (n.b. QA(σ⊗τ) "tracks" every initial move, so a possible state must be non-empty in the non-trivial case⁴, and thus it indicates the component game "at work"). Note that the tags are also distinguished in this manner, as each component game uses a distinguished copy of the symbols |, ♯, ⟨, ⟩, and we distinguish them by the 0, 1 digits.

Next, consider the pairing ⟨φ, ψ⟩ : L&R of effective strategies φ : L, ψ : R such that Hω(L) = C ⊸ A, Hω(R) = C ⊸ B for some static games A, B, C. Let A(φ), A(ψ) be algorithms realizing φ, ψ, respectively. Note that L&R is the generalized pairing defined in [YA16]; roughly, it is the usual pairing except that moves in C are not "duplicated". Since the query functions QA(φ), QA(ψ) "track" only initial or internal moves, they in particular "ignore" moves in C. Thus, we may safely apply the same construction of algorithms as that for ⊗, except that the additional 0, 1 digits lie on the right-hand side, and the inner tags of moves in C are not changed.

Now, consider the concatenation ι ‡ κ : J ‡ K of effective strategies ι : J, κ : K such that Hω(J) = A ⊸ B, Hω(K) = B ⊸ C for some static games A, B, C. Let A(ι), A(κ) be algorithms such that st(A(ι)) = ι, st(A(κ)) = κ. Define the states and the query by:

SA(ι‡κ) =df {n(l)jl1 … n(1)j11 m(k)ik0 … m(1)i10 | m(k)ik … m(1)i1 ∈ SA(ι), n(l)jl … n(1)j1 ∈ SA(κ)}

QA(ι‡κ) : mi0 ↦ QA(ι)(mi) | nj1 ↦ QA(κ)(nj).

Now, define the finite partial function A(ι ‡ κ)n(l)jl1…n(1)j11 m(k)ik0…m(1)i10 as A(κ)n(l)jl…n(1)j1 if k = 0, and A(ι)m(k)ik…m(1)i1 otherwise, where we again insert the additional bits 0, 1 on the right-hand side of the internal tags of symbols in the table. Note that P-views in ι ‡ κ are those in ι followed by those in κ; therefore it is straightforward to see that st(A(ι ‡ κ)) = ι ‡ κ holds.

Finally, let ϕ† : [!A0]⟨e⟩♯f ⊸ [!B1]⟨e′⟩♯f′ be the promotion of a strategy ϕ : [!A0]⟨e⟩♯f ⊸ [B1]f′ with an algorithm A(ϕ) that realizes ϕ. We define SA(ϕ†) =df SA(ϕ) and QA(ϕ†) =df QA(ϕ). Then, roughly, the partial function A(ϕ†)m for each m ∈ SA(ϕ†) is obtained from A(ϕ)m in such a way that if P-moves [a0]⟨e⟩♯f, [b1]f̃ occur in a play for ϕ, and the corresponding play for ϕ† begins with an initial move [b′1]⟨e′⟩♯, then ϕ† makes the corresponding moves [a0]⟨⟨e′⟩♯⟨e⟩⟩♯f, [b1]⟨e′⟩♯f̃ in that play. This is certainly possible by modifying the manipulation of outer tags by A(ϕ)m appropriately, with the help of m-views, as follows:

1. The calculation of the next internal element by A(ϕ†)m is the same as for A(ϕ)m, since A(ϕ) is assumed to be standard. Below, we focus on the calculation of outer tags.

2. Duplicate the input/output pairs in A(ϕ)m involved in the calculation of the next internal element, but replace the opening move [q!A⊸!B]3 and the last moves [m]3 by [qT]3 and [q!A⊸!B]2, respectively. Also, we "postpone", by m-views, A(ϕ)m's calculation of the next outer tag until the additional symbol ⟨e′⟩ has been read off. Since A(ϕ†)m "sees" whether the opening move is [q!A⊸!B]3 or [qT]3 — i.e., its input includes an opening move — the newly added computations will never be confused with the old ones. In this manner, A(ϕ†)m learns which of !A or !B the last and the next moves in the P-view respectively belong to.

3. If the last and the next moves in the P-view both belong to !A (resp. !B), then their outer tags are of the form ⟨⟨e′⟩♯⟨e⟩⟩♯f, ⟨⟨e′⟩♯⟨ẽ⟩⟩♯f̃ (resp. ⟨e′⟩♯f, ⟨e′⟩♯f̃), respectively. They respectively correspond to moves with the same internal elements and the outer tags ⟨e⟩♯f, ⟨ẽ⟩♯f̃ (resp. f, f̃) in A(ϕ)m. Since A(ϕ) is assumed to be standard, with the help of m-views we may clearly modify A(ϕ)m's computation from ⟨e⟩♯f to ⟨ẽ⟩♯f̃ (resp. from f to f̃) in such a way that A(ϕ†)m's corresponding computation is standard and maps ⟨⟨e′⟩♯⟨e⟩⟩♯f (resp. ⟨e′⟩♯f) to ⟨⟨e′⟩♯⟨ẽ⟩⟩♯f̃ (resp. ⟨e′⟩♯f̃) whatever e′ is (roughly, it first "copy-cats" ⟨⟨e′⟩♯ (resp. ⟨e′⟩♯), and then the m-view at this point tells it to simulate the computation ⟨e⟩♯f ↦ ⟨ẽ⟩♯f̃ (resp. f ↦ f̃), inserting another ⟩ between ⟩ and ♯).

4. If the last and the next moves belong to !B and !A, respectively, then their outer tags are of the form ⟨e′⟩♯, ⟨⟨e′⟩♯⟨e⟩⟩♯. Note that they correspond to moves with the same internal elements and the outer tags ε, ⟨e⟩♯, respectively, in A(ϕ)m. When A(ϕ†)m continues, it first adds the symbol ⟨, and then "copy-cats" ⟨e′⟩♯. At this point, the m-view tells A(ϕ†)m to simulate the calculation of ⟨e⟩♯ by A(ϕ)m, inserting another ⟩ between ⟩ and ♯.

We do not give a formal description of A(ϕ†)m, as it would be much more involved and hard to read; however, the above description should suffice to indicate how we may construct it. 

⁴ I.e., when the underlying game is not the terminal game I.

◮ Example 3.1.10. Consider the tensor succ ⊗ pred : [!N00]⟨e⟩♯ ⊗ [!N10]⟨e′⟩♯ ⊸ [N01] ⊗ [N11]. A typical play by succ ⊗ pred interleaves a play of succ (on [!N00]⟨e⟩♯ and [N01]) with a play of pred (on [!N10]⟨e′⟩♯ and [N11]):

(Diagram: flattened, the moves are [q11] [q10]⟨⟩♯ [•10]⟨⟩♯ [q10]⟨⟩♯ [q01] [q00]⟨⟩♯ [♭00]⟨⟩♯ [•01] [♭10]⟨⟩♯ [♭11] [q01] [♭01].)

Applying the construction described in the proof of Theorem 3.1.9, we construct an algorithm A(succ ⊗ pred) by SA(succ⊗pred) =df {q01, q11}, QA(succ⊗pred)(m) =df ⊤ if m = q01 ∨ m = q11 and ⊥ otherwise, and A(succ ⊗ pred)q01 (resp. A(succ ⊗ pred)q11) is obtained from A(succ)q1 (resp. A(pred)q1) by replacing the symbols mi with m0i (resp. m1i). It is easy to see that A(succ ⊗ pred) achieves the computation in the above diagram, and moreover st(A(succ ⊗ pred)) = succ ⊗ pred holds.

◮ Example 3.1.11. Consider the pairing ⟨succ, pred⟩ : [!N0]⟨e⟩♯ ⊸ [N10]&[N11]; note that the 0, 1 digits in the codomain differ from those in the case of ⊗. Its typical plays are as follows:

(Diagrams: two typical plays of ⟨succ, pred⟩ in [!N0]⟨e⟩♯ ⊸ [N10]&[N11] — one opened by [q10], in which the strategy plays as succ, querying [q0]⟨⟩♯, reading [•0]⟨⟩♯ and [♭0]⟨⟩♯, and answering [•10] [q10] [♭10]; and one opened by [q11], in which it plays as pred, reading [♭0]⟨⟩♯ and answering [♭11].)

Again, as described in the proof of Theorem 3.1.9, we construct an algorithm A(⟨succ, pred⟩) by SA(⟨succ,pred⟩) =df {q10, q11}, QA(⟨succ,pred⟩)(m) =df ⊤ if m = q10 ∨ m = q11 and ⊥ otherwise, and A(⟨succ, pred⟩)q10 (resp. A(⟨succ, pred⟩)q11) is obtained from A(succ)q1 (resp. A(pred)q1) by replacing the symbols m1 with m10 (resp. m11). Then A(⟨succ, pred⟩) clearly achieves the computation in the above diagram, and furthermore st(A(⟨succ, pred⟩)) = ⟨succ, pred⟩ holds.

◮ Example 3.1.12. Consider the promotion succ† : [!N0]⟨e⟩♯ ⊸ [!N1]⟨e′⟩♯. Its typical play is as depicted in the following diagram:



(Diagram: a typical play of succ† in [!N0]⟨e⟩♯ ⊸ [!N1]⟨e′⟩♯ — [q1]⟨e′⟩♯ [q0]⟨⟨e′⟩♯⟨i⟩⟩♯ [•0]⟨⟨e′⟩♯⟨i⟩⟩♯ [•1]⟨e′⟩♯ [q1]⟨e′⟩♯ … [q0]⟨⟨e′⟩♯⟨i⟩⟩♯ [♭0]⟨⟨e′⟩♯⟨i⟩⟩♯ [•1]⟨e′⟩♯ [q1]⟨e′⟩♯ [♭1]⟨e′⟩♯.)

Let us apply the construction in the proof of Theorem 3.1.9. The set SA(succ†) of states and the query QA(succ†) are the same as those of A(succ). For the computation of the outer tag of the next move (i.e., when the opening move is [qT]3 in G(MN⇒N)3 ⇒ G(MN⇒N)), recall, e.g., how A(succ)⋆q1 computes the second move from the opening move in N ⇒ N:

&

G(MN ⇒N )1

&

G(MN ⇒N )2

A(succ)⋆ q1



G(MN ⇒N )3 [qN ⇒N ]3

[qN ⇒N ]2 [q1 ]2 [qN ⇒N ]0 []0 [q0 ]3 G(MN ⇒N )0

&

G(MN ⇒N )1

&

G(MN ⇒N )2

A(succ)⋆ q1



G(MN ⇒N )3 [qT ]3

[qN ⇒N ]2 [q1 ]2 [qN ⇒N ]0 []0 [h]3 [qT ]3 [i]3 [qT ]3 [♯]3 [qT ]3 [X]3 21

Let us see how A(succ † )⋆q1 computes the outer tag of the next move when the last move is an opening [q1 ]h2i♯ and the next move is [q0 ]hh2i♯hii♯ in !N ⊸ !N : G(MN ⇒N )0

&

G(MN ⇒N )1

&

G(MN ⇒N )2

A(succ † )⋆ q1



G(MN ⇒N )3 [qT ]3

[qN ⇒N ]2 [q1 ]2 [qN ⇒N ]0 []0 [qN ⇒N ]2 [q1 ]2 [h]3 [qT ]3 [qT ]2 [h]2 [h]3 [qT ]3 [qT ]2 [|]2 [|]3 [qT ]3 [qT ]2 [|]2 [|]3 [qT ]3 [qT ]2 [i]2 [i]3 [qT ]3 [qT ]2 [♯]2 [♯]3 [qT ]3 [qN ⇒N ]2 [q1 ]2 [qN ⇒N ]0 []0 [h]3 [qT ]3 [i]3 [qT ]3 [i]3 [qT ]3 [♯]3 [qT ]3 [X]3 It should be clear how A(succ † )⋆q1 calculates the outer tag of the next move in other cases. In this way, st(A(succ † )) = succ † in fact holds. 22

◮ Example 3.1.13. Consider the concatenation succ † ‡pred : ([!N00 ]hei♯ ⊸ [!N10 ]he′ i♯ )‡([!N01 ]he′ i♯ ⊸ [N11 ]). Its typical play is as follows: [!N00 ]hei♯

succ †





[!N10 ]he′ i♯

[!N01 ]he′ i♯

pred



[N11 ] [q11 ]

[q01 ]hi♯ [q10 ]hi♯ [q00 ]hhi♯hii♯ [♭00 ]hhi♯hii♯ [•10 ]hi♯ [•01 ]hi♯ [q01 ]hi♯ [q10 ]hi♯ [♭10 ]hi♯ [♭01 ]hi♯ [♭11 ] Applying the recipe in the proof of Theorem 3.1.9, we define(an algorithm A(succ † ‡ pred ) ⊤ if m = q10 ∨ m = q11 df. df. as follows. Define SA(succ† ‡pred) = {q10 , q11 }, QA(ι‡κ) (m) = , and ⊥ otherwise A(succ † ‡ pred )q11 q10 (resp. A(succ † ‡ pred )q11 ) is obtained from A(succ † )q1 (resp. A(pred )q1 ) just by changing symbols mi into mi0 (resp. mi1 ) in its finite table. It is then clear that A(succ † ‡pred ) achieves the computation in the above diagram, and st(A(succ † ‡ pred )) = succ † ‡ pred holds.

3.2 Examples of atomic strategies

This section presents various examples of consistent and effective “atomic” strategies. Let us remark beforehand that these strategies except the fixed-point strategies fix A are representable by finite view functions; thus, we need the notion of effective strategies only for promotion and fix A . ◮ Remark. When describing strategies below, we usually keep justifiers implicit for brevity as they are always obvious in our examples. ◮ Example 3.2.1. Similarly to zero and succ, we may give an algorithm A(pred ) for the predecessor strategy pred : [N0 ]hei♯ ⇒ [N1 ] defined by: df.

pred =df pref({[q1][q0]⟨⟩♯[•0]⟨⟩♯[q0]⟨⟩♯ ([•0]⟨⟩♯[•1][q1][q0]⟨⟩♯)n [♭0]⟨⟩♯[♭1] | n ∈ N} ∪ {[q1][q0]⟨⟩♯[♭0]⟨⟩♯[♭1]})even,

whose states and query are the same as those for succ. At this point, it suffices to show a similar diagram for A(pred)⋆q1, as it is clear that there is a finite table A(pred)q1 achieving it:

[Diagrams: the four cases of the instruction strategy A(pred)⋆q1, according to whether the last O-move in the m-view is [q1]2, [•0]2 or [♭0]2, and, for the tag-computation, whether the occurrence [q1]0 is initial.]

where [q1]0 ∈ Init (resp. [q1]0 ∉ Init) denotes the move [q1]0 that is (resp. is not) an initial occurrence. Note that it is "effectively computable" to decide whether an occurrence of a move is initial, since it suffices to see if it has a pointer. Clearly st(A(pred)) = pred. Note that A(zero), A(succ) and A(pred) are all standard, and zero, succ and pred are all trivially consistent.

◮ Example 3.2.2. For each game A, we may give an algorithm A(cpA) that realizes the copy-cat strategy cpA : [A0]e ⊸ [A1]e by QcpA(m) =df ⊤ if ⋆ ⊢A [m], and ⊥ otherwise; ScpA =df {m | ⋆ ⊢A [m]};

|A(cpA)m| =df 3 for all m ∈ ScpA, and

A(cpA)m : [qA⊸A]3 ↦ [qA⊸A]2 | [qA⊸A]3[qA⊸A]2[a0]2 ↦ [a1]3 | [qA⊸A]3[qA⊸A]2[a1]2 ↦ [a0]3 | [qT]3 ↦ [qT]2 | [x]3[x]2[y]2 ↦ [y]3 | [x]2[x]3[y]3 ↦ [y]2

for all m ∈ ScpA, where a ∈ π1(MA) and x, y ∈ π1(MG(T)). Accordingly, A(cpA)⋆m is as depicted in the following diagrams:

[Diagrams: the instruction strategy A(cpA)⋆m, relaying each move [a0] to [a1] and vice versa between the two copies of A, and answering the tag-queries [qT] by copy-cat as well.]

Then it is straightforward to see that st(A(cpA)) = cpA holds, showing the effectivity of cpA. Also, cpA is trivially consistent, and A(cpA) is clearly standard. In a completely analogous way, we may show that the dereliction derA : A ⇒ A for each game A is consistent and effective, with a standard algorithm realizing it as well.

◮ Example 3.2.3. Consider the case strategy caseA : [A0]⟨e′′⟩♯f ⇒ [A01]⟨e′⟩♯f ⇒ [2011]⟨e⟩♯ ⇒ [A111]f on each game A defined by

caseA =df pref({[a111][q011]⟨⟩♯[⊤011]⟨⟩♯[a0]⟨⟩♯ · s ∈ PA⇒A⇒2⇒A | [a111][a0]⟨⟩♯ · s ∈ der0A} ∪ {[a111][q011]⟨⟩♯[⊥011]⟨⟩♯[a01]⟨⟩♯ · t ∈ PA⇒A⇒2⇒A | [a111][a01]⟨⟩♯ · t ∈ der01A})even

where der0A : [A0]⟨e′′⟩♯f ⇒ [A111]f and der01A : [A01]⟨e′⟩♯f ⇒ [A111]f are the same as the usual dereliction derA : [A0]⟨e′⟩♯f ⇒ [A1]f up to tags. Since this strategy distinguishes different copies of the symbols |, ♯, ⟨, ⟩, we explicitly write subscripts α ∈ {0, 1}* on them. We give an algorithm A(caseA) that realizes caseA whose states and query are the same as those of A(cpA), and for all m ∈ SA(caseA) the instruction strategy A(caseA)⋆m is as follows (again, since we have described cpA, we skip formally writing down A(caseA)m as it should be clear):

[Diagrams: the instruction strategy A(caseA)⋆m: on the initial [a111] it queries the boolean component [q011]; on the answer [⊤011] (resp. [⊥011]) it plays in [A0] (resp. [A01]); thereafter it behaves as the dereliction der0A (resp. der01A), including on the tag-queries [qT], where the subscripts on |, ♯, ⟨, ⟩ are adjusted accordingly.]

Clearly, st(A(caseA)) = caseA, and so caseA is effective. And again, A(caseA) is clearly standard, and caseA is trivially consistent.
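Extensionally, caseA inspects the boolean component first and then interrogates exactly one of the two A-components; this branching discipline can be mimicked with thunks (a sketch with our own names, not the paper's formal apparatus):

```python
# Sketch of the case strategy's behaviour: first evaluate the boolean
# component, then behave as the dereliction on exactly one of the two
# remaining components. Thunks model the fact that the unused component
# is never interrogated.

def case(boolean_thunk, then_thunk, else_thunk):
    return then_thunk() if boolean_thunk() else else_thunk()

def diverge():
    raise RuntimeError("this branch is never queried")

# Only the selected branch is forced; passing `diverge` for the other
# branch raises no error.
print(case(lambda: True, lambda: 42, diverge))   # 42
print(case(lambda: False, diverge, lambda: 7))   # 7
```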

◮ Example 3.2.4. Consider the ifzero strategy zero? : [N0]⟨e0⟩♯ ⇒ [21] defined by zero? =df pref({[q1][q0]⟨⟩♯[♭0]⟨⟩♯[⊤1], [q1][q0]⟨⟩♯[•0]⟨⟩♯[⊥1]})even, which is trivially consistent. Let us give an algorithm A(zero?) that realizes zero? as follows. Define QA(zero?)(m) =df ⊤ if m = q1, and ⊥ otherwise; SA(zero?) =df {q1}; |A(zero?)q1| =df 3; and the instruction strategy A(zero?)⋆q1 is as depicted in the following diagrams (again, we omit the formal description of A(zero?)q1 as it should be clear at this point):

[Diagrams: the four cases of the instruction strategy A(zero?)⋆q1: it queries [q0] on the initial [q1], answers the tag-queries [qT] on [q1] and on [♭0]/[•0] by copy-cat ending with [X], and answers [⊤1] (resp. [⊥1]) on [♭0] (resp. [•0]).]

We clearly have st(A(zero?)) = zero?, and A(zero?) is standard.

◮ Example 3.2.5. Consider the fixed-point strategy fixA : ([A00]⟨e′⟩♯⟨e⟩♯f ⇒ [A10]⟨e′⟩♯f) ⇒ [A1]f for each game A [AJM00, HO00, McC98]. We have already described fixA informally; here we give a more detailed account, but again, it should suffice to just give diagrams for A(fixA)⋆m (m ∈ SfixA):

[Diagrams: the instruction strategy A(fixA)⋆m, implementing the copy-cat behaviour of fixA: moves are relayed between [A1] and [A10], and between [A10] and [A00], with the inner tags adjusted on the tag-queries [qT]; the pairs of mates ⟨, ⟩ arising in the tags are marked @0, @1, … in the diagrams.]

where @i in the diagrams indicates the pairs ⟨, ⟩ of mates, i.e., the ⟨ and ⟩ marked with the same i form such a pair. Note that, with m-views, there is an obvious finite table A(fixA)m that implements the instruction strategy A(fixA)⋆m. It is then not hard to see that st(A(fixA)) = fixA holds, showing that fixA is effective. Also, it is easy to see that fixA is consistent, and A(fixA) is standard.
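Denotationally, fixA computes least fixed points, and each round of its copy-cat between the inner and outer copies of A corresponds to one unfolding of the recursion. The standard call-by-name fixpoint combinator gives a quick extensional sketch (ours, not the paper's diagrams):

```python
# Sketch: the fixed-point strategy's unfolding, rendered as the standard
# call-by-name fixpoint combinator. Each recursive call corresponds to
# another round of copy-cat between the inner and outer copies of A.

def fix(f):
    return lambda *args: f(fix(f))(*args)

# Example: factorial as a fixed point.
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120
```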

3.3 Turing completeness

In the previous sections, we have seen that every "atomic" strategy definable in the language PCF [AM99] is consistent and realized by a standard algorithm, and that constructions on strategies preserve this property. From this, our main theorem immediately follows:

◮ Theorem 3.3.1 (Main theorem). Every (static) strategy σ : A definable in PCF [AM99] has a consistent and effective strategy φσ : DA that satisfies Hω(φσ) = σ ∧ Hω(DA) E A up to tags.

Proof. First, see [AM99] for the strategies definable in the language PCF (and [YA16] for the (implicitly) underlying bicategory DG of dynamic games and strategies, though it is not strictly necessary). We have already shown in the previous sections that the "atomic" strategies such as derA, succ, pred, zero?, caseA, fixA are all consistent and effective, in particular realized by standard algorithms. Note that projections are derelictions up to internal tags, and so they are clearly consistent and effective. Similarly, the currying Λ and uncurrying Λ−1 operations just modify internal tags; thus, they clearly preserve consistency and effectivity (in particular realizability by standard algorithms) of strategies. In particular, the evaluation strategies evA,B : (A ⇒ B)&A → B are obtained from derelictions by uncurrying, and so they are consistent and effective as well. Now, note that we may enumerate all the strategies definable in PCF by the following inductive construction of a set S of strategies:

1. σ ∈ S if σ : A is "atomic";
2. Λ(σ) ∈ S if σ ∈ S such that σ : G with Hω(G) E A&B → C for some games G, A, B, C;
3. ⟨ϕ, ψ⟩ ∈ S if ϕ, ψ ∈ S such that ϕ : L, ψ : R with Hω(L) E C → A, Hω(R) E C → B for some games L, R, A, B, C;
4. ι†; κ ∈ S if ι, κ ∈ S such that ι : J, κ : K with Hω(J) E A → B, Hω(K) E B → C for some games J, K, A, B, C.

We may assign a consistent and effective strategy φσ to each σ ∈ S as follows: 1. φσ =df σ if σ is "atomic"; 2. φΛ(σ) =df Λ(φσ); 3. φ⟨ϕ,ψ⟩ =df ⟨φϕ, φψ⟩; 4. φι†;κ =df φι† ‡ φκ. By the above argument and Theorem 3.1.9, φσ is in fact consistent and effective for all σ ∈ S. It remains to show Hω(φσ) ≅ σ for all σ ∈ S (it is similar to show the subgame relation), where ≅ denotes equality up to tags. We may show it by induction, with basic results in [YA16]:

1. Hω(φσ) = Hω(σ) = σ if σ is "atomic", since every "atomic" strategy is static;
2. Hω(φΛ(σ)) = Hω(Λ(φσ)) = Λ(Hω(φσ)) ≅ Λ(σ) by the induction hypothesis;
3. Hω(φ⟨ϕ,ψ⟩) = Hω(⟨φϕ, φψ⟩) = ⟨Hω(φϕ), Hω(φψ)⟩ ≅ ⟨ϕ, ψ⟩ by the induction hypothesis;
4. Hω(φι†;κ) = Hω(φι† ‡ φκ) = Hω(φι)†; Hω(φκ) ≅ ι†; κ by the induction hypothesis,

which completes the proof.
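The inductive assignment σ ↦ φσ in the proof can be pictured as a recursion over the four clauses generating S; the sketch below (all class and function names are ours, and strings merely stand in for strategies) mirrors those clauses:

```python
# Schematic recursion mirroring the four clauses of the proof: atomic
# strategies are mapped to themselves, while currying, pairing and
# composition are translated homomorphically — with promotion ( )† and
# concatenation ‡ appearing in the composition clause.

from dataclasses import dataclass

@dataclass
class Atomic:
    name: str

@dataclass
class Cur:            # Λ(σ)
    body: object

@dataclass
class Pair:           # ⟨ϕ, ψ⟩
    left: object
    right: object

@dataclass
class Comp:           # ι† ; κ
    first: object
    second: object

def phi(s):
    """Assign the effective counterpart φσ, clause by clause."""
    if isinstance(s, Atomic):
        return s.name
    if isinstance(s, Cur):
        return f"Λ({phi(s.body)})"
    if isinstance(s, Pair):
        return f"⟨{phi(s.left)}, {phi(s.right)}⟩"
    if isinstance(s, Comp):
        return f"{phi(s.first)}† ‡ {phi(s.second)}"

print(phi(Comp(Atomic("succ"), Atomic("pred"))))  # succ† ‡ pred
```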




Since PCF is Turing complete [Gun92, LN15], this result particularly implies the following:

◮ Corollary 3.3.2 (Turing completeness). Every partial recursive function f : Nk ⇀ N, where k ∈ N, has a consistent and effective strategy φf : Df such that Hω(Df) E Nk ⇒ N and f(n1, n2, …, nk) ≃ Hω(⟨n1, n2, …, nk⟩† ‡ φf) up to tags for all (n1, n2, …, nk) ∈ Nk.
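As a concrete instance of the corollary, addition is definable from the atomic strategies above via the PCF term fix(λg.λm.λn. if zero?(m) then n else succ(g (pred m) n)); its extensional behaviour can be checked with ordinary functions standing in for zero?, pred and succ (explicit Python recursion playing the role of fix):

```python
# Extensional sketch of PCF-definable addition: zero?, pred and succ are
# read as ordinary functions on N, and Python's own recursion plays the
# role of the fixed-point strategy.

succ = lambda n: n + 1
pred = lambda n: n - 1 if n > 0 else 0   # pred(0) = 0, as for the strategy
iszero = lambda n: n == 0

def add(m, n):
    # fix(λg.λm.λn. if zero?(m) then n else succ(g (pred m) n))
    return n if iszero(m) else succ(add(pred(m), n))

print(add(3, 4))  # 7
```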

4 Conclusion and future work

We have presented the first intrinsic notion of "effective computability" in game semantics. Due to its semantic and non-inductive nature, it can be seen as a fundamental investigation of the mathematical notion of "effective computation" beyond classical computation. There are many directions for further work; here we only mention some of them. First, we need to analyze the exact computational power of effective strategies in comparison with other known notions of higher-order computability [LN15]. Also, as an application, the present framework may give an accurate measure for computational complexity [Koz06]. However, the most imminent future work is perhaps, by exploiting the flexibility of game semantics, to enlarge the scope of the present work (i.e., beyond the language PCF) in order to establish a mathematical model of various (constructive) logics and programming languages.

Acknowledgements. The author acknowledges support from the Funai Overseas Scholarship, and he is grateful to Samson Abramsky and Robin Piedeleu for fruitful discussions.

References

[A+97] Samson Abramsky et al. Semantics of Interaction: An Introduction to Game Semantics. In Semantics and Logics of Computation, Publications of the Newton Institute, pages 1–31, 1997.

[Abr14] Samson Abramsky. Intensionality, Definability and Computation. In Johan van Benthem on Logic and Information Dynamics, pages 121–142. Springer, 2014.

[AJ94] Samson Abramsky and Radha Jagadeesan. Games and Full Completeness for Multiplicative Linear Logic. The Journal of Symbolic Logic, 59(2):543–574, 1994.

[AJM00] Samson Abramsky, Radha Jagadeesan, and Pasquale Malacaria. Full Abstraction for PCF. Information and Computation, 163(2):409–470, 2000.

[AM99] Samson Abramsky and Guy McCusker. Game Semantics. In Computational Logic, pages 1–55. Springer, 1999.

[B+84] Hendrik Pieter Barendregt et al. The Lambda Calculus, volume 3. North-Holland, Amsterdam, 1984.

[Chu36] Alonzo Church. An Unsolvable Problem of Elementary Number Theory. American Journal of Mathematics, 58(2):345–363, 1936.

[Chu40] Alonzo Church. A Formulation of the Simple Theory of Types. The Journal of Symbolic Logic, 5(2):56–68, 1940.

[Cur30] Haskell B. Curry. Grundlagen der kombinatorischen Logik. American Journal of Mathematics, 52(4):789–834, 1930.

[Cut80] Nigel Cutland. Computability: An Introduction to Recursive Function Theory. Cambridge University Press, 1980.

[Gun92] Carl A. Gunter. Semantics of Programming Languages: Structures and Techniques. MIT Press, 1992.

[Han94] Chris Hankin. Lambda Calculi: A Guide for the Perplexed. 1994.

[HO00] J. Martin E. Hyland and C.-H. L. Ong. On Full Abstraction for PCF: I, II, and III. Information and Computation, 163(2):285–408, 2000.

[Hyl97] Martin Hyland. Game Semantics. In Semantics and Logics of Computation, 14:131, 1997.

[Koz06] Dexter C. Kozen. Theory of Computation. Springer Science & Business Media, 2006.

[Koz12] Dexter C. Kozen. Automata and Computability. Springer Science & Business Media, 2012.

[LN15] John Longley and Dag Normann. Higher-Order Computability. Springer, 2015.

[McC98] Guy McCusker. Games and Full Abstraction for a Functional Metalanguage with Recursive Types. Springer Science & Business Media, 1998.

[Mit96] John C. Mitchell. Foundations for Programming Languages, volume 1. MIT Press, Cambridge, 1996.

[Nic94] Hanno Nickau. Hereditarily Sequential Functionals. In Logical Foundations of Computer Science, pages 253–264. Springer, 1994.

[Plo77] Gordon D. Plotkin. LCF Considered as a Programming Language. Theoretical Computer Science, 5(3):223–255, 1977.

[RR67] Hartley Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, New York, 1967.

[Sch24] Moses Schönfinkel. Über die Bausteine der mathematischen Logik. Mathematische Annalen, 92(3):305–316, 1924.

[Tur36] Alan Mathison Turing. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(42):230–265, 1936.

[TVD14] Anne Sjerp Troelstra and Dirk van Dalen. Constructivism in Mathematics, volume 2. Elsevier, 2014.

[Win93] Glynn Winskel. The Formal Semantics of Programming Languages: An Introduction. MIT Press, 1993.

[YA16] Norihiro Yamada and Samson Abramsky. Dynamic Games and Strategies. arXiv preprint arXiv:1601.04147, 2016.