Intuitionistic computability logic


arXiv:cs/0411008v3 [cs.LO] 14 Jun 2006

Giorgi Japaridze∗

Abstract. Computability logic (CL) is a systematic formal theory of computational tasks and resources, which, in a sense, can be seen as a semantics-based alternative to (the syntactically introduced) linear logic. With its expressive and flexible language, where formulas represent computational problems and “truth” is understood as algorithmic solvability, CL potentially offers a comprehensive logical basis for constructive applied theories and computing systems inherently requiring constructive and computationally meaningful underlying logics. Among the best known constructivistic logics is Heyting’s intuitionistic calculus INT, whose language can be seen as a special fragment of that of CL. The constructivistic philosophy of INT, however, just like the resource philosophy of linear logic, has never really found an intuitively convincing and mathematically strict semantical justification. CL has good claims to provide such a justification and hence a materialization of Kolmogorov’s well-known thesis “INT = logic of problems”. The present paper contains a soundness proof for INT with respect to the CL semantics.

1 Introduction

Computability logic (CL), introduced recently in [7], is a formal theory of computability in the same sense as classical logic is a formal theory of truth. It understands formulas as (interactive) computational problems, and their “truth” as algorithmic solvability. Computational problems, in turn, are defined as games played by a machine against the environment, with algorithmic solvability meaning existence of a machine that always wins the game. Intuitionistic computability logic is not a modification or version of CL. The latter takes pride in its universal applicability, stability and “immunity to possible future revisions and tampering” ([7], p. 12). Rather, what we refer to as intuitionistic computability logic is just a — relatively modest — fragment of CL, obtained by mechanically restricting its formalism to a special sublanguage. It was conjectured in [7] that the (set of the valid formulas of the) resulting fragment of CL is described by Heyting’s intuitionistic calculus INT. The present paper is devoted to a verification of the soundness part of that conjecture.

Bringing INT and CL together could signify a step forward not only in logic but also in theoretical computer science. INT has long been attracting the attention of computer scientists, and not only due to the beautiful phenomenon within the ‘formulas-as-types’ approach known as the Curry-Howard isomorphism. INT appears to be an appealing alternative to classical logic within the more traditional approaches as well. This is due to the general constructive features of its deductive machinery, and Kolmogorov’s [13] well-known yet so far rather abstract thesis according to which intuitionistic logic is (or should be) a logic of problems. The latter inspired many attempts to find a “problem semantics” for the language of intuitionistic logic [5, 12, 15], none of which, however, has fully succeeded in justifying INT as a logic of problems. Finding a semantical justification for INT was also among the main motivations for Lorenzen [14], who pioneered game-semantical approaches in logic. After a couple of decades of trial and error, the goal of obtaining soundness and completeness of INT with respect to Lorenzen’s game semantics was achieved [3]. The value of such an achievement is, however, dubious, as it came as a result of carefully tuning the semantics and adjusting it to the goal at the cost of sacrificing some natural intuitions that a game semantics could potentially offer.1 After all, some sort of a specially designed technical semantics can be found for virtually every formal system, but the whole question is how natural and usable such a semantics is in its own right.

In contrast, the CL semantics was elaborated without any target deductive construction in mind, following the motto “Axiomatizations should serve meaningful semantics rather than vice versa”. Only retroactively was it observed that the semantics of CL yields logics similar to or identical with some known axiomatically introduced constructivistic logics such as linear logic or INT. Discussions given in [7, 8, 9, 10] demonstrate how naturally the semantics of CL emerges and how much utility it offers, with potential application areas ranging from the pure theory of (interactive) computation to knowledgebase systems, systems for planning and action, and constructive applied theories. As this semantics has well-justified claims to be a semantics of computational problems, the results of the present article speak strongly in favor of Kolmogorov’s thesis, with a promise of a full materialization of the thesis in case a completeness proof of INT is also found.

The main utility of the present result is in the possibility to base applied theories or knowledgebase systems on INT. Nonlogical axioms — or the knowledge base — of such a system would be any collection of (formulas expressing) problems whose algorithmic solutions are known. Then our soundness theorem for INT — which comes in a strong form called uniform-constructive soundness — guarantees that every theorem T of the theory also has an algorithmic solution and, furthermore, that such a solution can be effectively constructed from a proof of T. This makes INT a problem-solving tool: finding a solution for a given problem reduces to finding a proof of that problem in the theory.

It is not an ambition of the present paper to motivationally (re)introduce and (re)justify computability logic and its intuitionistic fragment in particular. This job has been done in [7] and once again — in a more compact way — in [9]. The assumption is that the reader is familiar with at least the motivational/philosophical parts of either paper, and that this is why (s)he decided to read the present article. While helpful for fully understanding the import of the present results, such a familiarity is not, however, necessary from the purely technical point of view, as this paper provides all necessary definitions. Even so, [7] and/or [9] could still help a less advanced reader in getting a better hold of the basic technical concepts. Those papers are written in a semitutorial style, containing ample examples, explanations and illustrations, with [9] even including exercises.

∗ This material is based upon work supported by the National Science Foundation under Grant No. 0208816.
1 Using Blass’s [2] words, ‘Supplementary rules governing repeated attacks and defenses were devised by Lorenzen so that the formulas for which P [proponent] has a winning strategy are exactly the intuitionistically provable ones’. Quoting [6], ‘Lorenzen’s approach describes logical validity exclusively in terms of rules without appealing to any kind of truth values for atoms, and this makes the semantics somewhat vicious ... as it looks like just a “pure” syntax rather than a semantics’.

2 A brief informal overview of some basic concepts

As noted, formulas of CL represent interactive computational problems. Such problems are understood as games between two players: ⊤, called the machine, and ⊥, called the environment. ⊤ is a mechanical device with a fully determined, algorithmic behavior. On the other hand, there are no restrictions on the behavior of ⊥. A problem/game is considered (algorithmically) solvable/winnable iff there is a machine that wins the game no matter how the environment acts.

Logical operators are understood as operations on games/problems. One of the important groups of such operations, called choice operations, consists of ⊓, ⊔ and their quantifier-level counterparts ⊓, ⊔, in our present approach corresponding to the intuitionistic operators of conjunction, disjunction, universal quantifier and existential quantifier, respectively. A1 ⊓ ... ⊓ An is a game where the first legal move (“choice”), which should be one of the elements of {1, ..., n}, is by the environment. After such a move/choice i is made, the play continues and the winner is determined according to the rules of Ai; if a choice is never made, ⊥ loses. A1 ⊔ ... ⊔ An is defined in a symmetric way with the roles of ⊥ and ⊤ interchanged: here it is ⊤ who makes an initial choice and who loses if such a choice is not made. With the universe of discourse being {1, 2, 3, ...}, the meanings of the “big brothers” ⊓ and ⊔ of ⊓ and ⊔ can now be explained by ⊓xA(x) = A(1) ⊓ A(2) ⊓ A(3) ⊓ ... and ⊔xA(x) = A(1) ⊔ A(2) ⊔ A(3) ⊔ ....

The remaining two operators of intuitionistic logic are the binary ◦– (“intuitionistic implication”) and the 0-ary $ (“intuitionistic absurd”), with the intuitionistic negation of F simply understood as an abbreviation for F ◦– $. The intuitive meanings of ◦– and $ are “reduction” (in the weakest possible sense) and “a problem of universal strength”, respectively. In what precise sense $ is a universal-strength problem will be seen in Section 6. As for ◦–, its meaning can be better explained in terms of some other, more basic, operations of CL that have no official intuitionistic counterparts.

One group of such operations comprises negation ¬ and the so-called parallel operations ∧, ∨, →. Applying ¬ to a game A interchanges the roles of the two players: ⊤’s moves and wins become ⊥’s moves and wins, and vice versa. Say, if Chess is the game of chess from the point of view of the white player, then

¬Chess is the same game as seen by the black player. Playing A1 ∧ ... ∧ An (resp. A1 ∨ ... ∨ An) means playing the n games in parallel where, in order to win, ⊤ needs to win in all (resp. at least one) of the components Ai. Back to our chess example, the two-board game Chess ∨ ¬Chess can be easily won by just mimicking in Chess the moves made by the adversary in ¬Chess and vice versa. On the other hand, winning Chess ⊔ ¬Chess is not easy at all: here ⊤ needs to choose between Chess and ¬Chess (i.e. between playing white or black), and then win the chosen one-board game. Technically, a move α in the kth ∧-conjunct or ∨-disjunct is made by prefixing α with ‘k.’. For example, in (the initial position of) (A ⊔ B) ∨ (C ⊓ D), the move ‘2.1’ is legal for ⊥, meaning choosing the first ⊓-conjunct in the second ∨-disjunct of the game. If such a move is made, the game will continue as (A ⊔ B) ∨ C.

One of the features distinguishing CL games from the more traditional concepts of games ([1, 2, 3, 6, 14]) is the absence of procedural rules — rules strictly regulating which of the players can or should move in any given situation. E.g., in the above game (A ⊔ B) ∨ (C ⊓ D), ⊤ also has legal moves — the moves ‘1.1’ and ‘1.2’. In such cases CL allows either player to move, depending on who wants or can act faster.2 As argued in [7] (Section 3), only this “free” approach makes it possible to adequately capture certain natural intuitions such as truly parallel/concurrent computations.

The operation → is defined by A → B = (¬A) ∨ B. Intuitively, this is the problem of reducing B to A: solving A → B means solving B having A as an external computational resource. Resources are symmetric to problems: what is a problem to solve for one player is a resource that the other player can use, and vice versa. Since A is negated in (¬A) ∨ B and negation means switching the roles, A appears as a resource rather than a problem for ⊤ in A → B. To get a feel of → as a problem reduction operation, the following — already “classical” in CL — example may help. Let, for any m, n, Accepts(m, n) mean the game where neither player has legal moves, and which is automatically won by ⊤ if Turing machine m accepts input n, and otherwise automatically lost. Zero-length games of this sort are called elementary in CL, which understands every classical proposition/predicate as an elementary game and vice versa, with “true” = “won by ⊤” and “false” = “lost by ⊤”. Note that then ⊓x⊓y(Accepts(x, y) ⊔ ¬Accepts(x, y)) expresses the acceptance problem as a decision problem: in order to win, the machine should be able to tell whether x accepts y or not (i.e., choose the true disjunct) for any particular values for x and y selected by the environment. This problem is undecidable, which obviously means that there is no machine that (always) wins the game ⊓x⊓y(Accepts(x, y) ⊔ ¬Accepts(x, y)). However, the acceptance problem is known to be algorithmically reducible to the halting problem. The latter can be expressed by ⊓x⊓y(Halts(x, y) ⊔ ¬Halts(x, y)), with the obvious meaning of the elementary game/predicate Halts(x, y). This reducibility translates into our terms as the existence of a machine that wins

⊓x⊓y(Halts(x, y) ⊔ ¬Halts(x, y)) → ⊓x⊓y(Accepts(x, y) ⊔ ¬Accepts(x, y)).   (1)

Such a machine indeed exists. A successful strategy for it is as follows. At the beginning, ⊤ waits till ⊥ specifies some values m and n for x and y in the consequent, i.e. makes the moves ‘2.m’ and ‘2.n’.
Such moves, bringing the consequent down to Accepts(m, n) ⊔ ¬Accepts(m, n), can be seen as asking the question “does machine m accept input n?”. To this question ⊤ replies by the counterquestion “does m halt on n?”, i.e. makes the moves ‘1.m’ and ‘1.n’, bringing the antecedent down to Halts(m, n) ⊔ ¬Halts(m, n). The environment has to correctly answer this counterquestion, or else it loses. If it answers “no” (i.e. makes the move ‘1.2’ and thus further brings the antecedent down to ¬Halts(m, n)), ⊤ also answers “no” to the original question in the consequent (i.e. makes the move ‘2.2’), with the overall game having evolved to the true and hence ⊤-won proposition/elementary game ¬Halts(m, n) → ¬Accepts(m, n). Otherwise, if the environment’s answer is “yes” (move ‘1.1’), ⊤ simulates Turing machine m on input n until it halts, and then makes the move ‘2.1’ or ‘2.2’ depending on whether the simulation accepted or rejected.

2 This is true for the case when the underlying model of computation is the HPM (see Section 5), but seemingly not so when it is the EPM — the model employed in the present paper. It should be remembered, however, that the EPM is viewed as a secondary model in CL, admitted only due to the fact that it has been proven ([7]) to be equivalent to the basic HPM model.
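Since the above strategy is completely mechanical, it may help to see it spelled out in code. The following Python sketch is only an illustration and not part of the paper's formalism: the primitives wait_for_env_move, make_move and simulate are hypothetical stand-ins for the run-tape interface of the machine models defined in Section 5, and moves are encoded as strings following the 'k.' prefixing convention described above.

```python
# Illustrative sketch of the machine's strategy for game (1).
# All three primitives are assumed, not real APIs:
#   wait_for_env_move() -> str   (blocks until the environment moves)
#   make_move(str)               (appends a T-labeled move to the run)
#   simulate(m, n) -> "accept" | "reject"  (runs machine m on input n)

def strategy_for_1(wait_for_env_move, make_move, simulate):
    m = wait_for_env_move().removeprefix("2.")  # environment picks x := m in the consequent
    n = wait_for_env_move().removeprefix("2.")  # environment picks y := n in the consequent
    make_move("1." + m)                         # counterquestion: specify x := m ...
    make_move("1." + n)                         # ... and y := n in the antecedent
    answer = wait_for_env_move()                # environment answers the halting question
    if answer == "1.2":                         # "m does not halt on n"
        make_move("2.2")                        # then m certainly does not accept n
    else:                                       # "1.1": "m halts on n"
        verdict = simulate(m, n)                # safe to simulate: halting is guaranteed,
        make_move("2.1" if verdict == "accept" else "2.2")  # so this call terminates
```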


What makes ◦– so natural is that it captures our intuition of reducing one problem to another in the weakest possible sense. The well-established concept of Turing reduction has the same claim. But the latter is only defined for non-interactive, two-step (question/answer, or input/output) problems, such as the above halting or acceptance problems. When restricted to this sort of problems, as one might expect, ◦– indeed turns out to be equivalent to Turing reduction. The former, however, is more general than the latter as it is applicable to all problems regardless of their forms and degrees of interactivity.

Turing reducibility of a problem B to a problem A is defined as the possibility to algorithmically solve B having an oracle for A. Back to (1), the role of ⊥ in the antecedent is in fact that of an oracle for the halting problem. Notice, however, that the usage of the oracle is limited there, as it can only be employed once: after querying regarding whether m halts on n, the machine would not be able to repeat the same query with different parameters m′ and n′, for that would require two “copies” of ⊓x⊓y(Halts(x, y) ⊔ ¬Halts(x, y)) rather than one. On the other hand, Turing reduction to A and, similarly, our A ◦– ..., allow unlimited and recurring usage of A, which the resource-conscious CL understands as →-reduction not to A but to the stronger problem expressed by ◦|A, called the branching recurrence of A.3

Two more recurrence operations have been introduced within the framework of CL ([9]): parallel recurrence ∧| and sequential recurrence −∧. Common to all of these operations is that, when applied to a resource A, they turn it into a resource that allows one to reuse A an unbounded number of times. The difference is in how “reusage” is exactly understood. Imagine a computer that has a program successfully playing Chess. The resource that such a computer provides is obviously something stronger than just Chess, for it allows one to play Chess as many times as the user wishes, while Chess, as such, only assumes one play. The simplest operating system would allow the user to start a session of Chess, then — after finishing or abandoning and destroying it — start a new play again, and so on. The game that such a system plays — i.e. the resource that it supports/provides — is −∧Chess, which assumes an unbounded number of plays of Chess in a sequential fashion. However, a more advanced operating system would not require the user to destroy the old session(s) before starting a new one; rather, it would allow the user to run as many parallel sessions as needed. This is what is captured by ∧|Chess, meaning nothing but the infinite conjunction Chess ∧ Chess ∧ .... As a resource, ∧|Chess is obviously stronger than −∧Chess as it gives the user more flexibility. But ∧| is still not the strongest form of reusage. A really good operating system would not only allow the user to start new sessions of Chess without destroying old ones; it would also make it possible to branch/replicate each particular session, i.e. create any number of “copies” of any already reached position of the multiple parallel plays of Chess, thus giving the user the possibility to try different continuations from the same position. After analyzing the formal definition of ◦| given in Section 3 — or, better, the explanations provided in Section 13 of [7] — the reader will see that ◦|Chess is exactly what accounts for this sort of a situation. ∧|Chess can then be thought of as a restricted version of ◦|Chess where only the initial position can be replicated.

3 The term “branching recurrence” and the symbols ◦| and ◦– were established in [9]. The earlier paper [7] uses “branching conjunction”, ! and ⇒ instead. In the present paper, ⇒ has a different meaning — that of a separator of the two parts of a sequent.
A well-justified claim can be made that ◦|A captures our strongest possible intuition of “recycling”/“reusing” A. This automatically translates into another claim, according to which A ◦– B, i.e. ◦|A → B, captures our weakest possible — and hence most natural — intuition of reducing B to A. As one may expect, the three concepts of recurrence validate different principles. For example, one can show that the left ⊔- or ⊔-introduction rules of INT, which are sound with A ◦– B understood as ◦|A → B, would fail if A ◦– B were understood as ∧|A → B or −∧A → B. A naive person familiar with linear logic and seeing philosophy-level connections between our recurrence operations and Girard’s [4] storage operator !, might ask which of the three recurrence operations “corresponds” to !. In the absence of a clear resource semantics for linear logic, though, such a question would perhaps not be quite meaningful. Closest to our present approach is that of [1], where Blass proved soundness for the propositional fragment of INT with respect to his semantics, reintroduced 20 years later [2] in the new context of linear logic.

To appreciate the difference between → and ◦–, let us recall the Kolmogorov complexity problem. It can be expressed by ⊓u⊔zK(z, u), where K(z, u) is the predicate “z is the size of the smallest (code of a) Turing machine that returns u on input 1”. Just like the acceptance problem, the Kolmogorov complexity problem has no algorithmic solution but is algorithmically reducible to the halting problem. However, such a reduction can be shown to essentially require recurring usage of the resource ⊓x⊓y(Halts(x, y) ⊔ ¬Halts(x, y)). That is, while the following game is winnable by a machine, it is not so with → instead of ◦–:





⊓x⊓y(Halts(x, y) ⊔ ¬Halts(x, y)) ◦– ⊓u⊔zK(z, u).   (2)

Here is ⊤’s strategy for (2) in relaxed terms. ⊤ waits till ⊥ selects a value m for u in the consequent, thus asking ⊤ the question “what is the Kolmogorov complexity of m?”. After this, starting from i = 1, ⊤ does the following: it creates a new copy of the (original) antecedent, and makes the two moves in it specifying x and y as i and 1, respectively, thus asking the counterquestion “does machine i halt on input 1?”. If ⊥ responds by choosing ¬Halts(i, 1) (“no”), ⊤ increments i by one and repeats the step; otherwise, if ⊥ responds by Halts(i, 1) (“yes”), ⊤ simulates machine i on input 1 until it halts; if it sees that machine i returned m, it makes the move in the consequent specifying z as |i| (here |i| means the size of i, i.e., |i| = log2 i), thus saying that |i| is the Kolmogorov complexity of m; otherwise, it increments i by one and repeats the step.
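As with game (1), this strategy can be rendered as a short loop. The sketch below is again purely illustrative and uses invented primitives: new_antecedent_copy stands for a hypothetical helper that makes a replicative move in the antecedent (creating a fresh copy of the halting resource) and returns a handle for moving and listening in that copy; the move encoding is loose.

```python
import math

def strategy_for_2(wait_for_env_move, new_antecedent_copy, make_move, simulate):
    m = wait_for_env_move()              # environment picks u := m, asking for the
                                         # Kolmogorov complexity of m
    i = 1
    while True:
        copy = new_antecedent_copy()     # replicate the halting resource (replicative move)
        copy.move("x=" + str(i))
        copy.move("y=1")                 # counterquestion: does machine i halt on input 1?
        if copy.wait_answer() == "yes":  # the environment must answer truthfully or lose
            if simulate(i, 1) == m:      # halting is guaranteed, so simulation terminates
                make_move("z=" + str(int(math.log2(i))))  # |i| = log2(i), as in the text
                return
        i += 1                           # answered "no", or wrong output: try next machine
```

Note how the unbounded loop is exactly what ◦– buys over →: each iteration consumes a fresh copy of the antecedent, which a single → would not provide.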

3 Constant games

Now we are getting down to formal definitions of the concepts informally explained in the previous section. Our ultimate concept of games will be defined in the next section in terms of the simpler and more basic class of games called constant games. To define this class, we need some technical terms and conventions. Let us agree that by a move we mean any finite string over the standard keyboard alphabet. One of the non-numeric and non-punctuation symbols of the alphabet, denoted ♠, is designated as a special-status move, intuitively meaning a move that is always illegal to make. A labeled move (labmove) is a move prefixed with ⊤ or ⊥, with its prefix (label) indicating which player has made the move. A run is a (finite or infinite) sequence of labeled moves, and a position is a finite run.

Convention 3.1 We will be exclusively using the letters Γ, Θ, Φ, Ψ, Υ for runs, ℘ for players, α, β, γ for moves, and λ for labmoves. Runs will often be delimited with “⟨” and “⟩”, with ⟨⟩ thus denoting the empty run. The meaning of an expression such as ⟨Φ, ℘α, Γ⟩ must be clear: this is the result of appending to position ⟨Φ⟩ the labmove ⟨℘α⟩ and then the run ⟨Γ⟩. ¬Γ (not to be confused with the same-shape game operation of negation) will mean the result of simultaneously replacing every label ⊤ in every labmove of Γ by ⊥ and vice versa. Another important notational convention is that, for a string/move α, Γ^α means the result of removing from Γ all labmoves except those of the form ℘αβ, and then deleting the prefix ‘α’ in the remaining moves, i.e. replacing each such ℘αβ by ℘β.

The following item is a formal definition of constant games combined with some less formal conventions regarding the usage of certain terminology.

Definition 3.2 A constant game is a pair A = (Lr^A, Wn^A), where:
1. Lr^A is a set of runs not containing (whatever-labeled) move ♠, satisfying the condition that a (finite or infinite) run is in Lr^A iff all of its nonempty finite — not necessarily proper — initial segments are in Lr^A (notice that this implies ⟨⟩ ∈ Lr^A). The elements of Lr^A are said to be legal runs of A, and all other runs are said to be illegal. We say that α is a legal move for ℘ in a position Φ of A iff ⟨Φ, ℘α⟩ ∈ Lr^A; otherwise α is illegal. When the last move of the shortest illegal initial segment of Γ is ℘-labeled, we say that Γ is a ℘-illegal run of A.
2. Wn^A is a function that sends every run Γ to one of the players ⊤ or ⊥, satisfying the condition that if Γ is a ℘-illegal run of A, then Wn^A⟨Γ⟩ ≠ ℘. When Wn^A⟨Γ⟩ = ℘, we say that Γ is a ℘-won (or won by ℘) run of A; otherwise Γ is lost by ℘. Thus, an illegal run is always lost by the player who has made the first illegal move in it.

Definition 3.3 Let A, B, A1, A2, ... be constant games, and n ∈ {2, 3, ...}.
1. ¬A is defined by: Γ ∈ Lr^¬A iff ¬Γ ∈ Lr^A; Wn^¬A⟨Γ⟩ = ⊤ iff Wn^A⟨¬Γ⟩ = ⊥.
2. A1 ⊓ ... ⊓ An is defined by: Γ ∈ Lr^{A1⊓...⊓An} iff Γ = ⟨⟩ or Γ = ⟨⊥i, Θ⟩, where i ∈ {1, ..., n} and Θ ∈ Lr^{Ai}; Wn^{A1⊓...⊓An}⟨Γ⟩ = ⊥ iff Γ = ⟨⊥i, Θ⟩, where i ∈ {1, ..., n} and Wn^{Ai}⟨Θ⟩ = ⊥.
3. A1 ∧ ... ∧ An is defined by: Γ ∈ Lr^{A1∧...∧An} iff every move of Γ starts with ‘i.’ for one of the i ∈ {1, ..., n} and, for each such i, Γ^{i.} ∈ Lr^{Ai}; whenever Γ ∈ Lr^{A1∧...∧An}, Wn^{A1∧...∧An}⟨Γ⟩ = ⊤ iff, for each i ∈ {1, ..., n}, Wn^{Ai}⟨Γ^{i.}⟩ = ⊤.
4. A1 ⊔ ... ⊔ An and A1 ∨ ... ∨ An are defined exactly as A1 ⊓ ... ⊓ An and A1 ∧ ... ∧ An, respectively, only with “⊤” and “⊥” interchanged. And A → B is defined as (¬A) ∨ B.
5. The infinite ⊓-conjunction A1 ⊓ A2 ⊓ ... is defined exactly as A1 ⊓ ... ⊓ An, only with “i ∈ {1, 2, ...}” instead of “i ∈ {1, ..., n}”. Similarly for the infinite versions of ⊔, ∧, ∨.
6. In addition to the earlier-established meaning, the symbols ⊤ and ⊥ also denote two special — simplest — games, defined by Lr^⊤ = Lr^⊥ = {⟨⟩}, Wn^⊤⟨⟩ = ⊤ and Wn^⊥⟨⟩ = ⊥.

An important operation not explicitly mentioned in Section 2 is what is called prefixation. This operation takes two arguments: a constant game A and a position Φ that must be a legal position of A (otherwise the operation is undefined), and returns the game ⟨Φ⟩A. Intuitively, ⟨Φ⟩A is the game playing which means playing A starting (continuing) from position Φ. That is, ⟨Φ⟩A is the game to which A evolves (will be “brought down”) after the moves of Φ have been made. We have already used this intuition when explaining the meaning of choice operations in Section 2: we said that after ⊥ makes an initial move i ∈ {1, ..., n}, the game A1 ⊓ ... ⊓ An continues as Ai. What this meant was nothing but that ⟨⊥i⟩(A1 ⊓ ... ⊓ An) = Ai. Similarly, ⟨⊤i⟩(A1 ⊔ ... ⊔ An) = Ai. Here is the definition of prefixation:

Definition 3.4 Let A be a constant game and Φ a legal position of A. The game ⟨Φ⟩A is defined by: Lr^{⟨Φ⟩A} = {Γ | ⟨Φ, Γ⟩ ∈ Lr^A}; Wn^{⟨Φ⟩A}⟨Γ⟩ = Wn^A⟨Φ, Γ⟩.

The operation ◦| is somewhat more complex and its definition relies on certain additional conventions. We will be referring to (possibly infinite) strings of 0s and 1s as bit strings, using the letters w, u as metavariables for them. The expression wu, meaningful only when w is finite, will stand for the concatenation of strings w and u. We write w ⪯ u to mean that w is a (not necessarily proper) initial segment of u. The letter ε will exclusively stand for the empty bit string.

Convention 3.5 By a tree we mean a nonempty set T of bit strings, called branches of the tree, such that, for every w, u, we have: (a) if w ∈ T and u ⪯ w, then u ∈ T; (b) w0 ∈ T iff w1 ∈ T; (c) if w is infinite and every finite u with u ⪯ w is in T, then w ∈ T. Note that T is finite iff every branch of it is finite. A complete branch of T is a branch w such that no bit string u with w ⪯ u ≠ w is in T. Finite branches are called nodes, and complete finite branches are called leaves.

Definition 3.6 We define the notion of a prelegal position, together with the function Tree that associates a finite tree Tree⟨Φ⟩ with each prelegal position Φ, by the following induction:
1. ⟨⟩ is a prelegal position, and Tree⟨⟩ = {ε}.
2. ⟨Φ, λ⟩ is a prelegal position iff Φ is so and one of the following two conditions is satisfied:
(a) λ = ⊥w: for some leaf w of Tree⟨Φ⟩. We call this sort of a labmove λ replicative. In this case Tree⟨Φ, λ⟩ = Tree⟨Φ⟩ ∪ {w0, w1}.
(b) λ is ⊥w.α or ⊤w.α for some node w of Tree⟨Φ⟩ and move α. We call this sort of a labmove λ nonreplicative. In this case Tree⟨Φ, λ⟩ = Tree⟨Φ⟩.
The terms “replicative” and “nonreplicative” also extend from labmoves to moves. When a run Γ is infinite, it is considered prelegal iff all of its finite initial segments are so. For such a Γ, the value of Tree⟨Γ⟩ is the smallest tree such that, for every finite initial segment Φ of Γ, Tree⟨Φ⟩ ⊆ Tree⟨Γ⟩.
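Before moving on, it may help to see Definitions 3.2-3.4 rendered operationally. The Python sketch below uses an encoding invented for this illustration (it is not part of the paper): only the finite fragment of Lr is modeled, the ℘-illegality bookkeeping of clause 2 of Definition 3.2 is omitted for brevity, and choice disjunction is included as a sample instance of Definition 3.3.

```python
# A minimal sketch: a constant game as a legality test on finite runs
# plus a win function, under an invented encoding.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Labmove = Tuple[str, str]                # (player, move); player "T" (⊤) or "B" (⊥)
Run = List[Labmove]

@dataclass
class ConstantGame:
    is_legal: Callable[[Run], bool]      # Lr, restricted to finite runs
    winner: Callable[[Run], str]         # Wn, queried on legal finite runs only

def prefixation(phi: Run, a: ConstantGame) -> ConstantGame:
    """<Phi>A of Definition 3.4: play A continuing from legal position Phi."""
    assert a.is_legal(phi), "prefixation is undefined on illegal positions"
    return ConstantGame(is_legal=lambda g: a.is_legal(phi + g),
                        winner=lambda g: a.winner(phi + g))

def choice_disjunction(b1: ConstantGame, b2: ConstantGame) -> ConstantGame:
    """B1 ⊔ B2 (Definition 3.3): ⊤ opens by choosing a component, "1" or "2"."""
    def is_legal(run: Run) -> bool:
        if not run:
            return True
        (p, move), rest = run[0], run[1:]
        return p == "T" and move in ("1", "2") and \
               (b1 if move == "1" else b2).is_legal(rest)
    def winner(run: Run) -> str:
        if not run:
            return "B"                   # ⊤ loses if it never makes its choice
        (_, move), rest = run[0], run[1:]
        return (b1 if move == "1" else b2).winner(rest)
    return ConstantGame(is_legal, winner)
```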

4. A1 ⊔ . . . ⊔ An and A1 ∨ . . . ∨ An are defined exactly as A1 ⊓ . . . ⊓ An and A1 ∧ . . . ∧ An , respectively, only with “⊤” and “⊥” interchanged. And A → B is defined as (¬A) ∨ B. 5. The infinite ⊓-conjunction A1 ⊓ A2 ⊓ . . . is defined exactly as A1 ⊓ . . . ⊓ An , only with “i ∈ {1, 2, . . .}” instead of “i ∈ {1, . . . , n}”. Similarly for the infinite versions of ⊔, ∧, ∨. 6. In addition to the earlier-established meaning, the symbols ⊤ and ⊥ also denote two special — simplest — games, defined by Lr⊤ = Lr⊥ = {hi}, Wn⊤ hi = ⊤ and Wn⊥ hi = ⊥. An important operation not explicitly mentioned in Section 2 is what is called prefixation. This operation takes two arguments: a constant game A and a position Φ that must be a legal position of A (otherwise the operation is undefined), and returns the game hΦiA. Intuitively, hΦiA is the game playing which means playing A starting (continuing) from position Φ. That is, hΦiA is the game to which A evolves (will be “brought down”) after the moves of Φ have been made. We have already used this intuition when explaining the meaning of choice operations in Section 2: we said that after ⊥ makes an initial move i ∈ {1, . . . , n}, the game A1 ⊓ . . . ⊓ An continues as Ai . What this meant was nothing but that h⊥ii(A1 ⊓ . . . ⊓ An ) = Ai . Similarly, h⊤ii(A1 ⊔ . . . ⊔ An ) = Ai . Here is the definition of prefixation: Definition 3.4 Let A be a constant game and Φ a legal position of A. The game hΦiA is defined by: LrhΦiA = {Γ | hΦ, Γi ∈ LrA }; WnhΦiA hΓi = WnA hΦ, Γi. The operation ◦| is somewhat more complex and its definition relies on certain additional conventions. We will be referring to (possibly infinite) strings of 0s and 1s as bit strings, using the letters w, u as metavariables for them. The expression wu, meaningful only when w is finite, will stand for the concatenation of strings w and u. We write w  u to mean that w is a (not necessarily proper) initial segment of u. The letter ǫ will exclusively stand for the empty bit string. Convention 3.5 By a tree we mean a nonempty set T of bit strings, called branches of the tree, such that, for every w, u, we have: (a) if w ∈ T and u  w, then u ∈ T ; (b) w0 ∈ T iff w1 ∈ T ; (c) if w is infinite and every finite u with u  w is in T , then w ∈ T . Note that T is finite iff every branch of it is finite. A complete branch of T is a branch w such that no bit string u with w  u 6= w is in T . Finite branches are called nodes, and complete finite branches called leaves. Definition 3.6 We define the notion of a prelegal position, together with the function Tree that associates a finite tree TreehΦi with each prelegal position Φ, by the following induction: 1. hi is a prelegal position, and Treehi = {ǫ}. 2. hΦ, λi is a prelegal position iff Φ is so and one of the following two conditions is satisfied: (a) λ = ⊥w: for some leaf w of TreehΦi. We call this sort of a labmove λ replicative. In this case TreehΦ, λi = TreehΦi ∪ {w0, w1}. (b) λ is ⊥w.α or ⊤w.α for some node w of TreehΦi and move α. We call this sort of a labmove λ nonreplicative. In this case TreehΦ, λi = TreehΦi. The terms “replicative” and “nonreplicative” also extend from labmoves to moves. When a run Γ is infinite, it is considered prelegal iff all of its finite initial segments are so. For such a Γ, the value of TreehΓi is the smallest tree such that, for every finite initial segment Φ of Γ, TreehΦi ⊆ TreehΓi. Convention 3.7 Let u be a bit string and Γ any run. 
Convention 3.7 Let u be a bit string and Γ any run. Then Γ^u will stand for the result of first removing from Γ all labmoves except those that look like ℘w.α for some bit string w with w ⪯ u, and then deleting this sort of prefixes ‘w.’ in the remaining labmoves, i.e. replacing each remaining labmove ℘w.α (where w is a bit string) by ℘α. Example: if u = 101000... and Γ = ⟨⊤ε.α1, ⊥:, ⊥1.α2, ⊤0.α3, ⊥1:, ⊤10.α4⟩, then Γ^u = ⟨⊤α1, ⊥α2, ⊤α4⟩.

Definition 3.8 Let A be a constant game. The game ◦|A is defined by:

1. A position Φ is in Lr^{◦|A} iff Φ is prelegal and, for every leaf w of Tree⟨Φ⟩, Φ^w ∈ Lr^A.
2. As long as Γ ∈ Lr^{◦|A}, Wn^{◦|A}⟨Γ⟩ = ⊤ iff Wn^A⟨Γ^u⟩ = ⊤ for every infinite bit string u.4

Next, we officially reiterate the earlier-given definition of ◦– by stipulating that A ◦– B =def ◦|A → B.

Remark 3.9 Intuitively, a legal run Γ of ◦|A can be thought of as a multiset Z of parallel legal runs of A. Specifically, Z = {Γ^u | u is a complete branch of Tree⟨Γ⟩}, with complete branches of Tree⟨Γ⟩ thus acting as names for — or “representing” — elements of Z. In order for ⊤ to win, every run from Z should be a ⊤-won run of A. The runs from Z typically share some common initial segments and, put together, can be seen as forming a tree of labmoves, with Tree⟨Γ⟩ — that we call the underlying tree-structure of Z — in a sense presenting the shape of that tree. The meaning of a replicative move w: — making which is an exclusive privilege of ⊥ — is creating in (the evolving) Z two copies of position Γ^w out of one. And the meaning of a nonreplicative move w.α is making move α in all positions Γ^u of (the evolving) Z with w ⪯ u. This is a brutally brief explanation, of course. The reader may find it very helpful to see Section 13 of [7] for detailed explanations and illustrations of the intuitions associated with our ◦|-related formal concepts.5

4 For reasons pointed out on page 39 of [7], the phrase “for every infinite bit string u” here can be equivalently replaced by “for every complete branch u of Tree⟨Γ⟩”. Similarly, in clause 1, “every leaf w of Tree⟨Φ⟩” can be replaced by “every infinite bit string w”.
5 A few potentially misleading typos have been found in Section 13 (and beyond) of [7]. The current erratum note is maintained at http://www.csc.villanova.edu/~japaridz/CL/erratum.pdf
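The projection Γ^u of Convention 3.7 is simple enough to compute directly. In the toy encoding below (an illustration only, continuing the encoding used earlier), a bit string is a Python string of 0s and 1s, ε is the empty string, replicative moves are strings ending in ':', and nonreplicative moves have the form 'w.alpha'.

```python
def project(run, u: str):
    """Gamma^u: the run of A that branch u 'sees' inside a run of ◦|A."""
    out = []
    for player, move in run:
        if move.endswith(":"):            # replicative move: contributes nothing
            continue
        w, alpha = move.split(".", 1)     # nonreplicative move "w.alpha"
        if u.startswith(w):               # keep it iff w is an initial segment of u
            out.append((player, alpha))
    return out

# The example after Convention 3.7, with ε encoded as "" (a finite prefix of
# u = 101000... suffices here, since all nodes in the run are short):
run = [("T", ".a1"), ("B", ":"), ("B", "1.a2"),
       ("T", "0.a3"), ("B", "1:"), ("T", "10.a4")]
assert project(run, "101000") == [("T", "a1"), ("B", "a2"), ("T", "a4")]
```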

4 Not-necessarily-constant games

Constant games can be seen as generalized propositions: while propositions in classical logic are just elements of {⊤, ⊥}, constant games are functions from runs to {⊤, ⊥}. As we know, however, propositions only offer a very limited expressive power, and classical logic needs to consider the more general concept of predicates, with propositions being nothing but special — constant — cases of predicates. The situation in CL is similar. Our concept of (simply) game generalizes that of constant game in the same sense as the classical concept of predicate generalizes that of proposition.

Let us fix two infinite sets of expressions: the set {v1, v2, ...} of variables and the set {1, 2, ...} of constants. Without loss of generality here we assume that the above collection of constants is exactly the universe of discourse — i.e. the set over which the variables range — in all cases that we consider. By a valuation we mean a function e that sends each variable x to a constant e(x). In these terms, a classical predicate P can be understood as a function that sends each valuation e to a proposition, i.e. constant predicate. Similarly, what we call a game sends valuations to constant games:

Definition 4.1 A game is a function A from valuations to constant games. We write e[A] (rather than A(e)) to denote the constant game returned by A for valuation e. Such a constant game e[A] is said to be an instance of A. For readability, we often write Lr^A_e and Wn^A_e instead of Lr^{e[A]} and Wn^{e[A]}.

Just as this is the case with propositions versus predicates, constant games in the sense of Definition 3.2 will be thought of as special, constant cases of games in the sense of Definition 4.1. In particular, each constant game A′ is the game A such that, for every valuation e, e[A] = A′. From now on we will no longer distinguish between such A and A′, so that, if A is a constant game, it is its own instance, with A = e[A] for every e. We say that a game A depends on a variable x iff there are two valuations e1, e2 that agree on all variables except x such that e1[A] ≠ e2[A]. Constant games thus do not depend on any variables.

Just as the Boolean operations straightforwardly extend from propositions to all predicates, our operations ¬, ∧, ∨, →, ⊓, ⊔, ◦|, ◦– extend from constant games to all games. This is done by simply stipulating that e[...] commutes with all of those operations: ¬A is the game such that, for every e, e[¬A] = ¬e[A]; A ⊓ B is the game such that, for every e, e[A ⊓ B] = e[A] ⊓ e[B]; etc. To generalize the standard operation of substitution of variables to games, let us agree that by a term we mean either a variable or a constant; the domain of each valuation e is extended to all terms by stipulating that, for any constant c, e(c) = c.


Definition 4.2 Let A be a game, x1, ..., xn pairwise distinct variables, and t1, ..., tn any (not necessarily distinct) terms. The result of substituting x1, ..., xn by t1, ..., tn in A, denoted A(x1/t1, ..., xn/tn), is defined by stipulating that, for every valuation e, e[A(x1/t1, ..., xn/tn)] = e′[A], where e′ is the valuation for which we have e′(x1) = e(t1), ..., e′(xn) = e(tn) and, for every variable y ∉ {x1, ..., xn}, e′(y) = e(y).

Intuitively A(x1/t1, ..., xn/tn) is A with x1, ..., xn remapped to t1, ..., tn, respectively. Following the standard readability-improving practice established in the literature for predicates, we will often fix a tuple (x1, ..., xn) of pairwise distinct variables for a game A and write A as A(x1, ..., xn). It should be noted that, when doing so, we by no means imply that x1, ..., xn are all of (or the only) variables on which A depends. Representing A in the form A(x1, ..., xn) sets a context in which we can write A(t1, ..., tn) to mean the same as the more clumsy expression A(x1/t1, ..., xn/tn). In the above terms, we now officially reiterate the earlier-given definitions of the two main quantifier-style operations ⊓ and ⊔: ⊓xA(x) =def A(1) ⊓ A(2) ⊓ A(3) ⊓ ... and ⊔xA(x) =def A(1) ⊔ A(2) ⊔ A(3) ⊔ ....
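For readers who think operationally, here is a small sketch of Definition 4.2 under an assumed encoding (not the paper's): a game is a Python function from valuations to constant games, a valuation is a dict from variable names to constants, and ints play the role of constants.

```python
def ev(e, t):
    """Extend a valuation to terms: constants evaluate to themselves."""
    return t if isinstance(t, int) else e[t]

def substitute(A, pairs):
    """A(x1/t1, ..., xn/tn): remap the listed variables before applying A."""
    def A_sub(e):
        e_prime = dict(e)
        for x, t in pairs:
            e_prime[x] = ev(e, t)        # e'(xi) = e(ti); other variables unchanged
        return A(e_prime)
    return A_sub
```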

5 Computational problems and their algorithmic solvability

Our games are obviously general enough to model anything that one would call a (two-agent, two-outcome) interactive problem. However, they are a bit too general. There are games where a player’s chances of success essentially depend on the relative speed at which its adversary acts. A simple example would be a game where both players have a legal move in the initial position, and which is won by the player who moves first. CL does not want to consider this sort of games meaningful computational problems. Definition 4.2 of [7] imposes a simple condition on games and calls games satisfying that condition static. We are not reproducing that definition here as it is not relevant for our purposes. It should, however, be mentioned that, according to one of the theses on which the philosophy of CL relies, the concept of static games is an adequate formal counterpart of our intuitive concept of “pure”, speed-independent interactive computational problems. All meaningful and reasonable examples of games — including all elementary games — are static, and the class of static games is closed under all of the game operations that we have seen (Theorem 14.1 of [7]). Let us agree that from now on the term “computational problem”, or simply “problem”, is a synonym of “static game”.

Now it is time to explain what computability of such problems means. The definitions given in this section are semiformal. The omitted technical details are rather standard or irrelevant and can be easily restored by anyone familiar with Turing machines. If necessary, the corresponding detailed definitions can be found in Part II of [7].

[7] defines two models of interactive computation, called the hard-play machine (HPM) and the easy-play machine (EPM). Both are sorts of Turing machines with the capability of making moves, and have three tapes: the ordinary read/write work tape, and the read-only valuation and run tapes. The valuation tape contains a full description of some valuation e (say, by listing the values of e at v1, v2, ...), and its content remains fixed throughout the work of the machine. As for the run tape, it serves as a dynamic input, at any time spelling the current position, i.e. the sequence of the (lab)moves made by the two players so far. So, every time one of the players makes a move, that move — with the corresponding label — is automatically appended to the content of this tape. In the HPM model, there is no restriction on the frequency of the environment’s moves. In the EPM model, on the other hand, the machine has full control over the speed of its adversary: the environment can (but is not obligated to) make a (single) move only when the machine explicitly allows it to do so — an event that we call granting permission. The only “fairness” requirement that such a machine is expected to satisfy is that it should grant permission every once in a while; how long that “while” lasts, however, is totally up to the machine. The HPM and EPM models seem to be two extremes, yet, according to Theorem 17.2 of [7], they yield the same class of winnable static games. The present paper will only deal with the EPM model, so let us take a little closer look at it.

Let M be an EPM.
A configuration of M is defined in the standard way: this is a full description of the (“current”) state of the machine, the locations of its three scanning heads and the contents of its tapes, with the exception that, in order to make finite descriptions of configurations possible, we do not formally include a description of the unchanging (and possibly essentially infinite) content of the valuation tape as a part of configuration, but rather account for it in our definition of computation branch as will be seen below. The initial configuration is the configuration where M is in its start state and the work and run tapes are empty. A configuration C′ is said to be an e-successor of configuration C in M if, when valuation e is spelled on the valuation tape, C′ can legally follow C in the standard — standard for multitape Turing machines — sense, based on the transition function (which we assume to be deterministic) of the machine and accounting for the possibility of nondeterministic updates — depending on what move ⊥ makes or whether it makes a move at all — of the content of the run tape when M grants permission. Technically, granting permission happens by entering one of the specially designated states called “permission states”. An e-computation branch of M is a sequence of configurations of M where the first configuration is the initial configuration and every other configuration is an e-successor of the previous one. Thus, the set of all e-computation branches captures all possible scenarios (on valuation e) corresponding to different behaviors by ⊥. Such a branch is said to be fair iff permission is granted infinitely many times in it. Each e-computation branch B of M incrementally spells — in the obvious sense — a run Γ on the run tape, which we call the run spelled by B. Then, for a game A we write M |=e A to mean that, whenever Γ is the run spelled by some e-computation branch B of M and Γ is not ⊥-illegal, then branch B is fair and Wn^A_e⟨Γ⟩ = ⊤. We write M |= A and say that M computes (solves, wins) A iff M |=e A for every valuation e. Finally, we write |= A and say that A is computable iff there is an EPM M with M |= A.
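The following toy loop is not the paper's formal EPM definition, but it may help convey the division of labor it describes: the machine computes step by step, and the environment gets to move only at the steps where the machine grants permission. All names here are invented for the illustration, and the finite step bound stands in for what is really an infinite (and, for winning machines, fair) computation branch.

```python
class EPMStrategy:
    def step(self, position):
        """One computation step. Return ("move", alpha) to make move alpha,
        ("grant",) to let the environment make at most one move, or ("work",)."""
        raise NotImplementedError

def play(strategy, environment, steps):
    position = []                        # the run tape: labmoves made so far
    for _ in range(steps):
        action = strategy.step(position)
        if action[0] == "move":
            position.append(("T", action[1]))
        elif action[0] == "grant":       # a real EPM must grant infinitely often
            alpha = environment(position)   # the environment may decline (None)
            if alpha is not None:
                position.append(("B", alpha))
    return position
```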

6 The language of INT and the extended language

As mentioned, the language of intuitionistic logic can be seen as a fragment of that of CL. The main building blocks of the language of INT are infinitely many problem letters, or letters for short, for which we use P, Q, R, S, ... as metavariables. They are what in classical logic are called ‘predicate letters’, and what CL calls ‘general letters’. With each letter is associated a nonnegative integer called its arity. $ is one of the letters, with arity 0. We refer to it as the logical letter, and call all other letters nonlogical. The language also contains infinitely many variables and constants — exactly the ones fixed in Section 4. “Term” also has the same meaning as before. An atom is P(t1, ..., tn), where P is an n-ary letter and the ti are terms. Such an atom is said to be P-based. If here each term ti is a constant, we say that P(t1, ..., tn) is grounded. A P-based atom is n-ary, logical, nonlogical etc. iff P is so. When P is 0-ary, we write P instead of P().

INT-formulas are the elements of the smallest class of expressions such that all atoms are INT-formulas and, if F, F1, ..., Fn (n ≥ 2) are INT-formulas and x is a variable, then the following expressions are also INT-formulas: (F1) ◦– (F2), (F1) ⊓ ... ⊓ (Fn), (F1) ⊔ ... ⊔ (Fn), ⊓x(F), ⊔x(F). Officially there is no negation in the language of INT. Rather, the intuitionistic negation of F is understood as F ◦– $.

In this paper we also employ a more expressive formalism that we call the extended language. The latter has the additional connectives ⊤, ⊥, ¬, ∧, ∨, →, ◦| on top of those of the language of INT, extending the above formation rules by adding the clause that ⊤, ⊥, ¬F, (F1) ∧ ... ∧ (Fn), (F1) ∨ ... ∨ (Fn), (F1) → (F2) and ◦|(F) are formulas as long as F, F1, ..., Fn are so. ⊤ and ⊥ count as logical atoms. Henceforth by (simply) “formula”, unless otherwise specified, we mean a formula of the extended language. Parentheses will often be omitted in formulas when this causes no ambiguity.

With ⊓ and ⊔ being quantifiers, the definitions of free and bound occurrences of variables are standard. In concordance with a similar notational convention for games on which we agreed in Section 4, sometimes a formula F will be represented as F(x1, ..., xn), where the xi are pairwise distinct variables. When doing so, we do not necessarily mean that each such xi has a free occurrence in F, or that every variable occurring free in F is among x1, ..., xn. In the context set by the above representation, F(t1, ..., tn) will mean the result of replacing, in F, each free occurrence of each xi (1 ≤ i ≤ n) by term ti. In case each ti is a variable yi, it may not be clear whether F(x1, ..., xn) or F(y1, ..., yn) was originally meant to represent F in a given context. Our disambiguating convention is that the context is set by the expression that was used earlier. That is, when we first mention F(x1, ..., xn) and only after that use the expression F(y1, ..., yn), the latter should be understood as the result of replacing variables in the former rather than vice versa.

Let x be a variable, t a term and F(x) a formula. t is said to be free for x in F(x) iff none of the free occurrences of x in F(x) is in the scope of ⊓t or ⊔t. Of course, when t is a constant, this condition is always satisfied.

An interpretation is a function ∗ that sends each n-ary letter P to a static game P∗ = P∗(x1, ..., xn), where the xi are pairwise distinct variables. This function induces a unique mapping that sends each formula F to a game F∗ (in which case we say that ∗ interprets F as F∗ and that F∗ is the interpretation of F under ∗) by stipulating that:
1. Where P is an n-ary letter with P∗ = P∗(x1, ..., xn) and t1, ..., tn are terms, (P(t1, ..., tn))∗ = P∗(t1, ..., tn).
2. ∗ commutes with all operators: ⊤∗ = ⊤, (F ◦– G)∗ = F∗ ◦– G∗, (F1 ∧ ... ∧ Fn)∗ = F1∗ ∧ ... ∧ Fn∗, (⊓xF)∗ = ⊓x(F∗), etc.
When a given formula is represented as F(x1, ..., xn), we will typically write F∗(x1, ..., xn) instead of (F(x1, ..., xn))∗.

For a formula F, we say that an interpretation ∗ is admissible for F, or simply F-admissible, iff the following conditions are satisfied:
1. For every n-ary letter P occurring in F, where P∗ = P∗(x1, ..., xn), the game P∗(x1, ..., xn) does not depend on any variables that are not among x1, ..., xn but occur (whether free or bound) in F.
2. $∗ = B ⊓ F1∗ ⊓ F2∗ ⊓ ..., where B is an arbitrary problem and F1, F2, ... is the lexicographic list of all grounded nonlogical atoms of the language.

Speaking philosophically, an admissible interpretation ∗ interprets $ as a “strongest problem”: the interpretation of every grounded atom and hence — according to Lemma 10.5 — of every formula is reducible to $∗, and reducible in a certain uniform, interpretation-independent way. Viewing $∗ as a resource, it can be seen as a universal resource that allows its owner to solve any problem. Our choice of the dollar notation here is no accident: money is an illustrative example of an all-powerful resource in a world where everything can be bought. “Strongest”, “universal” or “all-powerful” do not necessarily mean “impossible”. So, the intuitionistic negation F ◦– $ of F here does not have the traditional “F∗ is absurd” meaning. Rather, it means that F∗ (too) is of universal strength. Turing completeness, NP-completeness and similar concepts are good examples of “being of universal strength”.

$∗ is what [7] calls a standard universal problem of the universe ⟨F1∗, F2∗, ...⟩. Briefly, a universal problem of a universe (sequence) ⟨A1, A2, ...⟩ of problems is a problem U such that |= U → A1 ⊓ A2 ⊓ ... and hence |= U ◦– A1 ⊓ A2 ⊓ ..., intuitively meaning a problem to which each Ai is reducible. For every B, the problem U = B ⊓ A1 ⊓ A2 ⊓ ... satisfies this condition, and universal problems of this particular form are called standard. Every universal problem U of a given universe can be shown to be equivalent to a standard universal problem U′ of the same universe, in the sense that |= U ◦– U′ and |= U′ ◦– U. And all of the operators of INT can be shown to preserve equivalence. Hence, restricting universal problems to standard ones does not result in any loss of generality: a universal problem can always be safely assumed to be standard. See Section 23 of [7] for an extended discussion of the philosophy and intuitions associated with universal problems. Here we only note that interpreting $ as a universal problem rather than (as one might expect) as ⊥ yields more generality, for ⊥ is nothing but a special, extreme case of a universal problem. Our soundness theorem for INT, of course, continues to hold with ⊥ instead of $.

Let F be a formula. We write ⊢⊢F and say that F is valid iff |= F∗ for every F-admissible interpretation ∗. For an EPM E, we write E⊢⊢⊢F and say that E is a uniform solution for F iff E |= F∗ for every F-admissible interpretation ∗. Finally, we write ⊢⊢⊢F and say that F is uniformly valid iff there is a uniform solution for F. Note that uniform validity automatically implies validity but not vice versa. Yet, these two concepts have been conjectured to be extensionally equivalent (Conjecture 26.2 of [7]).
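The way an interpretation ∗ extends from letters to all formulas is a plain structural recursion, which the following sketch makes explicit. The tuple-based formula representation and the ops table are assumptions of this illustration, not notation from the paper; the game operations themselves are taken as given.

```python
def interpret(formula, star, ops):
    """Map a formula to a game. star: letter -> game; ops: operator name ->
    operation on games, plus an "apply_terms" entry for instantiating letters."""
    tag = formula[0]
    if tag == "atom":                    # e.g. ("atom", "P", ("t1", "t2"))
        _, letter, terms = formula
        return ops["apply_terms"](star[letter], terms)   # P*(t1, ..., tn)
    op, *subformulas = formula           # e.g. ("->", F, G) or ("sqcap", F1, F2)
    return ops[op](*[interpret(g, star, ops) for g in subformulas])  # * commutes
```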

7 The Gentzen-style axiomatization of INT

A sequent is a pair G ⇒ K, where K is an INT-formula and G is a (possibly empty) finite sequence of INT-formulas. In what follows, E, F, K will be used as metavariables for formulas, and G, H as metavariables for sequences of formulas. We think of sequents as formulas, identifying ⇒ K with K, F ⇒ K with ◦|F → K (i.e. F ◦– K), and E1, ..., En ⇒ K (n ≥ 2) with ◦|E1 ∧ ... ∧ ◦|En → K.6

6 In order to fully remain within the language of INT, we could understand E1, ..., En ⇒ K as E1 ◦– (E2 ◦– ... ◦– (En ◦– K)...), which can be shown to be equivalent to our present understanding. We, however, prefer to read E1, ..., En ⇒ K as ◦|E1 ∧ ... ∧ ◦|En → K as it seems more convenient to work with.










This allows us to automatically extend concepts such as validity, uniform validity, etc. from formulas to sequents. A formula K is considered provable in INT iff the sequent ⇒ K is so.

Deductively, logic INT is given by the following 15 rules. This axiomatization is known (or can be easily shown) to be equivalent to other “standard” formulations, including Hilbert-style axiomatizations for INT and/or versions where a primitive symbol for negation is present while $ is absent, or where ⊓ and ⊔ are strictly binary, or where variables are the only terms of the language.7

Below, G, H are any (possibly empty) sequences of INT-formulas; n ≥ 2; 1 ≤ i ≤ n; x is any variable; E, F, K, F1, ..., Fn, K1, ..., Kn, F(x), K(x) are any INT-formulas; y is any variable not occurring (whether free or bound) in the conclusion of the rule; in Left ⊓ (resp. Right ⊔), t is any term free for x in F(x) (resp. in K(x)). P ↦ C means “from premise(s) P conclude C”. When there are multiple premises in P, they are separated with a semicolon.

Identity:   ↦  K ⇒ K
Domination:   ↦  $ ⇒ K
Exchange:  G, E, F, H ⇒ K  ↦  G, F, E, H ⇒ K
Weakening:  G ⇒ K  ↦  G, F ⇒ K
Contraction:  G, F, F ⇒ K  ↦  G, F ⇒ K
Right ◦–:  G, F ⇒ K  ↦  G ⇒ F ◦– K
Left ◦–:  G, F ⇒ K1; H ⇒ K2  ↦  G, H, K2 ◦– F ⇒ K1
Right ⊓:  G ⇒ K1; ...; G ⇒ Kn  ↦  G ⇒ K1 ⊓ ... ⊓ Kn
Left ⊓:  G, Fi ⇒ K  ↦  G, F1 ⊓ ... ⊓ Fn ⇒ K
Right ⊔:  G ⇒ Ki  ↦  G ⇒ K1 ⊔ ... ⊔ Kn
Left ⊔:  G, F1 ⇒ K; ...; G, Fn ⇒ K  ↦  G, F1 ⊔ ... ⊔ Fn ⇒ K
Right ⊓ (quantifier version):  G ⇒ K(y)  ↦  G ⇒ ⊓xK(x)
Left ⊓ (quantifier version):  G, F(t) ⇒ K  ↦  G, ⊓xF(x) ⇒ K
Right ⊔ (quantifier version):  G ⇒ K(t)  ↦  G ⇒ ⊔xK(x)
Left ⊔ (quantifier version):  G, F(y) ⇒ K  ↦  G, ⊔xF(x) ⇒ K

7 That we allow constants is only for technical convenience. This does not really yield a stronger language, as constants behave like free variables and can be thought of as such.
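The shape of these rules is easy to mechanize. Below is a toy Python illustration (not a full INT proof checker, and with all encodings invented here) of what an instance check for one rule, Right ⊓, could look like, with a sequent encoded as a pair (antecedent list, succedent) and formulas as nested tuples.

```python
def right_sqcap(premises, conclusion):
    """Right ⊓: from G => K1; ...; G => Kn conclude G => K1 ⊓ ... ⊓ Kn."""
    G, K = conclusion
    if K[0] != "sqcap" or len(premises) != len(K) - 1:
        return False
    # each premise must be G => Ki, in order, for the components of K
    return all(prem == (G, Ki) for prem, Ki in zip(premises, K[1:]))

# Example: from A => B and A => C conclude A => B ⊓ C.
A, B, C = ("atom", "A"), ("atom", "B"), ("atom", "C")
assert right_sqcap([([A], B), ([A], C)], ([A], ("sqcap", B, C)))
```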

Theorem 7.1 (Soundness): If INT ⊢ S, then ⊢⊢S (any sequent S). Furthermore (uniform-constructive soundness): there is an effective procedure that takes any INT-proof of any sequent S and constructs a uniform solution for S.

Proof. See Section 12. □

8 CL2-derived validity lemmas

In our proof of Theorem 7.1 we will need a number of lemmas concerning uniform validity of certain formulas. Some such validity proofs will be given directly in Section 10. But some proofs come “for free”, based on the soundness theorem for logic CL2 proven in [11]. CL2 is a propositional logic whose logical atoms are ⊤ and ⊥ (but not $) and whose connectives are ¬, ∧, ∨, →, ⊓, ⊔. It has two sorts of nonlogical atoms, called elementary and general. General atoms are nothing but 0-ary atoms of our extended language; elementary atoms, however, are something not present in the extended language. We refer to the formulas of the language of CL2 as CL2-formulas. In this paper, the CL2-formulas that do not contain elementary atoms — including ⊤ and ⊥, which count as such — are said to be general-base. Thus, every general-base formula is a formula of our extended language, and its validity or uniform validity means the same as in Section 6.8

8 These concepts extend to the full language of CL2 as well, with interpretations required to send elementary atoms to elementary games (i.e. predicates in the classical sense, understood in CL as games that have no nonempty legal runs).

Understanding F → G as an abbreviation for ¬F ∨ G, a positive occurrence in a CL2-formula is an occurrence that is in the scope of an even number of occurrences of ¬; otherwise the occurrence is negative. A surface occurrence is an occurrence that is not in the scope of ⊓ and/or ⊔. The elementarization of a CL2-formula F is the result of replacing in F every surface occurrence of every subformula of the form G1 ⊓ ... ⊓ Gn (resp. G1 ⊔ ... ⊔ Gn) by ⊤ (resp. ⊥), and every positive (resp. negative) surface occurrence of every general atom by ⊥ (resp. ⊤). A CL2-formula F is said to be stable iff its elementarization is a tautology of classical logic. With these conventions, CL2 is axiomatized by the following three rules:

(a) H⃗ ↦ F, where F is stable and H⃗ is the smallest set of formulas such that, whenever F has a positive (resp. negative) surface occurrence of a subformula G1 ⊓ ... ⊓ Gn (resp. G1 ⊔ ... ⊔ Gn), for each i ∈ {1, ..., n}, H⃗ contains the result of replacing that occurrence in F by Gi.

(b) H ↦ F, where H is the result of replacing in F a negative (resp. positive) surface occurrence of a subformula G1 ⊓ ... ⊓ Gn (resp. G1 ⊔ ... ⊔ Gn) by Gi for some i ∈ {1, ..., n}.

(c) H ↦ F, where H is the result of replacing in F two — one positive and one negative — surface occurrences of some general atom by a nonlogical elementary atom that does not occur in F.

In this section p, q, r, s, t, u, w (possibly with indices) will exclusively stand for nonlogical elementary atoms, and P, Q, R, S, T, U, W (possibly with indices) stand for general atoms. All of these atoms are implicitly assumed to be pairwise distinct in each context.

Convention 8.1 In Section 7 we started using the notation G for sequences of formulas. Later we agreed to identify sequences of formulas with ∧-conjunctions of those formulas. So, from now on, an underlined expression such as G will mean an arbitrary formula G1 ∧ ... ∧ Gn for some n ≥ 0. Such an expression will always occur in a bigger context such as G ∧ F or G → F; our convention is that, when n = 0, G ∧ F and G → F simply mean F. As we agreed that p, q, ... stand for elementary atoms and P, Q, ... for general atoms, underlined p, q, ... will correspondingly mean ∧-conjunctions of elementary atoms, and underlined P, Q, ... mean ∧-conjunctions of general atoms. We will also be underlining complex expressions such as F → G, ⊓xF(x) or ◦|F. Underlined F → G should be understood as (F1 → G1) ∧ ... ∧ (Fn → Gn), ⊓xF(x) as ⊓xF1(x) ∧ ... ∧ ⊓xFn(x) (note that only the Fi vary but not x), ◦|F as ◦|F1 ∧ ... ∧ ◦|Fn, ◦|◦|F as ◦|(◦|F1 ∧ ... ∧ ◦|Fn), ◦|F → F ∧ G as ◦|F1 ∧ ... ∧ ◦|Fn → F1 ∧ ... ∧ Fn ∧ G, etc.

The axiomatization of CL2 is rather unusual, but it is easy to get a syntactic feel for it once we do a couple of exercises.

Example 8.2 The following is a CL2-proof of (P → Q) ∧ (Q → T) → (P → T):
1. (p → q) ∧ (q → t) → (p → t) (from {} by Rule (a)).
2. (P → q) ∧ (q → t) → (P → t) (from 1 by Rule (c)).
3. (P → Q) ∧ (Q → t) → (P → t) (from 2 by Rule (c)).
4. (P → Q) ∧ (Q → T) → (P → T) (from 3 by Rule (c)).

Example 8.3 Let n ≥ 2, and let m be the length (number of conjuncts) of both R and r.
a) For i ∈ {1, ..., n}, the formula of Lemma 8.5(j) is provable in CL2. It follows from (R → Si) → (R → Si) by Rule (b); the latter follows from (R → si) → (R → si) by Rule (c); the latter, in turn, can be derived from (r → si) → (r → si) by applying Rule (c) m times. Finally, (r → si) → (r → si) is its own elementarization and is a classical tautology, so it follows from the empty set of premises by Rule (a).
b) The formula of Lemma 8.5(h) is also provable in CL2. It is derivable by Rule (a) from the set of n premises, each premise being (R → S1) ∧ ... ∧ (R → Sn) → (R → Si) for some i ∈ {1, ..., n}. The latter is derivable by Rule (c) from (R → S1) ∧ ... ∧ (R → si) ∧ ... ∧ (R → Sn) → (R → si). The latter, in turn, can be derived from (R → S1) ∧ ... ∧ (r → si) ∧ ... ∧ (R → Sn) → (r → si) by applying Rule (c) m times. Finally, the latter follows from the empty set of premises by Rule (a).
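Elementarization and stability are concrete enough to compute. The sketch below is an illustration under an invented tuple encoding of CL2-formulas, with → assumed to have already been unfolded as ¬F ∨ G; it elementarizes a formula and checks stability by brute-force truth tables.

```python
# Formulas: ("top",), ("bot",), ("not", F), ("and", F, ...), ("or", F, ...),
# ("sqcap", F, ...), ("sqcup", F, ...), ("gen", name), ("elem", name).
import itertools

def elementarize(f, positive=True):
    tag = f[0]
    if tag == "sqcap": return ("top",)       # surface choice conjunction -> ⊤
    if tag == "sqcup": return ("bot",)       # surface choice disjunction -> ⊥
    if tag == "gen":   return ("bot",) if positive else ("top",)
    if tag == "not":   return ("not", elementarize(f[1], not positive))
    if tag in ("and", "or"):
        return (tag,) + tuple(elementarize(g, positive) for g in f[1:])
    return f                                 # ⊤, ⊥ and elementary atoms stay

def atoms(f):
    if f[0] == "elem":
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:] if isinstance(g, tuple)), set())

def evaluate(f, v):
    tag = f[0]
    if tag == "top":  return True
    if tag == "bot":  return False
    if tag == "elem": return v[f[1]]
    if tag == "not":  return not evaluate(f[1], v)
    if tag == "and":  return all(evaluate(g, v) for g in f[1:])
    return any(evaluate(g, v) for g in f[1:])    # "or"

def stable(f):
    """F is stable iff its elementarization is a classical tautology."""
    el = elementarize(f)
    names = sorted(atoms(el))
    return all(evaluate(el, dict(zip(names, bits)))
               for bits in itertools.product([True, False], repeat=len(names)))
```

For instance, with a general atom P, the formula P ∨ ¬P elementarizes to ⊥ ∨ ¬⊤ and is therefore not stable, which is why proving it in CL2 needs Rule (c) first.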


Obviously CL2 is decidable. This logic has been proven sound and complete in [11]. We only need the soundness part of that theorem, restricted to general-base formulas. It sounds as follows:

Lemma 8.4 Any general-base CL2-provable formula is valid. Furthermore, there is an effective procedure that takes any CL2-proof of any such formula F and constructs a uniform solution for F.

A substitution is a function f that sends every general atom P of the language of CL2 to a formula f(P) of the extended language. This mapping extends to all general-base CL2-formulas by stipulating that f commutes with all operators: f(G1 → G2) = f(G1) → f(G2), f(G1 ⊓ ... ⊓ Gk) = f(G1) ⊓ ... ⊓ f(Gk), etc. We say that a formula G is a substitutional instance of a general-base CL2-formula F iff G = f(F) for some substitution f. Thus, “G is a substitutional instance of F” means that G has the same form as F.

In the following lemma, we assume n ≥ 2 (clauses (h), (i), (j)), and 1 ≤ i ≤ n (clause (j)). Notice that, with the exception of clause (g), the expressions given below are schemata of formulas rather than formulas, for the lengths of their underlined expressions — as well as i and n — may vary.

Lemma 8.5 All substitutional instances of all formulas given by the following schemata are uniformly valid. Furthermore, there is an effective procedure that takes any particular formula matching a given scheme and constructs an EPM that is a uniform solution for all substitutional instances of that formula.
a) (R ∧ P ∧ Q ∧ S → T) → (R ∧ Q ∧ P ∧ S → T);
b) (R → T) → (R ∧ P → T);
c) (R → S) → (W ∧ R ∧ U → W ∧ S ∧ U);
d) (R ∧ P → Q) → (R → (P → Q));
e) (P → (Q → T)) ∧ (R → Q) → (P → (R → T));
f) (P → (R → Q)) ∧ (S ∧ Q → T) → (S ∧ R ∧ P → T);
g) (P → Q) ∧ (Q → T) → (P → T);
h) (R → S1) ∧ ... ∧ (R → Sn) → (R → S1 ⊓ ... ⊓ Sn);
i) (R ∧ S1 → T) ∧ ... ∧ (R ∧ Sn → T) → (R ∧ (S1 ⊔ ... ⊔ Sn) → T);
j) (R → Si) → (R → S1 ⊔ ... ⊔ Sn).

Proof. In order to prove this lemma, it would be sufficient to show that all formulas given by the above schemata are provable in CL2. Indeed, if we succeed in doing so, then an effective procedure whose existence is claimed in the lemma could be designed to work as follows: First, the procedure finds a CL2-proof of a given formula F. Then, based on that proof and using the procedure whose existence is stated in Lemma 8.4, it finds a uniform solution E for that formula. It is not hard to see that the same E will automatically be a uniform solution for every substitutional instance of F as well. The CL2-provability of the formulas given by clauses (g), (h) and (j) has been verified in Examples 8.2 and 8.3. A similar verification for the other clauses is left as an easy syntactic exercise for the reader. □

9 Closure lemmas

In this section we let n range over natural numbers (including 0), x over any variables, F, E, G (possibly with subscripts) over any formulas, and E, C, D (possibly with subscripts) over any EPMs. Unless otherwise specified, these metavariables are assumed to be universally quantified in a context. The expression {EPMs} stands for the set of all EPMs.

Lemma 9.1 If ⊢⊢⊢F, then ⊢⊢⊢◦|F. Furthermore, there is an effective function h: {EPMs} → {EPMs} such that, for any E and F, if E⊢⊢⊢F, then h(E)⊢⊢⊢◦|F.

Proof. According to Proposition 21.2 of [7], there is an effective function h: {EPMs} → {EPMs} such that, for any EPM E and any static game A, if E |= A, then h(E) |= ◦|A. We now claim that if E⊢⊢⊢F, then h(E)⊢⊢⊢◦|F. Indeed, assume E⊢⊢⊢F. Consider any ◦|F-admissible interpretation ∗. Of course, the same interpretation is also F-admissible. Hence, E⊢⊢⊢F implies E |= F∗. But then, by the known behavior of h, we have h(E) |= ◦|F∗. Since ∗ was arbitrary, we conclude that h(E)⊢⊢⊢◦|F. □


Lemma 9.2 If ⊢⊢⊢F, then ⊢⊢⊢⊓xF. Furthermore, there is an effective function h: {EPMs} → {EPMs} such that, for any E, x and F, if E⊢⊢⊢F, then h(E)⊢⊢⊢⊓xF.

Proof. Similar to the previous lemma, only based on Proposition 21.1 of [7] instead of 21.2. □

Lemma 9.3 If ⊢⊢⊢F and ⊢⊢⊢F → E, then ⊢⊢⊢E. Furthermore, there is an effective function h: {EPMs} × {EPMs} → {EPMs} such that, for any EPMs E, C and any formulas F, E, if E⊢⊢⊢F and C⊢⊢⊢F → E, then h(E, C)⊢⊢⊢E.

Proof. According to Proposition 21.3 of [7], there is an effective function g that takes any HPM H and EPM E and returns an EPM C such that, for any static games A, B and any valuation e, if H |=e A and E |=e A → B, then C |=e B. Theorem 17.2 of [7], which establishes equivalence between EPMs and HPMs in a constructive sense, allows us to assume that H is an EPM rather than an HPM here. More precisely, we can routinely convert g into an (again effective) function h: {EPMs} × {EPMs} → {EPMs} such that:

    for any static games A, B, valuation e and EPMs E and C, if E |=e A and C |=e A → B, then h(E, C) |=e B.   (3)

We claim that the above function h behaves as our lemma states. Assume E⊢⊢⊢F and C⊢⊢⊢F → E, and consider an arbitrary valuation e and an arbitrary E-admissible interpretation ∗. Our goal is to show that h(E, C) |=e E∗, which obviously means the same as

    h(E, C) |=e e[E∗].   (4)

We define the new interpretation † by stipulating that, for every n-ary letter P with P∗ = P∗(x1, . . . , xn), P† is the game P†(x1, . . . , xn) such that, for any tuple c1, . . . , cn of constants, P†(c1, . . . , cn) = e[P∗(c1, . . . , cn)]. Unlike P∗(x1, . . . , xn), which may depend on some “hidden” variables (those that are not among x1, . . . , xn), P†(x1, . . . , xn) obviously does not depend on any variables other than x1, . . . , xn. This makes † admissible for any formula, including F and F → E. Then our assumptions E⊢⊢⊢F and C⊢⊢⊢F → E imply E |=e F† and C |=e F† → E†. Consequently, by (3), h(E, C) |=e E†, i.e. h(E, C) |=e e[E†]. Now, with some thought, we can see that e[E†] = e[E∗]. Hence (4) is true. □

Lemma 9.4 (Modus ponens) If ⊢⊢⊢F1, . . . , ⊢⊢⊢Fn and ⊢⊢⊢F1 ∧ . . . ∧ Fn → E, then ⊢⊢⊢E. Furthermore, there is an effective function h: {EPMs}^(n+1) → {EPMs} such that, for any EPMs E1, . . . , En, C and any formulas F1, . . . , Fn, E, if E1⊢⊢⊢F1, . . . , En⊢⊢⊢Fn and C⊢⊢⊢F1 ∧ . . . ∧ Fn → E, then h(E1, . . . , En, C)⊢⊢⊢E. Such a function, in turn, can be effectively constructed for each particular n.

Proof. In case n = 0, h is simply the identity function h(C) = C. In case n = 1, h is the function whose existence is stated in Lemma 9.3. Below we construct h for the case n = 2; it should be clear how to generalize that construction to any greater n. Assume E1⊢⊢⊢F1, E2⊢⊢⊢F2 and C⊢⊢⊢F1 ∧ F2 → E. By Lemma 8.5(d), (F1 ∧ F2 → E) → (F1 → (F2 → E)) has a uniform solution. Lemma 9.3 allows us to combine that solution with C and find a uniform solution D1 for F1 → (F2 → E). Now, applying Lemma 9.3 to E1 and D1, we find a uniform solution D2 for F2 → E. Finally, applying the same lemma to E2 and D2, we find a uniform solution D for E. Note that D does not depend on F1, F2, E, and that we constructed D in an effective way from the arbitrary E1, E2 and C. Formalizing this construction yields the function h whose existence is claimed by the lemma. □

Lemma 9.5 (Transitivity) If ⊢⊢⊢F → E and ⊢⊢⊢E → G, then ⊢⊢⊢F → G. Furthermore, there is an effective function h: {EPMs} × {EPMs} → {EPMs} such that, for any E1, E2, F, E and G, if E1⊢⊢⊢F → E and E2⊢⊢⊢E → G, then h(E1, E2)⊢⊢⊢F → G.

Proof. Assume E1⊢⊢⊢F → E and E2⊢⊢⊢E → G. By Lemma 8.5(g), we also have C⊢⊢⊢(F → E) ∧ (E → G) → (F → G) for some (fixed) C. Lemma 9.4 allows us to combine the three uniform solutions and construct a uniform solution D for F → G. □
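The construction in the proof of Lemma 9.4 is easily mechanized. In the following sketch (ours), mp stands for the binary combinator of Lemma 9.3 and curry for the Lemma 8.5(d)-based conversion; folding them yields the n-ary combinator h. Both names are hypothetical stand-ins, assumed given.

    def mp_n(premise_solutions, C, mp, curry):
        """Given EPMs E1,...,En for F1,...,Fn and C for F1 & ... & Fn -> E,
        return a uniform solution for E (cf. the proof of Lemma 9.4)."""
        D = C
        for _ in range(len(premise_solutions) - 1):
            D = curry(D)              # F1 & ... & Fk -> X  becomes  F1 & ... & F(k-1) -> (Fk -> X)
        for E_i in premise_solutions:
            D = mp(E_i, D)            # discharge the leading antecedent Fi
        return D                      # for n = 0 this is just C, as in the proof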


10 More validity lemmas

As pointed out in Remark 16.4 of [7], when trying to show that a given EPM wins a given game, it is always safe to assume that the runs that the machine generates are never ⊥-illegal, i.e. that the environment never makes an illegal move, for if it does, the machine automatically wins. This assumption, which we call the clean environment assumption, will always be implicitly present in our winnability proofs.

We will often employ a uniform solution for P → P called the copy-cat strategy (CCS). This strategy, which we already saw in Section 2, consists in mimicking, in the antecedent, the moves made by the environment in the consequent, and vice versa. More formally, the algorithm that CCS follows is an infinite loop, on every iteration of which CCS keeps granting permission until the environment makes a move 1.α (resp. 2.α), to which the machine responds by the move 2.α (resp. 1.α). As shown in the proof of Proposition 22.1 of [7], this strategy guarantees success in every game of the form A ∨ ¬A and hence A → A. An important detail is that CCS never looks at the past history of the game, i.e. the movement of its scanning head on the run tape is exclusively left-to-right. This guarantees that, even if the original game was something else and it only evolved to A → A later as a result of making a series of moves, switching to CCS after the game has been brought down to A → A guarantees success no matter what happened in the past.

Throughout this section, F, G, E, K (possibly with indices and attached tuples of variables) range over formulas, x and y over variables, t over terms, n over nonnegative integers, w over bit strings, and α, γ over moves. These (meta)variables are assumed to be universally quantified in a context unless otherwise specified. In accordance with our earlier convention, ǫ means the empty string, so that, say, ‘1.ǫ.α’ is the same as ‘1..α’.

Lemma 10.1 ⊢⊢⊢◦|F → F. Furthermore, there is an EPM E such that, for any F, E⊢⊢⊢◦|F → F.

Proof. The idea of a uniform solution E for ◦|F → F is very simple: just act as CCS, never making any replicative moves in the antecedent and pretending that the latter is F rather than (the stronger) ◦|F. The following formal description of the interactive algorithm that E follows is virtually the same as that of CCS, with the only difference that the move prefix ‘1.’ is replaced by ‘1.ǫ.’ everywhere.

Procedure LOOP: Keep granting permission until the environment makes a move 1.ǫ.α or 2.α; in the former case make the move 2.α, and in the latter case make the move 1.ǫ.α; then repeat LOOP.

Fix an arbitrary valuation e, interpretation ∗ and e-computation branch B of E. Let Θ be the run spelled by B. From the description of LOOP it is clear that permission will be granted infinitely many times in B, so this branch is fair. Hence, in order to show that E wins the game, it would suffice to show that Wn_e^{◦|F∗→F∗}⟨Θ⟩ = ⊤.

Let Θi denote the initial segment of Θ consisting of the (lab)moves made by the players by the beginning of the ith iteration of LOOP in B (if such an iteration exists). By induction on i, based on the clean environment assumption and applying a routine analysis of the behavior of LOOP and the definitions of the relevant game operations, one can easily find that:

a) Θi ∈ Lr_e^{◦|F∗→F∗};
b) Θi^{1.ǫ.} = ¬Θi^{2.};
c) all of the moves in Θi^{1.} have the prefix ‘ǫ.’.

If LOOP is iterated infinitely many times, then the above obviously extends from Θi to Θ, because every initial segment of Θ is an initial segment of some Θi. And if LOOP is iterated only a finite number m of times, then Θ = Θm. This is so because the environment cannot make a move 1.ǫ.α or 2.α during the mth iteration (otherwise there would be a next iteration), and any other move would violate the clean environment assumption; and, as long as the environment does not move during a given iteration, neither does the machine. Thus, no matter whether LOOP is iterated a finite or infinite number of times, we have:

    a) Θ ∈ Lr_e^{◦|F∗→F∗};  b) Θ^{1.ǫ.} = ¬Θ^{2.};  c) all of the moves in Θ^{1.} have the prefix ‘ǫ.’.   (5)

Since Θ ∈ Lr_e^{◦|F∗→F∗}, in order to show that Wn_e^{◦|F∗→F∗}⟨Θ⟩ = ⊤, by the definition of →, it would suffice to show that either Wn_e^{¬◦|F∗}⟨Θ^{1.}⟩ = ⊤ or Wn_e^{F∗}⟨Θ^{2.}⟩ = ⊤. So, assume Wn_e^{F∗}⟨Θ^{2.}⟩ ≠ ⊤, i.e. Wn_e^{F∗}⟨Θ^{2.}⟩ = ⊥, i.e. Wn_e^{¬F∗}⟨¬Θ^{2.}⟩ = ⊤. Then, by clause (b) of (5), Wn_e^{¬F∗}⟨Θ^{1.ǫ.}⟩ = ⊤, i.e. Wn_e^{F∗}⟨¬Θ^{1.ǫ.}⟩ = ⊥, i.e. Wn_e^{F∗}⟨(¬Θ^{1.})^{ǫ.}⟩ = ⊥. Pick any infinite bit string w. In view of clause (c) of (5), we obviously have (¬Θ^{1.})^{ǫ.} = (¬Θ^{1.})^{w}. Hence Wn_e^{F∗}⟨(¬Θ^{1.})^{w}⟩ = ⊥. But this, by the definition of ◦|, implies Wn_e^{◦|F∗}⟨¬Θ^{1.}⟩ = ⊥. The latter, in turn, can be rewritten as the desired Wn_e^{¬◦|F∗}⟨Θ^{1.}⟩ = ⊤. Thus, we have shown that E wins ◦|F∗ → F∗. Since ∗ was arbitrary and the work of E did not depend on it, we conclude that E⊢⊢⊢◦|F → F. □

In the subsequent constructions found in this section, ∗ will always mean an arbitrary but fixed interpretation admissible for the formula whose uniform validity we are trying to prove. Next, e will always mean an arbitrary but fixed valuation — specifically, the valuation spelled on the valuation tape of the machine in question. For readability, we will usually omit the e parameter when it is irrelevant. Also, having already seen one example, in the remaining uniform-validity proofs we will typically limit ourselves to just constructing interactive algorithms, leaving the (usually routine) verification of their correctness to the reader. An exception will be the proof of Lemma 10.12 where, due to the special complexity of the case, correctness verification will be done even more rigorously than in the proof of Lemma 10.1.

Lemma 10.2 ⊢⊢⊢◦|(F → G) → (◦|F → ◦|G). Moreover, there is an EPM E such that, for every F and G, E⊢⊢⊢◦|(F → G) → (◦|F → ◦|G).

Proof. A relaxed description of a uniform solution E for ◦|(F → G) → (◦|F → ◦|G) is as follows. In ◦|(F∗ → G∗) and ◦|F∗ the machine makes exactly the same replicative moves (moves of the form w:) as the environment makes in ◦|G∗. This ensures that the tree-structures of the three ◦|-components of the game are identical, and now all the machine needs for a success is to win the game (F∗ → G∗) → (F∗ → G∗) within each branch of those trees. This can be easily achieved by applying copy-cat methods to the two occurrences of F and the two occurrences of G. In precise terms, the strategy that E follows is described by the following interactive algorithm.

Procedure LOOP: Keep granting permission until the adversary makes a move γ. Then:
If γ = 2.2.w:, then make the moves 1.w: and 2.1.w:, and repeat LOOP;
If γ = 2.2.w.α (resp. γ = 1.w.2.α), then make the move 1.w.2.α (resp. 2.2.w.α), and repeat LOOP;
If γ = 2.1.w.α (resp. γ = 1.w.1.α), then make the move 1.w.1.α (resp. 2.1.w.α), and repeat LOOP. □

Lemma 10.3 ⊢⊢⊢◦|F1 ∧ . . . ∧ ◦|Fn → ◦|(F1 ∧ . . . ∧ Fn). Furthermore, there is an effective procedure that takes any particular value of n and constructs an EPM E such that, for any F1, . . . , Fn, E⊢⊢⊢◦|F1 ∧ . . . ∧ ◦|Fn → ◦|(F1 ∧ . . . ∧ Fn).

Proof. The idea of a uniform solution here is rather similar to the one in the proof of Lemma 10.2. Here is the algorithm:

Procedure LOOP: Keep granting permission until the adversary makes a move γ. Then:
If γ = 2.w:, then make the n moves 1.1.w:, . . . , 1.n.w:, and repeat LOOP;
If γ = 2.w.i.α (resp. γ = 1.i.w.α) where 1 ≤ i ≤ n, then make the move 1.i.w.α (resp. 2.w.i.α), and repeat LOOP. □

Lemma 10.4 ⊢⊢⊢◦|F → ◦|F ∧ ◦|F. Furthermore, there is an EPM E such that, for any F, E⊢⊢⊢◦|F → ◦|F ∧ ◦|F.

Proof. The idea of a winning strategy here is to first replicate the antecedent, turning it into something “essentially the same”⁹ as ◦|F∗ ∧ ◦|F∗, and then switch to a strategy that is “essentially the same as” the ordinary copy-cat strategy. Precisely, here is how E works: it makes the move 1.ǫ: (replicating the antecedent), after which it follows the following algorithm:

⁹ Using the notation ◦ introduced in Section 13 of [7], in precise terms this “something” is ◦|(F∗ ◦ F∗).

Procedure LOOP: Keep granting permission until the adversary makes a move γ. Then:

If γ = 1.0α (resp. γ = 2.1.α), then make the move 2.1.α (resp. 1.0α), and repeat LOOP;
If γ = 1.1α (resp. γ = 2.2.α), then make the move 2.2.α (resp. 1.1α), and repeat LOOP;
If γ = 1.ǫ.α, then make the moves 2.1.ǫ.α and 2.2.ǫ.α, and repeat LOOP. □

Remember from Section 4 that, when t is a constant, e(t) = t.

Lemma 10.5 For any INT-formula K, ⊢⊢⊢◦|$ → K. Furthermore, there is an effective procedure that takes any INT-formula K and constructs a uniform solution for ◦|$ → K.

Proof. We construct an EPM E and verify that it is a uniform solution for ◦|$ → K; both the construction and the verification proceed by induction on the complexity of K. The goal in each case is to show that E generates a ⊤-won run of e[(◦|$ → K)∗] = ◦|e[$∗] → e[K∗], which, according to our convention, we may write with “e” omitted.

Case 1: K = $. This case is taken care of by Lemma 10.1.

Case 2: K is a k-ary nonlogical atom P(t1, . . . , tk). Let c1, . . . , ck be the constants e(t1), . . . , e(tk), respectively. Evidently in this case e[K∗] = e[P∗(c1, . . . , ck)], so the game for which E needs to generate a winning run is ◦|$∗ → P∗(c1, . . . , ck). Assume (P(c1, . . . , ck))∗ is conjunct #m of (the infinite ⊓-conjunction) $∗. We define E to be the EPM that acts as follows. At the beginning, if necessary (i.e. unless all ti are constants), it reads the valuation tape to find c1, . . . , ck. Then, using this information, it finds the above number m and makes the move ‘1.ǫ.m’, which can be seen¹⁰ to bring the game down to ◦|P∗(c1, . . . , ck) → P∗(c1, . . . , ck). After this move, E switches to the strategy whose existence is stated in Lemma 10.1.

Case 3: K = ◦|E → F. By Lemma 8.5(b), there is a uniform solution for (◦|$ → F) → (◦|$ ∧ ◦|E → F). Lemmas 8.5(d) and 9.5 allow us to convert the latter into a uniform solution D for (◦|$ → F) → (◦|$ → (◦|E → F)). By the induction hypothesis, there is also a uniform solution C for ◦|$ → F. Applying Lemma 9.4 to C and D, we find a uniform solution E for ◦|$ → (◦|E → F).

Case 4: K = E1 ⊔ . . . ⊔ En. By the induction hypothesis, we know how to construct an EPM C1 with C1⊢⊢⊢◦|$ → E1. Now we define E to be the EPM that first makes the move 2.1, and then plays the rest of the game as C1 would play. E can be seen to be successful because its initial move 2.1 brings (◦|$ → K)∗ down to (◦|$ → E1)∗.

Case 5: K = E1 ⊓ . . . ⊓ En. By the induction hypothesis, for each i with 1 ≤ i ≤ n we have an EPM Ci with Ci⊢⊢⊢◦|$ → Ei. We define E to be the EPM that acts as follows. At the beginning, E keeps granting permission until the adversary makes a move. The clean environment assumption guarantees that this move should be 2.i for some 1 ≤ i ≤ n. It brings (◦|$ → E1 ⊓ . . . ⊓ En)∗ down to (◦|$ → Ei)∗. If and after such a move 2.i is made, E continues the rest of the play as Ci.

Case 6: K = ⊔xE(x). By the induction hypothesis, there is an EPM C1 with C1⊢⊢⊢◦|$ → E(1). Now we define E to be the EPM that first makes the move 2.1, and then plays the rest of the game as C1. E can be seen to be successful because its initial move 2.1 brings (◦|$ → ⊔xE(x))∗ down to (◦|$ → E(1))∗.

Case 7: K = ⊓xE(x). By the induction hypothesis, for each constant c there is (and can be effectively found) an EPM Cc with Cc⊢⊢⊢◦|$ → E(c). Now we define E to be the EPM that acts as follows. At the beginning, E keeps granting permission until the adversary makes a move. By the clean environment assumption, such a move should be 2.c for some constant c. This move can be seen to bring (◦|$ → ⊓xE(x))∗ down to (◦|$ → E(c))∗. If and after the above move 2.c is made, E plays the rest of the game as Cc. □

¹⁰ See Proposition 13.8 of [7].

Lemma 10.6 Assume n ≥ 2, 1 ≤ i ≤ n, and t is a term free for x in G(x). Then the following uniform validities hold. Furthermore, in each case there is an effective procedure that takes any particular values of n, i, t and constructs an EPM which is a uniform solution for the corresponding formula, no matter what the values of F1, . . . , Fn and/or G(x) (as long as t is free for x in G(x)) are.

a) ⊢⊢⊢◦|(F1 ⊓ . . . ⊓ Fn) → ◦|Fi;
b) ⊢⊢⊢◦|⊓xG(x) → ◦|G(t);
c) ⊢⊢⊢◦|(F1 ⊔ . . . ⊔ Fn) → ◦|F1 ⊔ . . . ⊔ ◦|Fn;
d) ⊢⊢⊢◦|⊔xG(x) → ⊔x◦|G(x).

Proof. Below come winning strategies for each case.

Clause (a): Make the move ‘1.ǫ.i’. This brings the game down to ◦|Fi∗ → ◦|Fi∗. Then switch to CCS.

Clause (b): Let c = e(t). Read c from the valuation tape if necessary, i.e. if t is a variable (otherwise, simply c = t). Then make the move ‘1.ǫ.c’. This brings the game down to ◦|G∗(c) → ◦|G∗(c). Now switch to CCS.

Clause (c): Keep granting permission until the adversary makes a move ‘1.ǫ.j’ (1 ≤ j ≤ n), bringing the game down to ◦|Fj∗ → ◦|F1∗ ⊔ . . . ⊔ ◦|Fn∗. If and after such a move is made (and if not, a win is automatically guaranteed), make the move ‘2.j’, which brings the game down to ◦|Fj∗ → ◦|Fj∗. Finally, switch to CCS.

Clause (d): Keep granting permission until the adversary makes the move ‘1.ǫ.c’ for some constant c. This brings the game down to ◦|G∗(c) → ⊔x◦|G∗(x). Now make the move ‘2.c’, which brings the game down to ◦|G∗(c) → ◦|G∗(c). Finally, switch to CCS. □

Lemma 10.7 ⊢⊢⊢⊓x(F(x) → G(x)) → (⊓xF(x) → ⊓xG(x)). Furthermore, there is an EPM E such that, for any F(x) and G(x), E⊢⊢⊢⊓x(F(x) → G(x)) → (⊓xF(x) → ⊓xG(x)).

Proof. Strategy: Wait till the environment makes the move ‘2.2.c’ for some constant c. This brings the ⊓xG∗(x) component down to G∗(c) and hence the entire game to ⊓x(F∗(x) → G∗(x)) → (⊓xF∗(x) → G∗(c)). Then make the same move c in the antecedent and in ⊓xF∗(x), i.e. make the moves ‘1.c’ and ‘2.1.c’. The game will be brought down to (F∗(c) → G∗(c)) → (F∗(c) → G∗(c)). Finally, switch to CCS. □
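All of the interactive algorithms above, and most of those below, are instances of a single template: keep granting permission until the environment moves, then echo the move back with a translated prefix. The following sketch is ours rather than the paper's; next_env_move and emit are hypothetical primitives standing for an EPM's waiting and move-making operations, and ccs_translate instantiates the template to the plain CCS.

    def copycat_loop(next_env_move, emit, translate):
        """Keep granting permission until the environment moves; echo translated copies."""
        while True:
            gamma = next_env_move()          # waiting amounts to repeatedly granting permission
            for move in translate(gamma):    # a translator may emit zero or more responses
                emit(move)

    def ccs_translate(gamma):
        """The plain copy-cat strategy for A -> A: swap the prefixes '1.' and '2.'."""
        if gamma.startswith('1.'):
            return ['2.' + gamma[2:]]
        if gamma.startswith('2.'):
            return ['1.' + gamma[2:]]
        return []  # under the clean environment assumption this case never arises

The strategies of Lemmas 10.2-10.4, as well as the LOOP of Section 11 below, differ from CCS only in their translate components.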

Lemma 10.8 ⊢⊢⊢⊓x(F1(x) ∧ . . . ∧ Fn(x) ∧ E(x) → G(x)) → (⊓xF1(x) ∧ . . . ∧ ⊓xFn(x) ∧ ⊔xE(x) → ⊔xG(x)). Furthermore, there is an effective procedure that takes any particular value of n and constructs an EPM E such that, for any F1(x), . . . , Fn(x), E(x), G(x), E⊢⊢⊢⊓x(F1(x) ∧ . . . ∧ Fn(x) ∧ E(x) → G(x)) → (⊓xF1(x) ∧ . . . ∧ ⊓xFn(x) ∧ ⊔xE(x) → ⊔xG(x)).

Proof. Strategy for n: Wait till the environment makes a move c in the ⊔xE∗(x) component. Then make the same move c in ⊔xG∗(x), in ⊓x(F1∗(x) ∧ . . . ∧ Fn∗(x) ∧ E∗(x) → G∗(x)) and in each of the ⊓xFi∗(x) components. After this series of moves the game will have evolved to (F1∗(c) ∧ . . . ∧ Fn∗(c) ∧ E∗(c) → G∗(c)) → (F1∗(c) ∧ . . . ∧ Fn∗(c) ∧ E∗(c) → G∗(c)). Now switch to CCS. □
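Under the same hypothetical harness as before, this strategy might be rendered as follows. Note that the component addressing assumed in the comments (the ⊔xE(x) conjunct living at prefix 2.1.(n+1)) is our reading of the move-prefixing conventions, not something spelled out in the paper.

    def lemma_10_8_strategy(n, next_env_move, emit, ccs):
        while True:
            g = next_env_move()
            prefix = f'2.1.{n + 1}.'          # assumed address of the ⊔xE(x) component
            if g.startswith(prefix):
                c = g[len(prefix):]           # the constant chosen by the environment
                emit(f'2.2.{c}')              # mirror the choice in ⊔xG(x)
                emit(f'1.{c}')                # ... in the ⊓x(...) antecedent
                for i in range(1, n + 1):     # ... and in each ⊓xF_i(x)
                    emit(f'2.1.{i}.{c}')
                ccs(next_env_move, emit)      # the game is now of the form A -> A
                return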

Lemma 10.9 Assume t is free for x in F(x). Then ⊢⊢⊢F(t) → ⊔xF(x). Furthermore, there is an effective function that takes any t and constructs an EPM E such that, for any F(x), whenever t is free for x in F(x), E⊢⊢⊢F(t) → ⊔xF(x).

Proof. Strategy: Let c = e(t). Read c from the valuation tape if necessary. Then make the move ‘2.c’, bringing the game down to F∗(c) → F∗(c). Then switch to CCS. □

Lemma 10.10 Assume F does not contain x. Then ⊢⊢⊢F → ⊓xF. Furthermore, there is an EPM E such that, for any F and x, as long as F does not contain x, E⊢⊢⊢F → ⊓xF.

Proof. In this case we prefer to explicitly write the usually suppressed parameter e. Consider an arbitrary F not containing x, and an arbitrary interpretation ∗ admissible for F → ⊓xF. The formula F → ⊓xF contains x, yet F does not. From the definition of admissibility and with a little thought we can see that F∗ does not depend on x. In turn, this means — as can be seen with some thought — that the move c by the environment (whatever constant c) in e[⊓xF∗] brings this game down to e[F∗]. With this observation in mind, the following strategy can be seen to be successful: Wait till the environment makes the move ‘2.c’ for some constant c. Then read the sequence ‘1.α1’, . . . , ‘1.αn’ of (legal) moves possibly made by the environment before it made the move ‘2.c’, and make the n moves ‘2.α1’, . . . , ‘2.αn’. It can be seen that now the original game e[F∗] → e[⊓xF∗] will have been brought down to ⟨Φ⟩e[F∗] → ⟨Φ⟩e[F∗], where Φ = ⟨⊤α1, . . . , ⊤αn⟩. So, switching to CCS at this point guarantees success. □

Lemma 10.11 Assume F(x) does not contain y. Then ⊢⊢⊢⊓yF(y) → ⊓xF(x) and ⊢⊢⊢⊔xF(x) → ⊔yF(y). In fact, CCS⊢⊢⊢⊓yF(y) → ⊓xF(x) and CCS⊢⊢⊢⊔xF(x) → ⊔yF(y).

Proof. Assuming that F(x) does not contain y and analyzing the relevant definitions, it is not hard to see that, for any interpretation ∗ admissible for ⊓yF(y) → ⊓xF(x) and/or ⊔xF(x) → ⊔yF(y), we simply have (⊓yF(y))∗ = (⊓xF(x))∗ and (⊔yF(y))∗ = (⊔xF(x))∗. So, in both cases we deal with a game of the form A → A, for which the ordinary copy-cat strategy is successful. □

Our proof of the following lemma is fairly long, for which reason it is given separately in Section 11:

Lemma 10.12 ⊢⊢⊢◦|F → ◦|◦|F. Furthermore, there is an EPM E such that, for any F, E⊢⊢⊢◦|F → ◦|◦|F.

11 Proof of Lemma 10.12

Roughly speaking, the uniform solution E for ◦|F → ◦|◦|F that we are going to construct essentially uses a copy-cat strategy between the antecedent and the consequent. However, this strategy cannot be applied directly in the form of our kind old friend CCS. The problem is that the underlying tree-structure (see Remark 3.9) of the multiset of (legal) runs of F∗ that is being generated in the antecedent should be a simple tree T, while in the consequent it is in fact what can be called a tree T′′ of trees. The trick that E uses is that it sees each edge of T in one of two — blue or yellow — colors. This allows E to associate with each branch y of a branch x of T′′ a single branch z of T, and vice versa. Specifically, with x, y, z being (encoded by) bit strings, z is such that the subsequence of its blue-colored bits (= edges) coincides with x, and the subsequence of its yellow-colored bits coincides with y. By appropriately “translating” and “copying” in the antecedent the replicative moves made by the environment in the consequent, such an isomorphism between the branches of T and the branches of branches of T′′ can be successfully maintained throughout the play. With this one-to-one correspondence in mind, every time the environment makes a (nonreplicative) move in the position(s) of F∗ represented by a leaf or a set of leaves of T, the machine repeats the same move in the position(s) represented by the corresponding leaf-of-leaf or leaves-of-leaves of T′′, and vice versa. The effect achieved by this strategy is that the multisets of all runs of F∗ in the antecedent and in the consequent of ◦|F∗ → ◦|◦|F∗ are identical, which, of course, guarantees a win for E.

Let us now define things more precisely. A colored bit b is a pair (c, d), where c, called the content of b, is in {0, 1}, and d, called the color of b, is in {blue, yellow}. We will be using the notation c_b (“blue c”) for the colored bit whose content is c and color is blue, and c_y (“yellow c”) for the bit whose content is c and color is yellow. A colored bit string is a finite or infinite sequence of colored bits. Consider a colored bit string v. The content of v is the result of “ignoring the colors” in v, i.e. replacing every bit of v by the content of that bit. The blue content of v is the content of the string that results from deleting in v all but blue bits. Yellow content is defined similarly. We use v_c, v_b and v_y to denote the content, blue content and yellow content of v, respectively. Example: if v = 1_b 0_b 0_y 0_y 1_y, we have v_c = 10001, v_b = 10 and v_y = 001. As in the case of ordinary bit strings, ǫ stands for the empty colored bit string. And, for colored bit strings w and u, w ⪯ u again means that w is a (not necessarily proper) initial segment of u; w ⋠ u means the opposite.

Definition 11.1 A colored tree is a set T of colored bit strings, called its branches, such that the following conditions are satisfied:

a) The set {v_c | v ∈ T} — which we denote by T_c — is a tree in the sense of Convention 3.5.
b) For any w, u ∈ T, if w_c = u_c, then w = u.
c) For no v ∈ T do we have {v0_b, v1_y} ⊆ T or {v0_y, v1_b} ⊆ T.

A branch v of T is said to be a leaf iff v_c is a leaf of T_c. When represented in the style of Figure 1 of [7] (page 36), a colored tree will look like an ordinary tree, with the only difference that now every edge will have one of the colors blue or yellow. Also, by condition (c), both of the outgoing edges (“sibling” edges) of any non-leaf node will have the same color.
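The three content operations are plain string projections. The following sketch (ours) models a colored bit as a (content, color) pair and reproduces the example above; content, blue and yellow correspond to v_c, v_b and v_y.

    def content(v):  return ''.join(c for c, _ in v)
    def blue(v):     return ''.join(c for c, d in v if d == 'blue')
    def yellow(v):   return ''.join(c for c, d in v if d == 'yellow')

    # the example from the text: v = 1_b 0_b 0_y 0_y 1_y
    v = [('1', 'blue'), ('0', 'blue'), ('0', 'yellow'), ('0', 'yellow'), ('1', 'yellow')]
    assert content(v) == '10001' and blue(v) == '10' and yellow(v) == '001'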
Lemma 11.2 Assume T is a colored tree, and w, u are branches of T with w_b ⪯ u_b and w_y ⪯ u_y. Then w ⪯ u.

Proof. Assume T is a colored tree, w, u ∈ T, and w ⋠ u. We want to show that then w_b ⋠ u_b or w_y ⋠ u_y. Let v be the longest common initial segment of w and u, so we have w = vw′ and u = vu′ for some w′, u′ such that w′ is nonempty and w′ and u′ do not have a nonempty common initial segment. Assume the first bit of w′ is 0_b (the cases when it is 1_b, 0_y or 1_y, of course, are similar). If u′ is empty, then w obviously contains more blue bits than u does, and we are done. Assume now u′ is nonempty, and let b be the first bit of u′. Since w′ and u′ do not have a nonempty common initial segment, b is different from 0_b. By condition (b) of Definition 11.1, the content of b cannot be 0 (for otherwise we would have (v0_b)_c = (vb)_c and hence v0_b = vb, i.e. b = 0_b). Consequently, b is either 1_b or 1_y. The case b = 1_y is ruled out by condition (c) of Definition 11.1. Thus, b = 1_b. But the blue content of v0_b is v_b0 while the blue content of v1_b is v_b1. Taking into account the obvious fact that the former is an initial segment of w_b and the latter is an initial segment of u_b, we find w_b ⋠ u_b. □

Now comes a description of our EPM E. At the beginning, this machine creates a record T of the type ‘finite colored tree’, and initializes it to {ǫ}. After that, E follows the following procedure:

Procedure LOOP: Keep granting permission until the adversary makes a move γ. If γ satisfies the conditions of one of the following four cases, act as the corresponding case prescribes; otherwise go to an infinite loop in a permission state. In each of the four cases, the list v1, . . . , vk is assumed to be arranged lexicographically.

Case (i): γ = 2.w: for some bit string w. Let v1, . . . , vk be all of the leaves v of T with w = v_b. Then make the moves 1.(v1)_c:, . . . , 1.(vk)_c:, update T to T ∪ {v1 0_b, v1 1_b, . . . , vk 0_b, vk 1_b}, and repeat LOOP.

Case (ii): γ = 2.w.u: for some bit strings w, u. Let v1, . . . , vk be all of the leaves v of T such that w ⪯ v_b and u = v_y. Then make the moves 1.(v1)_c:, . . . , 1.(vk)_c:, update T to T ∪ {v1 0_y, v1 1_y, . . . , vk 0_y, vk 1_y}, and repeat LOOP.

Case (iii): γ = 2.w.u.α for some bit strings w, u and move α. Let v1, . . . , vk be all of the leaves v of T such that w ⪯ v_b and u ⪯ v_y. Then make the moves 1.(v1)_c.α, . . . , 1.(vk)_c.α, and repeat LOOP.

Case (iv): γ = 1.w.α for some bit string w and move α. Let v1, . . . , vk be all of the leaves v of T with w ⪯ v_c. Then make the moves 2.(v1)_b.(v1)_y.α, . . . , 2.(vk)_b.(vk)_y.α, and repeat LOOP.

Fix any interpretation ∗, valuation e and e-computation branch B of E. Let Θ be the run spelled by B. From the above description it is immediately clear that B is fair. Hence, in order to show that E wins, it would be sufficient to show that Wn_e^{◦|F∗→◦|◦|F∗}⟨Θ⟩ = ⊤. Notice that the work of E does not depend on e. And, as e is fixed, we can safely and unambiguously omit this parameter (as we often did in the previous section) in expressions such as e[A], Lr_e^A or Wn_e^A, and just write or say A, Lr^A or Wn^A. Of course, E is interpretation-blind, so, as long as it wins ◦|F∗ → ◦|◦|F∗, it is a uniform solution for ◦|F → ◦|◦|F.

Let N = {1, . . . , m} if LOOP is iterated the finite number m of times in B, and N = {1, 2, 3, . . .} otherwise. For i ∈ N, we let Ti denote the value of record T at the beginning of the ith iteration of LOOP; Θi will mean the initial segment of Θ consisting of the moves made by the beginning of the ith iteration of LOOP. Finally, Φi will stand for ¬Θi^{1.} and Ψi for Θi^{2.}.

From the description of LOOP it is immediately obvious that, for each i ∈ N, Ti is a finite colored tree, and that T1 ⊆ T2 ⊆ . . . ⊆ Ti. In our subsequent reasoning we will implicitly rely on this fact.

Lemma 11.3 For every i ∈ N, we have:

a) Φi is prelegal and Tree⟨Φi⟩ = (Ti)_c.
b) Ψi is prelegal.
c) For every leaf x of Tree⟨Ψi⟩, Ψi^x is prelegal.
d) For every leaf z of Ti, z_b is a leaf of Tree⟨Ψi⟩ and z_y is a leaf of Tree⟨Ψi^{z_b}⟩.
e) For every leaf x of Tree⟨Ψi⟩ and every leaf y of Tree⟨Ψi^x⟩, there is a leaf z of Ti such that z_b = x and z_y = y. By Lemma 11.2, such a z is unique.
f) For every leaf z of Ti, Φi^{z_c} = (Ψi^{z_b})^{z_y}.
g) Θi is a legal position of ◦|F∗ → ◦|◦|F∗; hence, Φi ∈ Lr^{◦|F∗} and Ψi ∈ Lr^{◦|◦|F∗}.

Proof. We proceed by induction on i. The basis case with i = 1 is rather straightforward for each clause of the lemma and we do not discuss it. For the induction step, assume i+1 ∈ N, and the seven clauses of the lemma are true for i.

Clause (a): By the induction hypothesis, Φi is prelegal and Tree⟨Φi⟩ = (Ti)_c. Assume first that the ith iteration of LOOP deals with Case (i), so that Φi+1 = ⟨Φi, ⊥(v1)_c:, . . . , ⊥(vk)_c:⟩. Each of (v1)_c, . . . , (vk)_c is a leaf of (Ti)_c, i.e. a leaf of Tree⟨Φi⟩. This guarantees that Φi+1 is prelegal. Also, by the definition of the function Tree, we have Tree⟨Φi+1⟩ = Tree⟨Φi⟩ ∪ {(v1)_c0, (v1)_c1, . . . , (vk)_c0, (vk)_c1}. But the latter is nothing but (Ti+1)_c, as can be seen from the description of how Case (i) updates Ti to Ti+1. A similar argument applies when the ith iteration of LOOP deals with Case (ii). Assume now the ith iteration of LOOP deals with Case (iii). Note that the moves made in the antecedent of ◦|F∗ → ◦|◦|F∗ (the moves that bring Φi to Φi+1) are nonreplicative — specifically, they look like v.α where v ∈ (Ti)_c = Tree⟨Φi⟩. Such moves yield prelegal positions and do not change the value of Tree, so that Tree⟨Φi⟩ = Tree⟨Φi+1⟩. It remains to note that T is not updated in this subcase, so that we also have (Ti+1)_c = (Ti)_c. Hence Tree⟨Φi+1⟩ = (Ti+1)_c. Finally, suppose the ith iteration of LOOP deals with Case (iv). It is the environment who moves in the antecedent of ◦|F∗ → ◦|◦|F∗, and does so before the machine makes any moves. Then the clean environment assumption — in conjunction with the induction hypothesis — implies that such a move cannot bring Φi to an illegal position of ◦|F∗ and hence cannot bring it to a non-prelegal position. So, Φi+1 is prelegal. As for Tree⟨Φi+1⟩ = (Ti+1)_c, it holds for the same reason as in the previous case.

Clause (b): If the ith iteration of LOOP deals with Case (i), (ii) or (iii), it is the environment who moves in the consequent of ◦|F∗ → ◦|◦|F∗, and the clean environment assumption guarantees that Ψi+1 is prelegal. Assume now that the ith iteration of LOOP deals with Case (iv), so that Ψi+1 = ⟨Ψi, ⊤(v1)_b.(v1)_y.α, . . . , ⊤(vk)_b.(vk)_y.α⟩. By the induction hypothesis for clause (d), each (vj)_b (1 ≤ j ≤ k) is a leaf of Tree⟨Ψi⟩, so adding the moves ⊤(v1)_b.(v1)_y.α, . . . , ⊤(vk)_b.(vk)_y.α does not bring Ψi to a non-prelegal position, nor does it modify Tree⟨Ψi⟩, because the moves are nonreplicative. Hence Ψi+1 is prelegal.

Clause (c): Just as in the previous clause, when the ith iteration of LOOP deals with Case (i), (ii) or (iii), the desired conclusion follows from the clean environment assumption. Assume now that the ith iteration of LOOP deals with Case (iv). Consider any leaf x of Tree⟨Ψi+1⟩. As noted when discussing Case (iv) in the proof of clause (b), Tree⟨Ψi⟩ = Tree⟨Ψi+1⟩, so x is also a leaf of Tree⟨Ψi⟩. Therefore, if Ψi+1^x = Ψi^x, the conclusion that Ψi+1^x is prelegal follows from the induction hypothesis. Suppose now Ψi+1^x ≠ Ψi^x. Note that then Ψi+1^x looks like ⟨Ψi^x, ⊤y1.α, . . . , ⊤ym.α⟩, where for each yj (1 ≤ j ≤ m) we have z_b = x and z_y = yj for some leaf z of Ti. By the induction hypothesis for clause (d), each such yj is a leaf of Tree⟨Ψi^x⟩. By the induction hypothesis for the present clause, Ψi^x is prelegal. Adding to such a position the nonreplicative moves ⊤y1.α, . . . , ⊤ym.α — where the yj are leaves of Tree⟨Ψi^x⟩ — cannot bring it to a non-prelegal position. Thus, Ψi+1^x remains prelegal.

Clauses (d) and (e): If the ith iteration of LOOP deals with Case (iii) or (iv), Ti is not modified, and no moves of the form x: or x.y: (where x, y are bit strings) are made in the consequent of ◦|F∗ → ◦|◦|F∗, so Tree⟨Ψi⟩ and Tree⟨Ψi^x⟩ (any leaf x of Tree⟨Ψi⟩) are not affected, either. Hence clauses (d) and (e) for i+1 are automatically inherited from the induction hypothesis for these clauses. This inheritance also takes place — even if no longer “automatically” — when the ith iteration of LOOP deals with Case (i) or (ii).
This can be verified by a routine analysis of how Cases (i) and (ii) modify Ti and the other relevant parameters. Details are left to the reader.

Clause (f): Consider any leaf z of Ti+1. When the ith iteration of LOOP deals with Case (i) or (ii), no moves of the form x.α are made in the antecedent of ◦|F∗ → ◦|◦|F∗, and no moves of the form x.y.α are made in the consequent (any bit strings x, y). Based on this, it is easy to see that Φi+1^{z_c} = Φi^{z_c} and that, for all bit strings x, y — including the case x = z_b and y = z_y — (Ψi+1^x)^y = (Ψi^x)^y. Hence clause (f) for i+1 is inherited from the same clause for i. Now suppose the ith iteration of LOOP deals with Case (iii). Then Ti+1 = Ti and hence z is also a leaf of Ti. From the description of Case (iii) one can easily see that if w ⋠ z_b or u ⋠ z_y, we have Φi+1^{z_c} = Φi^{z_c} and (Ψi+1^{z_b})^{z_y} = (Ψi^{z_b})^{z_y}, so the equation Φi+1^{z_c} = (Ψi+1^{z_b})^{z_y} is true by the induction hypothesis; and if w ⪯ z_b and u ⪯ z_y, then Φi+1^{z_c} = ⟨Φi^{z_c}, ⊥α⟩ and (Ψi+1^{z_b})^{z_y} = ⟨(Ψi^{z_b})^{z_y}, ⊥α⟩. But, by the induction hypothesis, Φi^{z_c} = (Ψi^{z_b})^{z_y}. Hence Φi+1^{z_c} = (Ψi+1^{z_b})^{z_y}. A similar argument applies when the ith iteration of LOOP deals with Case (iv).

Clause (g): Note that all of the moves made in any of Cases (i)-(iv) of LOOP have the prefix ‘1.’ or ‘2.’, i.e. are made either in the antecedent or the consequent of ◦|F∗ → ◦|◦|F∗. Hence, in order to show that Θi+1 is a legal position of ◦|F∗ → ◦|◦|F∗, it would suffice to verify that Φi+1 ∈ Lr^{◦|F∗} and Ψi+1 ∈ Lr^{◦|◦|F∗}.

Suppose the ith iteration of LOOP deals with Case (i) or (ii). The clean environment assumption guarantees that Ψi+1 ∈ Lr^{◦|◦|F∗}. In the antecedent of ◦|F∗ → ◦|◦|F∗ only replicative moves are made. Replicative moves can yield an illegal position (Φi+1 in our case) of a ◦|-game only if they yield a non-prelegal position. But, by clause (a), Φi+1 is prelegal. Hence it is a legal position of ◦|F∗.

Suppose now the ith iteration of LOOP deals with Case (iii). Again, that Ψi+1 ∈ Lr^{◦|◦|F∗} is guaranteed by the clean environment assumption. So, we only need to verify that Φi+1 ∈ Lr^{◦|F∗}. By clause (a), this position is prelegal. So, it remains to see that, for any leaf y of Tree⟨Φi+1⟩, Φi+1^y ∈ Lr^{F∗}. Pick an arbitrary leaf y of Tree⟨Φi+1⟩ — i.e., by clause (a), of (Ti+1)_c. Let z be the leaf of Ti+1 with z_c = y. We already know that Ψi+1 ∈ Lr^{◦|◦|F∗}. By clause (d), we also know that z_b is a leaf of Tree⟨Ψi+1⟩. Consequently, Ψi+1^{z_b} ∈ Lr^{◦|F∗}. Again by clause (d), z_y is a leaf of Tree⟨Ψi+1^{z_b}⟩. Hence, (Ψi+1^{z_b})^{z_y} should be a legal position of F∗. But, by clause (f), Φi+1^{z_c} = (Ψi+1^{z_b})^{z_y}. Thus, Φi+1^y ∈ Lr^{F∗}.

Finally, suppose the ith iteration of LOOP deals with Case (iv). By the clean environment assumption, Φi+1 ∈ Lr^{◦|F∗}. Now consider Ψi+1. This position is prelegal by clause (b). So, in order for Ψi+1 to be a legal position of ◦|◦|F∗, for every leaf x of Tree⟨Ψi+1⟩ we should have Ψi+1^x ∈ Lr^{◦|F∗}. Consider an arbitrary such leaf x. By clause (c), Ψi+1^x is prelegal. Hence, a sufficient condition for Ψi+1^x ∈ Lr^{◦|F∗} is that, for every leaf y of Tree⟨Ψi+1^x⟩, (Ψi+1^x)^y ∈ Lr^{F∗}. So, let y be an arbitrary such leaf. By clause (e), there is a leaf z of Ti+1 such that z_b = x and z_y = y. Therefore, by clause (f), Φi+1^{z_c} = (Ψi+1^x)^y. But we know that Φi+1 ∈ Lr^{◦|F∗} and hence (with clause (a) in mind) Φi+1^{z_c} ∈ Lr^{F∗}. Consequently, (Ψi+1^x)^y ∈ Lr^{F∗}. □
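Clauses (e) and (f) rest on the observation that a branch z of T is exactly an interleaving of a branch x of the outer tree (namely z_b) with a branch y of the corresponding inner tree (namely z_y). The following small sketch (ours) makes the bijection concrete, again modeling colored bits as (content, color) pairs.

    def project(z):
        """Return (z_b, z_y): the blue and yellow contents of a colored string z."""
        return (''.join(c for c, d in z if d == 'blue'),
                ''.join(c for c, d in z if d == 'yellow'))

    def interleave(x, y, colors):
        """Recover z from x = z_b, y = z_y and the color pattern of z's bits."""
        bits_x, bits_y = iter(x), iter(y)
        return [(next(bits_x) if d == 'blue' else next(bits_y), d) for d in colors]

    z = [('1', 'blue'), ('0', 'yellow'), ('1', 'yellow'), ('0', 'blue')]
    assert interleave(*project(z), [d for _, d in z]) == z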

Lemma 11.4 For every finite initial segment Υ of Θ, there is i ∈ N such that Υ is a (not necessarily proper) initial segment of Θi and hence of every Θj with i ≤ j ∈ N.

Proof. The statement of the lemma is straightforward when there are infinitely many iterations of LOOP, for each iteration adds a nonzero number of new moves to the run and hence there are arbitrarily long Θi's, each of them being an initial segment of Θ. Suppose now LOOP is iterated a finite number m of times. It would be (necessary and) sufficient to verify that in this case Θ = Θm, i.e. no moves are made during the last iteration of LOOP. But this is indeed so. From the description of LOOP we see that the machine does not make any moves during a given iteration unless the environment makes a move γ first. So, assume ⊥ makes move γ during the mth iteration of LOOP. By the clean environment assumption, we should have ⟨Θm, ⊥γ⟩ ∈ Lr^{◦|F∗→◦|◦|F∗}. It is easy to see that such a γ would have to satisfy the conditions of one of the Cases (i)-(iv) of LOOP. But then there would be an (m+1)th iteration of LOOP, contradicting our assumption that there are only m iterations. □

Let us use Φ and Ψ to denote ¬Θ^{1.} and Θ^{2.}, respectively. Of course, the statement of Lemma 11.4 is true for Φ and Ψ (instead of Θ) as well. Taking into account that, by definition, a given run is legal iff all of its finite initial segments are legal, the following fact is an immediate corollary of Lemmas 11.4 and 11.3(g):

    Θ ∈ Lr^{◦|F∗→◦|◦|F∗}. Hence, Φ ∈ Lr^{◦|F∗} and Ψ ∈ Lr^{◦|◦|F∗}.   (6)

To complete our proof of Lemma 10.12, we need to show that Wn^{◦|F∗→◦|◦|F∗}⟨Θ⟩ = ⊤. With (6) in mind, if Wn^{◦|◦|F∗}⟨Ψ⟩ = ⊤, we are done. Assume now Wn^{◦|◦|F∗}⟨Ψ⟩ = ⊥. Then, by the definition of ◦|, there is an infinite bit string x such that Ψ^x is a legal but lost (by ⊤) run of ◦|F∗. This means that, for some infinite bit string y,

    Wn^{F∗}⟨(Ψ^x)^y⟩ = ⊥.   (7)

Fix these x and y. For each i ∈ N, let xi denote the (obviously unique) leaf of Tree⟨Ψi⟩ such that xi ⪯ x, and let yi denote the (again unique) leaf of Tree⟨Ψi^{xi}⟩ such that yi ⪯ y. Next, let zi denote the leaf of Ti with (zi)_b = xi and (zi)_y = yi. According to Lemma 11.3(e), such a zi exists and is unique.

Consider any i with i+1 ∈ N. Clearly xi ⪯ xi+1 and yi ⪯ yi+1. By our choice of the zj, we then have (zi)_b ⪯ (zi+1)_b and (zi)_y ⪯ (zi+1)_y. Hence, by Lemma 11.2, zi ⪯ zi+1. Let us fix an infinite colored bit string z such that, for every i ∈ N, zi ⪯ z. Based on the just-made observation that we always have zi ⪯ zi+1, such a z exists.

In view of Lemma 11.4, Lemma 11.3(f) easily allows us to find that Φ^{z_c} = (Ψ^x)^y. Therefore, by (7), Wn^{F∗}⟨Φ^{z_c}⟩ = ⊥. By the definition of ◦|, this means that Wn^{◦|F∗}⟨Φ⟩ = ⊥. Hence, by the definition of → and with (6) in mind, Wn^{◦|F∗→◦|◦|F∗}⟨Θ⟩ = ⊤. Done.


12 Proof of Theorem 7.1

Now we are ready to prove our main Theorem 7.1. Consider an arbitrary sequent S with INT ⊢ S. By induction on the INT-derivation of S, we are going to show that S has a uniform solution E. This is sufficient to conclude that INT is ‘uniformly sound’. The theorem also claims ‘constructive soundness’, i.e. that such an E can be effectively built from a given INT-derivation of S. This claim of the theorem is automatically taken care of by the fact that our proof of the existence of E is constructive: the uniform-validity and closure lemmas on which we rely provide a way of actually constructing a corresponding uniform solution. With this remark in mind, and for considerations of readability, in what follows we only talk about uniform validity, without explicitly mentioning uniform solutions for the corresponding formulas/sequents and without explicitly showing how to construct such solutions. Also, we no longer use ⇒ or ◦–, seeing each sequent F ⇒ K as the formula ◦|F → K and each subformula E1 ◦– E2 of such a formula as ◦|E1 → E2. This is perfectly legitimate because, by definition, (F ⇒ K)∗ = (◦|F → K)∗ and (E1 ◦– E2)∗ = (◦|E1 → E2)∗.

There are 15 cases to consider, corresponding to the 15 possible rules that might have been used at the last step of an INT-derivation of S, with S being the conclusion of the rule. In each non-axiom case below, “induction hypothesis” means the assumption that the premise(s) of the corresponding rule is (are) uniformly valid. The goal in each case is to show that the conclusion of the rule is also uniformly valid. “Modus ponens” should be understood as Lemma 9.4, and “transitivity” as Lemma 9.5.

Identity: Immediately from Lemma 10.1.

Domination: Immediately from Lemma 10.5.

Exchange: By the induction hypothesis, ⊢⊢⊢◦|G ∧ ◦|E ∧ ◦|F ∧ ◦|H → K. And, by Lemma 8.5(a), ⊢⊢⊢(◦|G ∧ ◦|E ∧ ◦|F ∧ ◦|H → K) → (◦|G ∧ ◦|F ∧ ◦|E ∧ ◦|H → K). Applying modus ponens yields ⊢⊢⊢◦|G ∧ ◦|F ∧ ◦|E ∧ ◦|H → K.

Weakening: Similar to the previous case, using Lemma 8.5(b) instead of 8.5(a).

Contraction: By Lemma 8.5(c) (with empty U), ⊢⊢⊢(◦|F → ◦|F ∧ ◦|F) → (◦|G ∧ ◦|F → ◦|G ∧ ◦|F ∧ ◦|F). And, by Lemma 10.4, ⊢⊢⊢◦|F → ◦|F ∧ ◦|F. Hence, by modus ponens, ⊢⊢⊢◦|G ∧ ◦|F → ◦|G ∧ ◦|F ∧ ◦|F. But, by the induction hypothesis, ⊢⊢⊢◦|G ∧ ◦|F ∧ ◦|F → K. Hence, by transitivity, ⊢⊢⊢◦|G ∧ ◦|F → K.

Right ◦–: From Lemma 8.5(d), ⊢⊢⊢(◦|G ∧ ◦|F → K) → (◦|G → (◦|F → K)). And, by the induction hypothesis, ⊢⊢⊢◦|G ∧ ◦|F → K. Applying modus ponens, we get ⊢⊢⊢◦|G → (◦|F → K).

Left ◦–: By the induction hypothesis,

    ⊢⊢⊢◦|G ∧ ◦|F → K1;   (8)
    ⊢⊢⊢◦|H → K2.   (9)

Our goal is to show that

    ⊢⊢⊢◦|G ∧ ◦|H ∧ ◦|(◦|K2 → F) → K1.   (10)

By Lemma 9.1, (9) implies ⊢⊢⊢◦|(◦|H → K2). Also, by Lemma 10.2, ⊢⊢⊢◦|(◦|H → K2) → (◦|◦|H → ◦|K2). Applying modus ponens, we get ⊢⊢⊢◦|◦|H → ◦|K2. Again using Lemma 9.1, we find ⊢⊢⊢◦|(◦|◦|H → ◦|K2), which, (again) by Lemma 10.2 and modus ponens, implies

    ⊢⊢⊢◦|◦|◦|H → ◦|◦|K2.   (11)

Combining Lemmas 8.5(c) (with empty W, U) and 10.12, by modus ponens, we find ⊢⊢⊢◦|H → ◦|◦|H. Next, by Lemma 10.3, ⊢⊢⊢◦|◦|H → ◦|◦|H. Hence, by transitivity, ⊢⊢⊢◦|H → ◦|◦|H. At the same time, by Lemma 10.12, ⊢⊢⊢◦|◦|H → ◦|◦|◦|H. Again by transitivity, ⊢⊢⊢◦|H → ◦|◦|◦|H. This, together with (11), by transitivity, yields

    ⊢⊢⊢◦|H → ◦|◦|K2.   (12)

Next, by Lemma 10.2,

    ⊢⊢⊢◦|(◦|K2 → F) → (◦|◦|K2 → ◦|F).   (13)

From Lemma 8.5(e),

    ⊢⊢⊢((◦|(◦|K2 → F) → (◦|◦|K2 → ◦|F)) ∧ (◦|H → ◦|◦|K2)) → (◦|(◦|K2 → F) → (◦|H → ◦|F)).

The above, together with (13) and (12), by modus ponens, yields

    ⊢⊢⊢◦|(◦|K2 → F) → (◦|H → ◦|F).   (14)

By Lemma 8.5(f),

    ⊢⊢⊢((◦|(◦|K2 → F) → (◦|H → ◦|F)) ∧ (◦|G ∧ ◦|F → K1)) → (◦|G ∧ ◦|H ∧ ◦|(◦|K2 → F) → K1).   (15)

From (14), (8) and (15), by modus ponens, we obtain the desired (10).

Right ⊓: By the induction hypothesis, ⊢⊢⊢◦|G → K1, . . . , ⊢⊢⊢◦|G → Kn. And, from Lemma 8.5(h), ⊢⊢⊢(◦|G → K1) ∧ . . . ∧ (◦|G → Kn) → (◦|G → K1 ⊓ . . . ⊓ Kn). Modus ponens yields ⊢⊢⊢◦|G → K1 ⊓ . . . ⊓ Kn.

Left ⊓: By Lemma 10.6(a), ⊢⊢⊢◦|(F1 ⊓ . . . ⊓ Fn) → ◦|Fi; and, by Lemma 8.5(c), ⊢⊢⊢(◦|(F1 ⊓ . . . ⊓ Fn) → ◦|Fi) → (◦|G ∧ ◦|(F1 ⊓ . . . ⊓ Fn) → ◦|G ∧ ◦|Fi). Modus ponens yields ⊢⊢⊢◦|G ∧ ◦|(F1 ⊓ . . . ⊓ Fn) → ◦|G ∧ ◦|Fi. But, by the induction hypothesis, ⊢⊢⊢◦|G ∧ ◦|Fi → K. So, by transitivity, ⊢⊢⊢◦|G ∧ ◦|(F1 ⊓ . . . ⊓ Fn) → K.

Right ⊔: By the induction hypothesis, ⊢⊢⊢◦|G → Ki. According to Lemma 8.5(j), ⊢⊢⊢(◦|G → Ki) → (◦|G → K1 ⊔ . . . ⊔ Kn). Therefore, by modus ponens, ⊢⊢⊢◦|G → K1 ⊔ . . . ⊔ Kn.

Left ⊔: By the induction hypothesis, ⊢⊢⊢◦|G ∧ ◦|F1 → K, . . . , ⊢⊢⊢◦|G ∧ ◦|Fn → K. And, by Lemma 8.5(i), ⊢⊢⊢(◦|G ∧ ◦|F1 → K) ∧ . . . ∧ (◦|G ∧ ◦|Fn → K) → (◦|G ∧ (◦|F1 ⊔ . . . ⊔ ◦|Fn) → K). Hence, by modus ponens,

    ⊢⊢⊢◦|G ∧ (◦|F1 ⊔ . . . ⊔ ◦|Fn) → K.   (16)

Next, by Lemma 8.5(c), ⊢⊢⊢(◦|(F1 ⊔ . . . ⊔ Fn) → ◦|F1 ⊔ . . . ⊔ ◦|Fn) → (◦|G ∧ ◦|(F1 ⊔ . . . ⊔ Fn) → ◦|G ∧ (◦|F1 ⊔ . . . ⊔ ◦|Fn)). But, by Lemma 10.6(c), ⊢⊢⊢◦|(F1 ⊔ . . . ⊔ Fn) → ◦|F1 ⊔ . . . ⊔ ◦|Fn. Modus ponens yields ⊢⊢⊢◦|G ∧ ◦|(F1 ⊔ . . . ⊔ Fn) → ◦|G ∧ (◦|F1 ⊔ . . . ⊔ ◦|Fn). From here and (16), by transitivity, ⊢⊢⊢◦|G ∧ ◦|(F1 ⊔ . . . ⊔ Fn) → K.

Right ⊓: First, consider the case when G is nonempty. By the induction hypothesis, ⊢⊢⊢◦|G → K(y). Therefore, by Lemma 9.2, ⊢⊢⊢⊓y(◦|G → K(y)) and, by Lemma 10.7 and modus ponens, ⊢⊢⊢⊓y◦|G → ⊓yK(y). At the same time, by Lemma 10.10, ⊢⊢⊢◦|G → ⊓y◦|G. By transitivity, we then get ⊢⊢⊢◦|G → ⊓yK(y). But, by Lemma 10.11, ⊢⊢⊢⊓yK(y) → ⊓xK(x). Transitivity yields ⊢⊢⊢◦|G → ⊓xK(x). The case when G is empty is simpler, for then ⊢⊢⊢◦|G → ⊓xK(x), i.e. ⊢⊢⊢⊓xK(x), can be obtained directly from the induction hypothesis by Lemmas 9.2, 10.11 and modus ponens.

Left ⊓: Similar to Left ⊓ above, only using Lemma 10.6(b) instead of 10.6(a).

Right ⊔: By the induction hypothesis, ⊢⊢⊢◦|G → K(t). And, by Lemma 10.9, ⊢⊢⊢K(t) → ⊔xK(x). Transitivity yields ⊢⊢⊢◦|G → ⊔xK(x).

Left ⊔: By the induction hypothesis, ⊢⊢⊢◦|G ∧ ◦|F(y) → K. This, by Lemma 9.2, implies ⊢⊢⊢⊓y(◦|G ∧ ◦|F(y) → K). From here, by Lemma 10.8 and modus ponens, we get

    ⊢⊢⊢⊓y◦|G ∧ ⊔y◦|F(y) → ⊔yK.   (17)

By Lemma 8.5(c), ⊢⊢⊢(◦|G → ⊓y◦|G) → (◦|G ∧ ⊔y◦|F(y) → ⊓y◦|G ∧ ⊔y◦|F(y)). This, together with Lemma 10.10, by modus ponens, implies ⊢⊢⊢◦|G ∧ ⊔y◦|F(y) → ⊓y◦|G ∧ ⊔y◦|F(y). From here and (17), by transitivity, ⊢⊢⊢◦|G ∧ ⊔y◦|F(y) → K. But, by Lemmas 10.11, 8.5(c) and modus ponens, ⊢⊢⊢◦|G ∧ ⊔x◦|F(x) → ◦|G ∧ ⊔y◦|F(y). Hence, by transitivity,

    ⊢⊢⊢◦|G ∧ ⊔x◦|F(x) → K.   (18)

Next, by Lemma 8.5(c), ⊢⊢⊢(◦|⊔xF(x) → ⊔x◦|F(x)) → (◦|G ∧ ◦|⊔xF(x) → ◦|G ∧ ⊔x◦|F(x)). But, by Lemma 10.6(d), ⊢⊢⊢◦|⊔xF(x) → ⊔x◦|F(x). Modus ponens yields ⊢⊢⊢◦|G ∧ ◦|⊔xF(x) → ◦|G ∧ ⊔x◦|F(x). From here and (18), by transitivity, ⊢⊢⊢◦|G ∧ ◦|⊔xF(x) → K. □
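Stepping back, the proof just given is a compositional translation from INT-derivations to EPMs, with one fixed combination of closure and validity lemmas per rule. A schematic fragment of such a translator (ours; lemma_epm and mp are hypothetical stand-ins for the canned uniform solutions and for the Lemma 9.4 combinator):

    def solution(derivation, lemma_epm, mp):
        """Map an INT-derivation (rule name, premise derivations, conclusion) to an EPM."""
        rule, premises, _conclusion = derivation
        if rule == 'Identity':          # handled by Lemma 10.1
            return lemma_epm('10.1')
        if rule == 'Domination':        # handled by Lemma 10.5
            return lemma_epm('10.5')
        if rule == 'Exchange':          # premise solution + Lemma 8.5(a), via modus ponens
            return mp([solution(premises[0], lemma_epm, mp)], lemma_epm('8.5a'))
        raise NotImplementedError('the remaining twelve rules follow the same pattern')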

References

[1] A. Blass. Degrees of indeterminacy of games. Fund. Math. 77 (1972) 151-166.
[2] A. Blass. A game semantics for linear logic. Ann. Pure Appl. Logic 56 (1992) 183-220.
[3] W. Felscher. Dialogues, strategies, and intuitionistic provability. Ann. Pure Appl. Logic 28 (1985) 217-254.
[4] J.Y. Girard. Linear logic. Theoret. Comp. Sci. 50 (1) (1987) 1-102.
[5] K. Gödel. Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes. Dialectica 12 (1958) 280-287.
[6] G. Japaridze. A constructive game semantics for the language of linear logic. Ann. Pure Appl. Logic 85 (1997) 87-156.
[7] G. Japaridze. Introduction to computability logic. Ann. Pure Appl. Logic 123 (2003) 1-99.
[8] G. Japaridze. From truth to computability I. Theoret. Comp. Sci. (to appear).
[9] G. Japaridze. Computability logic: a formal theory of interaction. In: Interactive Computation: The New Paradigm. D. Goldin, S. Smolka and P. Wegner, eds. Springer Verlag, Berlin (to appear).
[10] G. Japaridze. Propositional computability logic I. ACM Transactions on Computational Logic 7 (2006) 302-330.
[11] G. Japaridze. Propositional computability logic II. ACM Transactions on Computational Logic 7 (2006) 331-362.
[12] S.C. Kleene. Introduction to Metamathematics. D. van Nostrand Company, New York / Toronto, 1952.
[13] A.N. Kolmogorov. Zur Deutung der intuitionistischen Logik. Math. Zeits. 35 (1932) 58-65.
[14] P. Lorenzen. Ein dialogisches Konstruktivitätskriterium. In: Infinitistic Methods, Proc. Symp. Foundations of Mathematics, PWN, Warsaw, 1961, 193-200.
[15] Y. Medvedev. Interpretation of logical formulas by means of finite problems and its relation to the realizability theory. Soviet Math. Doklady 4 (1963) 180-183.
