Nondeterministic Polynomial Time versus Nondeterministic Logarithmic Space

Lance Fortnow
University of Chicago & CWI
Dept. AA1, P.O. Box 94079
NL-1090 GB Amsterdam, The Netherlands

December 4, 1996

Abstract

We discuss the possibility of using the relatively old technique of diagonalization to separate complexity classes, in particular NL from NP. We show several results in this direction:

- Any nonconstant level of the polynomial-time hierarchy strictly contains NL.
- SAT is not simultaneously in NL and deterministic n log^j n time for any j.
- On the negative side, we present a relativized world where P = NP but any nonconstant level of the polynomial-time hierarchy differs from P.

* Email: [email protected]. Supported in part by NSF grant CCR 92-53582 and a Fulbright Scholar award.

1 Introduction

Separating complexity classes remains the most important and difficult problem in theoretical computer science. Circuit complexity and other techniques on finite functions have seen some exciting early successes (see the survey of Boppana and Sipser [BS90]) but have yet to achieve their promise of separating complexity classes above logarithmic space. Other techniques based on logic and geometry have also given us separations only on very restricted models. We should turn back to a traditional separation technique: diagonalization. In this paper, we argue that diagonalization might yet help us separate two common classes, nondeterministic logarithmic space (NL) and nondeterministic polynomial time (NP).

We have no inherent reason to believe that diagonalization will not work to separate NL from NP. One may object that oracles exist that make NL^A = NP^A. However, relativization results for space-bounded classes are hard to interpret (see Fortnow [For94]). In particular, any oracle model that collapses NL and NP will also collapse NL and AP (alternating polynomial time), even though we know PSPACE = AP [CKS81], NL ⊆ DSPACE[log^2 n] [Sav70], and PSPACE strictly contains DSPACE[log^2 n] [HS65]. Diagonalization also avoids the limits of combinatorial proofs described by Razborov and Rudich [RR94].

First we prove a powerful lemma showing that any nonconstant level of the polynomial-time hierarchy strictly contains NL. The separation result relies on the strong nondeterministic space


hierarchy that follows from the closure of nondeterministic space under complement [Imm88, Sze88]. Diagonalization forms the core of this hierarchy: one diagonalizes against smaller-space machines one at a time.

From this lemma we get a mild separation result for satisfiability, an NP-complete problem lying in NTIME[n]. We show that SAT does not simultaneously reside in NL and deterministic n log^j n time for any j. From this we also conclude that SAT does not have log-time uniform NC^1 circuits of size n log^j n.

If P = NP then every constant level of the polynomial-time hierarchy collapses to P. Suppose one could show that if P = NP then some nonconstant level of the polynomial-time hierarchy would collapse to P. This would separate NP from NL: if NP = NL then P = NP, so a nonconstant level of the polynomial-time hierarchy would collapse to P = NL, a contradiction. However, we show some relativizable limits of this approach: we create a relativized world where, for any ε > 0, SAT is in DTIME[n^{1+ε}] but no nonconstant level of the polynomial-time hierarchy collapses to P.

Still, one may hope for results like: NP = NL implies that a nonconstant level of PH collapses to P. This would still separate NP from NL. Stronger assumptions like NP = L and NP in uniform NC^1 would separate NP from L and NP from uniform NC^1, respectively.

Complexity theorists have devoted much effort to separating complexity classes like NL and NP. This paper suggests that separating these classes might not be nearly as difficult as previously believed, perhaps considerably easier than separating P from NP.

2 Definitions

Most of the complexity classes discussed in this paper like NL, P, NP and the polynomial-time hierarchy have been well studied. De nitions and basic results of these classes can be found in basic textbooks such as Hopcroft and Ullman [HU79] or Garey and Johnson [GJ79]. We need to generalize the polynomial-time hierarchy to have super-constant levels.

Definition 2.1 The class Σ^p_{s(n)} consists of the set of languages accepted by polynomial-time alternating Turing machines that start in an ∃ state and, on input x, make at most s(|x|) - 1 alternations.

Note that for constant functions s(n) = k, Σ^p_{s(n)} corresponds to Σ^p_k, the kth level of the traditional polynomial-time hierarchy. Chandra, Kozen and Stockmeyer [CKS81] show the surprising power of unlimited alternation.

Theorem 2.2 (Chandra-Kozen-Stockmeyer) PSPACE = ∪_k Σ^p_{n^k}.

The class NC^1 consists of languages accepted by circuits of bounded fan-in and logarithmic depth. An NC^1 circuit family is t(n)-time uniform if, given pointers to two gates of a circuit on n inputs, a t(n)-time algorithm can determine whether one gate feeds into the other.

We use SAT^A to represent a relativized version of satisfiability. The relativized language SAT^A has the following properties for every oracle A:

1. SAT^A is NP^A-complete. In fact, every NTIME^A[t(n)] language can be reduced to a formula of size O(t(n) log t(n)).

2. SAT^A is in NTIME^A[n].

3. Whether φ is in SAT^A depends only on strings of A of length less than |φ|.

See Goldsmith and Joseph [GJ93] for a formal definition of SAT^A.
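Definition 2.1 ties hierarchy levels to the number of quantifier alternations. As a concrete illustration (ours, not part of the paper), the following sketch brute-force evaluates a prenex quantified Boolean formula and counts the alternations in its prefix; a prefix that starts with ∃ and makes k - 1 alternations is the Σ^p_k pattern.

```python
def eval_qbf(prefix, clauses, assignment=()):
    """Brute-force evaluation of a prenex quantified Boolean formula.

    prefix: string of 'E' (exists) / 'A' (forall), one per variable,
            outermost quantifier first.
    clauses: CNF matrix over 1-based variable indices; a negative
             literal denotes negation.
    """
    if len(assignment) == len(prefix):
        # matrix check: every clause contains a satisfied literal
        return all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses)
    branch = any if prefix[len(assignment)] == 'E' else all
    return branch(eval_qbf(prefix, clauses, assignment + (b,))
                  for b in (False, True))

def alternations(prefix):
    """Number of E/A switches in the prefix (what Definition 2.1 counts)."""
    return sum(a != b for a, b in zip(prefix, prefix[1:]))

# Exists x1 Forall x2: (x1 or x2) and (x1 or not x2) -- witnessed by x1 = True
print(eval_qbf("EA", [[1, 2], [1, -2]]))  # True
print(alternations("EA"))                 # 1
```

The prefix "EA" makes one alternation with a leading ∃, i.e. the Σ^p_2 shape; Definition 2.1 simply lets the bound s(|x|) - 1 on this count grow with the input length.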

3 Separation

In this section, we will separate nonconstant levels of the polynomial-time hierarchy from nondeterministic logarithmic space.

Lemma 3.1 Let s(n) be any monotone unbounded function computable in time polynomial in n. There exists a language L in Σ^p_{s(n)} but not in NL.

Note that we can apply Lemma 3.1 to a variety of slow-growing functions such as log n or the inverse Ackermann function. To prove Lemma 3.1 we need the following strong nondeterministic space hierarchy theorem. It follows from the work of Immerman [Imm88] and Szelepcsényi [Sze88] showing that nondeterministic space is closed under complement. See Immerman [Imm88] for details.

Theorem 3.2 (Immerman-Szelepcsényi) Let s_1(n) and s_2(n) be any fully space constructible functions such that s_1(n) ≥ log n and s_1(n) = o(s_2(n)). There is a language L in NSPACE[s_2(n)] but not in NSPACE[s_1(n)].

The proof of Lemma 3.1 then follows from the following lemma.

Lemma 3.3 For any monotone unbounded function s(n) ≤ n computable in time polynomial in n, NSPACE[s(n) log n] ⊆ Σ^p_{s(n)}.

The proof builds on ideas from the proof of Theorem 2.2 by Chandra, Kozen and Stockmeyer [CKS81], which itself builds on Savitch's Theorem [Sav70].

Proof: Fix an s(n) log n nondeterministic space-bounded Turing machine M and an input x. Consider the tableau of some potential accepting computation of M. Each row i describes the entire configuration of M at time i (except for the input), which has O(s(n) log n) < n^2 bits. We can assume that no configuration is repeated, so there are at most 2^{c s(n) log n} rows for some constant c.

The usual divide-and-conquer algorithm of Chandra, Kozen and Stockmeyer [CKS81] would require Ω(s(n) log n) alternations, so we need to be more careful. Instead of just dividing into two pieces like Chandra, Kozen and Stockmeyer, we divide the tableau into an appropriate polynomial number of pieces. This allows us to eliminate the extra log n term.

Let ID_0 represent the initial configuration of M(x). We can assume that after M accepts it erases its tape, moves the head to the left, goes to a special state and stays there forever. Call this final configuration ID_f. Let ID_a ⊢ ID_b be true if machine M starting in ID_a reaches ID_b in one step. Checking whether ID_a ⊢ ID_b can be done in deterministic polynomial time.

Define CHECK(ID_a, ID_b, t) to be TRUE if M starting in configuration ID_a will get to configuration ID_b in t steps. We have that M(x) accepts if and only if CHECK(ID_0, ID_f, 2^{c s(n) log n}). Fix the polynomial q(n) = n^{2c}.

Begin CHECK(ID_a, ID_b, t)
  If t ≤ q(n)(q(n) + 1) Then
    Existentially guess ID_1, ..., ID_{t-1}.
    If ID_a ⊢ ID_1 and ID_1 ⊢ ID_2 and ... and ID_{t-1} ⊢ ID_b
      Then output TRUE
      Else output FALSE
  Else
    Existentially guess ID_1, ..., ID_{q(n)}.
    Universally guess i in {0, ..., q(n)}.
    Let m = ⌈t/(q(n) + 1)⌉.
    Case
      (i = 0):         output CHECK(ID_a, ID_1, m).
      (0 < i < q(n)):  output CHECK(ID_i, ID_{i+1}, m).
      (i = q(n)):      output CHECK(ID_{q(n)}, ID_b, t - m·q(n)).
    End Case
End CHECK

Figure 1: Algorithm for CHECK

In Figure 1 we show how to compute CHECK recursively on an alternating polynomial-time Turing machine. Since the size of each configuration is bounded by n^2 bits and q(n) is a polynomial, CHECK runs in alternating polynomial time. We need to bound the number of alternations. Let ALT(t) be the number of alternations used by CHECK(ID_a, ID_b, t). Each recursive step of CHECK uses 2 alternations. We then have the recurrence

    ALT(t) = 2 + ALT(⌈t/(q(n) + 1)⌉).

For t > q(n)(q(n) + 1) we have

d q(n)t + 1 e  q(n)t + 1 + 1  q(tn) :

We have ALT(q(n)(q(n) + 1)) = 1, so ALT(t) = 2 log_{q(n)}(t). For CHECK(ID_0, ID_f, 2^{c s(n) log n}) and q(n) = n^{2c} we have

    ALT(2^{c s(n) log n}) = 2 log_{n^{2c}}(2^{c s(n) log n}) = 2 · (c s(n) log n)/(2c log n) = s(n).  □

Toda [Tod91] showed that any constant level of the polynomial-time hierarchy can be reduced to the complexity class PP. We show that extending his result would yield a nice separation.

Corollary 3.4 If for any unbounded monotone function s(n) computable in time polynomial in n, Σ^p_{s(n)} ⊆ P^PP, then PP strictly contains NL.

Proof: Suppose the assumption is true and NL = PP. We then have NL = P = PP, so Σ^p_{s(n)} ⊆ P^PP implies Σ^p_{s(n)} ⊆ P^P, which implies Σ^p_{s(n)} ⊆ P = NL, contradicting Lemma 3.1.  □
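To make the control flow of CHECK concrete, here is a small deterministic Python sketch; it is our illustration, not part of the paper. The machine is abstracted as an explicit one-step relation ⊢ on a toy set of configuration IDs, the existential guesses become enumeration over tuples (any), and the universal branch over the q(n) + 1 segments becomes a conjunction (all). A second helper iterates the recurrence ALT(t) = 2 + ALT(⌈t/(q + 1)⌉) to confirm that the alternation count grows like 2 log_q t.

```python
from itertools import product

def check(step, ids, a, b, t, q):
    """CHECK(ID_a, ID_b, t) from Figure 1, with the guesses made explicit.

    Returns True iff b is reachable from a in exactly t applications of
    `step`. Existential guess -> any() over tuples of intermediate IDs;
    universal guess over the q + 1 segments -> all().
    """
    if t == 0:
        return a == b
    if t <= q * (q + 1):
        # base case: guess the whole path ID_1, ..., ID_{t-1}
        return any(all(step(u, v) for u, v in zip((a,) + mid, mid + (b,)))
                   for mid in product(ids, repeat=t - 1))
    m = -(-t // (q + 1))  # integer ceil(t / (q + 1))
    # guess q intermediate IDs, then universally verify all q + 1 segments
    return any(all(check(step, ids, u, v, s, q)
                   for u, v, s in zip((a,) + mid, mid + (b,),
                                      [m] * q + [t - m * q]))
               for mid in product(ids, repeat=q))

def alt(t, q):
    """Iterate ALT(t) = 2 + ALT(ceil(t/(q+1))); the base case costs 1."""
    count = 1
    while t > q * (q + 1):
        t = -(-t // (q + 1))
        count += 2
    return count

# toy machine: configurations 0..3 with deterministic successor i -> i+1 mod 4
ids = range(4)
step = lambda u, v: v == (u + 1) % 4
print(check(step, ids, 0, 2, 6, q=1))  # True: 0 -> 1 -> ... -> 2 in 6 steps
print(alt(2 ** 80, 2 ** 20))           # a few alternations, ~ 2 log_q t
```

On an actual alternating machine the any/all calls are single guesses rather than loops, which is exactly where the polynomial running time and the s(n) alternation bound in the proof come from.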


4 A Separation for Satisfiability

We show how to use Lemma 3.1 to give a lower bound on the complexity of the boolean satisfiability problem.

Theorem 4.1 For any constant j,

    SAT ∉ NL ∩ DTIME[n log^j n].

Proof: We will show the following lemma.

Lemma 4.2 If for some constant j, SAT ∈ DTIME[n log^j n] then Σ^p_{log n/(3j log log n)} = P.

Theorem 4.1 follows from Lemma 4.2 and Lemma 3.1 as follows: Assume SAT ∈ NL ∩ DTIME[n log^j n]. By Lemma 4.2 we have Σ^p_{log n/(3j log log n)} = P. By SAT ∈ NL we have NL = P = NP and thus Σ^p_{log n/(3j log log n)} = NL. But this contradicts Lemma 3.1.

Proof of Lemma 4.2: Consider the language QBF_{s(n)} of quantified boolean formulae restricted to s(n) - 1 alternations where the first quantifier is "∃". Similar to the proof that QBF is PSPACE-complete, we have that QBF_{s(n)} is Σ^p_{s(n)}-complete. Also note that QBF_k reduces in linear time to SAT^{QBF_{k-1}} (one query). We will prove inductively on k that for k < log n/(3j log log n) we have that

    QBF_k ∈ DTIME[2nk · 3^{jk} log^{jk} n].

First note that for k < log n/(3j log log n), 2nk · 3^{jk} log^{jk} n

4i ln(i)/ε + 1 and n is much bigger than any of the strings set in previous stages. For φ of length less than n, set A to properly encode Equation (3). Set A to zero for all the other strings of length less than n.

Consider the polynomial-time computation of M_i^A. We can consider the computation of M_i^A as a circuit C of size 2^n over variables representing whether strings of length at most n_i are in A. We wish to convert this circuit to another one that depends only on the variables relating to B_n. For strings of length between n and n_i, not in B_n and not used in Equation (3), set them to zero in A.

Suppose C contains the variable φ1^{n_i - |φ| - 1}0. By construction, |φ| ≤ n. We can replace this variable by simulating whether φ ∈ SAT^A. We use an ∨ of size 2^{|φ|} to guess the possible satisfying assignments and an ∧ of size |φ| to check the assignment over the strings of A queried by φ for that potential satisfying assignment. Note these variables represent strings of length less than |φ|. Replace all of the strings not queried in B_n this way. This adds a depth of 2 to the circuit, but now every variable representing a string of length m is replaced by one representing a string of length m^{1/(1+ε)}. If we repeat this process ln(i)/ε times then all variables of Equation (3) represent strings of length less than n, so they are all previously encoded.

Thus we have a circuit C′ of size 2^n and depth 2 ln(i)/ε over the n^{s(n)-1} variables representing whether the strings in B_n are in A. By Lemma 5.3, C′ cannot compute parity. Fix a setting of the variables such that C′ and parity disagree and set A accordingly. This guarantees that L(M_i^A) ≠ L(A).  □
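The circuit fact invoked through Lemma 5.3 is a Håstad-style lower bound [Has89]: small shallow circuits cannot compute parity. For intuition, the depth-2 case for n = 3 can be checked exhaustively; the brute-force sketch below (our illustration, not part of the paper) confirms that no DNF with fewer than 2^{n-1} = 4 terms computes parity on 3 bits, while the four odd-weight minterms do.

```python
from itertools import combinations, product

n = 3
inputs = list(product((0, 1), repeat=n))
parity = lambda x: sum(x) % 2 == 1

# a term is a conjunction of literals (variable index, required value),
# with each variable appearing at most once
literals = [(i, v) for i in range(n) for v in (0, 1)]
terms = [t for r in range(1, n + 1) for t in combinations(literals, r)
         if len({i for i, _ in t}) == r]

def dnf_computes_parity(dnf):
    """True iff the OR of the given AND-terms equals parity on all inputs."""
    return all(any(all(x[i] == v for i, v in t) for t in dnf) == parity(x)
               for x in inputs)

# no DNF with fewer than 2^(n-1) terms computes parity ...
for size in range(1, 2 ** (n - 1)):
    assert not any(dnf_computes_parity(d) for d in combinations(terms, size))
# ... while 2^(n-1) terms (the odd-weight minterms) suffice
print(any(dnf_computes_parity(d) for d in combinations(terms, 2 ** (n - 1))))
```

The reason is visible in the search: any term with fewer than n literals covers a subcube containing inputs of both parities, so a correct DNF must consist of full minterms, one per odd-weight input.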

6 Conclusions

We believe that this paper has given hope to the idea that perhaps classes like NP and NL can be separated using simple techniques like diagonalization. We should not discard diagonalization as a non-"natural" proof technique. Rather, we should see what diagonalization will do for us in the light of our current understanding of complexity classes.

Acknowledgments Much of the research of this paper was motivated by the author's research with Harry Buhrman and Leen Torenvliet on autoreducibility [BFT95]. The author also thanks Harry Buhrman for discussions directly related to this paper.

References

[BFT95] H. Buhrman, L. Fortnow, and L. Torenvliet. Using autoreducibility to separate complexity classes. In Proceedings of the 36th IEEE Symposium on Foundations of Computer Science, pages 520-527. IEEE, New York, 1995.

[BS90] R. Boppana and M. Sipser. The complexity of finite functions. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, chapter 14, pages 757-804. North-Holland, 1990.

[CKS81] A. Chandra, D. Kozen, and L. Stockmeyer. Alternation. Journal of the ACM, 28(1):114-133, 1981.

[For94] L. Fortnow. The role of relativization in complexity theory. Bulletin of the European Association for Theoretical Computer Science, 52:229-244, February 1994.

[GJ79] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979.

[GJ93] J. Goldsmith and D. Joseph. Relativized isomorphisms of NP-complete sets. Computational Complexity, 3:186-205, 1993.

[Has89] J. Håstad. Almost optimal lower bounds for small depth circuits. In S. Micali, editor, Randomness and Computation, volume 5 of Advances in Computing Research, pages 143-170. JAI Press, Greenwich, 1989.

[HS65] J. Hartmanis and R. Stearns. On the computational complexity of algorithms. Transactions of the American Mathematical Society, 117:285-306, 1965.

[HS66] F. Hennie and R. Stearns. Two-tape simulation of multitape Turing machines. Journal of the ACM, 13(4):533-546, October 1966.

[HU79] J. E. Hopcroft and J. D. Ullman. Introduction to Automata Theory, Languages and Computation. Addison-Wesley, Reading, Mass., 1979.

[Imm88] N. Immerman. Nondeterministic space is closed under complementation. SIAM Journal on Computing, 17(5):935-938, 1988.

[Ko89] K. Ko. Relativized polynomial time hierarchies having exactly k levels. SIAM Journal on Computing, 18:392-408, 1989.

[RR94] A. Razborov and S. Rudich. Natural proofs. In Proceedings of the 26th ACM Symposium on the Theory of Computing, pages 204-213. ACM, New York, 1994.

[Sav70] W. Savitch. Relationship between nondeterministic and deterministic tape classes. Journal of Computer and System Sciences, 4:177-192, 1970.

[Sze88] R. Szelepcsényi. The method of forced enumeration for nondeterministic automata. Acta Informatica, 26:279-284, 1988.

[Tod91] S. Toda. PP is as hard as the polynomial-time hierarchy. SIAM Journal on Computing, 20(5):865-877, 1991.
