
Ruiz-Vanoye, Jorge A.; Díaz-Parra, Ocotlán. An Overview of the Theory of Instances Computational Complexity. International Journal of Combinatorial Optimization Problems and Informatics, Vol. 2, No. 2, May-August 2011, pp. 21-27. Morelos, Mexico. Available at: http://www.redalyc.org/src/inicio/ArtPdfRed.jsp?iCve=265219625004

International Journal of Combinatorial Optimization Problems and Informatics. ISSN (printed version): 2007-1558. [email protected]. Mexico.



© International Journal of Combinatorial Optimization Problems and Informatics, Vol. 2, No. 2, May-Aug 2011, pp. 21-27. ISSN: 2007-1558.

An Overview of the Theory of Instances Computational Complexity

Jorge A. Ruiz-Vanoye¹, Ocotlán Díaz-Parra²
¹ Universidad Popular Autónoma del Estado de Puebla, Mexico. [email protected]
² FCAeI, Universidad Autónoma del Estado de Morelos, Mexico. [email protected]

Abstract. The theory of computational complexity is a subject of international interest to scientific, technological, and enterprise organizations. Computational complexity comprises several elements: the classes of problem complexity (P, NP, NP-hard, NP-complete, and others), the complexity of algorithms (a way to classify how efficient an algorithm is by the execution time needed to solve a problem on its worst-case input), the complexity of instances (computational complexity measures that determine how complex the instances of a problem are), and other elements. This work presents a review of the theory of instances computational complexity.

Keywords: Instances Computational Complexity, Complexity of Instances, Computational Complexity.

1. Introduction

The theory of computational complexity is the part of the theory of computation that studies the resources required to solve a problem (Hartmanis and Hopcroft, 1971). The resources most commonly studied are time and space, that is, the amount of memory (Cook, 1983). The theory of computational complexity is a subject of international interest to scientific, technological, and enterprise organizations. Computational complexity comprises several elements: the classes of problem complexity, the complexity of instances, the complexity of algorithms, and others.

The questions of Hilbert (1927) were, in a general way, important for the beginning of computational complexity:
a. Is mathematics complete, in the sense that every mathematical assertion can be either proved or disproved?
b. Is mathematics consistent, in the sense that an assertion and its negation cannot both be proved?
c. Is mathematics decidable, in the sense that there exists a definite method that can be applied to any mathematical assertion and determines whether that assertion is true?

Hilbert's goal was to create a formal mathematical system, complete and consistent, in which all assertions could be stated precisely. His idea was to find an algorithm that determined the truth or falsity of any proposition in the formal system. This problem is called the "Entscheidungsproblem" (German for "decision problem"). Later, Turing (1936) showed that Hilbert's third question (the Entscheidungsproblem) could be attacked with the help of a machine, at least with the abstract concept of a machine (the Turing machine). Turing's intention was to reduce computation to its most concise essential features, describing some basic procedures in a simple way.
This paper presents a review of the theory of instances computational complexity and also mentions some historical research on the complexity of algorithms and the complexity of problems. The structure of this paper is as follows: Section 2 presents the theory of instances computational complexity, and Section 3 gives the final remarks.

2. The Theory of Instances Computational Complexity

Computational complexity comprises several elements: the classes of problem complexity (P, NP, NP-hard, NP-complete, and others), the complexity of algorithms (a way to classify how efficient an algorithm is by the execution time needed to solve a problem on its worst-case input), the complexity of instances (computational complexity measures that determine how complex the instances of a problem are), and other elements.

Accepted Nov 12, 2010.

2.1. Computational Complexity of Problems

The theory of Problems Computational Complexity (PCC) consists of computational complexity classes that determine how complex problems are. PCC is also called Computational Complexity of Problems or Complexity of Problems. The theory introduces several complexity classes (P, NP, NP-hard, NP-complete, and others) for real problems (Garey and Johnson, 1979). Some examples of real problems are (Garey and Johnson, 1979):
a) Network design, which contains problems of minimizing route costs.
b) Storage and retrieval, which contains problems of maximizing the allocation of weights (products) to partitions (storage spaces) to obtain savings in storage expenses, among other problems.
c) Scheduling and priority allocation, which contains problems of scheduling manufacturing processes (saving idle time in manufacturing) and transport vehicle fleets (saving fuel), among other problems.

Some definitions of the problem complexity classes are:
a. P class. The class of languages recognizable by a one-tape deterministic Turing machine in polynomial time (Karp, 1972).
b. NP class. The class of languages recognizable by a one-tape non-deterministic Turing machine in polynomial time (Karp, 1972).
c. NP-equivalent class. The class of problems that are both NP-easy and NP-hard (Jonsson and Bäckström, 1995).
d. NP-easy class. The class of problems recognizable in polynomial time by a Turing machine with an oracle (subroutine). In other words, a problem X is NP-easy if and only if there exists a problem Y in NP such that X is polynomial-time Turing-reducible to Y (Jonsson and Bäckström, 1995).
e. NP-hard class. A problem Q is NP-hard if every problem in NP is reducible to Q (Garey and Johnson, 1979; Papadimitriou and Steiglitz, 1982). It is the class of combinatorial optimization problems at least as hard as the problems in NP.
f. NP-complete class. A language L is NP-complete if L is in NP and Satisfiability ≤p L (Cormen et al., 2001; Karp, 1972; Cook, 1971). It is the class of such problems stated as decision problems.

2.2. Computational Complexity of Algorithms

The theory of Algorithms Computational Complexity (ACC) consists of computational complexity measures (time and space) that relate the size of algorithms or machines to their efficiency. ACC is also called Computational Complexity of Algorithms or Complexity of Algorithms. The computational complexity of algorithms is a way to classify how efficient an algorithm is by the execution time (asymptotic analysis) needed to solve a problem on its worst-case input. It is expressed as O(f(x1, x2, …, xn)), where f is a function of the parameters xi of the instance (Cook, 1983). The most common classes of algorithm complexity are:
• Polynomial-time algorithms (constant time, linear time, quadratic time, cubic time, polynomial time, strongly polynomial time, and weakly polynomial time).
• Sub-linear-time algorithms (logarithmic time, log-logarithmic time, and poly-logarithmic time).
• Super-polynomial-time algorithms (quasi-polynomial time, sub-exponential time, exponential time, factorial time, and double exponential time).

Shannon (1949) proposed the complexity of Boolean circuits (or combinatorial complexity). For this measure, the function f at issue is assumed to transform finite strings of bits into finite strings of bits, and the complexity of f is the size of the smallest Boolean circuit that computes f for all inputs of length n.
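Returning to the NP class defined in Section 2.1: membership in NP hinges on efficient verification, since a "yes" instance has a certificate that a deterministic machine can check in polynomial time. As a minimal sketch (the clause encoding and function name are illustrative assumptions, not taken from the paper), a satisfying assignment for a CNF formula can be checked in time linear in the formula's size:

```python
# Minimal sketch: NP membership of SAT via polynomial-time certificate checking.
# A CNF formula is a list of clauses; each clause is a list of nonzero ints,
# where literal k means "variable k is true" and -k means "variable k is false".

def verify_sat(clauses, assignment):
    """Check a candidate assignment (dict var -> bool) against every clause.

    The check runs in time linear in the formula size, which is what
    places SAT in NP; finding the certificate is the hard part.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))   # satisfying certificate
print(verify_sat(formula, {1: False, 2: True, 3: False}))  # fails the first clause
```

The asymmetry this sketch illustrates is exactly the P versus NP question: verification is cheap, while no polynomial-time procedure is known for producing the certificate.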
Von Neumann (1953) proposed the notions of polynomial-time algorithm and exponential-time algorithm. The time complexity of an algorithm is commonly expressed using big-O notation, which was popularized in computer science by Donald Knuth (1976). Cobham (1964) emphasized the term "intrinsic" and was interested in a machine-independent theory. He asked whether multiplication is more complex than addition, and considered that the question could not be answered until the theory was properly developed. In addition, he defined the class of functions L: those functions on the natural numbers computable in time bounded by a polynomial in the decimal length of the input. In the article of Hartmanis and Stearns (1965), the fundamental notion of a complexity measure, the time of computation on a multitape Turing machine, is introduced.



Stockmeyer (1979) mentions the computational complexity of some problems and common measures of the computational resources used by an algorithm: time (the number of steps executed by the algorithm) and space (the amount of memory used by the algorithm).

2.3. Computational Complexity of Instances

The theory of Instances Computational Complexity (ICC) consists of computational complexity measures (time and space) that determine how complex the instances of a problem are. ICC is also called Computational Complexity of Instances or Complexity of Instances. The computational complexity of instances is a measure of the computational complexity of an individual instance (the specification of particular values for the parameters of a problem (Garey and Johnson, 1979)), that is, of a string x with respect to a set A and a time bound t. The instance complexity ic^t(x : A) is defined as the size of the smallest special-case program for A that runs in time t (Orponen et al., 1994). Shannon (1949) proposed the size of Boolean circuits as a measure of the complexity of functions. A circuit (or straight-line program) over a basis (typically AND, OR, and NOT) is a sequence of instructions, each producing a function by applying an operation from the basis to previously obtained functions. Boolean combinational circuits are built from Boolean combinational elements interconnected by wires. A Boolean combinational element is any circuit element that has a constant number of Boolean inputs and outputs and performs a well-defined function. Boolean values are drawn from the set {0, 1}, where 0 represents FALSE and 1 represents TRUE (Shannon, 1949). In Boolean circuit complexity (combinatorial complexity), a function f transforms finite strings of bits into finite strings of bits by means of a Boolean circuit.
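The straight-line-program model described above can be made concrete with a small evaluator. This is an illustrative sketch (the gate encoding is an assumption of this example): wires 0..n-1 carry the inputs, each gate appends one new wire, and the gate count is the circuit size in Shannon's sense.

```python
# Sketch of the straight-line-program (Boolean circuit) model: a circuit is a
# list of gates over the basis {AND, OR, NOT}; each gate reads earlier wires
# and produces one new wire. The circuit's size is its number of gates.

def eval_circuit(gates, inputs):
    wires = list(inputs)                 # wires 0..n-1 are the circuit inputs
    for op, *args in gates:
        a = wires[args[0]]
        if op == "NOT":
            wires.append(not a)
        elif op == "AND":
            wires.append(a and wires[args[1]])
        elif op == "OR":
            wires.append(a or wires[args[1]])
        else:
            raise ValueError(f"unknown gate {op!r}")
    return wires[-1]                     # value on the output wire

# XOR(x0, x1) as (x0 OR x1) AND NOT(x0 AND x1), a 4-gate circuit.
xor_gates = [
    ("OR", 0, 1),     # wire 2
    ("AND", 0, 1),    # wire 3
    ("NOT", 3),       # wire 4
    ("AND", 2, 4),    # wire 5 (output)
]
print(eval_circuit(xor_gates, [True, False]), len(xor_gates))
```

Shannon's measure for a Boolean function f is the minimum gate count over all circuits computing f, so the 4-gate circuit above is only an upper bound on the complexity of XOR.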
The complexity of f (its combinatorial complexity) is the size of the smallest Boolean circuit that computes f for all inputs of length n. Later, Shannon proposed the Shannon entropy, the central concept of classical information theory (Shannon, 1949). It is an ensemble notion: a measure of the degree of ignorance about which possibility holds in an ensemble with a given a priori probability distribution. Solomonoff (1960) was the first researcher to propose what is now called Kolmogorov complexity. The Kolmogorov complexity (Kolmogorov, 1965; Solomonoff, 1964) provides a measure of the combinatorial complexity of an individual string x. The t-bounded Kolmogorov complexity of a string x is

K^t(x) = min { |M| : M(λ) = x and time_M(λ) ≤ t(|x|) },

where M is a deterministic Turing machine (TM), |M| is the size of M, time_M(λ) is the number of steps that M takes on the empty input λ, t is a given time-bound function evaluated at the length of x, and K(x) denotes the unbounded version of Kolmogorov complexity. Stockmeyer and Meyer (1973) proved (first in Stockmeyer's Ph.D. thesis) an exponential lower bound on the circuit complexity of deciding the weak monadic second-order theory of one successor (WS1S). Circuits are built from binary operations, or 2-input gates, which compute arbitrary Boolean functions. In particular, deciding the truth of logical formulas of length at most 610 in this second-order language requires a circuit containing at least 10^125 gates; even if each gate were the size of a proton, the circuit would not fit in the known universe. Chaitin (1975), Davis (1978), and Gardner (1979) discuss the concept of information content, or information-theoretic computational complexity (in algorithmic information theory the primary concept is the information content of an individual object), which is a measure of how difficult it is to specify or describe how to calculate that object.
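K(x) and its time-bounded variant are uncomputable, so they cannot be evaluated directly. A common computable stand-in, sketched here under the assumption that a general-purpose compressor approximates a short description (zlib is an illustrative choice, not part of the theory), is the compressed length of x:

```python
# K(x) is uncomputable, but any lossless compressor yields an upper-bound
# proxy: x can be reconstructed from its compressed form plus a fixed-size
# decompressor, so compressed length bounds description length from above.
import random
import zlib

def description_length(x: bytes) -> int:
    """Compressed size of x in bytes: a computable upper-bound proxy for K(x)."""
    return len(zlib.compress(x, 9))

regular = b"01" * 500                    # highly patterned string of length 1000
random.seed(0)
irregular = bytes(random.getrandbits(8) for _ in range(1000))  # looks random

print(description_length(regular))       # far below 1000: a short description exists
print(description_length(irregular))     # near 1000: no short description is found
```

The gap between the two values mirrors the Kolmogorov-style distinction the text draws: a regular string has a description much shorter than itself, while a random-looking string does not.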
The information content I(x) of a binary string x is defined to be the size in bits (binary digits) of the smallest program for a canonical universal computer U that calculates x. The joint information I(x; y) of two strings is defined to be the size of the smallest program that makes U calculate both of them. The conditional or relative information I(x/y) of x given y is defined to be the size of the smallest program for U to calculate x from y. This field's fundamental concept is the complexity of a binary string (a string of bits, zeros and ones): the minimum quantity of information needed to define the string. Garey and Johnson (1979) state that an instance of a problem is obtained by specifying particular values for the parameters of the problem. Hartmanis (1983) suggested, on strong intuitive grounds, that individual problem instances can also be inherently hard, independently of any particular algorithm used to decide the problem.
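As with Kolmogorov complexity, the three quantities I(x), I(x; y), and I(x/y) are uncomputable, but a compressor gives rough computable analogues. The correspondences below are an assumption of this sketch, not a claim from the paper: I(x) ≈ C(x), joint information ≈ C(xy), and conditional information ≈ C(xy) − C(y).

```python
# Rough, computable analogues (via zlib) of the information quantities above:
#   I(x)    ~ C(x)              bits needed to describe x alone
#   I(x; y) ~ C(x + y)          bits needed to describe both strings together
#   I(x/y)  ~ C(x + y) - C(y)   extra bits needed for x once y is known
import zlib

def C(s: bytes) -> int:
    return len(zlib.compress(s, 9))

y = b"the quick brown fox jumps over the lazy dog " * 20
x = y                                   # x is fully determined by y

joint = C(x + y)
conditional = C(x + y) - C(y)
print(C(x), joint, conditional)         # conditional is far smaller than C(x)
```

Because y already describes x here, almost no extra information is needed to obtain x from y, which is the intuition behind the conditional quantity I(x/y).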



Sipser (1983) proposed CD-complexity. The CD-complexity of x is the size of the shortest program that accepts x and rejects every other string. This notion is only of interest in the time-bounded setting, since its unbounded version coincides with unbounded Kolmogorov complexity. CD-complexity can be considered a special case of instance complexity. Ko et al. (1986) introduced the notions of instance complexity and hard instances. The t-bounded instance complexity is defined as follows: let A be a recursive set, t a function on the natural numbers, and x any string. Then

IC^t(x | A) = min { |M| : L(M) = A and time_M(x) ≤ t(|x|) },

where M is a deterministic Turing machine (TM), |M| is the size of M, L(M) is the set accepted by M, time_M(x) is the number of steps that M takes on instance x, and t is a given time-bound function evaluated at the length of x. Another contribution of Ko et al. states: an instance x is hard for a problem A if, for any algorithm M computing A that decides x "fast", the size of M must be at least as large as the shortest description of x. Wozniakowski (1985) and Traub (1988) proposed information-based complexity (IBC), which seeks to develop general results about the intrinsic difficulty of solving problems when the available information is partial or approximate, and to apply these results to specific problems. This makes it possible to determine what is meant by an optimal algorithm in many practical situations, and it offers a variety of interesting and sometimes surprising theoretical results. Later, Yao (1988) built on Shannon's classical information theory by considering information that can be reached through a feasible computation (computational information theory). Quantum circuits were first defined by Deutsch (1989), and the study of quantum circuit complexity (quantum computational complexity, where algorithms are performed by quantum-mechanical systems) was initiated by Yao (1993).
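The notion of a "special-case program" for A can be made concrete on a toy scale. In the sketch below, every detail is a simplifying assumption: the universe is finite, candidate programs come from a small hand-written family that may answer True, False, or None (abstain), program size is the length of the source text, and the time bound is ignored. A program is admissible if it never contradicts membership in A, and it decides x if it does not abstain on x.

```python
# Toy model of instance complexity ic^t(x : A) on a finite universe.
# A special-case program may abstain (None) but must never contradict A;
# its "size" is approximated here by the length of its source string.

UNIVERSE = list(range(16))
A = {n for n in UNIVERSE if n % 3 == 0}          # multiples of 3

CANDIDATES = [                                   # (source text, program)
    ("lambda n: n % 3 == 0", lambda n: n % 3 == 0),                    # full decider
    ("lambda n: True if n == 9 else None", lambda n: True if n == 9 else None),
    ("lambda n: None", lambda n: None),                                # decides nothing
]

def consistent(prog):
    """A special-case program may abstain, but must never contradict A."""
    return all(prog(n) in (None, n in A) for n in UNIVERSE)

def instance_complexity(x):
    """Size of the smallest consistent candidate that decides x (None if none)."""
    sizes = [len(src) for src, prog in CANDIDATES
             if consistent(prog) and prog(x) is not None]
    return min(sizes) if sizes else None

print(instance_complexity(9))    # the short full decider wins even on x = 9
```

Because A itself has a short decider here, no instance is hard; in Ko's sense, hard instances arise only when every small program consistent with A is forced to abstain on x.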
Orponen (1990) introduced a measure of the computational complexity of individual instances of a decision problem. The instance complexity of a string x with respect to a set A and time bound t, ic^t(x : A), is the size of the smallest special-case program for A that runs in time t, decides x correctly, and makes no mistakes on other strings. Orponen et al. (1994) proved that if a polynomially self-reducible set has logarithmic instance complexity, then it is in the class P. They proposed measuring the complexity of each individual instance of a decision problem (instance complexity, or ic^t), claiming that any set A not recognizable within a given time bound t has infinitely many strings whose instance complexity with respect to A and t is close to their Kolmogorov complexity. Ruiz-Vanoye et al. (2010) proposed a basic methodology for creating metrics that classify or determine the instance complexity of combinatorial optimization problems (COPs): in other words, how much time the best algorithm requires to solve real problems (complexity of algorithms), and how much time algorithms require on instances with large input size n (complexity of instances). In the theory of computational complexity, the complexity class of the problems that can be solved by algorithms (computational complexity of algorithms) does not specify how complex the instances are (computational complexity of instances). Existing measures of instance complexity are based only on the input size n (Orponen et al., 1994). Other work (Ruiz-Vanoye et al., 2010) proposed further metrics for determining the instance complexity of combinatorial optimization problems, but it does not explain why an instance I1 with input size n1 is more complex than another instance I2 with input size n2 when n2 = n1.
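The point that input size alone cannot rank instances can be illustrated with two knapsack instances of identical size n that differ sharply under simple descriptive metrics. The metrics below (spread of weights, spread of profit/weight ratios, capacity tightness) are illustrative choices for this sketch, not the actual metrics proposed by Ruiz-Vanoye et al. (2010).

```python
# Two COP instances with the same input size n can differ under descriptive
# metrics, so size alone cannot explain why one is harder than the other.
from statistics import pstdev

def knapsack_metrics(weights, profits, capacity):
    ratios = [p / w for p, w in zip(profits, weights)]
    return {
        "n": len(weights),                       # the classical size measure
        "weight_spread": pstdev(weights),        # heterogeneity of the items
        "ratio_spread": pstdev(ratios),          # uniform ratios: greedy does well
        "tightness": sum(weights) / capacity,    # how constrained the knapsack is
    }

# Same n = 5: a uniform instance vs. a heterogeneous one.
easy = knapsack_metrics([10, 10, 10, 10, 10], [5, 5, 5, 5, 5], capacity=50)
messy = knapsack_metrics([1, 9, 23, 4, 13], [7, 2, 19, 3, 11], capacity=25)
print(easy)
print(messy)
```

Both instances report n = 5, yet every other metric separates them, which is exactly the gap (n2 = n1 but different difficulty) that the instance-complexity metrics discussed above aim to capture.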

3. Final Remarks and Discussion

Computational complexity comprises several elements: the classes of problem complexity, the complexity of algorithms, the complexity of instances, and others. It is necessary to clarify that computational complexity is not the same as the computational complexity of algorithms. Many researchers state that computational complexity equals the complexity of algorithms (asymptotic analysis), but in this paper we argue that computational complexity comprises several elements: the classes of problem complexity, the complexity of algorithms, and the complexity of instances. Table 1 summarizes the contributions to the computational complexity of algorithms (Section 2.2) and to the complexity of instances (Section 2.3).


Table 1. Contributions to computational complexity (algorithms and instances)

Research | Computational Complexity of Algorithms | Computational Complexity of Instances
Shannon (1949, 1949b) | Complexity of Boolean combinational circuits | Complexity of inputs for Boolean combinational circuits
Von Neumann (1953) | Polynomial-time algorithm and exponential-time algorithm |
Solomonoff (1960) | | Measure for the combinatorial complexity of an individual string x
Cobham (1964) | Is multiplication more complex than addition? |
Solomonoff (1964) | | Kolmogorov complexity
Hartmanis and Stearns (1965) | Computational complexity of algorithms |
Kolmogorov (1965) | | t-bounded Kolmogorov complexity of a string x
Stockmeyer and Meyer (1973) | An exponential lower bound on circuit complexity |
Chaitin (1975) | | Complexity of a binary string; information-theoretic computational complexity
Knuth (1976) | Popularized big-O (and the related Omega and Theta notations) in computer science |
Davis (1978) | | Information-theoretic computational complexity
Gardner (1979) | | Complexity of a binary string
Stockmeyer (1979) | Measures of the computational resources used by an algorithm |
Garey and Johnson (1979) | NP-completeness | Definition of instance
Hartmanis (1983) | | Individual problem instances can be inherently hard
Sipser (1983) | | CD-complexity
Wozniakowski (1985) | | Information-based complexity (IBC)
Ko et al. (1986) | | Instance complexity and hard instances of a computational problem
Traub (1988) | | Information-based complexity (IBC)
Yao (1988) | | Computational information theory
Deutsch (1989) | Quantum circuits |
Orponen (1990) | | Instance complexity of NP-hard problems
Yao (1993) | Quantum computational complexity |
Orponen et al. (1994) | | Logarithmic instance complexity and instance complexity
Ruiz-Vanoye et al. (2010) | | Methodology to determine the instance complexity of combinatorial optimization problems



References

Chaitin, G.J.: Randomness and mathematical proof. Scientific American, Vol. 232, No. 5 (1975) 47-52.
Chaitin, G.J.: Algorithmic information theory. Encyclopedia of Statistical Sciences, Vol. 1, Wiley, New York (1982) 38-41.
Cobham, A.: The intrinsic computational difficulty of functions. Proc. 1964 International Congress for Logic, Methodology, and Philosophy of Science, Y. Bar-Hillel, Ed., North-Holland, Amsterdam (1965) 24-30.
Cook, S.A.: The complexity of theorem proving procedures. Proceedings of the 3rd ACM Symposium on Theory of Computing (1971) 151-158.
Cook, S.A.: An overview of computational complexity. Communications of the ACM, Vol. 26, No. 6 (1983) 401-408.
Cormen, T.H., Leiserson, C.E., Rivest, R.L. and Stein, C.: Introduction to Algorithms. MIT Press (2001).
Davis, M.: What is a computation? Mathematics Today: Twelve Informal Essays, Springer-Verlag, New York (1978) 241-267.
Deutsch, D.: Quantum computational networks. Proc. Roy. Soc. Lond. A 425 (1989) 73-90.
Gardner, M.: An introduction to algorithmic information theory emphasizing the fundamental role played by Ω. Scientific American, Vol. 241, No. 5 (1979) 20-34.
Garey, M.R. and Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York (1979).
Hartmanis, J. and Hopcroft, J.E.: An overview of the theory of computational complexity. Journal of the ACM, Vol. 18, No. 3 (1971) 444-475.
Hartmanis, J. and Stearns, R.E.: On the computational complexity of algorithms. Trans. Amer. Math. Soc. 117 (1965) 285-306.
Hartmanis, J.: Generalized Kolmogorov complexity and the structure of feasible computations. Proceedings of the 24th Annual Symposium on Foundations of Computer Science, IEEE (1983) 439-445.
Hilbert, D.: Mathematical problems. Bull. Amer. Math. Soc., Vol. 33, No. 4 (1927) 433-434.
Jonsson, P. and Bäckström, C.: Complexity results for state-variable planning under mixed syntactical and structural restrictions. Proceedings of the Sixth International Conference on Artificial Intelligence: Methodology, Systems, Applications (1995) 205-213.
Karp, R.M.: Reducibility among combinatorial problems. Complexity of Computer Computations, R.E. Miller and J.W. Thatcher, Eds., Springer (1972) 85-104.
Knuth, D.E.: Big Omicron and big Omega and big Theta. ACM SIGACT News, Vol. 8, No. 2 (1976).
Ko, K., Orponen, P., Schöning, U. and Watanabe, O.: What is a hard instance of a computational problem? Structure in Complexity Theory, Lecture Notes in Computer Science 223 (1986) 197-217.
Kolmogorov, A.N.: Three approaches to the quantitative definition of information. Problems of Information Transmission 1 (1965) 1-7.
Orponen, P.: On the instance complexity of NP-hard problems. Proc. Structure in Complexity Theory, 5th Annual Conference (1990) 20-27.
Orponen, P., Ko, K., Schöning, U. and Watanabe, O.: Instance complexity. Journal of the ACM, Vol. 41, No. 1 (1994) 96-121.
Papadimitriou, C. and Steiglitz, K.: Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall (1982).
Ruiz-Vanoye, J.A., Díaz-Parra, O., Pérez-Ortega, J., Pazos R., R.A., Reyes Salgado, G. and González-Barbosa, J.J.: Chapter 19: Complexity of instances for combinatorial optimization problems. In: Computational Intelligence & Modern Heuristics, Al-Dahoud Ali, Ed., IN-TECH Education and Publishing (2010) 319-330.
Shannon, C.E. and Weaver, W.: The Mathematical Theory of Communication. University of Illinois Press (1949).
Shannon, C.E.: The synthesis of two-terminal switching circuits. Bell System Technical Journal 28 (1949) 59-98.
Sipser, M.: A complexity theoretic approach to randomness. Proc. 15th ACM Symposium on Theory of Computing (1983) 330-335.
Solomonoff, R.: A preliminary report on a general theory of inductive inference. Report V-131, Zator Co., Cambridge, MA (1960).
Solomonoff, R.: A formal theory of inductive inference. Information and Control, Part I, Vol. 7, No. 1 (1964) 1-22, and Part II, Vol. 7, No. 2 (1964) 224-254.
Stockmeyer, L.J. and Meyer, A.R.: Word problems requiring exponential time. Proceedings of the Fifth Annual ACM Symposium on Theory of Computing, Austin, Texas (1973) 1-9.
Stockmeyer, L.J.: Classifying the computational complexity of problems. Research Report RC 7606, Mathematical Sciences Dept., IBM T.J. Watson Research Center, Yorktown Heights, N.Y. (1979).
Traub, J.F., Wasilkowski, G.W. and Woźniakowski, H.: Information-Based Complexity. Academic Press Professional, Inc. (1988).
Traub, J.F.: Introduction to information-based complexity. In: Complexity in Information Theory, Y.S. Abu-Mostafa, Ed., Springer-Verlag, New York (1988) 62-76.
Turing, A.M.: On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., ser. 2, 43 (1937) 544-546.
Von Neumann, J.: A certain zero-sum two-person game equivalent to the optimal assignment problem. Contributions to the Theory of Games II, H.W. Kuhn and A.W. Tucker, Eds., Princeton Univ. Press, Princeton, N.J. (1953).
Wozniakowski, H.: A survey of information-based complexity. Journal of Complexity, Vol. 1, No. 1 (1985) 11-44.
Yao, A.C.: Theory and applications of trapdoor functions (extended abstract). Proc. 23rd IEEE Symposium on Foundations of Computer Science, IEEE Computer Society (1982) 80-91.
Yao, A.C.: Quantum circuit complexity. Proceedings of the 34th Annual IEEE Symposium on Foundations of Computer Science, IEEE Computer Society Press (1993) 352-361.


Ruiz-Vanoye, Jorge A.; Díaz-Parra, Ocotlán An Overview of the Theory of Instances Computational Complexity International Journal of Combinatorial Optimization Problems and Informatics, vol. 2, núm. 2, mayo-agosto, 2011, pp. 21-27 International Journal of Combinatorial Optimization Problems and Informatics Morelos, México Available in: http://www.redalyc.org/src/inicio/ArtPdfRed.jsp?iCve=265219625004

International Journal of Combinatorial Optimization Problems and Informatics ISSN (Printed Version): 2007-1558 [email protected] International Journal of Combinatorial Optimization Problems and Informatics México

How to cite

Complete issue

More information about this article

Journal's homepage

www.redalyc.org Non-Profit Academic Project, developed under the Open Acces Initiative

© International Journal of Combinatorial Optimization Problems and Informatics, Vol. 2, No. 2, May-Aug 2011, pp. 21-27. ISSN: 2007-1558.

An Overview of the Theory of Instances Computational Complexity 1

2

Jorge A. Ruiz-Vanoye , Ocotlán Díaz-Parra 1 Universidad Popular Autónoma del Estado de Puebla, Mexico. [email protected] 2 FCAeI. Universidad Autónoma del Estado de Morelos, Mexico. [email protected] Abstract. The theory of the computational complexity is a subject of international interest of scientific, technological and enterprise organizations. The computational complexity contains diverse elements such as the classes of problems complexity (P, NP, NP-hard and NP-complete, and others), complexity of algorithms (it is a way to classify how efficient is an algorithm by means the execution time to solve a problem with the worst-case input), complexity of instances (it is computational complexity measures to determine the complex of the problems instances), and others elements. This work is focused on presenting a review of the theory of instances computational complexity. KeyWords: Instances Computational Complexity, Complexity of instances, computational complexity.

1. Introduction The theory of the computational complexity is the part of the theory of the computation that studies the resources required during the calculation to solve a problem (Hartmanis and Hopcroft, 1971). The resources commonly studied are the time and the space (amount of memory) (Cook, 1983). The theory of the computational complexity is a subject of international interest of scientific, technological and enterprise organizations. The computational complexity contains diverse elements such as the classes of problems complexity, complexity of instances, complexity of algorithms, and others elements. The questions of Hilbert (1927), in a general way, were important for the beginning of the computational complexity: a. Are complete the mathematics, in the sense that it can verify or not each mathematical asseveration? b. Are consistent the mathematics, in the sense that it cannot simultaneously try an asseveration and its negation? c. Are decidable the mathematics, in the sense that a defined method exists that can be applied to any mathematical asseveration, and that determines if this asseveration is true. The goal of Hilbert was to create a formal mathematical system (complete and consistent), in that all the assertions could consider accurately. Its idea was to find an algorithm that determined the true or false of any proposal in the formal system. This problem is called “Entscheidungsproblem” (decision problem, in the German language). Later, Turing (1936) mentions that the third question of Hilbert (the Entscheidungsproblem) could be attacked with the help of a machine, at least with the abstract concept of machine (Turing machine). The intention of Turing was to reduce the calculations to its more concise essential characteristics, describing of a simple way some basic procedures. 
This paper is focused on presenting a review of the theory of instances computational complexity and in addition mentions some historical research of complexity of algorithms and complexity of problems. The structure of this paper is as follow: In section 2 are the theory of instances computational complexity, and finally in section 3 are the final remarks.

2. The Theory of Instances Computational Complexity The computational complexity contains diverse elements such as the classes of problems complexity (P, NP, NP-hard and NP-complete, and others), complexity of algorithms (it is a way to classify how efficient is an algorithm by means the execution time to solve a problem with the worst-case input), complexity of instances (it is computational complexity measures to determine the complex of the problems instances), and others elements.

Accepted Nov 12, 2010.

Ruiz-Vanoye and Díaz-Parra / An Overview of the Theory of Instances Computational Complexity. IJCOPI Vol. 2, No. 2, May-Aug 2011, pp. 21-27. ISSN: 2007-1558.

2.1. Computational Complexity of Problems
The theory of Problems Computational Complexity (PCC) consists of computational complexity classes that determine how complex problems are. The PCC may also be called Computational Complexity of Problems or Complexity of Problems. The theory introduces diverse complexity classes (P, NP, NP-hard, NP-complete, and others) for real problems (Garey and Johnson, 1979). Some examples of real problems are (Garey and Johnson, 1979):
a) Network design, which contains problems of minimizing route costs.
b) Storage and retrieval, which contains problems of maximizing the allocation of weights (products) to partitions (storage spaces) to obtain savings in storage expenses, among other problems.
c) Scheduling and allocation of priorities, which contains problems of scheduling manufacturing processes (saving idle time of the manufacturing processes), of transport vehicle fleets (saving gasoline), and other problems.
Some definitions of the problem complexity classes are:
a. P class. It is the class of languages recognizable by a deterministic one-tape Turing machine in polynomial time (Karp, 1972).
b. NP class. It is the class of languages recognizable by a non-deterministic one-tape Turing machine in polynomial time (Karp, 1972).
c. NP-equivalent class. It is the class of problems that are both NP-easy and NP-hard (Jonsson & Bäckström, 1995).
d. NP-easy class. It is the class of problems recognizable in polynomial time by a Turing machine with an oracle (subroutine). In other words, a problem X is NP-easy if and only if there exists a problem Y in NP such that X is Turing-reducible to Y in polynomial time (Jonsson & Bäckström, 1995).
e. NP-hard class.
A problem Q is NP-hard if every problem in NP is reducible to Q (Garey & Johnson, 1979; Papadimitriou & Steiglitz, 1982). It is the class of combinatorial optimization problems at least as hard as those in NP.
f. NP-complete class. A language L is NP-complete if L is in NP and Satisfiability ≤p L (Cormen et al., 2001; Karp, 1972; Cook, 1971). It is the class of problems classified as decision problems.

2.2. Computational Complexity of Algorithms
The theory of Algorithms Computational Complexity (ACC) consists of computational complexity measures (time and space) that determine the relations between the size of algorithms or machines and their efficiency. The ACC may also be called Computational Complexity of Algorithms or Complexity of Algorithms. The computational complexity of algorithms is a way to classify how efficient an algorithm is by means of the execution time (asymptotic analysis) needed to solve a problem on its worst-case input. It is expressed by O(f(x1, x2, ..., xn)), where f is a function of the parameters xi of the instance (Cook, 1983). The most common classes of complexity of algorithms are:
• Polynomial time algorithms (constant time, linear time, quadratic time, cubic time, polynomial time, strongly polynomial time, weakly polynomial time, super-polynomial time and quasi-polynomial time).
• Sub-linear time algorithms (logarithmic time, log-logarithmic time, and poly-logarithmic time).
• Super-polynomial time algorithms (sub-exponential time, exponential time, factorial time, and double exponential time).
Shannon (1949) proposed the complexity of Boolean circuits (or combinatorial complexity). For this measure, Shannon supposes that the function f in question transforms finite strings of bits (a) into finite strings of bits (b), and the complexity of f is the size of the smallest Boolean circuit that computes f for all inputs of length n.
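Shannon's circuit-size measure can be made concrete with a small sketch (an illustration, not from the surveyed works): a Boolean circuit written as a straight-line program over the basis {AND, OR, NOT}, where each instruction applies one gate to previously computed values and the circuit size is the number of gates.

```python
# Hypothetical sketch: a Boolean circuit as a straight-line program over
# the basis {AND, OR, NOT}. Inputs occupy the first wire positions; each
# gate appends one new value. Circuit size = number of instructions.

def eval_circuit(program, inputs):
    values = list(inputs)                 # wires 0..n-1 hold the inputs
    for op, *args in program:
        if op == "NOT":
            values.append(1 - values[args[0]])
        elif op == "AND":
            values.append(values[args[0]] & values[args[1]])
        elif op == "OR":
            values.append(values[args[0]] | values[args[1]])
    return values[-1]                     # last wire is the output

# XOR(x0, x1) built from 4 gates: (x0 OR x1) AND NOT(x0 AND x1)
xor_prog = [("OR", 0, 1), ("AND", 0, 1), ("NOT", 3), ("AND", 2, 4)]
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", eval_circuit(xor_prog, (a, b)))
```

The size of this particular program is 4 gates; the circuit complexity of XOR is the size of the smallest such program over the chosen basis.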
Von Neumann (1953) proposed the definitions of polynomial time algorithm and exponential time algorithm. The time complexity of an algorithm is commonly expressed using big O notation, which was popularized in computer science by Donald Knuth (1976). Cobham (1964) emphasized the term "intrinsic" and was interested in a machine-independent theory. He asked whether multiplication is more complex than addition, and considered that the question could not be answered until the theory was properly developed. In addition, he defined the class of functions L: those functions on the natural numbers computable in time bounded by a polynomial in the decimal length of the input. In the article of Hartmanis and Stearns (1965), the fundamental notion of complexity measure, the time of computation on a multitape Turing machine, is introduced.
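The worst-case growth rates that these definitions classify can be illustrated with a hypothetical step-count experiment (the functions and counts below are illustrative, not from the paper): counting basic operations makes the difference between O(n) and O(n^2) behavior visible directly.

```python
# Illustrative step counting (not from the paper): the same input size n
# costs n steps for a linear scan in the worst case, but n(n-1)/2 steps
# for an all-pairs comparison, i.e. O(n) versus O(n^2).

def linear_search_steps(xs, target):
    """Worst-case O(n): scan until the target is found."""
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def pairwise_compare_steps(xs):
    """O(n^2): one step per unordered pair, n(n-1)/2 in total."""
    steps = 0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            steps += 1
    return steps

n = 100
data = list(range(n))
print(linear_search_steps(data, n - 1))   # worst case: 100 steps
print(pairwise_compare_steps(data))       # 4950 steps = 100*99/2
```

Doubling n roughly doubles the first count but quadruples the second, which is exactly the asymptotic distinction big O notation records.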



Stockmeyer (1979) mentions the computational complexity of some problems and the common measures of the computational resources used by an algorithm: time (the number of steps executed by the algorithm) and space (the amount of memory used by the algorithm).

2.3. Computational Complexity of Instances
The theory of Instances Computational Complexity (ICC) consists of computational complexity measures (time and space) that determine how complex the instances of a problem are. The ICC may also be called Computational Complexity of Instances or Complexity of Instances. The computational complexity of instances is a measure of the computational complexity of an individual instance (the specification of particular values for the parameters of a problem (Garey and Johnson, 1979)), i.e., of a string x with respect to a set A and a time bound t (Orponen et al., 1994). The instance complexity ic^t(x : A) is defined as the size of the smallest special-case program for A that runs in time t (Orponen et al., 1994). Shannon (1949) proposed the size of Boolean circuits as a measure of the complexity of functions. A circuit (or straight-line program) over a basis (typically AND, OR and NOT) is a sequence of instructions, each producing a function by applying an operation from the basis to previously obtained functions. Boolean combinational circuits are built from Boolean combinational elements that are interconnected by wires. A Boolean combinational element is any circuit element that has a constant number of Boolean inputs and outputs and that performs a well-defined function. Boolean values are drawn from the set {0, 1}, where 0 represents FALSE and 1 represents TRUE (Shannon, 1949). The Boolean circuit complexity or combinatorial complexity concerns a function f that transforms finite strings of bits into finite strings of bits by means of a Boolean circuit.
The complexity of f (the combinatorial complexity of algorithms) is the size of the smallest Boolean circuit that computes f for all inputs of length n (the computational complexity of instances). Later, Shannon proposed the Shannon entropy, the central concept of classical information theory (Shannon, 1949). It is an ensemble notion: a measure of the degree of ignorance concerning which possibility holds in an ensemble with a given a priori probability distribution. Solomonoff (1960) was the first researcher to propose the Kolmogorov complexity. The Kolmogorov complexity (Kolmogorov, 1965; Solomonoff, 1964) provides a measure of the combinatorial complexity of an individual string x. The t-bounded Kolmogorov complexity of a string x is K^t(x) = min { |M| : M(λ) = x and time_M(λ) ≤ t(|x|) }, where K(x) denotes the unbounded version of Kolmogorov complexity, M is a deterministic Turing machine (TM), |M| is the size of M, time_M(λ) is the number of steps that M takes on input λ, and t(|x|) is a given time function t applied to the length of the instance x. Stockmeyer and Meyer (1973) proved (first in the Ph.D. thesis of Stockmeyer) an exponential lower bound on the circuit complexity of deciding the weak monadic second-order theory of one successor (WS1S). Circuits are built from binary operations, or 2-input gates, which compute arbitrary Boolean functions. In particular, deciding the truth of logical formulas of length at most 610 in this second-order language requires a circuit containing at least 10^125 gates; so even if each gate were the size of a proton, the circuit would not fit in the known universe. Chaitin (1975), Davis (1978), and Gardner (1979) mention the concept of information content or information-theoretic computational complexity (in algorithmic information theory the primary concept is the information content of an individual object), which is a measure of how difficult it is to specify or describe how to calculate that object.
The information content I(x) of a binary string x is defined to be the size in bits (binary digits) of the smallest program for a canonical universal computer U to calculate x. The joint information I(x; y) of two strings is defined to be the size of the smallest program that makes U calculate both of them. The conditional or relative information I(x/y) of x given y is defined to be the size of the smallest program for U to calculate x from y. This field's fundamental concept is the complexity of a binary string (a string of bits, zeros and ones): the minimum quantity of information needed to define the string. Garey and Johnson (1979) mention that an instance of a problem is obtained by means of the specification of particular values for the parameters of the problem. Hartmanis (1983), guided by a strong intuition, suggested that individual problem instances can also be inherently hard, hard independently of any particular algorithm used to decide the problem.
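The program-size measures above are uncomputable in general, but an upper bound is easy to demonstrate (this sketch is an illustration, not part of the surveyed work): any general-purpose compressor gives one, since the compressed form of x plus a fixed decompressor is itself a program that prints x. Regular strings fall far below their own length under this bound, while random-looking strings do not.

```python
# Illustrative only: K(x) is uncomputable, but len(zlib.compress(x)) is an
# upper bound on the size of "a program that prints x" (compressed data
# plus a fixed-size decompressor). Regular data compresses; noise does not.
import zlib
import random

def compressed_size(x: bytes) -> int:
    """Upper-bound proxy for the description length of x, in bytes."""
    return len(zlib.compress(x, 9))

regular = b"ab" * 500                                      # 1000 highly regular bytes
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # 1000 random-looking bytes

print(len(regular), compressed_size(regular))   # shrinks dramatically
print(len(noisy), compressed_size(noisy))       # stays near 1000 bytes
```

Both strings have the same length n = 1000, yet their (approximate) description complexities differ by orders of magnitude, which is the distinction the Kolmogorov measure formalizes.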



Sipser (1983) proposed the CD-complexity. The CD-complexity of x is the size of the shortest program that accepts x and rejects every other string. This notion is only of interest in the time-bounded setting, since its unbounded version coincides with unbounded Kolmogorov complexity. CD-complexity can be considered a special case of instance complexity. Ko et al. (1986) introduced the notion of instance complexity and the notion of hard instances. For the instance complexity, or t-bounded instance complexity, let A be a recursive set, t a function on the natural numbers, and x any string. The t-bounded instance complexity is ic^t(x : A) = min { |M| : L(M) = A and time_M(x) ≤ t(|x|) }, where ic^t(x : A) denotes the t-bounded instance complexity, M is a deterministic Turing machine (TM), |M| is the size of M, L(M) is the set accepted by M, time_M(x) is the number of steps that M takes on instance x, and t(|x|) is a given time function t applied to the length of the instance x. Another contribution of Ko mentions that an instance x is hard for a problem A if, for any algorithm M computing A that decides x "fast", the size of M must be at least as large as the shortest description of x. Wozniakowski (1985) and Traub (1988) proposed information-based complexity (IBC), which seeks to develop general results about the intrinsic difficulty of solving problems where the available information is partial or approximate, and to apply these results to specific problems. This allows determining what is meant by an optimal algorithm in many practical situations, and offers a variety of interesting and sometimes surprising theoretical results. Later, Yao (1988) built on Shannon's classical information theory by considering information that can be reached through a feasible computation (computational information theory). Quantum circuits were first defined by Deutsch (1989), and the study of quantum circuit complexity (quantum computational complexity, where algorithms are performed by quantum-mechanical systems) was initiated by Yao (1993).
Orponen (1990) introduced a measure of the computational complexity of individual instances of a decision problem. The instance complexity of a string x with respect to a set A and a time bound t, ic^t(x : A), is the size of the smallest special-case program for A that runs in time t, decides x correctly, and makes no mistakes on other strings. Orponen et al. (1994) proved that if a polynomially self-reducible set has logarithmic instance complexity then it is in the class P. They propose to measure, via the instance complexity ic^t, the complexity of each individual instance of a decision problem, claiming that any set A not recognizable within a given time bound t will have infinitely many strings whose instance complexity with respect to A and t is close to their Kolmogorov complexity. Ruiz-Vanoye et al. (2010) proposed a basic methodology for creating metrics that permit classifying, or determining, the complexity of instances of combinatorial optimization problems (Complexity of Instances for Combinatorial Optimization Problems, COPs): in other words, how much time the best algorithm requires to solve real problems (complexity of algorithms), and how much time algorithms require on larger instances with big input size n (complexity of instances). In the theory of computational complexity, the complexity class of the problems that can be solved by algorithms (computational complexity of algorithms) does not specify how complex the instances are (computational complexity of instances). The existing measures of complexity for instances are based only on the metric of size, or input size n (Orponen et al., 1994). In other works (Ruiz-Vanoye et al., 2010) further metrics were proposed for determining the complexity of instances of combinatorial optimization problems, but they do not explain why an instance I1 with input size n1 is more complex than another instance I2 with input size n2, where n2 = n1.
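The closing question above, why one instance can be harder than another of exactly the same size, can be illustrated with a toy experiment (hypothetical, not from the cited works): an exhaustive subset-sum solver does wildly different amounts of work on two instances of identical input size n.

```python
# Toy illustration (not from the cited works): two subset-sum instances
# with the same input size n = 15 demand very different work from the
# same exhaustive solver, so size alone does not capture instance
# complexity.
from itertools import combinations

def subset_sum_work(weights, target):
    """Return (found, subsets_examined) for an exhaustive search."""
    examined = 0
    for r in range(1, len(weights) + 1):
        for combo in combinations(weights, r):
            examined += 1
            if sum(combo) == target:
                return True, examined
    return False, examined

easy = list(range(1, 16))            # n = 15; a solution appears immediately
hard = [2 ** i for i in range(15)]   # n = 15; target is unreachable

print(subset_sum_work(easy, 1))      # (True, 1): first subset already matches
print(subset_sum_work(hard, 2**16))  # exhausts all 2^15 - 1 = 32767 subsets
```

Both instances have the same n, yet the first is decided after examining one subset while the second forces the solver through the entire search space, echoing the point that input size n by itself does not determine instance complexity.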

3. Final Remarks and Discussion
Computational complexity contains diverse elements, such as the complexity classes of problems, the complexity of algorithms, the complexity of instances, and other elements. It is necessary to clarify that computational complexity is not equal to the computational complexity of algorithms. Many investigators state that computational complexity is equal to the complexity of algorithms (asymptotic analysis), but in this paper we maintain that computational complexity contains diverse elements: the complexity classes of problems, the complexity of algorithms, and the complexity of instances. Table 1 lists the contributions to the computational complexity of algorithms (subsection 2.2) and to the complexity of instances (subsection 2.3).


Table 1. Contributions of Computational Complexity (algorithms and instances)

Research | Computational Complexity of Algorithms | Computational Complexity of Instances
Shannon (1949, 1949b) | Complexity of Boolean combinational circuits | Complexity of inputs for Boolean combinational circuits
Von Neumann (1953) | Polynomial time algorithm and exponential time algorithm | —
Solomonoff (1960) | — | Measure for the combinatorial complexity of an individual string x
Cobham (1964) | Is multiplication more complex than addition? | —
Solomonoff (1964) | — | Kolmogorov complexity
Hartmanis and Stearns (1965) | Computational complexity of algorithms | —
Kolmogorov (1965) | — | t-bounded Kolmogorov complexity of a string x
Stockmeyer & Meyer (1973) | An exponential lower bound on circuit complexity | —
Chaitin (1975) | — | The complexity of a binary string; information-theoretic computational complexity
Knuth (1976) | Popularized big-O in computer science, with the related Omega and Theta notations | —
Davis (1978) | — | Information-theoretic computational complexity
Gardner (1979) | — | The complexity of a binary string
Stockmeyer (1979) | Measures of the computational resources used by an algorithm | —
Garey & Johnson (1979) | NP-completeness | Definition of instance
Hartmanis (1983) | — | Suggested that individual problem instances can be inherently hard
Sipser (1983) | — | CD-complexity
Wozniakowski (1985) | Information-based complexity (IBC) | —
Ko (1986) | — | Notion of instance complexity and of hard instances of a computational problem
Traub (1988) | Information-based complexity (IBC) | —
Yao (1988) | Computational information theory | —
Deutsch (1989) | Quantum circuits | —
Orponen (1990) | — | Instance complexity of NP-hard problems
Yao (1993) | Quantum computational complexity | —
Orponen (1994) | — | Logarithmic instance complexity and instance complexity
Ruiz-Vanoye (2010) | — | Methodology to determine the complexity of instances of combinatorial optimization problems



References
Chaitin, G.J.: Randomness and mathematical proof. Scientific American, Vol. 232, No. 5 (1975) 47-52.
Chaitin, G.J.: Algorithmic information theory. Encyclopedia of Statistical Sciences, Vol. 1, Wiley, New York (1982) 38-41.
Cobham, A.: The intrinsic computational difficulty of functions. Proc. 1964 International Congress for Logic, Methodology, and Philosophy of Science, Y. Bar-Hillel, Ed., North-Holland, Amsterdam (1965) 24-30.
Cook, S.A.: The complexity of theorem proving procedures. Proceedings of the 3rd ACM Symposium on Theory of Computing (1971) 151-158.
Cook, S.A.: An overview of computational complexity. Communications of the ACM, Vol. 26, No. 6 (1983) 401-408.
Cormen, T.H., Leiserson, C.E., Rivest, R.L. and Stein, C.: Introduction to Algorithms. MIT Press (2001).
Davis, M.: What is a computation? Mathematics Today: Twelve Informal Essays. Springer-Verlag, New York (1978) 241-267.
Deutsch, D.: Quantum computational networks. Proc. Roy. Soc. Lond. A 425 (1989) 73-90.
Gardner, M.: An introduction to algorithmic information theory emphasizing the fundamental role played by Ω. Scientific American, Vol. 241, No. 5 (1979) 20-34.
Garey, M.R. and Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York (1979).
Hartmanis, J. and Hopcroft, J.E.: An overview of the theory of computational complexity. Journal of the ACM, Vol. 18, No. 3 (1971) 444-475.
Hartmanis, J. and Stearns, R.E.: On the computational complexity of algorithms. Trans. Amer. Math. Soc. 117 (1965) 285-306.
Hartmanis, J.: Generalized Kolmogorov complexity and the structure of feasible computations. Proceedings of the 24th Annual Symposium on Foundations of Computer Science, IEEE (1983) 439-445.
Hilbert, D.: Mathematical problems. Bull. Amer. Math. Soc., Vol. 33, No. 4 (1927) 433-434.
Jonsson, P. and Bäckström, C.: Complexity results for state-variable planning under mixed syntactical and structural restrictions. Proceedings of the Sixth International Conference on Artificial Intelligence: Methodology, Systems, Applications (1995) 205-213.
Karp, R.M.: Reducibility among combinatorial problems. Complexity of Computer Computations, R.E. Miller and J.W. Thatcher, Eds., Springer (1972) 85-104.
Knuth, D.E.: Big Omicron and big Omega and big Theta. ACM SIGACT News, Vol. 8, No. 2 (1976).
Ko, K., Orponen, P., Schöning, U. and Watanabe, O.: What is a hard instance of a computational problem? Structure in Complexity Theory, Lecture Notes in Computer Science 223 (1986) 197-217.
Kolmogorov, A.N.: Three approaches to the quantitative definition of information. Problems of Information Transmission 1 (1965) 1-7.
Orponen, P.: On the instance complexity of NP-hard problems. Proc. Structure in Complexity Theory, 5th Annual Conference (1990) 20-27.
Orponen, P., Ko, K., Schöning, U. and Watanabe, O.: Instance complexity. Journal of the ACM, Vol. 41, No. 1 (1994) 96-121.
Papadimitriou, C. and Steiglitz, K.: Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall (1982).
Ruiz-Vanoye, J.A., Díaz-Parra, O., Pérez-Ortega, J., Pazos R., R.A., Reyes Salgado, G. and González-Barbosa, J.J.: Chapter 19: Complexity of instances for combinatorial optimization problems. In: Computational Intelligence & Modern Heuristics, Al-Dahoud Ali, Ed., IN-TECH Education and Publishing (2010) 319-330.
Shannon, C.E. and Weaver, W.: The Mathematical Theory of Communication. University of Illinois Press (1949).
Shannon, C.E.: The synthesis of two-terminal switching circuits. Bell System Technical Journal 28 (1949) 59-98.
Sipser, M.: A complexity theoretic approach to randomness. Proc. 15th ACM Symposium on Theory of Computing (1983) 330-335.
Solomonoff, R.: A preliminary report on a general theory of inductive inference. Report V-131, Zator Co., Cambridge, MA (1960).
Solomonoff, R.: A formal theory of inductive inference. Information and Control, Part I, Vol. 7, No. 1 (1964) 1-22, and Part II, Vol. 7, No. 2 (1964) 224-254.
Stockmeyer, L.J. and Meyer, A.R.: Word problems requiring exponential time. Proceedings of the Fifth Annual ACM Symposium on Theory of Computing, Austin, Texas, United States (1973) 1-9.
Stockmeyer, L.J.: Classifying the computational complexity of problems. Research Report RC 7606, Math. Sciences Dept., IBM T.J. Watson Research Center, Yorktown Heights, N.Y. (1979).
Traub, J.F., Wasilkowski, G.W. and Woźniakowski, H.: Information-Based Complexity. Academic Press Professional, Inc. (1988).
Traub, J.F.: Introduction to information-based complexity. In: Complexity in Information Theory, Y. S. Abu-Mostafa, Ed., Springer-Verlag, New York (1988) 62-76.
Turing, A.M.: On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., Ser. 2, 42 (1936) 230-265; correction, 43 (1937) 544-546.
Von Neumann, J.: A certain zero-sum two-person game equivalent to the optimal assignment problem. Contributions to the Theory of Games II, H. W. Kuhn and A. W. Tucker, Eds., Princeton Univ. Press, Princeton, N.J. (1953).
Wozniakowski, H.: A survey of information-based complexity. Journal of Complexity, Vol. 1, No. 1 (1985) 11-44.
Yao, A.C.: Theory and applications of trapdoor functions (extended abstract). Proc. 23rd IEEE Symposium on Foundations of Computer Science, IEEE Computer Society, Los Angeles (1982) 80-91.
Yao, A.C.: Quantum circuit complexity. Proceedings of the 34th Annual IEEE Symposium on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, Calif. (1993) 352-361.
