Busy beaver sets and the degrees of unsolvability

1981 ◽  
Vol 46 (3) ◽  
pp. 460-474 ◽  
Author(s):  
Robert P. Daley

In this paper we show how some of the finite injury priority arguments can be simplified by making explicit use of the primitive notions of axiomatic computational complexity theory. Phrases such as “perform n steps in the enumeration of W_i” certainly bear witness to the fact that many of these complexity notions have been used implicitly from the early days of recursive function theory. However, other complexity notions, such as that of an “honest” function, are not so apparent, either explicitly or implicitly. Accordingly, one of the main factors in our simplification of these diagonalization arguments is the replacement of the characteristic function χ_A of a set A by the function ν_A, the next-element function of the set A. Another important factor is the use of busy beaver sets (see [3]) to provide the basis for the required diagonalizations, thereby permitting rather simple and explicit descriptions of the sets constructed. Although the differences between the priority method and our method of construction are subtle, they are nonetheless real and noteworthy.

In preparation for the results which follow, we devote the remainder of this section to the requisite definitions and notions as well as some preliminary lemmas. A more comprehensive discussion of many of the notions in this section can be found in [3]. Since we will be dealing extensively with relative computations, most of our notions here have been correspondingly relativized.
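
To make the contrast concrete, here is a minimal Python sketch (illustrative only, not from the paper; it assumes the common convention that ν_A(n) is the least element of A that is ≥ n):

```python
# Illustrative sketch (not from the paper). Convention assumed here:
# nu_A(n) = least element of A that is >= n.

class Evens:
    """Membership predicate for the set A of even numbers."""
    def __contains__(self, n):
        return n % 2 == 0

def chi_A(A, n):
    """Characteristic function: 1 if n is in A, else 0."""
    return 1 if n in A else 0

def nu_A(A, n):
    """Next-element function: the least m in A with m >= n.
    Terminates whenever A contains some element >= n."""
    m = n
    while m not in A:
        m += 1
    return m

A = Evens()
assert chi_A(A, 4) == 1 and chi_A(A, 5) == 0
assert nu_A(A, 5) == 6   # least even number >= 5
# Note: one value of nu_A answers many chi_A queries at once:
# nu_A(A, 5) == 6 tells us that 5 is not in A and that 6 is.
```

As the final comment indicates, a single value of ν_A settles a whole block of membership questions about A, which is one reason the next-element function is a convenient primitive in diagonalization arguments.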

1987 ◽  
Vol 52 (1) ◽  
pp. 1-43 ◽  
Author(s):  
Larry Stockmeyer

One of the more significant achievements of twentieth century mathematics, especially from the viewpoints of logic and computer science, was the work of Church, Gödel and Turing in the 1930s, which provided a precise and robust definition of what it means for a problem to be computationally solvable, or decidable, and which showed that there are undecidable problems arising naturally in logic and computer science. Indeed, when one is faced with a new computational problem, one of the first questions to be answered is whether the problem is decidable or undecidable. A problem is usually defined to be decidable if and only if it can be solved by some Turing machine, and the class of decidable problems defined in this way remains unchanged if “Turing machine” is replaced by any of a variety of other formal models of computation. The division of all problems into two classes, decidable or undecidable, is very coarse, and refinements have been made on both sides of the boundary. On the undecidable side, work in recursive function theory, using tools such as effective reducibility, has exposed much additional structure, such as degrees of unsolvability. The main purpose of this survey article is to describe a branch of computational complexity theory which attempts to expose more structure within the decidable side of the boundary.

Motivated in part by practical considerations, the additional structure is obtained by placing upper bounds on the amounts of computational resources needed to solve the problem. Two common measures of the computational resources used by an algorithm are time, the number of steps executed by the algorithm, and space, the amount of memory used by the algorithm.
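
As a concrete illustration of these two measures (a toy sketch, not from the survey), the following Python fragment instruments a simple decision procedure so that it reports both the number of steps it executes and the amount of auxiliary memory it uses:

```python
# Illustrative sketch: tallying the two standard resources, time (steps
# executed) and space (cells of auxiliary memory), for a simple decision
# procedure.

def is_palindrome(s):
    """Decide whether s reads the same forwards and backwards,
    counting steps executed and auxiliary memory used."""
    steps = 0
    aux = []                  # auxiliary storage; its peak length is the space used
    for ch in s:              # copy the input onto auxiliary storage
        aux.append(ch)
        steps += 1
    answer = True
    for ch in s:              # compare the input against the reversed copy
        steps += 1
        if ch != aux.pop():
            answer = False
            break
    # time ~ total loop iterations; space ~ peak size of aux (here |s|)
    return answer, steps, len(s)

print(is_palindrome("abba"))  # (True, 8, 4): 8 steps, 4 cells of space
```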


1976 ◽  
Vol 41 (3) ◽  
pp. 626-638 ◽  
Author(s):  
Robert P. Daley

We are concerned in this paper with infinite binary sequences which are noncomplex in the sense that their minimal-program complexity (i.e., the lengths of shortest programs for computing their finite initial segments), as a function of the lengths of those initial segments, grows arbitrarily slowly (in an effective sense). Indeed, the existence of such sequences which are also nonrecursive raises some interesting questions concerning the notion of computability itself. Our first task is to give a characterization of these sequences which does not directly involve the notions of complexity theory. Although these characterizations do involve the primitives of recursive function theory, they do not involve the types of properties which one usually encounters there. While a closer connection is still hoped for, the lack of one is not entirely unexpected. For example, except for the trivial case of degree 0, there is no general correlation between program complexity and degrees of unsolvability. The reason, roughly, is this: even though the inequality deg(f) ≤ deg(g) expresses the relative information between f and g (in the sense that if one knows g then one can compute f), it is a qualitative sort of information. The minimal-program complexity, by contrast, is used as a measure of algorithmic information content and as such is quantitative.

The second task of this paper is to present an example of one of these sequences which, to its credit, possesses a fairly long list of interesting properties, not the least attractive of which is that, though nonrecursive, it is very simple to describe. In fact this sequence has occurred previously in the literature: it is the characteristic sequence of the set of busy beaver numbers first studied by T. Rado [13]. Of particular interest to the discussion in the last section of this paper concerning the notion of computability is the fact that all the initial segments of this sequence are computable by arbitrarily short programs, each of which is defined on all inputs and runs very quickly.
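
To fix ideas, here is a hedged Python sketch of minimal-program complexity relative to a toy programming system (the system U and its alphabet are invented for illustration; the genuine notion is defined relative to a universal machine and is not computable in general):

```python
from itertools import product

# Toy programming system U, a stand-in for a universal machine. Programs
# are strings over a small alphabet. This only illustrates the shape of
# the definition of minimal-program complexity.

def U(prog):
    """Toy interpreter: 'L<text>' outputs <text> literally;
    'R<digit><char>' outputs <char> repeated <digit> times."""
    if prog.startswith("L"):
        return prog[1:]
    if prog.startswith("R") and len(prog) == 3 and prog[1].isdigit():
        return prog[2] * int(prog[1])
    return None  # invalid program

ALPHABET = "LR0123456789ab"

def C_toy(x, max_len=8):
    """Length of the shortest toy program producing x (brute-force search)."""
    for n in range(1, max_len + 1):
        for cand in product(ALPHABET, repeat=n):
            if U("".join(cand)) == x:
                return n
    return None

print(C_toy("aaaaaaaa"))  # 3: the program 'R8a' beats the literal 'Laaaaaaaa'
print(C_toy("ab"))        # 3: no repetition to exploit, so 'Lab' is shortest
```

The sequences the abstract describes are noncomplex in exactly this sense: programs far shorter than the initial segments themselves suffice to generate those segments.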


4OR ◽  
2021 ◽  
Author(s):  
Gerhard J. Woeginger

We survey optimization problems that allow natural, simple formulations with one existential and one universal quantifier. We summarize the theoretical background from computational complexity theory, and we present a multitude of illustrating examples. We discuss the connections to robust optimization and to bilevel optimization, and we explain the reasons why the operational research community should be interested in the theoretical aspects of this area.
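
To see the quantifier pattern concretely, here is a toy Python sketch (problem data invented for illustration) of a robust optimization problem of the form "find a cheapest decision x such that for all scenarios y the constraints are satisfied":

```python
from itertools import product

# Toy exists-forall optimization problem: choose a 0/1 decision vector x
# of minimum cost such that for ALL scenarios y in the uncertainty set,
# x covers every coordinate that y demands. All data below is invented.

n = 3
decisions = list(product([0, 1], repeat=n))      # existential player's choices
scenarios = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]    # universal player's choices
cost = [3, 2, 4]

def feasible(x):
    """x is robust-feasible iff for every scenario y, whenever y demands
    coordinate i (y[i] == 1), x selects it (x[i] == 1)."""
    return all(all(x[i] >= y[i] for i in range(n)) for y in scenarios)

best = min((x for x in decisions if feasible(x)),
           key=lambda x: sum(c * xi for c, xi in zip(cost, x)))
print(best)  # (1, 1, 1): every coordinate is demanded by some scenario
```

The exhaustive double loop mirrors the two quantifiers directly, which is why such problems naturally sit at the second level of the polynomial hierarchy rather than in NP.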


1996 ◽  
Vol 27 (4) ◽  
pp. 3-7
Author(s):  
E. Allender ◽  
J. Feigenbaum ◽  
J. Goldsmith ◽  
T. Pitassi ◽  
S. Rudich

J. C. Shepherdson. Algorithmic procedures, generalized Turing algorithms, and elementary recursion theory. Harvey Friedman's research on the foundations of mathematics, edited by L. A. Harrington, M. D. Morley, A. Ščedrov, and S. G. Simpson, Studies in logic and the foundations of mathematics, vol. 117, North-Holland, Amsterdam, New York, and Oxford, 1985, pp. 285–308. - J. C. Shepherdson. Computational complexity of real functions. Harvey Friedman's research on the foundations of mathematics, edited by L. A. Harrington, M. D. Morley, A. Ščedrov, and S. G. Simpson, Studies in logic and the foundations of mathematics, vol. 117, North-Holland, Amsterdam, New York, and Oxford, 1985, pp. 309–315. - A. J. Kfoury. The pebble game and logics of programs. Harvey Friedman's research on the foundations of mathematics, edited by L. A. Harrington, M. D. Morley, A. Ščedrov, and S. G. Simpson, Studies in logic and the foundations of mathematics, vol. 117, North-Holland, Amsterdam, New York, and Oxford, 1985, pp. 317–329. - R. Statman. Equality between functionals revisited. Harvey Friedman's research on the foundations of mathematics, edited by L. A. Harrington, M. D. Morley, A. Ščedrov, and S. G. Simpson, Studies in logic and the foundations of mathematics, vol. 117, North-Holland, Amsterdam, New York, and Oxford, 1985, pp. 331–338. - Robert E. Byerly. Mathematical aspects of recursive function theory. Harvey Friedman's research on the foundations of mathematics, edited by L. A. Harrington, M. D. Morley, A. Ščedrov, and S. G. Simpson, Studies in logic and the foundations of mathematics, vol. 117, North-Holland, Amsterdam, New York, and Oxford, 1985, pp. 339–352.

1990 ◽  
Vol 55 (2) ◽  
pp. 876-878
Author(s):  
J. V. Tucker

2019 ◽  
Vol 27 (3) ◽  
pp. 381-439
Author(s):  
Walter Dean

Computational complexity theory is a subfield of computer science originating in computability theory and the study of algorithms for solving practical mathematical problems. Amongst its aims is classifying problems by their degree of difficulty — i.e., how hard they are to solve computationally. This paper highlights the significance of complexity theory relative to questions traditionally asked by philosophers of mathematics while also attempting to isolate some new ones — e.g., about the notion of feasibility in mathematics, the $\mathbf{P} \neq \mathbf{NP}$ problem and why it has proven hard to resolve, and the role of non-classical modes of computation and proof.
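
For reference, the standard definitions underlying the feasibility discussion can be stated as follows (textbook definitions, not specific to this paper):

```latex
% P: problems decidable in deterministic polynomial time;
% NP: problems decidable in nondeterministic polynomial time.
\[
  \mathbf{P} = \bigcup_{k \ge 1} \mathrm{TIME}\bigl(n^{k}\bigr),
  \qquad
  \mathbf{NP} = \bigcup_{k \ge 1} \mathrm{NTIME}\bigl(n^{k}\bigr).
\]
% The P vs NP question asks whether these classes coincide; identifying
% "feasible" with "polynomial time" is the standard Cobham--Edmonds
% convention whose adequacy the paper examines.
```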


Algorithms ◽  
2020 ◽  
Vol 13 (5) ◽  
pp. 122
Author(s):  
Arne Meier

In this paper, we study the relationships among parameterized enumeration complexity classes defined by Creignou et al. (MFCS 2013). Specifically, we introduce two hierarchies (IncFPTa and CapIncFPTa) of enumeration complexity classes for incremental fpt-time in terms of exponent slices and show how they interleave. Furthermore, we define several parameterized function classes and, in particular, introduce the parameterized counterpart of TFNP, the class of nondeterministic multivalued functions with values that are polynomially verifiable and guaranteed to exist, known from Megiddo and Papadimitriou (TCS 1991). We show that the collapse of this class TF(para-NP), the restriction of the function variant of para-NP to total functions, to F(FPT), the function variant of FPT, is equivalent to OutputFPT coinciding with IncFPT. In addition, these collapses are shown to be equivalent to TFNP = FP, and also to P = NP ∩ coNP. Finally, we show that these two collapses are equivalent to the collapse of IncP and OutputP in the classical setting. These results are the first direct connections between collapses in parameterized enumeration complexity and collapses in classical enumeration complexity, parameterized function complexity, classical function complexity, and computational complexity theory.
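
As a schematic illustration of the enumeration setting (a sketch with invented problem data; stated loosely, in incremental fpt-time the delay before the i-th output may grow polynomially with i, scaled by a computable function of the parameter k — see the paper for the precise slices), the following Python generator enumerates all vertex covers of size at most k and measures the delay between successive outputs:

```python
import time
from itertools import combinations

# Schematic sketch of an enumeration procedure: solutions are emitted one
# at a time, and the resource of interest is the delay before each
# successive output. The toy problem (all vertex covers of size <= k,
# with k the parameter) only exhibits the interface, not an optimal
# incremental algorithm.

def vertex_covers(edges, vertices, k):
    """Yield every subset S of at most k vertices covering all edges."""
    for size in range(k + 1):
        for S in combinations(sorted(vertices), size):
            if all(u in S or v in S for (u, v) in edges):
                yield S

edges = [(1, 2), (2, 3), (3, 4)]
vertices = {1, 2, 3, 4}
prev = time.perf_counter()
for i, cover in enumerate(vertex_covers(edges, vertices, k=2), start=1):
    now = time.perf_counter()
    print(f"solution {i}: {cover}  (delay {now - prev:.6f}s)")
    prev = now
```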

