Classifying the computational complexity of problems

1987 ◽  
Vol 52 (1) ◽  
pp. 1-43 ◽  
Author(s):  
Larry Stockmeyer

One of the more significant achievements of twentieth century mathematics, especially from the viewpoints of logic and computer science, was the work of Church, Gödel and Turing in the 1930's which provided a precise and robust definition of what it means for a problem to be computationally solvable, or decidable, and which showed that there are undecidable problems which arise naturally in logic and computer science. Indeed, when one is faced with a new computational problem, one of the first questions to be answered is whether the problem is decidable or undecidable. A problem is usually defined to be decidable if and only if it can be solved by some Turing machine, and the class of decidable problems defined in this way remains unchanged if “Turing machine” is replaced by any of a variety of other formal models of computation. The division of all problems into two classes, decidable or undecidable, is very coarse, and refinements have been made on both sides of the boundary. On the undecidable side, work in recursive function theory, using tools such as effective reducibility, has exposed much additional structure such as degrees of unsolvability. The main purpose of this survey article is to describe a branch of computational complexity theory which attempts to expose more structure within the decidable side of the boundary.

Motivated in part by practical considerations, the additional structure is obtained by placing upper bounds on the amounts of computational resources which are needed to solve the problem. Two common measures of the computational resources used by an algorithm are time, the number of steps executed by the algorithm, and space, the amount of memory used by the algorithm.
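As a rough illustration of these two measures (a sketch of ours, not from the survey), the following decision procedure for the palindrome problem is instrumented to report its time, counted as basic comparisons, and its space, counted as extra memory cells used beyond the input:

```python
# A sketch of ours (not from the survey): a decision procedure for the
# palindrome problem instrumented to report its time (number of basic
# comparisons) and space (extra memory cells used beyond the input).

def is_palindrome(w: str):
    """Decide whether w is a palindrome; return (answer, time, space)."""
    steps = 0               # time: basic comparisons performed
    i, j = 0, len(w) - 1
    space = 2               # space: the two index cells i and j
    while i < j:
        steps += 1
        if w[i] != w[j]:
            return False, steps, space
        i, j = i + 1, j - 1
    return True, steps, space

if __name__ == "__main__":
    for w in ["racecar", "turing"]:
        ans, t, s = is_palindrome(w)
        print(f"{w!r}: palindrome={ans}, time={t} steps, space={s} cells")
```

On an input of length n the procedure performs at most ⌊n/2⌋ comparisons and uses a constant number of extra cells, so in the survey's terms it decides the problem in linear time and constant additional space.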

Author(s):  
Maciej Liskiewicz ◽  
Ulrich Wölfel

This chapter provides an overview, based on current research, of theoretical aspects of digital steganography, a relatively new field of computer science that deals with hiding secret data in unsuspicious cover media. We focus on a formal analysis of the security of steganographic systems from a computational complexity point of view, and provide models of secure systems that make realistic assumptions about the limited computational resources of the parties involved. This allows us to ground steganographic secrecy in complexity assumptions similar to those commonly accepted in modern cryptography. We also expand the analysis of stego-systems beyond security aspects, which practitioners find difficult (if not impossible) to realize, to the questions of why such systems are so hard to implement and what makes them different from the systems used in practice.
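As a concrete point of reference (our sketch, not the chapter's construction), the rejection-sampling idea behind many provably secure stego-systems can be illustrated as follows; the sampler, the key handling, and the one-bit-per-document message format are simplifying assumptions of our own:

```python
# A sketch of ours, not the chapter's construction: rejection-sampling
# steganography. All names, the toy sampler, and the one-bit-per-document
# rate are simplifying assumptions for illustration.
import hmac
import hashlib
import random

def keyed_bit(key: bytes, doc: str) -> int:
    """One pseudorandom bit derived from a document under the shared key."""
    return hmac.new(key, doc.encode(), hashlib.sha256).digest()[0] & 1

def sample_cover(rng: random.Random) -> str:
    """Stand-in for a sampler of unsuspicious cover texts; the counter
    stands in for the natural variability of real cover documents."""
    phrase = rng.choice(["see you at noon", "lovely weather today",
                         "call me later", "meeting moved to friday"])
    return f"{phrase} #{rng.randrange(10**6)}"

def embed(key: bytes, bits: list, rng: random.Random, max_tries: int = 64):
    """Encode each hidden bit as one cover document: resample covers until
    the keyed bit of the sampled document matches the bit to hide."""
    docs = []
    for b in bits:
        for _ in range(max_tries):
            doc = sample_cover(rng)
            if keyed_bit(key, doc) == b:
                docs.append(doc)
                break
        else:
            raise RuntimeError("sampler never produced a matching document")
    return docs

def extract(key: bytes, docs: list) -> list:
    """The receiver recomputes the keyed bit of each received document."""
    return [keyed_bit(key, d) for d in docs]

if __name__ == "__main__":
    key, rng = b"shared-secret", random.Random(7)
    message = [1, 0, 1, 1]
    stego = embed(key, message, rng)
    assert extract(key, stego) == message
    print(stego)
```

Even this toy version hints at the practical obstacles the chapter discusses: the construction presupposes a trustworthy sampler of cover documents, and it pays for secrecy with a very low embedding rate.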


Author(s):  
Manuel Blum ◽  
Lenore Blum

The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. This paper studies consciousness from the perspective of theoretical computer science. It formalizes the Global Workspace Theory (GWT) originated by the cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, and others. Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called a Conscious AI. We define the CTM in the spirit of Alan Turing’s simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain or of cognition, but for a simple model of (the admittedly complex concept of) consciousness. After formally defining the CTM, we give a formal definition of consciousness in the CTM. We then suggest why the CTM has the feeling of consciousness. The reasonableness of the definitions and explanations can be judged by how well they agree with commonly accepted intuitive concepts of human consciousness, the range of related concepts the model explains easily and naturally, and the extent of its agreement with scientific evidence.
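To make the broadcast dynamic of GWT concrete (a toy sketch of ours; the paper's CTM is defined far more carefully), one can picture many processors competing for a single workspace whose winning content is broadcast to all:

```python
# A toy sketch of ours (the paper's CTM is defined far more carefully):
# processors submit weighted chunks, a competition selects one, and the
# winner is broadcast as the current "conscious" content. All names and
# the scoring rule are invented for illustration.
from dataclasses import dataclass
import random

@dataclass
class Chunk:
    source: str      # which processor produced it
    content: str     # the information carried
    weight: float    # how urgently it bids for the workspace

def make_processor(name: str, topics: list):
    def propose(rng: random.Random) -> Chunk:
        return Chunk(name, rng.choice(topics), rng.random())
    return propose

def step(processors, rng: random.Random) -> Chunk:
    bids = [p(rng) for p in processors]         # every processor submits a chunk
    return max(bids, key=lambda c: c.weight)    # the winner is broadcast to all

if __name__ == "__main__":
    rng = random.Random(0)
    processors = [
        make_processor("vision", ["red light", "moving shape"]),
        make_processor("hearing", ["loud noise", "a familiar voice"]),
        make_processor("memory", ["appointment at 3pm"]),
    ]
    for t in range(5):
        chunk = step(processors, rng)
        print(f"t={t}: broadcast {chunk.content!r} (from {chunk.source})")
```

The sketch conveys only the overall loop of competition and broadcast; in the paper the processors, the competition, and the broadcast are all specified formally.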


1981 ◽  
Vol 46 (3) ◽  
pp. 460-474 ◽  
Author(s):  
Robert P. Daley

In this paper we show how some of the finite injury priority arguments can be simplified by making explicit use of the primitive notions of axiomatic computational complexity theory. Phrases such as “perform n steps in the enumeration of Wi” certainly bear witness to the fact that many of these complexity notions have been used implicitly from the early days of recursive function theory. However, other complexity notions such as that of an “honest” function are not so apparent, neither explicitly nor implicitly. Accordingly, one of the main factors in our simplification of these diagonalization arguments is the replacement of the characteristic function χA of a set A by the function νA, which is the next-element function of the set A. Another important factor is the use of busy beaver sets (see [3]) to provide the basis for the required diagonalizations, thereby permitting rather simple and explicit descriptions of the sets constructed. Although the differences between the priority method and our method of construction are subtle, they are nonetheless real and noteworthy.

In preparation for the results which follow we devote the remainder of this section to the requisite definitions and notions as well as some preliminary lemmas. A more comprehensive discussion of many of the notions in this section can be found in [3]. Since we will be dealing extensively with relative computations most of our notions here have been correspondingly relativized.
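For concreteness, the two functions can be rendered as follows (our notation; the reading of the next-element function as "least element of A beyond x" is our assumption, and the paper's own definition should be consulted):

```latex
% \chi_A is the usual characteristic function; the rendering of \nu_A as
% "least element of A beyond x" is our assumption -- the paper fixes the
% exact definition.
\[
\chi_A(x) =
\begin{cases}
1 & \text{if } x \in A,\\
0 & \text{otherwise,}
\end{cases}
\qquad
\nu_A(x) = \min\{\, y \in A : y > x \,\}.
\]
```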


2018 ◽  
pp. 4-7
Author(s):  
S. I. Zenko

The article raises the problem of classifying the concepts of computer science and informatics studied at secondary school; the effectiveness of techniques for teaching pupils these concepts depends on its solution. The author proposes to classify the concepts of school informatics from four perspectives: the cross-subject basis; the content lines of the school subject "Informatics"; the logical and structural interrelations and interactions of the concepts studied; and the etymology of foreign-language and translated words in the definitions of informatics concepts. The first classification distinguishes general and special concepts; the second, inter-content and intra-content concepts; the third, stable (steady), expanding, key, and auxiliary concepts; and the fourth, concepts-nouns, concepts-verbs, concepts-adjectives, and concepts that combine several parts of speech.


Examples of the value that can be created and captured through crowdsourcing go back to at least 1714, when the UK used crowdsourcing to solve the Longitude Problem, obtaining a solution that would enable the UK to become the dominant maritime force of its time. Today, Wikipedia uses crowds to provide entries for the world’s largest free encyclopedia. Partly fueled by the value that can be created and captured through crowdsourcing, interest in researching the phenomenon has been remarkable. For example, the Best Paper Awards in 2012 for a record-setting three journals—the Academy of Management Review, Journal of Product Innovation Management, and Academy of Management Perspectives—were about crowdsourcing. In spite of the interest in crowdsourcing—or perhaps because of it—research on the phenomenon has been conducted in separate silos within the fields of management (from strategy to finance to operations to information systems), biology, communications, computer science, economics, and political science, among others. In these silos, crowdsourcing takes names such as broadcast search, innovation tournaments, crowdfunding, community innovation, distributed innovation, collective intelligence, open source, crowdpower, and even open innovation. The book aims to assemble papers from as many of these silos as possible, since the ultimate potential of crowdsourcing research is likely to be attained only by bridging them. The papers provide a systematic overview of research on crowdsourcing from different fields, based on a more encompassing definition of the concept, its distinctive role in innovation, and its value for both the private and public sectors.


J. C. Shepherdson. Algorithmic procedures, generalized Turing algorithms, and elementary recursion theory. Harvey Friedman's research on the foundations of mathematics, edited by L. A. Harrington, M. D. Morley, A. Ščedrov, and S. G. Simpson, Studies in logic and the foundations of mathematics, vol. 117, North-Holland, Amsterdam, New York, and Oxford, 1985, pp. 285–308.
J. C. Shepherdson. Computational complexity of real functions. Ibid., pp. 309–315.
A. J. Kfoury. The pebble game and logics of programs. Ibid., pp. 317–329.
R. Statman. Equality between functionals revisited. Ibid., pp. 331–338.
Robert E. Byerly. Mathematical aspects of recursive function theory. Ibid., pp. 339–352.

1990 ◽  
Vol 55 (2) ◽  
pp. 876-878
Author(s):  
J. V. Tucker

10.29007/39jj ◽  
2018 ◽  
Author(s):  
Peter Wegner ◽  
Eugene Eberbach ◽  
Mark Burgin

In the paper we prove, in a new and simple way, that Interaction machines are more powerful than Turing machines. To do that we extend the definition of Interaction machines to multiple interactive components, where each component may perform a simple computation. The emerging expressiveness is due to the power of interaction and allows such machines to accept languages not accepted by Turing machines. The main result, that Interaction machines can accept arbitrary languages over a given alphabet, sheds new light on the power of interaction. Despite this, we do not claim that Interaction machines are complete. We claim that a complete theory of computer science cannot exist; in particular, neither Turing machines nor Interaction machines can be a complete model of computation. However, complete models of computation may and should be approximated indefinitely, and our contribution presents one such attempt.
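The following toy sketch (ours, not the paper's formal definition; all names and the environment rule are invented) conveys the architectural point: an interaction machine built from simple components carries on an unbounded dialogue with an environment whose next input depends on the machine's previous output, rather than performing a single run over a fixed finite input:

```python
# A toy sketch of ours, not the paper's formal definition: an interaction
# machine built from several simple components carries on an unbounded
# dialogue with its environment, whose next symbol depends on the machine's
# previous reply. All names and the environment rule are invented.
from typing import Callable, List

def make_component(transition) -> Callable[[str], str]:
    """A component: a tiny stateful transducer from symbol to symbol."""
    state = 0
    def run(sym: str) -> str:
        nonlocal state
        state, out = transition(state, sym)
        return out
    return run

def interact(components: List[Callable[[str], str]], steps: int) -> List[str]:
    """Run the dialogue: environment symbol -> all components -> reply."""
    history, env_sym = [], "a"
    for _ in range(steps):
        reply = env_sym
        for c in components:        # components compute in sequence
            reply = c(reply)
        history.append(reply)
        # the environment reacts to the reply (a simple stand-in rule)
        env_sym = "b" if reply.isupper() else "a"
    return history

if __name__ == "__main__":
    alternator = make_component(lambda s, x: (1 - s, x.upper() if s else x))
    echo = make_component(lambda s, x: (s, x))
    print(interact([alternator, echo], steps=6))
```

Because the environment's future inputs depend on the machine's earlier outputs, the whole interaction cannot in general be replayed as one precomputed input string, which is the intuition behind the paper's expressiveness claims.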


2019 ◽  
Author(s):  
Iza Romanowska ◽  
Stefani Crabtree ◽  
Kathryn Harris ◽  
Benjamin Davies

Formal models of past human societies informed by archaeological research have high potential to shape some of the most topical current debates. Agent-based models, which emphasize how actions by individuals combine to produce global patterns, provide a convenient framework for developing quantitative models of historical social processes. However, because the method originated in computer science, it remains a specialist technique within archaeology. In this paper and the associated tutorial, we provide a jargon-free introduction to the technique, its potential and limits, and its diverse applications in archaeology and beyond. We discuss the epistemological rationale for using computational modeling and simulation, classify types of models, and give an overview of the main concepts behind agent-based modeling.
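For readers who have not seen an agent-based model before, here is a minimal self-contained sketch (our illustration, not from the tutorial; all parameters are arbitrary): foragers walk randomly on a grid and drop an artifact wherever they stop, and the aggregate map of deposits is the emergent, population-level pattern that no individual rule states explicitly:

```python
# A minimal agent-based model, our illustration rather than the paper's:
# foragers walk randomly on a grid and drop an artifact wherever they stop.
# The aggregate map of deposits is the emergent, population-level pattern;
# no individual rule mentions it. All parameters are arbitrary.
import random

def simulate(n_agents: int = 10, size: int = 15, steps: int = 30, seed: int = 1):
    rng = random.Random(seed)
    agents = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_agents)]
    deposits = [[0] * size for _ in range(size)]
    for _ in range(steps):
        for i, (x, y) in enumerate(agents):
            dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            x, y = (x + dx) % size, (y + dy) % size   # wrap at the edges
            agents[i] = (x, y)
            deposits[y][x] += 1                       # leave an artifact behind
    return deposits

if __name__ == "__main__":
    # print the deposit density as an ASCII map (darker = more finds)
    for row in simulate():
        print("".join(" .:*#"[min(count, 4)] for count in row))
```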


2020 ◽  
Author(s):  
Michael Piotrowski ◽  
Aris Xanthos

The definition of the digital humanities has been a matter of heated discussion ever since the introduction of the term, earning the field the dubious reputation of being undefinable. While some seem to take pride in this reputation, the absence of a coherent definition frequently sparks off acrimonious criticism and debates. More importantly, though, it increasingly becomes a liability in the context of the progressive institutionalization of the digital humanities. Rather than vainly trying to find a definition of digital humanities that is at the same time descriptive and rigorous, we propose a stipulative definition that separates them into theoretical and applied digital humanities: the theoretical digital humanities are the metascientific discipline whose goal is the conception of formal methods that the applied digital humanities use to create formal models in the various humanities disciplines.

