Permanent does not have succinct polynomial size arithmetic circuits of constant depth

2013 ◽  
Vol 222 ◽  
pp. 195-207 ◽  
Author(s):  
Maurice Jansen ◽  
Rahul Santhanam


Uniform derandomization from pathetic lower bounds

Author(s):  
Eric Allender ◽  
V. Arvind ◽  
Rahul Santhanam ◽  
Fengming Wang

The notion of probabilistic computation dates back at least to Turing, who also wrestled with the practical problems of how to implement probabilistic algorithms on machines with, at best, very limited access to randomness. A more recent line of research, known as derandomization, studies the extent to which randomness is superfluous. A recurring theme in the literature on derandomization is that probabilistic algorithms can be simulated quickly by deterministic algorithms if one can obtain impressive (i.e. superpolynomial, or even nearly exponential) circuit size lower bounds for certain problems. In contrast to what is needed for derandomization, existing lower bounds seem rather pathetic. Here, we present two instances where 'pathetic' lower bounds of the form n^(1+ε) would suffice to derandomize interesting classes of probabilistic algorithms. We show the following: (1) If the word problem over S_5 requires constant-depth threshold circuits of size n^(1+ε) for some ε > 0, then any language accepted by uniform polynomial-size probabilistic threshold circuits can be solved in subexponential time (and, more strongly, can be accepted by a uniform family of deterministic constant-depth threshold circuits of subexponential size). (2) If there are no constant-depth arithmetic circuits of size n^(1+ε) for the problem of multiplying a sequence of n 3×3 matrices, then, for every constant d, black-box identity testing for depth-d arithmetic circuits with bounded individual degree can be performed in subexponential time (and even by a uniform family of deterministic constant-depth AC^0 circuits of subexponential size).
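The candidate hard problem in the first result, the word problem over S_5, is easy to state concretely: given a sequence of permutations of a five-element set, decide whether their product is the identity. The following is a minimal illustrative sketch of the problem itself (a straightforward sequential evaluation, not the circuit model discussed above); all names are hypothetical.

```python
# Permutations of {0,...,4} represented as tuples; composition is the
# group operation of S_5.
def compose(p, q):
    """Apply q first, then p (one common convention)."""
    return tuple(p[q[i]] for i in range(5))

IDENTITY = tuple(range(5))

def word_problem_s5(word):
    """Return True iff the product of the given permutations is the identity."""
    acc = IDENTITY
    for p in word:
        acc = compose(acc, p)
    return acc == IDENTITY

# A 5-cycle and its inverse multiply back to the identity.
c = (1, 2, 3, 4, 0)
c_inv = tuple(c.index(i) for i in range(5))
```

The question the abstract raises is not whether this problem is computable (it trivially is, in linear time) but whether it requires constant-depth threshold circuits of size n^(1+ε).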


1991 ◽  
Vol 20 (343) ◽  
Author(s):  
Gudmund Skovbjerg Frandsen ◽  
Mark Valence ◽  
David Mix Barrington

We introduce a natural set of arithmetic expressions and define the complexity class AE to consist of all those arithmetic functions (over the fields F_(2^n)) that are described by these expressions. We show that AE coincides with the class of functions that are computable with constant depth and polynomial size unbounded fan-in arithmetic circuits satisfying a natural uniformity constraint (DLOGTIME-uniformity). A 1-input and 1-output arithmetic function over the fields F_(2^n) may be identified with an n-input and n-output Boolean function when field elements are represented as bit strings. We prove that if some such representation is X-uniform (where X is P or DLOGTIME) then the arithmetic complexity of a function (measured with X-uniform unbounded fan-in arithmetic circuits) is identical to the Boolean complexity of this function (measured with X-uniform threshold circuits). We show the existence of a P-uniform representation and we give partial results concerning the existence of representations with more restrictive uniformity properties.
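The identification of field elements with bit strings can be made concrete for a small field. Below is a minimal sketch of arithmetic in F_(2^3) with elements stored as 3-bit integers; the choice of irreducible polynomial x^3 + x + 1 is one hypothetical representation among several (the abstract's uniformity questions concern which such representations can be computed uniformly).

```python
# Field elements of F_(2^3) as 3-bit integers: bit i holds the coefficient
# of x^i. Addition is XOR; multiplication is polynomial multiplication
# reduced modulo the irreducible polynomial x^3 + x + 1.
IRRED = 0b1011  # x^3 + x + 1

def gf8_add(a, b):
    return a ^ b

def gf8_mul(a, b):
    prod = 0
    while b:
        if b & 1:
            prod ^= a        # add the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & 0b1000:       # reduce as soon as degree reaches 3
            a ^= IRRED
    return prod & 0b111
```

For example, x · x = x^2 (0b010 · 0b010 = 0b100), and x^2 · x = x^3 = x + 1 (0b100 · 0b010 = 0b011) after reduction.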


1991 ◽  
Vol 01 (01) ◽  
pp. 49-87 ◽  
Author(s):  
HOWARD STRAUBING

This paper is devoted to the languages accepted by constant-depth, polynomial-size families of circuits in which every circuit element computes the sum of its input bits modulo a fixed period q. It has been conjectured that such a circuit family cannot compute the AND function of n inputs. Here it is shown that such circuit families are equivalent in power to polynomial-length programs over finite solvable groups; in particular, the conjecture implies that Barrington's result on the computational power of branching programs over nonsolvable groups cannot be extended to solvable groups. It is also shown that polynomial-length programs over dihedral groups cannot compute the AND function. Furthermore, it is shown that the conjecture is equivalent to a characterization, in terms of finite semigroups and formal logic, of the regular languages accepted by such circuit families. There is, moreover, considerable independent evidence for this characterization. This last result is established using new theorems, of independent interest, concerning the algebraic structure of finite categories.
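A "program over a finite group," the central object in the equivalence above, is simply a list of instructions, each reading one input bit and emitting one of two group elements; the program accepts when the product of the emitted elements lies in a designated accepting set. The sketch below is a hypothetical minimal evaluator (the names and the example group are not from the paper), using the solvable group Z_3 to compute whether the bit-sum is nonzero mod 3.

```python
# A program over a finite group is a list of instructions (i, g0, g1):
# on input x, instruction (i, g0, g1) contributes g1 if x[i] == 1, else g0.
# The program accepts x iff the product of all contributions lies in a
# designated accepting set.

def run_program(program, x, op, identity, accepting):
    acc = identity
    for i, g0, g1 in program:
        acc = op(acc, g1 if x[i] else g0)
    return acc in accepting

# Example over Z_3 (a solvable group): accept iff the number of 1-bits
# is not divisible by 3.
op = lambda a, b: (a + b) % 3
program = [(i, 0, 1) for i in range(4)]  # each bit contributes its value mod 3
run_program(program, [1, 1, 0, 1], op, 0, {1, 2})  # three 1s: rejected
```

The conjecture discussed above amounts to saying that no polynomial-length program of this kind over a solvable group can compute the AND of its input bits.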


1986 ◽  
Vol 70 (2-3) ◽  
pp. 216-240 ◽  
Author(s):  
Larry Denenberg ◽  
Yuri Gurevich ◽  
Saharon Shelah

1991 ◽  
Vol 02 (03) ◽  
pp. 183-205 ◽  
Author(s):  
Dung T. Huynh

In this paper, we investigate the complexity of computing the detector, constructor and lexicographic constructor functions for a given language. The following classes of languages will be considered: (1) context-free languages, (2) regular sets, (3) languages accepted by one-way nondeterministic auxiliary pushdown automata, (4) languages accepted by one-way nondeterministic logspace-bounded Turing machines, (5) two-way deterministic pushdown automaton languages, (6) languages accepted by uniform families of constant-depth polynomial-size Boolean circuits, and (7) languages accepted by multihead finite automata. We show that for the classes (1)–(4), efficient detectors, constructors and lexicographic constructors exist, whereas for (5)–(7) polynomial-time computable detectors, constructors and lexicographic constructors exist iff there are no sparse sets in NP−P (or equivalently, E=NE). Our results provide sharp boundaries between the classes of languages that have efficient detectors and constructors and those for which efficient detectors and constructors do not appear to exist.
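For regular sets, class (2) above, an efficient lexicographic constructor is easy to exhibit: given a DFA, dynamic programming over the states finds the lexicographically least accepted string of a given length. The sketch below assumes a dict-based transition table; all names are hypothetical illustrations, not the paper's construction.

```python
def lex_constructor(delta, start, accepting, alphabet, n):
    """Lexicographically least length-n string accepted by the DFA, or None.
    delta: dict mapping (state, symbol) -> state."""
    states = {s for s, _ in delta} | set(delta.values()) | {start}
    # alive[k] = states from which some accepting state is reachable
    # in exactly k more steps.
    alive = [set(accepting)]
    for _ in range(n):
        prev = alive[-1]
        alive.append({q for q in states
                      if any(delta.get((q, a)) in prev for a in alphabet)})
    if start not in alive[n]:
        return None
    word, q = [], start
    for k in range(n, 0, -1):
        for a in sorted(alphabet):  # greedy: smallest symbol that stays alive
            if delta.get((q, a)) in alive[k - 1]:
                word.append(a)
                q = delta[(q, a)]
                break
    return "".join(word)

# DFA over {'a','b'} accepting strings that end in 'b':
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
lex_constructor(delta, 0, {1}, {'a', 'b'}, 3)  # 'aab'
```

The table of "alive" state sets plays the role of the detector; the greedy pass over it yields the lexicographic constructor in polynomial time.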


1990 ◽  
Vol 19 (315) ◽  
Author(s):  
Zhi-Li Zhang

We give a simple extension of Smolensky's method by replacing Smolensky's concept of U^n_F-completeness with a new definition: F-hardness. An easy consequence of this definition is that F-hard functions do not have constant-depth, polynomial-size Boolean circuits with Mod_p gates, where p is the characteristic of F. By this extension, we can explicitly show that many functions are hard; we establish a Hardness Lemma for a class of functions, and characterize when a function over a finite field is hard to compute by small-depth circuits with Mod_p gates. Furthermore, we discuss the difficulties in extending Smolensky's theory to a general ring. While in general the nice relationship between the Boolean circuit model and the algebra of functions representing Boolean functions over a ring collapses, we can still extend the complexity-theoretic notions introduced by this extension of Smolensky's theory to a ring in order to classify functions over such a ring by their relative complexity. A result states that any representation of Majority over any ring R=Z/(r) for any fixed r in N is hard. This provides a kind of evidence that Majority is not AC^0 reducible to Mod_r.
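The notion of "representing" a Boolean function over Z/(r) can be made concrete for a tiny case. Over {0,1} inputs, every Boolean function has a unique multilinear polynomial representation; for Majority on 3 bits over Z/(3) it is e2 + e3 (since the coefficient −2 of the cubic term equals 1 mod 3). The check below verifies this identity by brute force; it illustrates only what a representation is, not the paper's hardness results, which concern how such representations grow.

```python
from itertools import product

def maj3(x):
    return int(sum(x) >= 2)

# Over Z/(3), the multilinear representation of MAJ3 on {0,1}^3 is
# x1*x2 + x1*x3 + x2*x3 + x1*x2*x3  (the usual -2*x1*x2*x3 term, with -2 = 1 mod 3).
def poly(x, r=3):
    x1, x2, x3 = x
    return (x1 * x2 + x1 * x3 + x2 * x3 + x1 * x2 * x3) % r

all(maj3(x) == poly(x) for x in product((0, 1), repeat=3))  # True
```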


Author(s):  
S. Lakshmivarahan ◽  
Sudarshan K. Dhall

It is well known that among the three classes of the PRAM models, namely, CRCW, CREW, and EREW, the CRCW models are the most powerful, in the sense that they permit concurrent read/write by processors. Accordingly, algorithms on the CRCW model mainly concentrate on the core computations without much ado about data access. Consequently, this model, at least in principle, allows for the design of the fastest algorithm for a problem. It is intriguing to ask how fast prefixes can be computed on the CRCW models. Since CRCW models are equivalent to the unbounded fan-in circuits (refer to Chapter 2), the task of developing the fastest algorithms for the prefix problems is pursued in the context of the unbounded fan-in circuits. Recall from Chapter 2 that while the standard measures, such as size and depth, are still used to quantify the goodness of unbounded fan-in circuits, the size of the circuit is measured by the total number of edges incident on all of its operation nodes, instead of by the number of operation nodes. It turns out that the size and depth of unbounded fan-in circuits for computing prefixes depend critically on the structure of the underlying semigroup from which the input elements are drawn. The principal result of this concluding Chapter may be stated as follows: there exist unbounded fan-in parallel prefix circuits of constant depth and polynomial size if, and only if, the underlying semigroup is group-free. The proof of this result involves a very clever synthesis of a number of ideas drawn from different directions: the structure of group-free semigroups, their relation to a special class of regular sets, called non-counting regular sets, the relation of this latter class of regular sets to yet another class of regular sets defined by star-free regular expressions, and the design of a special class of finite state deterministic automata, called RS machines, that accept star-free regular expressions.
In this context, it is convenient to define the notion of small circuits as the class of circuits with constant depth and polynomial size.
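For reference, the generic parallel prefix computation over an arbitrary associative operation can be done in O(log n) rounds of independent combinations (the standard doubling scan, here simulated sequentially in a hypothetical sketch). The chapter's result is sharper than this: it asks when the depth can be brought down to a constant, which happens exactly for group-free semigroups.

```python
def prefix_scan(xs, op):
    """Inclusive prefix products under an associative op, computed in
    O(log n) rounds; every combination within a round is independent,
    so each round could run in parallel on a PRAM."""
    ys = list(xs)
    d = 1
    while d < len(ys):
        ys = [ys[i] if i < d else op(ys[i - d], ys[i]) for i in range(len(ys))]
        d *= 2
    return ys

prefix_scan([1, 2, 3, 4, 5], lambda a, b: a + b)  # [1, 3, 6, 10, 15]
```

Because only associativity is used, the same routine computes prefixes over any semigroup, e.g. string concatenation; only the constant-depth circuit construction depends on the semigroup being group-free.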

