Valiant's Model
Recently Published Documents

Total documents: 5 (five years: 0)
H-index: 3 (five years: 0)

2001 · Vol. 11 (04), pp. 409-422 · Author(s): Alexandre Tiskin

Valiant's model of bulk-synchronous parallel (BSP) computation does not allow the programmer to synchronize a subset, rather than the complete set, of a parallel computer's processors. Many perceive this as an obstacle to expressing divide-and-conquer algorithms in the BSP model. We argue that the divide-and-conquer paradigm fits naturally into the BSP model, with no need for subset synchronization. The proposed method of divide-and-conquer BSP programming is fully compliant with the BSP computation model. The method is based on sequentially interleaved threads of BSP computation, called superthreads.
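The abstract only names the superthread mechanism; the following Python sketch (our illustration, with invented names superthread and run_interleaved, not the paper's code) shows the idea of sequentially interleaved threads: each thread yields at a superstep boundary, and a round-robin scheduler advances every live thread between global barriers, so a divide-and-conquer split never needs to synchronize a processor subset.

from collections import deque

def superthread(label, supersteps):
    """A thread of BSP computation: each yield marks a superstep boundary."""
    for step in range(supersteps):
        print(f"{label}: local computation for superstep {step}")
        yield  # reached the global barrier shared by all superthreads

def run_interleaved(threads):
    """Round-robin scheduler: advance every live superthread by one
    superstep, then perform one conceptual global barrier."""
    queue = deque(threads)
    while queue:
        alive = deque()
        for t in queue:
            try:
                next(t)           # run this thread's share of the superstep
                alive.append(t)
            except StopIteration:
                pass              # this divide-and-conquer branch is finished
        if alive:
            print("--- global barrier ---")
        queue = alive

# Divide-and-conquer: the two sub-problems become two superthreads whose
# supersteps are interleaved under the single global barrier.
run_interleaved([superthread("left", 2), superthread("right", 3)])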


1995 · Vol. 7 (5), pp. 1054-1078 · Author(s): Wolfgang Maass

We consider learning on multilayer neural nets with piecewise polynomial activation functions and a fixed number k of numerical inputs. We exhibit arbitrarily large network architectures for which efficient and provably successful learning algorithms exist in the rather realistic refinement of Valiant's model for probably approximately correct learning ("PAC learning"): no a priori assumptions are required about the "target function" (agnostic learning), arbitrary noise is permitted in the training sample, and the target outputs as well as the network outputs may be arbitrary reals. The number of computation steps of the learning algorithm LEARN that we construct is bounded by a polynomial in the bit-length n of the input variables, in the bound s for the allowed bit-length of weights, in 1/ε, where ε is an arbitrary given bound for the true error of the neural net after training, and in 1/δ, where δ is an arbitrary given bound for the probability that the learning algorithm fails on a randomly drawn training sample. However, the computation time of LEARN is exponential in the number of weights of the considered network architecture, and the algorithm is therefore of interest only for neural nets of small size. This article provides the details for the previously published extended abstract (Maass 1994).
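Stated compactly (our notation, not the paper's, and an assumed way of combining the two stated bounds; W denotes the number of weights of the fixed architecture), the complexity claims read:

% assumption: poly(...) factor and 2^{O(W)} factor combine multiplicatively
\[
  T_{\mathrm{LEARN}} \;\le\; \mathrm{poly}\!\left(n,\ s,\ 1/\varepsilon,\ 1/\delta\right) \cdot 2^{O(W)}
\]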


1995 · Vol. 2, pp. 541-573 · Author(s): W. W. Cohen

In a companion paper it was shown that the class of constant-depth determinate k-ary recursive clauses is efficiently learnable. In this paper we present negative results showing that any natural generalization of this class is hard to learn in Valiant's model of PAC-learnability. In particular, we show that the following program classes are cryptographically hard to learn: programs with an unbounded number of constant-depth linear recursive clauses; programs with one constant-depth determinate clause containing an unbounded number of recursive calls; and programs with one linear recursive clause of constant locality. These results immediately imply the non-learnability of any more general class of programs. We also show that learning a constant-depth determinate program with either two linear recursive clauses, or one linear recursive clause and one non-recursive clause, is as hard as learning Boolean DNF. Together with the positive results from the companion paper, these negative results establish a boundary of efficient learnability for recursive function-free clauses.
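As a point of reference for the terminology (our illustration, not an example taken from the paper), a linear recursive clause contains exactly one recursive literal in its body. The familiar list-membership program consists of one non-recursive clause and one linear recursive clause, the same syntactic combination shown above to be as hard as learning Boolean DNF:

% illustrative logic program: one non-recursive clause, one linear recursive clause
\[
  \mathit{member}(X, [X \mid \_\,]). \qquad
  \mathit{member}(X, [\_ \mid T]) \leftarrow \mathit{member}(X, T).
\]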

