Polynomial Activation Functions
Recently Published Documents


TOTAL DOCUMENTS: 8 (five years: 0)
H-INDEX: 3 (five years: 0)

Author(s):  
Steffen Goebbels

Abstract: Single hidden layer feedforward neural networks can represent multivariate functions that are sums of ridge functions. These ridge functions are defined via an activation function and customizable weights. The paper deals with best non-linear approximation by such sums of ridge functions. Error bounds are presented in terms of moduli of smoothness. The main focus, however, is to prove that the bounds are best possible. To this end, counterexamples are constructed with a non-linear, quantitative extension of the uniform boundedness principle. They show sharpness with respect to Lipschitz classes for the logistic activation function and for certain piecewise polynomial activation functions. The paper is based on univariate results in Goebbels (Res Math 75(3):1–35, 2020, https://rdcu.be/b5mKH).
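For orientation, the approximants in question have the standard ridge-sum form shown below; the symbols (weights a_k, b_k, c_k, activation σ, and the r-th modulus of smoothness ω_r) are conventional notation assumed here, not quoted from the paper.

```latex
% Sum of n ridge functions realized by a single-hidden-layer network,
% together with the modulus of smoothness in which such error bounds
% are typically stated (standard notation; assumed, not from the paper).
\[
  f(x) \;\approx\; \sum_{k=1}^{n} c_k\,\sigma\bigl(a_k \cdot x + b_k\bigr),
  \qquad a_k \in \mathbb{R}^d,\ b_k, c_k \in \mathbb{R},
\]
\[
  \omega_r(f,\delta) \;=\; \sup_{0 < \lVert h \rVert \le \delta}
  \bigl\lVert \Delta_h^{r} f \bigr\rVert,
  \qquad \Delta_h^{r} = \text{$r$-th forward difference.}
\]
```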


2019 · Vol 50 (1) · pp. 121-147
Author(s):  
Ezequiel López-Rubio
Francisco Ortega-Zamorano
Enrique Domínguez
José Muñoz-Pérez

2002 · Vol 12 (05) · pp. 399-410
Author(s):  
Nikolay Y. Nikolaev
Hitoshi Iba

This paper presents a genetic programming system that evolves polynomial harmonic networks: multilayer feedforward neural networks with polynomial activation functions. The novel hybrids allow harmonics with non-multiple frequencies to enter the activation polynomials as inputs. These harmonics, with non-multiple, irregular frequencies, are derived analytically using the discrete Fourier transform. Polynomial harmonic networks have a tree-structured topology, which makes them especially suitable for evolutionary structural search. Empirical results show that this hybrid genetic programming system outperforms an evolutionary system that manipulates polynomials, traditional Koza-style genetic programming, and the harmonic GMDH network algorithm at processing time series.
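As a rough illustration of the key ingredients, here is a minimal NumPy sketch of a single polynomial harmonic node: frequencies are picked from the discrete Fourier transform (they need not be integer multiples of a base frequency), and the resulting harmonics feed a second-order activation polynomial. This is an assumption-laden sketch, not the authors' system; all function names are hypothetical.

```python
import numpy as np

# Illustrative sketch only (not the authors' implementation): one
# "polynomial harmonic" node whose inputs are sines/cosines at
# DFT-extracted frequencies, fed into a 2nd-order activation polynomial.

def dominant_frequencies(series, k=2):
    """Pick the k strongest non-DC DFT bins; these need not be multiples."""
    spectrum = np.abs(np.fft.rfft(series))
    spectrum[0] = 0.0                        # ignore the DC component
    bins = np.sort(np.argsort(spectrum)[-k:])
    return bins / len(series)                # normalized frequency (cycles/sample)

def harmonic_features(t, freqs):
    """Harmonic inputs at the extracted, possibly irregular frequencies."""
    return np.concatenate([(np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t))
                           for f in freqs])

def poly_features(x):
    """Monomials of a 2nd-order activation polynomial: bias, linear, pairwise."""
    quad = np.outer(x, x)[np.triu_indices(len(x))]   # x_i * x_j for i <= j
    return np.concatenate([[1.0], x, quad])

# Usage: fit one node's polynomial weights to a toy series by least squares.
t = np.arange(200)
series = np.sin(2 * np.pi * 0.013 * t) + 0.5 * np.cos(2 * np.pi * 0.037 * t)
freqs = dominant_frequencies(series, k=2)
Z = np.array([poly_features(harmonic_features(ti, freqs)) for ti in t])
w, *_ = np.linalg.lstsq(Z, series, rcond=None)
prediction = Z @ w                           # node output over the series
```

In the full system, genetic programming would search over tree-structured compositions of such nodes rather than fitting a single node in isolation.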


1998 · Vol 10 (8) · pp. 2159-2173
Author(s):  
Peter L. Bartlett
Vitaly Maiorov
Ron Meir

We compute upper and lower bounds on the VC dimension and pseudo-dimension of feedforward neural networks with piecewise polynomial activation functions. We show that if the number of layers is fixed, then the VC dimension and pseudo-dimension grow as W log W, where W is the number of parameters in the network. This contrasts with the case of an unbounded number of layers, where the VC dimension and pseudo-dimension grow as W². We combine our results with recently established approximation error rates to derive error bounds for regression estimation by piecewise polynomial networks with unbounded weights.
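Restating the contrast in asymptotic notation (reading the paper's matching upper and lower bounds as Θ is an interpretation; exact constants and depth dependence are in the paper):

```latex
% Growth of the VC dimension / pseudo-dimension in the number of
% parameters W, per the abstract (asymptotic notation added here).
\[
  \mathrm{VCdim} = \Theta\!\bigl(W \log W\bigr) \ \ \text{(depth fixed)}
  \qquad \text{vs.} \qquad
  \mathrm{VCdim} = \Theta\!\bigl(W^{2}\bigr) \ \ \text{(depth unbounded)}.
\]
```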

