Asymptotics of the Sum of a Sine Series with a Convex Slowly Varying Sequence of Coefficients

Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2252
Author(s):  
Aleksei Solodov

We study the asymptotic behavior in a neighborhood of zero of the sum of a sine series g(b,x) = ∑_{k=1}^∞ b_k sin kx whose coefficients constitute a convex slowly varying sequence b. The main term of the asymptotics of the sum of such a series was obtained by Aljančić, Bojanić, and Tomić. To estimate the deviation of g(b,x) from the main term of its asymptotics b_{m(x)}/x, m(x) = [π/x], Telyakovskiĭ used the piecewise-continuous function σ(b,x) = x ∑_{k=1}^{m(x)−1} k²(b_k − b_{k+1}). He showed that the difference g(b,x) − b_{m(x)}/x in some neighborhood of zero admits a two-sided estimate in terms of the function σ(b,x) with absolute constants independent of b. Earlier, the author found the sharp values of these constants. In the present paper, the asymptotics of the function g(b,x) on the class of convex slowly varying sequences in the regular case is obtained.
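As a rough numerical illustration (not taken from the paper), the sketch below compares a truncated sum of the series g(b,x) with the main term b_{m(x)}/x for the convex slowly varying sequence b_k = 1/log(k+1), and also evaluates Telyakovskiĭ's function σ(b,x). The particular sequence, the truncation level, and the period-averaging trick are all assumptions of this sketch.

```python
import numpy as np

def g_approx(b, x, N=200_000):
    # Truncated sum of the sine series; averaging the last partial sums over
    # one oscillation period (~2*pi/x terms) damps the truncation oscillation.
    k = np.arange(1, N + 1)
    partial = np.cumsum(b(k) * np.sin(k * x))
    window = int(np.ceil(2 * np.pi / x))
    return partial[-window:].mean()

b = lambda k: 1.0 / np.log(k + 1.0)      # convex, slowly varying sequence

x = 0.05
m = int(np.pi / x)                        # m(x) = [pi/x]
main = b(m) / x                           # main term b_{m(x)}/x
g = g_approx(b, x)
k = np.arange(1, m)
sigma = x * np.sum(k**2 * (b(k) - b(k + 1)))   # Telyakovskii's sigma(b, x)
print(g / main, sigma)                    # ratio of order 1; sigma > 0
```

At this moderate x the ratio is only of order 1; the asymptotic relation g(b,x) ~ b_{m(x)}/x asserts that it tends to 1 as x → 0+, which for a slowly varying sequence happens very slowly.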

2014 ◽  
Vol 587-589 ◽  
pp. 2303-2306 ◽  
Author(s):  
Li Mian Zhao ◽  
Ji Ting Huang

In this paper, we discuss a class of linear integral equations with a piecewise continuous function. First, we convert the integral equation into a differential equation with an initial condition. Second, the differential equation is solved by the variation-of-constants formula and integration by parts. An explicit solution of the integral equation is then given.
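The method can be illustrated on a simple hypothetical equation of this type, the linear Volterra equation x(t) = f(t) + λ ∫_0^t x(s) ds: differentiating gives x′(t) = λx(t) + f′(t) with x(0) = f(0), which variation of constants solves explicitly. The sketch below verifies the resulting closed form against the original integral equation for the assumed choices f(t) = t², λ = 1 (not the specific equation treated in the paper).

```python
import numpy as np

# Illustrative Volterra equation: x(t) = f(t) + lam * ∫_0^t x(s) ds.
# Differentiating gives x'(t) = lam*x(t) + f'(t), x(0) = f(0); variation of
# constants then yields, for f(t) = t^2 and lam = 1, x(t) = 2e^t - 2t - 2.
lam = 1.0
f = lambda t: t**2
x = lambda t: 2.0 * np.exp(t) - 2.0 * t - 2.0

t = np.linspace(0.0, 1.0, 20_001)
xt = x(t)
h = t[1] - t[0]
# cumulative trapezoid rule for ∫_0^t x(s) ds
integral = np.concatenate(([0.0], np.cumsum((xt[1:] + xt[:-1]) * h / 2.0)))
residual = np.max(np.abs(xt - f(t) - lam * integral))
print(residual)   # near 0: the closed form satisfies the integral equation
```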


2012 ◽  
Vol 1 (1) ◽  
pp. 39
Author(s):  
Muhammad Wakhid Musthofa ◽  
Ari Suparwanto

In this paper the observability of the continuous descriptor system E ẋ(t) = A x(t) + B u(t), x(0) = x0, will be studied, where E, A, and B are constant matrices that may be singular and u(t) is a piecewise continuous function that is (m−1)-times differentiable, m being the degree of nilpotency of the system. Two definitions of observability for descriptor systems, together with the characterizations given by Dai and Yip, will be discussed; the relationship and comparison between these characterizations will then be presented.


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Kusano Takaŝi ◽  
Jelena V. Manojlović

Abstract: We study the asymptotic behavior of eventually positive solutions of the second-order half-linear differential equation (p(t)|x′|^α sgn x′)′ + q(t)|x|^α sgn x = 0, where q is a continuous function which may take both positive and negative values in any neighborhood of infinity and p is a positive continuous function satisfying one of the conditions ∫_a^∞ ds/p(s)^{1/α} = ∞ or ∫_a^∞ ds/p(s)^{1/α} < ∞. The asymptotic formulas for generalized regularly varying solutions are established using the Karamata theory of regular variation.
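As a hedged numerical sketch (not from the paper), one can integrate an equation of this form and check a regularly varying solution directly. For the Euler-type test case p(t) ≡ 1, α = 3, q(t) = −24 t^{−4} (all assumed choices), x(t) = t² is an exact solution, regularly varying of index 2. Writing v = p|x′|^α sgn x′ turns the equation into a first-order system.

```python
import numpy as np

alpha = 3.0
p = lambda t: 1.0
q = lambda t: -24.0 * t**-4   # assumed test case; here q < 0 near infinity

def rhs(t, y):
    # y = (x, v), v = p|x'|^alpha sgn x'  =>  x' = sgn(v) (|v|/p)^(1/alpha)
    x, v = y
    dx = np.sign(v) * (abs(v) / p(t)) ** (1.0 / alpha)
    dv = -q(t) * abs(x) ** alpha * np.sign(x)
    return np.array([dx, dv])

def rk4(rhs, t0, t1, y0, n=4000):
    # classical fourth-order Runge-Kutta integration
    h = (t1 - t0) / n
    t, y = t0, np.array(y0, dtype=float)
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2)
        k4 = rhs(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# x(t) = t^2 gives x(1) = 1, x'(1) = 2, so v(1) = 2**alpha = 8
x5, v5 = rk4(rhs, 1.0, 5.0, [1.0, 8.0])
print(x5)   # close to x(5) = 25
```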


2013 ◽  
Vol 2013 ◽  
pp. 1-6 ◽  
Author(s):  
Habib Mâagli ◽  
Noureddine Mhadhebi ◽  
Noureddine Zeddini

We establish the existence and uniqueness of a positive solution for the fractional boundary value problem , with the condition , where , and is a nonnegative continuous function on that may be singular at or .


2014 ◽  
Vol 2014 ◽  
pp. 1-5
Author(s):  
Hongjian Xi ◽  
Taixiang Sun ◽  
Bin Qin ◽  
Hui Wu

We consider the following difference equation: x_{n+1} = x_{n−1} g(x_n), n = 0, 1, …, where the initial values x_{−1}, x_0 ∈ [0, +∞) and g : [0, +∞) → (0, 1] is a strictly decreasing continuous surjective function. We show the following. (1) Every positive solution of this equation converges to a, 0, a, 0, … or 0, a, 0, a, … for some a ∈ [0, +∞). (2) Assume a ∈ (0, +∞). Then the set of initial conditions (x_{−1}, x_0) ∈ (0, +∞) × (0, +∞) such that the positive solutions of this equation converge to a, 0, a, 0, … or 0, a, 0, a, … is a unique strictly increasing continuous function or an empty set.
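A quick simulation (with the illustrative choice g(x) = e^{−x}, which is strictly decreasing, continuous, and maps [0, +∞) onto (0, 1], and arbitrary positive initial values) shows the claimed limiting pattern: one subsequence tends to 0 and the other to some a > 0.

```python
import math

g = lambda x: math.exp(-x)    # strictly decreasing, continuous, onto (0, 1]

x_prev, x_cur = 1.0, 2.0      # x_{-1}, x_0: arbitrary positive initial values
seq = [x_prev, x_cur]
for _ in range(400):
    x_prev, x_cur = x_cur, x_prev * g(x_cur)   # x_{n+1} = x_{n-1} g(x_n)
    seq.append(x_cur)

sub0 = seq[0::2]   # x_{-1}, x_1, x_3, ... : tends to 0 here
sub1 = seq[1::2]   # x_0,  x_2, x_4, ... : tends to some a > 0 here
print(sub0[-1], sub1[-1])
```

With these initial values the solution settles into the 0, a, 0, a, … pattern; swapping which subsequence vanishes just depends on the starting point.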


Author(s):  
Ming Zhang

Real-world financial data is often discontinuous and non-smooth. Accuracy becomes a problem if we attempt to simulate such functions with neural networks; neural network group models can perform this task with greater accuracy. Both the Polynomial Higher Order Neural Network Group (PHONNG) and the Trigonometric polynomial Higher Order Neural Network Group (THONNG) models are studied in this chapter. These PHONNG and THONNG models are open-box, convergent models capable of approximating any piecewise continuous function to any degree of accuracy. Moreover, they are capable of handling higher-frequency, higher-order nonlinear, and discontinuous data. Results obtained using the PHONNG and THONNG financial simulators are presented, which confirm that the group models converge without difficulty and are considerably more accurate (0.7542%–1.0715%) than single neural network models such as the Polynomial Higher Order Neural Network (PHONN) and the Trigonometric polynomial Higher Order Neural Network (THONN).


Author(s):  
Richard Earl

Most functions have several numerical inputs and produce more than one numerical output. But even in general, continuity requires that we can constrain the difference in outputs by suitably constraining the difference in inputs. 'The plane and other spaces' asks more general questions such as 'is the distance a car has travelled a continuous function of its speed?' This is a subtle question, as neither the input nor the output is a number; rather, both are functions of time, with input the speed function s(t) and output the distance function d(t). In answering the question, it considers continuity between metric spaces, equivalent metrics, open sets, convergence, and compactness and connectedness, the last two being topological invariants that can be used to differentiate between spaces.


1981 ◽  
Vol 12 (2) ◽  
pp. 139-140 ◽  
Author(s):  
Hans U. Gerber

Zehnwirth (1981) contains some flaws. If H(X) = E[X e^{hX}]/E[e^{hX}] is the Esscher premium for a risk X, the loading is H(X) − E(X) and not h as Zehnwirth states. The first and third formulas on page 78 are wrong, since o(h) is a quantity such that o(h)/h → 0 as h → 0. A correct statement would have been that H(X) → E(X) as h → 0, or simply that H(X) is a continuous function of the parameter h. However, this continuity is not uniform in all risks, which is illustrated by (3). No matter how small h is, there is always an X such that the difference between H(X) and E(X) is substantial. In view of this, what is the meaning of a statement like "… the Esscher premium is a small perturbation of the linearized credibility premium"?
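Gerber's point, that the continuity of H(X) in h is not uniform over risks, is easy to see numerically. Assuming the standard definition H(X) = E[X e^{hX}]/E[e^{hX}], the sketch below fixes a small h and scales up a two-point risk; the loading H(X) − E(X) can be made as large as one likes (the distribution and the values of h and M are illustrative assumptions).

```python
import numpy as np

def esscher_premium(values, probs, h):
    # H(X) = E[X e^{hX}] / E[e^{hX}] for a discrete risk X
    w = probs * np.exp(h * values)
    return float(np.dot(values, w) / np.sum(w))

h = 0.001                                   # a fixed "small" parameter
for M in (10.0, 100_000.0):                 # scale of the two-point risk
    values = np.array([0.0, M])
    probs = np.array([0.99, 0.01])
    loading = esscher_premium(values, probs, h) - np.dot(values, probs)
    print(M, loading)   # tiny loading for small M, huge loading for large M
```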

