An alternative algorithm for regularization of noisy volatility calibration in Finance

2016 ◽  
Vol. 23 - 2016 - Special... ◽
Author(s):  
Medarhri Ibtissam ◽  
Aboulaich Rajae ◽  
Debit Naima

This contribution is an extension of the work initiated in [1], presenting a strategy for the calibration of the local volatility. Following Morozov's discrepancy principle [6], the Tikhonov regularization problem introduced in [7] is recast as an inequality-constrained minimization problem. An Uzawa procedure is proposed to replace the latter by a sequence of unconstrained problems, each handled with the modified Tikhonov regularization procedure of [1]. Numerical tests confirm the consistency of the approach and a significant speed-up of the local volatility determination process.
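As a rough illustration of the Uzawa idea mentioned in this abstract (replacing an inequality-constrained minimization by a sequence of unconstrained problems with a projected multiplier update), here is a minimal sketch on a toy quadratic program. The function names and parameters are hypothetical, and the inner solver is plain gradient descent rather than the modified Tikhonov procedure of [1]:

```python
# Uzawa loop for  min J(x)  s.t.  g(x) <= 0,  via the Lagrangian
# L(x, lam) = J(x) + lam * g(x) and the projected update lam <- max(0, lam + rho*g(x)).
import numpy as np

def uzawa(J_grad, g, g_grad, x0, rho=0.5, inner_steps=200, outer_steps=50, lr=0.05):
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(outer_steps):
        # Inner loop: unconstrained minimization of L(., lam) by gradient descent.
        for _ in range(inner_steps):
            x = x - lr * (J_grad(x) + lam * g_grad(x))
        # Multiplier update, projected onto lam >= 0.
        lam = max(0.0, lam + rho * g(x))
    return x, lam

# Toy problem: minimize ||x - c||^2 subject to x[0] + x[1] <= 1.
c = np.array([1.0, 1.0])
x, lam = uzawa(
    J_grad=lambda x: 2 * (x - c),
    g=lambda x: x[0] + x[1] - 1.0,
    g_grad=lambda x: np.array([1.0, 1.0]),
    x0=np.zeros(2),
)
```

For this toy problem the KKT conditions give the minimizer x = (0.5, 0.5) with multiplier 1, which the loop approaches geometrically.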

2018 ◽  
Vol 24 (3) ◽  
pp. 965-983 ◽  
Author(s):  
Roberto Ferretti ◽  
Achille Sassi

The mathematical framework of hybrid systems is a recent and general tool for treating control systems whose control actions are of a heterogeneous nature. In this paper, we construct and test a semi-Lagrangian numerical scheme for solving the Dynamic Programming equation of an infinite horizon optimal control problem for hybrid systems. In order to speed up convergence, we also propose and analyze an acceleration technique based on policy iteration. Finally, we validate the approach via some numerical tests in low dimension.
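The acceleration idea named in this abstract, policy iteration, can be shown on a small discounted Markov decision process. This is a generic toy sketch, not the authors' semi-Lagrangian discretization of the hybrid Dynamic Programming equation:

```python
# Policy iteration on a small discounted MDP: alternate exact policy
# evaluation (a linear solve) with a greedy improvement step.
import numpy as np

def policy_iteration(P, cost, gamma=0.9):
    """P: (A, S, S) transition matrices; cost: (A, S) stage costs (minimization)."""
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = c_pi exactly.
        P_pi = P[policy, np.arange(S), :]
        c_pi = cost[policy, np.arange(S)]
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
        # Policy improvement: greedy step on the Q-values.
        q = cost + gamma * P @ v          # shape (A, S)
        new_policy = q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            return v, policy
        policy = new_policy

# Toy 2-state, 2-action MDP: action 0 always moves to state 0, action 1 to state 1.
P = np.zeros((2, 2, 2))
P[0, :, 0] = 1.0
P[1, :, 1] = 1.0
cost = np.array([[1.0, 2.0],
                 [3.0, 0.5]])
v, pol = policy_iteration(P, cost)
```

Because each evaluation step solves the linear system exactly, the number of outer iterations is typically far smaller than for plain value iteration, which is the essence of the speed-up.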


2009 ◽  
Vol. 11, 2009 - Special... ◽
Author(s):  
Sofia Douda ◽  
Abdelhakim El Imrani ◽  
Mohammed Limouri

Fractal image compression offers fast decoding and resolution independence, but suffers from a slow encoding phase. In the present study, we propose to reduce the computational complexity by using two domain pools instead of one and encoding the image in two steps (the AP2D approach). AP2D can be applied to classification methods or to domain-pool reduction methods, yielding a further reduction of the encoding phase. Indeed, experimental results showed that AP2D speeds up encoding: the time reduction exceeded 65% when AP2D was applied to Fisher classification and 72% when it was applied to exhaustive search. Image quality was not altered by this approach, while the compression ratio was slightly enhanced.
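To see why the encoding phase dominates, here is a hypothetical sketch of the costly step in fractal encoding: an exhaustive search, for each range block, over downsampled domain blocks for the best affine match r ≈ s·d + o. This is the O(#ranges × #domains) loop that strategies like AP2D aim to shrink; the two-pool, two-step scheme itself is not reproduced here:

```python
# Exhaustive-search fractal encoding of a grayscale image (toy version).
import numpy as np

def encode_exhaustive(img, rb=4):
    db = 2 * rb                              # domain blocks are twice the range size
    H, W = img.shape
    # Build the domain pool: non-overlapping domain blocks, downsampled 2x.
    domains = []
    for y in range(0, H - db + 1, db):
        for x in range(0, W - db + 1, db):
            d = img[y:y+db, x:x+db].reshape(rb, 2, rb, 2).mean(axis=(1, 3))
            domains.append(((y, x), d))
    code = []
    for y in range(0, H, rb):
        for x in range(0, W, rb):
            r = img[y:y+rb, x:x+rb].astype(float)
            best = None
            for pos, d in domains:           # exhaustive search: the slow part
                # Least-squares contrast s and brightness o for r ~ s*d + o.
                dm, rm = d.mean(), r.mean()
                var = ((d - dm) ** 2).sum()
                s = 0.0 if var == 0 else ((d - dm) * (r - rm)).sum() / var
                o = rm - s * dm
                err = ((s * d + o - r) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            code.append(((y, x), best[1], best[2], best[3]))
    return code

img = np.arange(256, dtype=float).reshape(16, 16)
code = encode_exhaustive(img)
```

Splitting the search across two smaller pools, as AP2D does, directly attacks the inner loop over `domains`.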


2015 ◽  
Vol. 4, Number 1, Special Issue... ◽
Author(s):  
Bernard Senach ◽  
Anne-Laure Negri

For over 40 years it has been common knowledge that industrial society must reduce its energy consumption. Most people are now aware that this change is necessary; however, commitment to action remains difficult and there is substantial work to be done. Attention has turned to the human and social sciences, as a deeper understanding of the determinants of behavior and the use of influence principles could help speed up behavior change. New technical devices combining game design, interaction techniques and persuasion have emerged, but the field is still in its infancy. This article gives a glimpse of a toolbox for designing and evaluating interactive persuasive devices and discusses five main challenges. This work is extended by two other papers: Negri et Senach (2015b) provide a first grid of persuasion principles, and in Senach et Negri (2015c) this grid is applied to assess the persuasive properties of an energy challenge within a company.


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. T41-T51 ◽  
Author(s):  
Tao Xu ◽  
Guoming Xu ◽  
Ergen Gao ◽  
Yingchun Li ◽  
Xianyi Jiang ◽  
...  

We propose using a set of blocks to approximate geologically complex media that cannot be well described by layered models. Interfaces between blocks are triangulated to prevent overlaps or gaps often produced by other techniques, such as B-splines, and to speed up the calculation of intersection points between a ray and block interfaces. We also use a smoothing algorithm to make the normal vector of each triangle continuous at the boundary, so that ray tracing can be performed with stability and accuracy. Based on Fermat’s principle, we perturb an initial raypath between two points, generally obtained by shooting, with a segmentally iterative ray-tracing (SIRT) method. Intersection points on a ray are updated in sequence, instead of simultaneously, because the number of new intersection points may be increased or decreased during the iteration process. To improve convergence speed, we update the intersection points by a first-order explicit formula instead of traditional iterative methods. Only transmitted and reflected waves are considered. Numerical tests demonstrate that the combination of block modeling and segmentally iterative ray tracing is effective in implementing kinematic two-point ray tracing in complex 3D media.
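The Fermat-principle perturbation at the heart of bending-type schemes like SIRT can be illustrated on the simplest possible model: a single flat interface, where the travel time is minimized over one intersection abscissa. This is a hypothetical toy, not the authors' segmentally iterative update on triangulated block interfaces:

```python
# Travel-time minimization across one flat interface (Fermat's principle).
import math

def trace_two_layers(xr, z1, zr, v1, v2, tol=1e-12):
    """Source at (0, 0), receiver at (xr, zr), interface at depth z1 < zr."""
    def dtdx(x):
        # Derivative of t(x) = len1/v1 + len2/v2 in the intersection abscissa x;
        # it is monotone on (0, xr), so bisection suffices.
        l1 = math.hypot(x, z1)
        l2 = math.hypot(xr - x, zr - z1)
        return x / (v1 * l1) - (xr - x) / (v2 * l2)
    lo, hi = 0.0, xr
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dtdx(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical velocities/geometry: 1500 m/s over 3000 m/s, interface at 1 km.
x = trace_two_layers(xr=2000.0, z1=1000.0, zr=2000.0, v1=1500.0, v2=3000.0)
```

At the minimizer, Snell's law sin θ₁/v₁ = sin θ₂/v₂ holds, which is exactly the stationarity condition dt/dx = 0; the paper's first-order explicit update plays the role the bisection plays here, but for many intersection points updated in sequence.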


2002 ◽  
Vol 05 (06) ◽  
pp. 619-643 ◽  
Author(s):  
YVES ACHDOU ◽  
OLIVIER PIRONNEAU

The aim of this paper is to propose several algorithms for finding the local volatility from partial observations of the price of a European vanilla option. Dupire's equation is used. The local volatility and the price of the option are discretized by finite elements, with highly nonuniform meshes and a coarser mesh for the local volatility. The inverse problem is formulated as a least-squares problem, and the minimization is done by an interior point method. The gradient of the cost function is computed exactly by solving an adjoint problem. A multilevel approach is proposed for accelerating the computations. A suboptimal time-stepping algorithm is also considered. Numerical tests are supplied for all the proposed algorithms.
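For reference, Dupire's forward equation, in its usual form for a European call C(K, T) with zero dividends, and the local volatility it implies are:

```latex
% Dupire's forward equation and the implied local volatility (zero dividends).
\frac{\partial C}{\partial T}
  = \frac{1}{2}\,\sigma^2(K,T)\,K^2\,\frac{\partial^2 C}{\partial K^2}
    - rK\,\frac{\partial C}{\partial K},
\qquad
\sigma^2(K,T)
  = \frac{\dfrac{\partial C}{\partial T} + rK\,\dfrac{\partial C}{\partial K}}
         {\dfrac{1}{2}\,K^2\,\dfrac{\partial^2 C}{\partial K^2}}.
```

The inverse problem is ill-posed precisely because the second strike derivative in the denominator is small and noisy away from the money, which is why the paper regularizes and works from partial price observations.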


Author(s):  
Brian Cross

A relatively new entry in the field of microscopy is the Scanning X-Ray Fluorescence Microscope (SXRFM). Using this type of instrument (e.g. the Kevex Omicron X-ray Microprobe), one can obtain multiple elemental x-ray images from the analysis of heterogeneous materials. The SXRFM obtains images by collimating an x-ray beam (e.g. 100 μm diameter) and then scanning the sample with a high-speed x-y stage. To speed up image acquisition, data are acquired "on the fly" by slew-scanning the stage along the x-axis, like a TV or SEM scan. To reduce the overhead from fly-back, the images can be acquired by bi-directional scanning of the x-axis, which leaves very little overhead for re-positioning the sample stage. The image acquisition rate is dominated by the x-ray acquisition rate; the total x-ray image acquisition rate of the SXRFM is therefore very comparable to that of an SEM. Although the x-ray spatial resolution of the SXRFM is worse than that of an SEM (say 100 vs. 2 μm), it offers several other advantages.
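The bi-directional scan order described above can be sketched as a boustrophedon ("serpentine") traversal: slew along x, step y, then reverse direction, so almost no time is lost to fly-back. This is a hypothetical helper for illustration, not instrument firmware:

```python
# Generate pixel visit order for a bi-directional (serpentine) raster scan.
def serpentine(nx, ny):
    for row in range(ny):
        # Even rows scan left-to-right, odd rows right-to-left.
        cols = range(nx) if row % 2 == 0 else range(nx - 1, -1, -1)
        for col in cols:
            yield row, col

order = list(serpentine(3, 2))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```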


Author(s):  
A. G. Jackson ◽  
M. Rowe

Diffraction intensities from intermetallic compounds are, in the kinematic approximation, proportional to the scattering amplitude from the element doing the scattering. More detailed calculations have shown that site symmetry and occupation by various atom species also affect the intensity in a diffracted beam [1]. Hence, by measuring the intensities of beams, or their ratios, the occupancy can be estimated. Measurement of the intensity values also allows structure calculations to be made to determine the spatial distribution of the potentials doing the scattering. Thermal effects are also present as a background contribution. Inelastic effects such as loss or absorption/excitation complicate the intensity behavior, and dynamical theory is required to estimate the intensity value. The dynamic range of currents in diffracted beams can be 10⁴ or 10⁵:1. Hence, detection of such information requires a means for collecting the intensity over a signal-to-noise range beyond that obtainable with a single film plate, which has an S/N of about 10³:1. Although such a collection system is not available currently, a simple system consisting of instrumentation on an existing STEM can be used as a proof of concept; it has an S/N of about 255:1, limited by the 8-bit pixel attributes used in the electronics. Use of 24-bit pixel attributes would easily allow the desired noise range to be attained in the processing instrumentation. The S/N of the scintillator used by the photoelectron sensor is about 10⁶ to 1, well beyond the S/N goal. The trade-off that must be made is the time for acquiring the signal: a pattern can be obtained in seconds using film plates, compared to 10 to 20 minutes for a pattern acquired using the digital scan. Parallel acquisition would, of course, speed up this process immensely.
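The bit-depth figures quoted above follow from simple arithmetic: an n-bit pixel distinguishes 2ⁿ levels, so its best-case signal range is about (2ⁿ − 1):1.

```python
# Best-case dynamic range of an n-bit pixel: (2**n - 1):1.
def dynamic_range(bits):
    return 2 ** bits - 1

assert dynamic_range(8) == 255     # the 8-bit ceiling cited above (255:1)
print(dynamic_range(24))           # 24-bit: 16777215, comfortably above 10^5:1
```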


2004 ◽  
Vol 63 (1) ◽  
pp. 17-29 ◽  
Author(s):  
Friedrich Wilkening ◽  
Claudia Martin

Children 6 and 10 years of age and adults were asked how fast a toy car had to be to catch up with another car, the latter moving with a constant speed throughout. The speed change was required either after half of the time (linear condition) or half of the distance (nonlinear condition), and responses were given either on a rating scale (judgment condition) or by actually producing the motion (action condition). In the linear condition, the data patterns for both judgments and actions were in accordance with the normative rule at all ages. This was not true for the nonlinear condition, where children’s and adults’ judgment and also children’s action patterns were linear, and only adults’ action patterns were in line with the nonlinearity principle. Discussing the reasons for the misconceptions and for the action-judgment dissociations, a claim is made for a new view on the development of children’s concepts of time and speed.

