A Modified SSOR Preconditioning Strategy for Helmholtz Equations

2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Shi-Liang Wu ◽  
Cui-Xia Li

The finite-difference discretization of Helmholtz equations usually leads to large sparse linear systems. Since the coefficient matrix is frequently indefinite, these systems are difficult to solve iteratively. In this paper, a modified symmetric successive overrelaxation (MSSOR) preconditioning strategy is constructed from the coefficient matrix and employed to speed up the convergence rate of iterative methods. The idea is to increase the values of the diagonal elements of the coefficient matrix to obtain better preconditioners for the original linear systems. Compared with the SSOR preconditioner, the MSSOR preconditioner improves the convergence rate of iterative methods at no additional computational cost. Numerical results demonstrate that this method can reduce both the number of iterations and the computational time significantly, with low cost for constructing and applying the preconditioners.
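The diagonal-enlargement idea can be sketched in a few lines of NumPy. The shift value, the relaxation parameter, and the 1-D model problem below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def mssor_preconditioner(A, omega=1.0, shift=0.0):
    """(Modified) SSOR preconditioner M ~ A. With shift=0 this is plain SSOR;
    a positive shift enlarges the diagonal, mimicking the paper's
    modification (the concrete shift choice here is an assumption)."""
    D = np.diag(np.diag(A)) + shift * np.eye(A.shape[0])
    L = np.tril(A, -1)   # strictly lower triangular part
    U = np.triu(A, 1)    # strictly upper triangular part
    Dinv = np.linalg.inv(D)
    return (D + omega * L) @ Dinv @ (D + omega * U) / (omega * (2.0 - omega))

# 1-D Helmholtz-like model problem discretized by finite differences
n = 50
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) - (0.5 / n**2) * np.eye(n)
M = mssor_preconditioner(A, omega=1.0, shift=0.1)
cond_plain = np.linalg.cond(A)
cond_prec = np.linalg.cond(np.linalg.solve(M, A))   # preconditioned system
```

For this small model the preconditioned matrix M⁻¹A is far better conditioned than A itself, which is what drives the faster Krylov convergence.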

2011 ◽  
Vol 11 (04) ◽  
pp. 571-587 ◽  
Author(s):  
WILLIAM ROBSON SCHWARTZ ◽  
HELIO PEDRINI

Fractal image compression is one of the most promising techniques for image compression due to advantages such as resolution independence and fast decompression. It exploits the self-similarity of natural scenes to remove redundancy and obtain high compression rates with less quality degradation than traditional compression methods. The main drawback of fractal compression is its computationally intensive encoding process, caused by the need to search for regions of high similarity in the image. Several approaches have been developed to reduce the computational cost of locating similar regions. In this work, we propose a method based on robust feature descriptors to speed up the encoding time. Robust features provide more discriminative and representative information for regions of the image. When regions are better represented, the search for similar parts of the image can focus only on the most likely matching candidates, which reduces the computational time. Our experimental results show that robust feature descriptors reduce the encoding time while maintaining high compression rates and reconstruction quality.
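The candidate-pruning idea can be illustrated with a toy encoder in NumPy. The (mean, standard deviation) descriptor below is a simple stand-in for the robust descriptors of the paper, and the block sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))

B = 8
# range blocks: non-overlapping 8x8 tiles
ranges = [img[i:i+B, j:j+B] for i in range(0, 64, B) for j in range(0, 64, B)]
# domain blocks: 16x16 tiles, downsampled to 8x8 by 2x2 averaging
doms = []
for i in range(0, 64 - 2*B + 1, 4):
    for j in range(0, 64 - 2*B + 1, 4):
        d = img[i:i+2*B, j:j+2*B]
        doms.append(d.reshape(B, 2, B, 2).mean(axis=(1, 3)))
doms = np.array(doms)

def feat(b):
    """Cheap descriptor standing in for the paper's robust features."""
    return np.array([b.mean(), b.std()])

dom_feats = np.array([feat(d) for d in doms])

def best_match_pruned(r, k=10):
    # examine only the k domain blocks whose descriptors are closest,
    # instead of all len(doms) candidates
    dist = np.linalg.norm(dom_feats - feat(r), axis=1)
    cand = np.argsort(dist)[:k]
    errs = [np.sum((doms[c] - r)**2) for c in cand]
    return cand[int(np.argmin(errs))], len(cand)

idx, examined = best_match_pruned(ranges[0])
```

Here the expensive pixel-wise comparison runs on 10 candidates instead of all 169 domain blocks; better descriptors let the pruned search keep the true best match more often.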


Aerospace ◽  
2021 ◽  
Vol 8 (12) ◽  
pp. 398
Author(s):  
Angelos Kafkas ◽  
Spyridon Kilimtzidis ◽  
Athanasios Kotzakolios ◽  
Vassilis Kostopoulos ◽  
George Lampeas

Efficient optimization is a prerequisite to realize the full potential of an aeronautical structure. The success of an optimization framework is predominantly influenced by the ability to capture all relevant physics. Furthermore, high computational efficiency allows a greater number of runs during the design optimization process to support decision-making. The efficiency can be improved by the selection of highly optimized algorithms and by reducing the dimensionality of the optimization problem by formulating it using a finite number of significant parameters. A plethora of variable-fidelity tools, dictated by each design stage, are commonly used, ranging from costly high-fidelity to low-cost, low-fidelity methods. Unfortunately, despite rapid solution times, an optimization framework utilizing low-fidelity tools does not necessarily capture the physical problem accurately. At the same time, high-fidelity solution methods incur a very high computational cost. Aiming to bridge the gap and combine the best of both worlds, a multi-fidelity optimization framework was constructed in this research paper. In our approach, the low-fidelity modules, especially the equivalent-plate structural representation capable of drastically reducing the associated computational time, form the backbone of the optimization framework, and a MIDACO optimizer is tasked with providing an initial optimized design. The higher-fidelity modules are then employed to explore possible further gains in performance. The developed framework was applied to a benchmark airliner wing. As demonstrated, a reasonable mass reduction was obtained for a current state-of-the-art configuration.
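The two-stage logic, a cheap global search followed by local refinement on the expensive model, can be sketched on a toy 1-D problem. Both objective functions and the grid-based global stage below are stand-ins for the framework's actual solvers and MIDACO optimizer:

```python
import numpy as np

def f_hi(x):
    """Toy 'expensive' high-fidelity model."""
    return (x - 1.3)**2 + 0.1*np.sin(8*x)

def f_lo(x):
    """Cheap low-fidelity approximation; slightly biased, as low-fidelity
    models typically are."""
    return (x - 1.2)**2

# stage 1: global search on the cheap model (plays the optimizer's role)
xs = np.linspace(-5.0, 5.0, 201)
x0 = xs[np.argmin(f_lo(xs))]

# stage 2: local refinement on the expensive model near the cheap optimum
fine = np.linspace(x0 - 0.5, x0 + 0.5, 401)
x_star = fine[np.argmin(f_hi(fine))]
```

The refinement stage only ever evaluates the expensive model in a small neighborhood, which is the computational point of the multi-fidelity arrangement.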


Filomat ◽  
2017 ◽  
Vol 31 (5) ◽  
pp. 1441-1452
Author(s):  
Mehdi Dehghan ◽  
Marzieh Dehghani-Madiseh ◽  
Masoud Hajarian

Solving linear systems is a classical problem of engineering and numerical analysis, with applications across many fields of science and engineering. In this paper, we study efficient iterative methods, based on the diagonal and off-diagonal splitting of the coefficient matrix A, for solving the linear system Ax = b, where A ∈ ℂ^{n×n} is nonsingular and x, b ∈ ℂ^{n×m}. The new method is a two-parameter, two-step method that contains several iterative methods as special cases. Numerical examples are presented to illustrate the effectiveness of the new method.
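A generic two-parameter, two-step iteration built on the diagonal part of A can be sketched as follows; the paper's exact scheme differs in detail, so this is only an illustration of the splitting idea:

```python
import numpy as np

def two_step_diagonal_split(A, b, alpha=0.5, beta=0.5, tol=1e-10, maxit=500):
    """Illustrative two-parameter, two-step iteration: each sweep applies
    two damped Jacobi-style corrections using the diagonal D of A."""
    D_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b, dtype=float)
    for k in range(maxit):
        x = x + alpha * D_inv * (b - A @ x)   # first half-step
        x = x + beta  * D_inv * (b - A @ x)   # second half-step
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

# strictly diagonally dominant test system (guarantees convergence here)
rng = np.random.default_rng(1)
n = 30
A = rng.random((n, n)) + n * np.eye(n)
b = rng.random(n)
x, its = two_step_diagonal_split(A, b)
```

The two parameters (alpha, beta) can be tuned independently for the two half-steps, which is what gives such schemes their extra flexibility over a single-step splitting.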


2014 ◽  
Vol 13 (1) ◽  
Author(s):  
Jakub Kierzkowski

We present new iterative methods for solving the Sylvester equation, belonging to the class of SOR-like methods and based on the SOR (Successive Over-Relaxation) method for solving linear systems. We discuss the convergence characteristics of the methods. Numerical experimentation results are included, illustrating the theoretical results and some other noteworthy properties of the methods.
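The connection to linear-system relaxation can be sketched directly: in the Kronecker form (I⊗A + Bᵀ⊗I) vec(X) = vec(C) of the Sylvester equation AX + XB = C, a weighted Jacobi sweep (a simpler cousin of the paper's SOR-like schemes) only needs the diagonals of A and B:

```python
import numpy as np

def sylvester_relax(A, B, C, omega=1.0, tol=1e-10, maxit=1000):
    """Weighted-Jacobi relaxation for AX + XB = C. The entry (i, j) of the
    Kronecker operator's diagonal is a_ii + b_jj."""
    dA, dB = np.diag(A), np.diag(B)
    D = dA[:, None] + dB[None, :]          # diagonal of the Kronecker operator
    X = np.zeros_like(C, dtype=float)
    for k in range(maxit):
        R = C - A @ X - X @ B              # current residual
        X = X + omega * R / D              # elementwise relaxation step
        if np.linalg.norm(R) < tol:
            return X, k + 1
    return X, maxit

# diagonally dominant test data with a known solution
n = 8
rng = np.random.default_rng(2)
A = rng.random((n, n)) + n*np.eye(n)
B = rng.random((n, n)) + n*np.eye(n)
X_true = rng.random((n, n))
C = A @ X_true + X_true @ B
X, its = sylvester_relax(A, B, C, omega=0.9)
```

An SOR-like variant would additionally use the already-updated entries within each sweep, which is where the over-relaxation machinery of the paper enters.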


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1503
Author(s):  
Chengzhi Liu ◽  
Zhongyun Liu

The progressive iterative approximation (PIA) plays an important role in curve and surface fitting. By using the diagonally compensated reduction of the collocation matrix, we propose the preconditioned progressive iterative approximation (PPIA) to improve the convergence rate of the PIA. For most of the normalized totally positive bases, we show that the presented PPIA accelerates the convergence rate significantly in comparison with the weighted progressive iterative approximation (WPIA) and the progressive iterative approximation with different weights (DWPIA). Furthermore, we propose an inexact variant of the PPIA (IPPIA) to reduce its computational complexity. We introduce an inexact solver for the preconditioning system by employing some state-of-the-art iterative methods. Numerical results show that both the PPIA and the IPPIA converge faster than the WPIA and the DWPIA, while the elapsed CPU times of the PPIA and IPPIA are less than those of the WPIA and DWPIA.
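The core PIA update, moving each control point by the current fitting error, can be sketched with a cubic Bernstein basis (which is normalized and totally positive). The preconditioning step that distinguishes PPIA is omitted here:

```python
import numpy as np
from math import comb

def bernstein_collocation(n, ts):
    """B[i, j] = j-th Bernstein basis function of degree n-1 at ts[i]."""
    return np.array([[comb(n-1, j) * t**j * (1-t)**(n-1-j) for j in range(n)]
                     for t in ts])

n = 4
ts = np.linspace(0.0, 1.0, n)                    # fitting parameters
B = bernstein_collocation(n, ts)                 # collocation matrix
Q = np.column_stack([ts, np.sin(2*np.pi*ts)])    # data points to fit

# plain PIA: shift each control point by the current interpolation error
P = Q.copy()
for k in range(1000):
    R = Q - B @ P
    if np.linalg.norm(R) < 1e-12:
        break
    P = P + R
```

Convergence follows because the eigenvalues of the collocation matrix of a normalized totally positive basis lie in (0, 1]; PPIA accelerates exactly this iteration by preconditioning the system before iterating.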


2014 ◽  
Vol 2014 ◽  
pp. 1-6
Author(s):  
Cuiyu Liu ◽  
Chen-liang Li

The preconditioner presented by Hadjidimos et al. (2003) can improve the convergence rate of classical iterative methods for solving linear systems. In this paper, we extend this preconditioner to linear complementarity problems whose coefficient matrix is an M-matrix or an H-matrix, and present a multisplitting and Schwarz method. Convergence theorems are given. Numerical experiments show that the methods are efficient.
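As a baseline for what the preconditioned multisplitting method accelerates, here is a plain projected Gauss-Seidel iteration for an LCP with an M-matrix coefficient matrix; the preconditioner and Schwarz decomposition of the paper are not reproduced:

```python
import numpy as np

def projected_gauss_seidel(M, q, tol=1e-10, maxit=1000):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with w = Mz + q >= 0
    and z'w = 0. Converges when M is an M-matrix."""
    n = len(q)
    z = np.zeros(n)
    for k in range(maxit):
        z_old = z.copy()
        for i in range(n):
            # Gauss-Seidel update followed by projection onto z_i >= 0
            z[i] = max(0.0, z[i] - (M[i] @ z + q[i]) / M[i, i])
        if np.linalg.norm(z - z_old) < tol:
            return z, k + 1
    return z, maxit

# M-matrix example: tridiag(-1, 4, -1)
n = 10
M = 4*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
q = np.linspace(-1.0, 1.0, n)
z, its = projected_gauss_seidel(M, q)
w = M @ z + q
```

At the computed solution, nonnegativity of z and w and the complementarity z·w ≈ 0 all hold to the stopping tolerance.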


2018 ◽  
Vol 36 (3) ◽  
pp. 155-172
Author(s):  
Lakhdar Elbouyahyaoui ◽  
Mohammed Heyouni

In the present paper, we are concerned with weighted Arnoldi-like methods for solving large and sparse linear systems that have different right-hand sides but the same coefficient matrix. We first give detailed descriptions of the weighted Gram-Schmidt process and of Ruhe's variant of the weighted block Arnoldi algorithm. We also establish some theoretical results that link the iterates of the weighted block Arnoldi process to those of the non-weighted one. Then, to accelerate the convergence of the classical restarted block and seed GMRES methods, we introduce the weighted restarted block and seed GMRES methods. Numerical experiments with matrices from the Matrix Market repository and from the University of Florida sparse matrix collection are reported at the end of this work to compare the performance and show the effectiveness of the proposed methods.
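The weighted Gram-Schmidt process replaces the Euclidean inner product with ⟨x, y⟩_D = yᵀDx for a positive diagonal weight D. A minimal (non-block) weighted Arnoldi sketch:

```python
import numpy as np

def weighted_arnoldi(A, v, D, m):
    """Arnoldi with the weighted inner product <x, y>_D = y @ (D * x),
    where D is a positive diagonal weight stored as a vector."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.sqrt(v @ (D * v))
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # weighted modified Gram-Schmidt
            H[i, j] = V[:, i] @ (D * w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.sqrt(w @ (D * w))
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(3)
n = 20
A = rng.random((n, n))
D = rng.random(n) + 0.5          # positive weights
v = rng.random(n)
V, H = weighted_arnoldi(A, v, D, 5)
G = V.T @ (D[:, None] * V)       # D-Gram matrix should be the identity
```

The basis is D-orthonormal rather than orthonormal, and the usual Arnoldi relation A V_m = V_{m+1} H̄_m still holds; the block version of the paper orthogonalizes whole blocks of vectors at once.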


2015 ◽  
Vol 2015 ◽  
pp. 1-20 ◽  
Author(s):  
Enrico Bertolazzi ◽  
Marco Frego

A new preconditioner for complex symmetric linear systems, based on the Hermitian and skew-Hermitian splitting (HSS), is herein presented. It applies to conjugate orthogonal conjugate gradient (COCG) or conjugate orthogonal conjugate residual (COCR) iterative solvers and does not require any estimate of the spectrum of the coefficient matrix. An upper bound on the condition number of the preconditioned linear system is provided. To reduce the computational cost, the preconditioner is approximated with an inexact variant based on an incomplete Cholesky decomposition or on orthogonal polynomials. Numerical results show that the preconditioner and its inexact variant are efficient and robust solvers for this class of linear systems. A stability analysis of the inexact polynomial version completes the description of the preconditioner.
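The underlying HSS splitting alternates two shifted half-steps. The paper turns this splitting into a preconditioner for COCG/COCR rather than iterating it directly, so the following is only the basic Bai-Golub-Ng iteration on an assumed complex symmetric test matrix:

```python
import numpy as np

def hss_iteration(A, b, alpha, tol=1e-10, maxit=2000):
    """Basic HSS iteration for Ax = b: alternate solves with the shifted
    Hermitian part (alpha*I + H) and shifted skew-Hermitian part (alpha*I + S)."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(n)
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        x = np.linalg.solve(alpha*I + H, (alpha*I - S) @ x + b)
        x = np.linalg.solve(alpha*I + S, (alpha*I - H) @ x + b)
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

# complex symmetric test matrix A = W + iT with W positive definite
rng = np.random.default_rng(4)
n = 15
R1 = rng.random((n, n))
R2 = rng.random((n, n))
W = 5*np.eye(n) + 0.5*(R1 + R1.T)
T = 0.5*(R2 + R2.T)
A = W + 1j*T
b = rng.random(n) + 1j*rng.random(n)
x, its = hss_iteration(A, b, alpha=5.0)
```

For positive definite Hermitian part, the iteration converges for any alpha > 0; a preconditioner built from the same two shifted factors avoids iterating to convergence at every solver step.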


2020 ◽  
Vol 16 (12) ◽  
pp. e1008495
Author(s):  
Ivan Borisov ◽  
Evgeny Metelkin

Practical identifiability of Systems Biology models has received a lot of attention in recent scientific research. It addresses the crucial question for a model’s predictability: how accurately can the model’s parameters be recovered from the available experimental data? The methods based on profile likelihood are among the most reliable for practical identification. However, these methods are often computationally demanding or lead to inaccurate estimates of parameter confidence intervals. The development of methods that can accurately produce parameter confidence intervals in reasonable computational time is of utmost importance for Systems Biology and QSP modeling. We propose Confidence Intervals by Constraint Optimization (CICO), an algorithm based on profile likelihood and designed to speed up confidence interval estimation and reduce computational cost. The numerical implementation of the algorithm includes settings to control the accuracy of the confidence interval estimates. The algorithm was tested on a number of Systems Biology models, including the Taxol treatment model and the STAT5 dimerization model discussed in the current article. The CICO algorithm is implemented in a software package freely available in Julia (https://github.com/insysbio/LikelihoodProfiler.jl) and Python (https://github.com/insysbio/LikelihoodProfiler.py).
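The profile-likelihood criterion behind such confidence intervals can be shown on a one-parameter toy model: the interval endpoint is where the negative log-likelihood rises by a chi-square quantile above its minimum. CICO locates this crossing via constrained optimization; the bisection below only illustrates the criterion itself:

```python
def nll(th):
    """Toy negative log-likelihood with its minimum at theta = 2."""
    return (th - 2.0)**2

theta_hat = 2.0
delta = 3.84    # ~95% chi-square(1) threshold used in profile likelihood

def ci_endpoint(direction, lo, hi, tol=1e-10):
    """Bisect for the theta where nll crosses nll(theta_hat) + delta.
    direction=+1 searches above theta_hat, direction=-1 below."""
    target = nll(theta_hat) + delta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (nll(mid) < target) == (direction > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

upper = ci_endpoint(+1, theta_hat, theta_hat + 10.0)
lower = ci_endpoint(-1, theta_hat - 10.0, theta_hat)
```

For this quadratic toy model the endpoints come out at theta_hat ± √3.84 ≈ 2 ± 1.96, matching the familiar Wald interval; for nonlinear models the profile-likelihood endpoints are asymmetric, which is why scanning or constrained optimization is needed.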


Author(s):  
Tu Huynh-Kha ◽  
Thuong Le-Tien ◽  
Synh Ha ◽  
Khoa Huynh-Van

This research work develops a new method to detect image forgery by combining the Wavelet transform and modified Zernike moments (MZMs), in which the features are defined from more pixels than in traditional Zernike moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for processing, is then divided into overlapping blocks, and modified Zernike moments are calculated in each block as feature vectors. The more pixels are considered, the more sufficient the extracted features. Lexicographic sorting and the computation of correlation coefficients on the feature vectors are the next steps in finding similar blocks. The purpose of applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients is to improve the computational time and increase detection accuracy. Copied or duplicated parts are detected as traces of copy-move forgery based on a threshold on the correlation coefficients and confirmed by a constraint on the Euclidean distance. Comparison results between the proposed method and related ones prove the feasibility and efficiency of the proposed algorithm.
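The pipeline — one-level DWT, overlapping block features, lexicographic sorting, and a spatial-distance check — can be mimicked end to end. Raw pixel values stand in for the modified Zernike moments, and a Haar-style 2x2 average stands in for the DWT approximation sub-band:

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (64, 64)).astype(float)
img[40:56, 40:56] = img[8:24, 8:24]      # simulate a copy-move forgery

# one-level Haar-style approximation (LL): 2x2 averaging halves both sides
LL = img.reshape(32, 2, 32, 2).mean(axis=(1, 3))

B = 8
feats, pos = [], []
for i in range(LL.shape[0] - B + 1):     # overlapping blocks
    for j in range(LL.shape[1] - B + 1):
        feats.append(LL[i:i+B, j:j+B].ravel())   # stand-in for MZM features
        pos.append((i, j))
feats = np.array(feats)

# lexicographic sort brings identical/similar blocks next to each other
order = np.lexsort(feats.T[::-1])
matches = []
for a, b in zip(order[:-1], order[1:]):
    if np.allclose(feats[a], feats[b]):
        pa, pb = pos[a], pos[b]
        # Euclidean-distance constraint rejects trivially adjacent blocks
        if (pa[0] - pb[0])**2 + (pa[1] - pb[1])**2 >= 16:
            matches.append((pa, pb))
```

On this synthetic image the copied 16x16 region maps to an 8x8 block pair in the LL sub-band, which the sorted feature list exposes as adjacent identical rows.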

