Fraction Free Gaussian Elimination for Sparse Matrices

1995 ◽ Vol 19 (5) ◽ pp. 393-402 ◽ Author(s): Hong R. Lee ◽ B. David Saunders
1980 ◽ Vol 9 (123) ◽ Author(s): Ole Østerby ◽ Zahari Zlatev

The mathematical models of many practical problems lead to systems of linear algebraic equations whose coefficient matrix is large and sparse. Typical examples are the solution of partial differential equations by finite difference or finite element methods, but many other applications could be mentioned.

When there is a large proportion of zeros in the coefficient matrix, it is fairly obvious that we do not want to store all those zeros in the computer, but it may be less obvious how to avoid doing so. We first describe storage techniques which are convenient to use with direct solution methods, and we then show how a very efficient computational scheme can be based on Gaussian elimination and iterative refinement.

A serious problem in the storage and handling of sparse matrices is the appearance of fill-ins, i.e. new nonzero elements created in the process of generating zeros below the diagonal. Many of these new elements tend to be smaller than the original matrix elements, and if they are smaller than a quantity which we shall call the drop tolerance, we simply discard them. In this way we may preserve sparsity quite well, but we probably introduce rather large errors into the LU decomposition, to the point where the solution becomes unacceptable. To recover the lost accuracy we use iterative refinement, and we show theoretically and with practical experiments that it is ideal for this purpose.

Altogether, the combination of Gaussian elimination, a large drop tolerance, and iterative refinement gives a very efficient and competitive computational scheme for sparse problems. For dense matrices iterative refinement will always require more storage and computation time, and the extra accuracy it yields may not be enough to justify the cost. For sparse problems, however, iterative refinement combined with a large drop tolerance will in most cases give very accurate results and reliable error estimates with less storage and computation time.
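The scheme the abstract describes can be sketched as follows. This is a minimal dense-array illustration, not the authors' sparse implementation: the drop tolerance, the omission of pivoting, and the fixed number of refinement steps are all simplifying assumptions for the sake of a short example.

```python
import numpy as np

def lu_with_drop_tolerance(A, tol):
    """Gaussian elimination (no pivoting, for brevity) that discards
    elements smaller in magnitude than the drop tolerance `tol`."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate below the diagonal
            U[i, k] = 0.0
            # drop small elements in the updated row to preserve sparsity
            U[i, np.abs(U[i]) < tol] = 0.0
    return L, U

def solve_with_refinement(A, b, L, U, steps=5):
    """Iterative refinement: the inexact LU factors are reused each
    pass to solve for a correction from the current residual."""
    x = np.linalg.solve(U, np.linalg.solve(L, b))  # initial (inaccurate) solve
    for _ in range(steps):
        r = b - A @ x                              # residual of current iterate
        x += np.linalg.solve(U, np.linalg.solve(L, r))
    return x
```

On a diagonally dominant test matrix, a drop tolerance of 0.1 zeroes out the small off-diagonal elements during elimination, yet a few refinement steps restore essentially full accuracy, which is the trade-off the abstract advocates.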


1997 ◽ Author(s): Claudson Bornstein ◽ Bruce Maggs ◽ Gary Miller ◽ R. Ravi

Author(s): I Misztal ◽ I Aguilar ◽ D Lourenco ◽ L Ma ◽ J Steibel ◽ ...

Abstract Genomic selection is now practiced successfully across many species. However, many questions remain, such as its long-term effects, the estimation of genomic parameters, the robustness of GWAS with small and large datasets, and the stability of genomic predictions. This study summarizes presentations at the 2020 ASAS symposium. Until now the focus of many studies has been on linkage disequilibrium (LD) between two loci. Ignoring higher-level equilibria may lead to phantom dominance and epistasis. The Bulmer effect leads to a reduction of the additive variance; however, selection for an increased recombination rate can release new genetic variance. With genomic information, estimates of genetic parameters may be biased by genomic preselection, and the cost of estimation can increase drastically due to the dense form of the genomic information. To make the computations feasible, genotypes could be retained only for the most important animals, and methods of estimation should use algorithms that can recognize dense blocks in sparse matrices. GWAS with small genomic datasets frequently finds many marker-trait associations, whereas studies using much bigger datasets find only a few. Most current tools use very simple models for GWAS, possibly causing artifacts. These models are adequate for large datasets, where pseudo-phenotypes such as deregressed proofs indirectly account for important effects on the traits of interest. Artifacts arising in GWAS with small datasets can be minimized by using data from all animals (whether genotyped or not), realistic models, and methods that account for population structure. Recent developments permit the computation of p-values from GBLUP, where models can be arbitrarily complex but are restricted to genotyped animals only, and from single-step GBLUP, which also uses phenotypes from ungenotyped animals.
Stability was an important property of nongenomic evaluations, where genetic predictions were stable in the absence of new data even at low prediction accuracies. Unfortunately, genomic evaluations for such animals change because all animals with genotypes are connected. A top-ranked animal can easily drop in the next evaluation, causing a crisis of confidence in genomic evaluations. While correlations between consecutive genomic evaluations are high, outliers can show differences as large as one SD. A solution to fluctuating genomic evaluations is to base selection decisions on groups of animals. While many issues in genomic selection have been solved, new issues that require additional research continue to surface.
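To fix ideas, a GBLUP evaluation for genotyped animals can be sketched in a few lines. Everything here is an assumption made for illustration: the toy dimensions, the simulated genotypes and phenotypes, and the variance ratio λ are invented, and the genomic relationship matrix follows the common VanRaden construction rather than any specific method from the symposium.

```python
import numpy as np

rng = np.random.default_rng(42)
n_animals, n_snp = 20, 50                      # toy sizes (assumed)

# Simulated SNP genotypes coded 0/1/2, centred by allele frequency
M = rng.integers(0, 3, size=(n_animals, n_snp)).astype(float)
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))    # genomic relationships (VanRaden)
G += 0.01 * np.eye(n_animals)                  # small blend so G is invertible

y = rng.normal(size=n_animals)                 # toy phenotypes (assumed)
lam = 1.0                                      # sigma_e^2 / sigma_u^2 (assumed known)

# Mixed-model equations for y = 1*mu + u + e, with u ~ N(0, G * sigma_u^2)
ones = np.ones((n_animals, 1))
lhs = np.block([[ones.T @ ones, ones.T],
                [ones, np.eye(n_animals) + lam * np.linalg.inv(G)]])
rhs = np.concatenate([ones.T @ y, y])
sol = np.linalg.solve(lhs, rhs)
mu, gebv = sol[0], sol[1:]                     # mean and genomic breeding values
```

Because G connects every genotyped animal to every other, any new genotype or phenotype perturbs all of the `gebv` entries at once, which is the mechanism behind the fluctuating rankings discussed above. Single-step GBLUP extends this setup by blending G with the pedigree relationship matrix so phenotypes of ungenotyped animals also contribute; the sketch covers only the genotyped-animals case.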

