rounding error
Recently Published Documents

TOTAL DOCUMENTS: 125 (five years: 4)
H-INDEX: 13 (five years: 0)

2021 ◽  
pp. 1-43
Author(s):  
E. Adam Paxton ◽  
Matthew Chantry ◽  
Milan Klöwer ◽  
Leo Saffin ◽  
Tim Palmer

Abstract. Motivated by recent advances in operational weather forecasting, we study the efficacy of low-precision arithmetic for climate simulations. We develop a framework to measure rounding error in a climate model, which provides a stress test for a low-precision version of the model, and we apply our method to a variety of models including the Lorenz system; a shallow water approximation for flow over a ridge; and a coarse-resolution spectral global atmospheric model with simplified parameterisations (SPEEDY). Although double precision (52 significant bits) is standard across operational climate models, in our experiments we find that single precision (23 sbits) is more than enough and that as low as half precision (10 sbits) is often sufficient. For example, SPEEDY can be run with 12 sbits across the code with negligible rounding error, and with 10 sbits if minor errors are accepted, amounting to less than 0.1 mm/6 hr for average grid-point precipitation. Our test is based on the Wasserstein metric, which provides stringent non-parametric bounds on rounding error accounting for annual means as well as extreme weather events. In addition, by testing models using both round-to-nearest (RN) and stochastic rounding (SR) we find that SR can mitigate rounding error across a range of applications, and thus our results also provide some evidence that SR could be relevant to next-generation climate models. Further research is needed to test whether our results generalise to higher resolutions and alternative numerical schemes, but they open a promising avenue towards the use of low-precision hardware for improved climate modelling.
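
To make the "sbits" terminology concrete, here is a minimal Python sketch (not the authors' framework) of rounding a float64 value to a reduced number of explicit significand bits with both round-to-nearest (RN) and stochastic rounding (SR). The helper `round_to_sbits` is hypothetical; `sbits` counts explicit fraction bits as in the abstract, so half precision corresponds to 10.

```python
import math
import random

def round_to_sbits(x, sbits, mode="RN"):
    """Round x to `sbits` explicit significand bits (plus the implicit
    leading bit), using round-to-nearest ("RN") or stochastic rounding ("SR")."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** (sbits + 1)        # keep sbits + 1 significant bits in total
    y = m * scale
    if mode == "RN":
        y = round(y)                  # Python rounds halfway cases to even
    else:
        lo = math.floor(y)            # SR: round up with probability equal to
        y = lo + (1 if random.random() < y - lo else 0)  # the leftover fraction
    return math.ldexp(y / scale, e)

print(round_to_sbits(math.pi, 10, "RN"))  # 3.140625, i.e. pi in half precision
print(round_to_sbits(math.pi, 10, "SR"))  # random, but unbiased on average
```

The point of SR, as in the abstract, is that its rounding errors are unbiased, so they tend to cancel rather than accumulate over long simulations.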


Author(s):  
Valerii Zadiraka ◽  
Inna Shvidchenko

Introduction. When solving problems of transcomputational complexity, evaluating the rounding error is essential, since it can dominate the accuracy of the computed solution. Ways to reduce it are therefore important, as are the reserves for optimizing the accuracy of the solution algorithms; this requires taking into account the rounding rules and calculation modes used. The article shows how rounding-error estimates can be used in modern computer technologies for solving problems of computational and applied mathematics, as well as information security. The purpose of the article is to draw the attention of specialists in computational and applied mathematics to the need to take the rounding error into account when analyzing the quality of an approximate solution. This is important for mathematical modeling, problems involving Big Data, digital signal and image processing, cybersecurity, and many other areas. The article demonstrates specific rounding-error estimates for a number of problems: estimating the mathematical expectation, computing the discrete Fourier transform, using multi-digit arithmetic, and applying rounding-error estimates in algorithms for computer steganography.

The results. Rounding-error estimates of the algorithms for the above classes of problems are given for different rounding rules and different calculation modes. For the problem of computer steganography, the use of rounding-error estimates in computer technologies for hidden information transfer is shown.

Conclusions. Taking the rounding error into account is an important factor in assessing the accuracy of approximate solutions to problems of above-average complexity.

Keywords: rounding error, computer technology, discrete Fourier transform, multi-digit arithmetic, computer steganography.
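
As an illustrative sketch (not taken from the article), the following Python snippet shows how the rounding error of the simplest expectation estimate, the sample mean, depends on both the working precision and the summation rule: naive float32 recursive summation versus Kahan compensated summation, checked against a float64 reference.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000).astype(np.float32)

def naive_mean32(a):
    s = np.float32(0.0)
    for v in a:                      # recursive summation: error grows ~ n*u
        s = np.float32(s + v)
    return s / np.float32(len(a))

def kahan_mean32(a):
    s = np.float32(0.0)
    c = np.float32(0.0)              # compensation carrying lost low-order bits
    for v in a:
        y = np.float32(v - c)
        t = np.float32(s + y)
        c = np.float32((t - s) - y)  # recover what the addition just rounded away
        s = t
    return s / np.float32(len(a))

exact = x.astype(np.float64).mean()  # double-precision reference
print(abs(naive_mean32(x) - exact))  # noticeably larger rounding error
print(abs(kahan_mean32(x) - exact))  # near one float32 rounding of the mean
```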


2021 ◽  
Vol 15 ◽  
Author(s):  
James Paul Turner ◽  
Thomas Nowotny

Motivated by the challenge of investigating the reproducibility of spiking neural network simulations, we have developed the Arpra library: an open source C library for arbitrary precision range analysis based on the mixed Interval Arithmetic (IA)/Affine Arithmetic (AA) method. Arpra builds on this method by implementing a novel mixed trimmed IA/AA, in which the error terms of AA ranges are minimised using information from IA ranges. Overhead rounding error is minimised by computing intermediate values as extended precision variables using the MPFR library. This optimisation is most useful in cases where the ratio of overhead error to range width is high. Three novel affine term reduction strategies improve memory efficiency by merging affine terms of lesser significance. We also investigate the viability of using mixed trimmed IA/AA and other AA methods for studying reproducibility in unstable spiking neural network simulations.
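
Arpra itself is a C library, but the idea of a range enclosure can be sketched in a few lines of Python (an illustration only, not Arpra's API). Interval arithmetic widens each endpoint outward so the true result is always enclosed; `math.nextafter` stands in for the directed rounding modes a real IA library would use. Affine arithmetic, which Arpra mixes with IA, refines this by tracking correlations between error terms to keep the enclosure tight.

```python
import math

class Interval:
    """A minimal interval type with outward rounding on every operation."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        # Widen each endpoint by one ulp so the exact sum is enclosed.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(ps), -math.inf),
                        math.nextafter(max(ps), math.inf))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.1, 0.1)   # 0.1 is already inexact in binary floating point
y = x
for _ in range(10):
    y = y + x            # the enclosure widens with every operation
print(y)                 # a guaranteed bound on 11 * 0.1 despite rounding
```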


2021 ◽  
Vol 43 (3) ◽  
pp. A1723-A1753
Author(s):  
L. Minah Yang ◽  
Alyson Fox ◽  
Geoffrey Sanders

Author(s):  
Andrew A. Renshaw ◽  
Edwin W. Gould

Context.— Tumor size is an important prognostic feature in many synoptic reports, yet the best format in which to report it is not clearly defined.

Objective.— To define formatting features that affect the significance of reported tumor size.

Design.— We reviewed multiple formatting features of tumor size in synoptic reports and correlated them with size distribution, reproducibility, and other pathologic features.

Results.— Reporting tumor size in millimeters rather than centimeters was more precise because of reduced rounding error and was significantly more reproducible (P = .01). Tumors whose reported size the pathologist flagged as possibly underestimated were associated with a significantly higher tumor N stage than similarly sized tumors without such a comment. Reported sizes of multifocal tumors were likewise associated with a significantly higher N stage than unifocal tumors of the same size.

Conclusions.— Tumor size should be reported in millimeters, and when a size is reported as either "at least" a specific value or as "multifocal," this information should also be recorded, because such sizes likely underestimate the true biologic potential of the tumor.
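
A hypothetical numerical illustration (not data from the study) of why millimeter reporting matters: rounding a measurement to the nearest half centimeter, assumed here as the coarse reporting granularity, can move a tumor across a staging cutoff that millimeter reporting preserves. The 20 mm pT1/pT2 boundary follows AJCC breast staging.

```python
measured_mm = 22.0                          # hypothetical gross measurement

mm_report = round(measured_mm)              # 22 mm
half_cm_report = round(measured_mm / 5) * 5 # nearest 0.5 cm -> 20 mm (2.0 cm)

def t_category(size_mm):
    # 20 mm pT1/pT2 cutoff as in AJCC breast staging
    return "pT1" if size_mm <= 20 else "pT2"

print(mm_report, t_category(mm_report))             # 22 pT2
print(half_cm_report, t_category(half_cm_report))   # 20 pT1 (understaged)
```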


Author(s):  
Pierre Blanchard ◽  
Desmond J Higham ◽  
Nicholas J Higham

Abstract. Evaluating the log-sum-exp function or the softmax function is a key step in many modern data science algorithms, notably in inference and classification. Because of the exponentials that these functions contain, their evaluation is prone to overflow and underflow, especially in low-precision arithmetic. Software implementations commonly use alternative formulas that avoid overflow and reduce the chance of harmful underflow, employing a shift or another rewriting. Although mathematically equivalent, these variants behave differently in floating-point arithmetic, and shifting can introduce subtractive cancellation. We give rounding error analyses of different evaluation algorithms and interpret the error bounds using condition numbers for the functions. We conclude, based on the analysis and numerical experiments, that the shifted formulas are of similar accuracy to the unshifted ones, so can safely be used, but that a division-free variant of softmax can suffer from loss of accuracy.
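
For reference, here is the standard shifted evaluation the abstract refers to, in a short Python sketch (the kind of formula the paper analyses, not the authors' code). Shifting by the maximum guarantees the largest exponential is exp(0) = 1, so nothing overflows, and any underflow in the smaller terms is benign.

```python
import numpy as np

def logsumexp(x):
    """log(sum(exp(x))) evaluated as m + log(sum(exp(x - m))), m = max(x)."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def softmax(x):
    """Shifted softmax with an explicit division; the division-free variant
    exp(x - logsumexp(x)) is the one the paper finds can lose accuracy."""
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

x = np.array([1000.0, 1000.1, 999.9])  # naive exp(x) overflows even in float64
print(logsumexp(x))
print(softmax(x))
```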


Symmetry ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 543
Author(s):  
Nikos Petrellis

In this paper, we focus on Orthogonal Frequency Division Multiplexing (OFDM) transceivers where undersampling is employed by the receiver Analog/Digital Converter (ADC) when sparse information is exchanged. Several Fast Fourier Transform (FFT) symmetry properties are exploited to allow the substitution of specific input values by others that have already been sampled by the ADC. Several architectures have been proposed in the literature for efficient FFT implementations in terms of power, speed and hardware resources. The FFT input/output values, twiddle factors, etc., are complex numbers whose real and imaginary parts are represented in fixed-point format, so a tradeoff has to be made between rounding error and complexity. The optimal minimum FFT word length is investigated by combining the undersampling and rounding errors. A configurable new FFT architecture has been developed in hardware description language to test the error model with various FFT sizes, word lengths and Quadrature Amplitude Modulation (QAM) schemes. A system designer can take into account the sparseness of the input data and define the desired relation between rounding and undersampling error. The developed error model then predicts the required word length and ADC resolution with an average Root Mean Square Error (RMSE) of less than 1.
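
The word-length/rounding-error tradeoff can be sketched numerically in Python (an illustration only, not the paper's hardware architecture; it quantises only the FFT inputs, whereas a fixed-point FFT also rounds twiddle factors and intermediate results).

```python
import numpy as np

def quantise(z, bits):
    """Round real and imaginary parts to the nearest multiple of 2**-bits,
    mimicking a fixed-point representation with `bits` fractional bits."""
    step = 2.0 ** -bits
    return np.round(z.real / step) * step + 1j * np.round(z.imag / step) * step

rng = np.random.default_rng(1)
N = 1024
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / 4  # magnitudes mostly < 1

ref = np.fft.fft(x)                       # double-precision reference
for bits in (8, 10, 12, 14):
    err = np.fft.fft(quantise(x, bits)) - ref
    rmse = np.sqrt(np.mean(np.abs(err) ** 2))
    print(f"{bits} fractional bits: RMSE = {rmse:.2e}")  # ~4x smaller per 2 extra bits
```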


Author(s):  
Fabienne Jézéquel ◽  
Stef Graillat ◽  
Daichi Mukunoki ◽  
Toshiyuki Imamura ◽  
Roman Iakymchuk
