Combining Constraint Programming and Abstract Interpretation for Value Analysis of Floating-point Programs

Author(s):  
Olivier Ponsini ◽  
Claude Michel ◽  
Michel Rueher

Author(s):  
Sandrine Blazy ◽  
Vincent Laporte ◽  
André Maroneze ◽  
David Pichardie

10.29007/j2fd ◽  
2018 ◽  
Author(s):  
Nasrine Damouche ◽  
Matthieu Martel

This article describes Salsa, an automatic tool to improve the accuracy of the floating-point computations done in numerical codes. Based on static analysis by abstract interpretation, our tool takes an original program as input, applies a set of transformations to it, and generates an optimized program that is more accurate than the initial one. The original and transformed programs are written in the same imperative language. This article is a concise description of former work on the techniques implemented in Salsa, extended with a presentation of the main software architecture, the inputs and outputs of the tool, as well as experimental results obtained by applying the tool to a set of sample programs coming from embedded systems and numerical analysis.
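The abstract does not show a concrete transformation, but the kind of rewriting such a tool performs can be illustrated with a minimal, hypothetical sketch (this is not Salsa's output): two mathematically equivalent associations of the same sum evaluate differently in IEEE 754 double precision, so a source-to-source transformation that picks the better evaluation order improves accuracy.

```python
# Illustration (not Salsa itself): reassociating a sum changes its
# floating-point result, so a transformation that chooses a better
# evaluation order can make the computed value more accurate.

a, b, c = 1e16, -1e16, 1.0  # exact sum is 1.0

left_assoc = (a + b) + c    # cancellation happens first: result is exact
right_assoc = a + (b + c)   # 1.0 is absorbed by -1e16, then cancelled away

print(left_assoc)   # 1.0
print(right_assoc)  # 0.0
```

Here the left association is exact while the right association loses the result entirely; a static analysis bounding the round-off error of each candidate expression is what lets a tool choose between them automatically.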


2020 ◽  
Vol 21 (2) ◽  
Author(s):  
Somasundaram Kanagasabapathi ◽  
MG Thushara

In this article, we introduce a new static analysis for numerical accuracy. We address the problem of determining the minimal accuracy on the inputs and on the intermediary results of a program containing floating-point computations in order to ensure a desired accuracy on the outputs. The main approach is to combine a forward and a backward static analysis, done by abstract interpretation. The backward analysis computes the minimal accuracy needed for the inputs and intermediary results of the program in order to ensure a desired accuracy on the results, specified by the user. In practice, the information collected by our analysis may help to optimize the formats used to represent the values stored in the variables of the program or to select appropriate sensors. To illustrate our analysis, we present a prototype example with experimental results.
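The backward direction of such an analysis can be sketched with a toy model (hedged: the paper's actual abstract domain works on numbers of significant bits, whereas this sketch splits an absolute-error budget, and all names here are invented for illustration). A required error bound on an expression's result is propagated down to its operands using first-order error propagation: for an addition the operand errors add, and for a multiplication each operand's error is weighted by the other operand's magnitude.

```python
# Toy backward accuracy analysis (illustrative sketch, not the paper's
# domain). Each node receives a required absolute-error bound on its
# result; the backward pass splits that budget among its operands:
#   err(x + y) <= err(x) + err(y)
#   err(x * y) ~= |y|*err(x) + |x|*err(y)   (first order)
from dataclasses import dataclass

@dataclass
class Var:
    name: str
    value: float

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Mul:
    left: object
    right: object

def eval_expr(e):
    if isinstance(e, Var):
        return e.value
    l, r = eval_expr(e.left), eval_expr(e.right)
    return l + r if isinstance(e, Add) else l * r

def backward(e, eps, req):
    """Distribute the error budget eps for node e onto the leaf variables."""
    if isinstance(e, Var):
        req[e.name] = min(req.get(e.name, float("inf")), eps)
    elif isinstance(e, Add):
        backward(e.left, eps / 2, req)   # each operand gets half the budget
        backward(e.right, eps / 2, req)
    else:  # Mul: weight each operand's share by the other operand's magnitude
        l, r = eval_expr(e.left), eval_expr(e.right)
        backward(e.left, eps / (2 * abs(r)), req)
        backward(e.right, eps / (2 * abs(l)), req)

x, y, z = Var("x", 3.0), Var("y", 4.0), Var("z", 10.0)
expr = Mul(Add(x, y), z)   # (x + y) * z, value 70.0
req = {}
backward(expr, 1e-6, req)  # require the result accurate to 1e-6
print(req)                 # per-variable error budgets, e.g. x: 2.5e-08
```

The per-variable budgets computed this way are what would let a tool pick a smaller floating-point format, or a cheaper sensor, for each input while still meeting the user-specified output accuracy.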


10.29007/tfls ◽  
2018 ◽  
Author(s):  
Farah Benmouhoub ◽  
Nasrine Damouche ◽  
Matthieu Martel

In high performance computing, nearly all the implementations and published experiments use floating-point arithmetic. However, since floating-point numbers are finite approximations of real numbers, the accumulated round-off errors may cause hazards whose gravity varies depending on how critical the application is. To deal with this issue, we have developed a tool which improves the numerical accuracy of computations by automatically transforming programs in a source-to-source manner. Our transformation relies on static analysis by abstract interpretation and operates on pieces of code with assignments, conditionals, loops, functions, and arrays. In this article, we apply our techniques to optimize a parallel program representative of the high performance computing domain. Parallelism introduces new numerical accuracy problems due to the order of operations in such systems. We are also interested in studying the compromise between execution time and numerical accuracy.
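The accuracy problem that parallelism introduces can be seen in a minimal sketch (hedged: a simulated two-chunk reduction standing in for per-thread partial sums, not the paper's benchmark): the same data summed sequentially and summed via partial sums give different floating-point results, and here neither matches the exact sum.

```python
# Illustration: a sequential reduction and a two-chunk "parallel"
# reduction (per-thread partial sums, then a combine) of the same data
# disagree in floating point -- the order-of-operations accuracy
# problem of parallel codes.
import math

xs = [1e16, 1.0, -1e16, 1.0]  # exact sum is 2.0

seq = 0.0
for v in xs:                   # sequential left-to-right reduction
    seq += v

chunks = [xs[:2], xs[2:]]      # simulated per-thread chunks
partial = [sum(c) for c in chunks]
par = sum(partial)             # combine the partial sums

print(seq)            # 1.0
print(par)            # 0.0
print(math.fsum(xs))  # 2.0 (correctly rounded exact sum)
```

Both orderings lose at least one of the small terms to absorption by 1e16; which terms survive depends entirely on the reduction order, which is why an accuracy-improving transformation must take the parallel schedule into account.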

