Fixed points and frontiers: a new perspective

1991 ◽  
Vol 1 (1) ◽  
pp. 91-120 ◽  
Author(s):  
Sebastian Hunt ◽  
Chris Hankin

Abstract interpretation is the collective name for a family of semantics-based techniques for compile-time analysis of programs. One of the most costly operations in automating such analyses is the computation of fixed points. The frontiers algorithm is an elegant method, invented by Chris Clack and Simon Peyton Jones, which addresses this issue.

In this article we present a new approach to the frontiers algorithm based on the insight that frontiers represent upper and lower subsets of a function's argument domain. This insight leads to a new formulation of the frontiers algorithm for higher-order functions, which is considerably more concise than previous versions.

We go on to argue that for many functions, especially in the higher-order case, finding fixed points is an intractable problem unless the sizes of the abstract domains are reduced. We show how the semantic machinery of abstract interpretation allows us to place upper and lower bounds on the values of fixed points in large lattices by working within smaller ones.
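As a point of reference for the cost being addressed, the sketch below shows naive Kleene iteration for a least fixed point over the two-point strictness domain, tabulating the entire argument space on every pass; that exhaustive tabulation is exactly what the frontiers algorithm avoids. This is a minimal illustration, not the frontiers algorithm itself, and the example function and helper names (lfp, f_sharp) are our own choices.

```python
# Naive Kleene iteration for the least fixed point of an abstract
# functional over the two-point strictness domain {0, 1}, with 0 = bottom.
from itertools import product

def lfp(functional, arity):
    args = list(product((0, 1), repeat=arity))
    table = {a: 0 for a in args}          # start at the bottom function
    while True:
        new = {a: functional(table, *a) for a in args}
        if new == table:                  # stable: least fixed point reached
            return new
        table = new

def f_sharp(rec, x, y):
    # Abstract body of  f x y = if x == 0 then y else f (x - 1) y.
    # The abstraction of a conditional  c ? t : e  is  c & (t | e).
    return x & (y | rec[(x, y)])

fix = lfp(f_sharp, 2)
print(fix)  # {(0,0): 0, (0,1): 0, (1,0): 0, (1,1): 1}: strict in both arguments
```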

1997 ◽  
Vol 7 (4) ◽  
pp. 357-394
Author(s):  
Tyng-Ruey Chuang ◽  
Benjamin Goldberg

This paper describes a method for finding the least fixed points of higher-order functions over finite domains using symbolic manipulation. Fixed point finding is an essential component in the calculation of abstract semantics of functional programs, providing the foundation for program analyses based on abstract interpretation. Previous methods for fixed point finding have primarily used semantic approaches, which often must traverse large portions of the semantic domain even for simple programs. This paper provides the theoretical framework for a syntax-based analysis that is potentially very fast. The proposed syntactic method is based on an augmented simply typed lambda calculus in which the symbolic representation of each function produced in the fixed point iteration is transformed to a syntactic normal form. Normal forms resulting from successive iterations are then compared syntactically to determine their ordering in the semantic domain, and to decide whether a fixed point has been reached. We show the method to be sound, complete and compositional. Examples are presented to show how this method can be used to perform strictness analysis for higher-order functions over non-flat domains. Our method is compositional in the sense that the strictness property of an expression can be easily calculated from those of its subexpressions. This is in contrast to most strictness analysers, where the strictness property of an expression has to be computed anew whenever one of its subexpressions changes. We also compare our approach with recent developments in strictness analysis.
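To make the idea of comparing normal forms rather than tabulated semantics concrete, here is a toy sketch in which each iterate is kept as a formula in a canonicalized disjunctive normal form and the fixed-point test is syntactic equality of those forms. The DNF representation and the helper names (absorb, join, meet, var, functional) are illustrative inventions; the paper's actual normal forms live in an augmented simply typed lambda calculus.

```python
# Symbolic fixed-point iteration: iterates are formulas in a canonical
# DNF (a frozenset of clauses, each a frozenset of variable names), and
# successive iterates are compared syntactically, never tabulated.

def absorb(clauses):
    """Drop subsumed conjuncts, giving a canonical (normal) form."""
    return frozenset(c for c in clauses
                     if not any(d < c for d in clauses))

def join(p, q):                      # p \/ q
    return absorb(p | q)

def meet(p, q):                      # p /\ q, by distribution
    return absorb(frozenset(c | d for c in p for d in q))

def var(name):
    return frozenset([frozenset([name])])

BOTTOM = frozenset()                 # the formula "false"

# F(g) = x /\ (y \/ g): the abstract body of
#   f x y = if x == 0 then y else f (x - 1) y
def functional(g):
    return meet(var('x'), join(var('y'), g))

g = BOTTOM
while True:
    g_next = functional(g)
    if g_next == g:                  # syntactic equality of normal forms
        break
    g = g_next
print(sorted(map(sorted, g)))        # [['x', 'y']], i.e. x /\ y
```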


2020 ◽  
Vol 25 (3) ◽  
p. 49
Author(s):  
Silvia Licciardi ◽  
Rosa Maria Pidatella ◽  
Marcello Artioli ◽  
Giuseppe Dattoli

In this paper, we show that methods of an operational nature, such as umbral calculus, achieve a double goal: on one side, a new point of view on the Voigt function, which plays a pivotal role in spectroscopic studies and in other applications; on the other, the introduction of a Voigt transform and its possible uses. Furthermore, by the same method, we point out that the Hermite and Laguerre functions, extensions of the corresponding polynomials to negative and/or real indices, can be expressed through a single definition in a straightforward and unified fashion. We illustrate how the suggested techniques provide an easy derivation of the relevant properties, along with generalizations to higher-order functions.
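For orientation, the Voigt profile is the convolution of a Gaussian and a Lorentzian line shape. The sketch below evaluates that defining convolution numerically as a reference point; the grid parameters are ad hoc assumptions, and this is not the paper's umbral/operational technique.

```python
# Direct numerical evaluation of the Voigt profile as the convolution
# of a Gaussian (width sigma) and a Lorentzian (half-width gamma).
import numpy as np

def voigt(x, sigma, gamma, n=4001, span=40.0):
    """V(x) = (G * L)(x), approximated by a Riemann sum on a wide grid."""
    t = np.linspace(-span, span, n)
    dt = t[1] - t[0]
    gauss = np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    lorentz = gamma / (np.pi * (gamma**2 + (x - t)**2))
    return np.sum(gauss * lorentz) * dt

# Sanity checks: near-unit area and a peak at the line centre x = 0.
xs = np.linspace(-20.0, 20.0, 801)
vals = np.array([voigt(x, sigma=1.0, gamma=0.5) for x in xs])
print(np.sum(vals) * (xs[1] - xs[0]))   # close to 1 (Lorentzian tails cut at +/-20)
print(xs[vals.argmax()])                # 0.0
```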


2013 ◽  
Vol 22 (1) ◽  
pp. 41-46
Author(s):  
Andrei Bozantan ◽  
Vasile Berinde

This paper describes the main aspects of the "piecewise-linear homotopy method" for fixed point approximation proposed by Eaves and Saigal [Eaves, B. C. and Saigal, R., Homotopies for computation of fixed points on unbounded regions, Mathematical Programming, 3 (1972), No. 1, 225–237]. The implementation of the method is developed in the modern programming language C# and is then used to solve some unconstrained optimization problems. The PL homotopy algorithm appears to be more reliable than the classical Newton method for the problem of finding a local minimum of Schwefel's function and for other optimization problems.
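The reliability gap is easy to see on Schwefel's function, whose many local minima make a plain Newton iteration extremely sensitive to its starting point. The sketch below illustrates only that sensitivity; it is not the authors' C# PL homotopy implementation, and the starting points, step size, and iteration count are arbitrary choices.

```python
# Newton's method on f'(x) = 0 for the 1-D Schwefel test function:
# the stationary point it reaches depends strongly on where it starts.
import numpy as np

def schwefel(x):
    return 418.9829 - x * np.sin(np.sqrt(abs(x)))

def newton_min(x, h=1e-3, iters=100):
    """Newton iteration with finite-difference first and second derivatives."""
    for _ in range(iters):
        d1 = (schwefel(x + h) - schwefel(x - h)) / (2 * h)
        d2 = (schwefel(x + h) - 2 * schwefel(x) + schwefel(x - h)) / h**2
        if abs(d2) < 1e-12:           # flat curvature: give up
            break
        x -= d1 / d2
    return x

# The global minimum is near x = 420.9687; Newton only finds it when
# started close by, otherwise it settles on some other stationary point.
for x0 in (50.0, 200.0, 400.0):
    print(x0, '->', round(newton_min(x0), 4))
```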


Author(s):  
Andre Cardoso Barato ◽  
Taylor Wampler

The thermodynamic uncertainty relation is a prominent result in stochastic thermodynamics that provides a bound on the fluctuations of any thermodynamic flux, also known as a current, in terms of the average rate of entropy production. Such fluctuations are quantified by the second moment of the probability distribution of the current. The role of higher-order standardized moments such as skewness and kurtosis remains largely unexplored. We analyze the skewness and kurtosis associated with the first passage time of thermodynamic currents within the framework of stochastic thermodynamics. We develop a method to evaluate higher-order standardized moments associated with the first passage time of any current. For systems with a unicyclic network of states, we conjecture upper and lower bounds on the skewness and kurtosis associated with entropy production. These bounds depend on the number of states and on the thermodynamic force that drives the system out of equilibrium. We show that these bounds for skewness and kurtosis do not hold for multicyclic networks. We discuss the application of our results to infer an underlying network of states.
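As a concrete setting, standardized moments of this kind can be estimated by simulating the first-passage problem directly. The sketch below runs a biased continuous-time random walk on a unicyclic ring of N states and computes the sample skewness and kurtosis of the time at which the integrated current first reaches a threshold. The rates, the threshold, and the local-detailed-balance parametrization are illustrative assumptions, not values or bounds from the paper.

```python
# Monte Carlo estimate of skewness and kurtosis of the first-passage
# time of the integrated current on a biased unicyclic ring.
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(N=4, force=2.0, threshold=10, samples=20000):
    kp = np.exp(force / (2 * N))      # forward jump rate (local detailed balance)
    km = np.exp(-force / (2 * N))     # backward jump rate
    times = np.empty(samples)
    for i in range(samples):
        t, x = 0.0, 0                 # x counts the net number of forward jumps
        while x < threshold:
            t += rng.exponential(1.0 / (kp + km))          # waiting time
            x += 1 if rng.random() < kp / (kp + km) else -1  # jump direction
        times[i] = t
    return times

T = first_passage_times()
m, s = T.mean(), T.std()
skew = np.mean(((T - m) / s) ** 3)    # third standardized moment
kurt = np.mean(((T - m) / s) ** 4)    # fourth standardized moment
print(f"skewness = {skew:.3f}, kurtosis = {kurt:.3f}")
```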

