Efficient compilation of algebraic effect handlers

2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-28
Author(s):  
Georgios Karachalias ◽  
Filip Koprivec ◽  
Matija Pretnar ◽  
Tom Schrijvers

The popularity of algebraic effect handlers as a programming language feature for user-defined computational effects is steadily growing. Yet, even though efficient runtime representations have already been studied, most handler-based programs are still much slower than hand-written code. This paper shows that the performance gap can be drastically narrowed (in some cases even closed) by means of type-and-effect directed optimising compilation. Our approach consists of source-to-source transformations in two phases of the compilation pipeline. Firstly, elementary rewrites, aided by judicious function specialisation, exploit the explicit type and effect information of the compiler’s core language to aggressively reduce handler applications. Secondly, after erasing the effect information, further rewrites in the backend of the compiler emit tight code. This work comes with a practical implementation: an optimising compiler from Eff, an ML-style language with algebraic effect handlers, to OCaml. Experimental evaluation with this implementation demonstrates that in a number of benchmarks, our approach eliminates much of the overhead of handlers, outperforms capability-passing style compilation and yields competitive performance compared to hand-written OCaml code as well as Multicore OCaml’s dedicated runtime support.
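The kind of handler reduction the abstract describes can be sketched in miniature. The following Python model is purely illustrative (it is not the paper's Eff compiler, and all names are assumptions): a computation is either a pure value or an operation call, and when type-and-effect information proves a computation pure, applying a handler to it reduces to the return clause alone, skipping operation dispatch.

```python
# Illustrative model of algebraic effects: a computation is either a
# pure value ("Return") or an operation call with a continuation ("Op").
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    value: Any

@dataclass
class Op:
    name: str
    arg: Any
    cont: Callable[[Any], Any]  # continuation: operation result -> computation

def handle(comp, op_clauses, ret_clause):
    """Generic handler application: the slow path with dispatch."""
    if isinstance(comp, Return):
        return ret_clause(comp.value)
    clause = op_clauses[comp.name]
    # "resume" re-wraps the rest of the computation in the same handler.
    resume = lambda x: handle(comp.cont(x), op_clauses, ret_clause)
    return clause(comp.arg, resume)

def handle_pure(comp, ret_clause):
    """The rewrite: when the effect system shows `comp` is pure,
    the handler application collapses to the return clause."""
    assert isinstance(comp, Return)
    return ret_clause(comp.value)
```

For example, handling `Op("ask", None, ...)` with a clause that resumes with 41 runs the continuation under the handler, while `handle_pure` avoids dispatch entirely on a `Return`.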

1979 ◽  
Vol 11 (1) ◽  
pp. 18-21
Author(s):  
Richard Furuta ◽  
P. Michael Kemp

2005 ◽  
Vol 13 (4) ◽  
pp. 277-298 ◽  
Author(s):  
Rob Pike ◽  
Sean Dorward ◽  
Robert Griesemer ◽  
Sean Quinlan

Very large data sets often have a flat but regular structure and span multiple disks and machines. Examples include telephone call records, network logs, and web document repositories. These large data sets are not amenable to study using traditional database techniques, if only because they can be too large to fit in a single relational database. On the other hand, many of the analyses done on them can be expressed using simple, easily distributed computations: filtering, aggregation, extraction of statistics, and so on. We present a system for automating such analyses. A filtering phase, in which a query is expressed using a new procedural programming language, emits data to an aggregation phase. Both phases are distributed over hundreds or even thousands of computers. The results are then collated and saved to a file. The design – including the separation into two phases, the form of the programming language, and the properties of the aggregators – exploits the parallelism inherent in having data and computation distributed across many machines.
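The two-phase design the abstract describes — a filtering phase that emits values, feeding an aggregation phase that collates them — can be sketched in a few lines. This Python sketch is an assumption-laden stand-in (record fields, the `emit` interface, and the sum aggregator are all illustrative, and the real system distributes both phases over many machines):

```python
# Two-phase analysis sketch: a filtering phase emits (key, value)
# pairs; an aggregation phase collates them into per-key tables.
from collections import defaultdict

def filter_phase(records, emit):
    """The 'query': examines each record and emits data to aggregate."""
    for rec in records:
        if rec["status"] == "ok":          # illustrative filter condition
            emit(rec["host"], rec["bytes"])

def run(records):
    """Wires the phases together with a per-key 'sum' aggregator."""
    tables = defaultdict(int)
    def emit(key, value):
        tables[key] += value               # aggregation phase
    filter_phase(records, emit)
    return dict(tables)
```

Because each record is filtered independently and the aggregator is associative, both phases parallelise naturally across machines, which is the property the design exploits.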


Author(s):  
William O’Toole ◽  
Dr Stephen Luke ◽  
Travis Semmens ◽  
Dr Jason Brown ◽  
Andrew Tatrai

This chapter reviews planning methods and practices. Significant work on planning methods has been published and applied over long periods. Preplanning is essential because of the life-safety hazards a crowd can develop in situ. Planning can be considered in two phases: information and background planning, essential for communicating facts and identifying risk areas in crowd management, and operational planning, which provides resourcing and contingency planning once the operation is in place. As in military operations, both phases are important; however, in many crowd situations operational and contingency planning is given less scrutiny. This is because plans are normally scrutinised by authorities, councils, government, or venue or land owners, who are more comfortable with pre-information plans that inform them of the context, background and communication flows. How the crowds are managed by security contractors is not usually an area in which they are experienced, so less attention is paid to it. The aim of this chapter is to provide enough knowledge for all event stakeholders to review and discuss practical implementation issues in security deployment and control. Planning and preparation require an increased focus in crowd management because the emergent behaviour of the collective requires more options to be considered and prepared for. Because crowds can cause life-safety issues, and because agents and systems can interact to amplify interactions and responses quickly, preparation and contingency planning are vital. Crowd risk assessments have to be conducted to understand and communicate the magnitude of the problems that can occur. If the consequences of the crowd activity are significant relative to the organiser's risk appetite, then response methods and measures should be developed and implemented.
An example of this would be preparing additional signage, barriers and guards to divert pedestrians away or around potential bottlenecks when the flow becomes too congested.


2007 ◽  
Vol 17 (02) ◽  
pp. 339-353
Author(s):  
DWIGHT WOOLARD ◽  
WEIDONG ZHANG ◽  
ELLIOTT BROWN ◽  
BORIS GELMONT ◽  
ROBERT TREW

A design and analysis study is presented for a new optically-triggered (OT) interband resonant-tunneling-diode (I-RTD) device that has potential for generating terahertz (THz) frequency oscillations and achieving enhanced output power levels under pulsed operation. The proposed device utilizes novel nanoscale mechanisms to achieve externally driven oscillations that consist of two phases – i.e., an initial transient phase produced by a natural Zener (interband) tunneling process and a second discharging transient phase induced by optical annihilation of stored hole-charge by externally-injected photon flux. The specific focus of this paper will be on an OT-I-RTD oscillator that utilizes In(1-x)Ga(x)As / GaSb(y)As(1-y) hetero-systems and the application of band-engineering to enable triggering by 1.55 μm laser technology. The paper presents performance results for the hybrid circuit design, along with a practical implementation strategy for integrating the optical triggering and an analysis of the heating induced during large signal operation.


2014 ◽  
Vol 24 (03) ◽  
pp. 1441003 ◽  
Author(s):  
Marcel Köster ◽  
Roland Leißa ◽  
Sebastian Hack ◽  
Richard Membarth ◽  
Philipp Slusallek

A straightforward implementation of an algorithm in a general-purpose programming language does not usually deliver peak performance: compilers often fail to automatically tune the code for certain hardware peculiarities like the memory hierarchy or vector execution units. Manually tuning the code is, firstly, error-prone as well as time-consuming and, secondly, taints the code by exposing those peculiarities to the implementation. A popular method to avoid these problems is to implement the algorithm in a Domain-Specific Language (DSL). A DSL compiler can then automatically tune the code for the target platform. In this article we show how to embed a DSL for stencil codes in another language. In contrast to prior approaches we only use a single language for this task, which offers explicit control over code refinement. This is used to specialize stencils for particular scenarios. Our results show that our specialized programs achieve competitive performance compared to hand-tuned CUDA programs while maintaining a convenient coding experience.
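The idea of specializing a generic stencil for a particular scenario can be sketched in plain Python. This is an illustrative stand-in, not the paper's system: the 1D setting, the function names, and the use of a closure (in place of staged code generation emitting tuned target code) are all assumptions.

```python
# A generic stencil and a "specialized" variant with the kernel weights
# fixed at definition time, standing in for staged code refinement.
def apply_stencil(data, weights):
    """Generic path: weights are a runtime parameter."""
    r = len(weights) // 2
    out = data[:]                      # borders are left unchanged
    for i in range(r, len(data) - r):
        out[i] = sum(w * data[i + k - r] for k, w in enumerate(weights))
    return out

def specialize(weights):
    """Return a stencil function with the weights baked in. A staged
    DSL would instead refine this into tuned code for the target."""
    w = tuple(weights)                 # freeze the kernel
    def specialized(data):
        return apply_stencil(data, w)
    return specialized
```

For example, `specialize([0.25, 0.5, 0.25])` yields a fixed 3-point blur; in a staged setting, freezing the weights lets the compiler unroll the inner sum and fold the constants.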


1994 ◽  
Vol 34 (7) ◽  
pp. 885-912 ◽  
Author(s):  
P. Ewen King-Smith ◽  
Scott S. Grigsby ◽  
Algis J. Vingrys ◽  
Susan C. Benes ◽  
Aaron Supowit

2013 ◽  
Vol 10 (4) ◽  
pp. 1661-1672
Author(s):  
Hemang Mehta ◽  
S.J. Balaji ◽  
Dharanipragada Janakiram

Contemporary software systems written in C face maintainability issues because of tight coupling. Introducing object orientation can address these problems by raising the abstraction to objects, thereby providing better programmability and understandability. However, compiling C software with a C++ compiler is difficult because of the incompatibilities between C and C++. Some of these incompatibilities, such as designated initializers, are nontrivial in nature and hence very difficult to handle by automation such as scripting or by manual effort. Moreover, runtime support for features such as global constructors, exception handling, runtime type inference, etc. is also required in the target system. Clearly, a traditional procedural-language compiler cannot provide these features. In this paper, we propose extending a programming language such as C++ to support object orientation in legacy systems instead of completely redesigning them. With a case study of the Linux kernel, we report major issues in providing compile-time and runtime support for C++ in legacy systems, and provide a solution to these issues. Our approach paves the way for converting a large C-based software system into C++. The experiments demonstrate that the proposed extension saves significant manual effort with very little change to the g++ compiler. In addition, the performance study considers other legacy systems written in C and shows that the overhead resulting from the modifications in the compiler is negligible in comparison to the functionality achieved.


Author(s):  
A. M. Babich ◽  
M. V. Akimov ◽  
D. S. Stelmakh

The article is devoted to solving key issues arising in the course of the practical implementation of artificial intelligence system elements. Based on the need to distribute tasks among several developers and ensure scalability, the structure of the system, the programming language and the data transfer protocol of the program modules were determined. The structure of the software package is conditioned by the requirement to modify and expand its capabilities by connecting additional software modules or completely replacing them. A refined algorithm for receiving and processing a command from the external environment is presented. The choice of the programming language is based on the availability of already developed libraries that solve artificial intelligence tasks, as well as the need to ensure cross-platform software. The choice of the protocol for exchanging data between the individual program modules of the system was made proceeding from the need to transfer data of arbitrary size.

