Program synthesis: challenges and opportunities

Author(s): Cristina David, Daniel Kroening

Program synthesis is the mechanized construction of software, dubbed ‘self-writing code’. Synthesis tools relieve the programmer of having to think about how the problem is to be solved; instead, the programmer only provides a description of what is to be achieved. Given a specification of what the program should do, the synthesizer generates an implementation that provably satisfies this specification. From a logical point of view, a program synthesizer is a solver for second-order existential logic. Owing to the expressiveness of second-order logic, program synthesis has an extremely broad range of applications. We survey some of these applications as well as recent trends in the algorithms that solve the program synthesis problem. In particular, we focus on an approach that has raised the profile of program synthesis and ushered in a generation of new synthesis tools, namely counter-example-guided inductive synthesis (CEGIS). We provide a description of the CEGIS architecture, followed by recent algorithmic improvements. We conjecture that the capacity of program synthesis engines will see a further step change, in a manner that is transparent to the applications, which will open up an even broader range of use-cases. This article is part of the themed issue ‘Verified trustworthy software systems’.
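
As a rough illustration of the CEGIS loop described in this abstract, the following is a minimal, self-contained Python sketch. The toy specification, finite input domain, and brute-force candidate enumeration are our own stand-ins for the constraint and SMT solvers that real synthesis tools use; the sketch is meant only to show the alternation between inductive guessing from examples and verification with counter-examples, not any particular tool's architecture.

```python
# Toy CEGIS: synthesise constants (a, b) so that f(x) = a*x + b satisfies
# the specification for every input in a finite domain.

from itertools import product

DOMAIN = range(-10, 11)                    # inputs the final program must handle

def spec(x, y):
    # Specification: the synthesised function must double its input.
    return y == 2 * x

def verify(a, b):
    # "Verifier": exhaustively search the domain for a counter-example input;
    # return None if the candidate satisfies the specification everywhere.
    for x in DOMAIN:
        if not spec(x, a * x + b):
            return x
    return None

def cegis(candidates):
    examples = []                          # counter-examples collected so far
    for a, b in candidates:
        # Inductive step: only consider candidates consistent with the examples.
        if all(spec(x, a * x + b) for x in examples):
            cex = verify(a, b)
            if cex is None:
                return a, b                # verified against the whole domain
            examples.append(cex)           # the counter-example guides later guesses
    return None

print(cegis(product(range(-3, 4), repeat=2)))   # -> (2, 0)
```

In a real CEGIS engine, the candidate consistent with the current examples is produced by a solver rather than by enumeration, but the guess-verify-refine cycle is the same.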

1981, Vol. 4(1), pp. 151-172
Author(s): Pierangelo Miglioli, Mario Ornaghi

The aim of this paper is to provide a general explanation of the “algorithmic content” of proofs, from a point of view suited to computer science. Unlike the more usual approach of program synthesis, where the “algorithmic content” is captured by translating proofs into standard algorithmic languages, here we propose a “direct” interpretation of “proofs as programs”. To do this, a clear explanation is needed of what is to be meant by “proof-execution”, a concept which must generalize the usual “program-execution”. In the first part of the paper we discuss the general conditions to be satisfied by the executions of proofs and consider, as a first example of proof-execution, Prawitz’s normalization. According to our analysis, simple normalization is not fully adequate to the goals of the theory of programs: so, in the second section we present an execution procedure based on ideas more oriented towards computer science than Prawitz’s. We provide a soundness theorem which states that our executions satisfy an appropriate adequacy condition, and discuss the sense in which our “proof-algorithms” inherently involve parallelism and non-determinism. The properties of our computation model are analyzed, and a completeness theorem involving a notion of “uniform evaluation” of open formulas is stated. Finally, an “algorithmic completeness” theorem is given, which essentially states that every flow-chart program proved to be totally correct can be simulated by an appropriate “purely logical proof”.
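
For readers unfamiliar with proof-execution via normalization, the first notion the abstract mentions, here is a small Curry-Howard flavoured Python sketch. It is our own illustration, not the authors' formalism: implication introduction corresponds to a lambda abstraction, implication elimination (modus ponens) to application, and "executing" the proof in Prawitz's sense corresponds to normalising the term, i.e. removing introduction/elimination detours.

```python
from dataclasses import dataclass

@dataclass
class Var: name: str                      # an open assumption
@dataclass
class Lam: param: str; body: object       # proof of A -> B from assumption A
@dataclass
class App: fun: object; arg: object       # modus ponens

def substitute(term, name, value):
    # Naive substitution (capture avoidance omitted); enough for closed examples.
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        return term if term.param == name else Lam(term.param, substitute(term.body, name, value))
    return App(substitute(term.fun, name, value), substitute(term.arg, name, value))

def normalise(term):
    # Reduce beta-redexes (a lambda applied to an argument) until none remain.
    if isinstance(term, App):
        fun, arg = normalise(term.fun), normalise(term.arg)
        if isinstance(fun, Lam):
            return normalise(substitute(fun.body, fun.param, arg))
        return App(fun, arg)
    if isinstance(term, Lam):
        return Lam(term.param, normalise(term.body))
    return term

# The identity proof of A -> A applied to an assumption a: the detour
# normalises away, leaving just the assumption itself.
print(normalise(App(Lam("x", Var("x")), Var("a"))))   # Var(name='a')
```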


Author(s): Yves Mausen

Abstract: The logic of evidence in Bartolistic literature. A reading of the Summa circa testes et examinationem eorum (Ms. Bruxelles, B.R., II 1442, fol. 101ra–103rb). Bartolus teaches how to read testimonies from a logical point of view. On the one hand, the facts that the witness recounts constitute the minor premise of a syllogism, its conclusion being their legal characterization; the witness is therefore prohibited from pronouncing directly on any legal matter. On the other hand, given that the witness's knowledge of the facts has to stem from sensory perception, the information he provides must at least constitute the minor premise of another syllogism, thereby establishing the causa of his testimony.


1994, Vol. 116(4), pp. 741-750
Author(s): C. H. Venner

This paper addresses the development of efficient numerical solvers for EHL problems from a rather fundamental point of view. A work-accuracy exchange criterion is derived that can be interpreted as setting a limit on the price paid, in terms of computing time, for a solution of a given accuracy. The criterion can serve as a guideline when reviewing or selecting a numerical solver and a discretization. Earlier developed multilevel solvers for the EHL line and circular contact problems are tested against this criterion. This test shows that, to satisfy the criterion, a second-order accurate solver is needed for the point contact problem, whereas the solver developed earlier used a first-order discretization. This situation arises frequently in numerical analysis: a higher-order discretization is desired when a lower-order solver already exists. It is explained how, in such a case, the multigrid methodology provides an easy and straightforward way to obtain the desired higher order of approximation. This higher order is obtained at almost negligible extra work and without loss of stability. The approach was tested by raising an existing first-order multilevel solver for the EHL line contact problem to second order. Subsequently, it was used to obtain a second-order solver for the EHL circular contact problem. Results for both the line and circular contact problems are presented.
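
The paper's EHL discretizations and multilevel solver are not reproduced here. As a rough illustration of the general idea of obtaining higher-order accuracy from an existing lower-order solver, the following Python sketch applies classical defect correction to a 1-D model problem. The model problem, the operators, and the plain direct inner solve are our own assumptions; in the paper the inner solve would be the existing first-order multilevel solver applied to the EHL equations.

```python
# Defect correction on u'(x) + u(x) = f(x), u(0) = 0, with exact solution sin(x):
# repeatedly solve with the cheap first-order operator, with the right-hand side
# corrected by the residual of the second-order discretization.

import numpy as np

def build_operators(N):
    """First-order (upwind) and second-order discretisations of u' + u."""
    h = 1.0 / N
    A1 = np.eye(N)                       # the "+ u" term
    A2 = np.eye(N)
    for i in range(N):
        A1[i, i] += 1.0 / h              # backward difference (u_i - u_{i-1}) / h
        if i > 0:
            A1[i, i - 1] = -1.0 / h
    for i in range(N - 1):               # central difference (u_{i+1} - u_{i-1}) / (2h)
        if i > 0:
            A2[i, i - 1] = -0.5 / h
        A2[i, i + 1] = 0.5 / h           # boundary value u(0) = 0 contributes nothing
    A2[N - 1, N - 1] += 1.5 / h          # one-sided second-order closure at the last node
    A2[N - 1, N - 2] = -2.0 / h
    A2[N - 1, N - 3] = 0.5 / h
    return A1, A2

def defect_correction_error(N, sweeps=3):
    A1, A2 = build_operators(N)
    x = np.linspace(1.0 / N, 1.0, N)
    f = np.cos(x) + np.sin(x)            # manufactured so the exact solution is sin(x)
    u = np.linalg.solve(A1, f)           # plain first-order solution
    for _ in range(sweeps):
        # low-order solve with the right-hand side corrected by the high-order residual
        u = np.linalg.solve(A1, f + (A1 - A2) @ u)
    return np.abs(u - np.sin(x)).max()

for N in (40, 80, 160):
    print(N, defect_correction_error(N))   # error should shrink roughly 4x per doubling of N
```

Each sweep re-solves the same cheap first-order system with a corrected right-hand side, so the higher-order accuracy costs only a few additional low-order solves, in the spirit of the "almost negligible extra work" noted in the abstract.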


2019, Vol. 11(1), pp. 59-66
Author(s): Ayechew Adera Getu

The aim of this point-of-view paper is to discuss the challenges and opportunities in teaching and research in the field of human physiology in Ethiopia. The challenges include a shortage of physiology teachers, especially those with PhDs, low research productivity, an absence of grants for the basic sciences, and brain drain. An opportunity for improvement is seen in the emergence of more medical schools in the country. However, close attention to standards of quality, particularly the provision of the full range of inputs required to support teaching and research, is urged.


2009, Vol. 50
Author(s): Jérémy Besson, Albertas Čaplinskas

Over the last decade, component technologies have evolved from object-oriented to service-oriented ones. Services are seen as utilities based on a pay-for-use model. This model requires providing and guaranteeing a certain Quality of Service (QoS). However, QoS, and even a service itself, can be defined and understood in many different ways. It is far from obvious which of these approaches should be used, and to what extent, when developing service-oriented software systems. This paper analyzes the notion of QoS from precisely this point of view.


2018, Vol. 14(1), pp. 57-58
Author(s): Alessio M. Pacces, Laurent Germain, Áron Perényi

This review covers the book titled “CORPORATE GOVERNANCE: NEW CHALLENGES AND OPPORTUNITIES”, written by Alexander N. Kostyuk, Udo Braendle and Vincenzo Capizzi (Virtus Interpress, 2017, Hardcover, ISBN: 978-617-7309-00-9). The review briefly outlines the structure of the book and highlights its strengths, as well as the issues that, in the reviewers' view, will be most interesting to the reader.


Author(s): Roger Hyam

Many of the world's natural history collections are creating high-resolution digital images of their specimens. They often make these available on the web through some form of zoomable viewer. For historical reasons, a hotchpotch of technologies is used to achieve this. This diversity has led to two issues. Firstly, maintenance becomes costly as technologies need replacing. Secondly, there is little chance to share data between institutions or provide a unified user experience. A researcher visiting four different virtual collections may have four very different experiences.

Similar issues exist in the archives and libraries disciplines, which also need to share high-resolution, annotated images of the physical objects in their care. In response, many have coalesced around the International Image Interoperability Framework (IIIF). IIIF is a set of shared application programming interface (API) specifications for interoperable functionality in digital image repositories. It separates the notion of a viewer, which may be used as part of a website or other application, from the web services that feed data to that viewer. By using a common API for serving data about images, different viewers can be used to view the same images, providing an upgrade path that does not require replacing viewer and server software at the same time. Potentially more importantly, it facilitates the construction of applications that view data from different collections as if they were in the same place. From the researcher's point of view, the experience could be the same whether the virtual specimen is hosted locally or in a museum on another continent.

One important thing has been deliberately omitted from the IIIF standard. This omission has enabled its rapid adoption but also makes it incomplete for building research applications: IIIF transmits no semantic data about the subject of the images, only labels. The IIIF data therefore needs to be bound, in some uniform way, to semantically rich data about the specimens being viewed. Consortium of European Taxonomic Facilities (CETAF) specimen identifiers are now widely adopted by natural history collections in Europe. Each individual collection object is designated by a URI chosen and maintained by the institution owning the specimen (Groom et al. 2017, Güntsch et al. 2018, Güntsch et al. 2017, Hyam et al. 2012). Under Linked Data conventions, content negotiation is used at the server so that users accessing an object with a web browser are redirected to a human-readable representation of the object, typically a web page, whilst software systems requiring machine-processable representations are redirected to an RDF-encoded metadata record. CETAF specimen identifiers are therefore ideal partners for IIIF representations of specimens. But how should we join the two together in a semantically rich way that will be generally understandable?

SYNTHESYS+ is a European Commission-funded programme that facilitates collaboration and network building among European natural history collections. It is concerned with both physical and virtual access to the 390 million specimens of plants and animals housed in participating institutions. Under Task 4.3 of this project, we have been working to create a reliable way to link from the RDF metadata about specimens to IIIF images of those specimens, and from the images back to the specimen metadata.
By January 2021, we aim to have ten exemplar institutions publishing IIIF manifest files linked to CETAF identifiers for a few million specimens, and for this to act as a catalyst for wider adoption in the natural history community. This presentation gives an update on the rollout of these implementations, paying particular attention to the challenges of semantically annotating specimens with images.
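
To make the combination concrete, here is a minimal Python sketch of the two conventions the abstract brings together: Linked Data content negotiation on a CETAF-style specimen identifier, and retrieval of an IIIF Presentation manifest describing images of the same specimen. All URLs are hypothetical placeholders, and the manifest-walking code assumes IIIF Presentation API 3.0 ("items" canvases); real institutions choose their own identifier and manifest locations.

```python
import requests

# Hypothetical CETAF-style specimen identifier (one URI per collection object).
SPECIMEN_URI = "https://data.example-herbarium.org/specimen/B12345678"

# Content negotiation: asking for RDF should redirect to (or return) a
# machine-readable metadata record instead of the human-readable web page.
rdf_response = requests.get(
    SPECIMEN_URI,
    headers={"Accept": "application/rdf+xml"},
    allow_redirects=True,
    timeout=30,
)
print("metadata served from:", rdf_response.url)
print("content type:", rdf_response.headers.get("Content-Type"))

# IIIF: the manifest is plain JSON describing the specimen's images; any
# IIIF-aware viewer can render it.  The manifest URL here is hypothetical and
# would in practice be discovered from the specimen's RDF metadata.
manifest = requests.get(
    "https://iiif.example-herbarium.org/manifest/B12345678",
    headers={"Accept": "application/json"},
    timeout=30,
).json()

for canvas in manifest.get("items", []):        # IIIF Presentation 3.0 canvases
    print(canvas.get("id"), canvas.get("label"))
```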

