structured objects
Recently Published Documents

TOTAL DOCUMENTS: 82 (FIVE YEARS: 3)
H-INDEX: 11 (FIVE YEARS: 0)

2021 ◽  
pp. 41-57
Author(s):  
Tatiana Matveevna Kosovskaya

The problem of knowledge representation for a complex structured object is one of the topical problems of AI. This is because many objects under study are not single indivisible entities characterized by their properties, but complex structures whose elements have known properties and stand in certain, often multiplace, relations to each other. In this paper, an approach to the representation of such knowledge based on first-order logic (predicate calculus formulas) is compared with two currently widespread approaches based on representing information by finite-valued strings or graphs. It is shown that using predicate calculus formulas to describe a complex structured object, despite the NP-hardness of the problems arising after formalization, actually incurs no greater computational complexity than the other two approaches, a fact usually not mentioned by their proponents. An algorithm for constructing an ontology is proposed that does not depend on the method of describing an object and is based on selecting the maximal common property of the objects from a given set.
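The maximal-common-property idea can be illustrated with a toy sketch (not the paper's first-order algorithm): each structured object is described as a set of ground atoms, and the common property of a set of objects is approximated by the intersection of their atom sets, ignoring the variable renaming and unification a genuinely first-order setting would require. All predicate and object names below are invented for illustration.

```python
# Toy sketch of describing structured objects as sets of ground atoms and
# extracting their common description. This is a propositional simplification;
# the paper works with first-order predicate formulas.

def describe(*atoms):
    """Build a description: a frozenset of (predicate, args) ground atoms."""
    return frozenset(atoms)

# Two toy structured objects: parts with unary properties and binary relations.
table = describe(("flat", ("top",)), ("leg", ("l1",)),
                 ("supports", ("l1", "top")))
stool = describe(("flat", ("top",)), ("leg", ("l1",)),
                 ("supports", ("l1", "top")), ("round", ("top",)))

def common_property(objects):
    """Largest description shared by every object in the set
    (a stand-in for the maximal common subformula)."""
    descriptions = iter(objects)
    common = set(next(descriptions))
    for d in descriptions:
        common &= d          # keep only atoms present in every description
    return frozenset(common)

shared = common_property([table, stool])
```

The shared description could then serve as a node of the ontology, with the residual atoms of each object attached below it.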


Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 212
Author(s):  
Titouan Vayer ◽  
Laetitia Chapel ◽  
Remi Flamary ◽  
Romain Tavenard ◽  
Nicolas Courty

Optimal transport theory has recently found many applications in machine learning thanks to its capacity to meaningfully compare machine learning objects viewed as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects but treats them independently, whereas the Gromov–Wasserstein distance focuses on the relations between the elements, depicting the structure of the object, yet discards its features. In this paper, we study the Fused Gromov–Wasserstein distance, which extends the Wasserstein and Gromov–Wasserstein distances in order to encode both the feature and the structure information simultaneously. We provide the mathematical framework for this distance in the continuous setting, prove its metric and interpolation properties, and provide a concentration result for the convergence of finite samples. We also illustrate and interpret its use in various applications where structured objects are involved.
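To make the fused objective concrete, here is an illustrative brute-force sketch, not the paper's solver: Fused Gromov–Wasserstein restricted to one-to-one matchings of two tiny graphs with uniform node weights, where `alpha` interpolates between the feature (Wasserstein) term and the structure (Gromov–Wasserstein) term. The graphs and features below are made up.

```python
# Brute-force Fused Gromov-Wasserstein over permutation couplings (uniform
# weights). Only feasible for tiny graphs; real solvers use conditional
# gradient or entropic schemes over general couplings.
from itertools import permutations
import numpy as np

def fgw_perm(X1, C1, X2, C2, alpha=0.5):
    """Minimize (1-alpha) * feature cost + alpha * structure cost over all
    one-to-one matchings pi of the n nodes."""
    n = len(X1)
    best = float("inf")
    for pi in permutations(range(n)):
        # feature term: squared distances between matched node features
        feat = sum(np.sum((X1[i] - X2[pi[i]]) ** 2) for i in range(n)) / n
        # structure term: discrepancy between pairwise relation matrices
        struct = sum((C1[i, j] - C2[pi[i], pi[j]]) ** 2
                     for i in range(n) for j in range(n)) / n ** 2
        best = min(best, (1 - alpha) * feat + alpha * struct)
    return best

# A 3-node star graph with scalar node features, and a relabeled copy of it.
X = np.array([[0.0], [1.0], [2.0]])
C = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
idx = [2, 0, 1]
X_rel, C_rel = X[idx], C[np.ix_(idx, idx)]
```

The distance is zero between a graph and any relabeling of itself (the matching recovers the permutation), which is the invariance the Gromov term contributes; with `alpha=0` the sketch reduces to a pure feature-matching (Wasserstein-style) cost.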


2020 ◽  
Author(s):  
Kevin Yang ◽  
Wengong Jin ◽  
Kyle Swanson ◽  
Regina Barzilay ◽  
Tommi S Jaakkola

Generative models in molecular design tend to be richly parameterized, data-hungry neural models, as they must create complex structured objects as outputs. Estimating such models from data may be challenging due to the lack of sufficient training data. In this paper, we propose a surprisingly effective self-training approach for iteratively creating additional molecular targets. We first pre-train the generative model together with a simple property predictor. The property predictor is then used as a likelihood model for filtering candidate structures from the generative model. Additional targets are iteratively produced and used in the course of stochastic EM iterations to maximize the log-likelihood that the candidate structures are accepted. A simple rejection (re-weighting) sampler suffices to draw posterior samples since the generative model is already reasonable after pre-training. We demonstrate significant gains over strong baselines for both unconditional and conditional molecular design. In particular, our approach outperforms the previous state-of-the-art in conditional molecular design by over 10% in absolute gain.
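The filtering loop described above can be sketched as follows; the generator, property predictor, training step, and acceptance threshold are toy stand-ins for illustration, not the paper's models or its stochastic-EM machinery.

```python
# Hedged sketch of self-training with a property-predictor filter: sample
# candidates from the generator, keep those the predictor accepts, add them
# to the training pool, and re-fit the generator on the grown pool.
import random

def self_train(generate, predict, train, pool, rounds=3, n_candidates=50,
               threshold=0.8):
    for _ in range(rounds):
        # 1. sample candidate structures from the current generative model
        candidates = [generate() for _ in range(n_candidates)]
        # 2. rejection filter: keep candidates the property predictor accepts
        accepted = [c for c in candidates if predict(c) >= threshold]
        # 3. grow the pool of targets and re-fit the generator on it
        pool.extend(accepted)
        train(pool)
    return pool

# Toy stand-ins: "molecules" are floats, the predictor scores them directly,
# and training is a no-op.
random.seed(0)
result = self_train(generate=random.random, predict=lambda c: c,
                    train=lambda data: None, pool=[])
```

Because pre-training already makes the generator reasonable, such a simple accept/reject filter suffices in place of a heavier posterior sampler.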



2019 ◽  
Vol 100 (4) ◽  
Author(s):  
Wuhong Zhang ◽  
Dongkai Zhang ◽  
Xiaodong Qiu ◽  
Lixiang Chen

2019 ◽  
Vol 21 (8) ◽  
pp. 1900227
Author(s):  
Werner A. Goedel ◽  
Kerstin Gläser ◽  
Dana Mitra ◽  
Robert Thalheim ◽  
Peter Ueberfuhr ◽  
...  
