Correction to “Fast and Accurate Molecular Property Prediction: Learning Atomic Interactions and Potentials with Neural Networks”

2019 ◽ Vol 10 (9) ◽ pp. 2066-2067
Author(s): Masashi Tsubaki, Teruyasu Mizoguchi
2020 ◽ Vol 60 (8) ◽ pp. 3770-3780
Author(s): Lior Hirschfeld, Kyle Swanson, Kevin Yang, Regina Barzilay, Connor W. Coley

Author(s): Oliver Wieder, Stefan Kohlbacher, Mélaine Kuenemann, Arthur Garon, Pierre Ducrot, ...

2021
Author(s): Agnieszka Pocha, Tomasz Danel, Sabina Podlewska, Jacek Tabor, Lukasz Maziarka

2021 ◽ Vol 2021 ◽ pp. 1-7
Author(s): Juncai Li, Xiaofei Jiang

Molecular property prediction is an essential task in drug discovery. Most computational approaches based on deep learning focus either on designing novel molecular representations or on combining them with advanced models. However, researchers have paid less attention to the potential benefits of massive unlabeled molecular data (e.g., ZINC). The task becomes increasingly challenging owing to the limited scale of labeled data. Motivated by recent advances in pretrained models for natural language processing, and by the observation that drug molecules can, to some extent, be naturally viewed as a language, we investigate how to adapt the pretrained model BERT to extract useful molecular substructure information for molecular property prediction. We present a novel end-to-end deep learning framework, named Mol-BERT, that combines an effective molecular representation with a pretrained BERT model tailored for molecular property prediction. Specifically, a large-scale BERT model is pretrained to generate embeddings of molecular substructures, using four million unlabeled drug SMILES strings (from ZINC 15 and ChEMBL 27). The pretrained BERT model can then be fine-tuned on various molecular property prediction tasks. To examine the performance of our proposed Mol-BERT, we conduct experiments on four widely used molecular datasets. Compared with traditional and state-of-the-art baselines, the results show that Mol-BERT outperforms current sequence-based methods, achieving at least a 2% improvement in ROC-AUC score on the Tox21, SIDER, and ClinTox datasets.
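The pipeline described above starts by splitting SMILES strings into substructure tokens and mapping them to an integer vocabulary before BERT-style pretraining. As a minimal sketch of that first step only (assuming a regex-based atom-level tokenizer of the kind commonly used in SMILES language models; `tokenize_smiles`, `encode`, and the vocabulary scheme here are illustrative, not Mol-BERT's actual substructure vocabulary):

```python
import re

# Regex splitting a SMILES string into atom-level tokens: bracketed atoms,
# two-letter elements, ring-closure digits, bond and branch symbols.
# (Illustrative pattern; it does not cover the full SMILES grammar.)
SMILES_TOKEN_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|Se|@@|@|[BCNOPSFIbcnops]|"
    r"%[0-9]{2}|[0-9]|\(|\)|=|#|\+|-|/|\\|\.)"
)

def tokenize_smiles(smiles):
    """Split a SMILES string into tokens for a BERT-style model."""
    tokens = SMILES_TOKEN_PATTERN.findall(smiles)
    # Round-trip check: the tokenizer must not drop any characters.
    assert "".join(tokens) == smiles, "unrecognized SMILES characters"
    return tokens

def encode(smiles, vocab):
    """Map tokens to integer ids, growing the vocabulary as needed."""
    return [vocab.setdefault(tok, len(vocab))
            for tok in tokenize_smiles(smiles)]

# Example: tokenize aspirin and build a small vocabulary from ethanol.
aspirin_tokens = tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O")
vocab = {}
ethanol_ids = encode("CCO", vocab)
```

The resulting id sequences (plus special tokens such as `[CLS]` and `[MASK]`) would feed the masked-language-model pretraining stage; the fine-tuning stage then attaches a classification head for each property prediction task.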


Methods ◽ 2020 ◽ Vol 179 ◽ pp. 65-72
Author(s): Jeonghee Jo, Bumju Kwak, Hyun-Soo Choi, Sungroh Yoon
