Adaptive step edge model for self-consistent training of neural network for probabilistic edge labelling

1996 · Vol 143 (1) · pp. 41
Author(s): W.C. Chen, N.A. Thacker, P.I. Rockett
PLoS ONE · 2020 · Vol 15 (2) · pp. e0229418
Author(s): Seyed Amir Hossein Hosseini, Chi Zhang, Sebastian Weingärtner, Steen Moeller, Matthias Stuber, ...

2021
Author(s): Yashas Samaga B L, Shampa Raghunathan, U. Deva Priyakumar

Engineering proteins to have desired properties by mutating amino acids at specific sites is commonplace, and such engineered proteins must remain stable to function. Experimental methods that determine stability at the throughput required to scan the protein sequence space thoroughly are laborious. To this end, many machine learning-based methods have been developed to predict thermodynamic stability changes upon mutation; these methods have been evaluated for symmetric consistency by testing them on hypothetical reverse mutations. In this work, we propose transitive data augmentation, evaluate transitive consistency, and introduce a new machine learning-based method, the first of its kind, that incorporates both symmetric and transitive properties into the architecture itself. Our method, called SCONES, is an interpretable neural network that estimates a residue's contribution towards protein stability (dG) in its local structural environment; the difference between the independently predicted contributions of the reference and mutant residues in a missense mutation is reported as the stability change (ddG). We show that this self-consistent machine learning architecture is immune to many common biases in datasets, relies less on data than existing methods, and is robust to overfitting.
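The consistency properties described in the abstract can be enforced by construction rather than by testing: if a single shared model scores each residue's stability contribution independently, and the mutation's ddG is reported as the mutant score minus the reference score, then symmetry (ddG of A→B equals minus ddG of B→A) and transitivity (ddG of A→B plus ddG of B→C equals ddG of A→C) hold automatically. A minimal sketch of this difference-of-contributions pattern, not the authors' actual SCONES implementation (the residue features and scoring function here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "network": a fixed random linear map standing in for a trained
# model that scores one residue's stability contribution in its environment.
W = rng.normal(size=8)

def contribution(residue_features):
    """Predicted stability contribution of a single residue (arbitrary units)."""
    return float(W @ residue_features)

def ddG(ref_features, mut_features):
    """Stability change of a missense mutation: difference of two
    independently predicted per-residue contributions."""
    return contribution(mut_features) - contribution(ref_features)

# Three hypothetical residue feature vectors A, B, C.
A, B, C = (rng.normal(size=8) for _ in range(3))

# Symmetric consistency holds by construction: ddG(A->B) == -ddG(B->A).
assert np.isclose(ddG(A, B), -ddG(B, A))

# Transitive consistency also holds: ddG(A->B) + ddG(B->C) == ddG(A->C).
assert np.isclose(ddG(A, B) + ddG(B, C), ddG(A, C))
```

Because both properties follow from the subtraction itself, no reverse-mutation data augmentation is needed to obtain them, which is the point of building them into the architecture.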


2019 · Vol 85 (6)
Author(s): L. Hesslow, L. Unnerfelt, O. Vallhagen, O. Embreus, M. Hoppe, ...

Integrated modelling of electron runaway requires computationally expensive kinetic models that are self-consistently coupled to the evolution of the background plasma parameters. The computational expense can be reduced by using parameterized runaway generation rates rather than solving the full kinetic problem. However, currently available generation rates neglect several important effects; in particular, they are not valid in the presence of partially ionized impurities. In this work, we construct a multilayer neural network for the Dreicer runaway generation rate which is trained on data obtained from kinetic simulations performed for a wide range of plasma parameters and impurities. The neural network accurately reproduces the Dreicer runaway generation rate obtained by the kinetic solver. By implementing it in a fluid runaway-electron modelling tool, we show that the improved generation rates lead to significant differences in the self-consistent runaway dynamics as compared to the results using the previously available formulas for the runaway generation rate.
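The train-once, evaluate-cheaply pattern the abstract describes can be sketched in a few lines: sample the expensive kinetic solver over the relevant parameter range, fit a small network to (the logarithm of) the generation rate, and let the fluid model query the network instead of the solver. The sketch below uses a synthetic Dreicer-like rate as a stand-in for real kinetic simulation data, and its network size, input (a single normalized electric-field parameter), and training setup are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a kinetic solver: a Dreicer-like generation rate
# that depends exponentially on the inverse normalized electric field.
def kinetic_rate(e_field):            # e_field ~ E/E_D, in (0, 1]
    return np.exp(-1.0 / e_field)

# Sample the "expensive" solver over the parameter range, and regress on
# the log-rate, which is far smoother than the rate itself.
x = rng.uniform(0.05, 1.0, size=(2000, 1))
y = np.log(kinetic_rate(x))

# Standardize inputs and targets so gradient descent is well conditioned.
x_m, x_s = x.mean(), x.std()
y_m, y_s = y.mean(), y.std()
xs, ys = (x - x_m) / x_s, (y - y_m) / y_s

# One-hidden-layer MLP, trained with full-batch gradient descent.
H, lr = 16, 0.05
W1 = rng.normal(scale=1.0, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 1)); b2 = np.zeros(1)
for _ in range(3000):
    h = np.tanh(xs @ W1 + b1)
    err = (h @ W2 + b2) - ys          # residual on standardized log-rate
    dW2 = h.T @ err / len(xs);  db2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1 = xs.T @ dh / len(xs);  db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def surrogate_rate(e_field):
    """Cheap parameterized generation rate a fluid solver can call
    every time step instead of the kinetic solver."""
    h = np.tanh(((np.atleast_2d(e_field) - x_m) / x_s) @ W1 + b1)
    return np.exp((h @ W2 + b2) * y_s + y_m)
```

The real problem is higher dimensional (effective charge, temperature, and partially ionized impurity content enter the inputs), but the structure is the same: all the kinetic expense is paid offline, and the coupled fluid simulation only pays the cost of a small matrix product per evaluation.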


2020 · Vol 29 (10) · pp. 105008
Author(s): P W Stokes, M J E Casey, D G Cocks, J de Urquijo, G García, ...


