Positivity Preservation of a First-Order Scheme for a Quasi-Conservative Compressible Two-Material Model

2021 ◽  
Vol 43 (4) ◽  
pp. B1029-B1055
Author(s):  
Khosro Shahbazi
1983 ◽  
Vol 50 (4a) ◽  
pp. 740-742 ◽  
Author(s):  
B. Storåkers

The classical Föppl equations, governing the deflection of plane membranes, constitute the first-order consistent approximation in the case of linear elastic material behavior. It is shown that despite the static and kinematic nonlinearities present, for arbitrary load histories a correspondence principle for viscoelastic material behavior exists if all relevant relaxation moduli are of uniform time dependence. Application of the principle is illustrated by means of a popular material model.
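The structure behind the correspondence can be sketched as follows. The particular form of the membrane equations (Airy stress function $\Phi$, deflection $w$, modulus $E$, pressure $p$, thickness $h$) is the standard Föppl reduction and is assumed here; it is not reproduced from the abstract itself.

```latex
% Elastic Föppl membrane equations (bending stiffness neglected; standard form, assumed):
\nabla^4 \Phi = -E\,\bigl(w_{,xx}\,w_{,yy} - w_{,xy}^{\,2}\bigr),
\qquad
\Phi_{,yy}\,w_{,xx} - 2\,\Phi_{,xy}\,w_{,xy} + \Phi_{,xx}\,w_{,yy} + \frac{p}{h} = 0.

% "Uniform time dependence" means every relaxation modulus factors through
% one common kernel g(t), i.e. R_i(t) = R_i(0)\, g(t). The constitutive law
% then enters only through the single hereditary substitution
E\,\varepsilon(t) \;\longrightarrow\;
E \int_0^{t} g(t-\tau)\,\dot{\varepsilon}(\tau)\,\mathrm{d}\tau,

% so the spatial boundary-value problem is unchanged and the elastic
% solution carries over for arbitrary load histories.
```

Since the nonlinearities are purely geometric (products of derivatives of $w$ and $\Phi$) while the material enters linearly through $E$, the common kernel $g$ can be factored out of the field equations, which is why the principle survives the nonlinearity.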


2013 ◽  
Vol 194 (3) ◽  
pp. 1473-1485 ◽  
Author(s):  
Guihua Long ◽  
Yubo Zhao ◽  
Jun Zou

Author(s):  
Sathya N. Ravi ◽  
Tuan Dinh ◽  
Vishnu Suresh Lokhande ◽  
Vikas Singh

A number of results have recently demonstrated the benefits of incorporating various constraints when training deep architectures in vision and machine learning. The advantages range from statistical generalization guarantees to better accuracy and compression. But support for general constraints within widely used libraries remains scarce, and their broader deployment within the many applications that could benefit from them remains under-explored. Part of the reason is that stochastic gradient descent (SGD), the workhorse for training deep neural networks, does not natively handle constraints with global scope well. In this paper, we revisit a classical first-order scheme from numerical optimization, Conditional Gradients (CG), which has thus far had limited applicability in training deep models. We show via rigorous analysis how various constraints can be naturally handled by modifications of this algorithm. We provide convergence guarantees and show a suite of immediate benefits: training ResNets with fewer layers but better accuracy simply by substituting in our version of CG; faster training of GANs, with 50% fewer epochs in image inpainting applications; and provably better generalization guarantees using efficiently implementable forms of recently proposed regularizers.
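The scheme the abstract builds on can be illustrated with a minimal sketch of the classical conditional-gradient (Frank-Wolfe) iteration. The quadratic objective and the l1-ball constraint below are illustrative stand-ins chosen for self-containedness, not the paper's deep-learning setting; the point is that feasibility is maintained by a linear subproblem plus a convex combination, with no projection.

```python
import numpy as np

def lmo_l1_ball(grad, radius):
    """Linear minimization oracle over the l1 ball:
    argmin over ||s||_1 <= radius of <grad, s> is a signed, scaled basis vector."""
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -radius * np.sign(grad[i])
    return s

def frank_wolfe(grad_f, x0, radius, n_iters=200):
    """Conditional-gradient iteration: each step solves a linear subproblem
    (cheap for many constraint sets) instead of projecting onto the set."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad_f(x)
        s = lmo_l1_ball(g, radius)       # extreme point of the feasible set
        gamma = 2.0 / (t + 2.0)          # standard open-loop step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Example: least squares min ||Ax - b||^2 subject to ||x||_1 <= 1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: 2.0 * A.T @ (A @ x - b)
x_star = frank_wolfe(grad, np.zeros(5), radius=1.0)
```

Because every iterate is a convex combination of points inside the constraint set, the constraint holds at every step by construction, which is what makes the method attractive when projections (or per-step constraint enforcement inside an SGD loop) are expensive.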


1973 ◽  
Vol 58 (11) ◽  
pp. 5182-5183 ◽  
Author(s):  
Walter England

2006 ◽  
Vol 56 (9) ◽  
pp. 1136-1146 ◽  
Author(s):  
O. Alvarez ◽  
E. Carlini ◽  
R. Monneau ◽  
E. Rouy
