Tracking the critical points of curves evolving under planar curvature flows

2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Eszter Fehér ◽  
Gábor Domokos ◽  
Bernd Krauskopf

We are concerned with the evolution of planar, star-like curves and associated shapes under a broad class of curvature-driven geometric flows, which we refer to as the Andrews-Bloore flow. This family of flows has two parameters that control one constant and one curvature-dependent component for the velocity in the direction of the normal to the curve. The Andrews-Bloore flow includes as special cases the well known Eikonal, curve-shortening and affine shortening flows, and for positive parameter values its evolution shrinks the area enclosed by the curve to zero in finite time. A question of key interest has been how various shape descriptors of the evolving shape behave as this limit is approached. Star-like curves (which include convex curves) can be represented by a periodic scalar polar distance function $r(\varphi)$ measured from a reference point, which may or may not be fixed. An important question is how the numbers and the trajectories of critical points of the distance function $r(\varphi)$ and of the curvature $\kappa(\varphi)$ (characterized by $dr/d\varphi = 0$ and $d\kappa/d\varphi = 0$, respectively) evolve under the Andrews-Bloore flows for different choices of the parameters.

We present a numerical method that is specifically designed to meet the challenge of computing accurate trajectories of the critical points of an evolving curve up to the vicinity of a limiting shape. Each curve is represented by a piecewise polynomial periodic radial distance function, as determined by a chosen mesh; different types of meshes and mesh adaptation can be chosen to ensure a good balance between accuracy and computational cost. As we demonstrate with test-case examples and two longer case studies, our method allows one to perform numerical investigations into subtle questions of planar curve evolution. More specifically, in the spirit of experimental mathematics, we provide illustrations of some known results, numerical evidence for two stated conjectures, as well as new insights and observations regarding the limits of shapes and their critical points.
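As an illustration of the quantities involved (a minimal uniform-mesh sketch using central differences, not the adaptive piecewise-polynomial method described above), the following Python code computes the curvature of a star-like curve from its polar distance function and locates the critical points of $r(\varphi)$ and $\kappa(\varphi)$ as sign changes of the discrete derivative.

```python
import numpy as np

def periodic_derivatives(r, dphi):
    """First and second derivatives of periodic samples r(phi) by central differences."""
    rp = (np.roll(r, -1) - np.roll(r, 1)) / (2.0 * dphi)
    rpp = (np.roll(r, -1) - 2.0 * r + np.roll(r, 1)) / dphi**2
    return rp, rpp

def polar_curvature(r, dphi):
    """Curvature kappa(phi) of a star-like curve given in polar form r(phi)."""
    rp, rpp = periodic_derivatives(r, dphi)
    return (r**2 + 2.0 * rp**2 - r * rpp) / (r**2 + rp**2) ** 1.5

def critical_indices(f):
    """Grid indices where the discrete derivative of the periodic sample f changes sign."""
    df = np.roll(f, -1) - f                      # forward difference on the periodic grid
    return np.where(np.sign(df) != np.sign(np.roll(df, 1)))[0]

# Example: a smooth star-like test curve r(phi) = 1 + 0.3 cos(2 phi)
n = 512
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dphi = 2.0 * np.pi / n
r = 1.0 + 0.3 * np.cos(2.0 * phi)

kappa = polar_curvature(r, dphi)
print("critical points of r(phi):    ", phi[critical_indices(r)])
print("critical points of kappa(phi):", phi[critical_indices(kappa)])
```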

2021 ◽  
Vol 5 (3) ◽  
pp. 80
Author(s):  
Hari Mohan Srivastava ◽  
Artion Kashuri ◽  
Pshtiwan Othman Mohammed ◽  
Dumitru Baleanu ◽  
Y. S. Hamed

In this paper, the authors define a new generic class of functions involving a certain modified Fox–Wright function. A useful identity using fractional integrals and this modified Fox–Wright function with two parameters is also found. Applying this as an auxiliary result, we establish some Hermite–Hadamard-type integral inequalities by using the above-mentioned class of functions. Some special cases are derived with relevant details. Moreover, in order to show the efficiency of our main results, an application for error estimation is obtained as well.
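For context, the classical Hermite–Hadamard inequality that results of this type extend states that for a convex function $f:[a,b]\to\mathbb{R}$,
$$ f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}. $$
The inequalities above are of this form, stated for the new class of functions and in terms of fractional integrals.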


Author(s):  
Edgar Solomonik ◽  
James Demmel

Abstract In matrix-vector multiplication, matrix symmetry does not permit a straightforward reduction in computational cost. More generally, in contractions of symmetric tensors, the symmetries are not preserved in the usual algebraic form of contraction algorithms. We introduce an algorithm that reduces the bilinear complexity (number of computed elementwise products) for most types of symmetric tensor contractions. In particular, it lowers the bilinear complexity of symmetrized contractions of symmetric tensors of order $s+v$ and $v+t$ by a factor of $\frac{(s+t+v)!}{s!\,t!\,v!}$ to leading order. The algorithm computes a symmetric tensor of bilinear products, then subtracts unwanted parts of its partial sums. Special cases of this algorithm provide improvements to the bilinear complexity of the multiplication of a symmetric matrix and a vector, the symmetrized vector outer product, and the symmetrized product of symmetric matrices. While the algorithm requires more additions for each elementwise product, the total number of operations is in some cases less than that of classical algorithms, for tensors of any size. We provide a round-off error analysis of the algorithm and demonstrate that the error is not too large in practice. Finally, we provide an optimized implementation for one variant of the symmetry-preserving algorithm, which achieves speedups of up to $4.58\times$ for a particular tensor contraction, relative to a classical approach that casts the problem as a matrix-matrix multiplication.
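The following Python sketch illustrates the flavour of the approach in its simplest special case, the multiplication of a symmetric matrix by a vector; it is an illustrative reconstruction of the counting argument, not the authors' optimized implementation.

```python
import numpy as np

def sym_matvec_fewer_products(A, x):
    """y = A @ x for symmetric A using n(n+1)/2 + n scalar products instead of n^2.

    Products A[i, j] * (x[i] + x[j]) are formed only for i <= j and reused for
    rows i and j; the unwanted x[i] * (row sum) term is then subtracted.
    """
    n = len(x)
    acc = np.zeros(n)                       # accumulates sum_j A[i, j] * (x[i] + x[j])
    for i in range(n):
        for j in range(i, n):
            p = A[i, j] * (x[i] + x[j])     # one product, shared by rows i and j
            acc[i] += p
            if i != j:
                acc[j] += p
    correction = x * A.sum(axis=1)          # n further products (the row sums need only additions)
    return acc - correction

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T                                  # a symmetric test matrix
x = rng.standard_normal(5)
print(np.allclose(sym_matvec_fewer_products(A, x), A @ x))  # True
```

Multiplications are traded for additions, consistent with the leading-order factor-of-two reduction quoted above for the matrix-vector special case ($s=1$, $t=0$, $v=1$).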


2021 ◽  
Author(s):  
Samier Pierre ◽  
Raguenel Margaux ◽  
Darche Gilles

Abstract Solving the equations governing multiphase flow in geological formations involves the generation of a mesh that faithfully represents the structure of the porous medium. This challenging mesh generation task can be greatly simplified by the use of unstructured (tetrahedral) grids that conform to the complex geometric features present in the subsurface. However, running a million-cell simulation problem using an unstructured grid on a real, faulted field case remains a challenge for two main reasons. First, the workflow typically used to construct and run the simulation problems has been developed for structured grids and needs to be adapted to the unstructured case. Second, the use of unstructured grids that do not satisfy the K-orthogonality property may require advanced numerical schemes that preserve the accuracy of the results and reduce potential grid orientation effects. These two challenges are at the center of the present paper. We describe in detail the steps of our workflow to prepare and run a large-scale unstructured simulation of a real field case with faults. We perform the simulation using four different discretization schemes, including the cell-centered Two-Point and Multi-Point Flux Approximation (respectively, TPFA and MPFA) schemes, the cell- and vertex-centered Vertex Approximate Gradient (VAG) scheme, and the cell- and face-centered hybrid Mimetic Finite Difference (MFD) scheme. We compare the results in terms of accuracy, robustness, and computational cost to determine which scheme offers the best compromise for the test case considered here.
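For background (this is the standard textbook form of the scheme, not something specific to the field case above), the TPFA scheme approximates the Darcy flux across the face $f$ shared by cells $i$ and $j$ as
$$ F_{ij} = T_{ij}\,(p_i - p_j), \qquad T_{ij} = \frac{\alpha_i \alpha_j}{\alpha_i + \alpha_j}, \qquad \alpha_k = \frac{A_f\, \mathbf{n}_f \cdot \mathbf{K}_k \mathbf{d}_k}{\|\mathbf{d}_k\|^2}, $$
where $A_f$ is the face area, $\mathbf{n}_f$ the unit normal pointing out of cell $k$, $\mathbf{K}_k$ the cell permeability tensor, and $\mathbf{d}_k$ the vector from the cell centre to the face centroid. This two-point approximation is consistent only on K-orthogonal grids, which is precisely why the MPFA, VAG, and MFD schemes are considered as alternatives on general unstructured meshes.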


Author(s):  
Alessandra Cuneo ◽  
Alberto Traverso ◽  
Shahrokh Shahpar

In engineering design, uncertainty is inevitable and can cause a significant deviation in the performance of a system. Uncertainty in input parameters can be categorized into two groups: aleatory and epistemic uncertainty. The work presented here is focused on aleatory uncertainty, which can cause natural, unpredictable and uncontrollable variations in the performance of the system under study. Such uncertainty can be quantified using statistical methods, but the main obstacle is often the computational cost, because the representative model is typically highly non-linear and complex. Therefore, it is necessary to have a robust tool that can perform the uncertainty propagation with as few evaluations as possible. In the last few years, different methodologies for uncertainty propagation and quantification have been proposed. The focus of this study is to evaluate four different methods to demonstrate the strengths and weaknesses of each approach. The first method considered is Monte Carlo simulation, a sampling method that can give high accuracy but requires a relatively large computational effort. The second method is Polynomial Chaos, an approximate method in which the probabilistic parameters of the response function are modelled with orthogonal polynomials. The third method considered is the Mid-range Approximation Method. This approach is based on the assembly of multiple meta-models into one model to perform optimization under uncertainty. The fourth method is the application of the first two methods not directly to the model but to a response surface representing the model of the simulation, in order to decrease the computational cost. All these methods have been applied to a set of analytical test functions and engineering test cases. Relevant aspects of engineering design and analysis, such as a high number of stochastic variables and optimised design problems with and without stochastic design parameters, were assessed. Polynomial Chaos emerges as the most promising methodology, and was then applied to a turbomachinery test case based on a thermal analysis of a high-pressure turbine disk.
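As a toy illustration of the computational-cost trade-off discussed above (a hypothetical one-dimensional response, not one of the engineering test cases), the Python sketch below compares a Monte Carlo estimate of the mean and standard deviation with a Gauss-Hermite quadrature estimate of the kind underlying non-intrusive Polynomial Chaos: for a smooth response, the quadrature needs only a handful of model evaluations.

```python
import numpy as np

def f(x):
    """Toy nonlinear response with a single uncertain input x ~ N(0, 1)."""
    return np.exp(0.3 * x) + 0.5 * x**2

# Monte Carlo: accuracy ~ 1/sqrt(N), so many model evaluations are needed.
rng = np.random.default_rng(1)
samples = f(rng.standard_normal(100_000))
print("MC mean/std:        ", samples.mean(), samples.std())

# Gauss-Hermite quadrature: 8 model evaluations for a smooth 1-D response.
nodes, weights = np.polynomial.hermite.hermgauss(8)   # weight exp(-x^2)
vals = f(np.sqrt(2.0) * nodes)                          # change of variables to N(0, 1)
mean = np.sum(weights * vals) / np.sqrt(np.pi)
second = np.sum(weights * vals**2) / np.sqrt(np.pi)
print("Quadrature mean/std:", mean, np.sqrt(second - mean**2))
```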


Author(s):  
Ioannis K. Argyros ◽  
Santhosh George

Abstract We present a local convergence analysis of an inexact Gauss-Newton-like method (IGNLM) for solving nonlinear least-squares problems in a Euclidean space setting. The convergence analysis is based on our new idea of restricted convergence domains. Using this idea, we obtain more precise information on the location of the iterates than in earlier studies, leading to smaller majorizing functions. In this way, and at the same computational cost as in earlier studies, our approach has the following advantages: a larger radius of convergence and more precise estimates on the distances involved in achieving a desired error tolerance. That is, we have a larger choice of initial points, and fewer iterations are needed to achieve the error tolerance. Special cases and numerical examples are also presented to show these advantages.
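For orientation, a Gauss-Newton-like iteration for the nonlinear least-squares problem $\min_x \|F(x)\|^2$ has the form
$$ x_{k+1} = x_k - \bigl(J(x_k)^{\top} J(x_k)\bigr)^{-1} J(x_k)^{\top} F(x_k), $$
where $J$ denotes the Jacobian of $F$; in the inexact variant, the linear system at each step is solved only approximately. The analysis above concerns the region of starting points from which such an iteration is guaranteed to converge and the sharpness of the accompanying error estimates.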


1952 ◽  
Vol 19 (3) ◽  
pp. 263-266
Author(s):  
Ti-Chiang Lee

Abstract This paper presents an analytic solution of the stresses in a rotating disk of variable thickness. By introducing two parameters, the profile of the disk is assumed to vary exponentially with any power of the radial distance from the center of the disk. In some respects this solution may be considered as a generalization of Malkin’s solution, but it differs essentially from the latter in the method of solution. Here, the stresses are solved through a stress function instead of being solved directly. The required stress function is expressed in terms of confluent hypergeometric functions. Numerical examples are also shown for illustration.
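As a point of reference (our reading of the two-parameter assumption; the notation below is illustrative rather than that of the paper), a thickness profile of the form
$$ h(r) = h_0\, e^{-q r^{n}}, $$
with parameters $q$ and $n$, varies exponentially with a power of the radial distance $r$. The radial and tangential stresses $\sigma_r$, $\sigma_\theta$ in a rotating disk of such variable thickness satisfy the standard equilibrium equation
$$ \frac{d}{dr}\bigl(h r \sigma_r\bigr) - h \sigma_\theta + \rho \omega^{2} h r^{2} = 0, $$
where $\rho$ is the density and $\omega$ the angular speed; introducing a stress function into this equation is what leads to the confluent hypergeometric solutions mentioned above.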


Author(s):  
Alexander Liefke ◽  
Peter Jaksch ◽  
Sebastian Schmitz ◽  
Vincent Marciniak ◽  
Uwe Janoske ◽  
...  

Abstract This paper shows how to use discrete CFD and FEM adjoint surface sensitivities to derive objective-based tolerances for turbine blades, instead of relying on geometric tolerances. For this purpose, a multidisciplinary adjoint evaluation tool chain is introduced to quantify the effect of real manufacturing imperfections on aerodynamic efficiency and probabilistic low cycle fatigue lifetime. Before the adjoint method is applied, a numerical validation of the CFD and FEM adjoint gradients is performed using 102 heavy-duty turbine vane scans. The results show that the relative error for adjoint CFD gradients is below 0.5%, while the relative errors of the FEM lifetime gradients are below 5%. The adjoint assessment tool chain further reduces the computational cost by around 85% for the investigated test case compared to non-linear methods. Through the application of the presented tool chain, the definition of specified objective-based tolerances becomes available as a design assessment tool and makes it possible to improve overall turbine efficiency and the accuracy of lifetime prediction.
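The cost advantage quoted above reflects a generic property of adjoint methods (stated here in general form, not as the specifics of the present tool chain): if the discretized state $u$ satisfies the residual equations $R(u, \alpha) = 0$ and $J(u, \alpha)$ is an objective such as efficiency or lifetime, then
$$ \frac{dJ}{d\alpha} = \frac{\partial J}{\partial \alpha} + \lambda^{\top} \frac{\partial R}{\partial \alpha}, \qquad \left(\frac{\partial R}{\partial u}\right)^{\!\top} \lambda = -\left(\frac{\partial J}{\partial u}\right)^{\!\top}, $$
so a single adjoint solve per objective yields the sensitivity with respect to an arbitrary number of surface perturbations $\alpha$ at once.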


2003 ◽  
Vol 11 (4) ◽  
pp. 316-344 ◽  
Author(s):  
Curtis S. Signorino

Social scientists are often confronted with theories in which one or more actors make choices over a discrete set of options. In this article, I generalize a broad class of statistical discrete choice models, with both well-known and new nonstrategic and strategic special cases. I demonstrate how to derive statistical models from theoretical discrete choice models and, in doing so, I address the statistical implications of three sources of uncertainty: agent error, private information about payoffs, and regressor error. For strategic and some nonstrategic choice models, the three types of uncertainty produce different statistical models. In these cases, misspecifying the type of uncertainty leads to biased and inconsistent estimates, and to incorrect inferences based on estimated probabilities.


Author(s):  
Jason Krawciw ◽  
Damian Martin ◽  
Paul Denman

Thermal protection of gas turbine combustors relies heavily upon the delivery of a carefully managed film of coolant air to the hot side of the combustor liner. Furthermore, improvements in engine specific fuel consumption (SFC) and the trend towards ever more aggressive engine cycles mean that greater emphasis is being placed upon more efficient use of the proportion of combustion system air made available for cooling. As a result, there is a requirement to better understand the development of cooling films deposited onto the hot side of the liner through complex effusion arrays. This study, therefore, is concerned with the prediction and measurement of the adiabatic film effectiveness of a number of engine-representative designs. A RANS-based CFD approach is used to predict film effectiveness, in which computational cost is minimised by solving first for a single coolant passage to provide high-fidelity, near-exit boundary conditions to the effusion arrays. Equivalent measurements are made for each test case using a Pressure Sensitive Paint (PSP) technique in which the oxygen-quenched fluorescence properties of the paint are employed together with nitrogen gas as a coolant simulant to determine the adiabatic film effectiveness. This study demonstrates that whilst the model under-predicts the mixing of the coolant with the mainstream flow, and hence the film development over the surface, the approach works well at quantifying the relative performance of each design.
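For reference, the measured quantity is the adiabatic film effectiveness
$$ \eta = \frac{T_\infty - T_{aw}}{T_\infty - T_c}, $$
with $T_{aw}$ the adiabatic wall temperature and $T_c$ the coolant temperature. In the PSP technique, the heat-mass transfer analogy replaces temperatures by oxygen concentrations; with an oxygen-free coolant whose molecular weight is close to that of air (hence the choice of nitrogen), this reduces to the commonly used simplified form
$$ \eta \approx 1 - \frac{(P_{O_2})_{\mathrm{fg}}}{(P_{O_2})_{\mathrm{air}}}, $$
where the two wall oxygen partial pressures are sensed by the paint with foreign-gas and with air injection, respectively (a standard description of the technique, not the specific calibration used in this study).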


Author(s):  
E. N. Dzhafarov ◽  
Ru Zhang ◽  
Janne Kujala

Most behavioural and social experiments aimed at revealing contextuality are confined to cyclic systems with binary outcomes. In quantum physics, this broad class of systems includes as special cases Klyachko–Can–Binicioglu–Shumovsky-type, Einstein–Podolsky–Rosen–Bell-type and Suppes–Zanotti–Leggett–Garg-type systems. The theory of contextuality known as contextuality-by-default allows one to define and measure contextuality in all such systems, even if there are context-dependent errors in measurements, or if something in the contexts directly interacts with the measurements. This makes the theory especially suitable for behavioural and social systems, where direct interactions of ‘everything with everything’ are ubiquitous. For cyclic systems with binary outcomes, the theory provides necessary and sufficient conditions for non-contextuality, and these conditions are known to be breached in certain quantum systems. We review several behavioural and social datasets (from polls of public opinion to visual illusions to conjoint choices to word combinations to psychophysical matching), and none of these data provides any evidence for contextuality. Our working hypothesis is that this may be a broadly applicable rule: behavioural and social systems are non-contextual, i.e. all ‘contextual effects’ in them result from the ubiquitous dependence of response distributions on the elements of contexts other than the ones to which the response is presumably or normatively directed.

