numerical complexity
Recently Published Documents


TOTAL DOCUMENTS

47
(FIVE YEARS 5)

H-INDEX

8
(FIVE YEARS 1)

Energies ◽  
2022 ◽  
Vol 15 (2) ◽  
pp. 505
Author(s):  
Muhammad Salman Siddiqui ◽  
Muhammad Hamza Khalid ◽  
Abdul Waheed Badar ◽  
Muhammed Saeed ◽  
Taimoor Asim

The reliance on Computational Fluid Dynamics (CFD) simulations to evaluate the aerodynamic performance of small-scale wind turbines has drastically increased over time. With the rapid variability in customer demand, industrial requirements, economic constraints, and time limitations associated with the design and development of small-scale wind turbines, the trade-off between computational resources and the simulation’s numerical accuracy may vary significantly. In the context of wind turbine design and analysis, high-fidelity simulations under full geometric and numerical complexity are more accurate but pose significant demands from a computational standpoint. There is a need to understand and quantify how much the predicted performance deteriorates when the geometric or numerical fidelity is reduced for a single small-scale turbine model. In the present work, the flow past a small-scale Horizontal Axis Wind Turbine (HAWT) was simulated under various geometric and numerical configurations. The geometric complexity was varied between stationary and rotating turbine conditions. In the stationary case, a simple 2D airfoil, a 2.5D blade, and 3D blade sections are evaluated, while rotational effects are introduced for the 3D blade, the rotor-only configuration, and the full-scale wind turbine with and without the inclusion of a nacelle and tower. In terms of numerical complexity, the Single Reference Frame (SRF), Multiple Reference Frames (MRF), and Sliding Mesh Interface (SMI) approaches are analyzed over Tip Speed Ratios (TSR) of 3, 6, and 10. The aerodynamic coefficients of the blade (Cl, Cd) and turbine (Cp, Ct) were quantified, along with a discussion of wake patterns, in comparison with experimental data.
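
The non-dimensional quantities compared in this abstract are standard rotor-aerodynamics definitions. As a reminder, a minimal sketch of the tip speed ratio and the rotor power and thrust coefficients (the function names and the numbers in the example are illustrative, not taken from the paper):

```python
import math

def tip_speed_ratio(omega, radius, wind_speed):
    """TSR: blade-tip speed (rad/s times radius) over free-stream wind speed."""
    return omega * radius / wind_speed

def power_coefficient(power, rho, radius, wind_speed):
    """Cp: extracted power normalised by the kinetic power of the wind
    passing through the rotor disc."""
    area = math.pi * radius ** 2
    return power / (0.5 * rho * area * wind_speed ** 3)

def thrust_coefficient(thrust, rho, radius, wind_speed):
    """Ct: axial thrust normalised by dynamic pressure times rotor area."""
    area = math.pi * radius ** 2
    return thrust / (0.5 * rho * area * wind_speed ** 2)

# Hypothetical small-scale rotor: 0.75 m radius, 40 rad/s, 5 m/s wind
tsr = tip_speed_ratio(omega=40.0, radius=0.75, wind_speed=5.0)  # -> 6.0
```
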


Author(s):  
H. S. Tavares ◽  
L. Biferale ◽  
M. Sbragaglia ◽  
A. A. Mailybaev

We develop a multicomponent lattice Boltzmann (LB) model for two-dimensional Rayleigh–Taylor turbulence with a Shan–Chen pseudopotential, implemented on GPUs. In the immiscible case, this method is able to accurately overcome the inherent numerical complexity caused by the complicated structure of the interface that appears in the fully developed turbulent regime. The accuracy of the LB model is tested for both the early and late stages of the instability. For the developed turbulent motion, we analyse the balance between the different terms describing variations of the kinetic and potential energies. We then analyse the role of the interface in the energy balance, as well as the effect of interface-induced vorticity on the energy dissipation. Statistical properties are compared for miscible and immiscible flows. Our results can also be considered a first validation step towards extending the LB model to three-dimensional immiscible Rayleigh–Taylor turbulence. This article is part of the theme issue ‘Progress in mesoscale methods for fluid dynamics simulation’.
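
The Shan–Chen pseudopotential mentioned above couples phases through a density-dependent interaction force. As a rough, generic sketch of the single-component D2Q9 version on a periodic grid (the paper's multicomponent GPU implementation is more involved; the exponential pseudopotential and coupling constant G below are common textbook choices, not the authors' exact settings):

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights
C = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def shan_chen_force(rho, G=-5.0, rho0=1.0):
    """Pseudopotential interaction force F(x) = -G psi(x) sum_i w_i psi(x+c_i) c_i
    on a periodic 2D grid.  G < 0 gives attraction and, above a critical
    |G|, phase separation (the immiscible regime)."""
    psi = rho0 * (1.0 - np.exp(-rho / rho0))   # common pseudopotential choice
    fx = np.zeros_like(rho)
    fy = np.zeros_like(rho)
    for (cx, cy), w in zip(C, W):
        # psi evaluated at the neighbouring node x + c_i (periodic wrap)
        shifted = np.roll(np.roll(psi, -cx, axis=0), -cy, axis=1)
        fx += w * shifted * cx
        fy += w * shifted * cy
    return -G * psi * fx, -G * psi * fy
```

For a uniform density field the neighbour contributions cancel and the force vanishes, as it should.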


OR Spectrum ◽  
2021 ◽  
Author(s):  
Nathan Sudermann-Merx ◽  
Steffen Rebennack

Abstract: The design of regression models that are not affected by outliers is an important task which has been the subject of numerous papers within the statistics community over the last decades. Prominent examples of robust regression models are least trimmed squares (LTS), where the k largest squared deviations are ignored, and least trimmed absolute deviations (LTA), which ignores the k largest absolute deviations. The numerical complexity of both models is driven by the number of binary variables and by the value k of ignored deviations. We introduce leveraged least trimmed absolute deviations (LLTA), which exploits the fact that LTA is already immune to y-outliers. LLTA therefore only has to be guarded against outlying values in x, so-called leverage points, which, in contrast to y-outliers, can be computed beforehand. Thus, while the mixed-integer formulations of LTS and LTA have as many binary variables as data points, LLTA needs only one binary variable per leverage point, resulting in a significant reduction of binary variables. Based on 11 data sets from the literature, we demonstrate that (1) LLTA’s prediction quality improves much faster than that of LTS, and as fast as that of LTA, for increasing values of k, and (2) LLTA solves the benchmark problems about 80 times faster than LTS and about five times faster than LTA, in median.
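
The trimming idea behind LTS/LTA can be illustrated with a simple local search: alternately fit, drop the k largest residuals, and refit. Note that the paper solves these problems exactly via mixed-integer optimization; the sketch below is only a concentration-step style heuristic with illustrative data:

```python
import numpy as np

def trimmed_fit(X, y, k, n_iter=20):
    """Heuristic trimmed regression: repeatedly refit on the n - k points
    with the smallest absolute residuals.  This is an illustrative local
    search, not the exact mixed-integer formulation used in the paper."""
    Xd = np.column_stack([np.ones(len(y)), X])        # add intercept column
    keep = np.arange(len(y))                          # start from all points
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    for _ in range(n_iter):
        resid = np.abs(y - Xd @ beta)
        new_keep = np.argsort(resid)[: len(y) - k]    # drop k largest deviations
        beta = np.linalg.lstsq(Xd[new_keep], y[new_keep], rcond=None)[0]
        if set(new_keep) == set(keep):                # subset stabilised
            break
        keep = new_keep
    return beta, np.sort(keep)

# Line y = 2x + 1 with two gross y-outliers; trimming k = 2 recovers the line
x = np.linspace(0.0, 1.0, 30)
y = 1.0 + 2.0 * x
y[3] += 50.0
y[17] -= 40.0
beta, inliers = trimmed_fit(x, y, k=2)
```
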


Mathematics ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 123
Author(s):  
Pavel Loskot

The paper investigates the problem of performing a correlation analysis when the number of observations is large. In such a case, it is often necessary to combine random observations to achieve dimensionality reduction of the problem. A novel class of statistical measures is obtained by approximating the Taylor expansion of a general multivariate scalar symmetric function by a univariate polynomial in the variable given as a simple sum of the original random variables. The mean value of the polynomial is then a weighted sum of statistical central sum-moments, with the weights being application dependent. Computing the sum-moments is computationally efficient and amenable to mathematical analysis, provided that the distribution of the sum of random variables can be obtained. Among the several auxiliary results obtained, the first-order sum-moments, corresponding to sample means, are used to reduce the numerical complexity of linear regression by partitioning the data into disjoint subsets. The illustrative examples provided assume first- and second-order Markov processes.
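
The use of first-order sum-moments (block means) to cut the cost of linear regression can be sketched as follows; the partitioning scheme and example are illustrative, not the paper's exact construction:

```python
import numpy as np

def partitioned_linear_fit(x, y, n_blocks):
    """Reduce an n-point simple linear regression to n_blocks points by
    replacing each disjoint subset with its sample mean (a first-order
    sum-moment).  For data on an exact line, the block means lie on the
    same line, so the fit is preserved at a fraction of the cost."""
    xb = np.array([b.mean() for b in np.array_split(x, n_blocks)])
    yb = np.array([b.mean() for b in np.array_split(y, n_blocks)])
    slope, intercept = np.polyfit(xb, yb, 1)
    return slope, intercept

x = np.linspace(0.0, 10.0, 1000)
y = 3.0 * x - 2.0
slope, intercept = partitioned_linear_fit(x, y, n_blocks=10)  # -> 3.0, -2.0
```
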


Econometrica ◽  
2021 ◽  
Vol 89 (4) ◽  
pp. 1699-1715 ◽  
Author(s):  
Ilya Archakov ◽  
Peter Reinhard Hansen

We introduce a novel parametrization of the correlation matrix. The reparametrization facilitates modeling of correlation and covariance matrices by an unrestricted vector, where positive definiteness is an innate property. This parametrization can be viewed as a generalization of Fisher's Z-transformation to higher dimensions and has a wide range of potential applications. An algorithm for reconstructing the unique n × n correlation matrix from any vector in R^{n(n-1)/2} is provided, and we derive its numerical complexity.
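
The reconstruction can be sketched as follows: place the vector on the off-diagonal of a symmetric matrix and adjust its diagonal by a fixed-point iteration until the matrix exponential has a unit diagonal. A minimal numpy sketch (the iteration count and tolerance are arbitrary choices, and this is a simplified reading of the published algorithm):

```python
import numpy as np

def _expm_sym(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def corr_from_vector(gamma, n, n_iter=100, tol=1e-12):
    """Map an unrestricted vector gamma in R^{n(n-1)/2} to a correlation
    matrix: gamma fills the off-diagonal of a symmetric matrix G, and the
    diagonal x is adjusted so that expm(G) has ones on its diagonal."""
    G = np.zeros((n, n))
    iu = np.triu_indices(n, k=1)
    G[iu] = gamma
    G = G + G.T
    x = np.zeros(n)
    for _ in range(n_iter):
        G[np.diag_indices(n)] = x
        d = np.diag(_expm_sym(G))       # current diagonal of expm(G); > 0
        x_new = x - np.log(d)           # push the diagonal towards 1
        converged = np.max(np.abs(x_new - x)) < tol
        x = x_new
        if converged:
            break
    G[np.diag_indices(n)] = x
    return _expm_sym(G)

C = corr_from_vector(np.array([0.3, -0.2, 0.5]), n=3)
# C is symmetric positive definite with a unit diagonal
```
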


2020 ◽  
Vol 497 (4) ◽  
pp. 4937-4955
Author(s):  
Hendrik Müller ◽  
Christoph Behrens ◽  
David J E Marsh

ABSTRACT We present a same-level comparison of the most prominent inversion methods for reconstructing the matter density field in the quasi-linear regime from the Lyα forest flux. Moreover, we present a pathway for refining the reconstruction in the framework of numerical optimization, and we apply this approach to construct a novel hybrid method. The methods used so far for matter reconstruction are the Richardson–Lucy algorithm, an iterative Gauss–Newton method, and a statistical approach assuming a one-to-one correspondence between matter and flux. We study these methods at high spectral resolutions, such that thermal broadening becomes relevant. The inversion methods are compared on synthetic data (generated with the lognormal approach) with respect to their performance, accuracy, stability against noise, and robustness against systematic uncertainties. We conclude that the iterative Gauss–Newton method offers the most accurate reconstruction, in particular at small S/N, but also has the largest numerical complexity and requires the strongest assumptions. The other two algorithms are faster, comparably precise at low noise levels, and, in the case of the statistical approach, more robust against inaccurate assumptions on the thermal history of the intergalactic medium (IGM). We use these results to refine the statistical approach using regularization. Our new approach has low numerical complexity, makes few assumptions about the history of the IGM, and is shown to be the most accurate reconstruction at small S/N, even if the thermal history of the IGM is not known. Our code will be made publicly available.
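
Of the three inversion methods compared, the Richardson–Lucy algorithm is the easiest to sketch. Below is the textbook 1D deconvolution form, which only illustrates the iteration itself; the paper's forward model for the Lyα flux is considerably more complex:

```python
import numpy as np

def richardson_lucy(data, kernel, n_iter=50):
    """Generic Richardson-Lucy iteration for recovering a non-negative
    signal observed through a convolution kernel.  Each step multiplies the
    current estimate by the back-projected ratio of data to re-blurred
    estimate, preserving non-negativity by construction."""
    kernel = kernel / kernel.sum()
    kernel_flipped = kernel[::-1]                 # adjoint of the convolution
    estimate = np.full_like(data, data.mean())    # flat initial guess
    for _ in range(n_iter):
        forward = np.convolve(estimate, kernel, mode="same")
        ratio = data / np.maximum(forward, 1e-12)  # guard against divide-by-zero
        estimate *= np.convolve(ratio, kernel_flipped, mode="same")
    return estimate
```
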


2020 ◽  
Vol 142 (3) ◽  
Author(s):  
Anton van Beek ◽  
Siyu Tao ◽  
Matthew Plumlee ◽  
Daniel W. Apley ◽  
Wei Chen

Abstract The cost of adaptive sampling for global metamodeling depends on the total number of costly function evaluations and on the degree to which these evaluations are performed in parallel. Conventionally, samples are taken through a greedy sampling strategy that is optimal for either a single sample or a handful of samples. The limitation of such approaches is that they compromise optimality when more samples are taken. In this paper, we propose a thrifty adaptive batch sampling (TABS) approach that maximizes a multistage reward function to find an optimal sampling policy comprising the total number of sampling stages, the number of samples per stage, and the spatial location of each sample. Consequently, the first batch identified by TABS is optimal with respect to all potential future samples and the available resources, and is consistent with a modeler’s preference and risk attitude. Moreover, we propose two heuristic-based strategies that reduce numerical complexity with a minimal reduction in optimality. Through numerical examples, we show that TABS outperforms, or is comparable with, greedy sampling strategies. In short, TABS provides modelers with a flexible adaptive sampling tool for global metamodeling that effectively reduces sampling costs while maintaining prediction accuracy.
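
The notion of a sampling policy (number of stages plus samples per stage) can be made concrete with a toy enumeration. Everything below, including the reward model in which later batches earn an "adaptivity" bonus while each extra stage pays a latency cost, is a hypothetical stand-in for the paper's multistage reward function, not its actual formulation:

```python
import math

def compositions(n, k):
    """All ordered splits of n samples into k positive batch sizes."""
    if k == 1:
        yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def best_policy(budget=12, max_stages=4, latency=1.0, adaptivity=0.3,
                cost_weight=0.2):
    """Enumerate policies and score each by an assumed reward: batches in
    later stages are weighted up (they can react to earlier results),
    while each additional stage costs fixed wall-clock latency."""
    best_reward, best_alloc = -math.inf, None
    for n_stages in range(1, max_stages + 1):
        for alloc in compositions(budget, n_stages):
            gain = sum((1.0 + adaptivity * i) * math.log1p(n)
                       for i, n in enumerate(alloc))
            reward = gain - cost_weight * latency * n_stages
            if reward > best_reward:
                best_reward, best_alloc = reward, alloc
    return best_alloc, best_reward

alloc, reward = best_policy()
```

Under this toy reward, the optimal allocation is non-decreasing across stages: samples migrate towards later, better-informed batches until the extra stage latency outweighs the adaptivity bonus.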


Author(s):  
Ravi Kumar Saidala

Clustering, one of the most attractive data analysis concepts in data mining, is frequently used by researchers to analyse data from a variety of real-world applications. It is stated in the literature that traditional clustering methods become trapped in local optima and fail to obtain optimal clusters. This research work presents the design and development of an advanced optimum clustering method for unmasking abnormal entries in a clinical dataset. The basis is NOA, a recently proposed algorithm that mimics the migration pattern of the Northern Bald Ibis (Threskiornithidae). First, we developed a variant of the standard NOA, the VNOA, by replacing its C1 and C2 parameters with chaotic maps. Later, we utilized the VNOA in the design of a new and advanced clustering method. VNOA is first benchmarked on 7 unimodal (F1–F7) and 6 multimodal (F8–F13) mathematical functions. We then tested the numerical complexity of the proposed VNOA-based clustering method on a clinical dataset and compared the obtained graphical and statistical results with well-known algorithms. The superiority of the presented clustering method is evidenced by the simulations and comparisons.
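
Replacing fixed control coefficients with a chaotic map is a common recipe in such metaheuristic variants. A minimal sketch using the logistic map (the choice of map, seed, and exactly how the values feed into NOA's C1/C2 parameters are assumptions for illustration):

```python
def logistic_map_sequence(n, x0=0.7, r=4.0):
    """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k).  With r = 4 the
    orbit spreads over (0, 1), giving deterministic but well-dispersed
    values that can stand in for fixed control coefficients."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

# One candidate coefficient value per optimizer iteration
c1_values = logistic_map_sequence(100)
```
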


2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Liang-Dong Guo ◽  
Sheng-Juan Huang ◽  
Li-Bing Wu

The problem of absolute stability analysis for neutral-type Lur’e systems with time-varying delays is investigated. Novel delay-decomposing approaches are proposed to divide the variation interval of the delay into three unequal subintervals. Some new augmented Lyapunov–Krasovskii functionals (LKFs) are defined on the obtained subintervals. The integral inequality method and the reciprocally convex technique are utilized to deal with the derivatives of the LKFs. Several improved delay-dependent criteria are derived in terms of linear matrix inequalities (LMIs). Compared with some previous criteria, the proposed ones give results with less conservatism and lower numerical complexity. Two numerical examples are included to illustrate the effectiveness and the improvement of the proposed method.

