(2n-1)-Point Ternary Approximating and Interpolating Subdivision Schemes

2011 ◽  
Vol 2011 ◽  
pp. 1-13 ◽  
Author(s):  
Muhammad Aslam ◽  
Ghulam Mustafa ◽  
Abdul Ghaffar

We present an explicit formula which unifies the masks of (2n-1)-point ternary interpolating and approximating subdivision schemes. We observe that the odd-point ternary interpolating and approximating schemes introduced by Lian (2009), Siddiqi and Rehan (2010, 2009), and Hassan and Dodgson (2003) are special cases of our proposed masks/schemes. Moreover, the schemes introduced by Zheng et al. (2009) can easily be generated from our proposed masks. It is also shown by comparison that (2n-1)-point schemes are better than 2n-point schemes in terms of computational cost, support, and error bounds.
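To make the refinement rule concrete, the sketch below performs one ternary subdivision step for an arbitrary mask. The mask shown is the ternary linear B-spline mask [1, 2, 3, 2, 1]/3, the simplest interpolating special case; any of the unified (2n-1)-point ternary masks would be applied in exactly the same way. The helper name is ours, not the paper's.

```python
import numpy as np

def ternary_refine(poly, mask, center=2):
    """One ternary (arity-3) refinement step: q[l] = sum_j a[l-3j] p[j],
    where a[0] sits at mask[center]. Implemented as upsample-by-3
    followed by convolution with the mask."""
    poly = np.asarray(poly, dtype=float)
    up = np.zeros((3 * len(poly), poly.shape[1]))
    up[::3] = poly                                    # two zeros between points
    cols = [np.convolve(up[:, d], mask) for d in range(poly.shape[1])]
    full = np.stack(cols, axis=1)
    return full[center:center + 3 * len(poly) - 2]    # drop boundary overhang

# Ternary linear B-spline mask -- the simplest interpolating special case;
# the unified (2n-1)-point masks would plug in identically.
mask = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 3.0
poly = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0], [4.0, 3.0]])
for _ in range(4):
    poly = ternary_refine(poly, mask)   # refined polygons approach the limit curve
```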

2012 ◽  
Vol 2012 ◽  
pp. 1-20 ◽  
Author(s):  
Ghulam Mustafa ◽  
Jiansong Deng ◽  
Pakeeza Ashraf ◽  
Najma Abdul Rehman

We present an explicit formula for the masks of odd-point n-ary interpolating subdivision schemes, for any odd n ⩾ 3. This formula provides the masks of both lower- and higher-arity schemes. The 3-point and 5-point a-ary schemes introduced by Lian (2008), and the (2m+1)-point a-ary schemes introduced by Lian (2009), are special cases of our explicit formula. Moreover, other well-known existing odd-point n-ary schemes, including those introduced by Zheng et al. (2009), can easily be generated by our formula. In addition, error bounds between the subdivision curves and the control polygons of the schemes are computed. We observe that the error bounds decrease when the complexity of the scheme decreases, and vice versa; the error bounds also decrease as the arity of the scheme increases. Furthermore, we present a brief comparison of the total absolute curvature of subdivision schemes of different arities and complexities. The convexity preservation property of the schemes is also presented.
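The error bounds between subdivision curves and control polygons can be probed numerically: the distance from each refined polygon to its predecessor decays geometrically, and the geometric sum of these deviations bounds the distance to the limit curve. The sketch below reuses the hypothetical ternary_refine helper from the earlier sketch, now with the ternary quadratic B-spline mask [1, 3, 6, 7, 6, 3, 1]/9 (a 3-point approximating special case); the paper derives such bounds in closed form from the mask coefficients rather than measuring them.

```python
import numpy as np

def max_deviation(old, new, skip=4):
    """Max distance from interior refined points to the previous control
    polygon (point-to-nearest-segment distance)."""
    def dist_to_segment(p, a, b):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))
    return max(min(dist_to_segment(p, old[i], old[i + 1])
                   for i in range(len(old) - 1)) for p in new[skip:-skip])

mask_a = np.array([1.0, 3.0, 6.0, 7.0, 6.0, 3.0, 1.0]) / 9.0  # ternary quadratic B-spline
poly = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0], [4.0, 3.0], [6.0, 0.0]])
for level in range(4):
    refined = ternary_refine(poly, mask_a, center=3)  # helper from the sketch above
    print(level, max_deviation(poly, refined))        # deviations decay geometrically
    poly = refined
```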


Author(s):  
Edgar Solomonik ◽  
James Demmel

Abstract In matrix-vector multiplication, matrix symmetry does not permit a straightforward reduction in computational cost. More generally, in contractions of symmetric tensors, the symmetries are not preserved in the usual algebraic form of contraction algorithms. We introduce an algorithm that reduces the bilinear complexity (number of computed elementwise products) for most types of symmetric tensor contractions. In particular, it lowers the bilinear complexity of symmetrized contractions of symmetric tensors of order s+v and v+t by a factor of (s+t+v)!/(s!t!v!) to leading order. The algorithm computes a symmetric tensor of bilinear products, then subtracts unwanted parts of its partial sums. Special cases of this algorithm provide improvements to the bilinear complexity of the multiplication of a symmetric matrix and a vector, the symmetrized vector outer product, and the symmetrized product of symmetric matrices. While the algorithm requires more additions for each elementwise product, the total number of operations is in some cases less than that of classical algorithms, for tensors of any size. We provide a round-off error analysis of the algorithm and demonstrate that the error is not too large in practice. Finally, we provide an optimized implementation for one variant of the symmetry-preserving algorithm, which achieves speedups of up to 4.58× for a particular tensor contraction, relative to a classical approach that casts the problem as a matrix-matrix multiplication.
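As a concrete instance of the construction described above, here is a minimal numpy sketch of the symmetric matrix-vector case (s = 1, t = 0, v = 1, so the leading-order reduction factor is (s+t+v)!/(s!t!v!) = 2): it uses n(n+1)/2 + n elementwise products instead of n². Function names are ours, not from the paper's implementation.

```python
import numpy as np

def symv_symmetry_preserving(A, b):
    """c = A @ b for symmetric A using n(n+1)/2 + n elementwise products
    rather than n^2: form the symmetric products Z_ij = A_ij (b_i + b_j),
    accumulate each into both partial sums, then subtract the unwanted
    part b_i * rowsum_i (the 'subtract unwanted partial sums' step)."""
    n = len(b)
    c = np.zeros(n)
    for i in range(n):
        for j in range(i, n):
            z = A[i, j] * (b[i] + b[j])    # one bilinear product, used twice
            c[i] += z
            if j != i:
                c[j] += z
    rowsum = A.sum(axis=1)                 # additions only, no products
    return c - rowsum * b                  # n further products

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A + A.T   # make A symmetric
b = rng.standard_normal(5)
assert np.allclose(symv_symmetry_preserving(A, b), A @ b)
```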


2019 ◽  
Vol 17 (1) ◽  
pp. 1599-1614
Author(s):  
Zhiwu Hou ◽  
Xia Jing ◽  
Lei Gao

Abstract A new error bound for the linear complementarity problem (LCP) of Σ-SDD matrices is given, which depends only on the entries of the involved matrices. Numerical examples are given to show that the new bound is better than that provided by García-Esnaola and Peña [Linear Algebra Appl., 2013, 438, 1339–1446] in some cases. Based on the obtained results, we also give an error bound for the LCP of SB-matrices. It is proved that the new bound is sharper than that provided by Dai et al. [Numer. Algor., 2012, 61, 121–139] under certain assumptions.
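For orientation, bounds of this type control the constant in the classical Chen-Xiang error estimate ‖x − x*‖∞ ⩽ max_{d∈[0,1]^n} ‖(I − D + DM)^{-1}‖∞ · ‖min(x, Mx + q)‖∞, where D = diag(d). The sketch below brute-force samples that constant for a small matrix as a sanity check; the papers cited above instead bound it in closed form from the entries of M. All function names are ours.

```python
import numpy as np

def natural_residual(M, q, x):
    """r(x) = min(x, Mx + q); x solves LCP(M, q) iff r(x) = 0."""
    return np.minimum(x, M @ x + q)

def sampled_constant(M, n_samples=5000, seed=0):
    """Monte-Carlo lower estimate of max_{d in [0,1]^n} ||(I-D+DM)^{-1}||_inf
    for a P-matrix M -- the constant that closed-form bounds like the one in
    the paper estimate from above."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    I = np.eye(n)
    best = 0.0
    for _ in range(n_samples):
        D = np.diag(rng.random(n))
        best = max(best, np.linalg.norm(np.linalg.inv(I - D + D @ M), np.inf))
    return best

M = np.array([[4.0, -1.0], [-1.0, 3.0]])  # strictly diagonally dominant, hence a P-matrix
print(sampled_constant(M))
```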


2014 ◽  
Vol 28 (06) ◽  
pp. 1450017 ◽  
Author(s):  
RUIHU LI ◽  
GEN XU ◽  
LUOBIN GUO

In this paper, we discuss two problems on asymmetric quantum error-correcting codes (AQECCs). The first concerns the construction of a [[12, 1, 5/3]]_2 asymmetric quantum code; we show that an impure [[12, 1, 5/3]]_2 code exists. The second concerns the construction of AQECCs from binary cyclic codes; we construct many families of new asymmetric quantum codes with d_z > δ_max + 1 from binary primitive cyclic codes of length n = 2^m − 1, where δ_max = 2^⌈m/2⌉ − 1 is the maximal designed distance of a dual-containing narrow-sense BCH code of length n = 2^m − 1. A number of known codes are special cases of the codes given here. Some of these AQECCs have parameters better than those available in the literature.
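As a small numeric companion to the threshold above: a narrow-sense binary BCH code of length n = 2^m − 1 is known to contain its dual precisely when its designed distance is at most δ_max = 2^⌈m/2⌉ − 1, which is the quantity the construction exceeds. A quick tabulation (this tiny sketch is ours, purely for illustration):

```python
from math import ceil

# delta_max = 2^ceil(m/2) - 1 is the largest designed distance for which a
# narrow-sense binary BCH code of length n = 2^m - 1 contains its dual.
for m in range(4, 11):
    n = 2**m - 1
    delta_max = 2**ceil(m / 2) - 1
    print(f"m = {m:2d}   n = {n:5d}   delta_max = {delta_max:3d}")
```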


Author(s):  
Ioannis K. Argyros ◽  
Santhosh George

Abstract We present a local convergence analysis of an inexact Gauss-Newton-like method (IGNLM) for solving nonlinear least-squares problems in a Euclidean space setting. The convergence analysis is based on our new idea of restricted convergence domains. Using this idea, we obtain more precise information on the location of the iterates than in earlier studies, leading to smaller majorizing functions. In this way, and at the same computational cost as in earlier studies, our approach has the following advantages: a larger radius of convergence and more precise estimates on the distances involved in attaining a desired error tolerance. That is, we have a larger choice of initial points, and fewer iterations are needed to achieve the error tolerance. Special cases and numerical examples are also presented to illustrate these advantages.
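For context, a plain Gauss-Newton iteration for nonlinear least squares looks as follows; the IGNLM analysed above solves the linearized subproblem only inexactly, whereas this sketch solves it exactly and serves only as a baseline illustration (all names are ours).

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Plain Gauss-Newton for min ||F(x)||^2: solve J(x) dx ≈ -F(x) in the
    least-squares sense and update. An inexact variant would solve this
    inner system only approximately."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        dx, *_ = np.linalg.lstsq(J(x), -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy residual: fit y = exp(a*t) to data, unknown x = [a]
t = np.linspace(0.0, 1.0, 20); y = np.exp(0.7 * t)
F = lambda x: np.exp(x[0] * t) - y
J = lambda x: (t * np.exp(x[0] * t))[:, None]
print(gauss_newton(F, J, np.array([0.0])))   # converges to about [0.7]
```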


1988 ◽  
Vol 41 (3) ◽  
pp. 469
Author(s):  
HJ Juretschke ◽  
HK Wagenfeld

Unless special precautions are taken, the experimental determination of two-beam structure factors to better than 1 % may include contributions from neighbouring n-beam interactions. In any particular experimental configuration, corrections for such contributions are easily carried out using the modified two-beam structure factor formalism developed recently (Juretschke 1984), once the full indexing of the pertinent n-beam interactions is known. The method is illustrated for both weak and strong primary reflections and its applicability in special cases, as well as for less than perfect crystals, is discussed.


2021 ◽  
Author(s):  
Ana Barbosa Aguiar ◽  
Jennifer Waters ◽  
Martin Price ◽  
Gordon Inverarity ◽  
Christine Pequignet ◽  
...  

The importance of oceans for atmospheric forecasts as well as climate simulations is being increasingly recognised with the advent of coupled ocean/atmosphere forecast models. Having comparable resolutions in both domains maximises the benefits for a given computational cost. The Met Office has recently upgraded its operational global ocean-only model from an eddy-permitting 1/4 degree tripolar grid (ORCA025) to the eddy-resolving 1/12 degree ORCA12 configuration, while retaining 1/4 degree data assimilation.

We will present a description of the ocean-only ORCA12 system, FOAM-ORCA12, alongside some initial results. Qualitatively, FOAM-ORCA12 appears to represent the details of mesoscale features in SST and surface currents better than FOAM-ORCA025. Overall, traditional statistical results suggest that the new FOAM-ORCA12 system performs similarly to, or slightly worse than, the pre-existing FOAM-ORCA025. However, it is known that comparisons of models running at different resolutions suffer from a double-penalty effect, whereby higher-resolution models are penalised more than lower-resolution models for features that are offset in time and space. Neighbourhood verification methods seek to make a fairer comparison using a common spatial scale for both models, and it can be seen that, as neighbourhood sizes increase, ORCA12 consistently has lower continuous ranked probability scores (CRPS) than ORCA025. CRPS measures the accuracy of the pseudo-ensemble created by the neighbourhood method and generalises the mean absolute error measure for deterministic forecasts.

The focus over the next year will be on diagnosing the performance of both the model and the assimilation. A planned development that is expected to enhance the system is the update of the background-error covariances used for data assimilation.
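For readers unfamiliar with neighbourhood verification, the sketch below illustrates the idea on synthetic fields: each model neighbourhood is treated as a pseudo-ensemble and scored with the standard ensemble CRPS estimator. This is an illustrative sketch, not the Met Office verification code; all names are ours.

```python
import numpy as np

def ensemble_crps(members, obs):
    """CRPS of a discrete ensemble against a scalar observation:
    E|X - y| - 0.5 E|X - X'| (the standard ensemble estimator)."""
    m = np.asarray(members, dtype=float)
    return np.mean(np.abs(m - obs)) - 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))

def neighbourhood_crps(field, obs, half_width):
    """Score a deterministic field by treating each (2w+1)x(2w+1)
    neighbourhood as a pseudo-ensemble at the central grid point."""
    ny, nx = field.shape
    w = half_width
    scores = [ensemble_crps(field[j - w:j + w + 1, i - w:i + w + 1].ravel(), obs[j, i])
              for j in range(w, ny - w) for i in range(w, nx - w)]
    return float(np.mean(scores))

rng = np.random.default_rng(2)
truth = rng.standard_normal((40, 40))
model = truth + 0.3 * rng.standard_normal((40, 40))
for w in (1, 2, 4):
    print(w, neighbourhood_crps(model, truth, w))   # score vs neighbourhood size
```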


Mathematics ◽  
2020 ◽  
Vol 8 (8) ◽  
pp. 1297 ◽  
Author(s):  
Judy P. Yang ◽  
Hon Fung Samuel Lam

The weighted reproducing kernel collocation method exhibits high accuracy and efficiency in solving inverse problems as compared with traditional mesh-based methods. Nevertheless, computing higher-order reproducing kernel (RK) shape functions is generally an expensive process. The computational cost may increase dramatically, especially for strong-form equations, where high-order derivative operators are required to reach accuracy comparable to that of weak-form approaches. Under the framework of gradient approximation, the derivatives of reproducing kernel shape functions can be constructed synchronically, thereby alleviating the complexity of the computation. In view of this, the present work first introduces the weighted high-order gradient reproducing kernel collocation method in the inverse analysis. The convergence of the method is examined through the weights imposed on the boundary conditions. Then, several configurations of multiply connected domains are provided to numerically investigate the stability and efficiency of the method. To reach the desired accuracy in detecting the outer boundary in two special cases, special treatments, including the allocation of points and the use of ghost points, are adopted as the solution strategy. Four benchmark examples demonstrate the efficacy of the method in detecting the unknown boundary.
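To fix ideas, here is a minimal 1D sketch of plain reproducing kernel shape functions, the building block whose higher-order versions are expensive to compute; the paper's weighted collocation and synchronized gradient construction are not reproduced here, and all names are ours.

```python
import numpy as np

def cubic_spline_window(z):
    """Cubic B-spline kernel, supported on |z| <= 1."""
    z = np.abs(z)
    return np.where(z <= 0.5, 2/3 - 4*z**2 + 4*z**3,
           np.where(z <= 1.0, 4/3 - 4*z + 4*z**2 - (4/3)*z**3, 0.0))

def rk_shape_functions(x, nodes, a, order=2):
    """1D RK shape functions of the given polynomial order at a point x:
    Psi_I(x) = H(0)^T M(x)^{-1} H(x - x_I) phi_a(x - x_I), with monomial
    basis H and moment matrix M(x) = sum_I H H^T phi_a."""
    d = x - nodes
    phi = cubic_spline_window(d / a)
    H = np.vander(d, order + 1, increasing=True)   # rows are H(x - x_I)^T
    M = (H * phi[:, None]).T @ H                   # (order+1) x (order+1) moment matrix
    e0 = np.zeros(order + 1); e0[0] = 1.0
    return phi * (H @ np.linalg.solve(M, e0))

nodes = np.linspace(0.0, 1.0, 11)
psi = rk_shape_functions(0.37, nodes, a=0.25, order=2)
print(psi.sum(), psi @ nodes)   # partition of unity (1.0) and exact reproduction of x (0.37)
```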


2020 ◽  
Vol 45 (3) ◽  
pp. 966-992
Author(s):  
Michael Jong Kim

Sequential Bayesian optimization constitutes an important and broad class of problems where model parameters are not known a priori but need to be learned over time using Bayesian updating. It is known that the solution to these problems can in principle be obtained by solving the Bayesian dynamic programming (BDP) equation. Although the BDP equation can be solved in certain special cases (for example, when posteriors have low-dimensional representations), solving this equation in general is computationally intractable and remains an open problem. A second unresolved issue with the BDP equation lies in its (rather generic) interpretation. Beyond the standard narrative of balancing immediate versus future costs (an interpretation common to all dynamic programs, with or without learning), the BDP equation does not provide much insight into the underlying mechanism by which sequential Bayesian optimization trades off between learning (exploration) and optimization (exploitation), the distinguishing feature of this problem class. The goal of this paper is to develop good approximations (with error bounds) to the BDP equation that help address the issues of computation and interpretation. To this end, we show how the BDP equation can be represented as a tractable single-stage optimization problem that trades off between a myopic term and a "variance regularization" term that measures the total solution variability over the remaining planning horizon. Intuitively, the myopic term can be regarded as a pure exploitation objective that ignores the impact of future learning, whereas the variance regularization term captures a pure exploration objective that only puts value on solutions that resolve statistical uncertainty. We develop quantitative error bounds for this representation and prove that the error tends to zero like o(n^{-1}) almost surely in the number of stages n, which, as a corollary, establishes strong consistency of the approximate solution.
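The exploration/exploitation split described above can be illustrated, very loosely, in a toy Gaussian setting: a myopic score (the posterior mean cost) is regularized by a term that rewards resolving uncertainty. This is a generic toy, not the paper's BDP representation; the weight lam and the sqrt-variance bonus are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two actions with unknown Gaussian mean costs, tracked by conjugate
# normal posteriors (mean mu, variance var per action).
mu = np.array([0.0, 0.0])
var = np.array([4.0, 4.0])
true_cost = np.array([1.0, 0.5])
noise_var = 1.0
lam = 0.5   # illustrative weight on the uncertainty (exploration) term

for t in range(50):
    # myopic term (posterior mean) minus a bonus for unresolved uncertainty,
    # a crude stand-in for a variance-regularization term
    score = mu - lam * np.sqrt(var)
    a = int(np.argmin(score))
    y = true_cost[a] + rng.standard_normal() * np.sqrt(noise_var)
    # conjugate normal update of the chosen action's posterior
    k = var[a] / (var[a] + noise_var)
    mu[a] += k * (y - mu[a])
    var[a] *= 1 - k

print(mu, var)   # posteriors typically concentrate on the cheaper action
```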


2019 ◽  
Vol 181 (2) ◽  
pp. 473-507 ◽  
Author(s):  
E. Ruben van Beesten ◽  
Ward Romeijnders

Abstract In traditional two-stage mixed-integer recourse models, the expected value of the total costs is minimized. In order to address risk-averse attitudes of decision makers, we consider a weighted mean-risk objective instead. Conditional value-at-risk is used as our risk measure. Integrality conditions on the decision variables make the model non-convex and hence hard to solve. To tackle this problem, we derive convex approximation models and corresponding error bounds that depend on the total variations of the density functions of the random right-hand-side variables in the model. We show that the error bounds converge to zero if these total variations go to zero. In addition, for the special cases of totally unimodular and simple integer recourse models, we derive sharper error bounds.
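Since the mean-risk objective above combines expectation with conditional value-at-risk, a small sketch of how CVaR is evaluated may help; this uses the standard Rockafellar-Uryasev representation on Monte-Carlo samples and is not code from the paper.

```python
import numpy as np

def cvar(samples, alpha):
    """Sample CVaR via the Rockafellar-Uryasev representation:
    CVaR_alpha(X) = min_t { t + E[(X - t)^+] / (1 - alpha) },
    with the minimum attained at t = VaR_alpha(X)."""
    x = np.asarray(samples, dtype=float)
    t = np.quantile(x, alpha)                       # value-at-risk
    return t + np.mean(np.maximum(x - t, 0.0)) / (1.0 - alpha)

costs = np.random.default_rng(0).standard_normal(100_000)
lam = 0.3                                           # illustrative risk weight
mean_risk = (1 - lam) * costs.mean() + lam * cvar(costs, 0.95)
print(cvar(costs, 0.95), mean_risk)                 # CVaR_0.95 of N(0,1) is about 2.06
```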

