New Robust Regularized Shrinkage Regression for High-Dimensional Image Recovery and Alignment via Affine Transformation and Tikhonov Regularization

Author(s):  
Habte Tadesse Likassa ◽  
Wen Xian ◽  
Xuan Tang

In this work, a new robust regularized shrinkage regression method is proposed to recover and align high-dimensional images via affine transformations and Tikhonov regularization. To be more resilient to occlusions, illumination changes, outliers, and heavy sparse noise, the proposed approach incorporates affine transformations and Tikhonov regularization into the high-dimensional image model. Highly corrupted, distorted, or misaligned images can be adjusted through the affine transformations, while the Tikhonov regularization term ensures a trustworthy image decomposition. These ideas are essential for pruning out the potential impact of such nuisance effects in high-dimensional images. The search for the optimal variables, namely the set of affine transformations and the Tikhonov regularization term, is first cast as a convex optimization program. A fast alternating direction method of multipliers (ADMM) algorithm is then applied, and new update equations are derived so that the parameters and the affine transformations are refined iteratively in a round-robin manner. The convergence of these update equations is also scrutinized, and the proposed method requires less computation time than state-of-the-art works. Simulations show that the new robust method surpasses the baselines for image alignment and recovery on several public datasets.
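To make the round-robin update flavor concrete, here is a minimal Python sketch of block updates for a simplified decomposition we assume for illustration: the observation D is split into a Tikhonov-regularized component B and a sparse-noise component E. The objective, the parameter names (lam_tik, lam_sparse), and the omission of the affine transformations are all simplifications, not the authors' exact formulation.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_tikhonov_decompose(D, lam_tik=0.1, lam_sparse=0.05, n_iter=100, tol=1e-6):
    """Round-robin block updates for the simplified model (assumed here):
        min_{B,E} 0.5*||D - B - E||_F^2 + 0.5*lam_tik*||B||_F^2 + lam_sparse*||E||_1
    B: Tikhonov-regularized clean component, E: sparse error."""
    B = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(n_iter):
        B_new = (D - E) / (1.0 + lam_tik)              # closed-form Tikhonov update
        E_new = soft_threshold(D - B_new, lam_sparse)  # sparse-noise update
        converged = np.linalg.norm(B_new - B) <= tol * max(1.0, np.linalg.norm(B))
        B, E = B_new, E_new
        if converged:
            break
    return B, E

# Usage: recover a clean component from an observation with heavy sparse noise.
rng = np.random.default_rng(0)
clean = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.1
D = clean + 5.0 * mask * rng.standard_normal((64, 64))
B, E = robust_tikhonov_decompose(D)
```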

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Chentao Zhang ◽  
Habte Tadesse Likassa ◽  
Peidong Liang ◽  
Jielong Guo

In this paper, we develop a new robust part-based model for facial landmark localization and detection via affine transformation. In contrast to existing works, the new algorithm combines affine transformations with robust regression to tackle the potential effects of outliers, heavy sparse noise, occlusions, and illumination changes. Distorted or misaligned objects can thus be rectified by the affine transformations, and the patterns of occlusions and outliers can be explicitly separated from the true underlying objects in big data. The search for the optimal parameters and affine transformations is cast as a constrained optimization program. To reduce the computation, a new set of equations is derived to update the parameters and the affine transformations iteratively in a round-robin manner. Our update scheme compares favorably with state-of-the-art works because we employ a fast alternating direction method of multipliers (ADMM) algorithm that solves for the parameters separately. Simulations show that the proposed method outperforms the state-of-the-art works on facial landmark localization and detection on the COFW, HELEN, and LFPW datasets.
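The following minimal sketch illustrates only the affine-rectification ingredient: parameterizing a 2x3 affine map, fitting it by plain least squares (the paper instead couples the fit with robust regression), and applying it to misaligned landmarks. The 68-landmark shape and all variable names are assumptions for illustration.

```python
import numpy as np

def apply_affine(points, T):
    """Apply a 2x3 affine transformation T to an (n, 2) array of landmarks."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
    return pts_h @ T.T

def fit_affine_lstsq(src, dst):
    """Least-squares estimate of the 2x3 affine map sending src -> dst.
    Plain least squares is used here only to illustrate the parameterization."""
    pts_h = np.hstack([src, np.ones((src.shape[0], 1))])  # (n, 3)
    T, *_ = np.linalg.lstsq(pts_h, dst, rcond=None)       # (3, 2)
    return T.T                                            # (2, 3)

# Usage: rectify rotated/shifted landmarks back toward a reference shape.
rng = np.random.default_rng(1)
reference = rng.random((68, 2))                           # e.g., 68 facial landmarks
angle = np.deg2rad(10.0)
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
misaligned = reference @ R.T + np.array([0.05, -0.02])
T = fit_affine_lstsq(misaligned, reference)
rectified = apply_affine(misaligned, T)
print(np.abs(rectified - reference).max())                # ~0: misalignment removed
```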


Author(s):  
Peidong Liang ◽  
Habte Tadesse Likassa ◽  
Chentao Zhang ◽  
Jielong Guo

In this paper, we propose a novel robust algorithm for image recovery via affine transformations, the weighted nuclear norm L∗,w, and the L2,1 norm. The new method uses a spatial weight matrix to account for correlated samples in the data, the L2,1 norm to tackle extreme values in high-dimensional images, and the newly added L∗,w norm to alleviate the potential effects of outliers and heavy sparse noise, making the approach more resilient to outliers and large variations in high-dimensional images in signal processing. The determination of the parameters and the affine transformations is cast as a convex optimization problem. To mitigate the computational complexity, an iteratively reweighted alternating direction method of multipliers (ADMM) is used to derive a new set of recursive equations that update the optimization variables and the affine transformations iteratively in a round-robin manner. The new algorithm is superior to state-of-the-art works in terms of accuracy on various public databases.
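As a hedged illustration of the two norms' roles, the sketch below implements their proximal operators, which are the natural per-block updates in an ADMM scheme of this kind: weighted singular value thresholding for the L∗,w norm and column-wise shrinkage for the L2,1 norm. The uniform weights and penalty values are assumed; this is not the paper's full algorithm.

```python
import numpy as np

def prox_weighted_nuclear(X, weights):
    """Weighted singular value thresholding: prox of the weighted nuclear norm
    ||X||_{*,w} = sum_i w_i * sigma_i(X) (exact when the weights are
    non-decreasing in i; with uniform weights it is plain SVT)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt

def prox_l21(X, tau):
    """Column-wise shrinkage: prox of tau * ||X||_{2,1} = tau * sum_j ||X[:, j]||_2."""
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

# Usage inside one ADMM-style round-robin pass (illustrative values):
rng = np.random.default_rng(2)
D = rng.standard_normal((50, 40))
w = 0.5 * np.ones(min(D.shape))       # assumed uniform weights
L = prox_weighted_nuclear(D, w)       # low-rank / small-variation part
E = prox_l21(D - L, 0.3)              # column-sparse outlier part
```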


Author(s):  
Bernhard Mühlherr ◽  
Holger P. Petersson ◽  
Richard M. Weiss

This chapter presents some results about groups generated by reflections and the standard metric on a Bruhat-Tits building. It begins with definitions of an affine subspace, an affine hyperplane, an affine span, an affine map, and an affine transformation. It then introduces the notation in which the convex closure of a subset a of X is the intersection of all convex sets containing a, AGL(X) denotes the group of all affine transformations of X, and Trans(X) denotes the set of all translations of X. It also describes Euclidean spaces, assuming that the real vector space X has finite dimension n and that d is a Euclidean metric on X. Finally, it discusses Euclidean representations and the standard metric.
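For concreteness, the central notions can be formalized as follows; this is a standard rendering that we assume matches the chapter's conventions rather than a quotation from it.

```latex
% X is a finite-dimensional real vector space (notation assumed).
\[
  f\colon X \to X \ \text{is an \emph{affine transformation} if}\quad
  f(x) = \varphi(x) + b \ \text{for some } \varphi \in \mathrm{GL}(X),\ b \in X.
\]
\[
  \mathrm{Trans}(X) = \{\, x \mapsto x + b : b \in X \,\}, \qquad
  \mathrm{AGL}(X) \cong \mathrm{Trans}(X) \rtimes \mathrm{GL}(X).
\]
```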


2021 ◽  
Vol 15 (8) ◽  
pp. 898-911
Author(s):  
Yongqing Zhang ◽  
Jianrong Yan ◽  
Siyu Chen ◽  
Meiqin Gong ◽  
Dongrui Gao ◽  
...  

Rapid advances in biological research over recent years have significantly enriched biological and medical data resources. Deep learning-based techniques have been successfully used to process data in this field, exhibiting state-of-the-art performance even on high-dimensional, unstructured, and black-box biological data. The aim of the current study is to provide an overview of deep learning-based techniques used in biology and medicine and their state-of-the-art applications. In particular, we introduce the fundamentals of deep learning and then review the success of applying such methods to bioinformatics, biomedical imaging, biomedicine, and drug discovery. We also discuss the challenges and limitations of this field and outline possible directions for further research.


2021 ◽  
Vol 7 (3) ◽  
pp. 49
Author(s):  
Daniel Carlos Guimarães Pedronette ◽  
Lucas Pascotti Valem ◽  
Longin Jan Latecki

Visual features and representation learning strategies experienced huge advances in the previous decade, mainly supported by deep learning approaches. However, retrieval tasks are still performed mainly with traditional pairwise dissimilarity measures, while the learned representations lie on high-dimensional manifolds. With the aim of going beyond pairwise analysis, post-processing methods have been proposed that replace pairwise measures with globally defined measures capable of analyzing collections in terms of the underlying data manifold. The most representative approaches are diffusion and rank-based methods. While diffusion approaches can be computationally expensive, rank-based methods lack theoretical background. In this paper, we propose an efficient Rank-based Diffusion Process that combines both approaches and avoids the drawbacks of each. The resulting method efficiently approximates a diffusion process by exploiting rank-based information while assuring its convergence. The algorithm exhibits very low asymptotic complexity and can be computed regionally, making it suitable for out-of-dataset queries. An experimental evaluation conducted for image retrieval and person re-ID tasks on diverse datasets demonstrates the effectiveness of the proposed approach, with results comparable to the state of the art.
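A minimal sketch of the general idea, under assumptions of our own (a linear rank-decay affinity and a damped iteration; the paper's actual construction may differ), shows how rank information can drive a provably convergent diffusion:

```python
import numpy as np

def rank_affinity(ranked_lists, k):
    """Sparse affinity from top-k ranked lists: a simple rank-based weight
    that decays linearly with rank position (one of many possible choices)."""
    n = ranked_lists.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for pos, j in enumerate(ranked_lists[i, :k]):
            W[i, j] = 1.0 - pos / k
    return np.maximum(W, W.T)  # symmetrize

def rank_based_diffusion(ranked_lists, k=10, alpha=0.85, n_iter=20):
    """Illustrative diffusion on a rank-derived graph: iterate
    F <- alpha * S @ F + (1 - alpha) * I with row-normalized affinity S.
    alpha < 1 keeps the spectral radius below 1, guaranteeing convergence."""
    W = rank_affinity(ranked_lists, k)
    S = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    n = W.shape[0]
    F = np.eye(n)
    for _ in range(n_iter):
        F = alpha * S @ F + (1.0 - alpha) * np.eye(n)
    return F  # F[i, j]: diffused similarity; re-rank each query i by row F[i]

# Usage: ranked_lists[i] holds item indices sorted by distance to item i.
rng = np.random.default_rng(5)
feats = rng.standard_normal((30, 16))
dists = np.linalg.norm(feats[:, None] - feats[None, :], axis=2)
ranked_lists = np.argsort(dists, axis=1)
F = rank_based_diffusion(ranked_lists, k=5)
```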


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional; they are known as large-scale global optimization (LSGO) problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework for decomposing large-scale problems into smaller and easier subproblems by grouping objective variables. The efficiency of CC strongly depends on the size of the groups and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in the subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC'13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with some state-of-the-art LSGO metaheuristics; the experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
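The skeleton below sketches the dynamic-decomposition idea under stated assumptions: a simple (1+1)-style random search stands in for SHADE, the objective is a toy sphere function rather than a CEC'13 benchmark, and the candidate group sizes are arbitrary.

```python
import numpy as np

def sphere(x):
    """Separable test objective (stand-in for an LSGO benchmark function)."""
    return float(np.sum(x * x))

def cc_optimize(f, dim=1000, budget=20000, group_sizes=(50, 100, 200), seed=0):
    """Cooperative-coevolution skeleton: partition the variables into groups
    and optimize each group in turn against the current context vector.
    The group size is re-drawn each cycle to mimic iCC's dynamic sizing;
    the inner random search is a simple stand-in for SHADE."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, dim)                      # context vector (current best)
    fx = f(x)
    evals = 1
    while evals < budget:
        size = int(rng.choice(group_sizes))          # dynamic subcomponent size
        perm = rng.permutation(dim)
        for start in range(0, dim, size):
            idx = perm[start:start + size]
            for _ in range(10):                      # a few trials per subcomponent
                trial = x.copy()
                trial[idx] += rng.normal(0.0, 0.5, idx.size)
                ft = f(trial)
                evals += 1
                if ft < fx:
                    x, fx = trial, ft
                if evals >= budget:
                    return x, fx
    return x, fx

best_x, best_f = cc_optimize(sphere)
print(best_f)
```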


1982 ◽  
Vol 13 (2) ◽  
pp. 133-134 ◽  
Author(s):  
Hans U. Gerber

Let u(x) be a utility function, i.e., a function with u′(x) > 0, u″(x) < 0 for all x. If S is a risk to be insured (a random variable), the premium P = P(x) is obtained as the solution of the equation
\[
u(x) = E\big[u(x + P - S)\big], \tag{1}
\]
which is the condition that the premium is fair in terms of utility. It is clear that an affine transformation of u generates the same principle of premium calculation. To avoid this ambiguity, one can standardize the utility function in the sense that
\[
u(y) = 0 \tag{2}
\]
and
\[
u'(y) = 1 \tag{3}
\]
for an arbitrarily chosen point y. Alternatively, one can consider the risk aversion
\[
r(x) = -\frac{u''(x)}{u'(x)},
\]
which is the same for all affine transformations of a utility function. Given the risk aversion r(x), the standardized utility function can be retrieved from the formula
\[
u(x) = \int_y^x \exp\!\Big(-\int_y^s r(t)\, dt\Big)\, ds.
\]
It is easily verified that this expression satisfies (2) and (3). The following lemma states that the greater the risk aversion, the greater the premium, a result that does not surprise.
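A standard worked example (assumed here; it is not spelled out in the excerpt) is the exponential utility, for which the risk aversion is constant and the premium has a closed form consistent with the lemma:

```latex
% Exponential utility: constant risk aversion r(x) = a.
% Integrating u''/u' = -a with the standardization (2)-(3) at y = 0 gives
\[
  u(x) = \frac{1 - e^{-ax}}{a}, \qquad r(x) = -\frac{u''(x)}{u'(x)} = a .
\]
% Substituting into u(x) = E[u(x + P - S)] and solving for P yields the
% wealth-independent exponential premium
\[
  P = \frac{1}{a}\,\ln E\!\left[e^{aS}\right],
\]
% which is non-decreasing in a: greater risk aversion yields a greater
% premium, as the lemma asserts.
```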


Author(s):  
Jun Sun ◽  
Lingchen Kong ◽  
Mei Li

With the development of modern science and technology, it is easy to obtain a large number of high-dimensional datasets that are related but different. Classical single-model analysis is unlikely to capture the potential links between the different datasets. Recently, a collaborative regression model based on the least squares (LS) method has been proposed for this problem. In this paper, we propose a robust collaborative regression based on the least absolute deviation (LAD). We give statistical interpretations of the LS-collaborative regression and the LAD-collaborative regression. We then design an efficient symmetric Gauss–Seidel-based alternating direction method of multipliers algorithm to solve the two models, which enjoys global convergence and a Q-linear rate of convergence. Finally, we report numerical experiments that illustrate the efficiency of the proposed methods.
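To illustrate the LAD building block only (not the full collaborative model or its symmetric Gauss–Seidel variant), here is a minimal ADMM sketch for least-absolute-deviation regression; all parameter values are assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lad_regression_admm(X, y, rho=1.0, n_iter=200):
    """ADMM for least-absolute-deviation regression, min_w ||X w - y||_1,
    via the split z = X w - y with scaled dual variable u."""
    n, p = X.shape
    XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)   # cached normal-equation solve
    w = np.zeros(p)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        w = XtX_inv_Xt @ (y + z - u)             # w-update (least squares)
        r = X @ w - y
        z = soft_threshold(r + u, 1.0 / rho)     # z-update (prox of ||.||_1 / rho)
        u = u + r - z                            # dual ascent
    return w

# Usage: robust fit in the presence of gross outliers in y.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.01 * rng.standard_normal(200)
y[:10] += 50.0                                   # gross outliers
print(lad_regression_admm(X, y))                 # close to w_true
```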


Author(s):  
Matthew Nayor ◽  
Li Shen ◽  
Gary M. Hunninghake ◽  
Peter Kochunov ◽  
R. Graham Barr ◽  
...  

Imaging genomics is a rapidly evolving field that combines state-of-the-art bioimaging with genomic information to resolve phenotypic heterogeneity associated with genomic variation, improve risk prediction, discover prevention approaches, and enable precision diagnosis and treatment. Contemporary bioimaging methods provide exceptional resolution generating discrete and quantitative high-dimensional phenotypes for genomics investigation. Despite substantial progress in combining high-dimensional bioimaging and genomic data, methods for imaging genomics are evolving. Recognizing the potential impact of imaging genomics on the study of heart and lung disease, the National Heart, Lung, and Blood Institute convened a workshop to review cutting-edge approaches and methodologies in imaging genomics studies, and to establish research priorities for future investigation. This report summarizes the presentations and discussions at the workshop. In particular, we highlight the need for increased availability of imaging genomics data in diverse populations, dedicated focus on less common conditions, and centralization of efforts around specific disease areas.


2020 ◽  
Vol 17 (3) ◽  
pp. 849-865
Author(s):  
Zhongqin Bi ◽  
Shuming Dou ◽  
Zhe Liu ◽  
Yongbin Li

Neural network methods have been trained to satisfactorily learn user/product representations from textual reviews. A representation can be considered as a multiaspect attention weight vector. However, in several existing methods, it is assumed that the user representation remains unchanged even when the user interacts with products having diverse characteristics, which leads to inaccurate recommendations. To overcome this limitation, this paper proposes a novel model to capture the varying attention of a user for different products by using a multilayer attention framework. First, two individual hierarchical attention networks are used to encode the users and products to learn the user preferences and product characteristics from review texts. Then, we design an attention network to reflect the adaptive change in the user preferences for each aspect of the targeted product in terms of the rating and review. The results of experiments performed on three public datasets demonstrate that the proposed model notably outperforms the other state-of-the-art baselines, thereby validating the effectiveness of the proposed approach.
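A minimal sketch of product-conditioned attention, with an assumed bilinear scoring function and toy dimensions (the paper's hierarchical architecture is more elaborate), shows how the same user can receive different representations for different products:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_user_representation(user_aspects, product_vec, W):
    """Score each user aspect vector against the product representation
    and pool with the resulting weights, so the 'same' user is
    represented differently per product."""
    scores = user_aspects @ W @ product_vec       # (n_aspects,)
    alpha = softmax(scores)                       # attention weights
    return alpha @ user_aspects, alpha            # (d,), (n_aspects,)

# Usage with assumed dimensions: 4 aspects, 8-dim embeddings.
rng = np.random.default_rng(4)
user_aspects = rng.standard_normal((4, 8))        # learned per-aspect user vectors
W = rng.standard_normal((8, 8))                   # learned bilinear scoring matrix
product_a = rng.standard_normal(8)
product_b = rng.standard_normal(8)
u1, a1 = adaptive_user_representation(user_aspects, product_a, W)
u2, a2 = adaptive_user_representation(user_aspects, product_b, W)
# a1 != a2: the attention weights, hence the user vector, adapt to the product.
```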

