A Block-Based Regularized Approach for Image Interpolation

2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Li Chen ◽  
Xiaotong Huang ◽  
Jing Tian

This paper presents a new efficient algorithm for image interpolation based on regularization theory. To render a high-resolution (HR) image from a low-resolution (LR) image, classical interpolation techniques estimate the missing pixels from the surrounding pixels on a pixel-by-pixel basis. In contrast, the proposed approach formulates the interpolation problem as the optimization of a cost function. The proposed cost function consists of a data fidelity term and a regularization functional. The closed-form solution to the optimization problem is derived within the framework of constrained least squares minimization, incorporating the Kronecker product and the singular value decomposition (SVD) to reduce the computational cost of the algorithm. The effect of regularization on the interpolation results is analyzed, and an adaptive strategy is proposed for selecting the regularization parameter. Experimental results show that the proposed approach is able to reconstruct high-fidelity HR images while suppressing artifacts such as edge distortion and blurring, producing interpolation results superior to those of conventional image interpolation techniques.
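
For intuition, here is a minimal 1-D sketch of the constrained least squares machinery (operator choices are mine: a row-selection downsampler S and a second-difference regularizer L; the paper's block-based 2-D formulation and its Kronecker/SVD acceleration are not reproduced). The cost ||Sx - y||^2 + lam*||Lx||^2 has the closed-form minimizer x = (S^T S + lam*L^T L)^(-1) S^T y:

```python
import numpy as np

def downsample_matrix(n_hr, factor):
    """Row-selection matrix mapping an HR signal of length n_hr to LR."""
    n_lr = n_hr // factor
    S = np.zeros((n_lr, n_hr))
    for i in range(n_lr):
        S[i, i * factor] = 1.0
    return S

def laplacian_matrix(n):
    """1-D second-difference operator used as a smoothness regularizer."""
    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

def regularized_interpolate(y_lr, factor, lam):
    """Closed-form solution of min_x ||S x - y||^2 + lam * ||L x||^2."""
    n_hr = len(y_lr) * factor
    S = downsample_matrix(n_hr, factor)
    L = laplacian_matrix(n_hr)
    return np.linalg.solve(S.T @ S + lam * (L.T @ L), S.T @ y_lr)

y = np.sin(np.linspace(0.0, np.pi, 16))    # toy LR signal
x = regularized_interpolate(y, factor=2, lam=0.1)
print(x.shape)                             # (32,): interpolated HR estimate
```

In 2-D, separable operators factor as Kronecker products (e.g. S = S_r ⊗ S_c), which, combined with the SVD, is what allows the large linear system to be solved without ever forming it explicitly.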

Recent applications of conventional iterative coordinate descent (ICD) algorithms to multislice helical CT reconstruction have shown that conventional ICD can greatly improve image quality by increasing resolution as well as reducing noise and some artifacts. However, high computational cost and long reconstruction times remain a barrier to the use of such algorithms in practical applications. Among the various iterative methods that have been studied, ICD has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a fast model-based iterative reconstruction algorithm using spatially nonhomogeneous ICD (NH-ICD) optimization. The NH-ICD algorithm speeds up convergence by focusing computation where it is most needed, using a mechanism that adaptively selects voxels for update. First, a voxel selection criterion (VSC) determines the voxels in greatest need of update. Then a voxel selection algorithm (VSA) selects the order of successive voxel updates based upon the need for repeated updates of some locations, while retaining the characteristics required for global convergence. To speed up each voxel update, we also propose a fast 3-D optimization algorithm that uses a quadratic substitute function to upper-bound the local 3-D objective function, so that a closed-form solution can be obtained rather than resorting to a computationally expensive line search. Experimental results show that the proposed method accelerates the reconstructions by roughly a factor of three on average for typical 3-D multislice geometries.
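
The update-ordering idea can be sketched on a generic linear least-squares objective (a toy stand-in: the score and ordering below play the roles of VSC and VSA, but the actual CT forward model, regularizer, and 3-D surrogate are far richer):

```python
import numpy as np

def nh_coordinate_descent(A, b, n_sweeps=10, greedy_frac=0.5):
    """Minimize 0.5*||Ax - b||^2 with nonhomogeneous coordinate descent."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                       # running residual
    col_sq = np.sum(A * A, axis=0)      # per-coordinate curvature
    score = np.full(n, np.inf)          # VSC-like "need for update" score
    for _ in range(n_sweeps):
        # VSA-like ordering: a greedy pass over the highest-scoring
        # coordinates, plus a full sweep to retain global convergence.
        k = int(greedy_frac * n)
        order = list(np.argsort(-score)[:k]) + list(range(n))
        for j in order:
            # Exact 1-D minimizer of the quadratic objective: closed
            # form, no line search needed.
            step = (A[:, j] @ r) / col_sq[j]
            x[j] += step
            r -= step * A[:, j]
            score[j] = abs(step)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 30))
b = rng.standard_normal(60)
x = nh_coordinate_descent(A, b)
print(np.linalg.norm(A @ x - b))        # residual after the sweeps
```

Because the objective is quadratic in each coordinate, the 1-D minimizer is available in closed form, which is the same reason the paper's quadratic surrogate avoids a line search.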


Author(s):  
L. Beji ◽  
M. Pascal ◽  
P. Joli

Abstract In this paper, the architecture of a six-degrees-of-freedom (dof) parallel robot with three limbs is described. The robot is called the Space Manipulator (SM). In a first step, the inverse kinematic problem for the robot is solved in closed form; furthermore, only a 3 × 3 passive Jacobian matrix needs to be inverted to solve the direct kinematic problem. In a second step, the dynamic equations are derived using the Lagrangian formalism, where the coordinates are the passive and active joint coordinates. Based on geometrical properties of the robot, the equations of motion are derived in terms of only 9 coordinates related by 3 kinematic constraints. The computational cost of the obtained dynamic model is reduced by using a minimum set of base inertial parameters.
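
The direct-kinematics step can be illustrated generically: a Newton iteration in which each step inverts only a small 3 × 3 Jacobian (the constraint function below is a made-up stand-in, not the SM robot's kinematics):

```python
import numpy as np

def newton_direct_kinematics(f, jac, q0, tol=1e-10, max_iter=50):
    """Solve f(q) = 0 for a 3-vector q of passive coordinates."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        r = f(q)
        if np.linalg.norm(r) < tol:
            break
        q -= np.linalg.solve(jac(q), r)   # only a 3x3 solve per step
    return q

# Stand-in constraint: coupled trigonometric relations among q0, q1, q2.
f = lambda q: np.array([np.cos(q[0]) + q[1] - 1.2,
                        q[0] + np.sin(q[1]) - 0.7,
                        q[2] - q[0] * q[1]])
jac = lambda q: np.array([[-np.sin(q[0]), 1.0,           0.0],
                          [1.0,           np.cos(q[1]),  0.0],
                          [-q[1],         -q[0],         1.0]])
print(newton_direct_kinematics(f, jac, q0=[0.5, 0.5, 0.5]))
```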


Author(s):  
Siqi Wang ◽  
En Zhu ◽  
Xiping Hu ◽  
Xinwang Liu ◽  
Qiang Liu ◽  
...  

Efficient detection of outliers from massive data with a high outlier ratio is challenging but has not yet been explicitly discussed. In such a case, existing methods either suffer from poor robustness or require expensive computations. This paper proposes a Low-rank based Efficient Outlier Detection (LEOD) framework to achieve favorable robustness against high outlier ratios with much cheaper computations. Specifically, it is worth highlighting the following aspects of LEOD: (1) The framework exploits the low-rank structure embedded in the similarity matrix and treats inliers/outliers equally based on this low-rank structure, which makes it possible to achieve satisfactory robustness at low computational cost; (2) A novel re-weighting algorithm is derived as a new general solution to the constrained eigenvalue problem, which is a major bottleneck of the optimization process. Instead of the high space and time complexity (O((2n)^2) / O((2n)^3)) required by the classic solution, the algorithm enjoys O(n) space complexity and faster optimization speed in the experiments; (3) A new alternative formulation is proposed to further accelerate the solution process, for which a cheap closed-form solution can be obtained. Experiments show that LEOD achieves strong robustness under outlier ratios from 20% to 60%, while being up to 100 times more memory efficient and 1000 times faster than its previous counterpart that attains comparable performance. The code for LEOD is publicly available at https://github.com/demonzyj56/LEOD.
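
The flavor of the low-rank shortcut can be sketched as follows (names and the exact factorization are my assumptions): when the similarity matrix is W = U Uᵀ with U of width r ≪ n, its eigenvectors follow from an r × r eigenproblem, so nothing of size n × n, let alone 2n × 2n, is ever formed:

```python
import numpy as np

def lowrank_spectral_scores(U):
    """Leading eigenpairs of W = U @ U.T via the small Gram matrix."""
    G = U.T @ U                     # r x r, cheap to form
    w, V = np.linalg.eigh(G)        # eigenpairs of G
    # If G v = w v, then W (U v) = w (U v); normalise the mapped vectors.
    E = U @ V
    E /= np.linalg.norm(E, axis=0, keepdims=True)
    return w, E                     # O(n r^2) time, O(n r) memory overall

rng = np.random.default_rng(1)
U = rng.standard_normal((1000, 5))
w, E = lowrank_spectral_scores(U)
print(w.shape, E.shape)             # (5,) (1000, 5)
```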


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5233 ◽  
Author(s):  
Jun Lu ◽  
Xiaodong Xu

Power amplifier (PA) nonlinearity is typically unique to the radio frequency (RF) front-end of a particular emitter, so it can play a crucial role in specific emitter identification (SEI). In this paper, under a Multi-Input Multi-Output (MIMO) multipath communication scenario, two data-aided approaches are proposed to identify multi-antenna emitters using PA nonlinearity. Built upon a memoryless polynomial model, the first approach formulates a linear least squares (LLS) problem and presents the closed-form solution for the nonlinear coefficients in a MIMO system by means of a singular value decomposition (SVD). An alternative approach estimates the nonlinear coefficients of each individual PA through nonlinear least squares (NLS), solved by a regularized Gauss–Newton iterative scheme. Moreover, practical aspects of the proposed approaches, such as mismatch in the order of the PA model and rank-deficient conditions, are discussed. Finally, the average misclassification rate is derived based on the minimum error probability (MEP) criterion, and the proposed approaches are validated through extensive numerical simulations.
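
The first approach's core, fitting memoryless-polynomial coefficients by LLS with an SVD-based solve, can be sketched for a single PA (the MIMO/multipath mixing and the data-aided setup are omitted; model order and values are illustrative):

```python
import numpy as np

def pa_basis(x, order=5):
    """Odd-order memoryless polynomial basis: x, x|x|^2, x|x|^4, ..."""
    return np.column_stack([x * np.abs(x) ** (k - 1)
                            for k in range(1, order + 1, 2)])

def fit_pa_coeffs(x, y, order=5):
    """Closed-form LLS estimate of the nonlinear coefficients via SVD."""
    Phi = pa_basis(x, order)
    U, s, Vh = np.linalg.svd(Phi, full_matrices=False)
    return Vh.conj().T @ ((U.conj().T @ y) / s)   # pseudo-inverse solve

rng = np.random.default_rng(2)
x = rng.standard_normal(500) + 1j * rng.standard_normal(500)
a_true = np.array([1.0, -0.08 + 0.02j, 0.003])    # toy PA coefficients
noise = 0.01 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
y = pa_basis(x) @ a_true + noise
print(fit_pa_coeffs(x, y))                        # close to a_true
```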


2019 ◽  
Vol 8 (2) ◽  
pp. 31 ◽  
Author(s):  
Angelo Coluccia ◽  
Alessio Fascista

The paper addresses the problem of localization based on hybrid received signal strength (RSS) and time of arrival (TOA) measurements, in the presence of synchronization errors among all the nodes in a wireless network and assuming all model parameters are unknown. In most existing schemes, in fact, knowledge of the model parameters is postulated in order to reduce the high dimensionality of the cost functions involved in the position estimation process. However, such parameters depend on the operational wireless context and change over time due to dynamic obstacles and other modifications of the environment. Therefore, they should be adaptively estimated "on the field", with a procedure as simple as possible so as to suit multiple real-time re-calibrations, even in low-cost applications, without requiring human intervention. Unfortunately, the joint maximum likelihood (ML) position estimator for this problem does not admit a closed-form solution, and numerical optimization is practically unfeasible due to the large number of nuisance parameters. To circumvent these issues, a novel two-step algorithm with reduced complexity is proposed: a first calibration phase exploits nodes in known positions to estimate the unknown RSS and TOA model parameters; then, in a second localization step, a hybrid TOA/RSS range estimator is combined with an iterative least-squares procedure to estimate the unknown target position. The results show that the proposed hybrid TOA/RSS localization approach outperforms state-of-the-art competitors and, remarkably, achieves almost the same accuracy as the joint ML benchmark at a significantly lower computational cost.
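
A skeletal version of the two-step structure might look as follows (assumed log-distance RSS model, noiseless toy data, and no TOA/synchronization terms, so this shows only the shape of the algorithm, not the paper's estimator):

```python
import numpy as np

def calibrate_rss(d, rss):
    """Step 1: fit rss = p0 - 10*alpha*log10(d) by linear least squares."""
    A = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
    p0, alpha = np.linalg.lstsq(A, rss, rcond=None)[0]
    return p0, alpha

def locate(anchors, ranges, x0, n_iter=20):
    """Step 2: Gauss-Newton least-squares refinement from range estimates."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        diff = x - anchors                      # (m, 2) offsets to anchors
        d = np.linalg.norm(diff, axis=1)
        J = diff / d[:, None]                   # Jacobian of d w.r.t. x
        x -= np.linalg.lstsq(J, d - ranges, rcond=None)[0]
    return x

anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
target = np.array([3., 7.])
d_true = np.linalg.norm(anchors - target, axis=1)
rss = 40.0 - 10 * 2.1 * np.log10(d_true)        # toy noiseless measurements
p0, alpha = calibrate_rss(d_true, rss)          # calibration on known nodes
ranges = 10 ** ((p0 - rss) / (10 * alpha))      # invert the RSS model
print(locate(anchors, ranges, x0=[5., 5.]))     # ~= [3, 7]
```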


2019 ◽  
Vol 141 (3) ◽  
Author(s):  
Pranay Biswas ◽  
Suneet Singh ◽  
Hitesh Bindra

The Laplace transform (LT) is a widely used methodology for the analytical solution of dual phase lag (DPL) heat conduction problems with consistent DPL boundary conditions (BCs). However, the inversion of the LT requires a series summation with a large number of terms for a reasonably converged solution, thereby increasing the computational cost. In this work, an alternative approach is proposed for this inversion which is valid only for time-periodic BCs. In this approach, an approximate convolution integral is used to obtain an analytical closed-form solution for sinusoidal BCs (which is, obviously, free of numerical inversion or series summation). The ease of implementation and simplicity of the proposed alternative LT approach are demonstrated through illustrative examples for different kinds of sinusoidal BCs. It is noted that the solution has a very small error only during the very short initial transient and is (almost) exact for longer times. Moreover, it is seen from the illustrative examples that for high-frequency periodic BCs the Fourier and DPL models give quite different results, whereas for low-frequency BCs the results are almost identical. For a nonsinusoidal periodic function as a BC, a Fourier series expansion of the function in time can be obtained, and the present approach can then be applied to each term of the series. An illustrative example with a triangular periodic wave as one of the BCs is solved, and the error with different numbers of terms in the expansion is shown. It is observed that quite accurate solutions can be obtained with relatively few terms.
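
The nonsinusoidal case can be sketched directly: expand the triangular wave in its sine series and superpose one single-frequency solution per term (H below is a placeholder transfer function standing in for the per-frequency DPL closed form):

```python
import numpy as np

def triangular_series_coeffs(n_terms):
    """Sine-series coefficients of the unit triangular wave (period 2*pi)."""
    ks = np.arange(1, 2 * n_terms, 2)                 # odd harmonics only
    bk = (8 / np.pi**2) * (-1.0) ** ((ks - 1) // 2) / ks**2
    return ks, bk

def periodic_response(t, n_terms, H):
    """Superpose per-harmonic steady responses, one closed form per term."""
    ks, bk = triangular_series_coeffs(n_terms)
    out = np.zeros_like(t)
    for k, b in zip(ks, bk):
        h = H(float(k))
        out += b * np.abs(h) * np.sin(k * t + np.angle(h))
    return out

t = np.linspace(0.0, 4 * np.pi, 400)
tri = 2 / np.pi * np.arcsin(np.sin(t))                # exact triangular BC
for n in (1, 3, 10):                                  # BC error vs. terms
    ks, bk = triangular_series_coeffs(n)
    approx = sum(b * np.sin(k * t) for k, b in zip(ks, bk))
    print(n, np.max(np.abs(approx - tri)))            # error shrinks fast
H = lambda w: 1.0 / (1.0 + 0.5j * w)                  # placeholder dynamics
print(periodic_response(t, 10, H)[:3])
```

The quadratic decay of the coefficients (1/k^2) is why the abstract's observation holds: a few terms already approximate the triangular BC well.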


2013 ◽  
Vol 5 (3) ◽  
Author(s):  
Mili Shah

This paper constructs a separable closed-form solution to the robot-world/hand-eye calibration problem AX = YB. Qualifications and properties that determine the uniqueness of X and Y, as well as error metrics that measure the accuracy of a given X and Y, are given. The formulation of the solution involves the Kronecker product and the singular value decomposition. The method is compared with existing solutions on both simulated and real data. It is shown that the Kronecker method presented in this paper is a reliable and accurate way of solving the robot-world/hand-eye calibration problem.
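
The rotational core of the Kronecker method can be sketched as follows (a simplified reading: only the rotation blocks of AX = YB are handled, using the identities vec(AX) = (I ⊗ A)vec(X) and vec(YB) = (Bᵀ ⊗ I)vec(Y); the paper's full separable treatment, including translations, is not reproduced):

```python
import numpy as np

def nearest_rotation(M):
    """Closest proper rotation to M via its SVD (polar projection)."""
    U, _, Vh = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vh))])
    return U @ D @ Vh

def solve_axyb_rotations(RAs, RBs):
    """Stack vec(RA_i X) = vec(Y RB_i) and take the SVD null vector."""
    I3 = np.eye(3)
    M = np.vstack([np.hstack([np.kron(I3, RA), -np.kron(RB.T, I3)])
                   for RA, RB in zip(RAs, RBs)])
    v = np.linalg.svd(M)[2][-1]                  # least right-singular vec
    X = v[:9].reshape(3, 3, order="F")           # un-vec (column-major)
    Y = v[9:].reshape(3, 3, order="F")
    if np.linalg.det(X) < 0:                     # resolve the sign ambiguity
        X, Y = -X, -Y
    return nearest_rotation(X), nearest_rotation(Y)

rng = np.random.default_rng(0)
X_true = nearest_rotation(rng.standard_normal((3, 3)))
Y_true = nearest_rotation(rng.standard_normal((3, 3)))
RBs = [nearest_rotation(rng.standard_normal((3, 3))) for _ in range(5)]
RAs = [Y_true @ RB @ X_true.T for RB in RBs]     # so RA X = Y RB holds
X, Y = solve_axyb_rotations(RAs, RBs)
print(np.allclose(X, X_true), np.allclose(Y, Y_true))
```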


2006 ◽  
Vol 03 (02) ◽  
pp. 139-159 ◽  
Author(s):  
S. E. EL-KHAMY ◽  
M. M. HADHOUD ◽  
M. I. DESSOUKY ◽  
B. M. SALAM ◽  
F. E. ABD EL-SAMIE

In this paper, an adaptive algorithm is suggested for the implementation of polynomial-based image interpolation techniques such as bilinear, bicubic, cubic spline, and cubic O-MOMS interpolation. The algorithm is based on minimizing the squared estimation error at each pixel in the interpolated image by adaptively estimating the distance of the pixel to be estimated from its neighbors. The adaptation process at each pixel is performed iteratively to yield the best estimate of the pixel value. The adaptive interpolation algorithm takes into consideration the mathematical model by which a low-resolution (LR) image is obtained from a high-resolution (HR) image. It is compared to traditional polynomial-based interpolation techniques, to warped-distance interpolation techniques, and to algorithms used in commercial interpolation software such as the ACDSee and Photopro programs. Results show that the suggested adaptive algorithm is superior to the traditional techniques in terms of peak signal-to-noise ratio (PSNR) and has a higher ability to preserve edges than traditional interpolation techniques. The computational cost of the adaptive algorithm is studied and found to be moderate.
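
As a point of reference for the warped-distance family mentioned above, here is a 1-D sketch (my implementation of the general idea, not the paper's adaptive algorithm): the fractional interpolation distance is shifted away from the geometric midpoint according to local asymmetry:

```python
import numpy as np

def warped_linear_upsample(x, k=1.0):
    """2x upsampling of a 1-D signal with warped interpolation distances."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    amp = np.ptp(x) + 1e-12                        # normalisation range
    out = np.empty(2 * n - 1)
    out[::2] = x                                   # keep known samples
    for i in range(n - 1):
        lo, hi = max(i - 1, 0), min(i + 2, n - 1)
        # Asymmetry: positive when the left side of the gap varies more,
        # which pushes the new sample toward the smoother right side.
        a = (abs(x[i + 1] - x[lo]) - abs(x[hi] - x[i])) / amp
        s = np.clip(0.5 + 0.25 * k * a, 0.0, 1.0)  # warped distance
        out[2 * i + 1] = (1 - s) * x[i] + s * x[i + 1]
    return out

edge = np.concatenate([np.zeros(8), np.ones(8)])   # step-edge test signal
print(warped_linear_upsample(edge))
```

The paper's adaptive algorithm goes further by iterating on the distance per pixel to minimize the squared estimation error under the LR-from-HR degradation model, rather than using a fixed warping rule.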


Author(s):  
Yunwei Sun ◽  
Charles Carrigan ◽  
William Cassata ◽  
Yue Hao ◽  
Souheil Ezzedine ◽  
...  

Abstract Isotopic ratios of radioactive xenon sampled in the subsurface and atmosphere can be used to detect underground nuclear explosions (UNEs) and civilian nuclear reactors. Disparities in the half-lives within the radioactive decay chains are principally responsible for the time-dependent concentrations of xenon isotopes. These contrasting timescales, combined with modern detection capabilities, make the xenon isotopic family a desirable surrogate for UNE detection. However, without including the physical details of post-detonation cavity changes that affect radioxenon evolution and subsurface transport, a UNE is treated as an idealized closed and well-mixed system when estimating xenon isotopic ratios and their correlations, so that the spatially dependent behavior of xenon production, cavity leakage, and transport is overlooked. In this paper, we develop a multi-compartment model with radioactive decay and interactions between compartments. The model does not require the detailed domain geometry and parameterization normally needed by high-fidelity computer simulations, but can represent nuclide evolution within a compartment and migration among compartments under certain conditions. The closed-form solution for all nuclides in the mass chains 131–136 is derived using analytical singular value decomposition. The solution is further used to express xenon ratios as functions of time and compartment position.
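
A toy two-compartment instance of this model class might look as follows (the chain, transfer rate, and compartment layout are illustrative, not the paper's 131–136 series; the eigendecomposition plays the role of the analytical SVD in providing the closed form). The compartment system is linear, dN/dt = A N, so N(t) follows in closed form:

```python
import numpy as np

# Decay constants (1/h) for an illustrative I-135 -> Xe-135 chain and a
# first-order cavity -> chimney transfer rate g (g is a notional value).
lam_i, lam_x, g = np.log(2) / 6.57, np.log(2) / 9.14, 0.05

# State ordering: [I_cavity, Xe_cavity, I_chimney, Xe_chimney]
A = np.array([
    [-(lam_i + g),  0.0,          0.0,    0.0   ],
    [  lam_i,      -(lam_x + g),  0.0,    0.0   ],
    [  g,           0.0,         -lam_i,  0.0   ],
    [  0.0,         g,            lam_i, -lam_x ]])

def closed_form(A, n0, ts):
    """N(t) = V exp(D t) V^{-1} N0 from the eigendecomposition of A."""
    d, V = np.linalg.eig(A)
    c = np.linalg.solve(V, n0)
    return np.real(np.array([V @ (np.exp(d * t) * c) for t in ts]))

n0 = np.array([1.0, 0.0, 0.0, 0.0])      # all iodine starts in the cavity
for t, n in zip((0.0, 12.0, 48.0), closed_form(A, n0, (0.0, 12.0, 48.0))):
    print(t, n.round(4))
```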


2018 ◽  
Vol 52 (22) ◽  
pp. 3109-3124 ◽  
Author(s):  
Yang Yan ◽  
Alfonso Pagani ◽  
Erasmo Carrera ◽  
Qingwen Ren

The present work proposes a closed-form solution based on refined beam theories for the static analysis of fiber-reinforced composite and sandwich beams under simply supported boundary conditions. The higher-order beam models are developed by employing the Carrera Unified Formulation, which uses Lagrange-polynomial expansions to approximate the kinematic field over the cross section. The proposed methodology allows composite structures to be analyzed through a single formulation in a global-local sense, i.e., homogenized laminates at the global scale and fiber-matrix constituents at the local scale, leading to component-wise analysis. Therefore, three-dimensional stress/displacement fields at different scales can be successfully detected by suitably increasing the order of the Lagrange polynomials. The governing equations are derived in strong form and solved in a Navier-type sense. Three benchmark numerical assessments are carried out: a single-layer transversely isotropic beam, a cross-ply laminate [Formula: see text] beam, and a sandwich beam. The results show that accurate displacement and stress values can be obtained in different parts of the structure at lower computational cost in comparison with traditional, enhanced, as well as three-dimensional finite element methods. Besides, this study may serve as a benchmark for future assessments in this field.
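
The Navier-type solution strategy can be illustrated with the simplest possible kinematics (classical Euler-Bernoulli rather than the paper's higher-order CUF models): for a simply supported beam, each sine harmonic of the load is solved in closed form and the response is the superposition:

```python
import numpy as np

def navier_deflection(x, L, EI, q_coeffs):
    """w(x) for a load q(x) = sum_m q_m sin(m*pi*x/L), simply supported."""
    w = np.zeros_like(x, dtype=float)
    for m, qm in enumerate(q_coeffs, start=1):
        k = m * np.pi / L
        w += qm / (EI * k**4) * np.sin(k * x)   # EI w'''' = q per harmonic
    return w

L, EI = 2.0, 1.0e4
# Sine coefficients of a uniform load q0 = 1: q_m = 4*q0/(m*pi), m odd.
q = [4 / (m * np.pi) if m % 2 else 0.0 for m in range(1, 40)]
x = np.linspace(0.0, L, 5)
print(navier_deflection(x, L, EI, q))
# Midspan check against the textbook value 5*q0*L^4 / (384*EI):
print(5 * 1.0 * L**4 / (384 * EI))
```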

