Multibaseline Interferometric Phase Denoising Based on Kurtosis in the NSST Domain

Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 551 ◽  
Author(s):  
Yanfang Liu ◽  
Shiqiang Li ◽  
Heng Zhang

Interferometric phase filtering is a crucial step in multibaseline interferometric synthetic aperture radar (InSAR). Current multibaseline interferometric phase filtering methods mostly follow single-baseline InSAR techniques and do not fully exploit the advantages of multibaseline data. This paper proposes a joint filtering method for multibaseline InSAR based on statistics. We study and analyze a fourth-order statistic of the interferometric phase: its kurtosis. We propose the empirical assumption that the kurtosis of interferograms with different baselines remains constant, which we name the baseline-invariant property of kurtosis. Numerical experiments and theoretical analysis confirm its validity and generality. By means of this property, noise-level estimation for natural images is extended to multibaseline InSAR. We then propose a filtering method based on the non-subsampled shearlet transform (NSST) and a Wiener filter with estimated noise variance. Firstly, multi-scale and multi-directional coefficients of the interferograms are obtained by the NSST. Secondly, the noise variance is obtained as the solution of a constrained non-convex optimization problem. A pre-thresholded Wiener filter with the estimated noise variance then shrinks or zeroes the NSST coefficients. Finally, the inverse NSST is applied to obtain the filtered interferograms. Experiments on simulated and real data show that the proposed method has excellent overall performance and is superior to conventional single-baseline filtering methods.
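The pre-thresholded Wiener shrinkage step can be sketched on a single generic transform subband. This is a minimal stand-in, since computing actual NSST coefficients requires a shearlet library; the threshold scale and signal-variance estimator below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def wiener_shrink(coeffs, noise_var, thresh_scale=3.0):
    """Pre-thresholded Wiener shrinkage of one transform subband.

    Coefficients whose magnitude falls below a noise-dependent
    threshold are zeroed; the survivors are attenuated by the Wiener
    gain s^2 / (s^2 + noise_var), where s^2 is the estimated signal
    variance of the subband.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    # Hard pre-threshold: zero coefficients indistinguishable from noise.
    thresh = thresh_scale * np.sqrt(noise_var)
    kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    # Signal variance estimate: subband energy minus the noise floor.
    sig_var = max(np.mean(kept**2) - noise_var, 0.0)
    gain = sig_var / (sig_var + noise_var)
    return gain * kept
```

Applied per subband, followed by the inverse transform, this realizes the shrink-or-zero behavior described above.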

Author(s):  
Uyen Mai ◽  
Siavash Mirarab

Abstract Phylogenetic trees inferred from sequence data often have branch lengths measured in expected numbers of substitutions and therefore lack estimated divergence times. Such trees give an incomplete view of evolutionary history, since many applications of phylogenies require time trees. Many methods have been developed to convert inferred branch lengths from substitution units to time units using calibration points, but none is universally accepted, as all are challenged in both scalability and accuracy under complex models. Here, we introduce a new method that formulates dating as a non-convex optimization problem in which the variance of the log-transformed rate multipliers is minimized across the tree. On simulated and real data, we show that our method, wLogDate, is often more accurate than alternatives and is more robust to various model assumptions.
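The objective can be illustrated on a toy three-branch tree. This is only a sketch of the minimize-the-variance-of-log-rates idea under invented branch lengths and a hypothetical calibration; the actual wLogDate method handles full trees with weights and multiple calibration points.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy rooted cherry: two leaf branches of duration t (the age of their
# parent node) and one internal branch of duration T - t, where the root
# age T is fixed by a calibration point (values are invented).
b = np.array([0.12, 0.10, 0.30])   # branch lengths in substitutions/site
T = 1.0                            # calibrated root age (hypothetical)

def objective(t):
    durations = np.array([t, t, T - t])
    log_rates = np.log(b / durations)   # per-branch rate multipliers
    return np.var(log_rates)            # variance of log rates, as in wLogDate

res = minimize_scalar(objective, bounds=(1e-6, T - 1e-6), method="bounded")
node_age = res.x                        # dated internal node
```

The optimizer pushes the per-branch rates toward a common value, which is exactly what makes the resulting node ages consistent with a relaxed molecular clock.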


TAPPI Journal ◽  
2019 ◽  
Vol 18 (10) ◽  
pp. 607-618
Author(s):  
JÉSSICA MOREIRA ◽  
BRUNO LACERDA DE OLIVEIRA CAMPOS ◽  
ESLY FERREIRA DA COSTA JUNIOR ◽  
ANDRÉA OLIVEIRA SOUZA DA COSTA

The multiple effect evaporator (MEE) is an energy-intensive step in the kraft pulping process. Exergetic analysis is useful for locating irreversibilities in the process and identifying the least efficient equipment, and it can also serve as the object of optimization studies. In the present work, each evaporator of a real kraft system is individually described using mass balances and thermodynamic principles (the first and second laws). Real data from a kraft MEE were collected from a Brazilian plant and used both to estimate heat transfer coefficients via a nonlinear optimization problem and to validate the model. An exergetic analysis of each effect individually showed that effects 1A and 1B are the least efficient and therefore have the greatest potential for improvement. A sensitivity analysis was also performed, showing that steam temperature and liquor input flow rate are sensitive parameters.
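The heat-transfer-coefficient estimation step can be sketched as a small least-squares fit. The area, temperature differences, and duties below are invented illustrative numbers, and the single-coefficient model Q = U·A·ΔT is a simplification of the per-effect balances used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical single-effect energy balance Q = U * A * dT: estimate the
# overall heat transfer coefficient U from noisy plant measurements.
A = 500.0                                # heat transfer area, m^2 (assumed)
dT = np.array([12.0, 15.0, 10.0, 14.0])  # driving temperature differences, K
Q_meas = np.array([9.1e6, 11.2e6, 7.4e6, 10.6e6])  # measured duties, W

def residuals(U):
    return Q_meas - U[0] * A * dT

fit = least_squares(residuals, x0=[1000.0])
U_hat = fit.x[0]                         # estimated coefficient, W/(m^2*K)
```

In the multi-effect setting, one residual vector per effect would be stacked into a single nonlinear regression over all coefficients.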


Author(s):  
Roberto Benedetti ◽  
Maria Michela Dickson ◽  
Giuseppe Espa ◽  
Francesco Pantalone ◽  
Federica Piersimoni

Abstract Balanced sampling is a random method for sample selection whose use is preferable when auxiliary information is available for all units of a population. However, implementing balanced sampling can be a challenging task, due in part to the computational effort required and the need to respect balancing constraints and inclusion probabilities. In the present paper, a new algorithm for selecting balanced samples is proposed. The method is inspired by simulated annealing, since balanced sample selection can be interpreted as an optimization problem. A set of simulation experiments and an example using real data show the efficiency and accuracy of the proposed algorithm.
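The annealing interpretation can be sketched as follows: treat the imbalance of the Horvitz-Thompson estimate as a cost and search over fixed-size samples with swap moves. The cooling schedule, move rule, and equal inclusion probabilities below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_balanced_sample(x, pi, n, iters=4000, t0=1.0, cooling=0.999):
    """Simulated-annealing sketch of balanced sample selection.

    The cost is the gap between the Horvitz-Thompson estimate of the
    auxiliary total, computed on the candidate sample, and the known
    population total of x. Swap moves keep the sample size fixed at n.
    """
    total = x.sum()

    def cost(s):
        return abs((x[s] / pi[s]).sum() - total)

    sample = rng.choice(len(x), size=n, replace=False)
    best = sample.copy()
    temp = t0
    for _ in range(iters):
        cand = rng.integers(len(x))
        if cand in sample:          # proposal must come from outside the sample
            continue
        new = sample.copy()
        new[rng.integers(n)] = cand
        delta = cost(new) - cost(sample)
        if delta < 0 or rng.random() < np.exp(-delta / temp):
            sample = new
            if cost(sample) < cost(best):
                best = sample.copy()
        temp *= cooling
    return best

# Illustrative population with equal inclusion probabilities (assumed).
N, n = 200, 20
x = rng.random(N) * 100
pi = np.full(N, n / N)
s = anneal_balanced_sample(x, pi, n)
```

With several auxiliary variables, the cost would become a norm of the vector of balancing gaps, which is where the real computational difficulty lies.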


2021 ◽  
Author(s):  
Stav Belogolovsky ◽  
Philip Korsunsky ◽  
Shie Mannor ◽  
Chen Tessler ◽  
Tom Zahavy

Abstract We consider the task of Inverse Reinforcement Learning in Contextual Markov Decision Processes (MDPs). In this setting, contexts, which define the reward and transition kernel, are sampled from a distribution. In addition, although the reward is a function of the context, it is not provided to the agent. Instead, the agent observes demonstrations from an optimal policy. The goal is to learn the reward mapping so that the agent acts optimally even when encountering previously unseen contexts, also known as zero-shot transfer. We formulate this problem as a non-differentiable convex optimization problem and propose a novel algorithm to compute its subgradients. Based on this scheme, we analyze several methods both theoretically, comparing sample complexity and scalability, and empirically. Most importantly, we show both theoretically and empirically that our algorithms perform zero-shot transfer (generalize to new and unseen contexts). Specifically, we present empirical experiments in a dynamic treatment regime, where the goal is to learn a reward function that explains the behavior of expert physicians based on recorded data of them treating patients diagnosed with sepsis.
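Optimizing a non-differentiable convex objective with subgradients can be sketched on a stand-in function. The objective f(w) = ||w||_1 + 0.5||w - b||^2 below is a hypothetical example, not the paper's reward-mapping loss; it merely shows the subgradient-descent machinery.

```python
import numpy as np

# Subgradient descent on a non-differentiable convex objective.
b = np.array([2.0, -0.5, 0.0])

def subgradient(w):
    # sign(w) is a valid subgradient of ||w||_1 (with 0 chosen at w_i == 0);
    # (w - b) is the gradient of the smooth quadratic part.
    return np.sign(w) + (w - b)

w = np.zeros_like(b)
for k in range(1, 2001):
    w -= (1.0 / k) * subgradient(w)   # diminishing step size, standard for subgradient methods
```

The minimizer here is the soft-thresholding of b at level 1, i.e. approximately (1, 0, 0), which the iterates approach despite the kink at zero.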


2018 ◽  
Vol 30 (12) ◽  
pp. 3281-3308
Author(s):  
Hong Zhu ◽  
Li-Zhi Liao ◽  
Michael K. Ng

We study a dimensionality-reduction algorithm for multi-instance (MI) learning based on sparsity and orthogonality, which is especially useful for high-dimensional MI data sets. We develop a novel algorithm to handle both sparsity and orthogonality constraints, which existing methods do not handle well simultaneously. Our main idea is to formulate an optimization problem in which the sparse term appears in the objective function and the orthogonality requirement is imposed as a constraint. The resulting problem is solved using approximate augmented Lagrangian iterations as the outer loop and inertial proximal alternating linearized minimization (iPALM) iterations as the inner loop. The main advantage of this method is that both sparsity and orthogonality are satisfied by the proposed algorithm. We show the global convergence of the proposed iterative algorithm and demonstrate that it achieves the high sparsity and orthogonality requirements that are essential for dimensionality reduction. Experimental results on both synthetic and real data sets show that the proposed algorithm obtains learning performance comparable to that of other tested MI learning algorithms.
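The interplay of the two constraint types can be illustrated by alternating a sparsity step (soft-thresholding) with an orthogonality step (projection onto matrices with orthonormal columns via SVD). This is a generic sketch of the two operations, not the paper's augmented-Lagrangian/iPALM scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))

def soft_threshold(X, lam):
    # Proximal operator of the l1 sparsity term.
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def stiefel_project(X):
    # Nearest matrix with orthonormal columns (polar factor via SVD).
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

W = A.copy()
for _ in range(20):
    W = soft_threshold(W, 0.05)
    W = stiefel_project(W)
```

The difficulty the paper addresses is that these two projections do not commute, so naive alternation gives no convergence guarantee; the augmented Lagrangian couples them properly.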


Robotica ◽  
2018 ◽  
Vol 37 (3) ◽  
pp. 481-501 ◽  
Author(s):  
Mehran Hosseini-Pishrobat ◽  
Jafar Keighobadi

SUMMARY: This paper reports an extended state observer (ESO)-based robust dynamic surface control (DSC) method for triaxial MEMS gyroscope applications. An ESO with a non-linear gain function is designed to estimate both the velocity and disturbance vectors of the gyroscope dynamics from measured position signals. Using the sector-bounded property of the non-linear gain function, the design of an $\mathcal{L}_2$-robust ESO is phrased as a convex optimization problem in terms of linear matrix inequalities (LMIs). Next, using the estimated velocity and disturbance, a certainty-equivalence tracking controller is designed based on DSC. To improve robustness and remove static steady-state tracking errors, new non-linear integral error surfaces are incorporated into the DSC. Based on the energy-to-peak ($\mathcal{L}_2$-$\mathcal{L}_\infty$) performance criterion, a finite number of LMIs are derived to obtain the DSC gains. To prevent amplification of the measurement noise in the DSC error dynamics, a multi-objective convex optimization problem, which guarantees a prescribed $\mathcal{L}_2$-$\mathcal{L}_\infty$ performance bound, is considered. Finally, the efficacy of the proposed control method is illustrated by detailed software simulations.


2013 ◽  
Vol 2013 ◽  
pp. 1-8
Author(s):  
Teng Li ◽  
Huan Chang ◽  
Jun Wu

This paper presents a novel algorithm that numerically decomposes mixed signals in a collaborative way, given supervision of the labels that each signal contains. The decomposition is formulated as an optimization problem incorporating a nonnegativity constraint, and a nonnegative data factorization solution is presented to yield the decomposed results. The optimization is shown to be efficient and to decrease the objective function monotonically. The decomposition algorithm can be applied to multilabel training samples for pattern classification. Experimental results on real data show that the proposed algorithm significantly improves multilabel image classification performance under weak supervision.
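The kind of nonnegative factorization with monotone updates referred to here can be sketched with the classical Lee-Seung multiplicative rules for V ≈ WH under the Frobenius objective. The paper's exact label-supervised formulation differs; this only illustrates the nonnegativity-preserving, monotonically decreasing update mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))          # nonnegative data matrix (synthetic)
r = 4                             # factorization rank (assumed)
W = rng.random((20, r)) + 0.1
H = rng.random((r, 30)) + 0.1

def frob_err(V, W, H):
    return np.linalg.norm(V - W @ H)

errs = [frob_err(V, W, H)]
for _ in range(100):
    # Multiplicative updates: elementwise ratios keep W, H nonnegative.
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    errs.append(frob_err(V, W, H))
```

Because every factor stays elementwise nonnegative, no projection step is needed, which is what makes multiplicative updates attractive for this class of problems.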


2018 ◽  
Vol 13 (4) ◽  
pp. 34
Author(s):  
T.A. Bubba ◽  
D. Labate ◽  
G. Zanghirati ◽  
S. Bonettini

Region of interest (ROI) tomography has gained increasing attention in recent years due to its potential to reduce radiation exposure and shorten scanning time. However, tomographic reconstruction from ROI-focused illumination involves truncated projection data and typically suffers from higher numerical instability, even when the reconstruction problem has a unique solution. To address this problem, both ad hoc analytic formulas and iterative numerical schemes have been proposed in the literature. In this paper, we introduce a novel approach to ROI tomographic reconstruction, formulated as a convex optimization problem with a shearlet-based regularization term. Our numerical implementation consists of an iterative scheme based on the scaled gradient projection method and is tested in the context of fan-beam CT. Our results show that the approach is essentially insensitive to the location of the ROI and remains very stable even when the ROI size is rather small.
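The gradient-projection machinery can be sketched on a simplified regularized problem: min over x ≥ 0 of 0.5||Ax - b||² + λ·sum(x). Here A is a random stand-in rather than a CT projection operator, λ is illustrative, and the simple l1 penalty replaces the shearlet regularizer; only the iterate-then-project structure carries over.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 25))
x_true = np.maximum(rng.standard_normal(25), 0.0)
b = A @ x_true
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the smooth part

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(x)

x = np.zeros(25)
for _ in range(300):
    # On the nonnegative orthant, the l1 term has constant gradient lam.
    grad = A.T @ (A @ x - b) + lam
    x = np.maximum(x - step * grad, 0.0)    # project onto x >= 0
```

The scaled variant in the paper additionally rescales the gradient by a diagonal metric and adapts the step, which accelerates convergence on ill-conditioned truncated-data problems.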


2019 ◽  
Vol 44 (4) ◽  
pp. 407-426
Author(s):  
Jedrzej Musial ◽  
Emmanuel Kieffer ◽  
Mateusz Guzek ◽  
Gregoire Danoy ◽  
Shyam S. Wagle ◽  
...  

Abstract Cloud computing has become one of the major computing paradigms. Not only has the number of offered cloud services grown exponentially, but many different providers also compete by proposing very similar services. This situation should eventually benefit customers, but because these services differ slightly in functional and non-functional aspects (e.g., performance, reliability, security), consumers may be confused and unable to make an optimal choice. The emergence of cloud service brokers addresses these issues. A broker gathers information about services from providers and about the needs and requirements of the customers, with the final goal of finding the best match. In this paper, we formalize and study a novel problem that arises in the area of cloud brokering. In its simplest form, brokering is a trivial assignment problem, but in more complex and realistic cases this no longer holds. The novelty of the presented problem lies in considering services that can be sold in bundles. Bundling is a common business practice in which a set of services is sold together for a lower price than the sum of the prices of the included services. This work introduces a multi-criteria optimization problem that can help customers determine the best IT solutions according to several criteria. The Cloud Brokering with Bundles (CBB) problem models the different IT packages (or bundles) found on the market while minimizing (or maximizing) different criteria. A proof of complexity is given for the single-objective case, and experiments have been conducted with a special case of two criteria: the first being the cost and the second artificially generated. We also designed and developed a benchmark generator based on real data gathered from 19 cloud providers. The problem is solved using an exact optimizer relying on a dichotomic search method.
The results show that dichotomic search can be successfully applied to small instances corresponding to typical cloud-brokering use cases and returns results within seconds. For larger problem instances, solving times are not prohibitive, and solutions for large corporate clients could be obtained within minutes.
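Why bundling breaks the trivial assignment structure can be seen in a toy single-objective instance: the cheapest way to cover the required services may use a bundle that is cheaper than its parts. The services, offers, and prices below are invented for illustration, and brute-force subset enumeration stands in for the exact optimizer.

```python
from itertools import chain, combinations

# Toy bundle-aware brokering: cover the required services at minimum cost.
required = {"vm", "storage", "db"}
offers = [
    ({"vm"}, 40.0),
    ({"storage"}, 25.0),
    ({"db"}, 30.0),
    ({"vm", "storage"}, 55.0),        # bundle cheaper than 40 + 25
    ({"vm", "storage", "db"}, 82.0),  # full bundle, cheaper than any mix
]

best_cost, best_choice = float("inf"), None
for subset in chain.from_iterable(
    combinations(offers, k) for k in range(1, len(offers) + 1)
):
    covered = set().union(*(services for services, _ in subset))
    cost = sum(price for _, price in subset)
    if required <= covered and cost < best_cost:
        best_cost, best_choice = cost, subset
```

Picking services independently costs 95.0 and the best bundle mix costs 85.0, yet the full bundle wins at 82.0, which is exactly the coupling that turns brokering into a set-cover-like problem rather than an assignment.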


2021 ◽  
Author(s):  
Di Zhao ◽  
Weijie Tan ◽  
Zhongliang Deng ◽  
Gang Li

Abstract In this paper, we present a low-complexity beamspace direction-of-arrival (DOA) estimation method for the uniform circular array (UCA), based on single measurement vectors (SMVs) obtained via vectorization of the sparse covariance matrix. In the proposed method, we first transform the signal model of the UCA into that of a virtual uniform linear array (ULA) in the beamspace domain using the beamspace transformation (BT). Subsequently, by applying the vectorization operator to the virtual ULA-like array signal model, a new dimension-reduced array signal model consisting of SMVs, based on the Khatri-Rao (KR) product, is derived. The DOA estimation is then converted into a convex optimization problem. Finally, simulations are carried out to verify the effectiveness of the proposed method. The results show that, without knowledge of the number of signals, the proposed method not only has higher DOA resolution than subspace-based methods at low signal-to-noise ratio (SNR), but also has much lower computational complexity compared with other sparse DOA estimation methods.
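The vectorized-covariance model underlying the SMV formulation can be checked numerically: for uncorrelated sources, vec(R) = (A* ⊙ A)p + σ²vec(I), where ⊙ denotes the column-wise Khatri-Rao product. The toy half-wavelength ULA steering matrix, source powers, and noise level below are illustrative assumptions standing in for the beamspace virtual-ULA model.

```python
import numpy as np

M, K = 6, 2                                   # sensors, sources (assumed)
theta = np.deg2rad([10.0, 40.0])              # illustrative DOAs
m = np.arange(M)[:, None]
A = np.exp(1j * np.pi * m * np.sin(theta))    # ULA steering matrix
p = np.array([1.5, 0.7])                      # source powers
sigma2 = 0.2                                  # noise power

# Covariance of uncorrelated sources in white noise.
R = (A * p) @ A.conj().T + sigma2 * np.eye(M)
# Column-wise Khatri-Rao product conj(A) ⊙ A.
kr = np.column_stack([np.kron(A[:, k].conj(), A[:, k]) for k in range(K)])
vec_model = kr @ p + sigma2 * np.eye(M).reshape(-1)
```

Because the unknowns in this model are the real power vector p and σ², a single M²-long "measurement" vector suffices, which is what reduces the sparse recovery to an SMV convex program.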

