Sub-Nyquist SAR via Quadrature Compressive Sampling with Independent Measurements

2019 ◽  
Vol 11 (4) ◽  
pp. 472 ◽  
Author(s):  
Huizhang Yang ◽  
Chengzhi Chen ◽  
Shengyao Chen ◽  
Feng Xi

This paper presents an efficient sampling system for acquiring synthetic aperture radar (SAR) data at a sub-Nyquist rate. The system adopts a quadrature compressive sampling architecture, which uses modulation, filtering, sampling and digital quadrature demodulation to produce sub-Nyquist (compressive) measurements. In the sequential transmit-receive procedure of SAR, the analog echoes are modulated by random binary chipping sequences to inject randomness into the measurement projection, and the chipping sequences are independent from one observation to the next. As a result, the system generates a sequence of independent structured measurement matrices, and theoretical analysis proves that the resulting sensing matrix has a better restricted isometry property. Formulated as a standard recovery problem in compressive sensing, image formation from the sub-Nyquist measurements achieves significantly improved performance, which in turn enables low sampling/data rates. Moreover, the resulting sensing matrix has structure suitable for fast matrix-vector products, based on which we provide a first-order fast image formation algorithm. The performance of the proposed sampling system is assessed on synthetic and real data sets. Simulation results suggest that the proposed system is a valid candidate for sub-Nyquist SAR.
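The independent-chipping idea can be illustrated with a minimal NumPy sketch: each pulse gets its own random ±1 chipping sequence, and a crude integrate-and-dump filter stands in for the paper's filtering and quadrature demodulation stages. All names and dimensions here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256          # Nyquist-rate samples per echo
M = 64           # sub-Nyquist (compressive) measurements per echo
num_pulses = 3   # sequential transmit-receive observations

def chipping_matrix(rng, M, N):
    """One structured measurement matrix: random +/-1 chipping of the
    echo, then integrate-and-dump low-rate sampling (a simple stand-in
    for the filtering/demodulation stages)."""
    chips = rng.choice([-1.0, 1.0], size=N)             # binary chipping sequence
    H = np.kron(np.eye(M), np.ones(N // M)) / (N // M)  # averaging filter + decimation
    return H * chips                                    # modulate, then filter

# Independent chipping sequences from pulse to pulse give a sequence
# of independent structured measurement matrices.
Phis = [chipping_matrix(rng, M, N) for _ in range(num_pulses)]

x = rng.standard_normal(N)       # a synthetic Nyquist-rate echo
ys = [Phi @ x for Phi in Phis]   # sub-Nyquist measurements, one per pulse
```

Each `Phi` is structured (diagonal modulation times a banded filter), which is the kind of structure that admits fast matrix-vector products.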

Author(s):  
Xiaoqiang Wang ◽  
Yali Du ◽  
Shengyu Zhu ◽  
Liangjun Ke ◽  
Zhitang Chen ◽  
...  

Discovering causal relations among a set of variables is a long-standing question in many empirical sciences. Recently, Reinforcement Learning (RL) has achieved promising results in causal discovery from observational data. However, searching the space of directed graphs and enforcing acyclicity by implicit penalties tend to be inefficient and restrict the existing RL-based method to small-scale problems. In this work, we propose a novel RL-based approach for causal discovery by incorporating RL into the ordering-based paradigm. Specifically, we formulate the ordering search problem as a multi-step Markov decision process, implement the ordering generating process with an encoder-decoder architecture, and finally use RL to optimize the proposed model based on the reward mechanisms designed for each ordering. A generated ordering is then processed using variable selection to obtain the final causal graph. We analyze the consistency and computational complexity of the proposed method, and empirically show that a pretrained model can be exploited to accelerate training. Experimental results on both synthetic and real data sets show that the proposed method achieves much improved performance over the existing RL-based method.
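The final step, turning a generated ordering into a causal graph via variable selection, can be sketched as follows. Plain least squares with coefficient thresholding stands in here for whatever variable-selection procedure the paper actually uses; all names, thresholds and the toy data are illustrative assumptions.

```python
import numpy as np

def graph_from_ordering(X, ordering, threshold=0.3):
    """Given a causal ordering, recover a DAG by regressing each
    variable on its predecessors in the ordering and keeping the
    large coefficients as edges."""
    n, d = X.shape
    A = np.zeros((d, d))                 # A[i, j] = 1 means edge i -> j
    for k in range(1, d):
        j = ordering[k]
        parents = ordering[:k]
        beta, *_ = np.linalg.lstsq(X[:, parents], X[:, j], rcond=None)
        for p, b in zip(parents, beta):
            if abs(b) > threshold:
                A[p, j] = 1.0
    return A

# Toy linear SEM: x0 -> x1 -> x2
rng = np.random.default_rng(1)
x0 = rng.standard_normal(500)
x1 = 2.0 * x0 + 0.5 * rng.standard_normal(500)
x2 = -1.5 * x1 + 0.5 * rng.standard_normal(500)
X = np.column_stack([x0, x1, x2])

A = graph_from_ordering(X, ordering=[0, 1, 2])
```

Because each variable may only draw parents from its predecessors in the ordering, acyclicity holds by construction, with no implicit penalty needed.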


2014 ◽  
Vol 7 (5) ◽  
pp. 2303-2311 ◽  
Author(s):  
M. Martinez-Camara ◽  
B. Béjar Haro ◽  
A. Stohl ◽  
M. Vetterli

Abstract. Emissions of harmful substances into the atmosphere are a serious environmental concern. In order to understand and predict their effects, it is necessary to estimate the exact quantity and timing of the emissions from sensor measurements taken at different locations. There are a number of methods for solving this problem. However, these existing methods assume Gaussian additive errors, making them extremely sensitive to outlier measurements. We first show that the errors in real-world measurement data sets come from a heavy-tailed distribution, i.e., include outliers. Hence, we propose robustifying the existing inverse methods by adding a blind outlier-detection algorithm. The improved performance of our method is demonstrated on a real data set and compared to previously proposed methods. For the blind outlier detection, we first use an existing algorithm, RANSAC, and then propose a modification called TRANSAC, which provides a further performance improvement.
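The blind outlier-detection step can be illustrated with a minimal RANSAC implementation for a generic linear measurement model y ≈ Ax. This is a textbook sketch, not the authors' code; TRANSAC is their modification and is not reproduced here.

```python
import numpy as np

def ransac_inliers(A, y, n_iter=200, tol=0.5, seed=0):
    """Minimal RANSAC: repeatedly fit the linear model y ~ A x on a
    random minimal subset of rows and keep the fit that agrees with
    the most measurements; rows with large residuals under the best
    fit are flagged as outliers."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    best_mask = np.zeros(m, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(m, size=n, replace=False)
        x, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
        mask = np.abs(A @ x - y) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask            # True = inlier measurement

# Synthetic sensor measurements with a few gross (heavy-tailed) outliers
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.05 * rng.standard_normal(40)
y[[3, 17, 29]] += 20.0          # corrupted measurements

inliers = ransac_inliers(A, y)
```

The flagged rows can then simply be dropped before running any Gaussian-error inverse method on the remaining measurements.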


2015 ◽  
Vol 26 (2) ◽  
pp. 997-1020
Author(s):  
Marcelo Azevedo Costa ◽  
Thiago de Souza Rodrigues ◽  
André Gabriel FC da Costa ◽  
René Natowicz ◽  
Antônio Pádua Braga

This work proposes a sequential methodology for selecting variables in classification problems in which the number of predictors is much larger than the sample size. The methodology includes a Monte Carlo permutation procedure that conditionally tests the null hypothesis of no association between the outcomes and the available predictors. To reduce the computational cost of the permutation procedure, we propose a new parametric distribution, the Truncated and Zero-Inflated Gumbel distribution. The final application is to find compact classification models with improved performance for genomic data. Results on real data sets show that the proposed methodology selects compact models with optimized classification performance.
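A minimal Monte Carlo permutation test of the no-association null, using absolute correlation as the test statistic, looks as follows. This is a generic sketch; the paper's conditional, sequential procedure and its Gumbel-based approximation are not reproduced.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Monte Carlo permutation test of 'no association between x and y',
    using |correlation| as the test statistic: shuffle y repeatedly and
    count how often the shuffled statistic reaches the observed one."""
    rng = np.random.default_rng(seed)
    obs = abs(np.corrcoef(x, y)[0, 1])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(y)
        if abs(np.corrcoef(x, perm)[0, 1]) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)     # add-one (valid p-value) correction

rng = np.random.default_rng(3)
x = rng.standard_normal(100)
y_assoc = x + 0.5 * rng.standard_normal(100)   # truly associated outcome
y_null = rng.standard_normal(100)              # unrelated outcome

p_assoc = permutation_pvalue(x, y_assoc)
p_null = permutation_pvalue(x, y_null)
```

Each predictor requires thousands of permutations like this, which is exactly the cost that motivates approximating the permutation distribution parametrically.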


2021 ◽  
Author(s):  
Jakob Raymaekers ◽  
Peter J. Rousseeuw

Abstract. Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian). Instead, their distribution is often skewed. In order to handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers, and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations as well as an estimator of the transformation parameter that is robust to outliers, so the transformed data can be approximately normal in the center and a few outliers may deviate from it. It compares favorably to existing techniques in an extensive simulation study and on real data.
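For reference, the Yeo–Johnson transformation itself is easy to state in code. The sketch below applies it with a fixed parameter λ; the paper's contribution is a robust way of *estimating* λ (and a modified transformation), neither of which is reproduced here.

```python
import numpy as np

def yeo_johnson(y, lam):
    """Yeo-Johnson transformation with parameter lam:
    for y >= 0: ((y+1)^lam - 1)/lam          (log1p(y) when lam == 0)
    for y <  0: -(((-y+1)^(2-lam) - 1)/(2-lam))  (-log1p(-y) when lam == 2)."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    pos = y >= 0
    if lam != 0:
        out[pos] = ((y[pos] + 1.0) ** lam - 1.0) / lam
    else:
        out[pos] = np.log1p(y[pos])
    if lam != 2:
        out[~pos] = -(((-y[~pos] + 1.0) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    else:
        out[~pos] = -np.log1p(-y[~pos])
    return out

# A right-skewed sample: lam < 1 pulls the long right tail inward
rng = np.random.default_rng(4)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
transformed = yeo_johnson(skewed, lam=0.0)
```

With λ = 1 the transformation is the identity, which makes the parameter a smooth dial between "leave the data alone" and "compress the heavy tail"; the robustness question is precisely which λ to pick when outliers are present.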


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 62
Author(s):  
Zhengwei Liu ◽  
Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, the most widely used being binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator, named extended binomial thinning, is introduced as a generalization of binomial thinning. Compared to the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. We also obtain the asymptotic property of the two-step CLS estimator. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
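The baseline binomial thinning that the paper generalizes is easy to simulate: in an INAR(1) model, α∘X draws a Binomial(X, α) "survivor" count and adds fresh innovations. A minimal sketch with Poisson innovations (parameter values illustrative):

```python
import numpy as np

def simulate_inar1(alpha, lam, T, seed=0):
    """Simulate an INAR(1) process X_t = alpha o X_{t-1} + eps_t, where
    'o' is binomial thinning (alpha o X ~ Binomial(X, alpha)) and the
    innovations eps_t are Poisson(lam)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T, dtype=int)
    x[0] = rng.poisson(lam)
    for t in range(1, T):
        survivors = rng.binomial(x[t - 1], alpha)   # binomial thinning
        x[t] = survivors + rng.poisson(lam)         # new arrivals
    return x

# Stationary mean of the Poisson INAR(1) is lam / (1 - alpha) = 4 here
x = simulate_inar1(alpha=0.5, lam=2.0, T=5000)
```

The extended binomial operator replaces the single-parameter Binomial(X, α) draw with a two-parameter thinning distribution, which is what buys the extra flexibility for over- and underdispersed counts.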


Econometrics ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Šárka Hudecová ◽  
Marie Hušková ◽  
Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one entirely nonparametric and the other semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics under both the null hypotheses and alternatives is derived, and consistency is proved. The case of testing bivariate generalized Poisson autoregression and the extension of the methods to dimensions higher than two are also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications to real data sets and discussion.
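The L2-type distance between an empirical and a model probability generating function can be sketched in the univariate i.i.d. case. The paper works with bivariate PGFs and a semiparametric estimator under the null; this grid approximation is purely illustrative.

```python
import numpy as np

def empirical_pgf(x, u):
    """Empirical probability generating function g(u) = E[u^X],
    estimated by (1/n) * sum_i u^{x_i} at each grid point u."""
    return np.mean(np.power.outer(u, x), axis=1)

def l2_pgf_distance(x, pgf_model, grid=None):
    """L2-type distance between the empirical PGF and a model PGF,
    approximated as the mean squared difference over a grid in [0, 1]."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    diff = empirical_pgf(x, grid) - pgf_model(grid)
    return np.mean(diff ** 2)

rng = np.random.default_rng(5)
x = rng.poisson(3.0, size=2000)
poisson_pgf = lambda u: np.exp(3.0 * (u - 1.0))   # PGF of Poisson(3)
d = l2_pgf_distance(x, poisson_pgf)
```

Under the correct model the distance shrinks with the sample size, while a misspecified model PGF leaves a gap that does not vanish, which is what the test statistic exploits.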


Information ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 202
Author(s):  
Louai Alarabi ◽  
Saleh Basalamah ◽  
Abdeltawab Hendawi ◽  
Mohammed Abdalla

The rapid spread of infectious diseases is a major public health problem. Recent developments in fighting these diseases have heightened the need for a contact tracing process, which can be considered an ideal method for controlling the transmission of infectious diseases. Contact tracing leads to diagnostic testing, treatment or self-isolation of suspected cases, and treatment of infected persons, which ultimately limits the spread of disease. This paper proposes a technique named TraceAll that traces all contacts exposed to an infected patient and produces a list of these contacts as potentially infected patients. Initially, it considers the infected patient as the querying user and starts to fetch the contacts exposed to them. Secondly, it obtains all the trajectories of objects that moved near the querying user. Next, it investigates these trajectories, considering the social distance and exposure period, to identify whether these objects have become infected. Experimental evaluation of the proposed technique on real data sets illustrates the effectiveness of this solution. Comparative experiments confirm that TraceAll outperforms baseline methods by 40% in the efficiency of answering contact tracing queries.
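The social-distance-and-exposure-period criterion can be sketched with a brute-force query over timestamped trajectory records. TraceAll's indexed query processing is what makes the real system fast; none of it is reproduced here, and all names and thresholds are illustrative.

```python
from collections import defaultdict
import math

def trace_contacts(trajectories, query_id, max_dist=2.0, min_exposure=3):
    """Brute-force contact tracing over (object_id, t, x, y) records:
    an object is a contact if it stays within max_dist of the querying
    user for at least min_exposure consecutive time steps."""
    by_id = defaultdict(dict)
    for oid, t, x, y in trajectories:
        by_id[oid][t] = (x, y)
    query = by_id[query_id]
    contacts = set()
    for oid, pts in by_id.items():
        if oid == query_id:
            continue
        streak = 0
        for t in sorted(query):
            if t in pts and math.dist(query[t], pts[t]) <= max_dist:
                streak += 1
                if streak >= min_exposure:
                    contacts.add(oid)
                    break
            else:
                streak = 0          # gap or too far: exposure resets
    return contacts

records = (
    [("patient", t, 0.0, 0.0) for t in range(6)]
    + [("close", t, 1.0, 0.0) for t in range(6)]    # near for 6 steps
    + [("brief", t, 1.0, 0.0) for t in range(2)]    # near only 2 steps
    + [("far", t, 50.0, 50.0) for t in range(6)]    # never close
)

contacts = trace_contacts(records, "patient")
```

The returned contacts would themselves become querying users in the next round, so the tracing expands transitively from the original patient.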


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 474
Author(s):  
Abdulhakim A. Al-Babtain ◽  
Ibrahim Elbatal ◽  
Hazem Al-Mofleh ◽  
Ahmed M. Gemeay ◽  
Ahmed Z. Afify ◽  
...  

In this paper, we introduce a new flexible generator of continuous distributions called the transmuted Burr X-G (TBX-G) family to extend and increase the flexibility of the Burr X generator. The general statistical properties of the TBX-G family are derived. One special sub-model, the TBX-exponential distribution, is studied in detail. We discuss eight approaches to estimating the TBX-exponential parameters, and numerical simulations are conducted to compare the suggested approaches based on partial and overall ranks. Based on our study, the Anderson–Darling estimators are recommended for estimating the TBX-exponential parameters. Using two skewed real data sets from the engineering sciences, we illustrate the importance and flexibility of the TBX-exponential model compared with other existing competing distributions.
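One ingredient of the construction, the quadratic rank transmutation map, is standard and easy to state. The sketch below applies it directly to an exponential baseline for illustration only; the full TBX-G family first passes the baseline through the Burr X generator, which is not reproduced here.

```python
import numpy as np

def transmuted_cdf(G, lam):
    """Quadratic rank transmutation map: given baseline CDF values G,
    F(x) = (1 + lam) * G(x) - lam * G(x)^2, with -1 <= lam <= 1."""
    return (1.0 + lam) * G - lam * G ** 2

# Illustrative baseline: the standard exponential CDF
x = np.linspace(0.0, 5.0, 200)
G = 1.0 - np.exp(-x)                 # exponential(1) CDF
F = transmuted_cdf(G, lam=0.5)       # transmuted exponential CDF values
```

The transmutation parameter λ skews the baseline without changing its support, which is one source of the added flexibility the family claims.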


Stats ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 28-45
Author(s):  
Vasili B.V. Nagarjuna ◽  
R. Vishnu Vardhan ◽  
Christophe Chesneau

In this paper, a new five-parameter distribution is proposed that combines the functionalities of the Kumaraswamy generalized family of distributions with the features of the power Lomax distribution. It is named the Kumaraswamy generalized power Lomax distribution. In a first approach, we derive its main probability and reliability functions, with a visualization of its modeling behavior under different parameter combinations. As a prime quality, the corresponding hazard rate function is very flexible: it possesses decreasing, increasing and inverted (upside-down) bathtub shapes, and decreasing-increasing-decreasing shapes are also observed. Some important characteristics of the Kumaraswamy generalized power Lomax distribution are derived, including moments, entropy measures and order statistics. The second approach is statistical. The maximum likelihood estimates of the parameters are described, and a brief simulation study shows their effectiveness. Two real data sets are used to show how the proposed distribution can be applied concretely; parameter estimates are obtained and fitting comparisons are performed with other well-established Lomax-based distributions. The Kumaraswamy generalized power Lomax distribution turns out to provide the best fit, capturing fine details in the structure of the data considered.
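The construction can be sketched as a composition of CDFs: the Kumaraswamy-G generator applied to a power Lomax baseline. The power Lomax CDF below is written in its commonly used form, which is an assumption on our part, and the parameter values are illustrative.

```python
import numpy as np

def power_lomax_cdf(x, alpha, lam, beta):
    """CDF of the power Lomax distribution (as commonly defined):
    G(x) = 1 - lam^alpha * (lam + x^beta)^(-alpha), for x > 0."""
    return 1.0 - lam ** alpha * (lam + x ** beta) ** (-alpha)

def kw_g_cdf(G, a, b):
    """Kumaraswamy-G construction: F(x) = 1 - (1 - G(x)^a)^b."""
    return 1.0 - (1.0 - G ** a) ** b

x = np.linspace(0.01, 20.0, 500)
G = power_lomax_cdf(x, alpha=2.0, lam=1.0, beta=1.5)
F = kw_g_cdf(G, a=0.8, b=2.5)    # five parameters in total
```

The two Kumaraswamy shape parameters act on top of the three power Lomax parameters, which is where the flexible hazard shapes come from.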


Symmetry ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1114
Author(s):  
Guillermo Martínez-Flórez ◽  
Roger Tovar-Falón ◽  
María Martínez-Guerra

This paper introduces a new family of distributions for modelling censored multimodal data. The model extends the widely known tobit model by introducing two parameters that control the shape and asymmetry of the distribution. Basic properties of this new family are studied in detail, and a model for censored positive data is also studied. Parameter estimation is addressed via the maximum likelihood method; the score functions and the elements of the observed information matrix are given. Finally, three applications to real data sets are reported to illustrate the developed methodology.
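For context, the classical tobit log-likelihood that this family extends mixes a normal-density term for uncensored observations with a probability-mass term at the censoring point. A minimal sketch with left censoring at zero and illustrative parameters (the paper's extra shape and asymmetry parameters are not modeled here):

```python
import numpy as np
from math import erf, sqrt, pi, log

def tobit_loglik(y, mu, sigma):
    """Log-likelihood of the classical tobit model, left-censored at 0:
    uncensored observations contribute the normal density; censored
    ones contribute the mass P(Y* <= 0) = Phi(-mu/sigma)."""
    norm_cdf = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    ll = 0.0
    for yi in y:
        if yi > 0:                              # observed exactly
            z = (yi - mu) / sigma
            ll += -log(sigma) - 0.5 * log(2.0 * pi) - 0.5 * z * z
        else:                                   # censored at zero
            ll += log(norm_cdf(-mu / sigma))
    return ll

rng = np.random.default_rng(6)
latent = rng.normal(1.0, 2.0, size=500)
y = np.maximum(latent, 0.0)                     # left-censored sample

# Crude grid search for mu, with sigma = 2 treated as known
grid = np.linspace(-1.0, 3.0, 81)
mu_hat = grid[np.argmax([tobit_loglik(y, m, 2.0) for m in grid])]
```

The extended family replaces the normal density and CDF in both terms with a skewed, possibly bimodal counterpart, which is how it accommodates censored multimodal data.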

