Minimum Redundancy Array—A Baseline Optimization Strategy for Urban SAR Tomography

2020 ◽  
Vol 12 (18) ◽  
pp. 3100
Author(s):  
Lianhuan Wei ◽  
Qiuyue Feng ◽  
Shanjun Liu ◽  
Christian Bignami ◽  
Cristiano Tolomei ◽  
...  

Synthetic aperture radar (SAR) tomography (TomoSAR) can separate multiple scatterers layovered inside the same resolution cell in high-resolution SAR images of urban scenarios, but it usually requires a large number of orbits, making it an expensive and often infeasible task for many practical applications. Aiming to find the minimum number of images necessary for tomographic reconstruction, this paper innovatively applies the minimum redundancy array (MRA) concept to tomographic baseline optimization. Monte Carlo simulations are conducted by means of Two-step Iterative Shrinkage/Thresholding (TWIST) and Truncated Singular Value Decomposition (TSVD) to fully evaluate the tomographic performance of MRA orbits in terms of detection rates, Cramér-Rao lower bounds, and resistance against sidelobes. Experiments on COSMO-SkyMed and TerraSAR-X/TanDEM-X data are also conducted. The results from both simulations and experiments on real data demonstrate that introducing MRA for baseline optimization in SAR tomography dramatically reduces the number of necessary orbits, provided the recently proposed TWIST method is used for tomographic reconstruction. Although the simulations and experiments in this paper are carried out using spaceborne data, the outcome can also serve as a reference for designing flight tracks in airborne TomoSAR.
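As a minimal illustration of the MRA idea (not code from the paper): a restricted minimum redundancy array is a sparse set of element positions whose pairwise spacings cover every integer lag up to the aperture, so a few physical baselines span the same difference co-array as a much longer uniform array. The function names below are illustrative.

```python
from itertools import combinations

def difference_coarray(positions):
    """Set of positive pairwise spacings covered by the element positions."""
    return {abs(a - b) for a, b in combinations(positions, 2)}

def is_minimum_redundancy(positions):
    """A (restricted) MRA covers every spacing 1..aperture without gaps."""
    aperture = max(positions) - min(positions)
    return difference_coarray(positions) == set(range(1, aperture + 1))

# Classic 4-element MRA: 4 physical positions cover the same spacings
# (lags 1..6) as a 7-element uniform array.
mra = [0, 1, 4, 6]
assert is_minimum_redundancy(mra)
```

In the TomoSAR setting, the "positions" play the role of normalized orbit baselines, which is how the paper reduces the number of acquisitions needed.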

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Tianyi Wang ◽  
Chengxiang Wang ◽  
Kequan Zhao ◽  
Wei Yu ◽  
Min Huang

Abstract. The limited-angle computed tomography (CT) reconstruction problem arises in some practical applications due to restrictions in the scanning environment or the CT imaging device. Artifacts appear in images reconstructed by conventional analytical algorithms. Although regularization strategies such as total variation (TV) minimization have been proposed to suppress the artifacts, distortion remains in some edge portions of the image. Guided image filtering (GIF) has the advantage of smoothing the image while preserving edges. To further improve image quality and protect image edges, we propose a coupling method that combines ℓ0 gradient minimization and GIF. An intermediate result obtained by ℓ0 gradient minimization is regarded as the guidance image for GIF, and GIF is then used to filter the result reconstructed by the simultaneous algebraic reconstruction technique (SART) with a nonnegativity constraint. It should be stressed that the guidance image is dynamically updated as the iterations proceed, which transfers the edges to the filtered image. Simulation and real data experiments are used to evaluate the proposed method. Experimental results show that our method has advantages in suppressing limited-angle CT artifacts and in preserving image edges.
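For readers unfamiliar with the GIF building block: below is a minimal grayscale sketch of the guided image filter itself (in He et al.'s standard formulation), not the authors' coupled SART/ℓ0 pipeline; `box_mean`, the radius, and `eps` defaults are illustrative choices.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded to keep the shape."""
    p = np.pad(x, r, mode="edge")
    w = np.lib.stride_tricks.sliding_window_view(p, (2 * r + 1, 2 * r + 1))
    return w.mean(axis=(-2, -1))

def guided_filter(guide, src, radius=2, eps=1e-3):
    """Grayscale guided image filter: smooth `src` while following the
    edge structure of `guide` (the local linear model of He et al.)."""
    mean_i, mean_p = box_mean(guide, radius), box_mean(src, radius)
    cov_ip = box_mean(guide * src, radius) - mean_i * mean_p
    var_i = box_mean(guide * guide, radius) - mean_i * mean_i
    a = cov_ip / (var_i + eps)   # near 1 at strong edges, near 0 in flat areas
    b = mean_p - a * mean_i
    return box_mean(a, radius) * guide + box_mean(b, radius)
```

In the paper's scheme, `guide` would be the ℓ0-smoothed intermediate image and `src` the current SART reconstruction, with the guidance updated each iteration.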


Risks ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 70
Author(s):  
Małgorzata Just ◽  
Krzysztof Echaust

The appropriate choice of a threshold level, which separates the tails of the probability distribution of a random variable from its middle part, is considered a very complex and challenging task. This paper provides an empirical study of various methods of optimal tail selection in risk measurement. The results indicate which methods may be useful in practice for investors and for financial and regulatory institutions. Some methods that perform well in simulation studies based on theoretical distributions may not perform well when real data are in use. We analyze twelve methods with different parameters for forty-eight world indices, using returns from the period 2000 to Q1 2020 and four sub-periods. The research objective is to compare the methods and to identify those which can be recognized as useful in risk measurement. The results suggest that only four tail selection methods may be useful in practical applications: the Path Stability algorithm, minimization of the Asymptotic Mean Squared Error, the automated Eyeball method with carefully selected tuning parameters, and the Hall single bootstrap procedure.
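Tail selection in this setting amounts to choosing the number k of upper order statistics used by a tail-index estimator. A minimal sketch, assuming the classic Hill estimator and a toy stability rule that only illustrates the idea behind the Path Stability algorithm (it is not the paper's exact procedure):

```python
import numpy as np

def hill_estimator(losses, k):
    """Hill estimator of the tail index from the k largest observations
    above the (k+1)-th order statistic."""
    x = np.sort(np.asarray(losses))[::-1]
    return float(np.mean(np.log(x[:k] / x[k])))

def path_stability_k(losses, k_min=10, window=5):
    """Toy 'path stability' rule: pick the k whose neighbourhood of Hill
    estimates varies least along the Hill path."""
    logx = np.log(np.sort(np.asarray(losses))[::-1])
    ks = np.arange(k_min, logx.size // 2)
    # Vectorized Hill path: mean of top-k logs minus the (k+1)-th log.
    path = np.cumsum(logx)[ks - 1] / ks - logx[ks]
    spread = np.array([path[max(0, i - window):i + window + 1].var()
                       for i in range(ks.size)])
    return int(ks[np.argmin(spread)])
```

For Pareto-tailed data with tail index 1/3, the Hill estimate at a reasonable k should hover near 1/3, which is the kind of sanity check such selection rules are evaluated against.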


1994 ◽  
Vol 21 (6) ◽  
pp. 1074-1080 ◽  
Author(s):  
J. Llamas ◽  
C. Diaz Delgado ◽  
M.-L. Lavertu

In this paper, an improved probabilistic method for flood analysis using the probable maximum flood, the beta function, and orthogonal Jacobi polynomials is proposed. The shape of the beta function depends on the sample's characteristics and the bounds of the phenomenon. A series of Jacobi polynomials is then used to improve the beta function, increasing its convergence toward the real flood probability density function. This mathematical model has been tested using a sample of 1000 generated beta random data. Finally, some practical applications with real data series from major Quebec rivers have been performed; the model solutions for these rivers showed the accuracy of this new method in flood frequency estimation. Key words: probable maximum flood, beta function, orthogonal polynomials, distribution function, flood frequency estimation, data generation, convergence.
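The base step of such a model, fitting a beta density to data bounded below by a minimum flow and above by the probable maximum flood, can be sketched with the method of moments; the Jacobi polynomial correction that refines the fit is omitted here, and the function name is illustrative.

```python
import numpy as np

def fit_beta_moments(x, lower, upper):
    """Method-of-moments fit of a beta density to data bounded by
    [lower, upper]: rescale to [0, 1], then match mean and variance.
    (First step only; the paper's Jacobi polynomial correction is omitted.)"""
    z = (np.asarray(x, dtype=float) - lower) / (upper - lower)
    m, v = z.mean(), z.var()
    t = m * (1.0 - m) / v - 1.0
    return m * t, (1.0 - m) * t   # shape parameters (alpha, beta)
```

With the bounds fixed by physical considerations (e.g., the probable maximum flood as the upper bound), the two recovered shape parameters fully determine the baseline density that the orthogonal polynomial series then adjusts.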


2017 ◽  
Author(s):  
Rui Song ◽  
Martin Kaufmann ◽  
Jörn Ungermann ◽  
Manfred Ern ◽  
Guang Liu ◽  
...  

Abstract. Gravity waves (GWs) play an important role in atmospheric dynamics. Especially in the mesosphere and lower thermosphere (MLT), dissipating GWs provide a major contribution to the driving of the global wind system. Therefore, global observations of GWs in the MLT region are of particular interest. The small scales of GWs, however, pose a major problem for their observation from space. We propose a new observation strategy for GWs in the mesopause region that combines limb and sub-limb satellite-borne remote sensing measurements to improve the spatial resolution of temperatures retrieved from atmospheric soundings. In our study, we simulate satellite observations of the rotational structure of the O2 A-band nightglow. A key element of the new method is the ability of the instrument or the satellite to operate in so-called target mode, i.e. to stare at a particular point in the atmosphere and collect radiances at different viewing angles. These multi-angle measurements of a selected region allow for tomographic reconstruction of a two-dimensional atmospheric state, in particular of gravity wave structures. As no real data are available, the feasibility of this tomographic retrieval is assessed with simulated data. The simulations show that one major advantage of this observation strategy is that much smaller-scale GWs can be observed. We derive a GW sensitivity function and show that target mode observations can capture GWs with horizontal wavelengths as short as ~ 50 km for a large range of vertical wavelengths. This is far better than the horizontal wavelength limit of 100–200 km obtained for conventional limb sounding.


2020 ◽  
Vol 25 (4) ◽  
pp. 1376-1391
Author(s):  
Liangfu Lu ◽  
Wenbo Wang ◽  
Zhiyuan Tan

Abstract. The Parallel Coordinates Plot (PCP) is a popular technique for the exploration of high-dimensional data, and researchers often apply it as an effective method to analyze and mine data. However, as data volumes grow, visual clutter and data clarity become two of the main challenges for parallel coordinates plots. Although the Arc Coordinates Plot (ACP) is a popular approach to address these challenges, few optimizations and improvements have been made to it. In this paper, we make three main contributions to state-of-the-art PCP methods. One is an improvement of the visual method itself; the other two improve perceptual scalability when the scale or dimensionality of the data becomes large, as in some mobile and wireless practical applications. 1) We present an improved visualization method based on ACP, termed the double arc coordinates plot (DACP). It not only reduces the visual clutter in ACP, but also uses a dimension-based bundling method with further optimization to deal with the issues of the conventional PCP. 2) To reduce the clutter caused by the order of the axes and to reveal patterns hidden in the data sets, we propose our first dimensional reordering method, a contribution-based method for DACP built on the singular value decomposition (SVD) algorithm. The approach computes an importance score for each attribute (dimension) of the data using SVD and lays out the dimensions from left to right in DACP according to that score. 3) Moreover, a similarity-based method, combining a nonlinear correlation coefficient with the SVD algorithm, is proposed as well. To measure the correlation between two dimensions and explain how they interact with each other, we propose a reordering method based on nonlinear correlation information measurements.
We mainly use mutual information to calculate the partial similarity of dimensions in high-dimensional data visualization, and SVD is used to measure the global structure of the data. Lastly, we use five case scenarios to evaluate the effectiveness of DACP; the results show that our approaches not only visualize multivariate datasets well, but also effectively alleviate the visual clutter of the conventional PCP, giving users a better visual experience.
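An SVD-based axis-importance score of the kind described can be sketched in a few lines. This is an illustrative scoring choice (energy of each column in the leading singular components), not the paper's exact formula, and the function name is hypothetical.

```python
import numpy as np

def svd_dimension_order(data, n_components=2):
    """Rank data dimensions (columns) by an SVD-based importance score:
    the energy each column carries in the leading singular components.
    Returns column indices, most important axis first."""
    X = data - data.mean(axis=0)                      # center columns
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = (s[:n_components, None] ** 2
              * Vt[:n_components] ** 2).sum(axis=0)   # energy per dimension
    return np.argsort(scores)[::-1]
```

Laying out axes left to right in this order puts the dimensions that dominate the data's principal structure first, which is the intent of the contribution-based reordering.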


2019 ◽  
Vol 44 (3) ◽  
pp. 167-181 ◽  
Author(s):  
Wenchao Ma

Limited-information fit measures appear to be promising in assessing the goodness of fit of dichotomous response cognitive diagnosis models (CDMs), but their performance has not been examined for polytomous response CDMs. This study investigates the performance of the Mord statistic and the standardized root mean square residual (SRMSR) for an ordinal response CDM, the sequential generalized deterministic inputs, noisy "and" gate model. Simulation studies showed that the Mord statistic had well-calibrated Type I error rates, but the correct detection rates were influenced by various factors such as item quality, sample size, and the number of response categories. The SRMSR was also influenced by many factors, and the common practice of comparing the SRMSR against a prespecified cut-off (e.g., .05) may not be appropriate. A real data set was analyzed as well to illustrate the use of the Mord statistic and the SRMSR in practice.
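The SRMSR itself is simple to state: the root mean square of the residuals between observed and model-implied inter-item correlations over all item pairs. A minimal sketch, assuming both correlation matrices are already available:

```python
import numpy as np

def srmsr(observed_corr, model_corr):
    """Standardized root mean square residual between observed and
    model-implied inter-item correlation matrices, computed over the
    off-diagonal (upper-triangle) item pairs."""
    i, j = np.triu_indices_from(observed_corr, k=1)
    resid = observed_corr[i, j] - model_corr[i, j]
    return float(np.sqrt(np.mean(resid ** 2)))
```

The study's caution applies to the next step: comparing the resulting value to a fixed cut-off such as .05 ignores how much the statistic shifts with item quality, sample size, and the number of response categories.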


2004 ◽  
Vol 22 (10) ◽  
pp. 3445-3460 ◽  
Author(s):  
S. V. Thampi ◽  
T. K. Pant ◽  
S. Ravindran ◽  
C. V. Devasia ◽  
R. Sridharan

Abstract. The equatorial ionosphere poses a challenge to any algorithm used for tomographic reconstruction because of phenomena like the Equatorial Ionization Anomaly (EIA) and Equatorial Spread F (ESF). Any tomographic reconstruction of ionospheric density distributions in the equatorial region is not acceptable unless it images these phenomena, which exhibit large spatial and temporal variability, to reasonable accuracy. The accuracy of the reconstructed image generally depends on many factors, such as the satellite-receiver configuration, the ray path modelling, grid intersections and, finally, the reconstruction algorithm. The present simulation study examines these factors in the context of the operational Coherent Radio Beacon Experiment (CRABEX) network recently commenced in India. The feasibility of using this network for studies of the equatorial and low-latitude ionosphere over Indian longitudes has been investigated through simulations. Electron density distributions characteristic of EIA and ESF are fed into various simulations, and the reconstructed tomograms are assessed in terms of how well they reproduce these features. It is seen that, with the present receiver chain extending from 8.5° N to 34° N, it would be possible to obtain accurate images of the EIA and of plasma bubbles. The Singular Value Decomposition (SVD) algorithm has been used for the inversion procedure in this study. As is known, by the very nature of ionospheric tomography experiments, the received data contain various kinds of errors, such as measurement and discretization errors. The sensitivity of the inversion algorithm, SVD in the present case, to these errors has also been investigated and quantified.
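The noise sensitivity discussed here is exactly what a truncated-SVD inversion controls: small singular values of the projection matrix amplify measurement error, so they are discarded. A minimal sketch (generic least-squares form, not the study's specific geometry):

```python
import numpy as np

def tsvd_solve(A, b, rank):
    """Truncated-SVD least-squares inversion: keep only the `rank` largest
    singular values so that noise in `b` amplified by tiny singular values
    does not corrupt the reconstruction."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < rank, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))
```

Choosing the truncation rank trades resolution against error amplification, which is the sensitivity the study quantifies for measurement and discretization errors.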


Water ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 3445
Author(s):  
Maria Fattorini ◽  
Carlo Brandini

In this article, we discuss possible observing strategies for a simplified ocean model (the Double Gyre (DG)), used as a preliminary tool to understand the observation needs of real analysis and forecasting systems. Observations are fundamental to improving the quality of forecasts when data assimilation techniques are employed to obtain reliable analyses. In addition, observation networks, particularly in situ ones, are expensive and require careful positioning of instruments. A possible strategy for locating observations is based on Singular Value Decomposition (SVD). SVD has many advantages when a variational assimilation method such as 4D-Var is available, since its computation depends on the tangent linear and adjoint models. SVD is adopted as a method to identify areas where maximum error growth occurs and where assimilating observations gives particular advantages. However, an SVD-based observation positioning strategy may not be optimal; thus, we introduce other criteria based on the correlation between points, as the information observed at neighboring locations can be redundant. These criteria are easily replicable in practical applications, as they require rather standard studies to obtain prior information.
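The correlation-based redundancy criterion can be illustrated with a toy greedy placement rule: visit candidate points in order of prior error variance and skip any point too correlated with one already chosen. This is a hypothetical sketch of the general idea, not the article's algorithm; the function name, the greedy ordering, and the 0.8 threshold are all assumptions.

```python
import numpy as np

def select_points(var, corr, n_obs, max_corr=0.8):
    """Greedy observation placement sketch: prefer points with large prior
    error variance, but reject candidates whose correlation with an
    already-selected point exceeds `max_corr` (redundant information)."""
    order = np.argsort(var)[::-1]     # largest prior variance first
    chosen = []
    for p in order:
        if all(abs(corr[p, q]) < max_corr for q in chosen):
            chosen.append(p)
        if len(chosen) == n_obs:
            break
    return chosen
```

With such a rule, two nearby, highly correlated candidate sites contribute only one instrument, freeing the remaining budget for less redundant locations.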


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.


Geophysics ◽  
2020 ◽  
Vol 85 (5) ◽  
pp. C153-C162 ◽  
Author(s):  
Shibo Xu ◽  
Alexey Stovas ◽  
Hitoshi Mikada

Wavefield properties such as traveltime and relative geometric spreading (traveltime derivatives) are essential in seismic data processing and can be used in stacking, time-domain migration, and amplitude variation with offset analysis. Due to the complexity of an elastic orthorhombic (ORT) medium, analysis of these properties becomes rather difficult, and accurate explicit-form approximations are highly desirable. We have defined the shifted hyperbola form, Taylor series (TS), and rational form (RF) approximations for P-wave traveltime and relative geometric spreading in an elastic ORT model. Because the parametric-form expression for the P-wave vertical slowness in the derivation is too complicated, a TS expansion in offset is applied to facilitate the derivation of the approximate coefficients. The same approximation forms computed in the acoustic ORT model are also derived for comparison. In the numerical tests, three ORT models with parameters obtained from real data are used to test the accuracy of each approximation. The numerical examples show that, apart from the error along the y-axis in ORT model 2 for the relative geometric spreading, the RF approximations are all accurate enough for practical applications in all of the tested models.
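The simplest instance of a Taylor-series moveout approximation, which the paper generalizes to elastic ORT media, is the expansion of the homogeneous-layer hyperbolic traveltime in offset. A minimal sketch for the isotropic single-layer case (not the ORT expressions themselves):

```python
import numpy as np

def hyperbolic_traveltime(x, t0, v):
    """Exact two-term (hyperbolic) moveout for a homogeneous layer:
    t(x) = sqrt(t0^2 + (x/v)^2)."""
    return np.sqrt(t0 ** 2 + (x / v) ** 2)

def taylor_traveltime(x, t0, v):
    """Fourth-order Taylor expansion of t(x) in offset x, the kind of
    series the paper extends to the elastic ORT model."""
    return t0 + x ** 2 / (2 * t0 * v ** 2) - x ** 4 / (8 * t0 ** 3 * v ** 4)
```

At small offset-to-depth ratios the truncated series tracks the exact moveout closely; the error growing with offset is what motivates the shifted-hyperbola and rational-form alternatives compared in the paper.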

